Unify API between completers....
Matthias Bussonnier
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\greek small letter alpha<tab>
38 \\greek small letter alpha<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
44 counterpart that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometime challenging to know how to type a character, if you are using
51 It is sometime challenging to know how to type a character, if you are using
52 IPython, or any compatible frontend you can prepend backslash to the character
52 IPython, or any compatible frontend you can prepend backslash to the character
53 and press `<tab>` to expand it to its latex form.
53 and press `<tab>` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
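
For instance, both can be turned off from a configuration file (a minimal
sketch, assuming the standard ``ipython_config.py`` mechanism):

.. code::

    c = get_config()
    c.Completer.backslash_combining_completions = False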


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and will
raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing any code, unlike the previously available ``IPCompleter.greedy``
option.

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.
"""


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org


import builtins as builtin_mod
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import time
import unicodedata
import warnings
from contextlib import contextmanager
from importlib import import_module
from types import SimpleNamespace
from typing import Iterable, Iterator, List, Tuple

from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.utils import generics
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.process import arg_split
from traitlets import Bool, Enum, Int, observe
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
skip_doctest = True

try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False
#-----------------------------------------------------------------------------
# Globals
#-----------------------------------------------------------------------------

# Public API
__all__ = ['Completer','IPCompleter']

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

_deprecation_readline_sentinel = object()


class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)

@contextmanager
def provisionalcompleter(action='ignore'):
    """


    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things() # works

    >>> completer.do_experimental_things() # raises.

    .. note:: Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
    """
    # We check " first, then ', so complex cases with nested quotes will get
    # the " to take precedence.
    if s.count('"') % 2:
        return '"'
    elif s.count("'") % 2:
        return "'"
    else:
        return False
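
# Example (illustrative sketch): ``has_open_quotes`` returns the unbalanced
# quote character, or False when both kinds of quotes are balanced.
#
#     >>> has_open_quotes('print("hello')
#     '"'
#     >>> has_open_quotes('print("hello")')
#     False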


def protect_filename(s, protectables=PROTECTABLES):
    """Escape a string to protect certain characters."""
    if set(s) & set(protectables):
        if sys.platform == "win32":
            return '"' + s + '"'
        else:
            return "".join(("\\" + c if c in protectables else c) for c in s)
    else:
        return s
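
# Example (illustrative sketch): on POSIX systems, characters from PROTECTABLES
# are backslash-escaped; on Windows the whole name is wrapped in double quotes.
#
#     >>> protect_filename('my file.txt')      # POSIX behaviour
#     'my\\ file.txt'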


def expand_user(path:str) -> Tuple[str, bool, str]:
    """Expand ``~``-style usernames in strings.

    This is similar to :func:`os.path.expanduser`, but it computes and returns
    extra information that will be useful if the input was being used in
    computing completions, and you wish to return the completions with the
    original '~' instead of its expanded value.

    Parameters
    ----------
    path : str
        String to be expanded. If no ~ is present, the output is the same as the
        input.

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
    """
    # Default values
    tilde_expand = False
    tilde_val = ''
    newpath = path

    if path.startswith('~'):
        tilde_expand = True
        rest = len(path)-1
        newpath = os.path.expanduser(path)
        if rest:
            tilde_val = newpath[:-rest]
        else:
            tilde_val = newpath

    return newpath, tilde_expand, tilde_val
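
# Example (illustrative sketch, assuming a home directory of ``/home/user``):
# ``expand_user`` reports what ``~`` expanded to, so that ``compress_user``
# (defined just below) can restore the original spelling in the completions.
#
#     >>> expand_user('~/notebooks')
#     ('/home/user/notebooks', True, '/home/user')
#     >>> compress_user('/home/user/notebooks', True, '/home/user')
#     '~/notebooks'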


def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
    """Does the opposite of expand_user, with its outputs.
    """
    if tilde_expand:
        return path.replace(tilde_val, '~')
    else:
        return path


def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
    """
    prio1, prio2 = 0, 0

    if word.startswith('__'):
        prio1 = 2
    elif word.startswith('_'):
        prio1 = 1

    if word.endswith('='):
        prio1 = -1

    if word.startswith('%%'):
        # If there's another % in there, this is something else, so leave it alone
        if not "%" in word[2:]:
            word = word[2:]
            prio2 = 2
    elif word.startswith('%'):
        if not "%" in word[1:]:
            word = word[1:]
            prio2 = 1

    return prio1, word, prio2
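
# Example (illustrative sketch): magics sort alongside plain names by their
# bare name, while underscore-prefixed names sink to the end.
#
#     >>> sorted(['__dir__', '%%time', 'print', '_private'],
#     ...        key=completions_sorting_key)
#     ['print', '%%time', '_private', '__dir__']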


class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0 so should likely be removed for 7.0

    """

    def __init__(self, name):

        self.name = name
        self.complete = name
        self.type = 'crashed'
        self.name_with_symbols = name
        self.signature = ''
        self._origin = 'fake'

    def __repr__(self):
        return '<Fake completion object jedi has crashed>'


class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning:: Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to display to
      the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

    __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']

    def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
        warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
                      "It may change without warnings. "
                      "Use in corresponding context manager.",
                      category=ProvisionalCompleterWarning, stacklevel=2)

        self.start = start
        self.end = end
        self.text = text
        self.type = type
        self.signature = signature
        self._origin = _origin

    def __repr__(self):
        return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
                (self.start, self.end, self.text, self.type or '?', self.signature or '?')

    def __eq__(self, other)->Bool:
        """
        Equality and hash do not hash the type (as some completer may not be
        able to infer the type), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing, as it depends on the surrounding text, which Completions are
        not aware of.
        """
        return self.start == other.start and \
            self.end == other.end and \
            self.text == other.text

    def __hash__(self):
        return hash((self.start, self.end, self.text))
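
# Example (illustrative sketch): because ProvisionalCompleterWarning is turned
# into an error above, a Completion has to be built inside the
# provisionalcompleter() context manager.
#
#     >>> with provisionalcompleter():
#     ...     c = Completion(start=0, end=3, text='print', type='function')
#     >>> (c.start, c.end, c.text)
#     (0, 3, 'print')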


_IC = Iterable[Completion]


def _deduplicate_completions(text: str, completions: _IC)-> _IC:
    """
    Deduplicate a set of completions.

    .. warning:: Unstable

        This function is unstable, API may change without warning.

    Parameters
    ----------
    text: str
        text that should be completed.
    completions: Iterator[Completion]
        iterator over the completions to deduplicate

    Yields
    ------
    `Completions` objects


    Completions coming from multiple sources may be different but end up having
    the same effect when applied to ``text``. If this is the case, this will
    consider completions as equal and only emit the first encountered.

    Not folded in `completions()` yet for debugging purposes, and to detect when
    the IPython completer does return things that Jedi does not, but should be
    at some point.
    """
    completions = list(completions)
    if not completions:
        return

    new_start = min(c.start for c in completions)
    new_end = max(c.end for c in completions)

    seen = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if new_text not in seen:
            yield c
            seen.add(new_text)


def rectify_completions(text: str, completions: _IC, *, _debug=False)->_IC:
    """
    Rectify a set of completions to all have the same ``start`` and ``end``

    .. warning:: Unstable

        This function is unstable, API may change without warning.
        It will also raise unless used in the proper context manager.

    Parameters
    ----------
    text: str
        text that should be completed.
    completions: Iterator[Completion]
        iterator over the completions to rectify


    :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
    the Jupyter Protocol requires them to behave like so. This will readjust
    the completions to have the same ``start`` and ``end`` by padding both
    extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer and not found in Jedi, in
    order to make upstream bug reports.
    """
    warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
                  "It may change without warnings. "
                  "Use in corresponding context manager.",
                  category=ProvisionalCompleterWarning, stacklevel=2)

    completions = list(completions)
    if not completions:
        return
    starts = (c.start for c in completions)
    ends = (c.end for c in completions)

    new_start = min(starts)
    new_end = max(ends)

    seen_jedi = set()
    seen_python_matches = set()
    for c in completions:
        new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
        if c._origin == 'jedi':
            seen_jedi.add(new_text)
        elif c._origin == 'IPCompleter.python_matches':
            seen_python_matches.add(new_text)
        yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
    diff = seen_python_matches.difference(seen_jedi)
    if diff and _debug:
        print('IPython.python matches have extras:', diff)
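
# Example (illustrative sketch): completions with differing ranges are padded
# with the surrounding text so they all share the same (start, end) span.
#
#     >>> code = 'a.is'
#     >>> with provisionalcompleter():
#     ...     cs = [Completion(start=2, end=4, text='isalpha'),
#     ...           Completion(start=0, end=4, text='a.isdigit')]
#     ...     rectified = list(rectify_completions(code, cs))
#     >>> [(c.start, c.end, c.text) for c in rectified]
#     [(0, 4, 'a.isalpha'), (0, 4, 'a.isdigit')]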


if sys.platform == 'win32':
    DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
else:
    DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'

GREEDY_DELIMS = ' =\r\n'


class CompletionSplitter(object):
    """An object to split an input line in a manner similar to readline.

    By having our own implementation, we can expose readline-like completion in
    a uniform manner to all frontends. This object only needs to be given the
    line of text to be split and the cursor position on said line, and it
    returns the 'word' to be completed on at the cursor after splitting the
    entire line.

    What characters are used as splitting delimiters can be controlled by
    setting the ``delims`` attribute (this is a property that internally
    automatically builds the necessary regular expression)"""

    # Private interface

    # A string of delimiter characters. The default value makes sense for
    # IPython's most typical usage patterns.
    _delims = DELIMS

    # The expression (a normal string) to be compiled into a regular expression
    # for actual splitting. We store it as an attribute mostly for ease of
    # debugging, since this type of code can be so tricky to debug.
    _delim_expr = None

    # The regular expression that does the actual splitting
    _delim_re = None

    def __init__(self, delims=None):
        delims = CompletionSplitter._delims if delims is None else delims
        self.delims = delims

    @property
    def delims(self):
        """Return the string of delimiter characters."""
        return self._delims

    @delims.setter
    def delims(self, delims):
        """Set the delimiters for line splitting."""
        expr = '[' + ''.join('\\'+ c for c in delims) + ']'
        self._delim_re = re.compile(expr)
        self._delims = delims
        self._delim_expr = expr

    def split_line(self, line, cursor_pos=None):
        """Split a line of text with a cursor at the given position.
        """
        l = line if cursor_pos is None else line[:cursor_pos]
        return self._delim_re.split(l)[-1]
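
# Example (illustrative sketch): only the "word" under the cursor is returned,
# split on the delimiter set defined above.
#
#     >>> CompletionSplitter().split_line('run foo.ba')
#     'foo.ba'
#     >>> CompletionSplitter().split_line('a = max(x, y', cursor_pos=7)
#     'max'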



class Completer(Configurable):

    greedy = Bool(False,
        help="""Activate greedy completion
        PENDING DEPRECATION. This is now mostly taken care of with Jedi.

        This will enable completion on elements of lists, results of function calls, etc.,
        but can be unsafe because the code is actually evaluated on TAB.
        """
    ).tag(config=True)

    use_jedi = Bool(default_value=JEDI_INSTALLED,
                    help="Experimental: Use Jedi to generate autocompletions. "
                    "Defaults to True if jedi is installed.").tag(config=True)

    jedi_compute_type_timeout = Int(default_value=400,
        help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
        Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
        performance by preventing jedi from building its cache.
        """).tag(config=True)

    debug = Bool(default_value=False,
                 help='Enable debug for the Completer. Mostly print extra '
                      'information for experimental jedi integration.')\
                      .tag(config=True)

    backslash_combining_completions = Bool(True,
        help="Enable unicode completions, e.g. \\alpha<tab> . "
             "Includes completion of latex commands, unicode names, and expanding "
             "unicode characters back to latex commands.").tag(config=True)



    def __init__(self, namespace=None, global_namespace=None, **kwargs):
        """Create a new completer for the command line.

        Completer(namespace=ns, global_namespace=ns2) -> completer instance.

        If unspecified, the default namespace where completions are performed
        is __main__ (technically, __main__.__dict__). Namespaces should be
        given as dictionaries.

        An optional second namespace can be given. This allows the completer
        to handle cases where both the local and global scopes need to be
        distinguished.
        """

        # Don't bind to namespace quite yet, but flag whether the user wants a
        # specific namespace or to use __main__.__dict__. This will allow us
        # to bind to __main__.__dict__ at completion time, not now.
        if namespace is None:
            self.use_main_ns = True
        else:
            self.use_main_ns = False
            self.namespace = namespace

        # The global namespace, if given, can be bound directly
        if global_namespace is None:
            self.global_namespace = {}
        else:
            self.global_namespace = global_namespace

        self.custom_matchers = []

        super(Completer, self).__init__(**kwargs)

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        This is called successively with state == 0, 1, 2, ... until it
        returns None. The completion should begin with 'text'.

        """
        if self.use_main_ns:
            self.namespace = __main__.__dict__

        if state == 0:
            if "." in text:
                self.matches = self.attr_matches(text)
            else:
                self.matches = self.global_matches(text)
        try:
            return self.matches[state]
        except IndexError:
            return None
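
    # Example (illustrative sketch): with an explicit namespace, the completer
    # walks the matches readline-style, one call per state value.
    #
    #     >>> comp = Completer(namespace={'my_var': 1, 'my_func': len})
    #     >>> comp.complete('my_', 0), comp.complete('my_', 1)
    #     ('my_var', 'my_func')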

    def global_matches(self, text):
        """Compute matches when text is a simple name.

        Return a list of all keywords, built-in functions and names currently
        defined in self.namespace or self.global_namespace that match.

        """
        matches = []
        match_append = matches.append
        n = len(text)
        for lst in [keyword.kwlist,
                    builtin_mod.__dict__.keys(),
                    self.namespace.keys(),
                    self.global_namespace.keys()]:
            for word in lst:
                if word[:n] == text and word != "__builtins__":
                    match_append(word)

        snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
        for lst in [self.namespace.keys(),
                    self.global_namespace.keys()]:
            shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
                         for word in lst if snake_case_re.match(word)}
            for word in shortened.keys():
                if word[:n] == text and word != "__builtins__":
                    match_append(shortened[word])
        return matches

    def attr_matches(self, text):
        """Compute matches when text contains a dot.

        Assuming the text is of the form NAME.NAME....[NAME], and is
        evaluatable in self.namespace or self.global_namespace, it will be
        evaluated and its attributes (as revealed by dir()) are used as
        possible completions. (For class instances, class members are
        also considered.)

        WARNING: this can still invoke arbitrary C code, if an object
        with a __getattr__ hook is evaluated.

        """

        # Another option, seems to work great. Catches things like ''.<tab>
        m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)

        if m:
            expr, attr = m.group(1, 3)
        elif self.greedy:
            m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
            if not m2:
                return []
            expr, attr = m2.group(1,2)
        else:
            return []

        try:
            obj = eval(expr, self.namespace)
        except:
            try:
                obj = eval(expr, self.global_namespace)
            except:
                return []

        if self.limit_to__all__ and hasattr(obj, '__all__'):
            words = get__all__entries(obj)
        else:
            words = dir2(obj)

        try:
            words = generics.complete_object(obj, words)
        except TryNext:
            pass
        except AssertionError:
            raise
        except Exception:
            # Silence errors from completion function
            #raise # dbg
            pass
        # Build match list to return
        n = len(attr)
        return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]


def get__all__entries(obj):
    """returns the strings in the __all__ attribute"""
    try:
        words = getattr(obj, '__all__')
    except:
        return []

    return [w for w in words if isinstance(w, str)]


def match_dict_keys(keys: List[str], prefix: str, delims: str):
    """Used by dict_key_matches, matching the prefix to a list of keys

    Parameters
    ==========
    keys:
        list of keys in dictionary currently being completed.
    prefix:
        Part of the text already typed by the user. e.g. `mydict[b'fo`
    delims:
        String of delimiters to consider when finding the current key.

    Returns
    =======

    A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
    ``quote`` being the quote that needs to be used to close the current string.
    ``token_start`` the position where the replacement should start occurring,
    ``matched`` a list of replacement/completion strings.

    """
    if not prefix:
        return None, 0, [repr(k) for k in keys
                         if isinstance(k, (str, bytes))]
    quote_match = re.search('["\']', prefix)
    quote = quote_match.group()
    try:
        prefix_str = eval(prefix + quote, {})
    except Exception:
        return None, 0, []

    pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
    token_match = re.search(pattern, prefix, re.UNICODE)
    token_start = token_match.start()
    token_prefix = token_match.group()

    matched = []
    for key in keys:
        try:
            if not key.startswith(prefix_str):
                continue
        except (AttributeError, TypeError, UnicodeError):
            # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
            continue

        # reformat remainder of key to begin with prefix
        rem = key[len(prefix_str):]
        # force repr wrapped in '
        rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
        if rem_repr.startswith('u') and prefix[0] not in 'uU':
            # Found key is unicode, but prefix is Py2 string.
            # Therefore attempt to interpret key as string.
            try:
                rem_repr = repr(rem.encode('ascii') + '"')
            except UnicodeEncodeError:
                continue

        rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
        if quote == '"':
            # The entered prefix is quoted with ",
            # but the match is quoted with '.
            # A contained " hence needs escaping for comparison:
            rem_repr = rem_repr.replace('"', '\\"')

        # then reinsert prefix from start of token
        matched.append('%s%s' % (token_prefix, rem_repr))
    return quote, token_start, matched
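
# Example (illustrative sketch): completing ``d['a<tab>`` against the keys of a
# dictionary keeps the opening quote the user already typed.
#
#     >>> match_dict_keys(['abc', 'abd', 'xyz'], "'a", delims=' ')
#     ("'", 0, ["'abc", "'abd"])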


def cursor_to_position(text:str, line:int, column:int)->int:
    """

    Convert the (line, column) position of the cursor in text to an offset in
    the string.

    Parameters
    ----------

    text : str
        The text in which to calculate the cursor offset
    line : int
        Line of the cursor; 0-indexed
    column : int
        Column of the cursor; 0-indexed

    Return
    ------
        Position of the cursor in ``text``, 0-indexed.

    See Also
    --------
    position_to_cursor: reciprocal of this function

    """
    lines = text.split('\n')
    assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))

    return sum(len(l) + 1 for l in lines[:line]) + column

def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
    """
    Convert the position of the cursor in text (0-indexed) to a line
    number (0-indexed) and a column number (0-indexed) pair

    Position should be a valid position in ``text``.

    Parameters
    ----------

    text : str
        The text in which to calculate the cursor offset
    offset : int
        Position of the cursor in ``text``, 0-indexed.

    Return
    ------
    (line, column) : (int, int)
        Line of the cursor; 0-indexed, column of the cursor; 0-indexed


    See Also
    --------
    cursor_to_position : reciprocal of this function


    """

    assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))

    before = text[:offset]
    blines = before.split('\n')  # ! splitlines trims the trailing \n
    line = before.count('\n')
    col = len(blines[-1])
    return line, col
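
# Example (illustrative sketch): the two helpers above are inverses of each
# other for any valid offset.
#
#     >>> cursor_to_position('ab\ncd', 1, 1)
#     4
#     >>> position_to_cursor('ab\ncd', 4)
#     (1, 1)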


def _safe_isinstance(obj, module, class_name):
    """Checks if obj is an instance of module.class_name if loaded
    """
    return (module in sys.modules and
            isinstance(obj, getattr(import_module(module), class_name)))


def back_unicode_name_matches(text):
    u"""Match unicode characters back to unicode name

    This does ``☃`` -> ``\\snowman``

    Note that snowman is not a valid python3 combining character, but it will be expanded.
    It will not, however, be recombined back into the snowman character by the completion machinery.

    This will also not back-complete standard escape sequences like \\n, \\b ...

    Used on Python 3 only.
    """
    if len(text)<2:
        return u'', ()
    maybe_slash = text[-2]
    if maybe_slash != '\\':
        return u'', ()

    char = text[-1]
    # no expand on quote for completion in strings.
    # nor backcomplete standard ascii keys
    if char in string.ascii_letters or char in ['"',"'"]:
        return u'', ()
    try:
        unic = unicodedata.name(char)
        return '\\'+char,['\\'+unic]
    except KeyError:
        pass
    return u'', ()
918
918
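# Example of the behaviour documented above (added for illustration, not part
# of the original source):
#
#   >>> back_unicode_name_matches('\\β˜ƒ')
#   ('\\β˜ƒ', ['\\SNOWMAN'])
#   >>> back_unicode_name_matches('\\n')   # plain ascii letters are never back-completed
#   ('', ())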
919 def back_latex_name_matches(text:str):
919 def back_latex_name_matches(text:str):
920 """Match latex characters back to unicode name
920 """Match latex characters back to unicode name
921
921
922 This does ``\\β„΅`` -> ``\\aleph``
922 This does ``\\β„΅`` -> ``\\aleph``
923
923
924 Used on Python 3 only.
924 Used on Python 3 only.
925 """
925 """
926 if len(text)<2:
926 if len(text)<2:
927 return u'', ()
927 return u'', ()
928 maybe_slash = text[-2]
928 maybe_slash = text[-2]
929 if maybe_slash != '\\':
929 if maybe_slash != '\\':
930 return u'', ()
930 return u'', ()
931
931
932
932
933 char = text[-1]
933 char = text[-1]
934 # no expand on quote for completion in strings.
934 # no expand on quote for completion in strings.
935 # nor backcomplete standard ascii keys
935 # nor backcomplete standard ascii keys
936 if char in string.ascii_letters or char in ['"',"'"]:
936 if char in string.ascii_letters or char in ['"',"'"]:
937 return u'', ()
937 return u'', ()
938 try :
938 try :
939 latex = reverse_latex_symbol[char]
939 latex = reverse_latex_symbol[char]
940 # '\\' replace the \ as well
940 # '\\' replace the \ as well
941 return '\\'+char,[latex]
941 return '\\'+char,[latex]
942 except KeyError:
942 except KeyError:
943 pass
943 pass
944 return u'', ()
944 return u'', ()
945
945
946
946
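# Example matching the docstring above (added for illustration, not part of the
# original source); assumes ``reverse_latex_symbol`` maps β„΅ to '\\aleph':
#
#   >>> back_latex_name_matches('\\β„΅')
#   ('\\β„΅', ['\\aleph'])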
947 def _formatparamchildren(parameter) -> str:
947 def _formatparamchildren(parameter) -> str:
948 """
948 """
949 Get parameter name and value from Jedi Private API
949 Get parameter name and value from Jedi Private API
950
950
951 Jedi does not expose a simple way to get `param=value` from its API.
951 Jedi does not expose a simple way to get `param=value` from its API.
952
952
953 Parameter
953 Parameter
954 =========
954 =========
955
955
956 parameter:
956 parameter:
957 Jedi's function `Param`
957 Jedi's function `Param`
958
958
959 Returns
959 Returns
960 =======
960 =======
961
961
962 A string like 'a', 'b=1', '*args', '**kwargs'
962 A string like 'a', 'b=1', '*args', '**kwargs'
963
963
964
964
965 """
965 """
966 description = parameter.description
966 description = parameter.description
967 if not description.startswith('param '):
967 if not description.startswith('param '):
968 raise ValueError('Jedi function parameter description has changed format. '
968 raise ValueError('Jedi function parameter description has changed format. '
969 'Expected "param ...", found %r.' % description)
969 'Expected "param ...", found %r.' % description)
970 return description[6:]
970 return description[6:]
971
971
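# Illustrative sketch (not from the original source), using a stand-in object
# carrying the 'param ...' description format expected above:
#
#   >>> from types import SimpleNamespace
#   >>> _formatparamchildren(SimpleNamespace(description='param b=1'))
#   'b=1'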
972 def _make_signature(completion)-> str:
972 def _make_signature(completion)-> str:
973 """
973 """
974 Make the signature from a jedi completion
974 Make the signature from a jedi completion
975
975
976 Parameter
976 Parameter
977 =========
977 =========
978
978
979 completion: jedi.Completion
979 completion: jedi.Completion
980 a Jedi ``Completion`` object that is expected to complete to a function
980 a Jedi ``Completion`` object that is expected to complete to a function
981
981
982 Returns
982 Returns
983 =======
983 =======
984
984
985 a string consisting of the function signature, with the parenthesis but
985 a string consisting of the function signature, with the parenthesis but
986 without the function name. example:
986 without the function name. example:
987 `(a, *args, b=1, **kwargs)`
987 `(a, *args, b=1, **kwargs)`
988
988
989 """
989 """
990
990
991 # it looks like this might work on jedi 0.17
991 # it looks like this might work on jedi 0.17
992 if hasattr(completion, 'get_signatures'):
992 if hasattr(completion, 'get_signatures'):
993 signatures = completion.get_signatures()
993 signatures = completion.get_signatures()
994 if not signatures:
994 if not signatures:
995 return '(?)'
995 return '(?)'
996
996
997 c0 = completion.get_signatures()[0]
997 c0 = completion.get_signatures()[0]
998 return '('+c0.to_string().split('(', maxsplit=1)[1]
998 return '('+c0.to_string().split('(', maxsplit=1)[1]
999
999
1000 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1000 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1001 for p in signature.defined_names()) if f])
1001 for p in signature.defined_names()) if f])
1002
1002
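# Hypothetical usage sketch (not from the original source); assumes a jedi
# version (>= 0.17) exposing ``Interpreter.complete`` and ``get_signatures``,
# and the exact signature text depends on that version:
#
#   >>> import jedi
#   >>> comp = jedi.Interpreter('str.startswi', [{}]).complete()[0]
#   >>> _make_signature(comp)                       # doctest: +SKIP
#   '(prefix, start=None, end=None, /)'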
1003 class IPCompleter(Completer):
1003 class IPCompleter(Completer):
1004 """Extension of the completer class with IPython-specific features"""
1004 """Extension of the completer class with IPython-specific features"""
1005
1005
1006 _names = None
1006 _names = None
1007
1007
1008 @observe('greedy')
1008 @observe('greedy')
1009 def _greedy_changed(self, change):
1009 def _greedy_changed(self, change):
1010 """update the splitter and readline delims when greedy is changed"""
1010 """update the splitter and readline delims when greedy is changed"""
1011 if change['new']:
1011 if change['new']:
1012 self.splitter.delims = GREEDY_DELIMS
1012 self.splitter.delims = GREEDY_DELIMS
1013 else:
1013 else:
1014 self.splitter.delims = DELIMS
1014 self.splitter.delims = DELIMS
1015
1015
1016 dict_keys_only = Bool(False,
1016 dict_keys_only = Bool(False,
1017 help="""Whether to show dict key matches only""")
1017 help="""Whether to show dict key matches only""")
1018
1018
1019 merge_completions = Bool(True,
1019 merge_completions = Bool(True,
1020 help="""Whether to merge completion results into a single list
1020 help="""Whether to merge completion results into a single list
1021
1021
1022 If False, only the completion results from the first non-empty
1022 If False, only the completion results from the first non-empty
1023 completer will be returned.
1023 completer will be returned.
1024 """
1024 """
1025 ).tag(config=True)
1025 ).tag(config=True)
1026 omit__names = Enum((0,1,2), default_value=2,
1026 omit__names = Enum((0,1,2), default_value=2,
1027 help="""Instruct the completer to omit private method names
1027 help="""Instruct the completer to omit private method names
1028
1028
1029 Specifically, when completing on ``object.<tab>``.
1029 Specifically, when completing on ``object.<tab>``.
1030
1030
1031 When 2 [default]: all names that start with '_' will be excluded.
1031 When 2 [default]: all names that start with '_' will be excluded.
1032
1032
1033 When 1: all 'magic' names (``__foo__``) will be excluded.
1033 When 1: all 'magic' names (``__foo__``) will be excluded.
1034
1034
1035 When 0: nothing will be excluded.
1035 When 0: nothing will be excluded.
1036 """
1036 """
1037 ).tag(config=True)
1037 ).tag(config=True)
1038 limit_to__all__ = Bool(False,
1038 limit_to__all__ = Bool(False,
1039 help="""
1039 help="""
1040 DEPRECATED as of version 5.0.
1040 DEPRECATED as of version 5.0.
1041
1041
1042 Instruct the completer to use __all__ for the completion
1042 Instruct the completer to use __all__ for the completion
1043
1043
1044 Specifically, when completing on ``object.<tab>``.
1044 Specifically, when completing on ``object.<tab>``.
1045
1045
1046 When True: only those names in obj.__all__ will be included.
1046 When True: only those names in obj.__all__ will be included.
1047
1047
1048 When False [default]: the __all__ attribute is ignored
1048 When False [default]: the __all__ attribute is ignored
1049 """,
1049 """,
1050 ).tag(config=True)
1050 ).tag(config=True)
1051
1051
1052 @observe('limit_to__all__')
1052 @observe('limit_to__all__')
1053 def _limit_to_all_changed(self, change):
1053 def _limit_to_all_changed(self, change):
1054 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1054 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1055 'value has been deprecated since IPython 5.0; it will be made to have '
1055 'value has been deprecated since IPython 5.0; it will be made to have '
1056 'no effect and then be removed in a future version of IPython.',
1056 'no effect and then be removed in a future version of IPython.',
1057 UserWarning)
1057 UserWarning)
1058
1058
1059 def __init__(self, shell=None, namespace=None, global_namespace=None,
1059 def __init__(self, shell=None, namespace=None, global_namespace=None,
1060 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1060 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1061 """IPCompleter() -> completer
1061 """IPCompleter() -> completer
1062
1062
1063 Return a completer object.
1063 Return a completer object.
1064
1064
1065 Parameters
1065 Parameters
1066 ----------
1066 ----------
1067
1067
1068 shell
1068 shell
1069 a pointer to the ipython shell itself. This is needed
1069 a pointer to the ipython shell itself. This is needed
1070 because this completer knows about magic functions, and those can
1070 because this completer knows about magic functions, and those can
1071 only be accessed via the ipython instance.
1071 only be accessed via the ipython instance.
1072
1072
1073 namespace : dict, optional
1073 namespace : dict, optional
1074 an optional dict where completions are performed.
1074 an optional dict where completions are performed.
1075
1075
1076 global_namespace : dict, optional
1076 global_namespace : dict, optional
1077 secondary optional dict for completions, to
1077 secondary optional dict for completions, to
1078 handle cases (such as IPython embedded inside functions) where
1078 handle cases (such as IPython embedded inside functions) where
1079 both Python scopes are visible.
1079 both Python scopes are visible.
1080
1080
1081 use_readline : bool, optional
1081 use_readline : bool, optional
1082 DEPRECATED, ignored since IPython 6.0, will have no effect
1082 DEPRECATED, ignored since IPython 6.0, will have no effect
1083 """
1083 """
1084
1084
1085 self.magic_escape = ESC_MAGIC
1085 self.magic_escape = ESC_MAGIC
1086 self.splitter = CompletionSplitter()
1086 self.splitter = CompletionSplitter()
1087
1087
1088 if use_readline is not _deprecation_readline_sentinel:
1088 if use_readline is not _deprecation_readline_sentinel:
1089 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1089 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1090 DeprecationWarning, stacklevel=2)
1090 DeprecationWarning, stacklevel=2)
1091
1091
1092 # _greedy_changed() depends on splitter and readline being defined:
1092 # _greedy_changed() depends on splitter and readline being defined:
1093 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1093 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1094 config=config, **kwargs)
1094 config=config, **kwargs)
1095
1095
1096 # List where completion matches will be stored
1096 # List where completion matches will be stored
1097 self.matches = []
1097 self.matches = []
1098 self.shell = shell
1098 self.shell = shell
1099 # Regexp to split filenames with spaces in them
1099 # Regexp to split filenames with spaces in them
1100 self.space_name_re = re.compile(r'([^\\] )')
1100 self.space_name_re = re.compile(r'([^\\] )')
1101 # Hold a local ref. to glob.glob for speed
1101 # Hold a local ref. to glob.glob for speed
1102 self.glob = glob.glob
1102 self.glob = glob.glob
1103
1103
1104 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1104 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1105 # buffers, to avoid completion problems.
1105 # buffers, to avoid completion problems.
1106 term = os.environ.get('TERM','xterm')
1106 term = os.environ.get('TERM','xterm')
1107 self.dumb_terminal = term in ['dumb','emacs']
1107 self.dumb_terminal = term in ['dumb','emacs']
1108
1108
1109 # Special handling of backslashes needed in win32 platforms
1109 # Special handling of backslashes needed in win32 platforms
1110 if sys.platform == "win32":
1110 if sys.platform == "win32":
1111 self.clean_glob = self._clean_glob_win32
1111 self.clean_glob = self._clean_glob_win32
1112 else:
1112 else:
1113 self.clean_glob = self._clean_glob
1113 self.clean_glob = self._clean_glob
1114
1114
1115 #regexp to parse docstring for function signature
1115 #regexp to parse docstring for function signature
1116 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1116 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1117 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1117 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1118 #use this if positional argument name is also needed
1118 #use this if positional argument name is also needed
1119 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1119 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1120
1120
1121 self.magic_arg_matchers = [
1121 self.magic_arg_matchers = [
1122 self.magic_config_matches,
1122 self.magic_config_matches,
1123 self.magic_color_matches,
1123 self.magic_color_matches,
1124 ]
1124 ]
1125
1125
1126 # This is set externally by InteractiveShell
1126 # This is set externally by InteractiveShell
1127 self.custom_completers = None
1127 self.custom_completers = None
1128
1128
1129 @property
1129 @property
1130 def matchers(self):
1130 def matchers(self):
1131 """All active matcher routines for completion"""
1131 """All active matcher routines for completion"""
1132 if self.dict_keys_only:
1132 if self.dict_keys_only:
1133 return [self.dict_key_matches]
1133 return [self.dict_key_matches]
1134
1134
1135 if self.use_jedi:
1135 if self.use_jedi:
1136 return [
1136 return [
1137 *self.custom_matchers,
1137 *self.custom_matchers,
1138 self.file_matches,
1138 self.file_matches,
1139 self.magic_matches,
1139 self.magic_matches,
1140 self.dict_key_matches,
1140 self.dict_key_matches,
1141 ]
1141 ]
1142 else:
1142 else:
1143 return [
1143 return [
1144 *self.custom_matchers,
1144 *self.custom_matchers,
1145 self.python_matches,
1145 self.python_matches,
1146 self.file_matches,
1146 self.file_matches,
1147 self.magic_matches,
1147 self.magic_matches,
1148 self.python_func_kw_matches,
1148 self.python_func_kw_matches,
1149 self.dict_key_matches,
1149 self.dict_key_matches,
1150 ]
1150 ]
1151
1151
1152 def all_completions(self, text) -> List[str]:
1152 def all_completions(self, text) -> List[str]:
1153 """
1153 """
1154 Wrapper around the completion methods for the benefit of emacs.
1154 Wrapper around the completion methods for the benefit of emacs.
1155 """
1155 """
1156 prefix = text.rpartition('.')[0]
1156 prefix = text.rpartition('.')[0]
1157 with provisionalcompleter():
1157 with provisionalcompleter():
1158 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1158 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1159 for c in self.completions(text, len(text))]
1159 for c in self.completions(text, len(text))]
1160
1160
1161 return self.complete(text)[1]
1161 return self.complete(text)[1]
1162
1162
1163 def _clean_glob(self, text):
1163 def _clean_glob(self, text):
1164 return self.glob("%s*" % text)
1164 return self.glob("%s*" % text)
1165
1165
1166 def _clean_glob_win32(self,text):
1166 def _clean_glob_win32(self,text):
1167 return [f.replace("\\","/")
1167 return [f.replace("\\","/")
1168 for f in self.glob("%s*" % text)]
1168 for f in self.glob("%s*" % text)]
1169
1169
1170 def file_matches(self, text):
1170 def file_matches(self, text):
1171 """Match filenames, expanding ~USER type strings.
1171 """Match filenames, expanding ~USER type strings.
1172
1172
1173 Most of the seemingly convoluted logic in this completer is an
1173 Most of the seemingly convoluted logic in this completer is an
1174 attempt to handle filenames with spaces in them. And yet it's not
1174 attempt to handle filenames with spaces in them. And yet it's not
1175 quite perfect, because Python's readline doesn't expose all of the
1175 quite perfect, because Python's readline doesn't expose all of the
1176 GNU readline details needed for this to be done correctly.
1176 GNU readline details needed for this to be done correctly.
1177
1177
1178 For a filename with a space in it, the printed completions will be
1178 For a filename with a space in it, the printed completions will be
1179 only the parts after what's already been typed (instead of the
1179 only the parts after what's already been typed (instead of the
1180 full completions, as is normally done). I don't think with the
1180 full completions, as is normally done). I don't think with the
1181 current (as of Python 2.3) Python readline it's possible to do
1181 current (as of Python 2.3) Python readline it's possible to do
1182 better."""
1182 better."""
1183
1183
1184 # chars that require escaping with backslash - i.e. chars
1184 # chars that require escaping with backslash - i.e. chars
1185 # that readline treats incorrectly as delimiters, but we
1185 # that readline treats incorrectly as delimiters, but we
1186 # don't want to treat as delimiters in filename matching
1186 # don't want to treat as delimiters in filename matching
1187 # when escaped with backslash
1187 # when escaped with backslash
1188 if text.startswith('!'):
1188 if text.startswith('!'):
1189 text = text[1:]
1189 text = text[1:]
1190 text_prefix = u'!'
1190 text_prefix = u'!'
1191 else:
1191 else:
1192 text_prefix = u''
1192 text_prefix = u''
1193
1193
1194 text_until_cursor = self.text_until_cursor
1194 text_until_cursor = self.text_until_cursor
1195 # track strings with open quotes
1195 # track strings with open quotes
1196 open_quotes = has_open_quotes(text_until_cursor)
1196 open_quotes = has_open_quotes(text_until_cursor)
1197
1197
1198 if '(' in text_until_cursor or '[' in text_until_cursor:
1198 if '(' in text_until_cursor or '[' in text_until_cursor:
1199 lsplit = text
1199 lsplit = text
1200 else:
1200 else:
1201 try:
1201 try:
1202 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1202 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1203 lsplit = arg_split(text_until_cursor)[-1]
1203 lsplit = arg_split(text_until_cursor)[-1]
1204 except ValueError:
1204 except ValueError:
1205 # typically an unmatched ", or backslash without escaped char.
1205 # typically an unmatched ", or backslash without escaped char.
1206 if open_quotes:
1206 if open_quotes:
1207 lsplit = text_until_cursor.split(open_quotes)[-1]
1207 lsplit = text_until_cursor.split(open_quotes)[-1]
1208 else:
1208 else:
1209 return []
1209 return []
1210 except IndexError:
1210 except IndexError:
1211 # tab pressed on empty line
1211 # tab pressed on empty line
1212 lsplit = ""
1212 lsplit = ""
1213
1213
1214 if not open_quotes and lsplit != protect_filename(lsplit):
1214 if not open_quotes and lsplit != protect_filename(lsplit):
1215 # if protectables are found, do matching on the whole escaped name
1215 # if protectables are found, do matching on the whole escaped name
1216 has_protectables = True
1216 has_protectables = True
1217 text0,text = text,lsplit
1217 text0,text = text,lsplit
1218 else:
1218 else:
1219 has_protectables = False
1219 has_protectables = False
1220 text = os.path.expanduser(text)
1220 text = os.path.expanduser(text)
1221
1221
1222 if text == "":
1222 if text == "":
1223 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1223 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1224
1224
1225 # Compute the matches from the filesystem
1225 # Compute the matches from the filesystem
1226 if sys.platform == 'win32':
1226 if sys.platform == 'win32':
1227 m0 = self.clean_glob(text)
1227 m0 = self.clean_glob(text)
1228 else:
1228 else:
1229 m0 = self.clean_glob(text.replace('\\', ''))
1229 m0 = self.clean_glob(text.replace('\\', ''))
1230
1230
1231 if has_protectables:
1231 if has_protectables:
1232 # If we had protectables, we need to revert our changes to the
1232 # If we had protectables, we need to revert our changes to the
1233 # beginning of filename so that we don't double-write the part
1233 # beginning of filename so that we don't double-write the part
1234 # of the filename we have so far
1234 # of the filename we have so far
1235 len_lsplit = len(lsplit)
1235 len_lsplit = len(lsplit)
1236 matches = [text_prefix + text0 +
1236 matches = [text_prefix + text0 +
1237 protect_filename(f[len_lsplit:]) for f in m0]
1237 protect_filename(f[len_lsplit:]) for f in m0]
1238 else:
1238 else:
1239 if open_quotes:
1239 if open_quotes:
1240 # if we have a string with an open quote, we don't need to
1240 # if we have a string with an open quote, we don't need to
1241 # protect the names beyond the quote (and we _shouldn't_, as
1241 # protect the names beyond the quote (and we _shouldn't_, as
1242 # it would cause bugs when the filesystem call is made).
1242 # it would cause bugs when the filesystem call is made).
1243 matches = m0 if sys.platform == "win32" else\
1243 matches = m0 if sys.platform == "win32" else\
1244 [protect_filename(f, open_quotes) for f in m0]
1244 [protect_filename(f, open_quotes) for f in m0]
1245 else:
1245 else:
1246 matches = [text_prefix +
1246 matches = [text_prefix +
1247 protect_filename(f) for f in m0]
1247 protect_filename(f) for f in m0]
1248
1248
1249 # Mark directories in input list by appending '/' to their names.
1249 # Mark directories in input list by appending '/' to their names.
1250 return [x+'/' if os.path.isdir(x) else x for x in matches]
1250 return [x+'/' if os.path.isdir(x) else x for x in matches]
1251
1251
1252 def magic_matches(self, text):
1252 def magic_matches(self, text):
1253 """Match magics"""
1253 """Match magics"""
1254 # Get all shell magics now rather than statically, so magics loaded at
1254 # Get all shell magics now rather than statically, so magics loaded at
1255 # runtime show up too.
1255 # runtime show up too.
1256 lsm = self.shell.magics_manager.lsmagic()
1256 lsm = self.shell.magics_manager.lsmagic()
1257 line_magics = lsm['line']
1257 line_magics = lsm['line']
1258 cell_magics = lsm['cell']
1258 cell_magics = lsm['cell']
1259 pre = self.magic_escape
1259 pre = self.magic_escape
1260 pre2 = pre+pre
1260 pre2 = pre+pre
1261
1261
1262 explicit_magic = text.startswith(pre)
1262 explicit_magic = text.startswith(pre)
1263
1263
1264 # Completion logic:
1264 # Completion logic:
1265 # - user gives %%: only do cell magics
1265 # - user gives %%: only do cell magics
1266 # - user gives %: do both line and cell magics
1266 # - user gives %: do both line and cell magics
1267 # - no prefix: do both
1267 # - no prefix: do both
1268 # In other words, line magics are skipped if the user gives %% explicitly
1268 # In other words, line magics are skipped if the user gives %% explicitly
1269 #
1269 #
1270 # We also exclude magics that match any currently visible names:
1270 # We also exclude magics that match any currently visible names:
1271 # https://github.com/ipython/ipython/issues/4877, unless the user has
1271 # https://github.com/ipython/ipython/issues/4877, unless the user has
1272 # typed a %:
1272 # typed a %:
1273 # https://github.com/ipython/ipython/issues/10754
1273 # https://github.com/ipython/ipython/issues/10754
1274 bare_text = text.lstrip(pre)
1274 bare_text = text.lstrip(pre)
1275 global_matches = self.global_matches(bare_text)
1275 global_matches = self.global_matches(bare_text)
1276 if not explicit_magic:
1276 if not explicit_magic:
1277 def matches(magic):
1277 def matches(magic):
1278 """
1278 """
1279 Filter magics, in particular remove magics that match
1279 Filter magics, in particular remove magics that match
1280 a name present in global namespace.
1280 a name present in global namespace.
1281 """
1281 """
1282 return ( magic.startswith(bare_text) and
1282 return ( magic.startswith(bare_text) and
1283 magic not in global_matches )
1283 magic not in global_matches )
1284 else:
1284 else:
1285 def matches(magic):
1285 def matches(magic):
1286 return magic.startswith(bare_text)
1286 return magic.startswith(bare_text)
1287
1287
1288 comp = [ pre2+m for m in cell_magics if matches(m)]
1288 comp = [ pre2+m for m in cell_magics if matches(m)]
1289 if not text.startswith(pre2):
1289 if not text.startswith(pre2):
1290 comp += [ pre+m for m in line_magics if matches(m)]
1290 comp += [ pre+m for m in line_magics if matches(m)]
1291
1291
1292 return comp
1292 return comp
1293
1293
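# Illustrative example (not from the original source); assumes an interactive
# session where ``get_ipython()`` is available, and the exact result depends on
# which magics are currently loaded:
#
#   >>> get_ipython().Completer.magic_matches('%tim')    # doctest: +SKIP
#   ['%%time', '%%timeit', '%time', '%timeit']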
1294 def magic_config_matches(self, text:str) -> List[str]:
1294 def magic_config_matches(self, text:str) -> List[str]:
1295 """ Match class names and attributes for %config magic """
1295 """ Match class names and attributes for %config magic """
1296 texts = text.strip().split()
1296 texts = text.strip().split()
1297
1297
1298 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1298 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1299 # get all configuration classes
1299 # get all configuration classes
1300 classes = sorted(set([ c for c in self.shell.configurables
1300 classes = sorted(set([ c for c in self.shell.configurables
1301 if c.__class__.class_traits(config=True)
1301 if c.__class__.class_traits(config=True)
1302 ]), key=lambda x: x.__class__.__name__)
1302 ]), key=lambda x: x.__class__.__name__)
1303 classnames = [ c.__class__.__name__ for c in classes ]
1303 classnames = [ c.__class__.__name__ for c in classes ]
1304
1304
1305 # return all classnames if config or %config is given
1305 # return all classnames if config or %config is given
1306 if len(texts) == 1:
1306 if len(texts) == 1:
1307 return classnames
1307 return classnames
1308
1308
1309 # match classname
1309 # match classname
1310 classname_texts = texts[1].split('.')
1310 classname_texts = texts[1].split('.')
1311 classname = classname_texts[0]
1311 classname = classname_texts[0]
1312 classname_matches = [ c for c in classnames
1312 classname_matches = [ c for c in classnames
1313 if c.startswith(classname) ]
1313 if c.startswith(classname) ]
1314
1314
1315 # return matched classes or the matched class with attributes
1315 # return matched classes or the matched class with attributes
1316 if texts[1].find('.') < 0:
1316 if texts[1].find('.') < 0:
1317 return classname_matches
1317 return classname_matches
1318 elif len(classname_matches) == 1 and \
1318 elif len(classname_matches) == 1 and \
1319 classname_matches[0] == classname:
1319 classname_matches[0] == classname:
1320 cls = classes[classnames.index(classname)].__class__
1320 cls = classes[classnames.index(classname)].__class__
1321 help = cls.class_get_help()
1321 help = cls.class_get_help()
1322 # strip leading '--' from cl-args:
1322 # strip leading '--' from cl-args:
1323 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1323 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1324 return [ attr.split('=')[0]
1324 return [ attr.split('=')[0]
1325 for attr in help.strip().splitlines()
1325 for attr in help.strip().splitlines()
1326 if attr.startswith(texts[1]) ]
1326 if attr.startswith(texts[1]) ]
1327 return []
1327 return []
1328
1328
1329 def magic_color_matches(self, text:str) -> List[str] :
1329 def magic_color_matches(self, text:str) -> List[str] :
1330 """ Match color schemes for %colors magic"""
1330 """ Match color schemes for %colors magic"""
1331 texts = text.split()
1331 texts = text.split()
1332 if text.endswith(' '):
1332 if text.endswith(' '):
1333 # .split() strips off the trailing whitespace. Add '' back
1333 # .split() strips off the trailing whitespace. Add '' back
1334 # so that: '%colors ' -> ['%colors', '']
1334 # so that: '%colors ' -> ['%colors', '']
1335 texts.append('')
1335 texts.append('')
1336
1336
1337 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1337 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1338 prefix = texts[1]
1338 prefix = texts[1]
1339 return [ color for color in InspectColors.keys()
1339 return [ color for color in InspectColors.keys()
1340 if color.startswith(prefix) ]
1340 if color.startswith(prefix) ]
1341 return []
1341 return []
1342
1342
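# Illustrative example (not from the original source); assumes the standard
# color schemes ('NoColor', 'Linux', 'LightBG', ...) are registered:
#
#   >>> get_ipython().Completer.magic_color_matches('%colors Li')   # doctest: +SKIP
#   ['Linux', 'LightBG']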
1343 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str):
1343 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str):
1344 """
1344 """
1345
1345
1346 Return a list of :any:`jedi.api.Completions` objects from a ``text`` and
1346 Return a list of :any:`jedi.api.Completions` objects from a ``text`` and
1347 cursor position.
1347 cursor position.
1348
1348
1349 Parameters
1349 Parameters
1350 ----------
1350 ----------
1351 cursor_column : int
1351 cursor_column : int
1352 column position of the cursor in ``text``, 0-indexed.
1352 column position of the cursor in ``text``, 0-indexed.
1353 cursor_line : int
1353 cursor_line : int
1354 line position of the cursor in ``text``, 0-indexed
1354 line position of the cursor in ``text``, 0-indexed
1355 text : str
1355 text : str
1356 text to complete
1356 text to complete
1357
1357
1358 Debugging
1358 Debugging
1359 ---------
1359 ---------
1360
1360
1361 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1361 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1362 object containing a string with the Jedi debug information attached.
1362 object containing a string with the Jedi debug information attached.
1363 """
1363 """
1364 namespaces = [self.namespace]
1364 namespaces = [self.namespace]
1365 if self.global_namespace is not None:
1365 if self.global_namespace is not None:
1366 namespaces.append(self.global_namespace)
1366 namespaces.append(self.global_namespace)
1367
1367
1368 completion_filter = lambda x:x
1368 completion_filter = lambda x:x
1369 offset = cursor_to_position(text, cursor_line, cursor_column)
1369 offset = cursor_to_position(text, cursor_line, cursor_column)
1370 # filter output if we are completing for object members
1370 # filter output if we are completing for object members
1371 if offset:
1371 if offset:
1372 pre = text[offset-1]
1372 pre = text[offset-1]
1373 if pre == '.':
1373 if pre == '.':
1374 if self.omit__names == 2:
1374 if self.omit__names == 2:
1375 completion_filter = lambda c:not c.name.startswith('_')
1375 completion_filter = lambda c:not c.name.startswith('_')
1376 elif self.omit__names == 1:
1376 elif self.omit__names == 1:
1377 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1377 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1378 elif self.omit__names == 0:
1378 elif self.omit__names == 0:
1379 completion_filter = lambda x:x
1379 completion_filter = lambda x:x
1380 else:
1380 else:
1381 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1381 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1382
1382
1383 interpreter = jedi.Interpreter(text[:offset], namespaces)
1383 interpreter = jedi.Interpreter(text[:offset], namespaces)
1384 try_jedi = True
1384 try_jedi = True
1385
1385
1386 try:
1386 try:
1387 # find the first token in the current tree -- if it is a ' or " then we are in a string
1387 # find the first token in the current tree -- if it is a ' or " then we are in a string
1388 completing_string = False
1388 completing_string = False
1389 try:
1389 try:
1390 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1390 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1391 except StopIteration:
1391 except StopIteration:
1392 pass
1392 pass
1393 else:
1393 else:
1394 # note the value may be ', ", or it may also be ''' or """, or
1394 # note the value may be ', ", or it may also be ''' or """, or
1395 # in some cases, """what/you/typed..., but all of these are
1395 # in some cases, """what/you/typed..., but all of these are
1396 # strings.
1396 # strings.
1397 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1397 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1398
1398
1399 # if we are in a string jedi is likely not the right candidate for
1399 # if we are in a string jedi is likely not the right candidate for
1400 # now. Skip it.
1400 # now. Skip it.
1401 try_jedi = not completing_string
1401 try_jedi = not completing_string
1402 except Exception as e:
1402 except Exception as e:
1403 # many things can go wrong; we are using a private API, just don't crash.
1403 # many things can go wrong; we are using a private API, just don't crash.
1404 if self.debug:
1404 if self.debug:
1405 print("Error detecting if completing a non-finished string :", e, '|')
1405 print("Error detecting if completing a non-finished string :", e, '|')
1406
1406
1407 if not try_jedi:
1407 if not try_jedi:
1408 return []
1408 return []
1409 try:
1409 try:
1410 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1410 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1411 except Exception as e:
1411 except Exception as e:
1412 if self.debug:
1412 if self.debug:
1413 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1413 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1414 else:
1414 else:
1415 return []
1415 return []
1416
1416
1417 def python_matches(self, text):
1417 def python_matches(self, text):
1418 """Match attributes or global python names"""
1418 """Match attributes or global python names"""
1419 if "." in text:
1419 if "." in text:
1420 try:
1420 try:
1421 matches = self.attr_matches(text)
1421 matches = self.attr_matches(text)
1422 if text.endswith('.') and self.omit__names:
1422 if text.endswith('.') and self.omit__names:
1423 if self.omit__names == 1:
1423 if self.omit__names == 1:
1424 # true if txt is _not_ a __ name, false otherwise:
1424 # true if txt is _not_ a __ name, false otherwise:
1425 no__name = (lambda txt:
1425 no__name = (lambda txt:
1426 re.match(r'.*\.__.*?__',txt) is None)
1426 re.match(r'.*\.__.*?__',txt) is None)
1427 else:
1427 else:
1428 # true if txt is _not_ a _ name, false otherwise:
1428 # true if txt is _not_ a _ name, false otherwise:
1429 no__name = (lambda txt:
1429 no__name = (lambda txt:
1430 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1430 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1431 matches = filter(no__name, matches)
1431 matches = filter(no__name, matches)
1432 except NameError:
1432 except NameError:
1433 # catches <undefined attributes>.<tab>
1433 # catches <undefined attributes>.<tab>
1434 matches = []
1434 matches = []
1435 else:
1435 else:
1436 matches = self.global_matches(text)
1436 matches = self.global_matches(text)
1437 return matches
1437 return matches
1438
1438
1439 def _default_arguments_from_docstring(self, doc):
1439 def _default_arguments_from_docstring(self, doc):
1440 """Parse the first line of docstring for call signature.
1440 """Parse the first line of docstring for call signature.
1441
1441
1442 Docstring should be of the form 'min(iterable[, key=func])\n'.
1442 Docstring should be of the form 'min(iterable[, key=func])\n'.
1443 It can also parse Cython docstrings of the form
1443 It can also parse Cython docstrings of the form
1444 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1444 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1445 """
1445 """
1446 if doc is None:
1446 if doc is None:
1447 return []
1447 return []
1448
1448
1449 # only the first line of the docstring matters
1449 # only the first line of the docstring matters
1450 line = doc.lstrip().splitlines()[0]
1450 line = doc.lstrip().splitlines()[0]
1451
1451
1452 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1452 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1453 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1453 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1454 sig = self.docstring_sig_re.search(line)
1454 sig = self.docstring_sig_re.search(line)
1455 if sig is None:
1455 if sig is None:
1456 return []
1456 return []
1457 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1457 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1458 sig = sig.groups()[0].split(',')
1458 sig = sig.groups()[0].split(',')
1459 ret = []
1459 ret = []
1460 for s in sig:
1460 for s in sig:
1461 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1461 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1462 ret += self.docstring_kwd_re.findall(s)
1462 ret += self.docstring_kwd_re.findall(s)
1463 return ret
1463 return ret
1464
1464
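# Examples of the two docstring forms handled above (added for illustration,
# not part of the original source); ``comp`` is assumed to be the running
# shell's IPCompleter instance:
#
#   >>> comp = get_ipython().Completer
#   >>> comp._default_arguments_from_docstring('min(iterable[, key=func])\n')
#   ['key']
#   >>> comp._default_arguments_from_docstring(
#   ...     'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)')
#   ['ncall', 'resume', 'nsplit']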
1465 def _default_arguments(self, obj):
1465 def _default_arguments(self, obj):
1466 """Return the list of default arguments of obj if it is callable,
1466 """Return the list of default arguments of obj if it is callable,
1467 or empty list otherwise."""
1467 or empty list otherwise."""
1468 call_obj = obj
1468 call_obj = obj
1469 ret = []
1469 ret = []
1470 if inspect.isbuiltin(obj):
1470 if inspect.isbuiltin(obj):
1471 pass
1471 pass
1472 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1472 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1473 if inspect.isclass(obj):
1473 if inspect.isclass(obj):
1474 #for cython embedsignature=True the constructor docstring
1474 #for cython embedsignature=True the constructor docstring
1475 #belongs to the object itself not __init__
1475 #belongs to the object itself not __init__
1476 ret += self._default_arguments_from_docstring(
1476 ret += self._default_arguments_from_docstring(
1477 getattr(obj, '__doc__', ''))
1477 getattr(obj, '__doc__', ''))
1478 # for classes, check for __init__,__new__
1478 # for classes, check for __init__,__new__
1479 call_obj = (getattr(obj, '__init__', None) or
1479 call_obj = (getattr(obj, '__init__', None) or
1480 getattr(obj, '__new__', None))
1480 getattr(obj, '__new__', None))
1481 # for all others, check if they are __call__able
1481 # for all others, check if they are __call__able
1482 elif hasattr(obj, '__call__'):
1482 elif hasattr(obj, '__call__'):
1483 call_obj = obj.__call__
1483 call_obj = obj.__call__
1484 ret += self._default_arguments_from_docstring(
1484 ret += self._default_arguments_from_docstring(
1485 getattr(call_obj, '__doc__', ''))
1485 getattr(call_obj, '__doc__', ''))
1486
1486
1487 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1487 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1488 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1488 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1489
1489
1490 try:
1490 try:
1491 sig = inspect.signature(call_obj)
1491 sig = inspect.signature(call_obj)
1492 ret.extend(k for k, v in sig.parameters.items() if
1492 ret.extend(k for k, v in sig.parameters.items() if
1493 v.kind in _keeps)
1493 v.kind in _keeps)
1494 except ValueError:
1494 except ValueError:
1495 pass
1495 pass
1496
1496
1497 return list(set(ret))
1497 return list(set(ret))
1498
1498
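# Illustrative example (not from the original source): *args and **kwargs are
# dropped, only keyword-capable parameters are returned (order is unspecified
# because of the set()):
#
#   >>> def f(a, b=1, *args, c=2, **kwargs):
#   ...     pass
#   >>> sorted(get_ipython().Completer._default_arguments(f))
#   ['a', 'b', 'c']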
1499 def python_func_kw_matches(self,text):
1499 def python_func_kw_matches(self,text):
1500 """Match named parameters (kwargs) of the last open function"""
1500 """Match named parameters (kwargs) of the last open function"""
1501
1501
1502 if "." in text: # a parameter cannot be dotted
1502 if "." in text: # a parameter cannot be dotted
1503 return []
1503 return []
1504 try: regexp = self.__funcParamsRegex
1504 try: regexp = self.__funcParamsRegex
1505 except AttributeError:
1505 except AttributeError:
1506 regexp = self.__funcParamsRegex = re.compile(r'''
1506 regexp = self.__funcParamsRegex = re.compile(r'''
1507 '.*?(?<!\\)' | # single quoted strings or
1507 '.*?(?<!\\)' | # single quoted strings or
1508 ".*?(?<!\\)" | # double quoted strings or
1508 ".*?(?<!\\)" | # double quoted strings or
1509 \w+ | # identifier
1509 \w+ | # identifier
1510 \S # other characters
1510 \S # other characters
1511 ''', re.VERBOSE | re.DOTALL)
1511 ''', re.VERBOSE | re.DOTALL)
1512 # 1. find the nearest identifier that comes before an unclosed
1512 # 1. find the nearest identifier that comes before an unclosed
1513 # parenthesis before the cursor
1513 # parenthesis before the cursor
1514 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1514 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1515 tokens = regexp.findall(self.text_until_cursor)
1515 tokens = regexp.findall(self.text_until_cursor)
1516 iterTokens = reversed(tokens); openPar = 0
1516 iterTokens = reversed(tokens); openPar = 0
1517
1517
1518 for token in iterTokens:
1518 for token in iterTokens:
1519 if token == ')':
1519 if token == ')':
1520 openPar -= 1
1520 openPar -= 1
1521 elif token == '(':
1521 elif token == '(':
1522 openPar += 1
1522 openPar += 1
1523 if openPar > 0:
1523 if openPar > 0:
1524 # found the last unclosed parenthesis
1524 # found the last unclosed parenthesis
1525 break
1525 break
1526 else:
1526 else:
1527 return []
1527 return []
1528 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1528 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1529 ids = []
1529 ids = []
1530 isId = re.compile(r'\w+$').match
1530 isId = re.compile(r'\w+$').match
1531
1531
1532 while True:
1532 while True:
1533 try:
1533 try:
1534 ids.append(next(iterTokens))
1534 ids.append(next(iterTokens))
1535 if not isId(ids[-1]):
1535 if not isId(ids[-1]):
1536 ids.pop(); break
1536 ids.pop(); break
1537 if not next(iterTokens) == '.':
1537 if not next(iterTokens) == '.':
1538 break
1538 break
1539 except StopIteration:
1539 except StopIteration:
1540 break
1540 break
1541
1541
1542 # Find all named arguments already assigned to, so as to avoid suggesting
1542 # Find all named arguments already assigned to, so as to avoid suggesting
1543 # them again
1543 # them again
1544 usedNamedArgs = set()
1544 usedNamedArgs = set()
1545 par_level = -1
1545 par_level = -1
1546 for token, next_token in zip(tokens, tokens[1:]):
1546 for token, next_token in zip(tokens, tokens[1:]):
1547 if token == '(':
1547 if token == '(':
1548 par_level += 1
1548 par_level += 1
1549 elif token == ')':
1549 elif token == ')':
1550 par_level -= 1
1550 par_level -= 1
1551
1551
1552 if par_level != 0:
1552 if par_level != 0:
1553 continue
1553 continue
1554
1554
1555 if next_token != '=':
1555 if next_token != '=':
1556 continue
1556 continue
1557
1557
1558 usedNamedArgs.add(token)
1558 usedNamedArgs.add(token)
1559
1559
1560 argMatches = []
1560 argMatches = []
1561 try:
1561 try:
1562 callableObj = '.'.join(ids[::-1])
1562 callableObj = '.'.join(ids[::-1])
1563 namedArgs = self._default_arguments(eval(callableObj,
1563 namedArgs = self._default_arguments(eval(callableObj,
1564 self.namespace))
1564 self.namespace))
1565
1565
1566 # Remove used named arguments from the list, no need to show twice
1566 # Remove used named arguments from the list, no need to show twice
1567 for namedArg in set(namedArgs) - usedNamedArgs:
1567 for namedArg in set(namedArgs) - usedNamedArgs:
1568 if namedArg.startswith(text):
1568 if namedArg.startswith(text):
1569 argMatches.append(u"%s=" %namedArg)
1569 argMatches.append(u"%s=" %namedArg)
1570 except:
1570 except:
1571 pass
1571 pass
1572
1572
1573 return argMatches
1573 return argMatches
1574
1574
1575 def dict_key_matches(self, text):
1575 def dict_key_matches(self, text):
1576 "Match string keys in a dictionary, after e.g. 'foo[' "
1576 "Match string keys in a dictionary, after e.g. 'foo[' "
1577 def get_keys(obj):
1577 def get_keys(obj):
1578 # Objects can define their own completions by defining an
1578 # Objects can define their own completions by defining an
1579 # _ipython_key_completions_() method.
1579 # _ipython_key_completions_() method.
1580 method = get_real_method(obj, '_ipython_key_completions_')
1580 method = get_real_method(obj, '_ipython_key_completions_')
1581 if method is not None:
1581 if method is not None:
1582 return method()
1582 return method()
1583
1583
1584 # Special case some common in-memory dict-like types
1584 # Special case some common in-memory dict-like types
1585 if isinstance(obj, dict) or\
1585 if isinstance(obj, dict) or\
1586 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1586 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1587 try:
1587 try:
1588 return list(obj.keys())
1588 return list(obj.keys())
1589 except Exception:
1589 except Exception:
1590 return []
1590 return []
1591 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1591 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1592 _safe_isinstance(obj, 'numpy', 'void'):
1592 _safe_isinstance(obj, 'numpy', 'void'):
1593 return obj.dtype.names or []
1593 return obj.dtype.names or []
1594 return []
1594 return []
1595
1595
1596 try:
1596 try:
1597 regexps = self.__dict_key_regexps
1597 regexps = self.__dict_key_regexps
1598 except AttributeError:
1598 except AttributeError:
1599 dict_key_re_fmt = r'''(?x)
1599 dict_key_re_fmt = r'''(?x)
1600 ( # match dict-referring expression wrt greedy setting
1600 ( # match dict-referring expression wrt greedy setting
1601 %s
1601 %s
1602 )
1602 )
1603 \[ # open bracket
1603 \[ # open bracket
1604 \s* # and optional whitespace
1604 \s* # and optional whitespace
1605 ([uUbB]? # string prefix (r not handled)
1605 ([uUbB]? # string prefix (r not handled)
1606 (?: # unclosed string
1606 (?: # unclosed string
1607 '(?:[^']|(?<!\\)\\')*
1607 '(?:[^']|(?<!\\)\\')*
1608 |
1608 |
1609 "(?:[^"]|(?<!\\)\\")*
1609 "(?:[^"]|(?<!\\)\\")*
1610 )
1610 )
1611 )?
1611 )?
1612 $
1612 $
1613 '''
1613 '''
1614 regexps = self.__dict_key_regexps = {
1614 regexps = self.__dict_key_regexps = {
1615 False: re.compile(dict_key_re_fmt % r'''
1615 False: re.compile(dict_key_re_fmt % r'''
1616 # identifiers separated by .
1616 # identifiers separated by .
1617 (?!\d)\w+
1617 (?!\d)\w+
1618 (?:\.(?!\d)\w+)*
1618 (?:\.(?!\d)\w+)*
1619 '''),
1619 '''),
1620 True: re.compile(dict_key_re_fmt % '''
1620 True: re.compile(dict_key_re_fmt % '''
1621 .+
1621 .+
1622 ''')
1622 ''')
1623 }
1623 }
1624
1624
1625 match = regexps[self.greedy].search(self.text_until_cursor)
1625 match = regexps[self.greedy].search(self.text_until_cursor)
1626 if match is None:
1626 if match is None:
1627 return []
1627 return []
1628
1628
1629 expr, prefix = match.groups()
1629 expr, prefix = match.groups()
1630 try:
1630 try:
1631 obj = eval(expr, self.namespace)
1631 obj = eval(expr, self.namespace)
1632 except Exception:
1632 except Exception:
1633 try:
1633 try:
1634 obj = eval(expr, self.global_namespace)
1634 obj = eval(expr, self.global_namespace)
1635 except Exception:
1635 except Exception:
1636 return []
1636 return []
1637
1637
1638 keys = get_keys(obj)
1638 keys = get_keys(obj)
1639 if not keys:
1639 if not keys:
1640 return keys
1640 return keys
1641 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims)
1641 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims)
1642 if not matches:
1642 if not matches:
1643 return matches
1643 return matches
1644
1644
1645 # get the cursor position of
1645 # get the cursor position of
1646 # - the text being completed
1646 # - the text being completed
1647 # - the start of the key text
1647 # - the start of the key text
1648 # - the start of the completion
1648 # - the start of the completion
1649 text_start = len(self.text_until_cursor) - len(text)
1649 text_start = len(self.text_until_cursor) - len(text)
1650 if prefix:
1650 if prefix:
1651 key_start = match.start(2)
1651 key_start = match.start(2)
1652 completion_start = key_start + token_offset
1652 completion_start = key_start + token_offset
1653 else:
1653 else:
1654 key_start = completion_start = match.end()
1654 key_start = completion_start = match.end()
1655
1655
1656 # grab the leading prefix, to make sure all completions start with `text`
1656 # grab the leading prefix, to make sure all completions start with `text`
1657 if text_start > key_start:
1657 if text_start > key_start:
1658 leading = ''
1658 leading = ''
1659 else:
1659 else:
1660 leading = text[text_start:completion_start]
1660 leading = text[text_start:completion_start]
1661
1661
1662 # the index of the `[` character
1662 # the index of the `[` character
1663 bracket_idx = match.end(1)
1663 bracket_idx = match.end(1)
1664
1664
1665 # append closing quote and bracket as appropriate
1665 # append closing quote and bracket as appropriate
1666 # this is *not* appropriate if the opening quote or bracket is outside
1666 # this is *not* appropriate if the opening quote or bracket is outside
1667 # the text given to this method
1667 # the text given to this method
1668 suf = ''
1668 suf = ''
1669 continuation = self.line_buffer[len(self.text_until_cursor):]
1669 continuation = self.line_buffer[len(self.text_until_cursor):]
1670 if key_start > text_start and closing_quote:
1670 if key_start > text_start and closing_quote:
1671 # quotes were opened inside text, maybe close them
1671 # quotes were opened inside text, maybe close them
1672 if continuation.startswith(closing_quote):
1672 if continuation.startswith(closing_quote):
1673 continuation = continuation[len(closing_quote):]
1673 continuation = continuation[len(closing_quote):]
1674 else:
1674 else:
1675 suf += closing_quote
1675 suf += closing_quote
1676 if bracket_idx > text_start:
1676 if bracket_idx > text_start:
1677 # brackets were opened inside text, maybe close them
1677 # brackets were opened inside text, maybe close them
1678 if not continuation.startswith(']'):
1678 if not continuation.startswith(']'):
1679 suf += ']'
1679 suf += ']'
1680
1680
1681 return [leading + k + suf for k in matches]
1681 return [leading + k + suf for k in matches]
1682
1682
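# A small illustrative sketch (not from the original source) of the key
# completion hook honoured above: any object may advertise its own keys by
# defining ``_ipython_key_completions_``.
#
#   >>> class Config:
#   ...     def _ipython_key_completions_(self):
#   ...         return ['alpha', 'beta']
#   >>> cfg = Config()
#   >>> # typing  cfg['al  and pressing TAB in IPython should now offer 'alpha'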
1683 def unicode_name_matches(self, text):
1683 def unicode_name_matches(self, text):
1684 u"""Match Latex-like syntax for unicode characters based
1684 u"""Match Latex-like syntax for unicode characters based
1685 on the name of the character.
1685 on the name of the character.
1686
1686
1687 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
1687 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
1688
1688
1689 Works only on valid Python 3 identifiers, or on combining characters that
1689 Works only on valid Python 3 identifiers, or on combining characters that
1690 will combine to form a valid identifier.
1690 will combine to form a valid identifier.
1691
1691
1692 Used on Python 3 only.
1692 Used on Python 3 only.
1693 """
1693 """
1694 slashpos = text.rfind('\\')
1694 slashpos = text.rfind('\\')
1695 if slashpos > -1:
1695 if slashpos > -1:
1696 s = text[slashpos+1:]
1696 s = text[slashpos+1:]
1697 try :
1697 try :
1698 unic = unicodedata.lookup(s)
1698 unic = unicodedata.lookup(s)
1699 # allow combining chars
1699 # allow combining chars
1700 if ('a'+unic).isidentifier():
1700 if ('a'+unic).isidentifier():
1701 return '\\'+s,[unic]
1701 return '\\'+s,[unic]
1702 except KeyError:
1702 except KeyError:
1703 pass
1703 pass
1704 return u'', []
1704 return u'', []
1705
1705
1706
1706
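# Example matching the docstring above (added for illustration, not part of the
# original source):
#
#   >>> get_ipython().Completer.unicode_name_matches('\\GREEK SMALL LETTER ETA')
#   ('\\GREEK SMALL LETTER ETA', ['Ξ·'])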
1707 def latex_matches(self, text):
1707 def latex_matches(self, text):
1708 u"""Match Latex syntax for unicode characters.
1708 u"""Match Latex syntax for unicode characters.
1709
1709
1710 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
1710 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
1711
1712 Used on Python 3 only.
1713 """
1711 """
1714 slashpos = text.rfind('\\')
1712 slashpos = text.rfind('\\')
1715 if slashpos > -1:
1713 if slashpos > -1:
1716 s = text[slashpos:]
1714 s = text[slashpos:]
1717 if s in latex_symbols:
1715 if s in latex_symbols:
1718 # Try to complete a full latex symbol to unicode
1716 # Try to complete a full latex symbol to unicode
1719 # \\alpha -> Ξ±
1717 # \\alpha -> Ξ±
1720 return s, [latex_symbols[s]]
1718 return s, [latex_symbols[s]]
1721 else:
1719 else:
1722 # If a user has partially typed a latex symbol, give them
1720 # If a user has partially typed a latex symbol, give them
1723 # a full list of options \al -> [\aleph, \alpha]
1721 # a full list of options \al -> [\aleph, \alpha]
1724 matches = [k for k in latex_symbols if k.startswith(s)]
1722 matches = [k for k in latex_symbols if k.startswith(s)]
1723 if matches:
1725 return s, matches
1724 return s, matches
1726 return u'', []
1725 return u'', []
1727
1726
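# Examples of the two behaviours documented above (added for illustration, not
# part of the original source); the partial-match list depends on the latex
# symbol table shipped with IPython:
#
#   >>> ip = get_ipython()
#   >>> ip.Completer.latex_matches('\\alpha')
#   ('\\alpha', ['Ξ±'])
#   >>> ip.Completer.latex_matches('\\al')          # doctest: +SKIP
#   ('\\al', ['\\aleph', '\\alpha'])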
1728 def dispatch_custom_completer(self, text):
1727 def dispatch_custom_completer(self, text):
1729 if not self.custom_completers:
1728 if not self.custom_completers:
1730 return
1729 return
1731
1730
1732 line = self.line_buffer
1731 line = self.line_buffer
1733 if not line.strip():
1732 if not line.strip():
1734 return None
1733 return None
1735
1734
1736 # Create a little structure to pass all the relevant information about
1735 # Create a little structure to pass all the relevant information about
1737 # the current completion to any custom completer.
1736 # the current completion to any custom completer.
1738 event = SimpleNamespace()
1737 event = SimpleNamespace()
1739 event.line = line
1738 event.line = line
1740 event.symbol = text
1739 event.symbol = text
1741 cmd = line.split(None,1)[0]
1740 cmd = line.split(None,1)[0]
1742 event.command = cmd
1741 event.command = cmd
1743 event.text_until_cursor = self.text_until_cursor
1742 event.text_until_cursor = self.text_until_cursor
1744
1743
1745 # for foo etc, try also to find completer for %foo
1744 # for foo etc, try also to find completer for %foo
1746 if not cmd.startswith(self.magic_escape):
1745 if not cmd.startswith(self.magic_escape):
1747 try_magic = self.custom_completers.s_matches(
1746 try_magic = self.custom_completers.s_matches(
1748 self.magic_escape + cmd)
1747 self.magic_escape + cmd)
1749 else:
1748 else:
1750 try_magic = []
1749 try_magic = []
1751
1750
1752 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1751 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1753 try_magic,
1752 try_magic,
1754 self.custom_completers.flat_matches(self.text_until_cursor)):
1753 self.custom_completers.flat_matches(self.text_until_cursor)):
1755 try:
1754 try:
1756 res = c(event)
1755 res = c(event)
1757 if res:
1756 if res:
1758 # first, try case sensitive match
1757 # first, try case sensitive match
1759 withcase = [r for r in res if r.startswith(text)]
1758 withcase = [r for r in res if r.startswith(text)]
1760 if withcase:
1759 if withcase:
1761 return withcase
1760 return withcase
1762 # if none, then case insensitive ones are ok too
1761 # if none, then case insensitive ones are ok too
1763 text_low = text.lower()
1762 text_low = text.lower()
1764 return [r for r in res if r.lower().startswith(text_low)]
1763 return [r for r in res if r.lower().startswith(text_low)]
1765 except TryNext:
1764 except TryNext:
1766 pass
1765 pass
1767 except KeyboardInterrupt:
1766 except KeyboardInterrupt:
1768 """
1767 """
1769 If a custom completer takes too long,
1768 If a custom completer takes too long,
1770 let the keyboard interrupt abort it and return nothing.
1769 let the keyboard interrupt abort it and return nothing.
1771 """
1770 """
1772 break
1771 break
1773
1772
1774 return None
1773 return None
1775
1774
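# Hypothetical registration sketch (not from the original source): custom
# completers are normally installed through the 'complete_command' hook and
# receive the ``event`` namespace built above (line, symbol, command,
# text_until_cursor). The '%?apt' key and the returned words are made up for
# the example:
#
#   >>> def apt_completer(self, event):
#   ...     return ['update', 'upgrade', 'install', 'remove']
#   >>> get_ipython().set_hook('complete_command', apt_completer, re_key='%?apt')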
1776 def completions(self, text: str, offset: int)->Iterator[Completion]:
1775 def completions(self, text: str, offset: int)->Iterator[Completion]:
1777 """
1776 """
1778 Returns an iterator over the possible completions
1777 Returns an iterator over the possible completions
1779
1778
1780 .. warning:: Unstable
1779 .. warning:: Unstable
1781
1780
1782 This function is unstable, API may change without warning.
1781 This function is unstable, API may change without warning.
1783 It will also raise unless used in the proper context manager.
1782 It will also raise unless used in the proper context manager.
1784
1783
1785 Parameters
1784 Parameters
1786 ----------
1785 ----------
1787
1786
1788 text:str
1787 text:str
1789 Full text of the current input, as a multi-line string.
1788 Full text of the current input, as a multi-line string.
1790 offset:int
1789 offset:int
1791 Integer representing the position of the cursor in ``text``. Offset
1790 Integer representing the position of the cursor in ``text``. Offset
1792 is 0-indexed.
1791 is 0-indexed.
1793
1792
1794 Yields
1793 Yields
1795 ------
1794 ------
1796 :any:`Completion` object
1795 :any:`Completion` object
1797
1796
1798
1797
1799 Depending on the interface visible to the user, the cursor can be seen
1798 Depending on the interface visible to the user, the cursor can be seen
1800 either as being "in between" characters or as being "on" a character.
1799 either as being "in between" characters or as being "on" a character.
1801 For consistency, the cursor being "in between" characters X and Y is
1800 For consistency, the cursor being "in between" characters X and Y is
1802 equivalent to the cursor being "on" character Y, that is to say the
1801 equivalent to the cursor being "on" character Y, that is to say the
1803 character the cursor is on is considered as being after the cursor.
1802 character the cursor is on is considered as being after the cursor.
1804
1803
1805 Combining characters may span more than one position in the
1804 Combining characters may span more than one position in the
1806 text.
1805 text.
1807
1806
1808
1807
1809 .. note::
1808 .. note::
1810
1809
1811 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1810 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1812 fake Completion token to distinguish completions returned by Jedi
1811 fake Completion token to distinguish completions returned by Jedi
1813 from the usual IPython completions.
1812 from the usual IPython completions.
1814
1813
1815 .. note::
1814 .. note::
1816
1815
1817 Completions are not completely deduplicated yet. If identical
1816 Completions are not completely deduplicated yet. If identical
1818 completions come from different sources, this function does not
1817 completions come from different sources, this function does not
1819 ensure that each completion object will only be present once.
1818 ensure that each completion object will only be present once.
1820 """
1819 """
1821 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1820 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1822 "It may change without warnings. "
1821 "It may change without warnings. "
1823 "Use in corresponding context manager.",
1822 "Use in corresponding context manager.",
1824 category=ProvisionalCompleterWarning, stacklevel=2)
1823 category=ProvisionalCompleterWarning, stacklevel=2)
1825
1824
1826 seen = set()
1825 seen = set()
1827 try:
1826 try:
1828 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1827 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1829 if c and (c in seen):
1828 if c and (c in seen):
1830 continue
1829 continue
1831 yield c
1830 yield c
1832 seen.add(c)
1831 seen.add(c)
1833 except KeyboardInterrupt:
1832 except KeyboardInterrupt:
1834 """if completions take too long and users send keyboard interrupt,
1833 """if completions take too long and users send keyboard interrupt,
1835 do not crash and return ASAP. """
1834 do not crash and return ASAP. """
1836 pass
1835 pass
1837
1836
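A minimal usage sketch of this provisional API, in the same spirit as the tests further down; it assumes an interactive shell is available via ``get_ipython()`` and wraps the call in ``provisionalcompleter()`` so the provisional-API warning is acknowledged:

.. code::

    from IPython import get_ipython
    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()
    code = "import sys\nsys.pl"
    with provisionalcompleter():
        for comp in ip.Completer.completions(code, len(code)):
            # each item is a Completion; start/end are offsets into `code`
            print(comp.text, comp.start, comp.end, comp.type)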
1838 def _completions(self, full_text: str, offset: int, *, _timeout)->Iterator[Completion]:
1837 def _completions(self, full_text: str, offset: int, *, _timeout)->Iterator[Completion]:
1839 """
1838 """
1840 Core completion routine. Same signature as :any:`completions`, with the
1839 Core completion routine. Same signature as :any:`completions`, with the
1841 extra ``_timeout`` parameter (in seconds).
1840 extra ``_timeout`` parameter (in seconds).
1842
1841
1843
1842
1844 Computing jedi's completion ``.type`` can be quite expensive (it is a
1843 Computing jedi's completion ``.type`` can be quite expensive (it is a
1845 lazy property) and can require some warm-up, more warm-up than just
1844 lazy property) and can require some warm-up, more warm-up than just
1846 computing the ``name`` of a completion. The warm-up can be:
1845 computing the ``name`` of a completion. The warm-up can be:
1847
1846
1848 - Long warm-up the first time a module is encountered after
1847 - Long warm-up the first time a module is encountered after
1849 install/update: actually building the parse/inference tree.
1848 install/update: actually building the parse/inference tree.
1850
1849
1851 - First time the module is encountered in a session: loading the tree from
1850 - First time the module is encountered in a session: loading the tree from
1852 disk.
1851 disk.
1853
1852
1854 We don't want to block completions for tens of seconds, so we give the
1853 We don't want to block completions for tens of seconds, so we give the
1855 completer a "budget" of ``_timeout`` seconds per invocation to compute
1854 completer a "budget" of ``_timeout`` seconds per invocation to compute
1856 completion types. The completions whose types have not yet been computed
1855 completion types. The completions whose types have not yet been computed
1857 will be marked as "unknown" and will have a chance to be computed on the
1856 will be marked as "unknown" and will have a chance to be computed on the
1858 next round, as things get cached.
1857 next round, as things get cached.
1859
1858
1860 Keep in mind that Jedi is not the only thing treating the completion, so
1859 Keep in mind that Jedi is not the only thing treating the completion, so
1861 keep the timeout short-ish: if we take more than 0.3 seconds we still
1860 keep the timeout short-ish: if we take more than 0.3 seconds we still
1862 have lots of processing to do.
1861 have lots of processing to do.
1863
1862
1864 """
1863 """
1865 deadline = time.monotonic() + _timeout
1864 deadline = time.monotonic() + _timeout
1866
1865
1867
1866
1868 before = full_text[:offset]
1867 before = full_text[:offset]
1869 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1868 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1870
1869
1871 matched_text, matches, matches_origin, jedi_matches = self._complete(
1870 matched_text, matches, matches_origin, jedi_matches = self._complete(
1872 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1871 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1873
1872
1874 iter_jm = iter(jedi_matches)
1873 iter_jm = iter(jedi_matches)
1875 if _timeout:
1874 if _timeout:
1876 for jm in iter_jm:
1875 for jm in iter_jm:
1877 try:
1876 try:
1878 type_ = jm.type
1877 type_ = jm.type
1879 except Exception:
1878 except Exception:
1880 if self.debug:
1879 if self.debug:
1881 print("Error in Jedi getting type of ", jm)
1880 print("Error in Jedi getting type of ", jm)
1882 type_ = None
1881 type_ = None
1883 delta = len(jm.name_with_symbols) - len(jm.complete)
1882 delta = len(jm.name_with_symbols) - len(jm.complete)
1884 if type_ == 'function':
1883 if type_ == 'function':
1885 signature = _make_signature(jm)
1884 signature = _make_signature(jm)
1886 else:
1885 else:
1887 signature = ''
1886 signature = ''
1888 yield Completion(start=offset - delta,
1887 yield Completion(start=offset - delta,
1889 end=offset,
1888 end=offset,
1890 text=jm.name_with_symbols,
1889 text=jm.name_with_symbols,
1891 type=type_,
1890 type=type_,
1892 signature=signature,
1891 signature=signature,
1893 _origin='jedi')
1892 _origin='jedi')
1894
1893
1895 if time.monotonic() > deadline:
1894 if time.monotonic() > deadline:
1896 break
1895 break
1897
1896
1898 for jm in iter_jm:
1897 for jm in iter_jm:
1899 delta = len(jm.name_with_symbols) - len(jm.complete)
1898 delta = len(jm.name_with_symbols) - len(jm.complete)
1900 yield Completion(start=offset - delta,
1899 yield Completion(start=offset - delta,
1901 end=offset,
1900 end=offset,
1902 text=jm.name_with_symbols,
1901 text=jm.name_with_symbols,
1903 type='<unknown>', # don't compute type for speed
1902 type='<unknown>', # don't compute type for speed
1904 _origin='jedi',
1903 _origin='jedi',
1905 signature='')
1904 signature='')
1906
1905
1907
1906
1908 start_offset = before.rfind(matched_text)
1907 start_offset = before.rfind(matched_text)
1909
1908
1910 # TODO:
1909 # TODO:
1911 # Suppress this, right now just for debug.
1910 # Suppress this, right now just for debug.
1912 if jedi_matches and matches and self.debug:
1911 if jedi_matches and matches and self.debug:
1913 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
1912 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
1914 _origin='debug', type='none', signature='')
1913 _origin='debug', type='none', signature='')
1915
1914
1916 # I'm unsure if this is always true, so let's assert and see if it
1915 # I'm unsure if this is always true, so let's assert and see if it
1917 # crashes
1916 # crashes
1918 assert before.endswith(matched_text)
1917 assert before.endswith(matched_text)
1919 for m, t in zip(matches, matches_origin):
1918 for m, t in zip(matches, matches_origin):
1920 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
1919 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
1921
1920
1922
1921
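A short walk-through of the ``start``/``end`` arithmetic used in the Jedi branch above, with assumed Jedi values; the resulting offsets are consistent with the ``Completion(5, ..., 'real')`` objects asserted in the test file below:

.. code::

    # Assumed inputs: full_text = "a[0].re", offset = 7, and Jedi reporting
    # name_with_symbols = "real", complete = "al" (illustrative values).
    delta = len("real") - len("al")   # 2: how much of the name is already typed
    start, end = 7 - delta, 7         # -> Completion(start=5, end=7, text="real")
    # i.e. the already-typed "re" (positions 5-6) gets replaced by "real".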
1923 def complete(self, text=None, line_buffer=None, cursor_pos=None):
1922 def complete(self, text=None, line_buffer=None, cursor_pos=None):
1924 """Find completions for the given text and line context.
1923 """Find completions for the given text and line context.
1925
1924
1926 Note that both the text and the line_buffer are optional, but at least
1925 Note that both the text and the line_buffer are optional, but at least
1927 one of them must be given.
1926 one of them must be given.
1928
1927
1929 Parameters
1928 Parameters
1930 ----------
1929 ----------
1931 text : string, optional
1930 text : string, optional
1932 Text to perform the completion on. If not given, the line buffer
1931 Text to perform the completion on. If not given, the line buffer
1933 is split using the instance's CompletionSplitter object.
1932 is split using the instance's CompletionSplitter object.
1934
1933
1935 line_buffer : string, optional
1934 line_buffer : string, optional
1936 If not given, the completer attempts to obtain the current line
1935 If not given, the completer attempts to obtain the current line
1937 buffer via readline. This keyword allows clients which are
1936 buffer via readline. This keyword allows clients which are
1938 requesting text completions in non-readline contexts to inform
1937 requesting text completions in non-readline contexts to inform
1939 the completer of the entire text.
1938 the completer of the entire text.
1940
1939
1941 cursor_pos : int, optional
1940 cursor_pos : int, optional
1942 Index of the cursor in the full line buffer. Should be provided by
1941 Index of the cursor in the full line buffer. Should be provided by
1943 remote frontends, where the kernel has no access to frontend state.
1942 remote frontends, where the kernel has no access to frontend state.
1944
1943
1945 Returns
1944 Returns
1946 -------
1945 -------
1947 text : str
1946 text : str
1948 Text that was actually used in the completion.
1947 Text that was actually used in the completion.
1949
1948
1950 matches : list
1949 matches : list
1951 A list of completion matches.
1950 A list of completion matches.
1952
1951
1953
1952
1954 .. note::
1953 .. note::
1955
1954
1956 This API is likely to be deprecated and replaced by
1955 This API is likely to be deprecated and replaced by
1957 :any:`IPCompleter.completions` in the future.
1956 :any:`IPCompleter.completions` in the future.
1958
1957
1959
1958
1960 """
1959 """
1961 warnings.warn('`Completer.complete` is pending deprecation since '
1960 warnings.warn('`Completer.complete` is pending deprecation since '
1962 'IPython 6.0 and will be replaced by `Completer.completions`.',
1961 'IPython 6.0 and will be replaced by `Completer.completions`.',
1963 PendingDeprecationWarning)
1962 PendingDeprecationWarning)
1964 # potential todo: fold the 3rd throw-away argument of _complete
1963 # potential todo: fold the 3rd throw-away argument of _complete
1965 # into the first two.
1964 # into the first two.
1966 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
1965 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
1967
1966
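A minimal sketch of this pending-deprecation API, mirroring how the tests below drive it through ``ip.complete``; the input string is only an example:

.. code::

    from IPython import get_ipython

    ip = get_ipython()
    # returns (text_that_was_completed, list_of_matches)
    text, matches = ip.complete("\\al")
    # text == "\\al"; matches should include "\\alpha" and "\\aleph"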
1968 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
1967 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
1969 full_text=None) -> Tuple[str, List[str], List[str], Iterable[_FakeJediCompletion]]:
1968 full_text=None) -> Tuple[str, List[str], List[str], Iterable[_FakeJediCompletion]]:
1970 """
1969 """
1971
1970
1972 Like complete but can also return raw Jedi completions, as well as the
1971 Like complete but can also return raw Jedi completions, as well as the
1973 origin of the completion text. This could (and should) be made much
1972 origin of the completion text. This could (and should) be made much
1974 cleaner, but that will be simpler once we drop the old (and stateful)
1973 cleaner, but that will be simpler once we drop the old (and stateful)
1975 :any:`complete` API.
1974 :any:`complete` API.
1976
1975
1977
1976
1978 With the current provisional API, cursor_pos acts (depending on the
1977 With the current provisional API, cursor_pos acts (depending on the
1979 caller) either as the offset in the ``text`` or ``line_buffer``, or as the
1978 caller) either as the offset in the ``text`` or ``line_buffer``, or as the
1980 ``column`` when passing multiline strings. This could/should be renamed,
1979 ``column`` when passing multiline strings. This could/should be renamed,
1981 but that would add extra noise.
1980 but that would add extra noise.
1982 """
1981 """
1983
1982
1984 # if the cursor position isn't given, the only sane assumption we can
1983 # if the cursor position isn't given, the only sane assumption we can
1985 # make is that it's at the end of the line (the common case)
1984 # make is that it's at the end of the line (the common case)
1986 if cursor_pos is None:
1985 if cursor_pos is None:
1987 cursor_pos = len(line_buffer) if text is None else len(text)
1986 cursor_pos = len(line_buffer) if text is None else len(text)
1988
1987
1989 if self.use_main_ns:
1988 if self.use_main_ns:
1990 self.namespace = __main__.__dict__
1989 self.namespace = __main__.__dict__
1991
1990
1992 # if text is either None or an empty string, rely on the line buffer
1991 # if text is either None or an empty string, rely on the line buffer
1993 if (not line_buffer) and full_text:
1992 if (not line_buffer) and full_text:
1994 line_buffer = full_text.split('\n')[cursor_line]
1993 line_buffer = full_text.split('\n')[cursor_line]
1995 if not text:
1994 if not text:
1996 text = self.splitter.split_line(line_buffer, cursor_pos)
1995 text = self.splitter.split_line(line_buffer, cursor_pos)
1997
1996
1998 if self.backslash_combining_completions:
1997 if self.backslash_combining_completions:
1999 # allow deactivation of these on Windows.
1998 # allow deactivation of these on Windows.
2000 base_text = text if not line_buffer else line_buffer[:cursor_pos]
1999 base_text = text if not line_buffer else line_buffer[:cursor_pos]
2001 latex_text, latex_matches = self.latex_matches(base_text)
2000 latex_text, latex_matches = self.latex_matches(base_text)
2002 if latex_matches:
2001 if latex_matches:
2003 return latex_text, latex_matches, ['latex_matches']*len(latex_matches), ()
2002 return latex_text, latex_matches, ['latex_matches']*len(latex_matches), ()
2004 name_text = ''
2003 name_text = ''
2005 name_matches = []
2004 name_matches = []
2006 # self.fwd_unicode_match() is now included in the tuple below
2005 # self.fwd_unicode_match() is now included in the tuple below
2007 for meth in (self.unicode_name_matches, back_latex_name_matches, back_unicode_name_matches, self.fwd_unicode_match):
2006 for meth in (self.unicode_name_matches, back_latex_name_matches, back_unicode_name_matches, self.fwd_unicode_match):
2008 name_text, name_matches = meth(base_text)
2007 name_text, name_matches = meth(base_text)
2009 if name_text:
2008 if name_text:
2010 return name_text, name_matches[:MATCHES_LIMIT], \
2009 return name_text, name_matches[:MATCHES_LIMIT], \
2011 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ()
2010 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ()
2012
2011
2013
2012
2014 # If no line buffer is given, assume the input text is all there was
2013 # If no line buffer is given, assume the input text is all there was
2015 if line_buffer is None:
2014 if line_buffer is None:
2016 line_buffer = text
2015 line_buffer = text
2017
2016
2018 self.line_buffer = line_buffer
2017 self.line_buffer = line_buffer
2019 self.text_until_cursor = self.line_buffer[:cursor_pos]
2018 self.text_until_cursor = self.line_buffer[:cursor_pos]
2020
2019
2021 # Do magic arg matches
2020 # Do magic arg matches
2022 for matcher in self.magic_arg_matchers:
2021 for matcher in self.magic_arg_matchers:
2023 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2022 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2024 if matches:
2023 if matches:
2025 origins = [matcher.__qualname__] * len(matches)
2024 origins = [matcher.__qualname__] * len(matches)
2026 return text, matches, origins, ()
2025 return text, matches, origins, ()
2027
2026
2028 # Start with a clean slate of completions
2027 # Start with a clean slate of completions
2029 matches = []
2028 matches = []
2030
2029
2031 # FIXME: we should extend our api to return a dict with completions for
2030 # FIXME: we should extend our api to return a dict with completions for
2032 # different types of objects. The rlcomplete() method could then
2031 # different types of objects. The rlcomplete() method could then
2033 # simply collapse the dict into a list for readline, but we'd have
2032 # simply collapse the dict into a list for readline, but we'd have
2034 # richer completion semantics in other environments.
2033 # richer completion semantics in other environments.
2035 completions = ()
2034 completions = ()
2036 if self.use_jedi:
2035 if self.use_jedi:
2037 if not full_text:
2036 if not full_text:
2038 full_text = line_buffer
2037 full_text = line_buffer
2039 completions = self._jedi_matches(
2038 completions = self._jedi_matches(
2040 cursor_pos, cursor_line, full_text)
2039 cursor_pos, cursor_line, full_text)
2041
2040
2042 if self.merge_completions:
2041 if self.merge_completions:
2043 matches = []
2042 matches = []
2044 for matcher in self.matchers:
2043 for matcher in self.matchers:
2045 try:
2044 try:
2046 matches.extend([(m, matcher.__qualname__)
2045 matches.extend([(m, matcher.__qualname__)
2047 for m in matcher(text)])
2046 for m in matcher(text)])
2048 except:
2047 except:
2049 # Show the ugly traceback if the matcher causes an
2048 # Show the ugly traceback if the matcher causes an
2050 # exception, but do NOT crash the kernel!
2049 # exception, but do NOT crash the kernel!
2051 sys.excepthook(*sys.exc_info())
2050 sys.excepthook(*sys.exc_info())
2052 else:
2051 else:
2053 for matcher in self.matchers:
2052 for matcher in self.matchers:
2054 matches = [(m, matcher.__qualname__)
2053 matches = [(m, matcher.__qualname__)
2055 for m in matcher(text)]
2054 for m in matcher(text)]
2056 if matches:
2055 if matches:
2057 break
2056 break
2058
2057
2059 seen = set()
2058 seen = set()
2060 filtered_matches = set()
2059 filtered_matches = set()
2061 for m in matches:
2060 for m in matches:
2062 t, c = m
2061 t, c = m
2063 if t not in seen:
2062 if t not in seen:
2064 filtered_matches.add(m)
2063 filtered_matches.add(m)
2065 seen.add(t)
2064 seen.add(t)
2066
2065
2067 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2066 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2068
2067
2069 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2068 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2070
2069
2071 _filtered_matches = custom_res or _filtered_matches
2070 _filtered_matches = custom_res or _filtered_matches
2072
2071
2073 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2072 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2074 _matches = [m[0] for m in _filtered_matches]
2073 _matches = [m[0] for m in _filtered_matches]
2075 origins = [m[1] for m in _filtered_matches]
2074 origins = [m[1] for m in _filtered_matches]
2076
2075
2077 self.matches = _matches
2076 self.matches = _matches
2078
2077
2079 return text, _matches, origins, completions
2078 return text, _matches, origins, completions
2080
2079
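For reference, a hedged sketch of how the private ``_complete`` return value is consumed, mirroring the quoted-filename tests below; index ``[1]`` is the list of textual matches:

.. code::

    from IPython import get_ipython

    ip = get_ipython()
    text = 'open("foo'
    matched_text, matches, origins, jedi_comps = ip.Completer._complete(
        cursor_line=0, cursor_pos=len(text), full_text=text)
    # origins names the matcher that produced each match; jedi_comps is an
    # iterable of raw jedi completions (empty when use_jedi is off).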
2081 def fwd_unicode_match(self, text:str) -> Tuple[str, list]:
2080 def fwd_unicode_match(self, text:str) -> Tuple[str, list]:
2082 if self._names is None:
2081 if self._names is None:
2083 self._names = []
2082 self._names = []
2084 for c in range(0,0x10FFFF + 1):
2083 for c in range(0,0x10FFFF + 1):
2085 try:
2084 try:
2086 self._names.append(unicodedata.name(chr(c)))
2085 self._names.append(unicodedata.name(chr(c)))
2087 except ValueError:
2086 except ValueError:
2088 pass
2087 pass
2089
2088
2090 slashpos = text.rfind('\\')
2089 slashpos = text.rfind('\\')
2090 # if text contains a backslash
2089 # if text contains a backslash
2092 if slashpos > -1:
2091 if slashpos > -1:
2093 s = text[slashpos+1:]
2092 s = text[slashpos+1:]
2094 candidates = [x for x in self._names if x.startswith(s)]
2093 candidates = [x for x in self._names if x.startswith(s)]
2095 if candidates:
2094 if candidates:
2096 return s, candidates
2095 return s, candidates
2097 else:
2096 else:
2098 return '', ()
2097 return '', ()
2099
2098
2100 # if text does not contain a backslash
2099 # if text does not contain a backslash
2101 else:
2100 else:
2102 return u'', ()
2101 return u'', ()
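A small sketch of what ``fwd_unicode_match`` returns; note that the first call builds the unicode-name cache over the whole codepoint range above, so it can be noticeably slow:

.. code::

    from IPython import get_ipython

    ip = get_ipython()
    frag, names = ip.Completer.fwd_unicode_match("print('\\ROMAN NUMERAL FI")
    # frag == "ROMAN NUMERAL FI"
    # names lists the matching unicode names, e.g. "ROMAN NUMERAL FIVE",
    # "ROMAN NUMERAL FIFTY", ...; when nothing matches it returns ('', ())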
@@ -1,1104 +1,1111 b''
1 # encoding: utf-8
1 # encoding: utf-8
2 """Tests for the IPython tab-completion machinery."""
2 """Tests for the IPython tab-completion machinery."""
3
3
4 # Copyright (c) IPython Development Team.
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
5 # Distributed under the terms of the Modified BSD License.
6
6
7 import os
7 import os
8 import sys
8 import sys
9 import textwrap
9 import textwrap
10 import unittest
10 import unittest
11
11
12 from contextlib import contextmanager
12 from contextlib import contextmanager
13
13
14 import nose.tools as nt
14 import nose.tools as nt
15
15
16 from traitlets.config.loader import Config
16 from traitlets.config.loader import Config
17 from IPython import get_ipython
17 from IPython import get_ipython
18 from IPython.core import completer
18 from IPython.core import completer
19 from IPython.external import decorators
19 from IPython.external import decorators
20 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
20 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
21 from IPython.utils.generics import complete_object
21 from IPython.utils.generics import complete_object
22 from IPython.testing import decorators as dec
22 from IPython.testing import decorators as dec
23
23
24 from IPython.core.completer import (
24 from IPython.core.completer import (
25 Completion,
25 Completion,
26 provisionalcompleter,
26 provisionalcompleter,
27 match_dict_keys,
27 match_dict_keys,
28 _deduplicate_completions,
28 _deduplicate_completions,
29 )
29 )
30 from nose.tools import assert_in, assert_not_in
30 from nose.tools import assert_in, assert_not_in
31
31
32 # -----------------------------------------------------------------------------
32 # -----------------------------------------------------------------------------
33 # Test functions
33 # Test functions
34 # -----------------------------------------------------------------------------
34 # -----------------------------------------------------------------------------
35
35
36
36
37 @contextmanager
37 @contextmanager
38 def greedy_completion():
38 def greedy_completion():
39 ip = get_ipython()
39 ip = get_ipython()
40 greedy_original = ip.Completer.greedy
40 greedy_original = ip.Completer.greedy
41 try:
41 try:
42 ip.Completer.greedy = True
42 ip.Completer.greedy = True
43 yield
43 yield
44 finally:
44 finally:
45 ip.Completer.greedy = greedy_original
45 ip.Completer.greedy = greedy_original
46
46
47
47
48 def test_protect_filename():
48 def test_protect_filename():
49 if sys.platform == "win32":
49 if sys.platform == "win32":
50 pairs = [
50 pairs = [
51 ("abc", "abc"),
51 ("abc", "abc"),
52 (" abc", '" abc"'),
52 (" abc", '" abc"'),
53 ("a bc", '"a bc"'),
53 ("a bc", '"a bc"'),
54 ("a bc", '"a bc"'),
54 ("a bc", '"a bc"'),
55 (" bc", '" bc"'),
55 (" bc", '" bc"'),
56 ]
56 ]
57 else:
57 else:
58 pairs = [
58 pairs = [
59 ("abc", "abc"),
59 ("abc", "abc"),
60 (" abc", r"\ abc"),
60 (" abc", r"\ abc"),
61 ("a bc", r"a\ bc"),
61 ("a bc", r"a\ bc"),
62 ("a bc", r"a\ \ bc"),
62 ("a bc", r"a\ \ bc"),
63 (" bc", r"\ \ bc"),
63 (" bc", r"\ \ bc"),
64 # On posix, we also protect parens and other special characters.
64 # On posix, we also protect parens and other special characters.
65 ("a(bc", r"a\(bc"),
65 ("a(bc", r"a\(bc"),
66 ("a)bc", r"a\)bc"),
66 ("a)bc", r"a\)bc"),
67 ("a( )bc", r"a\(\ \)bc"),
67 ("a( )bc", r"a\(\ \)bc"),
68 ("a[1]bc", r"a\[1\]bc"),
68 ("a[1]bc", r"a\[1\]bc"),
69 ("a{1}bc", r"a\{1\}bc"),
69 ("a{1}bc", r"a\{1\}bc"),
70 ("a#bc", r"a\#bc"),
70 ("a#bc", r"a\#bc"),
71 ("a?bc", r"a\?bc"),
71 ("a?bc", r"a\?bc"),
72 ("a=bc", r"a\=bc"),
72 ("a=bc", r"a\=bc"),
73 ("a\\bc", r"a\\bc"),
73 ("a\\bc", r"a\\bc"),
74 ("a|bc", r"a\|bc"),
74 ("a|bc", r"a\|bc"),
75 ("a;bc", r"a\;bc"),
75 ("a;bc", r"a\;bc"),
76 ("a:bc", r"a\:bc"),
76 ("a:bc", r"a\:bc"),
77 ("a'bc", r"a\'bc"),
77 ("a'bc", r"a\'bc"),
78 ("a*bc", r"a\*bc"),
78 ("a*bc", r"a\*bc"),
79 ('a"bc', r"a\"bc"),
79 ('a"bc', r"a\"bc"),
80 ("a^bc", r"a\^bc"),
80 ("a^bc", r"a\^bc"),
81 ("a&bc", r"a\&bc"),
81 ("a&bc", r"a\&bc"),
82 ]
82 ]
83 # run the actual tests
83 # run the actual tests
84 for s1, s2 in pairs:
84 for s1, s2 in pairs:
85 s1p = completer.protect_filename(s1)
85 s1p = completer.protect_filename(s1)
86 nt.assert_equal(s1p, s2)
86 nt.assert_equal(s1p, s2)
87
87
88
88
89 def check_line_split(splitter, test_specs):
89 def check_line_split(splitter, test_specs):
90 for part1, part2, split in test_specs:
90 for part1, part2, split in test_specs:
91 cursor_pos = len(part1)
91 cursor_pos = len(part1)
92 line = part1 + part2
92 line = part1 + part2
93 out = splitter.split_line(line, cursor_pos)
93 out = splitter.split_line(line, cursor_pos)
94 nt.assert_equal(out, split)
94 nt.assert_equal(out, split)
95
95
96
96
97 def test_line_split():
97 def test_line_split():
98 """Basic line splitter test with default specs."""
98 """Basic line splitter test with default specs."""
99 sp = completer.CompletionSplitter()
99 sp = completer.CompletionSplitter()
100 # The format of the test specs is: part1, part2, expected answer. Parts 1
100 # The format of the test specs is: part1, part2, expected answer. Parts 1
101 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
101 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
102 # was at the end of part1. So an empty part2 represents someone hitting
102 # was at the end of part1. So an empty part2 represents someone hitting
103 # tab at the end of the line, the most common case.
103 # tab at the end of the line, the most common case.
104 t = [
104 t = [
105 ("run some/scrip", "", "some/scrip"),
105 ("run some/scrip", "", "some/scrip"),
106 ("run scripts/er", "ror.py foo", "scripts/er"),
106 ("run scripts/er", "ror.py foo", "scripts/er"),
107 ("echo $HOM", "", "HOM"),
107 ("echo $HOM", "", "HOM"),
108 ("print sys.pa", "", "sys.pa"),
108 ("print sys.pa", "", "sys.pa"),
109 ("print(sys.pa", "", "sys.pa"),
109 ("print(sys.pa", "", "sys.pa"),
110 ("execfile('scripts/er", "", "scripts/er"),
110 ("execfile('scripts/er", "", "scripts/er"),
111 ("a[x.", "", "x."),
111 ("a[x.", "", "x."),
112 ("a[x.", "y", "x."),
112 ("a[x.", "y", "x."),
113 ('cd "some_file/', "", "some_file/"),
113 ('cd "some_file/', "", "some_file/"),
114 ]
114 ]
115 check_line_split(sp, t)
115 check_line_split(sp, t)
116 # Ensure splitting works OK with unicode by re-running the tests with
116 # Ensure splitting works OK with unicode by re-running the tests with
117 # all inputs turned into unicode
117 # all inputs turned into unicode
118 check_line_split(sp, [map(str, p) for p in t])
118 check_line_split(sp, [map(str, p) for p in t])
119
119
120
120
121 class NamedInstanceMetaclass(type):
121 class NamedInstanceMetaclass(type):
122 def __getitem__(cls, item):
122 def __getitem__(cls, item):
123 return cls.get_instance(item)
123 return cls.get_instance(item)
124
124
125
125
126 class NamedInstanceClass(metaclass=NamedInstanceMetaclass):
126 class NamedInstanceClass(metaclass=NamedInstanceMetaclass):
127 def __init__(self, name):
127 def __init__(self, name):
128 if not hasattr(self.__class__, "instances"):
128 if not hasattr(self.__class__, "instances"):
129 self.__class__.instances = {}
129 self.__class__.instances = {}
130 self.__class__.instances[name] = self
130 self.__class__.instances[name] = self
131
131
132 @classmethod
132 @classmethod
133 def _ipython_key_completions_(cls):
133 def _ipython_key_completions_(cls):
134 return cls.instances.keys()
134 return cls.instances.keys()
135
135
136 @classmethod
136 @classmethod
137 def get_instance(cls, name):
137 def get_instance(cls, name):
138 return cls.instances[name]
138 return cls.instances[name]
139
139
140
140
141 class KeyCompletable:
141 class KeyCompletable:
142 def __init__(self, things=()):
142 def __init__(self, things=()):
143 self.things = things
143 self.things = things
144
144
145 def _ipython_key_completions_(self):
145 def _ipython_key_completions_(self):
146 return list(self.things)
146 return list(self.things)
147
147
148
148
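The two helper classes above exist to exercise the ``_ipython_key_completions_`` hook; a hedged sketch of how such an object is typically driven through the completer (the variable names are illustrative):

.. code::

    ip = get_ipython()
    ip.user_ns["data"] = KeyCompletable(["qwerty", "qwick"])
    # completing inside the square brackets should offer the declared keys
    _, matches = ip.Completer.complete(line_buffer="data['qw")
    # matches is expected to include "qwerty" and "qwick"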
149 class TestCompleter(unittest.TestCase):
149 class TestCompleter(unittest.TestCase):
150 def setUp(self):
150 def setUp(self):
151 """
151 """
152 We want to silence all PendingDeprecationWarning when testing the completer
152 We want to silence all PendingDeprecationWarning when testing the completer
153 """
153 """
154 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
154 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
155 self._assertwarns.__enter__()
155 self._assertwarns.__enter__()
156
156
157 def tearDown(self):
157 def tearDown(self):
158 try:
158 try:
159 self._assertwarns.__exit__(None, None, None)
159 self._assertwarns.__exit__(None, None, None)
160 except AssertionError:
160 except AssertionError:
161 pass
161 pass
162
162
163 def test_custom_completion_error(self):
163 def test_custom_completion_error(self):
164 """Test that errors from custom attribute completers are silenced."""
164 """Test that errors from custom attribute completers are silenced."""
165 ip = get_ipython()
165 ip = get_ipython()
166
166
167 class A:
167 class A:
168 pass
168 pass
169
169
170 ip.user_ns["x"] = A()
170 ip.user_ns["x"] = A()
171
171
172 @complete_object.register(A)
172 @complete_object.register(A)
173 def complete_A(a, existing_completions):
173 def complete_A(a, existing_completions):
174 raise TypeError("this should be silenced")
174 raise TypeError("this should be silenced")
175
175
176 ip.complete("x.")
176 ip.complete("x.")
177
177
178 def test_custom_completion_ordering(self):
178 def test_custom_completion_ordering(self):
179 """Test that errors from custom attribute completers are silenced."""
179 """Test that errors from custom attribute completers are silenced."""
180 ip = get_ipython()
180 ip = get_ipython()
181
181
182 _, matches = ip.complete('in')
182 _, matches = ip.complete('in')
183 assert matches.index('input') < matches.index('int')
183 assert matches.index('input') < matches.index('int')
184
184
185 def complete_example(a):
185 def complete_example(a):
186 return ['example2', 'example1']
186 return ['example2', 'example1']
187
187
188 ip.Completer.custom_completers.add_re('ex*', complete_example)
188 ip.Completer.custom_completers.add_re('ex*', complete_example)
189 _, matches = ip.complete('ex')
189 _, matches = ip.complete('ex')
190 assert matches.index('example2') < matches.index('example1')
190 assert matches.index('example2') < matches.index('example1')
191
191
192 def test_unicode_completions(self):
192 def test_unicode_completions(self):
193 ip = get_ipython()
193 ip = get_ipython()
194 # Some strings that trigger different types of completion. Check them both
194 # Some strings that trigger different types of completion. Check them both
195 # in str and unicode forms
195 # in str and unicode forms
196 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
196 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
197 for t in s + list(map(str, s)):
197 for t in s + list(map(str, s)):
198 # We don't need to check exact completion values (they may change
198 # We don't need to check exact completion values (they may change
199 # depending on the state of the namespace), but at least no exceptions
199 # depending on the state of the namespace), but at least no exceptions
200 # should be thrown and the return value should be a pair of text, list
200 # should be thrown and the return value should be a pair of text, list
201 # values.
201 # values.
202 text, matches = ip.complete(t)
202 text, matches = ip.complete(t)
203 nt.assert_true(isinstance(text, str))
203 nt.assert_true(isinstance(text, str))
204 nt.assert_true(isinstance(matches, list))
204 nt.assert_true(isinstance(matches, list))
205
205
206 def test_latex_completions(self):
206 def test_latex_completions(self):
207 from IPython.core.latex_symbols import latex_symbols
207 from IPython.core.latex_symbols import latex_symbols
208 import random
208 import random
209
209
210 ip = get_ipython()
210 ip = get_ipython()
211 # Test some random unicode symbols
211 # Test some random unicode symbols
212 keys = random.sample(latex_symbols.keys(), 10)
212 keys = random.sample(latex_symbols.keys(), 10)
213 for k in keys:
213 for k in keys:
214 text, matches = ip.complete(k)
214 text, matches = ip.complete(k)
215 nt.assert_equal(len(matches), 1)
215 nt.assert_equal(len(matches), 1)
216 nt.assert_equal(text, k)
216 nt.assert_equal(text, k)
217 nt.assert_equal(matches[0], latex_symbols[k])
217 nt.assert_equal(matches[0], latex_symbols[k])
218 # Test a more complex line
218 # Test a more complex line
219 text, matches = ip.complete("print(\\alpha")
219 text, matches = ip.complete("print(\\alpha")
220 nt.assert_equal(text, "\\alpha")
220 nt.assert_equal(text, "\\alpha")
221 nt.assert_equal(matches[0], latex_symbols["\\alpha"])
221 nt.assert_equal(matches[0], latex_symbols["\\alpha"])
222 # Test multiple matching latex symbols
222 # Test multiple matching latex symbols
223 text, matches = ip.complete("\\al")
223 text, matches = ip.complete("\\al")
224 nt.assert_in("\\alpha", matches)
224 nt.assert_in("\\alpha", matches)
225 nt.assert_in("\\aleph", matches)
225 nt.assert_in("\\aleph", matches)
226
226
227 def test_latex_no_results(self):
228 """
229 Forward latex should return nothing in either field if nothing is found.
230 """
231 ip = get_ipython()
232 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
233 nt.assert_equal(text, "")
234 nt.assert_equal(matches, [])
235
227 def test_back_latex_completion(self):
236 def test_back_latex_completion(self):
228 ip = get_ipython()
237 ip = get_ipython()
229
238
230 # do not return more than 1 match for \beta, only the latex one.
239 # do not return more than 1 match for \beta, only the latex one.
231 name, matches = ip.complete("\\Ξ²")
240 name, matches = ip.complete("\\Ξ²")
232 nt.assert_equal(len(matches), 1)
241 nt.assert_equal(matches, ['\\beta'])
233 nt.assert_equal(matches[0], "\\beta")
234
242
235 def test_back_unicode_completion(self):
243 def test_back_unicode_completion(self):
236 ip = get_ipython()
244 ip = get_ipython()
237
245
238 name, matches = ip.complete("\\β…€")
246 name, matches = ip.complete("\\β…€")
239 nt.assert_equal(len(matches), 1)
247 nt.assert_equal(matches, ["\\ROMAN NUMERAL FIVE"])
240 nt.assert_equal(matches[0], "\\ROMAN NUMERAL FIVE")
241
248
242 def test_forward_unicode_completion(self):
249 def test_forward_unicode_completion(self):
243 ip = get_ipython()
250 ip = get_ipython()
244
251
245 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
252 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
246 nt.assert_equal(len(matches), 1)
253 nt.assert_equal(len(matches), 1)
247 nt.assert_equal(matches[0], "β…€")
254 nt.assert_equal(matches[0], "β…€")
248
255
249 @nt.nottest # now we have a completion for \jmath
256 @nt.nottest # now we have a completion for \jmath
250 @decorators.knownfailureif(
257 @decorators.knownfailureif(
251 sys.platform == "win32", "Fails if there is a C:\\j... path"
258 sys.platform == "win32", "Fails if there is a C:\\j... path"
252 )
259 )
253 def test_no_ascii_back_completion(self):
260 def test_no_ascii_back_completion(self):
254 ip = get_ipython()
261 ip = get_ipython()
255 with TemporaryWorkingDirectory(): # Avoid any filename completions
262 with TemporaryWorkingDirectory(): # Avoid any filename completions
256 # single ascii letters that don't yet have completions
263 # single ascii letters that don't yet have completions
257 for letter in "jJ":
264 for letter in "jJ":
258 name, matches = ip.complete("\\" + letter)
265 name, matches = ip.complete("\\" + letter)
259 nt.assert_equal(matches, [])
266 nt.assert_equal(matches, [])
260
267
261 class CompletionSplitterTestCase(unittest.TestCase):
268 class CompletionSplitterTestCase(unittest.TestCase):
262 def setUp(self):
269 def setUp(self):
263 self.sp = completer.CompletionSplitter()
270 self.sp = completer.CompletionSplitter()
264
271
265 def test_delim_setting(self):
272 def test_delim_setting(self):
266 self.sp.delims = " "
273 self.sp.delims = " "
267 nt.assert_equal(self.sp.delims, " ")
274 nt.assert_equal(self.sp.delims, " ")
268 nt.assert_equal(self.sp._delim_expr, r"[\ ]")
275 nt.assert_equal(self.sp._delim_expr, r"[\ ]")
269
276
270 def test_spaces(self):
277 def test_spaces(self):
271 """Test with only spaces as split chars."""
278 """Test with only spaces as split chars."""
272 self.sp.delims = " "
279 self.sp.delims = " "
273 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
280 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
274 check_line_split(self.sp, t)
281 check_line_split(self.sp, t)
275
282
276 def test_has_open_quotes1(self):
283 def test_has_open_quotes1(self):
277 for s in ["'", "'''", "'hi' '"]:
284 for s in ["'", "'''", "'hi' '"]:
278 nt.assert_equal(completer.has_open_quotes(s), "'")
285 nt.assert_equal(completer.has_open_quotes(s), "'")
279
286
280 def test_has_open_quotes2(self):
287 def test_has_open_quotes2(self):
281 for s in ['"', '"""', '"hi" "']:
288 for s in ['"', '"""', '"hi" "']:
282 nt.assert_equal(completer.has_open_quotes(s), '"')
289 nt.assert_equal(completer.has_open_quotes(s), '"')
283
290
284 def test_has_open_quotes3(self):
291 def test_has_open_quotes3(self):
285 for s in ["''", "''' '''", "'hi' 'ipython'"]:
292 for s in ["''", "''' '''", "'hi' 'ipython'"]:
286 nt.assert_false(completer.has_open_quotes(s))
293 nt.assert_false(completer.has_open_quotes(s))
287
294
288 def test_has_open_quotes4(self):
295 def test_has_open_quotes4(self):
289 for s in ['""', '""" """', '"hi" "ipython"']:
296 for s in ['""', '""" """', '"hi" "ipython"']:
290 nt.assert_false(completer.has_open_quotes(s))
297 nt.assert_false(completer.has_open_quotes(s))
291
298
292 @decorators.knownfailureif(
299 @decorators.knownfailureif(
293 sys.platform == "win32", "abspath completions fail on Windows"
300 sys.platform == "win32", "abspath completions fail on Windows"
294 )
301 )
295 def test_abspath_file_completions(self):
302 def test_abspath_file_completions(self):
296 ip = get_ipython()
303 ip = get_ipython()
297 with TemporaryDirectory() as tmpdir:
304 with TemporaryDirectory() as tmpdir:
298 prefix = os.path.join(tmpdir, "foo")
305 prefix = os.path.join(tmpdir, "foo")
299 suffixes = ["1", "2"]
306 suffixes = ["1", "2"]
300 names = [prefix + s for s in suffixes]
307 names = [prefix + s for s in suffixes]
301 for n in names:
308 for n in names:
302 open(n, "w").close()
309 open(n, "w").close()
303
310
304 # Check simple completion
311 # Check simple completion
305 c = ip.complete(prefix)[1]
312 c = ip.complete(prefix)[1]
306 nt.assert_equal(c, names)
313 nt.assert_equal(c, names)
307
314
308 # Now check with a function call
315 # Now check with a function call
309 cmd = 'a = f("%s' % prefix
316 cmd = 'a = f("%s' % prefix
310 c = ip.complete(prefix, cmd)[1]
317 c = ip.complete(prefix, cmd)[1]
311 comp = [prefix + s for s in suffixes]
318 comp = [prefix + s for s in suffixes]
312 nt.assert_equal(c, comp)
319 nt.assert_equal(c, comp)
313
320
314 def test_local_file_completions(self):
321 def test_local_file_completions(self):
315 ip = get_ipython()
322 ip = get_ipython()
316 with TemporaryWorkingDirectory():
323 with TemporaryWorkingDirectory():
317 prefix = "./foo"
324 prefix = "./foo"
318 suffixes = ["1", "2"]
325 suffixes = ["1", "2"]
319 names = [prefix + s for s in suffixes]
326 names = [prefix + s for s in suffixes]
320 for n in names:
327 for n in names:
321 open(n, "w").close()
328 open(n, "w").close()
322
329
323 # Check simple completion
330 # Check simple completion
324 c = ip.complete(prefix)[1]
331 c = ip.complete(prefix)[1]
325 nt.assert_equal(c, names)
332 nt.assert_equal(c, names)
326
333
327 # Now check with a function call
334 # Now check with a function call
328 cmd = 'a = f("%s' % prefix
335 cmd = 'a = f("%s' % prefix
329 c = ip.complete(prefix, cmd)[1]
336 c = ip.complete(prefix, cmd)[1]
330 comp = {prefix + s for s in suffixes}
337 comp = {prefix + s for s in suffixes}
331 nt.assert_true(comp.issubset(set(c)))
338 nt.assert_true(comp.issubset(set(c)))
332
339
333 def test_quoted_file_completions(self):
340 def test_quoted_file_completions(self):
334 ip = get_ipython()
341 ip = get_ipython()
335 with TemporaryWorkingDirectory():
342 with TemporaryWorkingDirectory():
336 name = "foo'bar"
343 name = "foo'bar"
337 open(name, "w").close()
344 open(name, "w").close()
338
345
339 # Don't escape Windows
346 # Don't escape Windows
340 escaped = name if sys.platform == "win32" else "foo\\'bar"
347 escaped = name if sys.platform == "win32" else "foo\\'bar"
341
348
342 # Single quote matches embedded single quote
349 # Single quote matches embedded single quote
343 text = "open('foo"
350 text = "open('foo"
344 c = ip.Completer._complete(
351 c = ip.Completer._complete(
345 cursor_line=0, cursor_pos=len(text), full_text=text
352 cursor_line=0, cursor_pos=len(text), full_text=text
346 )[1]
353 )[1]
347 nt.assert_equal(c, [escaped])
354 nt.assert_equal(c, [escaped])
348
355
349 # Double quote requires no escape
356 # Double quote requires no escape
350 text = 'open("foo'
357 text = 'open("foo'
351 c = ip.Completer._complete(
358 c = ip.Completer._complete(
352 cursor_line=0, cursor_pos=len(text), full_text=text
359 cursor_line=0, cursor_pos=len(text), full_text=text
353 )[1]
360 )[1]
354 nt.assert_equal(c, [name])
361 nt.assert_equal(c, [name])
355
362
356 # No quote requires an escape
363 # No quote requires an escape
357 text = "%ls foo"
364 text = "%ls foo"
358 c = ip.Completer._complete(
365 c = ip.Completer._complete(
359 cursor_line=0, cursor_pos=len(text), full_text=text
366 cursor_line=0, cursor_pos=len(text), full_text=text
360 )[1]
367 )[1]
361 nt.assert_equal(c, [escaped])
368 nt.assert_equal(c, [escaped])
362
369
363 def test_all_completions_dups(self):
370 def test_all_completions_dups(self):
364 """
371 """
365 Make sure the output of `IPCompleter.all_completions` does not have
372 Make sure the output of `IPCompleter.all_completions` does not have
366 duplicated prefixes.
373 duplicated prefixes.
367 """
374 """
368 ip = get_ipython()
375 ip = get_ipython()
369 c = ip.Completer
376 c = ip.Completer
370 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
377 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
371 for jedi_status in [True, False]:
378 for jedi_status in [True, False]:
372 with provisionalcompleter():
379 with provisionalcompleter():
373 ip.Completer.use_jedi = jedi_status
380 ip.Completer.use_jedi = jedi_status
374 matches = c.all_completions("TestCl")
381 matches = c.all_completions("TestCl")
375 assert matches == ['TestClass'], jedi_status
382 assert matches == ['TestClass'], jedi_status
376 matches = c.all_completions("TestClass.")
383 matches = c.all_completions("TestClass.")
377 assert len(matches) > 2, jedi_status
384 assert len(matches) > 2, jedi_status
378 matches = c.all_completions("TestClass.a")
385 matches = c.all_completions("TestClass.a")
379 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
386 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
380
387
381 def test_jedi(self):
388 def test_jedi(self):
382 """
389 """
383 A couple of issues we had with Jedi
390 A couple of issues we had with Jedi
384 """
391 """
385 ip = get_ipython()
392 ip = get_ipython()
386
393
387 def _test_complete(reason, s, comp, start=None, end=None):
394 def _test_complete(reason, s, comp, start=None, end=None):
388 l = len(s)
395 l = len(s)
389 start = start if start is not None else l
396 start = start if start is not None else l
390 end = end if end is not None else l
397 end = end if end is not None else l
391 with provisionalcompleter():
398 with provisionalcompleter():
392 ip.Completer.use_jedi = True
399 ip.Completer.use_jedi = True
393 completions = set(ip.Completer.completions(s, l))
400 completions = set(ip.Completer.completions(s, l))
394 ip.Completer.use_jedi = False
401 ip.Completer.use_jedi = False
395 assert_in(Completion(start, end, comp), completions, reason)
402 assert_in(Completion(start, end, comp), completions, reason)
396
403
397 def _test_not_complete(reason, s, comp):
404 def _test_not_complete(reason, s, comp):
398 l = len(s)
405 l = len(s)
399 with provisionalcompleter():
406 with provisionalcompleter():
400 ip.Completer.use_jedi = True
407 ip.Completer.use_jedi = True
401 completions = set(ip.Completer.completions(s, l))
408 completions = set(ip.Completer.completions(s, l))
402 ip.Completer.use_jedi = False
409 ip.Completer.use_jedi = False
403 assert_not_in(Completion(l, l, comp), completions, reason)
410 assert_not_in(Completion(l, l, comp), completions, reason)
404
411
405 import jedi
412 import jedi
406
413
407 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
414 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
408 if jedi_version > (0, 10):
415 if jedi_version > (0, 10):
409 yield _test_complete, "jedi >0.9 should complete and not crash", "a=1;a.", "real"
416 yield _test_complete, "jedi >0.9 should complete and not crash", "a=1;a.", "real"
410 yield _test_complete, "can infer first argument", 'a=(1,"foo");a[0].', "real"
417 yield _test_complete, "can infer first argument", 'a=(1,"foo");a[0].', "real"
411 yield _test_complete, "can infer second argument", 'a=(1,"foo");a[1].', "capitalize"
418 yield _test_complete, "can infer second argument", 'a=(1,"foo");a[1].', "capitalize"
412 yield _test_complete, "cover duplicate completions", "im", "import", 0, 2
419 yield _test_complete, "cover duplicate completions", "im", "import", 0, 2
413
420
414 yield _test_not_complete, "does not mix types", 'a=(1,"foo");a[0].', "capitalize"
421 yield _test_not_complete, "does not mix types", 'a=(1,"foo");a[0].', "capitalize"
415
422
416 def test_completion_have_signature(self):
423 def test_completion_have_signature(self):
417 """
424 """
418 Let's make sure Jedi is capable of pulling out the signature of the function we are completing.
425 Let's make sure Jedi is capable of pulling out the signature of the function we are completing.
419 """
426 """
420 ip = get_ipython()
427 ip = get_ipython()
421 with provisionalcompleter():
428 with provisionalcompleter():
422 ip.Completer.use_jedi = True
429 ip.Completer.use_jedi = True
423 completions = ip.Completer.completions("ope", 3)
430 completions = ip.Completer.completions("ope", 3)
424 c = next(completions) # should be `open`
431 c = next(completions) # should be `open`
425 ip.Completer.use_jedi = False
432 ip.Completer.use_jedi = False
426 assert "file" in c.signature, "Signature of function was not found by completer"
433 assert "file" in c.signature, "Signature of function was not found by completer"
427 assert (
434 assert (
428 "encoding" in c.signature
435 "encoding" in c.signature
429 ), "Signature of function was not found by completer"
436 ), "Signature of function was not found by completer"
430
437
431 def test_deduplicate_completions(self):
438 def test_deduplicate_completions(self):
432 """
439 """
433 Test that completions are correctly deduplicated (even if ranges are not the same)
440 Test that completions are correctly deduplicated (even if ranges are not the same)
434 """
441 """
435 ip = get_ipython()
442 ip = get_ipython()
436 ip.ex(
443 ip.ex(
437 textwrap.dedent(
444 textwrap.dedent(
438 """
445 """
439 class Z:
446 class Z:
440 zoo = 1
447 zoo = 1
441 """
448 """
442 )
449 )
443 )
450 )
444 with provisionalcompleter():
451 with provisionalcompleter():
445 ip.Completer.use_jedi = True
452 ip.Completer.use_jedi = True
446 l = list(
453 l = list(
447 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
454 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
448 )
455 )
449 ip.Completer.use_jedi = False
456 ip.Completer.use_jedi = False
450
457
451 assert len(l) == 1, "Completions (Z.z<tab>) should deduplicate to one entry: %s " % l
458 assert len(l) == 1, "Completions (Z.z<tab>) should deduplicate to one entry: %s " % l
452 assert l[0].text == "zoo" # and not `it.accumulate`
459 assert l[0].text == "zoo" # and not `it.accumulate`
453
460
454 def test_greedy_completions(self):
461 def test_greedy_completions(self):
455 """
462 """
456 Test the capability of the Greedy completer.
463 Test the capability of the Greedy completer.
457
464
458 Most of the tests here do not really show off the greedy completer; as proof,
465 Most of the tests here do not really show off the greedy completer; as proof,
459 each of the cases below now passes with Jedi. The greedy completer is capable of more.
466 each of the cases below now passes with Jedi. The greedy completer is capable of more.
460
467
461 See the :any:`test_dict_key_completion_contexts`
468 See the :any:`test_dict_key_completion_contexts`
462
469
463 """
470 """
464 ip = get_ipython()
471 ip = get_ipython()
465 ip.ex("a=list(range(5))")
472 ip.ex("a=list(range(5))")
466 _, c = ip.complete(".", line="a[0].")
473 _, c = ip.complete(".", line="a[0].")
467 nt.assert_false(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
474 nt.assert_false(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
468
475
469 def _(line, cursor_pos, expect, message, completion):
476 def _(line, cursor_pos, expect, message, completion):
470 with greedy_completion(), provisionalcompleter():
477 with greedy_completion(), provisionalcompleter():
471 ip.Completer.use_jedi = False
478 ip.Completer.use_jedi = False
472 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
479 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
473 nt.assert_in(expect, c, message % c)
480 nt.assert_in(expect, c, message % c)
474
481
475 ip.Completer.use_jedi = True
482 ip.Completer.use_jedi = True
476 with provisionalcompleter():
483 with provisionalcompleter():
477 completions = ip.Completer.completions(line, cursor_pos)
484 completions = ip.Completer.completions(line, cursor_pos)
478 nt.assert_in(completion, completions)
485 nt.assert_in(completion, completions)
479
486
480 with provisionalcompleter():
487 with provisionalcompleter():
481 yield _, "a[0].", 5, "a[0].real", "Should have completed on a[0].: %s", Completion(
488 yield _, "a[0].", 5, "a[0].real", "Should have completed on a[0].: %s", Completion(
482 5, 5, "real"
489 5, 5, "real"
483 )
490 )
484 yield _, "a[0].r", 6, "a[0].real", "Should have completed on a[0].r: %s", Completion(
491 yield _, "a[0].r", 6, "a[0].real", "Should have completed on a[0].r: %s", Completion(
485 5, 6, "real"
492 5, 6, "real"
486 )
493 )
487
494
488 yield _, "a[0].from_", 10, "a[0].from_bytes", "Should have completed on a[0].from_: %s", Completion(
495 yield _, "a[0].from_", 10, "a[0].from_bytes", "Should have completed on a[0].from_: %s", Completion(
489 5, 10, "from_bytes"
496 5, 10, "from_bytes"
490 )
497 )
491
498
492 def test_omit__names(self):
499 def test_omit__names(self):
493 # also happens to test IPCompleter as a configurable
500 # also happens to test IPCompleter as a configurable
494 ip = get_ipython()
501 ip = get_ipython()
495 ip._hidden_attr = 1
502 ip._hidden_attr = 1
496 ip._x = {}
503 ip._x = {}
497 c = ip.Completer
504 c = ip.Completer
498 ip.ex("ip=get_ipython()")
505 ip.ex("ip=get_ipython()")
499 cfg = Config()
506 cfg = Config()
500 cfg.IPCompleter.omit__names = 0
507 cfg.IPCompleter.omit__names = 0
501 c.update_config(cfg)
508 c.update_config(cfg)
502 with provisionalcompleter():
509 with provisionalcompleter():
503 c.use_jedi = False
510 c.use_jedi = False
504 s, matches = c.complete("ip.")
511 s, matches = c.complete("ip.")
505 nt.assert_in("ip.__str__", matches)
512 nt.assert_in("ip.__str__", matches)
506 nt.assert_in("ip._hidden_attr", matches)
513 nt.assert_in("ip._hidden_attr", matches)
507
514
508 # c.use_jedi = True
515 # c.use_jedi = True
509 # completions = set(c.completions('ip.', 3))
516 # completions = set(c.completions('ip.', 3))
510 # nt.assert_in(Completion(3, 3, '__str__'), completions)
517 # nt.assert_in(Completion(3, 3, '__str__'), completions)
511 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
518 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
512
519
513 cfg = Config()
520 cfg = Config()
514 cfg.IPCompleter.omit__names = 1
521 cfg.IPCompleter.omit__names = 1
515 c.update_config(cfg)
522 c.update_config(cfg)
516 with provisionalcompleter():
523 with provisionalcompleter():
517 c.use_jedi = False
524 c.use_jedi = False
518 s, matches = c.complete("ip.")
525 s, matches = c.complete("ip.")
519 nt.assert_not_in("ip.__str__", matches)
526 nt.assert_not_in("ip.__str__", matches)
520 # nt.assert_in('ip._hidden_attr', matches)
527 # nt.assert_in('ip._hidden_attr', matches)
521
528
522 # c.use_jedi = True
529 # c.use_jedi = True
523 # completions = set(c.completions('ip.', 3))
530 # completions = set(c.completions('ip.', 3))
524 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
531 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
525 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
532 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
526
533
527 cfg = Config()
534 cfg = Config()
528 cfg.IPCompleter.omit__names = 2
535 cfg.IPCompleter.omit__names = 2
529 c.update_config(cfg)
536 c.update_config(cfg)
530 with provisionalcompleter():
537 with provisionalcompleter():
531 c.use_jedi = False
538 c.use_jedi = False
532 s, matches = c.complete("ip.")
539 s, matches = c.complete("ip.")
533 nt.assert_not_in("ip.__str__", matches)
540 nt.assert_not_in("ip.__str__", matches)
534 nt.assert_not_in("ip._hidden_attr", matches)
541 nt.assert_not_in("ip._hidden_attr", matches)
535
542
536 # c.use_jedi = True
543 # c.use_jedi = True
537 # completions = set(c.completions('ip.', 3))
544 # completions = set(c.completions('ip.', 3))
538 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
545 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
539 # nt.assert_not_in(Completion(3,3, "_hidden_attr"), completions)
546 # nt.assert_not_in(Completion(3,3, "_hidden_attr"), completions)
540
547
541 with provisionalcompleter():
548 with provisionalcompleter():
542 c.use_jedi = False
549 c.use_jedi = False
543 s, matches = c.complete("ip._x.")
550 s, matches = c.complete("ip._x.")
544 nt.assert_in("ip._x.keys", matches)
551 nt.assert_in("ip._x.keys", matches)
545
552
546 # c.use_jedi = True
553 # c.use_jedi = True
547 # completions = set(c.completions('ip._x.', 6))
554 # completions = set(c.completions('ip._x.', 6))
548 # nt.assert_in(Completion(6,6, "keys"), completions)
555 # nt.assert_in(Completion(6,6, "keys"), completions)
549
556
550 del ip._hidden_attr
557 del ip._hidden_attr
551 del ip._x
558 del ip._x
552
559
553 def test_limit_to__all__False_ok(self):
560 def test_limit_to__all__False_ok(self):
554 """
561 """
555 Limit to __all__ is deprecated; once we remove it this test can go away.
562 Limit to __all__ is deprecated; once we remove it this test can go away.
556 """
563 """
557 ip = get_ipython()
564 ip = get_ipython()
558 c = ip.Completer
565 c = ip.Completer
559 c.use_jedi = False
566 c.use_jedi = False
560 ip.ex("class D: x=24")
567 ip.ex("class D: x=24")
561 ip.ex("d=D()")
568 ip.ex("d=D()")
562 cfg = Config()
569 cfg = Config()
563 cfg.IPCompleter.limit_to__all__ = False
570 cfg.IPCompleter.limit_to__all__ = False
564 c.update_config(cfg)
571 c.update_config(cfg)
565 s, matches = c.complete("d.")
572 s, matches = c.complete("d.")
566 nt.assert_in("d.x", matches)
573 nt.assert_in("d.x", matches)
567
574
    def test_get__all__entries_ok(self):
        class A:
            __all__ = ["x", 1]

        words = completer.get__all__entries(A())
        nt.assert_equal(words, ["x"])

    def test_get__all__entries_no__all__ok(self):
        class A:
            pass

        words = completer.get__all__entries(A())
        nt.assert_equal(words, [])

    def test_func_kw_completions(self):
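        # Keyword arguments of the function being called should be offered as
        # `name=` completions; this also covers builtins such as min(), whose
        # keywords are presumably recovered from the docstring (see the helper
        # tested in test_default_arguments_from_docstring below).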
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = False
        ip.ex("def myfunc(a=1,b=2): return a+b")
        s, matches = c.complete(None, "myfunc(1,b")
        nt.assert_in("b=", matches)
        # Simulate completing with cursor right after b (pos==10):
        s, matches = c.complete(None, "myfunc(1,b)", 10)
        nt.assert_in("b=", matches)
        s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
        nt.assert_in("b=", matches)
        # builtin function
        s, matches = c.complete(None, "min(k, k")
        nt.assert_in("key=", matches)

    def test_default_arguments_from_docstring(self):
        ip = get_ipython()
        c = ip.Completer
        kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
        nt.assert_equal(kwd, ["key"])
        # with cython type etc
        kwd = c._default_arguments_from_docstring(
            "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
        )
        nt.assert_equal(kwd, ["ncall", "resume", "nsplit"])
        # white spaces
        kwd = c._default_arguments_from_docstring(
            "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
        )
        nt.assert_equal(kwd, ["ncall", "resume", "nsplit"])

    def test_line_magics(self):
        ip = get_ipython()
        c = ip.Completer
        s, matches = c.complete(None, "lsmag")
        nt.assert_in("%lsmagic", matches)
        s, matches = c.complete(None, "%lsmag")
        nt.assert_in("%lsmagic", matches)

    def test_cell_magics(self):
        from IPython.core.magic import register_cell_magic

        @register_cell_magic
        def _foo_cellm(line, cell):
            pass

        ip = get_ipython()
        c = ip.Completer

        s, matches = c.complete(None, "_foo_ce")
        nt.assert_in("%%_foo_cellm", matches)
        s, matches = c.complete(None, "%%_foo_ce")
        nt.assert_in("%%_foo_cellm", matches)

    def test_line_cell_magics(self):
        from IPython.core.magic import register_line_cell_magic

        @register_line_cell_magic
        def _bar_cellm(line, cell):
            pass

        ip = get_ipython()
        c = ip.Completer

        # The policy here is trickier, see comments in completion code. The
        # returned values depend on whether the user passes %% or not explicitly,
        # and this will show a difference if the same name is both a line and cell
        # magic.
        s, matches = c.complete(None, "_bar_ce")
        nt.assert_in("%_bar_cellm", matches)
        nt.assert_in("%%_bar_cellm", matches)
        s, matches = c.complete(None, "%_bar_ce")
        nt.assert_in("%_bar_cellm", matches)
        nt.assert_in("%%_bar_cellm", matches)
        s, matches = c.complete(None, "%%_bar_ce")
        nt.assert_not_in("%_bar_cellm", matches)
        nt.assert_in("%%_bar_cellm", matches)

    def test_magic_completion_order(self):
        ip = get_ipython()
        c = ip.Completer

        # Test ordering of line and cell magics.
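        # The line magic is expected to be listed before the cell magic of the
        # same name.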
        text, matches = c.complete("timeit")
        nt.assert_equal(matches, ["%timeit", "%%timeit"])

    def test_magic_completion_shadowing(self):
        ip = get_ipython()
        c = ip.Completer
        c.use_jedi = False

        # Before importing matplotlib, %matplotlib magic should be the only option.
        text, matches = c.complete("mat")
        nt.assert_equal(matches, ["%matplotlib"])

        # The newly introduced name should shadow the magic.
        ip.run_cell("matplotlib = 1")
        text, matches = c.complete("mat")
        nt.assert_equal(matches, ["matplotlib"])

        # After removing matplotlib from namespace, the magic should again be
        # the only option.
        del ip.user_ns["matplotlib"]
        text, matches = c.complete("mat")
        nt.assert_equal(matches, ["%matplotlib"])

    def test_magic_completion_shadowing_explicit(self):
        """
        If the user tries to complete a shadowed magic, an explicit % prefix
        should still return the magic completions.
        """
        ip = get_ipython()
        c = ip.Completer

        # Before importing matplotlib, %matplotlib magic should be the only option.
        text, matches = c.complete("%mat")
        nt.assert_equal(matches, ["%matplotlib"])

        ip.run_cell("matplotlib = 1")

        # Even with matplotlib now in the namespace, the explicit %-prefixed
        # magic should still be the only option.
        text, matches = c.complete("%mat")
        nt.assert_equal(matches, ["%matplotlib"])

    def test_magic_config(self):
        ip = get_ipython()
        c = ip.Completer

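        # A bare prefix completes the magic name itself; after "%config " (or
        # "config ") the configurable class names complete, and after a dot the
        # class's trait names complete.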
        s, matches = c.complete(None, "conf")
        nt.assert_in("%config", matches)
        s, matches = c.complete(None, "conf")
        nt.assert_not_in("AliasManager", matches)
        s, matches = c.complete(None, "config ")
        nt.assert_in("AliasManager", matches)
        s, matches = c.complete(None, "%config ")
        nt.assert_in("AliasManager", matches)
        s, matches = c.complete(None, "config Ali")
        nt.assert_list_equal(["AliasManager"], matches)
        s, matches = c.complete(None, "%config Ali")
        nt.assert_list_equal(["AliasManager"], matches)
        s, matches = c.complete(None, "config AliasManager")
        nt.assert_list_equal(["AliasManager"], matches)
        s, matches = c.complete(None, "%config AliasManager")
        nt.assert_list_equal(["AliasManager"], matches)
        s, matches = c.complete(None, "config AliasManager.")
        nt.assert_in("AliasManager.default_aliases", matches)
        s, matches = c.complete(None, "%config AliasManager.")
        nt.assert_in("AliasManager.default_aliases", matches)
        s, matches = c.complete(None, "config AliasManager.de")
        nt.assert_list_equal(["AliasManager.default_aliases"], matches)
        s, matches = c.complete(None, "%config AliasManager.de")
        nt.assert_list_equal(["AliasManager.default_aliases"], matches)

    def test_magic_color(self):
        ip = get_ipython()
        c = ip.Completer

        s, matches = c.complete(None, "colo")
        nt.assert_in("%colors", matches)
        s, matches = c.complete(None, "colo")
        nt.assert_not_in("NoColor", matches)
        s, matches = c.complete(None, "%colors")  # No trailing space
        nt.assert_not_in("NoColor", matches)
        s, matches = c.complete(None, "colors ")
        nt.assert_in("NoColor", matches)
        s, matches = c.complete(None, "%colors ")
        nt.assert_in("NoColor", matches)
        s, matches = c.complete(None, "colors NoCo")
        nt.assert_list_equal(["NoColor"], matches)
        s, matches = c.complete(None, "%colors NoCo")
        nt.assert_list_equal(["NoColor"], matches)

    def test_match_dict_keys(self):
        """
        Check that match_dict_keys returns the expected matches on a couple of
        use cases, and does not crash.
        """
        delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"

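        # Each assertion below checks the triple returned by match_dict_keys:
        # (quote to use, start offset of the key token, list of matching keys).
        # These field names are descriptive, not the function's own terminology.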
        keys = ["foo", b"far"]
        assert match_dict_keys(keys, "b'", delims=delims) == ("'", 2, ["far"])
        assert match_dict_keys(keys, "b'f", delims=delims) == ("'", 2, ["far"])
        assert match_dict_keys(keys, 'b"', delims=delims) == ('"', 2, ["far"])
        assert match_dict_keys(keys, 'b"f', delims=delims) == ('"', 2, ["far"])

        assert match_dict_keys(keys, "'", delims=delims) == ("'", 1, ["foo"])
        assert match_dict_keys(keys, "'f", delims=delims) == ("'", 1, ["foo"])
        assert match_dict_keys(keys, '"', delims=delims) == ('"', 1, ["foo"])
        assert match_dict_keys(keys, '"f', delims=delims) == ('"', 1, ["foo"])
    def test_dict_key_completion_string(self):
        """Test dictionary key completion for string keys"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"abc": None}

        # check completion at different stages
        _, matches = complete(line_buffer="d[")
        nt.assert_in("'abc'", matches)
        nt.assert_not_in("'abc']", matches)

        _, matches = complete(line_buffer="d['")
        nt.assert_in("abc", matches)
        nt.assert_not_in("abc']", matches)

        _, matches = complete(line_buffer="d['a")
        nt.assert_in("abc", matches)
        nt.assert_not_in("abc']", matches)

        # check use of different quoting
        _, matches = complete(line_buffer='d["')
        nt.assert_in("abc", matches)
        nt.assert_not_in('abc"]', matches)

        _, matches = complete(line_buffer='d["a')
        nt.assert_in("abc", matches)
        nt.assert_not_in('abc"]', matches)

        # check sensitivity to following context
        _, matches = complete(line_buffer="d[]", cursor_pos=2)
        nt.assert_in("'abc'", matches)

        _, matches = complete(line_buffer="d['']", cursor_pos=3)
        nt.assert_in("abc", matches)
        nt.assert_not_in("abc'", matches)
        nt.assert_not_in("abc']", matches)

        # check multiple solutions are correctly returned and that noise is not
        ip.user_ns["d"] = {
            "abc": None,
            "abd": None,
            "bad": None,
            object(): None,
            5: None,
        }

        _, matches = complete(line_buffer="d['a")
        nt.assert_in("abc", matches)
        nt.assert_in("abd", matches)
        nt.assert_not_in("bad", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        # check escaping and whitespace
        ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
        _, matches = complete(line_buffer="d['a")
        nt.assert_in("a\\nb", matches)
        nt.assert_in("a\\'b", matches)
        nt.assert_in('a"b', matches)
        nt.assert_in("a word", matches)
        assert not any(m.endswith(("]", '"', "'")) for m in matches), matches

        # - can complete on non-initial word of the string
        _, matches = complete(line_buffer="d['a w")
        nt.assert_in("word", matches)

        # - understands quote escaping
        _, matches = complete(line_buffer="d['a\\'")
        nt.assert_in("b", matches)

        # - default quoting should work like repr
        _, matches = complete(line_buffer="d[")
        nt.assert_in('"a\'b"', matches)

        # - when opening quote with ", possible to match with unescaped apostrophe
        _, matches = complete(line_buffer="d[\"a'")
        nt.assert_in("b", matches)

        # need to not split at delims that readline won't split at
        if "-" not in ip.Completer.splitter.delims:
            ip.user_ns["d"] = {"before-after": None}
            _, matches = complete(line_buffer="d['before-af")
            nt.assert_in("before-after", matches)

    def test_dict_key_completion_contexts(self):
        """Test expression contexts in which dict key completion occurs"""
        ip = get_ipython()
        complete = ip.Completer.complete
        d = {"abc": None}
        ip.user_ns["d"] = d

        class C:
            data = d

        ip.user_ns["C"] = C
        ip.user_ns["get"] = lambda: d

        def assert_no_completion(**kwargs):
            _, matches = complete(**kwargs)
            nt.assert_not_in("abc", matches)
            nt.assert_not_in("abc'", matches)
            nt.assert_not_in("abc']", matches)
            nt.assert_not_in("'abc'", matches)
            nt.assert_not_in("'abc']", matches)

        def assert_completion(**kwargs):
            _, matches = complete(**kwargs)
            nt.assert_in("'abc'", matches)
            nt.assert_not_in("'abc']", matches)

        # no completion after string closed, even if reopened
        assert_no_completion(line_buffer="d['a'")
        assert_no_completion(line_buffer='d["a"')
        assert_no_completion(line_buffer="d['a' + ")
        assert_no_completion(line_buffer="d['a' + '")

        # completion in non-trivial expressions
        assert_completion(line_buffer="+ d[")
        assert_completion(line_buffer="(d[")
        assert_completion(line_buffer="C.data[")

        # greedy flag
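        # Under greedy completion the matches are expected to contain the full
        # rewritten expression (e.g. "get()['abc']"), hence the redefined helper.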
        def assert_completion(**kwargs):
            _, matches = complete(**kwargs)
            nt.assert_in("get()['abc']", matches)

        assert_no_completion(line_buffer="get()[")
        with greedy_completion():
            assert_completion(line_buffer="get()[")
            assert_completion(line_buffer="get()['")
            assert_completion(line_buffer="get()['a")
            assert_completion(line_buffer="get()['ab")
            assert_completion(line_buffer="get()['abc")

    def test_dict_key_completion_bytes(self):
        """Test handling of bytes in dict key completion"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"abc": None, b"abd": None}

        _, matches = complete(line_buffer="d[")
        nt.assert_in("'abc'", matches)
        nt.assert_in("b'abd'", matches)

        if False:  # not currently implemented
            _, matches = complete(line_buffer="d[b")
            nt.assert_in("b'abd'", matches)
            nt.assert_not_in("b'abc'", matches)

            _, matches = complete(line_buffer="d[b'")
            nt.assert_in("abd", matches)
            nt.assert_not_in("abc", matches)

            _, matches = complete(line_buffer="d[B'")
            nt.assert_in("abd", matches)
            nt.assert_not_in("abc", matches)

            _, matches = complete(line_buffer="d['")
            nt.assert_in("abc", matches)
            nt.assert_not_in("abd", matches)

    def test_dict_key_completion_unicode_py3(self):
        """Test handling of unicode in dict key completion"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["d"] = {"a\u05d0": None}

        # query using escape
        if sys.platform != "win32":
            # Known failure on Windows
            _, matches = complete(line_buffer="d['a\\u05d0")
            nt.assert_in("u05d0", matches)  # tokenized after \\

        # query using character
        _, matches = complete(line_buffer="d['a\u05d0")
        nt.assert_in("a\u05d0", matches)

        with greedy_completion():
            # query using escape
            _, matches = complete(line_buffer="d['a\\u05d0")
            nt.assert_in("d['a\\u05d0']", matches)  # tokenized after \\

            # query using character
            _, matches = complete(line_buffer="d['a\u05d0")
            nt.assert_in("d['a\u05d0']", matches)

    @dec.skip_without("numpy")
    def test_struct_array_key_completion(self):
        """Test dict key completion applies to numpy struct arrays"""
        import numpy

        ip = get_ipython()
        complete = ip.Completer.complete
        ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
        _, matches = complete(line_buffer="d['")
        nt.assert_in("hello", matches)
        nt.assert_in("world", matches)
        # complete on the numpy struct itself
        dt = numpy.dtype(
            [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
        )
        x = numpy.zeros(2, dtype=dt)
        ip.user_ns["d"] = x[1]
        _, matches = complete(line_buffer="d['")
        nt.assert_in("my_head", matches)
        nt.assert_in("my_data", matches)
        # complete on a nested level
        with greedy_completion():
            ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
            _, matches = complete(line_buffer="d[1]['my_head']['")
            nt.assert_true(any(["my_dt" in m for m in matches]))
            nt.assert_true(any(["my_df" in m for m in matches]))

    @dec.skip_without("pandas")
    def test_dataframe_key_completion(self):
        """Test dict key completion applies to pandas DataFrames"""
        import pandas

        ip = get_ipython()
        complete = ip.Completer.complete
        ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
        _, matches = complete(line_buffer="d['")
        nt.assert_in("hello", matches)
        nt.assert_in("world", matches)

    def test_dict_key_completion_invalids(self):
        """Smoke-test cases that dict key completion can't handle"""
        ip = get_ipython()
        complete = ip.Completer.complete

        ip.user_ns["no_getitem"] = None
        ip.user_ns["no_keys"] = []
        ip.user_ns["cant_call_keys"] = dict
        ip.user_ns["empty"] = {}
        ip.user_ns["d"] = {"abc": 5}

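        # None of these should raise; the test only checks that completion
        # survives objects it cannot introspect and an incomplete escape.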
        _, matches = complete(line_buffer="no_getitem['")
        _, matches = complete(line_buffer="no_keys['")
        _, matches = complete(line_buffer="cant_call_keys['")
        _, matches = complete(line_buffer="empty['")
        _, matches = complete(line_buffer="name_error['")
        _, matches = complete(line_buffer="d['\\")  # incomplete escape

    def test_object_key_completion(self):
        ip = get_ipython()
        ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])

        _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
        nt.assert_in("qwerty", matches)
        nt.assert_in("qwick", matches)

    def test_class_key_completion(self):
        ip = get_ipython()
        NamedInstanceClass("qwerty")
        NamedInstanceClass("qwick")
        ip.user_ns["named_instance_class"] = NamedInstanceClass

        _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
        nt.assert_in("qwerty", matches)
        nt.assert_in("qwick", matches)

    def test_tryimport(self):
        """
        Test that try_import does not crash on a trailing dot, and that it
        imports the module in front of the dot.
        """
        from IPython.core.completerlib import try_import

        assert try_import("IPython.")

    def test_aimport_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "%aimport i")
        nt.assert_in("io", matches)
        nt.assert_not_in("int", matches)

    def test_nested_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete(None, "import IPython.co", 17)
        nt.assert_in("IPython.core", matches)
        nt.assert_not_in("import IPython.core", matches)
        nt.assert_not_in("IPython.display", matches)

    def test_import_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("i", "import i")
        nt.assert_in("io", matches)
        nt.assert_not_in("int", matches)

    def test_from_module_completer(self):
        ip = get_ipython()
        _, matches = ip.complete("B", "from io import B", 16)
        nt.assert_in("BytesIO", matches)
        nt.assert_not_in("BaseException", matches)

    def test_snake_case_completion(self):
        ip = get_ipython()
        ip.Completer.use_jedi = False
        ip.user_ns["some_three"] = 3
        ip.user_ns["some_four"] = 4
        _, matches = ip.complete("s_", "print(s_f")
        nt.assert_in("some_three", matches)
        nt.assert_in("some_four", matches)

    def test_mix_terms(self):
        ip = get_ipython()
        from textwrap import dedent

        ip.Completer.use_jedi = False
        ip.ex(
            dedent(
                """
                class Test:
                    def meth(self, meth_arg1):
                        print("meth")

                    def meth_1(self, meth1_arg1, meth1_arg2):
                        print("meth1")

                    def meth_2(self, meth2_arg1, meth2_arg2):
                        print("meth2")
                test = Test()
                """
            )
        )
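        # Only the keyword arguments of the method actually being called
        # (`meth`) should be offered, not those of the similarly named
        # meth_1 / meth_2.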
        _, matches = ip.complete(None, "test.meth(")
        nt.assert_in("meth_arg1=", matches)
        nt.assert_not_in("meth2_arg1=", matches)