Merge pull request #12282 from Carreau/completer-typing
Matthias Bussonnier
r25717:4999ce9c merge
@@ -0,0 +1,4 b''
1 [mypy]
2 python_version = 3.6
3 ignore_missing_imports = True
4 follow_imports = silent
@@ -1,114 +1,116 b''
1 1 # http://travis-ci.org/#!/ipython/ipython
2 2 language: python
3 3 os: linux
4 4
5 5 addons:
6 6 apt:
7 7 packages:
8 8 - graphviz
9 9
10 10 python:
11 11 - 3.6
12 12
13 13 sudo: false
14 14
15 15 env:
16 16 global:
17 17 - PATH=$TRAVIS_BUILD_DIR/pandoc:$PATH
18 18
19 19 group: edge
20 20
21 21 before_install:
22 22 - |
23 23 # install Python on macOS
24 24 if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
25 25 env | sort
26 26 if ! which python$TRAVIS_PYTHON_VERSION; then
27 27 HOMEBREW_NO_AUTO_UPDATE=1 brew tap minrk/homebrew-python-frameworks
28 28 HOMEBREW_NO_AUTO_UPDATE=1 brew cask install python-framework-${TRAVIS_PYTHON_VERSION/./}
29 29 fi
30 30 python3 -m pip install virtualenv
31 31 python3 -m virtualenv -p $(which python$TRAVIS_PYTHON_VERSION) ~/travis-env
32 32 source ~/travis-env/bin/activate
33 33 fi
34 34 - python --version
35 35
36 36 install:
37 37 - pip install pip --upgrade
38 38 - pip install setuptools --upgrade
39 39 - pip install -e file://$PWD#egg=ipython[test] --upgrade
40 40 - pip install trio curio --upgrade --upgrade-strategy eager
41 41 - pip install pytest 'matplotlib !=3.2.0' mypy
42 42 - pip install codecov check-manifest --upgrade
43 - pip install mypy
43 44
44 45 script:
45 46 - check-manifest
46 47 - |
47 48 if [[ "$TRAVIS_PYTHON_VERSION" == "nightly" ]]; then
48 49 # on nightly, fake that parso knows the grammar
49 50 cp /home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/parso/python/grammar38.txt /home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/parso/python/grammar39.txt
50 51 fi
51 52 - cd /tmp && iptest --coverage xml && cd -
52 53 - pytest IPython
53 - mypy --ignore-missing-imports -m IPython.terminal.ptutils
54 - mypy IPython/terminal/ptutils.py
55 - mypy IPython/core/c*.py
54 56 # On the latest Python (on Linux) only, make sure that the docs build.
55 57 - |
56 58 if [[ "$TRAVIS_PYTHON_VERSION" == "3.7" ]] && [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
57 59 pip install -r docs/requirements.txt
58 60 python tools/fixup_whats_new_pr.py
59 61 make -C docs/ html SPHINXOPTS="-W"
60 62 fi
61 63
62 64 after_success:
63 65 - cp /tmp/ipy_coverage.xml ./
64 66 - cp /tmp/.coverage ./
65 67 - codecov
66 68
67 69 matrix:
68 70 include:
69 71 - arch: amd64
70 72 python: "3.7"
71 73 dist: xenial
72 74 sudo: true
73 75 - arch: amd64
74 76 python: "3.8"
75 77 dist: xenial
76 78 sudo: true
77 79 - arch: amd64
78 80 python: "nightly"
79 81 dist: xenial
80 82 sudo: true
81 83 - arch: arm64
82 84 python: "nightly"
83 85 dist: bionic
84 86 env: ARM64=True
85 87 sudo: true
86 88 - os: osx
87 89 language: generic
88 90 python: 3.6
89 91 env: TRAVIS_PYTHON_VERSION=3.6
90 92 - os: osx
91 93 language: generic
92 94 python: 3.7
93 95 env: TRAVIS_PYTHON_VERSION=3.7
94 96 allow_failures:
95 97 - python: nightly
96 98
97 99 before_deploy:
98 100 - rm -rf dist/
99 101 - python setup.py sdist
100 102 - python setup.py bdist_wheel
101 103
102 104 deploy:
103 105 provider: releases
104 106 api_key:
105 107 secure: Y/Ae9tYs5aoBU8bDjN2YrwGG6tCbezj/h3Lcmtx8HQavSbBgXnhnZVRb2snOKD7auqnqjfT/7QMm4ZyKvaOEgyggGktKqEKYHC8KOZ7yp8I5/UMDtk6j9TnXpSqqBxPiud4MDV76SfRYEQiaDoG4tGGvSfPJ9KcNjKrNvSyyxns=
106 108 file: dist/*
107 109 file_glob: true
108 110 skip_cleanup: true
109 111 on:
110 112 repo: ipython/ipython
111 113 all_branches: true # Backports are released from e.g. 5.x branch
112 114 tags: true
113 115 python: 3.6 # Any version should work, but we only need one
114 116 condition: $TRAVIS_OS_NAME = "linux"
@@ -1,2146 +1,2218 b''
1 1 """Completion for IPython.
2 2
3 3 This module started as fork of the rlcompleter module in the Python standard
4 4 library. The original enhancements made to rlcompleter have been sent
5 5 upstream and were accepted as of Python 2.3.
6 6
7 7 This module now supports a wide variety of completion mechanisms, both for
8 8 normal classic Python code and for IPython-specific syntax such as
9 9 magics.
10 10
11 11 Latex and Unicode completion
12 12 ============================
13 13
14 14 IPython and compatible frontends can not only complete your code, but can
15 15 also help you input a wide range of characters. In particular, we allow you
16 16 to insert a unicode character using the tab completion mechanism.
17 17
18 18 Forward latex/unicode completion
19 19 --------------------------------
20 20
21 21 Forward completion allows you to easily type a unicode character using its
22 22 latex name or unicode long description. To do so, type a backslash followed
23 23 by the relevant name and press tab:
24 24
25 25
26 26 Using latex completion:
27 27
28 28 .. code::
29 29
30 30 \\alpha<tab>
31 31 α
32 32
33 33 or using unicode completion:
34 34
35 35
36 36 .. code::
37 37
38 38 \\GREEK SMALL LETTER ALPHA<tab>
39 39 α
40 40
41 41
42 42 Only valid Python identifiers will complete. Combining characters (like arrows
43 43 or dots) are also available; unlike latex, they need to be put after their
44 44 counterpart, that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
45 45
46 46 Some browsers are known to display combining characters incorrectly.
47 47
48 48 Backward latex completion
49 49 -------------------------
50 50
51 51 It is sometimes challenging to know how to type a character. If you are using
52 52 IPython or any compatible frontend, you can prepend a backslash to the character
53 53 and press `<tab>` to expand it to its latex form.
54 54
55 55 .. code::
56 56
57 57 \\α<tab>
58 58 \\alpha
59 59
60 60
61 61 Both forward and backward completions can be deactivated by setting the
62 62 ``Completer.backslash_combining_completions`` option to ``False``.
63 63
64 64
65 65 Experimental
66 66 ============
67 67
68 68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 69 generate completions, both using static analysis of the code and by dynamically
70 70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 71 library for Python. The APIs attached to this new mechanism are unstable and will
72 72 raise unless used in a :any:`provisionalcompleter` context manager.
73 73
74 74 You will find that the following are experimental:
75 75
76 76 - :any:`provisionalcompleter`
77 77 - :any:`IPCompleter.completions`
78 78 - :any:`Completion`
79 79 - :any:`rectify_completions`
80 80
81 81 .. note::
82 82
83 83 better name for :any:`rectify_completions` ?
84 84
85 85 We welcome any feedback on these new APIs, and we also encourage you to try this
86 86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 87 to have extra logging information if :any:`jedi` is crashing, or if the current
88 88 IPython completer pending deprecations are returning results not yet handled
89 89 by :any:`jedi`.
90 90
91 91 Using Jedi for tab completion allows snippets like the following to work without
92 92 having to execute any code:
93 93
94 94 >>> myvar = ['hello', 42]
95 95 ... myvar[1].bi<tab>
96 96
97 97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 98 executing any code unlike the previously available ``IPCompleter.greedy``
99 99 option.
100 100
101 101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 102 current development version to get better completions.
103 103 """
104 104
105 105
106 106 # Copyright (c) IPython Development Team.
107 107 # Distributed under the terms of the Modified BSD License.
108 108 #
109 109 # Some of this code originated from rlcompleter in the Python standard library
110 110 # Copyright (C) 2001 Python Software Foundation, www.python.org
111 111
112 112
113 113 import builtins as builtin_mod
114 114 import glob
115 115 import inspect
116 116 import itertools
117 117 import keyword
118 118 import os
119 119 import re
120 120 import string
121 121 import sys
122 122 import time
123 123 import unicodedata
124 124 import uuid
125 125 import warnings
126 126 from contextlib import contextmanager
127 127 from importlib import import_module
128 128 from types import SimpleNamespace
129 from typing import Iterable, Iterator, List, Tuple
129 from typing import Iterable, Iterator, List, Tuple, Union, Any, Sequence, Dict, NamedTuple, Pattern, Optional
130 130
131 131 from IPython.core.error import TryNext
132 132 from IPython.core.inputtransformer2 import ESC_MAGIC
133 133 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
134 134 from IPython.core.oinspect import InspectColors
135 135 from IPython.utils import generics
136 136 from IPython.utils.dir2 import dir2, get_real_method
137 137 from IPython.utils.path import ensure_dir_exists
138 138 from IPython.utils.process import arg_split
139 139 from traitlets import Bool, Enum, Int, List as ListTrait, Unicode, default, observe
140 140 from traitlets.config.configurable import Configurable
141 141
142 142 import __main__
143 143
144 144 # skip module doctests
145 145 skip_doctest = True
146 146
147 147 try:
148 148 import jedi
149 149 jedi.settings.case_insensitive_completion = False
150 150 import jedi.api.helpers
151 151 import jedi.api.classes
152 152 JEDI_INSTALLED = True
153 153 except ImportError:
154 154 JEDI_INSTALLED = False
155 155 #-----------------------------------------------------------------------------
156 156 # Globals
157 157 #-----------------------------------------------------------------------------
158 158
159 # Ranges where we have most of the valid unicode names. We could be more
160 # fine-grained, but is it worth it for performance? While unicode has characters
161 # in the range 0-0x110000, only about 10% of those have names (131808 as I
162 # write this). The ranges below cover them all, with a density of ~67%; the
163 # biggest next gap we could consider only adds about 1% density and there are
164 # 600 gaps that would need hard-coding.
165 _UNICODE_RANGES = [(32, 0x2fa1e), (0xe0001, 0xe01f0)]
166
159 167 # Public API
160 168 __all__ = ['Completer','IPCompleter']
161 169
162 170 if sys.platform == 'win32':
163 171 PROTECTABLES = ' '
164 172 else:
165 173 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
166 174
167 175 # Protect against returning an enormous number of completions which the frontend
168 176 # may have trouble processing.
169 177 MATCHES_LIMIT = 500
170 178
171 179 _deprecation_readline_sentinel = object()
172 180
173 181
174 182 class ProvisionalCompleterWarning(FutureWarning):
175 183 """
176 184 Exception raised by an experimental feature in this module.
177 185
178 186 Wrap code in :any:`provisionalcompleter` context manager if you
179 187 are certain you want to use an unstable feature.
180 188 """
181 189 pass
182 190
183 191 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
184 192
185 193 @contextmanager
186 194 def provisionalcompleter(action='ignore'):
187 195 """
188 196
189 197
190 198 This context manager has to be used in any place where unstable completer
191 199 behavior and API may be called.
192 200
193 201 >>> with provisionalcompleter():
194 202 ... completer.do_experimental_things() # works
195 203
196 204 >>> completer.do_experimental_things() # raises.
197 205
198 206 .. note:: Unstable
199 207
200 208 By using this context manager you agree that the API in use may change
201 209 without warning, and that you won't complain if it does so.
202 210
203 211 You also understand that, if the API is not to your liking, you should report
204 212 a bug to explain your use case upstream.
205 213
206 214 We'll be happy to get your feedback, feature requests, and improvements on
207 215 any of the unstable APIs!
208 216 """
209 217 with warnings.catch_warnings():
210 218 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
211 219 yield
212 220
213 221
214 222 def has_open_quotes(s):
215 223 """Return whether a string has open quotes.
216 224
217 225 This simply counts whether the number of quote characters of either type in
218 226 the string is odd.
219 227
220 228 Returns
221 229 -------
222 230 If there is an open quote, the quote character is returned. Else, return
223 231 False.
224 232 """
225 233 # We check " first, then ', so complex cases with nested quotes will get
226 234 # the " to take precedence.
227 235 if s.count('"') % 2:
228 236 return '"'
229 237 elif s.count("'") % 2:
230 238 return "'"
231 239 else:
232 240 return False
233 241
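For reference, two quick checks of ``has_open_quotes`` that follow directly from the odd-count logic above:

    has_open_quotes("print('hello")   # -> "'"   one unclosed single quote
    has_open_quotes('say "hi" now')   # -> False  all quotes are balanced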
234 242
235 243 def protect_filename(s, protectables=PROTECTABLES):
236 244 """Escape a string to protect certain characters."""
237 245 if set(s) & set(protectables):
238 246 if sys.platform == "win32":
239 247 return '"' + s + '"'
240 248 else:
241 249 return "".join(("\\" + c if c in protectables else c) for c in s)
242 250 else:
243 251 return s
244 252
245 253
246 254 def expand_user(path:str) -> Tuple[str, bool, str]:
247 255 """Expand ``~``-style usernames in strings.
248 256
249 257 This is similar to :func:`os.path.expanduser`, but it computes and returns
250 258 extra information that will be useful if the input was being used in
251 259 computing completions, and you wish to return the completions with the
252 260 original '~' instead of its expanded value.
253 261
254 262 Parameters
255 263 ----------
256 264 path : str
257 265 String to be expanded. If no ~ is present, the output is the same as the
258 266 input.
259 267
260 268 Returns
261 269 -------
262 270 newpath : str
263 271 Result of ~ expansion in the input path.
264 272 tilde_expand : bool
265 273 Whether any expansion was performed or not.
266 274 tilde_val : str
267 275 The value that ~ was replaced with.
268 276 """
269 277 # Default values
270 278 tilde_expand = False
271 279 tilde_val = ''
272 280 newpath = path
273 281
274 282 if path.startswith('~'):
275 283 tilde_expand = True
276 284 rest = len(path)-1
277 285 newpath = os.path.expanduser(path)
278 286 if rest:
279 287 tilde_val = newpath[:-rest]
280 288 else:
281 289 tilde_val = newpath
282 290
283 291 return newpath, tilde_expand, tilde_val
284 292
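A small usage sketch of ``expand_user``; the home directory shown is an assumption:

    # assuming the current user's home directory is /home/alice
    expand_user('~/notebooks')     # -> ('/home/alice/notebooks', True, '/home/alice')
    expand_user('data/notebooks')  # -> ('data/notebooks', False, '')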
285 293
286 294 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
287 295 """Does the opposite of expand_user, with its outputs.
288 296 """
289 297 if tilde_expand:
290 298 return path.replace(tilde_val, '~')
291 299 else:
292 300 return path
293 301
294 302
295 303 def completions_sorting_key(word):
296 304 """key for sorting completions
297 305
298 306 This does several things:
299 307
300 308 - Demote any completions starting with underscores to the end
301 309 - Insert any %magic and %%cellmagic completions in the alphabetical order
302 310 by their name
303 311 """
304 312 prio1, prio2 = 0, 0
305 313
306 314 if word.startswith('__'):
307 315 prio1 = 2
308 316 elif word.startswith('_'):
309 317 prio1 = 1
310 318
311 319 if word.endswith('='):
312 320 prio1 = -1
313 321
314 322 if word.startswith('%%'):
315 323 # If there's another % in there, this is something else, so leave it alone
316 324 if not "%" in word[2:]:
317 325 word = word[2:]
318 326 prio2 = 2
319 327 elif word.startswith('%'):
320 328 if not "%" in word[1:]:
321 329 word = word[1:]
322 330 prio2 = 1
323 331
324 332 return prio1, word, prio2
325 333
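The effect of the key function can be seen by sorting a small mixed list: magics sort by their bare name, while underscore-prefixed names are demoted to the end.

    words = ['__dunder', '_private', '%%timeit', 'alpha']
    sorted(words, key=completions_sorting_key)
    # -> ['alpha', '%%timeit', '_private', '__dunder']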
326 334
327 335 class _FakeJediCompletion:
328 336 """
329 337 This is a workaround to communicate to the UI that Jedi has crashed and to
330 338 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
331 339
332 340 Added in IPython 6.0 so should likely be removed for 7.0
333 341
334 342 """
335 343
336 344 def __init__(self, name):
337 345
338 346 self.name = name
339 347 self.complete = name
340 348 self.type = 'crashed'
341 349 self.name_with_symbols = name
342 350 self.signature = ''
343 351 self._origin = 'fake'
344 352
345 353 def __repr__(self):
346 354 return '<Fake completion object jedi has crashed>'
347 355
348 356
349 357 class Completion:
350 358 """
351 359 Completion object used and returned by IPython completers.
352 360
353 361 .. warning:: Unstable
354 362
355 363 This function is unstable; the API may change without warning.
356 364 It will also raise unless used in the proper context manager.
357 365
358 366 This acts as a middle-ground :any:`Completion` object between the
359 367 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
360 368 object. While Jedi needs a lot of information about the evaluator and how the
361 369 code should be run/inspected, Prompt Toolkit (and other frontends) mostly
362 370 need user-facing information.
363 371
364 372 - Which range should be replaced by what.
365 373 - Some metadata (like completion type), or meta information to display to
366 374 the user.
367 375
368 376 For debugging purposes we can also store the origin of the completion (``jedi``,
369 377 ``IPython.python_matches``, ``IPython.magics_matches``...).
370 378 """
371 379
372 380 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
373 381
374 382 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
375 383 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
376 384 "It may change without warnings. "
377 385 "Use in corresponding context manager.",
378 386 category=ProvisionalCompleterWarning, stacklevel=2)
379 387
380 388 self.start = start
381 389 self.end = end
382 390 self.text = text
383 391 self.type = type
384 392 self.signature = signature
385 393 self._origin = _origin
386 394
387 395 def __repr__(self):
388 396 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
389 397 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
390 398
391 399 def __eq__(self, other)->Bool:
392 400 """
393 401 Equality and hash do not hash the type (as some completers may not be
394 402 able to infer the type), but are used to (partially) de-duplicate
395 403 completions.
396 404
397 405 Completely de-duplicating completions is a bit trickier than just
398 406 comparing, as it depends on the surrounding text, which Completions are not
399 407 aware of.
400 408 """
401 409 return self.start == other.start and \
402 410 self.end == other.end and \
403 411 self.text == other.text
404 412
405 413 def __hash__(self):
406 414 return hash((self.start, self.end, self.text))
407 415
408 416
409 417 _IC = Iterable[Completion]
410 418
411 419
412 420 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
413 421 """
414 422 Deduplicate a set of completions.
415 423
416 424 .. warning:: Unstable
417 425
418 426 This function is unstable, API may change without warning.
419 427
420 428 Parameters
421 429 ----------
422 430 text: str
423 431 text that should be completed.
424 432 completions: Iterator[Completion]
425 433 iterator over the completions to deduplicate
426 434
427 435 Yields
428 436 ------
429 437 `Completions` objects
430 438
431 439
432 440 Completions coming from multiple sources may be different but end up having
433 441 the same effect when applied to ``text``. If this is the case, this will
434 442 consider the completions equal and only emit the first one encountered.
435 443
436 444 Not folded into `completions()` yet, for debugging purposes and to detect when
437 445 the IPython completer returns things that Jedi does not, but it should be
438 446 at some point.
439 447 """
440 448 completions = list(completions)
441 449 if not completions:
442 450 return
443 451
444 452 new_start = min(c.start for c in completions)
445 453 new_end = max(c.end for c in completions)
446 454
447 455 seen = set()
448 456 for c in completions:
449 457 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
450 458 if new_text not in seen:
451 459 yield c
452 460 seen.add(new_text)
453 461
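A minimal sketch of the deduplication behaviour; ``_deduplicate_completions`` is a private helper, so this is illustration only, with made-up completion values.

    from IPython.core.completer import (Completion, provisionalcompleter,
                                        _deduplicate_completions)

    text = "d.fo"
    with provisionalcompleter():
        # both candidates turn ``text`` into 'd.foo', so only the first survives
        cs = [Completion(2, 4, "foo"), Completion(0, 4, "d.foo")]
        unique = list(_deduplicate_completions(text, cs))
    len(unique)   # -> 1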
454 462
455 463 def rectify_completions(text: str, completions: _IC, *, _debug=False)->_IC:
456 464 """
457 465 Rectify a set of completions to all have the same ``start`` and ``end``
458 466
459 467 .. warning:: Unstable
460 468
461 469 This function is unstable; the API may change without warning.
462 470 It will also raise unless used in the proper context manager.
463 471
464 472 Parameters
465 473 ----------
466 474 text: str
467 475 text that should be completed.
468 476 completions: Iterator[Completion]
469 477 iterator over the completions to rectify
470 478
471 479
472 480 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
473 481 the Jupyter Protocol requires them to behave that way. This will readjust
474 482 the completions to have the same ``start`` and ``end`` by padding both
475 483 extremities with surrounding text.
476 484
477 485 During stabilisation this should support a ``_debug`` option to log which
478 486 completions are returned by the IPython completer but not found in Jedi, in
479 487 order to make upstream bug reports.
480 488 """
481 489 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
482 490 "It may change without warnings. "
483 491 "Use in corresponding context manager.",
484 492 category=ProvisionalCompleterWarning, stacklevel=2)
485 493
486 494 completions = list(completions)
487 495 if not completions:
488 496 return
489 497 starts = (c.start for c in completions)
490 498 ends = (c.end for c in completions)
491 499
492 500 new_start = min(starts)
493 501 new_end = max(ends)
494 502
495 503 seen_jedi = set()
496 504 seen_python_matches = set()
497 505 for c in completions:
498 506 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
499 507 if c._origin == 'jedi':
500 508 seen_jedi.add(new_text)
501 509 elif c._origin == 'IPCompleter.python_matches':
502 510 seen_python_matches.add(new_text)
503 511 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
504 512 diff = seen_python_matches.difference(seen_jedi)
505 513 if diff and _debug:
506 514 print('IPython.python matches have extras:', diff)
507 515
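A hedged sketch of ``rectify_completions``: two completions replacing different ranges of the same text are padded so that they share one ``start``/``end`` pair. The surrounding ``provisionalcompleter`` block is required because the API is provisional; the example values are made up.

    from IPython.core.completer import (Completion, provisionalcompleter,
                                        rectify_completions)

    text = "d.foo"
    with provisionalcompleter():
        cs = [Completion(2, 5, "food"), Completion(0, 5, "d.foot")]
        rectified = list(rectify_completions(text, cs))
    [(c.start, c.end, c.text) for c in rectified]
    # -> [(0, 5, 'd.food'), (0, 5, 'd.foot')]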
508 516
509 517 if sys.platform == 'win32':
510 518 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
511 519 else:
512 520 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
513 521
514 522 GREEDY_DELIMS = ' =\r\n'
515 523
516 524
517 525 class CompletionSplitter(object):
518 526 """An object to split an input line in a manner similar to readline.
519 527
520 528 By having our own implementation, we can expose readline-like completion in
521 529 a uniform manner to all frontends. This object only needs to be given the
522 530 line of text to be split and the cursor position on said line, and it
523 531 returns the 'word' to be completed on at the cursor after splitting the
524 532 entire line.
525 533
526 534 What characters are used as splitting delimiters can be controlled by
527 535 setting the ``delims`` attribute (this is a property that internally
528 536 automatically builds the necessary regular expression)"""
529 537
530 538 # Private interface
531 539
532 540 # A string of delimiter characters. The default value makes sense for
533 541 # IPython's most typical usage patterns.
534 542 _delims = DELIMS
535 543
536 544 # The expression (a normal string) to be compiled into a regular expression
537 545 # for actual splitting. We store it as an attribute mostly for ease of
538 546 # debugging, since this type of code can be so tricky to debug.
539 547 _delim_expr = None
540 548
541 549 # The regular expression that does the actual splitting
542 550 _delim_re = None
543 551
544 552 def __init__(self, delims=None):
545 553 delims = CompletionSplitter._delims if delims is None else delims
546 554 self.delims = delims
547 555
548 556 @property
549 557 def delims(self):
550 558 """Return the string of delimiter characters."""
551 559 return self._delims
552 560
553 561 @delims.setter
554 562 def delims(self, delims):
555 563 """Set the delimiters for line splitting."""
556 564 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
557 565 self._delim_re = re.compile(expr)
558 566 self._delims = delims
559 567 self._delim_expr = expr
560 568
561 569 def split_line(self, line, cursor_pos=None):
562 570 """Split a line of text with a cursor at the given position.
563 571 """
564 572 l = line if cursor_pos is None else line[:cursor_pos]
565 573 return self._delim_re.split(l)[-1]
566 574
567 575
568 576
569 577 class Completer(Configurable):
570 578
571 579 greedy = Bool(False,
572 580 help="""Activate greedy completion
573 581 PENDING DEPRECATION. This is now mostly taken care of with Jedi.
574 582
575 583 This will enable completion on elements of lists, results of function calls, etc.,
576 584 but can be unsafe because the code is actually evaluated on TAB.
577 585 """
578 586 ).tag(config=True)
579 587
580 588 use_jedi = Bool(default_value=JEDI_INSTALLED,
581 589 help="Experimental: Use Jedi to generate autocompletions. "
582 590 "Default to True if jedi is installed.").tag(config=True)
583 591
584 592 jedi_compute_type_timeout = Int(default_value=400,
585 593 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
586 594 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
587 595 performance by preventing jedi from building its cache.
588 596 """).tag(config=True)
589 597
590 598 debug = Bool(default_value=False,
591 599 help='Enable debug for the Completer. Mostly print extra '
592 600 'information for experimental jedi integration.')\
593 601 .tag(config=True)
594 602
595 603 backslash_combining_completions = Bool(True,
596 604 help="Enable unicode completions, e.g. \\alpha<tab> . "
597 605 "Includes completion of latex commands, unicode names, and expanding "
598 606 "unicode characters back to latex commands.").tag(config=True)
599 607
600 608
601 609
602 610 def __init__(self, namespace=None, global_namespace=None, **kwargs):
603 611 """Create a new completer for the command line.
604 612
605 613 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
606 614
607 615 If unspecified, the default namespace where completions are performed
608 616 is __main__ (technically, __main__.__dict__). Namespaces should be
609 617 given as dictionaries.
610 618
611 619 An optional second namespace can be given. This allows the completer
612 620 to handle cases where both the local and global scopes need to be
613 621 distinguished.
614 622 """
615 623
616 624 # Don't bind to namespace quite yet, but flag whether the user wants a
617 625 # specific namespace or to use __main__.__dict__. This will allow us
618 626 # to bind to __main__.__dict__ at completion time, not now.
619 627 if namespace is None:
620 628 self.use_main_ns = True
621 629 else:
622 630 self.use_main_ns = False
623 631 self.namespace = namespace
624 632
625 633 # The global namespace, if given, can be bound directly
626 634 if global_namespace is None:
627 635 self.global_namespace = {}
628 636 else:
629 637 self.global_namespace = global_namespace
630 638
631 639 self.custom_matchers = []
632 640
633 641 super(Completer, self).__init__(**kwargs)
634 642
635 643 def complete(self, text, state):
636 644 """Return the next possible completion for 'text'.
637 645
638 646 This is called successively with state == 0, 1, 2, ... until it
639 647 returns None. The completion should begin with 'text'.
640 648
641 649 """
642 650 if self.use_main_ns:
643 651 self.namespace = __main__.__dict__
644 652
645 653 if state == 0:
646 654 if "." in text:
647 655 self.matches = self.attr_matches(text)
648 656 else:
649 657 self.matches = self.global_matches(text)
650 658 try:
651 659 return self.matches[state]
652 660 except IndexError:
653 661 return None
654 662
655 663 def global_matches(self, text):
656 664 """Compute matches when text is a simple name.
657 665
658 666 Return a list of all keywords, built-in functions and names currently
659 667 defined in self.namespace or self.global_namespace that match.
660 668
661 669 """
662 670 matches = []
663 671 match_append = matches.append
664 672 n = len(text)
665 673 for lst in [keyword.kwlist,
666 674 builtin_mod.__dict__.keys(),
667 675 self.namespace.keys(),
668 676 self.global_namespace.keys()]:
669 677 for word in lst:
670 678 if word[:n] == text and word != "__builtins__":
671 679 match_append(word)
672 680
673 681 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
674 682 for lst in [self.namespace.keys(),
675 683 self.global_namespace.keys()]:
676 684 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
677 685 for word in lst if snake_case_re.match(word)}
678 686 for word in shortened.keys():
679 687 if word[:n] == text and word != "__builtins__":
680 688 match_append(shortened[word])
681 689 return matches
682 690
683 691 def attr_matches(self, text):
684 692 """Compute matches when text contains a dot.
685 693
686 694 Assuming the text is of the form NAME.NAME....[NAME], and is
687 695 evaluatable in self.namespace or self.global_namespace, it will be
688 696 evaluated and its attributes (as revealed by dir()) are used as
689 697 possible completions. (For class instances, class members are
690 698 also considered.)
691 699
692 700 WARNING: this can still invoke arbitrary C code, if an object
693 701 with a __getattr__ hook is evaluated.
694 702
695 703 """
696 704
697 705 # Another option, seems to work great. Catches things like ''.<tab>
698 706 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
699 707
700 708 if m:
701 709 expr, attr = m.group(1, 3)
702 710 elif self.greedy:
703 711 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
704 712 if not m2:
705 713 return []
706 714 expr, attr = m2.group(1,2)
707 715 else:
708 716 return []
709 717
710 718 try:
711 719 obj = eval(expr, self.namespace)
712 720 except:
713 721 try:
714 722 obj = eval(expr, self.global_namespace)
715 723 except:
716 724 return []
717 725
718 726 if self.limit_to__all__ and hasattr(obj, '__all__'):
719 727 words = get__all__entries(obj)
720 728 else:
721 729 words = dir2(obj)
722 730
723 731 try:
724 732 words = generics.complete_object(obj, words)
725 733 except TryNext:
726 734 pass
727 735 except AssertionError:
728 736 raise
729 737 except Exception:
730 738 # Silence errors from completion function
731 739 #raise # dbg
732 740 pass
733 741 # Build match list to return
734 742 n = len(attr)
735 743 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
736 744
737 745
738 746 def get__all__entries(obj):
739 747 """returns the strings in the __all__ attribute"""
740 748 try:
741 749 words = getattr(obj, '__all__')
742 750 except:
743 751 return []
744 752
745 753 return [w for w in words if isinstance(w, str)]
746 754
747 755
748 def match_dict_keys(keys: List[str], prefix: str, delims: str):
756 def match_dict_keys(keys: List[Union[str, bytes]], prefix: str, delims: str) -> Tuple[str, int, List[str]]:
749 757 """Used by dict_key_matches, matching the prefix to a list of keys
750 758
751 759 Parameters
752 760 ==========
753 761 keys:
754 762 list of keys in dictionary currently being completed.
755 763 prefix:
756 764 Part of the text already typed by the user. e.g. `mydict[b'fo`
757 765 delims:
758 766 String of delimiters to consider when finding the current key.
759 767
760 768 Returns
761 769 =======
762 770
763 771 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
764 772 ``quote`` being the quote that needs to be used to close the current string,
765 773 ``token_start`` the position where the replacement should start occurring, and
766 774 ``matched`` a list of replacement/completion candidates.
767 775
768 776 """
777 keys = [k for k in keys if isinstance(k, (str, bytes))]
769 778 if not prefix:
770 return None, 0, [repr(k) for k in keys
779 return '', 0, [repr(k) for k in keys
771 780 if isinstance(k, (str, bytes))]
772 781 quote_match = re.search('["\']', prefix)
782 assert quote_match is not None # silence mypy
773 783 quote = quote_match.group()
774 784 try:
775 785 prefix_str = eval(prefix + quote, {})
776 786 except Exception:
777 return None, 0, []
787 return '', 0, []
778 788
779 789 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
780 790 token_match = re.search(pattern, prefix, re.UNICODE)
791 assert token_match is not None # silence mypy
781 792 token_start = token_match.start()
782 793 token_prefix = token_match.group()
783 794
784 matched = []
795 matched:List[str] = []
785 796 for key in keys:
786 797 try:
787 798 if not key.startswith(prefix_str):
788 799 continue
789 800 except (AttributeError, TypeError, UnicodeError):
790 801 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
791 802 continue
792 803
793 804 # reformat remainder of key to begin with prefix
794 805 rem = key[len(prefix_str):]
795 806 # force repr wrapped in '
796 807 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
797 if rem_repr.startswith('u') and prefix[0] not in 'uU':
798 # Found key is unicode, but prefix is Py2 string.
799 # Therefore attempt to interpret key as string.
800 try:
801 rem_repr = repr(rem.encode('ascii') + '"')
802 except UnicodeEncodeError:
803 continue
804
805 808 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
806 809 if quote == '"':
807 810 # The entered prefix is quoted with ",
808 811 # but the match is quoted with '.
809 812 # A contained " hence needs escaping for comparison:
810 813 rem_repr = rem_repr.replace('"', '\\"')
811 814
812 815 # then reinsert prefix from start of token
813 816 matched.append('%s%s' % (token_prefix, rem_repr))
814 817 return quote, token_start, matched
815 818
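An illustrative call to the newly typed ``match_dict_keys``; the dictionary keys are made up, and the prefix corresponds to the user having typed ``mydict['ab``.

    from IPython.core.completer import match_dict_keys, DELIMS

    match_dict_keys(['abc', 'abd', b'xyz'], "'ab", DELIMS)
    # -> ("'", 1, ['abc', 'abd'])   bytes keys that don't match str prefixes are skipped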
816 819
817 820 def cursor_to_position(text:str, line:int, column:int)->int:
818 821 """
819 822
820 823 Convert the (line,column) position of the cursor in text to an offset in a
821 824 string.
822 825
823 826 Parameters
824 827 ----------
825 828
826 829 text : str
827 830 The text in which to calculate the cursor offset
828 831 line : int
829 832 Line of the cursor; 0-indexed
830 833 column : int
831 834 Column of the cursor 0-indexed
832 835
833 836 Return
834 837 ------
835 838 Position of the cursor in ``text``, 0-indexed.
836 839
837 840 See Also
838 841 --------
839 842 position_to_cursor: reciprocal of this function
840 843
841 844 """
842 845 lines = text.split('\n')
843 846 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
844 847
845 848 return sum(len(l) + 1 for l in lines[:line]) + column
846 849
847 850 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
848 851 """
849 852 Convert the position of the cursor in text (0 indexed) to a line
850 853 number(0-indexed) and a column number (0-indexed) pair
851 854
852 855 Position should be a valid position in ``text``.
853 856
854 857 Parameters
855 858 ----------
856 859
857 860 text : str
858 861 The text in which to calculate the cursor offset
859 862 offset : int
860 863 Position of the cursor in ``text``, 0-indexed.
861 864
862 865 Return
863 866 ------
864 867 (line, column) : (int, int)
865 868 Line of the cursor; 0-indexed, column of the cursor 0-indexed
866 869
867 870
868 871 See Also
869 872 --------
870 873 cursor_to_position : reciprocal of this function
871 874
872 875
873 876 """
874 877
875 878 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
876 879
877 880 before = text[:offset]
878 881 blines = before.split('\n') # ! splitlines trims trailing \n
879 882 line = before.count('\n')
880 883 col = len(blines[-1])
881 884 return line, col
882 885
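``cursor_to_position`` and ``position_to_cursor`` are reciprocal, as the following quick check shows:

    text = "ab\ncd"
    cursor_to_position(text, 1, 1)   # -> 4, the offset of 'd' on the second line
    position_to_cursor(text, 4)      # -> (1, 1)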
883 886
884 887 def _safe_isinstance(obj, module, class_name):
885 888 """Checks if obj is an instance of module.class_name if loaded
886 889 """
887 890 return (module in sys.modules and
888 891 isinstance(obj, getattr(import_module(module), class_name)))
889 892
890
891 def back_unicode_name_matches(text):
892 u"""Match unicode characters back to unicode name
893 def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
894 """Match Unicode characters back to Unicode name
893 895
894 896 This does ``☃`` -> ``\\snowman``
895 897
896 898 Note that snowman is not a valid python3 combining character but will be expanded.
897 899 Though it will not recombine back to the snowman character by the completion machinery.
898 900
898 900 This will also not back-complete standard sequences like \\n, \\b ...
900 902
901 Used on Python 3 only.
903 Returns
904 =======
905
906 Return a tuple with two elements:
907
908 - The Unicode character that was matched (preceded by a backslash), or an
909 empty string,
910 - a sequence (of length 1) with the name of the matched Unicode character,
911 preceded by a backslash, or empty if no match.
912
902 913 """
903 914 if len(text)<2:
904 return u'', ()
915 return '', ()
905 916 maybe_slash = text[-2]
906 917 if maybe_slash != '\\':
907 return u'', ()
918 return '', ()
908 919
909 920 char = text[-1]
910 921 # no expand on quote for completion in strings.
911 922 # nor backcomplete standard ascii keys
912 if char in string.ascii_letters or char in ['"',"'"]:
913 return u'', ()
923 if char in string.ascii_letters or char in ('"',"'"):
924 return '', ()
914 925 try :
915 926 unic = unicodedata.name(char)
916 return '\\'+char,['\\'+unic]
927 return '\\'+char,('\\'+unic,)
917 928 except KeyError:
918 929 pass
919 return u'', ()
930 return '', ()
920 931
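Example of the backward Unicode completion helper, following the snowman case from the docstring above:

    back_unicode_name_matches('\\☃')   # -> ('\\☃', ('\\SNOWMAN',))
    back_unicode_name_matches('abc')   # -> ('', ())  no backslash before the last char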
921 def back_latex_name_matches(text:str):
932 def back_latex_name_matches(text:str) -> Tuple[str, Sequence[str]] :
922 933 """Match latex characters back to unicode name
923 934
924 935 This does ``\\ℵ`` -> ``\\aleph``
925 936
926 Used on Python 3 only.
927 937 """
928 938 if len(text)<2:
929 return u'', ()
939 return '', ()
930 940 maybe_slash = text[-2]
931 941 if maybe_slash != '\\':
932 return u'', ()
942 return '', ()
933 943
934 944
935 945 char = text[-1]
936 946 # no expand on quote for completion in strings.
937 947 # nor backcomplete standard ascii keys
938 if char in string.ascii_letters or char in ['"',"'"]:
939 return u'', ()
948 if char in string.ascii_letters or char in ('"',"'"):
949 return '', ()
940 950 try :
941 951 latex = reverse_latex_symbol[char]
942 952 # '\\' replace the \ as well
943 953 return '\\'+char,[latex]
944 954 except KeyError:
945 955 pass
946 return u'', ()
956 return '', ()
947 957
948 958
949 959 def _formatparamchildren(parameter) -> str:
950 960 """
951 961 Get parameter name and value from Jedi Private API
952 962
953 963 Jedi does not expose a simple way to get `param=value` from its API.
954 964
955 965 Parameter
956 966 =========
957 967
958 968 parameter:
959 969 Jedi's function `Param`
960 970
961 971 Returns
962 972 =======
963 973
964 974 A string like 'a', 'b=1', '*args', '**kwargs'
965 975
966 976
967 977 """
968 978 description = parameter.description
969 979 if not description.startswith('param '):
970 980 raise ValueError('Jedi function parameter description has changed format. '
971 981 'Expected "param ...", found %r.' % description)
972 982 return description[6:]
973 983
974 984 def _make_signature(completion)-> str:
975 985 """
976 986 Make the signature from a jedi completion
977 987
978 988 Parameter
979 989 =========
980 990
981 991 completion: jedi.Completion
982 992 object does not complete a function type
983 993
984 994 Returns
985 995 =======
986 996
987 997 a string consisting of the function signature, with the parentheses but
988 998 without the function name. For example:
989 999 `(a, *args, b=1, **kwargs)`
990 1000
991 1001 """
992 1002
993 1003 # it looks like this might work on jedi 0.17
994 1004 if hasattr(completion, 'get_signatures'):
995 1005 signatures = completion.get_signatures()
996 1006 if not signatures:
997 1007 return '(?)'
998 1008
999 1009 c0 = completion.get_signatures()[0]
1000 1010 return '('+c0.to_string().split('(', maxsplit=1)[1]
1001 1011
1002 1012 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1003 1013 for p in signature.defined_names()) if f])
1004 1014
1015
1016 class _CompleteResult(NamedTuple):
1017 matched_text : str
1018 matches: Sequence[str]
1019 matches_origin: Sequence[str]
1020 jedi_matches: Any
1021
1022
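The new ``_CompleteResult`` named tuple gives completer results named fields instead of bare tuple positions, which is what makes the typing in this PR checkable. A hypothetical instance (all field values are made up) would look like:

    res = _CompleteResult(
        matched_text='for',
        matches=['format', 'forward'],                      # hypothetical matches
        matches_origin=['IPCompleter.python_matches'] * 2,
        jedi_matches=(),
    )
    res.matched_text   # fields are accessed by name, which mypy can verify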
1005 1023 class IPCompleter(Completer):
1006 1024 """Extension of the completer class with IPython-specific features"""
1007 1025
1026 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1027
1008 1028 @observe('greedy')
1009 1029 def _greedy_changed(self, change):
1010 1030 """update the splitter and readline delims when greedy is changed"""
1011 1031 if change['new']:
1012 1032 self.splitter.delims = GREEDY_DELIMS
1013 1033 else:
1014 1034 self.splitter.delims = DELIMS
1015 1035
1016 1036 dict_keys_only = Bool(False,
1017 1037 help="""Whether to show dict key matches only""")
1018 1038
1019 1039 merge_completions = Bool(True,
1020 1040 help="""Whether to merge completion results into a single list
1021 1041
1022 1042 If False, only the completion results from the first non-empty
1023 1043 completer will be returned.
1024 1044 """
1025 1045 ).tag(config=True)
1026 1046 omit__names = Enum((0,1,2), default_value=2,
1027 1047 help="""Instruct the completer to omit private method names
1028 1048
1029 1049 Specifically, when completing on ``object.<tab>``.
1030 1050
1031 1051 When 2 [default]: all names that start with '_' will be excluded.
1032 1052
1033 1053 When 1: all 'magic' names (``__foo__``) will be excluded.
1034 1054
1035 1055 When 0: nothing will be excluded.
1036 1056 """
1037 1057 ).tag(config=True)
1038 1058 limit_to__all__ = Bool(False,
1039 1059 help="""
1040 1060 DEPRECATED as of version 5.0.
1041 1061
1042 1062 Instruct the completer to use __all__ for the completion
1043 1063
1044 1064 Specifically, when completing on ``object.<tab>``.
1045 1065
1046 1066 When True: only those names in obj.__all__ will be included.
1047 1067
1048 1068 When False [default]: the __all__ attribute is ignored
1049 1069 """,
1050 1070 ).tag(config=True)
1051 1071
1052 1072 profile_completions = Bool(
1053 1073 default_value=False,
1054 1074 help="If True, emit profiling data for completion subsystem using cProfile."
1055 1075 ).tag(config=True)
1056 1076
1057 1077 profiler_output_dir = Unicode(
1058 1078 default_value=".completion_profiles",
1059 1079 help="Template for path at which to output profile data for completions."
1060 1080 ).tag(config=True)
1061 1081
1062 1082 @observe('limit_to__all__')
1063 1083 def _limit_to_all_changed(self, change):
1064 1084 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1065 1085 'value has been deprecated since IPython 5.0, will be made to have '
1066 1086 'no effect and then removed in a future version of IPython.',
1067 1087 UserWarning)
1068 1088
1069 1089 def __init__(self, shell=None, namespace=None, global_namespace=None,
1070 1090 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1071 1091 """IPCompleter() -> completer
1072 1092
1073 1093 Return a completer object.
1074 1094
1075 1095 Parameters
1076 1096 ----------
1077 1097
1078 1098 shell
1079 1099 a pointer to the ipython shell itself. This is needed
1080 1100 because this completer knows about magic functions, and those can
1081 1101 only be accessed via the ipython instance.
1082 1102
1083 1103 namespace : dict, optional
1084 1104 an optional dict where completions are performed.
1085 1105
1086 1106 global_namespace : dict, optional
1087 1107 secondary optional dict for completions, to
1088 1108 handle cases (such as IPython embedded inside functions) where
1089 1109 both Python scopes are visible.
1090 1110
1091 1111 use_readline : bool, optional
1092 1112 DEPRECATED, ignored since IPython 6.0, will have no effects
1093 1113 """
1094 1114
1095 1115 self.magic_escape = ESC_MAGIC
1096 1116 self.splitter = CompletionSplitter()
1097 1117
1098 1118 if use_readline is not _deprecation_readline_sentinel:
1099 1119 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1100 1120 DeprecationWarning, stacklevel=2)
1101 1121
1102 1122 # _greedy_changed() depends on splitter and readline being defined:
1103 1123 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1104 1124 config=config, **kwargs)
1105 1125
1106 1126 # List where completion matches will be stored
1107 1127 self.matches = []
1108 1128 self.shell = shell
1109 1129 # Regexp to split filenames with spaces in them
1110 1130 self.space_name_re = re.compile(r'([^\\] )')
1111 1131 # Hold a local ref. to glob.glob for speed
1112 1132 self.glob = glob.glob
1113 1133
1114 1134 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1115 1135 # buffers, to avoid completion problems.
1116 1136 term = os.environ.get('TERM','xterm')
1117 1137 self.dumb_terminal = term in ['dumb','emacs']
1118 1138
1119 1139 # Special handling of backslashes needed in win32 platforms
1120 1140 if sys.platform == "win32":
1121 1141 self.clean_glob = self._clean_glob_win32
1122 1142 else:
1123 1143 self.clean_glob = self._clean_glob
1124 1144
1125 1145 #regexp to parse docstring for function signature
1126 1146 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1127 1147 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1128 1148 #use this if positional argument name is also needed
1129 1149 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1130 1150
1131 1151 self.magic_arg_matchers = [
1132 1152 self.magic_config_matches,
1133 1153 self.magic_color_matches,
1134 1154 ]
1135 1155
1136 1156 # This is set externally by InteractiveShell
1137 1157 self.custom_completers = None
1138 1158
1139 1159 # This is a list of names of unicode characters that can be completed
1140 1160 # into their corresponding unicode value. The list is large, so we
1141 1161 # lazily initialize it on first use. Consuming code should access this
1142 1162 # attribute through the `@unicode_names` property.
1143 1163 self._unicode_names = None
1144 1164
1145 1165 @property
1146 def matchers(self):
1166 def matchers(self) -> List[Any]:
1147 1167 """All active matcher routines for completion"""
1148 1168 if self.dict_keys_only:
1149 1169 return [self.dict_key_matches]
1150 1170
1151 1171 if self.use_jedi:
1152 1172 return [
1153 1173 *self.custom_matchers,
1154 1174 self.file_matches,
1155 1175 self.magic_matches,
1156 1176 self.dict_key_matches,
1157 1177 ]
1158 1178 else:
1159 1179 return [
1160 1180 *self.custom_matchers,
1161 1181 self.python_matches,
1162 1182 self.file_matches,
1163 1183 self.magic_matches,
1164 1184 self.python_func_kw_matches,
1165 1185 self.dict_key_matches,
1166 1186 ]
1167 1187
1168 def all_completions(self, text) -> List[str]:
1188 def all_completions(self, text:str) -> List[str]:
1169 1189 """
1170 1190 Wrapper around the completion methods for the benefit of emacs.
1171 1191 """
1172 1192 prefix = text.rpartition('.')[0]
1173 1193 with provisionalcompleter():
1174 1194 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1175 1195 for c in self.completions(text, len(text))]
1176 1196
1177 1197 return self.complete(text)[1]
1178 1198
1179 def _clean_glob(self, text):
1199 def _clean_glob(self, text:str):
1180 1200 return self.glob("%s*" % text)
1181 1201
1182 def _clean_glob_win32(self,text):
1202 def _clean_glob_win32(self, text:str):
1183 1203 return [f.replace("\\","/")
1184 1204 for f in self.glob("%s*" % text)]
1185 1205
1186 def file_matches(self, text):
1206 def file_matches(self, text:str)->List[str]:
1187 1207 """Match filenames, expanding ~USER type strings.
1188 1208
1189 1209 Most of the seemingly convoluted logic in this completer is an
1190 1210 attempt to handle filenames with spaces in them. And yet it's not
1191 1211 quite perfect, because Python's readline doesn't expose all of the
1192 1212 GNU readline details needed for this to be done correctly.
1193 1213
1194 1214 For a filename with a space in it, the printed completions will be
1195 1215 only the parts after what's already been typed (instead of the
1196 1216 full completions, as is normally done). I don't think with the
1197 1217 current (as of Python 2.3) Python readline it's possible to do
1198 1218 better."""
1199 1219
1200 1220 # chars that require escaping with backslash - i.e. chars
1201 1221 # that readline treats incorrectly as delimiters, but we
1202 1222 # don't want to treat as delimiters in filename matching
1203 1223 # when escaped with backslash
1204 1224 if text.startswith('!'):
1205 1225 text = text[1:]
1206 1226 text_prefix = u'!'
1207 1227 else:
1208 1228 text_prefix = u''
1209 1229
1210 1230 text_until_cursor = self.text_until_cursor
1211 1231 # track strings with open quotes
1212 1232 open_quotes = has_open_quotes(text_until_cursor)
1213 1233
1214 1234 if '(' in text_until_cursor or '[' in text_until_cursor:
1215 1235 lsplit = text
1216 1236 else:
1217 1237 try:
1218 1238 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1219 1239 lsplit = arg_split(text_until_cursor)[-1]
1220 1240 except ValueError:
1221 1241 # typically an unmatched ", or backslash without escaped char.
1222 1242 if open_quotes:
1223 1243 lsplit = text_until_cursor.split(open_quotes)[-1]
1224 1244 else:
1225 1245 return []
1226 1246 except IndexError:
1227 1247 # tab pressed on empty line
1228 1248 lsplit = ""
1229 1249
1230 1250 if not open_quotes and lsplit != protect_filename(lsplit):
1231 1251 # if protectables are found, do matching on the whole escaped name
1232 1252 has_protectables = True
1233 1253 text0,text = text,lsplit
1234 1254 else:
1235 1255 has_protectables = False
1236 1256 text = os.path.expanduser(text)
1237 1257
1238 1258 if text == "":
1239 1259 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1240 1260
1241 1261 # Compute the matches from the filesystem
1242 1262 if sys.platform == 'win32':
1243 1263 m0 = self.clean_glob(text)
1244 1264 else:
1245 1265 m0 = self.clean_glob(text.replace('\\', ''))
1246 1266
1247 1267 if has_protectables:
1248 1268 # If we had protectables, we need to revert our changes to the
1249 1269 # beginning of filename so that we don't double-write the part
1250 1270 # of the filename we have so far
1251 1271 len_lsplit = len(lsplit)
1252 1272 matches = [text_prefix + text0 +
1253 1273 protect_filename(f[len_lsplit:]) for f in m0]
1254 1274 else:
1255 1275 if open_quotes:
1256 1276 # if we have a string with an open quote, we don't need to
1257 1277 # protect the names beyond the quote (and we _shouldn't_, as
1258 1278 # it would cause bugs when the filesystem call is made).
1259 1279 matches = m0 if sys.platform == "win32" else\
1260 1280 [protect_filename(f, open_quotes) for f in m0]
1261 1281 else:
1262 1282 matches = [text_prefix +
1263 1283 protect_filename(f) for f in m0]
1264 1284
1265 1285 # Mark directories in input list by appending '/' to their names.
1266 1286 return [x+'/' if os.path.isdir(x) else x for x in matches]
1267 1287
1268 def magic_matches(self, text):
1288 def magic_matches(self, text:str):
1269 1289 """Match magics"""
1270 1290 # Get all shell magics now rather than statically, so magics loaded at
1271 1291 # runtime show up too.
1272 1292 lsm = self.shell.magics_manager.lsmagic()
1273 1293 line_magics = lsm['line']
1274 1294 cell_magics = lsm['cell']
1275 1295 pre = self.magic_escape
1276 1296 pre2 = pre+pre
1277 1297
1278 1298 explicit_magic = text.startswith(pre)
1279 1299
1280 1300 # Completion logic:
1281 1301 # - user gives %%: only do cell magics
1282 1302 # - user gives %: do both line and cell magics
1283 1303 # - no prefix: do both
1284 1304 # In other words, line magics are skipped if the user gives %% explicitly
1285 1305 #
1286 1306 # We also exclude magics that match any currently visible names:
1287 1307 # https://github.com/ipython/ipython/issues/4877, unless the user has
1288 1308 # typed a %:
1289 1309 # https://github.com/ipython/ipython/issues/10754
1290 1310 bare_text = text.lstrip(pre)
1291 1311 global_matches = self.global_matches(bare_text)
1292 1312 if not explicit_magic:
1293 1313 def matches(magic):
1294 1314 """
1295 1315 Filter magics, in particular remove magics that match
1296 1316 a name present in global namespace.
1297 1317 """
1298 1318 return ( magic.startswith(bare_text) and
1299 1319 magic not in global_matches )
1300 1320 else:
1301 1321 def matches(magic):
1302 1322 return magic.startswith(bare_text)
1303 1323
1304 1324 comp = [ pre2+m for m in cell_magics if matches(m)]
1305 1325 if not text.startswith(pre2):
1306 1326 comp += [ pre+m for m in line_magics if matches(m)]
1307 1327
1308 1328 return comp
1309 1329
1310 1330 def magic_config_matches(self, text:str) -> List[str]:
1311 1331 """ Match class names and attributes for %config magic """
1312 1332 texts = text.strip().split()
1313 1333
1314 1334 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1315 1335 # get all configuration classes
1316 1336 classes = sorted(set([ c for c in self.shell.configurables
1317 1337 if c.__class__.class_traits(config=True)
1318 1338 ]), key=lambda x: x.__class__.__name__)
1319 1339 classnames = [ c.__class__.__name__ for c in classes ]
1320 1340
1321 1341 # return all classnames if config or %config is given
1322 1342 if len(texts) == 1:
1323 1343 return classnames
1324 1344
1325 1345 # match classname
1326 1346 classname_texts = texts[1].split('.')
1327 1347 classname = classname_texts[0]
1328 1348 classname_matches = [ c for c in classnames
1329 1349 if c.startswith(classname) ]
1330 1350
1331 1351 # return matched classes or the matched class with attributes
1332 1352 if texts[1].find('.') < 0:
1333 1353 return classname_matches
1334 1354 elif len(classname_matches) == 1 and \
1335 1355 classname_matches[0] == classname:
1336 1356 cls = classes[classnames.index(classname)].__class__
1337 1357 help = cls.class_get_help()
1338 1358 # strip leading '--' from cl-args:
1339 1359 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1340 1360 return [ attr.split('=')[0]
1341 1361 for attr in help.strip().splitlines()
1342 1362 if attr.startswith(texts[1]) ]
1343 1363 return []
1344 1364
1345 1365 def magic_color_matches(self, text:str) -> List[str] :
1346 1366 """ Match color schemes for %colors magic"""
1347 1367 texts = text.split()
1348 1368 if text.endswith(' '):
1349 1369 # .split() strips off the trailing whitespace. Add '' back
1350 1370 # so that: '%colors ' -> ['%colors', '']
1351 1371 texts.append('')
1352 1372
1353 1373 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1354 1374 prefix = texts[1]
1355 1375 return [ color for color in InspectColors.keys()
1356 1376 if color.startswith(prefix) ]
1357 1377 return []
1358 1378
1359 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str):
1379 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str) -> Iterable[Any]:
1360 1380 """
1361 1381
1362 1382 Return a list of :any:`jedi.api.Completions` object from a ``text`` and
1363 1383 cursor position.
1364 1384
1365 1385 Parameters
1366 1386 ----------
1367 1387 cursor_column : int
1368 1388 column position of the cursor in ``text``, 0-indexed.
1369 1389 cursor_line : int
1370 1390 line position of the cursor in ``text``, 0-indexed
1371 1391 text : str
1372 1392 text to complete
1373 1393
1374 1394 Debugging
1375 1395 ---------
1376 1396
1377 1397 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1378 1398 object containing a string with the Jedi debug information attached.
1379 1399 """
1380 1400 namespaces = [self.namespace]
1381 1401 if self.global_namespace is not None:
1382 1402 namespaces.append(self.global_namespace)
1383 1403
1384 1404 completion_filter = lambda x:x
1385 1405 offset = cursor_to_position(text, cursor_line, cursor_column)
1386 1406 # filter output if we are completing for object members
1387 1407 if offset:
1388 1408 pre = text[offset-1]
1389 1409 if pre == '.':
1390 1410 if self.omit__names == 2:
1391 1411 completion_filter = lambda c:not c.name.startswith('_')
1392 1412 elif self.omit__names == 1:
1393 1413 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1394 1414 elif self.omit__names == 0:
1395 1415 completion_filter = lambda x:x
1396 1416 else:
1397 1417 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1398 1418
1399 1419 interpreter = jedi.Interpreter(text[:offset], namespaces)
1400 1420 try_jedi = True
1401 1421
1402 1422 try:
1403 1423 # find the first token in the current tree -- if it is a ' or " then we are in a string
1404 1424 completing_string = False
1405 1425 try:
1406 1426 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1407 1427 except StopIteration:
1408 1428 pass
1409 1429 else:
1410 1430 # note the value may be ', ", or it may also be ''' or """, or
1411 1431 # in some cases, """what/you/typed..., but all of these are
1412 1432 # strings.
1413 1433 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1414 1434
1415 1435 # if we are in a string jedi is likely not the right candidate for
1416 1436 # now. Skip it.
1417 1437 try_jedi = not completing_string
1418 1438 except Exception as e:
1419 1439 # many things can go wrong; we are using a private API, just don't crash.
1420 1440 if self.debug:
1421 1441 print("Error detecting if completing a non-finished string :", e, '|')
1422 1442
1423 1443 if not try_jedi:
1424 1444 return []
1425 1445 try:
1426 1446 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1427 1447 except Exception as e:
1428 1448 if self.debug:
1429 1449 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1430 1450 else:
1431 1451 return []
1432 1452
1433 def python_matches(self, text):
1453 def python_matches(self, text:str)->List[str]:
1434 1454 """Match attributes or global python names"""
1435 1455 if "." in text:
1436 1456 try:
1437 1457 matches = self.attr_matches(text)
1438 1458 if text.endswith('.') and self.omit__names:
1439 1459 if self.omit__names == 1:
1440 1460 # true if txt is _not_ a __ name, false otherwise:
1441 1461 no__name = (lambda txt:
1442 1462 re.match(r'.*\.__.*?__',txt) is None)
1443 1463 else:
1444 1464 # true if txt is _not_ a _ name, false otherwise:
1445 1465 no__name = (lambda txt:
1446 1466 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1447 1467 matches = filter(no__name, matches)
1448 1468 except NameError:
1449 1469 # catches <undefined attributes>.<tab>
1450 1470 matches = []
1451 1471 else:
1452 1472 matches = self.global_matches(text)
1453 1473 return matches
1454 1474
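A minimal sketch of python_matches; the exact result depends on the namespace, so only membership is checked:

>>> from IPython import get_ipython
>>> ip = get_ipython()
>>> 'print' in ip.Completer.python_matches('pri')
True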
1455 1475 def _default_arguments_from_docstring(self, doc):
1456 1476 """Parse the first line of docstring for call signature.
1457 1477
1458 1478 Docstring should be of the form 'min(iterable[, key=func])\n'.
1459 1479 It can also parse cython docstring of the form
1460 1480 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1461 1481 """
1462 1482 if doc is None:
1463 1483 return []
1464 1484
1465 1485 # care only about the first line
1466 1486 line = doc.lstrip().splitlines()[0]
1467 1487
1468 1488 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1469 1489 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1470 1490 sig = self.docstring_sig_re.search(line)
1471 1491 if sig is None:
1472 1492 return []
1473 1493 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1474 1494 sig = sig.groups()[0].split(',')
1475 1495 ret = []
1476 1496 for s in sig:
1477 1497 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1478 1498 ret += self.docstring_kwd_re.findall(s)
1479 1499 return ret
1480 1500
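For illustration, the docstring parser can be exercised directly; this mirrors the existing test cases:

>>> from IPython import get_ipython
>>> c = get_ipython().Completer
>>> c._default_arguments_from_docstring('min(iterable[, key=func]) -> value')
['key']
>>> c._default_arguments_from_docstring(
...     'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n')
['ncall', 'resume', 'nsplit']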
1481 1501 def _default_arguments(self, obj):
1482 1502 """Return the list of default arguments of obj if it is callable,
1483 1503 or empty list otherwise."""
1484 1504 call_obj = obj
1485 1505 ret = []
1486 1506 if inspect.isbuiltin(obj):
1487 1507 pass
1488 1508 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1489 1509 if inspect.isclass(obj):
1490 1510 #for cython embedsignature=True the constructor docstring
1491 1511 #belongs to the object itself not __init__
1492 1512 ret += self._default_arguments_from_docstring(
1493 1513 getattr(obj, '__doc__', ''))
1494 1514 # for classes, check for __init__,__new__
1495 1515 call_obj = (getattr(obj, '__init__', None) or
1496 1516 getattr(obj, '__new__', None))
1497 1517 # for all others, check if they are __call__able
1498 1518 elif hasattr(obj, '__call__'):
1499 1519 call_obj = obj.__call__
1500 1520 ret += self._default_arguments_from_docstring(
1501 1521 getattr(call_obj, '__doc__', ''))
1502 1522
1503 1523 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1504 1524 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1505 1525
1506 1526 try:
1507 1527 sig = inspect.signature(call_obj)
1508 1528 ret.extend(k for k, v in sig.parameters.items() if
1509 1529 v.kind in _keeps)
1510 1530 except ValueError:
1511 1531 pass
1512 1532
1513 1533 return list(set(ret))
1514 1534
1515 def python_func_kw_matches(self,text):
1535 def python_func_kw_matches(self, text):
1516 1536 """Match named parameters (kwargs) of the last open function"""
1517 1537
1518 1538 if "." in text: # a parameter cannot be dotted
1519 1539 return []
1520 1540 try: regexp = self.__funcParamsRegex
1521 1541 except AttributeError:
1522 1542 regexp = self.__funcParamsRegex = re.compile(r'''
1523 1543 '.*?(?<!\\)' | # single quoted strings or
1524 1544 ".*?(?<!\\)" | # double quoted strings or
1525 1545 \w+ | # identifier
1526 1546 \S # other characters
1527 1547 ''', re.VERBOSE | re.DOTALL)
1528 1548 # 1. find the nearest identifier that comes before an unclosed
1529 1549 # parenthesis before the cursor
1530 1550 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1531 1551 tokens = regexp.findall(self.text_until_cursor)
1532 1552 iterTokens = reversed(tokens); openPar = 0
1533 1553
1534 1554 for token in iterTokens:
1535 1555 if token == ')':
1536 1556 openPar -= 1
1537 1557 elif token == '(':
1538 1558 openPar += 1
1539 1559 if openPar > 0:
1540 1560 # found the last unclosed parenthesis
1541 1561 break
1542 1562 else:
1543 1563 return []
1544 1564 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1545 1565 ids = []
1546 1566 isId = re.compile(r'\w+$').match
1547 1567
1548 1568 while True:
1549 1569 try:
1550 1570 ids.append(next(iterTokens))
1551 1571 if not isId(ids[-1]):
1552 1572 ids.pop(); break
1553 1573 if not next(iterTokens) == '.':
1554 1574 break
1555 1575 except StopIteration:
1556 1576 break
1557 1577
1558 1578 # Find all named arguments already assigned to, so as to avoid suggesting
1559 1579 # them again
1560 1580 usedNamedArgs = set()
1561 1581 par_level = -1
1562 1582 for token, next_token in zip(tokens, tokens[1:]):
1563 1583 if token == '(':
1564 1584 par_level += 1
1565 1585 elif token == ')':
1566 1586 par_level -= 1
1567 1587
1568 1588 if par_level != 0:
1569 1589 continue
1570 1590
1571 1591 if next_token != '=':
1572 1592 continue
1573 1593
1574 1594 usedNamedArgs.add(token)
1575 1595
1576 1596 argMatches = []
1577 1597 try:
1578 1598 callableObj = '.'.join(ids[::-1])
1579 1599 namedArgs = self._default_arguments(eval(callableObj,
1580 1600 self.namespace))
1581 1601
1582 1602 # Remove used named arguments from the list, no need to show twice
1583 1603 for namedArg in set(namedArgs) - usedNamedArgs:
1584 1604 if namedArg.startswith(text):
1585 argMatches.append(u"%s=" %namedArg)
1605 argMatches.append("%s=" %namedArg)
1586 1606 except:
1587 1607 pass
1588 1608
1589 1609 return argMatches
1590 1610
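A sketch of keyword-argument completion through the public complete() API (the matcher itself reads self.text_until_cursor, so it is easiest not to call it directly); this mirrors the existing tests and assumes Jedi is disabled:

>>> from IPython import get_ipython
>>> ip = get_ipython()
>>> ip.Completer.use_jedi = False
>>> ip.ex("def myfunc(a=1, b=2): return a + b")
>>> 'b=' in ip.Completer.complete(None, "myfunc(1,b")[1]
True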
1591 def dict_key_matches(self, text):
1611 @staticmethod
1612 def _get_keys(obj: Any) -> List[Any]:
1613 # Objects can define their own completions by defining an
1614 # _ipython_key_completions_() method.
1615 method = get_real_method(obj, '_ipython_key_completions_')
1616 if method is not None:
1617 return method()
1618
1619 # Special case some common in-memory dict-like types
1620 if isinstance(obj, dict) or\
1621 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1622 try:
1623 return list(obj.keys())
1624 except Exception:
1625 return []
1626 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1627 _safe_isinstance(obj, 'numpy', 'void'):
1628 return obj.dtype.names or []
1629 return []
1630
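A minimal sketch of the new _get_keys staticmethod; the Things class here is hypothetical:

>>> from IPython.core.completer import IPCompleter
>>> class Things:
...     def _ipython_key_completions_(self):
...         return ['alpha', 'beta']
>>> IPCompleter._get_keys(Things())
['alpha', 'beta']
>>> IPCompleter._get_keys({'x': 1, 'y': 2})
['x', 'y']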
1631 def dict_key_matches(self, text:str) -> List[str]:
1592 1632 "Match string keys in a dictionary, after e.g. 'foo[' "
1593 def get_keys(obj):
1594 # Objects can define their own completions by defining an
1595 # _ipy_key_completions_() method.
1596 method = get_real_method(obj, '_ipython_key_completions_')
1597 if method is not None:
1598 return method()
1599
1600 # Special case some common in-memory dict-like types
1601 if isinstance(obj, dict) or\
1602 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1603 try:
1604 return list(obj.keys())
1605 except Exception:
1606 return []
1607 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1608 _safe_isinstance(obj, 'numpy', 'void'):
1609 return obj.dtype.names or []
1610 return []
1611 1633
1612 try:
1634
1635 if self.__dict_key_regexps is not None:
1613 1636 regexps = self.__dict_key_regexps
1614 except AttributeError:
1637 else:
1615 1638 dict_key_re_fmt = r'''(?x)
1616 1639 ( # match dict-referring expression wrt greedy setting
1617 1640 %s
1618 1641 )
1619 1642 \[ # open bracket
1620 1643 \s* # and optional whitespace
1621 1644 ([uUbB]? # string prefix (r not handled)
1622 1645 (?: # unclosed string
1623 1646 '(?:[^']|(?<!\\)\\')*
1624 1647 |
1625 1648 "(?:[^"]|(?<!\\)\\")*
1626 1649 )
1627 1650 )?
1628 1651 $
1629 1652 '''
1630 1653 regexps = self.__dict_key_regexps = {
1631 1654 False: re.compile(dict_key_re_fmt % r'''
1632 1655 # identifiers separated by .
1633 1656 (?!\d)\w+
1634 1657 (?:\.(?!\d)\w+)*
1635 1658 '''),
1636 1659 True: re.compile(dict_key_re_fmt % '''
1637 1660 .+
1638 1661 ''')
1639 1662 }
1640 1663
1641 1664 match = regexps[self.greedy].search(self.text_until_cursor)
1642 1665 if match is None:
1643 1666 return []
1644 1667
1645 1668 expr, prefix = match.groups()
1646 1669 try:
1647 1670 obj = eval(expr, self.namespace)
1648 1671 except Exception:
1649 1672 try:
1650 1673 obj = eval(expr, self.global_namespace)
1651 1674 except Exception:
1652 1675 return []
1653 1676
1654 keys = get_keys(obj)
1677 keys = self._get_keys(obj)
1655 1678 if not keys:
1656 1679 return keys
1657 1680 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims)
1658 1681 if not matches:
1659 1682 return matches
1660 1683
1661 1684 # get the cursor position of
1662 1685 # - the text being completed
1663 1686 # - the start of the key text
1664 1687 # - the start of the completion
1665 1688 text_start = len(self.text_until_cursor) - len(text)
1666 1689 if prefix:
1667 1690 key_start = match.start(2)
1668 1691 completion_start = key_start + token_offset
1669 1692 else:
1670 1693 key_start = completion_start = match.end()
1671 1694
1672 1695 # grab the leading prefix, to make sure all completions start with `text`
1673 1696 if text_start > key_start:
1674 1697 leading = ''
1675 1698 else:
1676 1699 leading = text[text_start:completion_start]
1677 1700
1678 1701 # the index of the `[` character
1679 1702 bracket_idx = match.end(1)
1680 1703
1681 1704 # append closing quote and bracket as appropriate
1682 1705 # this is *not* appropriate if the opening quote or bracket is outside
1683 1706 # the text given to this method
1684 1707 suf = ''
1685 1708 continuation = self.line_buffer[len(self.text_until_cursor):]
1686 1709 if key_start > text_start and closing_quote:
1687 1710 # quotes were opened inside text, maybe close them
1688 1711 if continuation.startswith(closing_quote):
1689 1712 continuation = continuation[len(closing_quote):]
1690 1713 else:
1691 1714 suf += closing_quote
1692 1715 if bracket_idx > text_start:
1693 1716 # brackets were opened inside text, maybe close them
1694 1717 if not continuation.startswith(']'):
1695 1718 suf += ']'
1696 1719
1697 1720 return [leading + k + suf for k in matches]
1698 1721
1699 def unicode_name_matches(self, text):
1700 u"""Match Latex-like syntax for unicode characters base
1722 @staticmethod
1723 def unicode_name_matches(text:str) -> Tuple[str, List[str]] :
1724 """Match Latex-like syntax for unicode characters base
1701 1725 on the name of the character.
1702 1726
1703 1727 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
1704 1728
1705 1729 Works only on valid Python 3 identifiers, or on combining characters that
1706 1730 will combine to form a valid identifier.
1707
1708 Used on Python 3 only.
1709 1731 """
1710 1732 slashpos = text.rfind('\\')
1711 1733 if slashpos > -1:
1712 1734 s = text[slashpos+1:]
1713 1735 try :
1714 1736 unic = unicodedata.lookup(s)
1715 1737 # allow combining chars
1716 1738 if ('a'+unic).isidentifier():
1717 1739 return '\\'+s,[unic]
1718 1740 except KeyError:
1719 1741 pass
1720 return u'', []
1742 return '', []
1721 1743
1722 1744
1723 def latex_matches(self, text):
1724 u"""Match Latex syntax for unicode characters.
1745 def latex_matches(self, text:str) -> Tuple[str, Sequence[str]]:
1746 """Match Latex syntax for unicode characters.
1725 1747
1726 1748 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
1727 1749 """
1728 1750 slashpos = text.rfind('\\')
1729 1751 if slashpos > -1:
1730 1752 s = text[slashpos:]
1731 1753 if s in latex_symbols:
1732 1754 # Try to complete a full latex symbol to unicode
1733 1755 # \\alpha -> α
1734 1756 return s, [latex_symbols[s]]
1735 1757 else:
1736 1758 # If a user has partially typed a latex symbol, give them
1737 1759 # a full list of options \al -> [\aleph, \alpha]
1738 1760 matches = [k for k in latex_symbols if k.startswith(s)]
1739 1761 if matches:
1740 1762 return s, matches
1741 return u'', []
1763 return '', ()
1742 1764
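A quick sketch of the two behaviours described in the docstring (full symbol to character, partial symbol to candidate list), assuming a running IPython session:

>>> from IPython import get_ipython
>>> ip = get_ipython()
>>> ip.Completer.latex_matches('\\alpha')
('\\alpha', ['α'])
>>> text, matches = ip.Completer.latex_matches('\\al')
>>> '\\aleph' in matches and '\\alpha' in matches
True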
1743 1765 def dispatch_custom_completer(self, text):
1744 1766 if not self.custom_completers:
1745 1767 return
1746 1768
1747 1769 line = self.line_buffer
1748 1770 if not line.strip():
1749 1771 return None
1750 1772
1751 1773 # Create a little structure to pass all the relevant information about
1752 1774 # the current completion to any custom completer.
1753 1775 event = SimpleNamespace()
1754 1776 event.line = line
1755 1777 event.symbol = text
1756 1778 cmd = line.split(None,1)[0]
1757 1779 event.command = cmd
1758 1780 event.text_until_cursor = self.text_until_cursor
1759 1781
1760 1782 # for foo etc, try also to find completer for %foo
1761 1783 if not cmd.startswith(self.magic_escape):
1762 1784 try_magic = self.custom_completers.s_matches(
1763 1785 self.magic_escape + cmd)
1764 1786 else:
1765 1787 try_magic = []
1766 1788
1767 1789 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1768 1790 try_magic,
1769 1791 self.custom_completers.flat_matches(self.text_until_cursor)):
1770 1792 try:
1771 1793 res = c(event)
1772 1794 if res:
1773 1795 # first, try case sensitive match
1774 1796 withcase = [r for r in res if r.startswith(text)]
1775 1797 if withcase:
1776 1798 return withcase
1777 1799 # if none, then case insensitive ones are ok too
1778 1800 text_low = text.lower()
1779 1801 return [r for r in res if r.lower().startswith(text_low)]
1780 1802 except TryNext:
1781 1803 pass
1782 1804 except KeyboardInterrupt:
1783 1805 """
1784 1806 If a custom completer takes too long,
1785 1807 let the keyboard interrupt abort and return nothing.
1786 1808 """
1787 1809 break
1788 1810
1789 1811 return None
1790 1812
1791 1813 def completions(self, text: str, offset: int)->Iterator[Completion]:
1792 1814 """
1793 1815 Returns an iterator over the possible completions
1794 1816
1795 1817 .. warning:: Unstable
1796 1818
1797 1819 This function is unstable, API may change without warning.
1798 1820 It will also raise unless used in the proper context manager.
1799 1821
1800 1822 Parameters
1801 1823 ----------
1802 1824
1803 1825 text:str
1804 1826 Full text of the current input, multi line string.
1805 1827 offset:int
1806 1828 Integer representing the position of the cursor in ``text``. Offset
1807 1829 is 0-based indexed.
1808 1830
1809 1831 Yields
1810 1832 ------
1811 1833 :any:`Completion` object
1812 1834
1813 1835
1814 1836 The cursor on a text can either be seen as being "in between"
1815 1837 characters or "On" a character depending on the interface visible to
1816 1838 the user. For consistency, the cursor being "in between" characters X
1817 1839 and Y is equivalent to the cursor being "on" character Y, that is to say
1818 1840 the character the cursor is on is considered as being after the cursor.
1819 1841
1820 1842 Combining characters may span more than one position in the
1821 1843 text.
1822 1844
1823 1845
1824 1846 .. note::
1825 1847
1826 1848 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1827 1849 fake Completion token to distinguish completion returned by Jedi
1828 1850 and usual IPython completion.
1829 1851
1830 1852 .. note::
1831 1853
1832 1854 Completions are not completely deduplicated yet. If identical
1833 1855 completions are coming from different sources this function does not
1834 1856 ensure that each completion object will only be present once.
1835 1857 """
1836 1858 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1837 1859 "It may change without warnings. "
1838 1860 "Use in corresponding context manager.",
1839 1861 category=ProvisionalCompleterWarning, stacklevel=2)
1840 1862
1841 1863 seen = set()
1864 profiler:Optional[cProfile.Profile]
1842 1865 try:
1843 1866 if self.profile_completions:
1844 1867 import cProfile
1845 1868 profiler = cProfile.Profile()
1846 1869 profiler.enable()
1847 1870 else:
1848 1871 profiler = None
1849 1872
1850 1873 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1851 1874 if c and (c in seen):
1852 1875 continue
1853 1876 yield c
1854 1877 seen.add(c)
1855 1878 except KeyboardInterrupt:
1856 1879 """if completions take too long and users send keyboard interrupt,
1857 1880 do not crash and return ASAP. """
1858 1881 pass
1859 1882 finally:
1860 1883 if profiler is not None:
1861 1884 profiler.disable()
1862 1885 ensure_dir_exists(self.profiler_output_dir)
1863 1886 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
1864 1887 print("Writing profiler output to", output_path)
1865 1888 profiler.dump_stats(output_path)
1866 1889
1867 def _completions(self, full_text: str, offset: int, *, _timeout)->Iterator[Completion]:
1890 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
1868 1891 """
1869 1892 Core completion module. Same signature as :any:`completions`, with the
1870 1893 extra ``_timeout`` parameter (in seconds).
1871 1894
1872 1895
1873 1896 Computing jedi's completion ``.type`` can be quite expensive (it is a
1874 1897 lazy property) and can require some warm-up, more warm up than just
1875 1898 computing the ``name`` of a completion. The warm-up can be:
1876 1899
1877 1900 - Long warm-up the first time a module is encountered after
1878 1901 install/update: actually build parse/inference tree.
1879 1902
1880 1903 - first time the module is encountered in a session: load tree from
1881 1904 disk.
1882 1905
1883 1906 We don't want to block completions for tens of seconds so we give the
1884 1907 completer a "budget" of ``_timeout`` seconds per invocation to compute
1885 1908 completions types, the completions that have not yet been computed will
1886 1909 be marked as "unknown" an will have a chance to be computed next round
1887 1910 are things get cached.
1888 1911
1889 1912 Keep in mind that Jedi is not the only thing treating the completion so
1890 1913 keep the timeout short-ish as if we take more than 0.3 second we still
1891 1914 have lots of processing to do.
1892 1915
1893 1916 """
1894 1917 deadline = time.monotonic() + _timeout
1895 1918
1896 1919
1897 1920 before = full_text[:offset]
1898 1921 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1899 1922
1900 1923 matched_text, matches, matches_origin, jedi_matches = self._complete(
1901 1924 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1902 1925
1903 1926 iter_jm = iter(jedi_matches)
1904 1927 if _timeout:
1905 1928 for jm in iter_jm:
1906 1929 try:
1907 1930 type_ = jm.type
1908 1931 except Exception:
1909 1932 if self.debug:
1910 1933 print("Error in Jedi getting type of ", jm)
1911 1934 type_ = None
1912 1935 delta = len(jm.name_with_symbols) - len(jm.complete)
1913 1936 if type_ == 'function':
1914 1937 signature = _make_signature(jm)
1915 1938 else:
1916 1939 signature = ''
1917 1940 yield Completion(start=offset - delta,
1918 1941 end=offset,
1919 1942 text=jm.name_with_symbols,
1920 1943 type=type_,
1921 1944 signature=signature,
1922 1945 _origin='jedi')
1923 1946
1924 1947 if time.monotonic() > deadline:
1925 1948 break
1926 1949
1927 1950 for jm in iter_jm:
1928 1951 delta = len(jm.name_with_symbols) - len(jm.complete)
1929 1952 yield Completion(start=offset - delta,
1930 1953 end=offset,
1931 1954 text=jm.name_with_symbols,
1932 1955 type='<unknown>', # don't compute type for speed
1933 1956 _origin='jedi',
1934 1957 signature='')
1935 1958
1936 1959
1937 1960 start_offset = before.rfind(matched_text)
1938 1961
1939 1962 # TODO:
1940 1963 # Suppress this, right now just for debug.
1941 1964 if jedi_matches and matches and self.debug:
1942 1965 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
1943 1966 _origin='debug', type='none', signature='')
1944 1967
1945 1968 # I'm unsure if this is always true, so let's assert and see if it
1946 1969 # crashes
1947 1970 assert before.endswith(matched_text)
1948 1971 for m, t in zip(matches, matches_origin):
1949 1972 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
1950 1973
1951 1974
1952 def complete(self, text=None, line_buffer=None, cursor_pos=None):
1975 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
1953 1976 """Find completions for the given text and line context.
1954 1977
1955 1978 Note that both the text and the line_buffer are optional, but at least
1956 1979 one of them must be given.
1957 1980
1958 1981 Parameters
1959 1982 ----------
1960 1983 text : string, optional
1961 1984 Text to perform the completion on. If not given, the line buffer
1962 1985 is split using the instance's CompletionSplitter object.
1963 1986
1964 1987 line_buffer : string, optional
1965 1988 If not given, the completer attempts to obtain the current line
1966 1989 buffer via readline. This keyword allows clients which are
1967 1990 requesting for text completions in non-readline contexts to inform
1968 1991 the completer of the entire text.
1969 1992
1970 1993 cursor_pos : int, optional
1971 1994 Index of the cursor in the full line buffer. Should be provided by
1972 1995 remote frontends where kernel has no access to frontend state.
1973 1996
1974 1997 Returns
1975 1998 -------
1999 Tuple of two items:
1976 2000 text : str
1977 2001 Text that was actually used in the completion.
1978
1979 2002 matches : list
1980 2003 A list of completion matches.
1981 2004
1982 2005
1983 2006 .. note::
1984 2007
1985 2008 This API is likely to be deprecated and replaced by
1986 2009 :any:`IPCompleter.completions` in the future.
1987 2010
1988 2011
1989 2012 """
1990 2013 warnings.warn('`Completer.complete` is pending deprecation since '
1991 2014 'IPython 6.0 and will be replaced by `Completer.completions`.',
1992 2015 PendingDeprecationWarning)
1993 2016 # potential todo, FOLD the 3rd throw away argument of _complete
1994 2017 # into the first 2 one.
1995 2018 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
1996 2019
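A usage sketch of the (pending-deprecation) complete() API, matching the magic-completion tests further below:

>>> from IPython import get_ipython
>>> text, matches = get_ipython().Completer.complete(None, 'lsmag')
>>> '%lsmagic' in matches
True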
1997 2020 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
1998 full_text=None) -> Tuple[str, List[str], List[str], Iterable[_FakeJediCompletion]]:
2021 full_text=None) -> _CompleteResult:
1999 2022 """
2000 2023
2001 2024 Like complete but can also return raw Jedi completions as well as the
2002 2025 origin of the completion text. This could (and should) be made much
2003 2026 cleaner but that will be simpler once we drop the old (and stateful)
2004 2027 :any:`complete` API.
2005 2028
2006 2029
2007 2030 With the current provisional API, cursor_pos acts both (depending on the
2008 2031 caller) as the offset in the ``text`` or ``line_buffer``, or as the
2009 2032 ``column`` when passing multiline strings. This could/should be renamed
2010 2033 but would add extra noise.
2034
2035 Return
2036 ======
2037
2038 A tuple of four elements:
2039
2040 matched_text: the text that the completer matched
2041 matches: list of completion strings
2042 matches_origin: list of the same length as matches, giving the matcher each completion came from
2043 jedi_matches: list of Jedi matches; these have their own structure.
2011 2044 """
2012 2045
2046
2013 2047 # if the cursor position isn't given, the only sane assumption we can
2014 2048 # make is that it's at the end of the line (the common case)
2015 2049 if cursor_pos is None:
2016 2050 cursor_pos = len(line_buffer) if text is None else len(text)
2017 2051
2018 2052 if self.use_main_ns:
2019 2053 self.namespace = __main__.__dict__
2020 2054
2021 2055 # if text is either None or an empty string, rely on the line buffer
2022 2056 if (not line_buffer) and full_text:
2023 2057 line_buffer = full_text.split('\n')[cursor_line]
2024 2058 if not text:
2025 2059 text = self.splitter.split_line(line_buffer, cursor_pos)
2026 2060
2027 2061 if self.backslash_combining_completions:
2028 2062 # allow deactivation of these on windows.
2029 2063 base_text = text if not line_buffer else line_buffer[:cursor_pos]
2030 latex_text, latex_matches = self.latex_matches(base_text)
2031 if latex_matches:
2032 return latex_text, latex_matches, ['latex_matches']*len(latex_matches), ()
2033 name_text = ''
2034 name_matches = []
2035 # need to add self.fwd_unicode_match() function here when done
2036 for meth in (self.unicode_name_matches, back_latex_name_matches, back_unicode_name_matches, self.fwd_unicode_match):
2064
2065 for meth in (self.latex_matches,
2066 self.unicode_name_matches,
2067 back_latex_name_matches,
2068 back_unicode_name_matches,
2069 self.fwd_unicode_match):
2037 2070 name_text, name_matches = meth(base_text)
2038 2071 if name_text:
2039 return name_text, name_matches[:MATCHES_LIMIT], \
2040 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ()
2072 return _CompleteResult(name_text, name_matches[:MATCHES_LIMIT], \
2073 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ())
2041 2074
2042 2075
2043 2076 # If no line buffer is given, assume the input text is all there was
2044 2077 if line_buffer is None:
2045 2078 line_buffer = text
2046 2079
2047 2080 self.line_buffer = line_buffer
2048 2081 self.text_until_cursor = self.line_buffer[:cursor_pos]
2049 2082
2050 2083 # Do magic arg matches
2051 2084 for matcher in self.magic_arg_matchers:
2052 2085 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2053 2086 if matches:
2054 2087 origins = [matcher.__qualname__] * len(matches)
2055 return text, matches, origins, ()
2088 return _CompleteResult(text, matches, origins, ())
2056 2089
2057 2090 # Start with a clean slate of completions
2058 2091 matches = []
2059 2092
2060 2093 # FIXME: we should extend our api to return a dict with completions for
2061 2094 # different types of objects. The rlcomplete() method could then
2062 2095 # simply collapse the dict into a list for readline, but we'd have
2063 2096 # richer completion semantics in other environments.
2064 completions = ()
2097 completions:Iterable[Any] = []
2065 2098 if self.use_jedi:
2066 2099 if not full_text:
2067 2100 full_text = line_buffer
2068 2101 completions = self._jedi_matches(
2069 2102 cursor_pos, cursor_line, full_text)
2070
2103
2071 2104 if self.merge_completions:
2072 2105 matches = []
2073 2106 for matcher in self.matchers:
2074 2107 try:
2075 2108 matches.extend([(m, matcher.__qualname__)
2076 2109 for m in matcher(text)])
2077 2110 except:
2078 2111 # Show the ugly traceback if the matcher causes an
2079 2112 # exception, but do NOT crash the kernel!
2080 2113 sys.excepthook(*sys.exc_info())
2081 2114 else:
2082 2115 for matcher in self.matchers:
2083 2116 matches = [(m, matcher.__qualname__)
2084 2117 for m in matcher(text)]
2085 2118 if matches:
2086 2119 break
2087 2120
2088 2121 seen = set()
2089 2122 filtered_matches = set()
2090 2123 for m in matches:
2091 2124 t, c = m
2092 2125 if t not in seen:
2093 2126 filtered_matches.add(m)
2094 2127 seen.add(t)
2095 2128
2096 2129 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2097 2130
2098 2131 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2099 2132
2100 2133 _filtered_matches = custom_res or _filtered_matches
2101 2134
2102 2135 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2103 2136 _matches = [m[0] for m in _filtered_matches]
2104 2137 origins = [m[1] for m in _filtered_matches]
2105 2138
2106 2139 self.matches = _matches
2107 2140
2108 return text, _matches, origins, completions
2141 return _CompleteResult(text, _matches, origins, completions)
2109 2142
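Because _CompleteResult is a (named) tuple, callers that unpack the four fields positionally keep working; a rough sketch:

>>> from IPython import get_ipython
>>> matched_text, matches, origins, jedi_matches = get_ipython().Completer._complete(
...     cursor_line=0, cursor_pos=3, full_text='pri')
>>> matched_text
'pri'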
2110 def fwd_unicode_match(self, text:str) -> Tuple[str, list]:
2143 def fwd_unicode_match(self, text:str) -> Tuple[str, Sequence[str]]:
2144 """
2145
2146 Forward match a string starting with a backslash against the list of
2147 potential Unicode completions.
2148
2149 Will compute the list of Unicode character names on first call and cache it.
2150
2151 Return
2152 ======
2153
2154 A tuple with:
2155 - matched text (empty if no matches)
2156 - list of potential completions (an empty tuple if there are no matches)
2157 """
2158 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2159 # We could do a faster match using a Trie.
2160
2161 # Using pygtrie the following seems to work:
2162
2163 # s = PrefixSet()
2164
2165 # for c in range(0,0x10FFFF + 1):
2166 # try:
2167 # s.add(unicodedata.name(chr(c)))
2168 # except ValueError:
2169 # pass
2170 # [''.join(k) for k in s.iter(prefix)]
2171
2172 # But need to be timed and adds an extra dependency.
2111 2173
2112 2174 slashpos = text.rfind('\\')
2113 2175 # if text starts with slash
2114 2176 if slashpos > -1:
2115 2177 # PERF: It's important that we don't access self._unicode_names
2116 2178 # until we're inside this if-block. _unicode_names is lazily
2117 2179 # initialized, and it takes a user-noticeable amount of time to
2118 2180 # initialize it, so we don't want to initialize it unless we're
2119 2181 # actually going to use it.
2120 2182 s = text[slashpos+1:]
2121 2183 candidates = [x for x in self.unicode_names if x.startswith(s)]
2122 2184 if candidates:
2123 2185 return s, candidates
2124 2186 else:
2125 2187 return '', ()
2126 2188
2127 2189 # if text does not start with slash
2128 2190 else:
2129 return u'', ()
2191 return '', ()
2130 2192
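A small sketch of the forward unicode matcher; the candidate list is long, so only membership is shown:

>>> from IPython import get_ipython
>>> s, names = get_ipython().Completer.fwd_unicode_match('\\ROMAN NUMERAL')
>>> s
'ROMAN NUMERAL'
>>> 'ROMAN NUMERAL FIVE' in names
True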
2131 2193 @property
2132 2194 def unicode_names(self) -> List[str]:
2133 2195 """List of names of unicode code points that can be completed.
2134 2196
2135 2197 The list is lazily initialized on first access.
2136 2198 """
2137 2199 if self._unicode_names is None:
2138 2200 names = []
2139 2201 for c in range(0,0x10FFFF + 1):
2140 2202 try:
2141 2203 names.append(unicodedata.name(chr(c)))
2142 2204 except ValueError:
2143 2205 pass
2144 self._unicode_names = names
2206 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2145 2207
2146 2208 return self._unicode_names
2209
2210 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2211 names = []
2212 for start,stop in ranges:
2213 for c in range(start, stop) :
2214 try:
2215 names.append(unicodedata.name(chr(c)))
2216 except ValueError:
2217 pass
2218 return names
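A tiny illustrative sketch of the helper above on a two-code-point range (range() excludes the stop value):

>>> from IPython.core.completer import _unicode_name_compute
>>> _unicode_name_compute([(0x41, 0x43)])
['LATIN CAPITAL LETTER A', 'LATIN CAPITAL LETTER B']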
@@ -1,1111 +1,1129 b''
1 1 # encoding: utf-8
2 2 """Tests for the IPython tab-completion machinery."""
3 3
4 4 # Copyright (c) IPython Development Team.
5 5 # Distributed under the terms of the Modified BSD License.
6 6
7 7 import os
8 8 import sys
9 9 import textwrap
10 10 import unittest
11 11
12 12 from contextlib import contextmanager
13 13
14 14 import nose.tools as nt
15 15
16 16 from traitlets.config.loader import Config
17 17 from IPython import get_ipython
18 18 from IPython.core import completer
19 19 from IPython.external import decorators
20 20 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
21 21 from IPython.utils.generics import complete_object
22 22 from IPython.testing import decorators as dec
23 23
24 24 from IPython.core.completer import (
25 25 Completion,
26 26 provisionalcompleter,
27 27 match_dict_keys,
28 28 _deduplicate_completions,
29 29 )
30 30 from nose.tools import assert_in, assert_not_in
31 31
32 32 # -----------------------------------------------------------------------------
33 33 # Test functions
34 34 # -----------------------------------------------------------------------------
35 35
36 def test_unicode_range():
37 """
38 Test that the ranges we test for unicode names give the same number of
39 results as testing the full range.
40 """
41 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
42
43 expected_list = _unicode_name_compute([(0, 0x110000)])
44 test = _unicode_name_compute(_UNICODE_RANGES)
45 len_exp = len(expected_list)
46 len_test = len(test)
47
48 # do not inline the len() or on error pytest will try to print the 130 000 +
49 # elements.
50 assert len_exp == len_test
51
52 # fail if new unicode symbols have been added.
53 assert len_exp <= 131808
54
36 55
37 56 @contextmanager
38 57 def greedy_completion():
39 58 ip = get_ipython()
40 59 greedy_original = ip.Completer.greedy
41 60 try:
42 61 ip.Completer.greedy = True
43 62 yield
44 63 finally:
45 64 ip.Completer.greedy = greedy_original
46 65
47 66
48 67 def test_protect_filename():
49 68 if sys.platform == "win32":
50 69 pairs = [
51 70 ("abc", "abc"),
52 71 (" abc", '" abc"'),
53 72 ("a bc", '"a bc"'),
54 73 ("a bc", '"a bc"'),
55 74 (" bc", '" bc"'),
56 75 ]
57 76 else:
58 77 pairs = [
59 78 ("abc", "abc"),
60 79 (" abc", r"\ abc"),
61 80 ("a bc", r"a\ bc"),
62 81 ("a bc", r"a\ \ bc"),
63 82 (" bc", r"\ \ bc"),
64 83 # On posix, we also protect parens and other special characters.
65 84 ("a(bc", r"a\(bc"),
66 85 ("a)bc", r"a\)bc"),
67 86 ("a( )bc", r"a\(\ \)bc"),
68 87 ("a[1]bc", r"a\[1\]bc"),
69 88 ("a{1}bc", r"a\{1\}bc"),
70 89 ("a#bc", r"a\#bc"),
71 90 ("a?bc", r"a\?bc"),
72 91 ("a=bc", r"a\=bc"),
73 92 ("a\\bc", r"a\\bc"),
74 93 ("a|bc", r"a\|bc"),
75 94 ("a;bc", r"a\;bc"),
76 95 ("a:bc", r"a\:bc"),
77 96 ("a'bc", r"a\'bc"),
78 97 ("a*bc", r"a\*bc"),
79 98 ('a"bc', r"a\"bc"),
80 99 ("a^bc", r"a\^bc"),
81 100 ("a&bc", r"a\&bc"),
82 101 ]
83 102 # run the actual tests
84 103 for s1, s2 in pairs:
85 104 s1p = completer.protect_filename(s1)
86 105 nt.assert_equal(s1p, s2)
87 106
88 107
89 108 def check_line_split(splitter, test_specs):
90 109 for part1, part2, split in test_specs:
91 110 cursor_pos = len(part1)
92 111 line = part1 + part2
93 112 out = splitter.split_line(line, cursor_pos)
94 113 nt.assert_equal(out, split)
95 114
96 115
97 116 def test_line_split():
98 117 """Basic line splitter test with default specs."""
99 118 sp = completer.CompletionSplitter()
100 119 # The format of the test specs is: part1, part2, expected answer. Parts 1
101 120 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
102 121 # was at the end of part1. So an empty part2 represents someone hitting
103 122 # tab at the end of the line, the most common case.
104 123 t = [
105 124 ("run some/scrip", "", "some/scrip"),
106 125 ("run scripts/er", "ror.py foo", "scripts/er"),
107 126 ("echo $HOM", "", "HOM"),
108 127 ("print sys.pa", "", "sys.pa"),
109 128 ("print(sys.pa", "", "sys.pa"),
110 129 ("execfile('scripts/er", "", "scripts/er"),
111 130 ("a[x.", "", "x."),
112 131 ("a[x.", "y", "x."),
113 132 ('cd "some_file/', "", "some_file/"),
114 133 ]
115 134 check_line_split(sp, t)
116 135 # Ensure splitting works OK with unicode by re-running the tests with
117 136 # all inputs turned into unicode
118 137 check_line_split(sp, [map(str, p) for p in t])
119 138
120 139
121 140 class NamedInstanceMetaclass(type):
122 141 def __getitem__(cls, item):
123 142 return cls.get_instance(item)
124 143
125 144
126 145 class NamedInstanceClass(metaclass=NamedInstanceMetaclass):
127 146 def __init__(self, name):
128 147 if not hasattr(self.__class__, "instances"):
129 148 self.__class__.instances = {}
130 149 self.__class__.instances[name] = self
131 150
132 151 @classmethod
133 152 def _ipython_key_completions_(cls):
134 153 return cls.instances.keys()
135 154
136 155 @classmethod
137 156 def get_instance(cls, name):
138 157 return cls.instances[name]
139 158
140 159
141 160 class KeyCompletable:
142 161 def __init__(self, things=()):
143 162 self.things = things
144 163
145 164 def _ipython_key_completions_(self):
146 165 return list(self.things)
147 166
148 167
149 168 class TestCompleter(unittest.TestCase):
150 169 def setUp(self):
151 170 """
152 171 We want to silence all PendingDeprecationWarning when testing the completer
153 172 """
154 173 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
155 174 self._assertwarns.__enter__()
156 175
157 176 def tearDown(self):
158 177 try:
159 178 self._assertwarns.__exit__(None, None, None)
160 179 except AssertionError:
161 180 pass
162 181
163 182 def test_custom_completion_error(self):
164 183 """Test that errors from custom attribute completers are silenced."""
165 184 ip = get_ipython()
166 185
167 186 class A:
168 187 pass
169 188
170 189 ip.user_ns["x"] = A()
171 190
172 191 @complete_object.register(A)
173 192 def complete_A(a, existing_completions):
174 193 raise TypeError("this should be silenced")
175 194
176 195 ip.complete("x.")
177 196
178 197 def test_custom_completion_ordering(self):
179 198 """Test that errors from custom attribute completers are silenced."""
180 199 ip = get_ipython()
181 200
182 201 _, matches = ip.complete('in')
183 202 assert matches.index('input') < matches.index('int')
184 203
185 204 def complete_example(a):
186 205 return ['example2', 'example1']
187 206
188 207 ip.Completer.custom_completers.add_re('ex*', complete_example)
189 208 _, matches = ip.complete('ex')
190 209 assert matches.index('example2') < matches.index('example1')
191 210
192 211 def test_unicode_completions(self):
193 212 ip = get_ipython()
194 213 # Some strings that trigger different types of completion. Check them both
195 214 # in str and unicode forms
196 215 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
197 216 for t in s + list(map(str, s)):
198 217 # We don't need to check exact completion values (they may change
199 218 # depending on the state of the namespace), but at least no exceptions
200 219 # should be thrown and the return value should be a pair of text, list
201 220 # values.
202 221 text, matches = ip.complete(t)
203 222 nt.assert_true(isinstance(text, str))
204 223 nt.assert_true(isinstance(matches, list))
205 224
206 225 def test_latex_completions(self):
207 226 from IPython.core.latex_symbols import latex_symbols
208 227 import random
209 228
210 229 ip = get_ipython()
211 230 # Test some random unicode symbols
212 231 keys = random.sample(latex_symbols.keys(), 10)
213 232 for k in keys:
214 233 text, matches = ip.complete(k)
215 nt.assert_equal(len(matches), 1)
216 234 nt.assert_equal(text, k)
217 nt.assert_equal(matches[0], latex_symbols[k])
235 nt.assert_equal(matches, [latex_symbols[k]])
218 236 # Test a more complex line
219 237 text, matches = ip.complete("print(\\alpha")
220 238 nt.assert_equal(text, "\\alpha")
221 239 nt.assert_equal(matches[0], latex_symbols["\\alpha"])
222 240 # Test multiple matching latex symbols
223 241 text, matches = ip.complete("\\al")
224 242 nt.assert_in("\\alpha", matches)
225 243 nt.assert_in("\\aleph", matches)
226 244
227 245 def test_latex_no_results(self):
228 246 """
229 247 forward latex should really return nothing in either field if nothing is found.
230 248 """
231 249 ip = get_ipython()
232 250 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
233 251 nt.assert_equal(text, "")
234 nt.assert_equal(matches, [])
252 nt.assert_equal(matches, ())
235 253
236 254 def test_back_latex_completion(self):
237 255 ip = get_ipython()
238 256
239 257 # do not return more than 1 match for \beta, only the latex one.
240 258 name, matches = ip.complete("\\β")
241 259 nt.assert_equal(matches, ['\\beta'])
242 260
243 261 def test_back_unicode_completion(self):
244 262 ip = get_ipython()
245 263
246 264 name, matches = ip.complete("\\Ⅴ")
247 nt.assert_equal(matches, ["\\ROMAN NUMERAL FIVE"])
265 nt.assert_equal(matches, ("\\ROMAN NUMERAL FIVE",))
248 266
249 267 def test_forward_unicode_completion(self):
250 268 ip = get_ipython()
251 269
252 270 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
253 nt.assert_equal(len(matches), 1)
254 nt.assert_equal(matches[0], "")
271 nt.assert_equal(matches, ["Ⅴ"] ) # This is not a V
272 nt.assert_equal(matches, ["\u2164"] ) # same as above but explicit.
255 273
256 274 @nt.nottest # now we have a completion for \jmath
257 275 @decorators.knownfailureif(
258 276 sys.platform == "win32", "Fails if there is a C:\\j... path"
259 277 )
260 278 def test_no_ascii_back_completion(self):
261 279 ip = get_ipython()
262 280 with TemporaryWorkingDirectory(): # Avoid any filename completions
263 281 # single ascii letters that don't yet have completions
264 282 for letter in "jJ":
265 283 name, matches = ip.complete("\\" + letter)
266 284 nt.assert_equal(matches, [])
267 285
268 286 class CompletionSplitterTestCase(unittest.TestCase):
269 287 def setUp(self):
270 288 self.sp = completer.CompletionSplitter()
271 289
272 290 def test_delim_setting(self):
273 291 self.sp.delims = " "
274 292 nt.assert_equal(self.sp.delims, " ")
275 293 nt.assert_equal(self.sp._delim_expr, r"[\ ]")
276 294
277 295 def test_spaces(self):
278 296 """Test with only spaces as split chars."""
279 297 self.sp.delims = " "
280 298 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
281 299 check_line_split(self.sp, t)
282 300
283 301 def test_has_open_quotes1(self):
284 302 for s in ["'", "'''", "'hi' '"]:
285 303 nt.assert_equal(completer.has_open_quotes(s), "'")
286 304
287 305 def test_has_open_quotes2(self):
288 306 for s in ['"', '"""', '"hi" "']:
289 307 nt.assert_equal(completer.has_open_quotes(s), '"')
290 308
291 309 def test_has_open_quotes3(self):
292 310 for s in ["''", "''' '''", "'hi' 'ipython'"]:
293 311 nt.assert_false(completer.has_open_quotes(s))
294 312
295 313 def test_has_open_quotes4(self):
296 314 for s in ['""', '""" """', '"hi" "ipython"']:
297 315 nt.assert_false(completer.has_open_quotes(s))
298 316
299 317 @decorators.knownfailureif(
300 318 sys.platform == "win32", "abspath completions fail on Windows"
301 319 )
302 320 def test_abspath_file_completions(self):
303 321 ip = get_ipython()
304 322 with TemporaryDirectory() as tmpdir:
305 323 prefix = os.path.join(tmpdir, "foo")
306 324 suffixes = ["1", "2"]
307 325 names = [prefix + s for s in suffixes]
308 326 for n in names:
309 327 open(n, "w").close()
310 328
311 329 # Check simple completion
312 330 c = ip.complete(prefix)[1]
313 331 nt.assert_equal(c, names)
314 332
315 333 # Now check with a function call
316 334 cmd = 'a = f("%s' % prefix
317 335 c = ip.complete(prefix, cmd)[1]
318 336 comp = [prefix + s for s in suffixes]
319 337 nt.assert_equal(c, comp)
320 338
321 339 def test_local_file_completions(self):
322 340 ip = get_ipython()
323 341 with TemporaryWorkingDirectory():
324 342 prefix = "./foo"
325 343 suffixes = ["1", "2"]
326 344 names = [prefix + s for s in suffixes]
327 345 for n in names:
328 346 open(n, "w").close()
329 347
330 348 # Check simple completion
331 349 c = ip.complete(prefix)[1]
332 350 nt.assert_equal(c, names)
333 351
334 352 # Now check with a function call
335 353 cmd = 'a = f("%s' % prefix
336 354 c = ip.complete(prefix, cmd)[1]
337 355 comp = {prefix + s for s in suffixes}
338 356 nt.assert_true(comp.issubset(set(c)))
339 357
340 358 def test_quoted_file_completions(self):
341 359 ip = get_ipython()
342 360 with TemporaryWorkingDirectory():
343 361 name = "foo'bar"
344 362 open(name, "w").close()
345 363
346 364 # Don't escape Windows
347 365 escaped = name if sys.platform == "win32" else "foo\\'bar"
348 366
349 367 # Single quote matches embedded single quote
350 368 text = "open('foo"
351 369 c = ip.Completer._complete(
352 370 cursor_line=0, cursor_pos=len(text), full_text=text
353 371 )[1]
354 372 nt.assert_equal(c, [escaped])
355 373
356 374 # Double quote requires no escape
357 375 text = 'open("foo'
358 376 c = ip.Completer._complete(
359 377 cursor_line=0, cursor_pos=len(text), full_text=text
360 378 )[1]
361 379 nt.assert_equal(c, [name])
362 380
363 381 # No quote requires an escape
364 382 text = "%ls foo"
365 383 c = ip.Completer._complete(
366 384 cursor_line=0, cursor_pos=len(text), full_text=text
367 385 )[1]
368 386 nt.assert_equal(c, [escaped])
369 387
370 388 def test_all_completions_dups(self):
371 389 """
372 390 Make sure the output of `IPCompleter.all_completions` does not have
373 391 duplicated prefixes.
374 392 """
375 393 ip = get_ipython()
376 394 c = ip.Completer
377 395 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
378 396 for jedi_status in [True, False]:
379 397 with provisionalcompleter():
380 398 ip.Completer.use_jedi = jedi_status
381 399 matches = c.all_completions("TestCl")
382 400 assert matches == ['TestClass'], jedi_status
383 401 matches = c.all_completions("TestClass.")
384 402 assert len(matches) > 2, jedi_status
385 403 matches = c.all_completions("TestClass.a")
386 404 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
387 405
388 406 def test_jedi(self):
389 407 """
390 408 A couple of issue we had with Jedi
391 409 """
392 410 ip = get_ipython()
393 411
394 412 def _test_complete(reason, s, comp, start=None, end=None):
395 413 l = len(s)
396 414 start = start if start is not None else l
397 415 end = end if end is not None else l
398 416 with provisionalcompleter():
399 417 ip.Completer.use_jedi = True
400 418 completions = set(ip.Completer.completions(s, l))
401 419 ip.Completer.use_jedi = False
402 420 assert_in(Completion(start, end, comp), completions, reason)
403 421
404 422 def _test_not_complete(reason, s, comp):
405 423 l = len(s)
406 424 with provisionalcompleter():
407 425 ip.Completer.use_jedi = True
408 426 completions = set(ip.Completer.completions(s, l))
409 427 ip.Completer.use_jedi = False
410 428 assert_not_in(Completion(l, l, comp), completions, reason)
411 429
412 430 import jedi
413 431
414 432 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
415 433 if jedi_version > (0, 10):
416 434 yield _test_complete, "jedi >0.9 should complete and not crash", "a=1;a.", "real"
417 435 yield _test_complete, "can infer first argument", 'a=(1,"foo");a[0].', "real"
418 436 yield _test_complete, "can infer second argument", 'a=(1,"foo");a[1].', "capitalize"
419 437 yield _test_complete, "cover duplicate completions", "im", "import", 0, 2
420 438
421 439 yield _test_not_complete, "does not mix types", 'a=(1,"foo");a[0].', "capitalize"
422 440
423 441 def test_completion_have_signature(self):
424 442 """
425 443 Let's make sure Jedi is capable of pulling out the signature of the function we are completing.
426 444 """
427 445 ip = get_ipython()
428 446 with provisionalcompleter():
429 447 ip.Completer.use_jedi = True
430 448 completions = ip.Completer.completions("ope", 3)
431 449 c = next(completions) # should be `open`
432 450 ip.Completer.use_jedi = False
433 451 assert "file" in c.signature, "Signature of function was not found by completer"
434 452 assert (
435 453 "encoding" in c.signature
436 454 ), "Signature of function was not found by completer"
437 455
438 456 def test_deduplicate_completions(self):
439 457 """
440 458 Test that completions are correctly deduplicated (even if ranges are not the same)
441 459 """
442 460 ip = get_ipython()
443 461 ip.ex(
444 462 textwrap.dedent(
445 463 """
446 464 class Z:
447 465 zoo = 1
448 466 """
449 467 )
450 468 )
451 469 with provisionalcompleter():
452 470 ip.Completer.use_jedi = True
453 471 l = list(
454 472 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
455 473 )
456 474 ip.Completer.use_jedi = False
457 475
458 476 assert len(l) == 1, "Completions (Z.z<tab>) correctly deduplicate: %s " % l
459 477 assert l[0].text == "zoo" # and not `it.accumulate`
460 478
461 479 def test_greedy_completions(self):
462 480 """
463 481 Test the capability of the Greedy completer.
464 482
465 483 Most of the tests here do not really show off the greedy completer; as proof,
466 484 each of the texts below now passes with Jedi. The greedy completer is capable of more.
467 485
468 486 See the :any:`test_dict_key_completion_contexts`
469 487
470 488 """
471 489 ip = get_ipython()
472 490 ip.ex("a=list(range(5))")
473 491 _, c = ip.complete(".", line="a[0].")
474 492 nt.assert_false(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
475 493
476 494 def _(line, cursor_pos, expect, message, completion):
477 495 with greedy_completion(), provisionalcompleter():
478 496 ip.Completer.use_jedi = False
479 497 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
480 498 nt.assert_in(expect, c, message % c)
481 499
482 500 ip.Completer.use_jedi = True
483 501 with provisionalcompleter():
484 502 completions = ip.Completer.completions(line, cursor_pos)
485 503 nt.assert_in(completion, completions)
486 504
487 505 with provisionalcompleter():
488 506 yield _, "a[0].", 5, "a[0].real", "Should have completed on a[0].: %s", Completion(
489 507 5, 5, "real"
490 508 )
491 509 yield _, "a[0].r", 6, "a[0].real", "Should have completed on a[0].r: %s", Completion(
492 510 5, 6, "real"
493 511 )
494 512
495 513 yield _, "a[0].from_", 10, "a[0].from_bytes", "Should have completed on a[0].from_: %s", Completion(
496 514 5, 10, "from_bytes"
497 515 )
498 516
499 517 def test_omit__names(self):
500 518 # also happens to test IPCompleter as a configurable
501 519 ip = get_ipython()
502 520 ip._hidden_attr = 1
503 521 ip._x = {}
504 522 c = ip.Completer
505 523 ip.ex("ip=get_ipython()")
506 524 cfg = Config()
507 525 cfg.IPCompleter.omit__names = 0
508 526 c.update_config(cfg)
509 527 with provisionalcompleter():
510 528 c.use_jedi = False
511 529 s, matches = c.complete("ip.")
512 530 nt.assert_in("ip.__str__", matches)
513 531 nt.assert_in("ip._hidden_attr", matches)
514 532
515 533 # c.use_jedi = True
516 534 # completions = set(c.completions('ip.', 3))
517 535 # nt.assert_in(Completion(3, 3, '__str__'), completions)
518 536 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
519 537
520 538 cfg = Config()
521 539 cfg.IPCompleter.omit__names = 1
522 540 c.update_config(cfg)
523 541 with provisionalcompleter():
524 542 c.use_jedi = False
525 543 s, matches = c.complete("ip.")
526 544 nt.assert_not_in("ip.__str__", matches)
527 545 # nt.assert_in('ip._hidden_attr', matches)
528 546
529 547 # c.use_jedi = True
530 548 # completions = set(c.completions('ip.', 3))
531 549 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
532 550 # nt.assert_in(Completion(3,3, "_hidden_attr"), completions)
533 551
534 552 cfg = Config()
535 553 cfg.IPCompleter.omit__names = 2
536 554 c.update_config(cfg)
537 555 with provisionalcompleter():
538 556 c.use_jedi = False
539 557 s, matches = c.complete("ip.")
540 558 nt.assert_not_in("ip.__str__", matches)
541 559 nt.assert_not_in("ip._hidden_attr", matches)
542 560
543 561 # c.use_jedi = True
544 562 # completions = set(c.completions('ip.', 3))
545 563 # nt.assert_not_in(Completion(3,3,'__str__'), completions)
546 564 # nt.assert_not_in(Completion(3,3, "_hidden_attr"), completions)
547 565
548 566 with provisionalcompleter():
549 567 c.use_jedi = False
550 568 s, matches = c.complete("ip._x.")
551 569 nt.assert_in("ip._x.keys", matches)
552 570
553 571 # c.use_jedi = True
554 572 # completions = set(c.completions('ip._x.', 6))
555 573 # nt.assert_in(Completion(6,6, "keys"), completions)
556 574
557 575 del ip._hidden_attr
558 576 del ip._x
559 577
560 578 def test_limit_to__all__False_ok(self):
561 579 """
562 580 Limit to all is deprecated, once we remove it this test can go away.
563 581 """
564 582 ip = get_ipython()
565 583 c = ip.Completer
566 584 c.use_jedi = False
567 585 ip.ex("class D: x=24")
568 586 ip.ex("d=D()")
569 587 cfg = Config()
570 588 cfg.IPCompleter.limit_to__all__ = False
571 589 c.update_config(cfg)
572 590 s, matches = c.complete("d.")
573 591 nt.assert_in("d.x", matches)
574 592
575 593 def test_get__all__entries_ok(self):
576 594 class A:
577 595 __all__ = ["x", 1]
578 596
579 597 words = completer.get__all__entries(A())
580 598 nt.assert_equal(words, ["x"])
581 599
582 600 def test_get__all__entries_no__all__ok(self):
583 601 class A:
584 602 pass
585 603
586 604 words = completer.get__all__entries(A())
587 605 nt.assert_equal(words, [])
588 606
589 607 def test_func_kw_completions(self):
590 608 ip = get_ipython()
591 609 c = ip.Completer
592 610 c.use_jedi = False
593 611 ip.ex("def myfunc(a=1,b=2): return a+b")
594 612 s, matches = c.complete(None, "myfunc(1,b")
595 613 nt.assert_in("b=", matches)
596 614 # Simulate completing with cursor right after b (pos==10):
597 615 s, matches = c.complete(None, "myfunc(1,b)", 10)
598 616 nt.assert_in("b=", matches)
599 617 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
600 618 nt.assert_in("b=", matches)
601 619 # builtin function
602 620 s, matches = c.complete(None, "min(k, k")
603 621 nt.assert_in("key=", matches)
604 622
605 623 def test_default_arguments_from_docstring(self):
606 624 ip = get_ipython()
607 625 c = ip.Completer
608 626 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
609 627 nt.assert_equal(kwd, ["key"])
610 628 # with cython type etc
611 629 kwd = c._default_arguments_from_docstring(
612 630 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
613 631 )
614 632 nt.assert_equal(kwd, ["ncall", "resume", "nsplit"])
615 633 # white spaces
616 634 kwd = c._default_arguments_from_docstring(
617 635 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
618 636 )
619 637 nt.assert_equal(kwd, ["ncall", "resume", "nsplit"])
620 638
621 639 def test_line_magics(self):
622 640 ip = get_ipython()
623 641 c = ip.Completer
624 642 s, matches = c.complete(None, "lsmag")
625 643 nt.assert_in("%lsmagic", matches)
626 644 s, matches = c.complete(None, "%lsmag")
627 645 nt.assert_in("%lsmagic", matches)
628 646
629 647 def test_cell_magics(self):
630 648 from IPython.core.magic import register_cell_magic
631 649
632 650 @register_cell_magic
633 651 def _foo_cellm(line, cell):
634 652 pass
635 653
636 654 ip = get_ipython()
637 655 c = ip.Completer
638 656
639 657 s, matches = c.complete(None, "_foo_ce")
640 658 nt.assert_in("%%_foo_cellm", matches)
641 659 s, matches = c.complete(None, "%%_foo_ce")
642 660 nt.assert_in("%%_foo_cellm", matches)
643 661
644 662 def test_line_cell_magics(self):
645 663 from IPython.core.magic import register_line_cell_magic
646 664
647 665 @register_line_cell_magic
648 666 def _bar_cellm(line, cell):
649 667 pass
650 668
651 669 ip = get_ipython()
652 670 c = ip.Completer
653 671
654 672 # The policy here is trickier, see comments in completion code. The
655 673 # returned values depend on whether the user passes %% or not explicitly,
656 674 # and this will show a difference if the same name is both a line and cell
657 675 # magic.
658 676 s, matches = c.complete(None, "_bar_ce")
659 677 nt.assert_in("%_bar_cellm", matches)
660 678 nt.assert_in("%%_bar_cellm", matches)
661 679 s, matches = c.complete(None, "%_bar_ce")
662 680 nt.assert_in("%_bar_cellm", matches)
663 681 nt.assert_in("%%_bar_cellm", matches)
664 682 s, matches = c.complete(None, "%%_bar_ce")
665 683 nt.assert_not_in("%_bar_cellm", matches)
666 684 nt.assert_in("%%_bar_cellm", matches)
667 685
668 686 def test_magic_completion_order(self):
669 687 ip = get_ipython()
670 688 c = ip.Completer
671 689
672 690 # Test ordering of line and cell magics.
673 691 text, matches = c.complete("timeit")
674 692 nt.assert_equal(matches, ["%timeit", "%%timeit"])
675 693
676 694 def test_magic_completion_shadowing(self):
677 695 ip = get_ipython()
678 696 c = ip.Completer
679 697 c.use_jedi = False
680 698
681 699 # Before importing matplotlib, %matplotlib magic should be the only option.
682 700 text, matches = c.complete("mat")
683 701 nt.assert_equal(matches, ["%matplotlib"])
684 702
685 703 # The newly introduced name should shadow the magic.
686 704 ip.run_cell("matplotlib = 1")
687 705 text, matches = c.complete("mat")
688 706 nt.assert_equal(matches, ["matplotlib"])
689 707
690 708 # After removing matplotlib from namespace, the magic should again be
691 709 # the only option.
692 710 del ip.user_ns["matplotlib"]
693 711 text, matches = c.complete("mat")
694 712 nt.assert_equal(matches, ["%matplotlib"])
695 713
696 714 def test_magic_completion_shadowing_explicit(self):
697 715 """
698 716 If the user tries to complete a shadowed magic, an explicit % start should
699 717 still return the completions.
700 718 """
701 719 ip = get_ipython()
702 720 c = ip.Completer
703 721
704 722 # Before importing matplotlib, %matplotlib magic should be the only option.
705 723 text, matches = c.complete("%mat")
706 724 nt.assert_equal(matches, ["%matplotlib"])
707 725
708 726 ip.run_cell("matplotlib = 1")
709 727
710 728 # Even with matplotlib bound in the namespace, an explicit % prefix should
711 729 # still return the magic as the only option.
712 730 text, matches = c.complete("%mat")
713 731 nt.assert_equal(matches, ["%matplotlib"])
714 732
715 733 def test_magic_config(self):
716 734 ip = get_ipython()
717 735 c = ip.Completer
718 736
719 737 s, matches = c.complete(None, "conf")
720 738 nt.assert_in("%config", matches)
721 739 s, matches = c.complete(None, "conf")
722 740 nt.assert_not_in("AliasManager", matches)
723 741 s, matches = c.complete(None, "config ")
724 742 nt.assert_in("AliasManager", matches)
725 743 s, matches = c.complete(None, "%config ")
726 744 nt.assert_in("AliasManager", matches)
727 745 s, matches = c.complete(None, "config Ali")
728 746 nt.assert_list_equal(["AliasManager"], matches)
729 747 s, matches = c.complete(None, "%config Ali")
730 748 nt.assert_list_equal(["AliasManager"], matches)
731 749 s, matches = c.complete(None, "config AliasManager")
732 750 nt.assert_list_equal(["AliasManager"], matches)
733 751 s, matches = c.complete(None, "%config AliasManager")
734 752 nt.assert_list_equal(["AliasManager"], matches)
735 753 s, matches = c.complete(None, "config AliasManager.")
736 754 nt.assert_in("AliasManager.default_aliases", matches)
737 755 s, matches = c.complete(None, "%config AliasManager.")
738 756 nt.assert_in("AliasManager.default_aliases", matches)
739 757 s, matches = c.complete(None, "config AliasManager.de")
740 758 nt.assert_list_equal(["AliasManager.default_aliases"], matches)
741 759 s, matches = c.complete(None, "config AliasManager.de")
742 760 nt.assert_list_equal(["AliasManager.default_aliases"], matches)
743 761
744 762 def test_magic_color(self):
745 763 ip = get_ipython()
746 764 c = ip.Completer
747 765
748 766 s, matches = c.complete(None, "colo")
749 767 nt.assert_in("%colors", matches)
750 768 s, matches = c.complete(None, "colo")
751 769 nt.assert_not_in("NoColor", matches)
752 770 s, matches = c.complete(None, "%colors") # No trailing space
753 771 nt.assert_not_in("NoColor", matches)
754 772 s, matches = c.complete(None, "colors ")
755 773 nt.assert_in("NoColor", matches)
756 774 s, matches = c.complete(None, "%colors ")
757 775 nt.assert_in("NoColor", matches)
758 776 s, matches = c.complete(None, "colors NoCo")
759 777 nt.assert_list_equal(["NoColor"], matches)
760 778 s, matches = c.complete(None, "%colors NoCo")
761 779 nt.assert_list_equal(["NoColor"], matches)
762 780
763 781 def test_match_dict_keys(self):
764 782 """
765 783 Test that match_dict_keys works on a couple of use cases, returns what is
766 784 expected, and does not crash.
767 785 """
768 786 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
769 787
770 788 keys = ["foo", b"far"]
771 789 assert match_dict_keys(keys, "b'", delims=delims) == ("'", 2, ["far"])
772 790 assert match_dict_keys(keys, "b'f", delims=delims) == ("'", 2, ["far"])
773 791 assert match_dict_keys(keys, 'b"', delims=delims) == ('"', 2, ["far"])
774 792 assert match_dict_keys(keys, 'b"f', delims=delims) == ('"', 2, ["far"])
775 793
776 794 assert match_dict_keys(keys, "'", delims=delims) == ("'", 1, ["foo"])
777 795 assert match_dict_keys(keys, "'f", delims=delims) == ("'", 1, ["foo"])
778 796 assert match_dict_keys(keys, '"', delims=delims) == ('"', 1, ["foo"])
779 797 assert match_dict_keys(keys, '"f', delims=delims) == ('"', 1, ["foo"])
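# As the assertions above suggest, match_dict_keys returns a 3-tuple:
# the quote character to use, the index where the key text starts in the
# typed prefix (2 after b', 1 after a bare quote), and the matching keys.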
780 798
781 799 match_dict_keys
782 800
783 801 def test_dict_key_completion_string(self):
784 802 """Test dictionary key completion for string keys"""
785 803 ip = get_ipython()
786 804 complete = ip.Completer.complete
787 805
788 806 ip.user_ns["d"] = {"abc": None}
789 807
790 808 # check completion at different stages
791 809 _, matches = complete(line_buffer="d[")
792 810 nt.assert_in("'abc'", matches)
793 811 nt.assert_not_in("'abc']", matches)
794 812
795 813 _, matches = complete(line_buffer="d['")
796 814 nt.assert_in("abc", matches)
797 815 nt.assert_not_in("abc']", matches)
798 816
799 817 _, matches = complete(line_buffer="d['a")
800 818 nt.assert_in("abc", matches)
801 819 nt.assert_not_in("abc']", matches)
802 820
803 821 # check use of different quoting
804 822 _, matches = complete(line_buffer='d["')
805 823 nt.assert_in("abc", matches)
806 824 nt.assert_not_in('abc"]', matches)
807 825
808 826 _, matches = complete(line_buffer='d["a')
809 827 nt.assert_in("abc", matches)
810 828 nt.assert_not_in('abc"]', matches)
811 829
812 830 # check sensitivity to following context
813 831 _, matches = complete(line_buffer="d[]", cursor_pos=2)
814 832 nt.assert_in("'abc'", matches)
815 833
816 834 _, matches = complete(line_buffer="d['']", cursor_pos=3)
817 835 nt.assert_in("abc", matches)
818 836 nt.assert_not_in("abc'", matches)
819 837 nt.assert_not_in("abc']", matches)
820 838
821 839 # check that multiple matching keys are returned and that noise (non-matching or non-string keys) is not
822 840 ip.user_ns["d"] = {
823 841 "abc": None,
824 842 "abd": None,
825 843 "bad": None,
826 844 object(): None,
827 845 5: None,
828 846 }
829 847
830 848 _, matches = complete(line_buffer="d['a")
831 849 nt.assert_in("abc", matches)
832 850 nt.assert_in("abd", matches)
833 851 nt.assert_not_in("bad", matches)
834 852 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
835 853
836 854 # check escaping and whitespace
837 855 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
838 856 _, matches = complete(line_buffer="d['a")
839 857 nt.assert_in("a\\nb", matches)
840 858 nt.assert_in("a\\'b", matches)
841 859 nt.assert_in('a"b', matches)
842 860 nt.assert_in("a word", matches)
843 861 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
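# i.e. inside the opened ' quote, the newline and the apostrophe come back
# escaped (a\nb, a\'b), while the double quote and the space need no escaping.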
844 862
845 863 # - can complete on non-initial word of the string
846 864 _, matches = complete(line_buffer="d['a w")
847 865 nt.assert_in("word", matches)
848 866
849 867 # - understands quote escaping
850 868 _, matches = complete(line_buffer="d['a\\'")
851 869 nt.assert_in("b", matches)
852 870
853 871 # - default quoting should work like repr
854 872 _, matches = complete(line_buffer="d[")
855 873 nt.assert_in('"a\'b"', matches)
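# i.e. the key a'b is offered as "a'b" (double-quoted), which is how
# repr() would render a string containing a single quote.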
856 874
857 875 # - when opening quote with ", possible to match with unescaped apostrophe
858 876 _, matches = complete(line_buffer="d[\"a'")
859 877 nt.assert_in("b", matches)
860 878
861 879 # must not split at delimiters that readline won't split at
862 880 if "-" not in ip.Completer.splitter.delims:
863 881 ip.user_ns["d"] = {"before-after": None}
864 882 _, matches = complete(line_buffer="d['before-af")
865 883 nt.assert_in("before-after", matches)
866 884
867 885 def test_dict_key_completion_contexts(self):
868 886 """Test expression contexts in which dict key completion occurs"""
869 887 ip = get_ipython()
870 888 complete = ip.Completer.complete
871 889 d = {"abc": None}
872 890 ip.user_ns["d"] = d
873 891
874 892 class C:
875 893 data = d
876 894
877 895 ip.user_ns["C"] = C
878 896 ip.user_ns["get"] = lambda: d
879 897
880 898 def assert_no_completion(**kwargs):
881 899 _, matches = complete(**kwargs)
882 900 nt.assert_not_in("abc", matches)
883 901 nt.assert_not_in("abc'", matches)
884 902 nt.assert_not_in("abc']", matches)
885 903 nt.assert_not_in("'abc'", matches)
886 904 nt.assert_not_in("'abc']", matches)
887 905
888 906 def assert_completion(**kwargs):
889 907 _, matches = complete(**kwargs)
890 908 nt.assert_in("'abc'", matches)
891 909 nt.assert_not_in("'abc']", matches)
892 910
893 911 # no completion after string closed, even if reopened
894 912 assert_no_completion(line_buffer="d['a'")
895 913 assert_no_completion(line_buffer='d["a"')
896 914 assert_no_completion(line_buffer="d['a' + ")
897 915 assert_no_completion(line_buffer="d['a' + '")
898 916
899 917 # completion in non-trivial expressions
900 918 assert_completion(line_buffer="+ d[")
901 919 assert_completion(line_buffer="(d[")
902 920 assert_completion(line_buffer="C.data[")
903 921
904 922 # greedy flag
905 923 def assert_completion(**kwargs):
906 924 _, matches = complete(**kwargs)
907 925 nt.assert_in("get()['abc']", matches)
908 926
909 927 assert_no_completion(line_buffer="get()[")
910 928 with greedy_completion():
911 929 assert_completion(line_buffer="get()[")
912 930 assert_completion(line_buffer="get()['")
913 931 assert_completion(line_buffer="get()['a")
914 932 assert_completion(line_buffer="get()['ab")
915 933 assert_completion(line_buffer="get()['abc")
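# Taken together: without the greedy flag the completer does not evaluate
# the get() call, so no keys are offered; with greedy completion it does,
# and returns the fully spelled-out access such as get()['abc'].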
916 934
917 935 def test_dict_key_completion_bytes(self):
918 936 """Test handling of bytes in dict key completion"""
919 937 ip = get_ipython()
920 938 complete = ip.Completer.complete
921 939
922 940 ip.user_ns["d"] = {"abc": None, b"abd": None}
923 941
924 942 _, matches = complete(line_buffer="d[")
925 943 nt.assert_in("'abc'", matches)
926 944 nt.assert_in("b'abd'", matches)
927 945
928 946 if False: # not currently implemented
929 947 _, matches = complete(line_buffer="d[b")
930 948 nt.assert_in("b'abd'", matches)
931 949 nt.assert_not_in("b'abc'", matches)
932 950
933 951 _, matches = complete(line_buffer="d[b'")
934 952 nt.assert_in("abd", matches)
935 953 nt.assert_not_in("abc", matches)
936 954
937 955 _, matches = complete(line_buffer="d[B'")
938 956 nt.assert_in("abd", matches)
939 957 nt.assert_not_in("abc", matches)
940 958
941 959 _, matches = complete(line_buffer="d['")
942 960 nt.assert_in("abc", matches)
943 961 nt.assert_not_in("abd", matches)
944 962
945 963 def test_dict_key_completion_unicode_py3(self):
946 964 """Test handling of unicode in dict key completion"""
947 965 ip = get_ipython()
948 966 complete = ip.Completer.complete
949 967
950 968 ip.user_ns["d"] = {"a\u05d0": None}
951 969
952 970 # query using escape
953 971 if sys.platform != "win32":
954 972 # Known failure on Windows
955 973 _, matches = complete(line_buffer="d['a\\u05d0")
956 974 nt.assert_in("u05d0", matches) # tokenized after \\
957 975
958 976 # query using character
959 977 _, matches = complete(line_buffer="d['a\u05d0")
960 978 nt.assert_in("a\u05d0", matches)
961 979
962 980 with greedy_completion():
963 981 # query using escape
964 982 _, matches = complete(line_buffer="d['a\\u05d0")
965 983 nt.assert_in("d['a\\u05d0']", matches) # tokenized after \\
966 984
967 985 # query using character
968 986 _, matches = complete(line_buffer="d['a\u05d0")
969 987 nt.assert_in("d['a\u05d0']", matches)
970 988
971 989 @dec.skip_without("numpy")
972 990 def test_struct_array_key_completion(self):
973 991 """Test dict key completion applies to numpy struct arrays"""
974 992 import numpy
975 993
976 994 ip = get_ipython()
977 995 complete = ip.Completer.complete
978 996 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
979 997 _, matches = complete(line_buffer="d['")
980 998 nt.assert_in("hello", matches)
981 999 nt.assert_in("world", matches)
982 1000 # complete on the numpy struct itself
983 1001 dt = numpy.dtype(
984 1002 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
985 1003 )
986 1004 x = numpy.zeros(2, dtype=dt)
987 1005 ip.user_ns["d"] = x[1]
988 1006 _, matches = complete(line_buffer="d['")
989 1007 nt.assert_in("my_head", matches)
990 1008 nt.assert_in("my_data", matches)
991 1009 # complete on a nested level
992 1010 with greedy_completion():
993 1011 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
994 1012 _, matches = complete(line_buffer="d[1]['my_head']['")
995 1013 nt.assert_true(any(["my_dt" in m for m in matches]))
996 1014 nt.assert_true(any(["my_df" in m for m in matches]))
997 1015
998 1016 @dec.skip_without("pandas")
999 1017 def test_dataframe_key_completion(self):
1000 1018 """Test dict key completion applies to pandas DataFrames"""
1001 1019 import pandas
1002 1020
1003 1021 ip = get_ipython()
1004 1022 complete = ip.Completer.complete
1005 1023 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1006 1024 _, matches = complete(line_buffer="d['")
1007 1025 nt.assert_in("hello", matches)
1008 1026 nt.assert_in("world", matches)
1009 1027
1010 1028 def test_dict_key_completion_invalids(self):
1011 1029 """Smoke test cases dict key completion can't handle"""
1012 1030 ip = get_ipython()
1013 1031 complete = ip.Completer.complete
1014 1032
1015 1033 ip.user_ns["no_getitem"] = None
1016 1034 ip.user_ns["no_keys"] = []
1017 1035 ip.user_ns["cant_call_keys"] = dict
1018 1036 ip.user_ns["empty"] = {}
1019 1037 ip.user_ns["d"] = {"abc": 5}
1020 1038
1021 1039 _, matches = complete(line_buffer="no_getitem['")
1022 1040 _, matches = complete(line_buffer="no_keys['")
1023 1041 _, matches = complete(line_buffer="cant_call_keys['")
1024 1042 _, matches = complete(line_buffer="empty['")
1025 1043 _, matches = complete(line_buffer="name_error['")
1026 1044 _, matches = complete(line_buffer="d['\\") # incomplete escape
1027 1045
1028 1046 def test_object_key_completion(self):
1029 1047 ip = get_ipython()
1030 1048 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1031 1049
1032 1050 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1033 1051 nt.assert_in("qwerty", matches)
1034 1052 nt.assert_in("qwick", matches)
1035 1053
1036 1054 def test_class_key_completion(self):
1037 1055 ip = get_ipython()
1038 1056 NamedInstanceClass("qwerty")
1039 1057 NamedInstanceClass("qwick")
1040 1058 ip.user_ns["named_instance_class"] = NamedInstanceClass
1041 1059
1042 1060 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1043 1061 nt.assert_in("qwerty", matches)
1044 1062 nt.assert_in("qwick", matches)
1045 1063
1046 1064 def test_tryimport(self):
1047 1065 """
1048 1066 Test that try_import doesn't crash on a trailing dot, and imports the module before the dot.
1049 1067 """
1050 1068 from IPython.core.completerlib import try_import
1051 1069
1052 1070 assert try_import("IPython.")
1053 1071
1054 1072 def test_aimport_module_completer(self):
1055 1073 ip = get_ipython()
1056 1074 _, matches = ip.complete("i", "%aimport i")
1057 1075 nt.assert_in("io", matches)
1058 1076 nt.assert_not_in("int", matches)
1059 1077
1060 1078 def test_nested_import_module_completer(self):
1061 1079 ip = get_ipython()
1062 1080 _, matches = ip.complete(None, "import IPython.co", 17)
1063 1081 nt.assert_in("IPython.core", matches)
1064 1082 nt.assert_not_in("import IPython.core", matches)
1065 1083 nt.assert_not_in("IPython.display", matches)
1066 1084
1067 1085 def test_import_module_completer(self):
1068 1086 ip = get_ipython()
1069 1087 _, matches = ip.complete("i", "import i")
1070 1088 nt.assert_in("io", matches)
1071 1089 nt.assert_not_in("int", matches)
1072 1090
1073 1091 def test_from_module_completer(self):
1074 1092 ip = get_ipython()
1075 1093 _, matches = ip.complete("B", "from io import B", 16)
1076 1094 nt.assert_in("BytesIO", matches)
1077 1095 nt.assert_not_in("BaseException", matches)
1078 1096
1079 1097 def test_snake_case_completion(self):
1080 1098 ip = get_ipython()
1081 1099 ip.Completer.use_jedi = False
1082 1100 ip.user_ns["some_three"] = 3
1083 1101 ip.user_ns["some_four"] = 4
1084 1102 _, matches = ip.complete("s_", "print(s_f")
1085 1103 nt.assert_in("some_three", matches)
1086 1104 nt.assert_in("some_four", matches)
1087 1105
1088 1106 def test_mix_terms(self):
1089 1107 ip = get_ipython()
1090 1108 from textwrap import dedent
1091 1109
1092 1110 ip.Completer.use_jedi = False
1093 1111 ip.ex(
1094 1112 dedent(
1095 1113 """
1096 1114 class Test:
1097 1115 def meth(self, meth_arg1):
1098 1116 print("meth")
1099 1117
1100 1118 def meth_1(self, meth1_arg1, meth1_arg2):
1101 1119 print("meth1")
1102 1120
1103 1121 def meth_2(self, meth2_arg1, meth2_arg2):
1104 1122 print("meth2")
1105 1123 test = Test()
1106 1124 """
1107 1125 )
1108 1126 )
1109 1127 _, matches = ip.complete(None, "test.meth(")
1110 1128 nt.assert_in("meth_arg1=", matches)
1111 1129 nt.assert_not_in("meth2_arg1=", matches)
@@ -1,45 +1,46 b''
1 1 include README.rst
2 2 include COPYING.rst
3 3 include LICENSE
4 4 include setupbase.py
5 5 include setupegg.py
6 6 include MANIFEST.in
7 7 include pytest.ini
8 include mypy.ini
8 9 include .mailmap
9 10
10 11 recursive-exclude tools *
11 12 exclude tools
12 13 exclude CONTRIBUTING.md
13 14 exclude .editorconfig
14 15
15 16 graft setupext
16 17
17 18 graft scripts
18 19
19 20 # Load main dir but exclude things we don't want in the distro
20 21 graft IPython
21 22
22 23 # Documentation
23 24 graft docs
24 25 exclude docs/\#*
25 26 exclude docs/man/*.1.gz
26 27
27 28 exclude .git-blame-ignore-revs
28 29
29 30 # Examples
30 31 graft examples
31 32
32 33 # docs subdirs we want to skip
33 34 prune docs/build
34 35 prune docs/gh-pages
35 36 prune docs/dist
36 37
37 38 # Patterns to exclude from any directory
38 39 global-exclude *~
39 40 global-exclude *.flc
40 41 global-exclude *.yml
41 42 global-exclude *.pyc
42 43 global-exclude *.pyo
43 44 global-exclude .dircopy.log
44 45 global-exclude .git
45 46 global-exclude .ipynb_checkpoints