##// END OF EJS Templates
Apply 2to3 `next` fix....
Bradley M. Froehle -
Show More
@@ -1,933 +1,933 b''
1 """Word completion for IPython.
1 """Word completion for IPython.
2
2
3 This module is a fork of the rlcompleter module in the Python standard
3 This module is a fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3, but we need a lot more
5 upstream and were accepted as of Python 2.3, but we need a lot more
6 functionality specific to IPython, so this module will continue to live as an
6 functionality specific to IPython, so this module will continue to live as an
7 IPython-specific utility.
7 IPython-specific utility.
8
8
9 Original rlcompleter documentation:
9 Original rlcompleter documentation:
10
10
11 This requires the latest extension to the readline module (the
11 This requires the latest extension to the readline module (the
12 completes keywords, built-ins and globals in __main__; when completing
12 completes keywords, built-ins and globals in __main__; when completing
13 NAME.NAME..., it evaluates (!) the expression up to the last dot and
13 NAME.NAME..., it evaluates (!) the expression up to the last dot and
14 completes its attributes.
14 completes its attributes.
15
15
16 It's very cool to do "import string" type "string.", hit the
16 It's very cool to do "import string" type "string.", hit the
17 completion key (twice), and see the list of names defined by the
17 completion key (twice), and see the list of names defined by the
18 string module!
18 string module!
19
19
20 Tip: to use the tab key as the completion key, call
20 Tip: to use the tab key as the completion key, call
21
21
22 readline.parse_and_bind("tab: complete")
22 readline.parse_and_bind("tab: complete")
23
23
24 Notes:
24 Notes:
25
25
26 - Exceptions raised by the completer function are *ignored* (and
26 - Exceptions raised by the completer function are *ignored* (and
27 generally cause the completion to fail). This is a feature -- since
27 generally cause the completion to fail). This is a feature -- since
28 readline sets the tty device in raw (or cbreak) mode, printing a
28 readline sets the tty device in raw (or cbreak) mode, printing a
29 traceback wouldn't work well without some complicated hoopla to save,
29 traceback wouldn't work well without some complicated hoopla to save,
30 reset and restore the tty state.
30 reset and restore the tty state.
31
31
32 - The evaluation of the NAME.NAME... form may cause arbitrary
32 - The evaluation of the NAME.NAME... form may cause arbitrary
33 application defined code to be executed if an object with a
33 application defined code to be executed if an object with a
34 __getattr__ hook is found. Since it is the responsibility of the
34 __getattr__ hook is found. Since it is the responsibility of the
35 application (or the user) to enable this feature, I consider this an
35 application (or the user) to enable this feature, I consider this an
36 acceptable risk. More complicated expressions (e.g. function calls or
36 acceptable risk. More complicated expressions (e.g. function calls or
37 indexing operations) are *not* evaluated.
37 indexing operations) are *not* evaluated.
38
38
39 - GNU readline is also used by the built-in functions input() and
39 - GNU readline is also used by the built-in functions input() and
40 raw_input(), and thus these also benefit/suffer from the completer
40 raw_input(), and thus these also benefit/suffer from the completer
41 features. Clearly an interactive application can benefit by
41 features. Clearly an interactive application can benefit by
42 specifying its own completer function and using raw_input() for all
42 specifying its own completer function and using raw_input() for all
43 its input.
43 its input.
44
44
45 - When the original stdin is not a tty device, GNU readline is never
45 - When the original stdin is not a tty device, GNU readline is never
46 used, and this module (and the readline module) are silently inactive.
46 used, and this module (and the readline module) are silently inactive.
47 """
47 """
48
48
49 #*****************************************************************************
49 #*****************************************************************************
50 #
50 #
51 # Since this file is essentially a minimally modified copy of the rlcompleter
51 # Since this file is essentially a minimally modified copy of the rlcompleter
52 # module which is part of the standard Python distribution, I assume that the
52 # module which is part of the standard Python distribution, I assume that the
53 # proper procedure is to maintain its copyright as belonging to the Python
53 # proper procedure is to maintain its copyright as belonging to the Python
54 # Software Foundation (in addition to my own, for all new code).
54 # Software Foundation (in addition to my own, for all new code).
55 #
55 #
56 # Copyright (C) 2008 IPython Development Team
56 # Copyright (C) 2008 IPython Development Team
57 # Copyright (C) 2001 Fernando Perez. <fperez@colorado.edu>
57 # Copyright (C) 2001 Fernando Perez. <fperez@colorado.edu>
58 # Copyright (C) 2001 Python Software Foundation, www.python.org
58 # Copyright (C) 2001 Python Software Foundation, www.python.org
59 #
59 #
60 # Distributed under the terms of the BSD License. The full license is in
60 # Distributed under the terms of the BSD License. The full license is in
61 # the file COPYING, distributed as part of this software.
61 # the file COPYING, distributed as part of this software.
62 #
62 #
63 #*****************************************************************************
63 #*****************************************************************************
64
64
65 #-----------------------------------------------------------------------------
65 #-----------------------------------------------------------------------------
66 # Imports
66 # Imports
67 #-----------------------------------------------------------------------------
67 #-----------------------------------------------------------------------------
68
68
69 import __builtin__
69 import __builtin__
70 import __main__
70 import __main__
71 import glob
71 import glob
72 import inspect
72 import inspect
73 import itertools
73 import itertools
74 import keyword
74 import keyword
75 import os
75 import os
76 import re
76 import re
77 import shlex
77 import shlex
78 import sys
78 import sys
79
79
80 from IPython.config.configurable import Configurable
80 from IPython.config.configurable import Configurable
81 from IPython.core.error import TryNext
81 from IPython.core.error import TryNext
82 from IPython.core.inputsplitter import ESC_MAGIC
82 from IPython.core.inputsplitter import ESC_MAGIC
83 from IPython.utils import generics
83 from IPython.utils import generics
84 from IPython.utils import io
84 from IPython.utils import io
85 from IPython.utils.dir2 import dir2
85 from IPython.utils.dir2 import dir2
86 from IPython.utils.process import arg_split
86 from IPython.utils.process import arg_split
87 from IPython.utils.traitlets import CBool, Enum
87 from IPython.utils.traitlets import CBool, Enum
88
88
89 #-----------------------------------------------------------------------------
89 #-----------------------------------------------------------------------------
90 # Globals
90 # Globals
91 #-----------------------------------------------------------------------------
91 #-----------------------------------------------------------------------------
92
92
93 # Public API
93 # Public API
94 __all__ = ['Completer','IPCompleter']
94 __all__ = ['Completer','IPCompleter']
95
95
96 if sys.platform == 'win32':
96 if sys.platform == 'win32':
97 PROTECTABLES = ' '
97 PROTECTABLES = ' '
98 else:
98 else:
99 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
99 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
100
100
101 #-----------------------------------------------------------------------------
101 #-----------------------------------------------------------------------------
102 # Main functions and classes
102 # Main functions and classes
103 #-----------------------------------------------------------------------------
103 #-----------------------------------------------------------------------------
104
104
105 def has_open_quotes(s):
105 def has_open_quotes(s):
106 """Return whether a string has open quotes.
106 """Return whether a string has open quotes.
107
107
108 This simply counts whether the number of quote characters of either type in
108 This simply counts whether the number of quote characters of either type in
109 the string is odd.
109 the string is odd.
110
110
111 Returns
111 Returns
112 -------
112 -------
113 If there is an open quote, the quote character is returned. Else, return
113 If there is an open quote, the quote character is returned. Else, return
114 False.
114 False.
115 """
115 """
116 # We check " first, then ', so complex cases with nested quotes will get
116 # We check " first, then ', so complex cases with nested quotes will get
117 # the " to take precedence.
117 # the " to take precedence.
118 if s.count('"') % 2:
118 if s.count('"') % 2:
119 return '"'
119 return '"'
120 elif s.count("'") % 2:
120 elif s.count("'") % 2:
121 return "'"
121 return "'"
122 else:
122 else:
123 return False
123 return False
124
124
125
125
126 def protect_filename(s):
126 def protect_filename(s):
127 """Escape a string to protect certain characters."""
127 """Escape a string to protect certain characters."""
128
128
129 return "".join([(ch in PROTECTABLES and '\\' + ch or ch)
129 return "".join([(ch in PROTECTABLES and '\\' + ch or ch)
130 for ch in s])
130 for ch in s])
131
131
132 def expand_user(path):
132 def expand_user(path):
133 """Expand '~'-style usernames in strings.
133 """Expand '~'-style usernames in strings.
134
134
135 This is similar to :func:`os.path.expanduser`, but it computes and returns
135 This is similar to :func:`os.path.expanduser`, but it computes and returns
136 extra information that will be useful if the input was being used in
136 extra information that will be useful if the input was being used in
137 computing completions, and you wish to return the completions with the
137 computing completions, and you wish to return the completions with the
138 original '~' instead of its expanded value.
138 original '~' instead of its expanded value.
139
139
140 Parameters
140 Parameters
141 ----------
141 ----------
142 path : str
142 path : str
143 String to be expanded. If no ~ is present, the output is the same as the
143 String to be expanded. If no ~ is present, the output is the same as the
144 input.
144 input.
145
145
146 Returns
146 Returns
147 -------
147 -------
148 newpath : str
148 newpath : str
149 Result of ~ expansion in the input path.
149 Result of ~ expansion in the input path.
150 tilde_expand : bool
150 tilde_expand : bool
151 Whether any expansion was performed or not.
151 Whether any expansion was performed or not.
152 tilde_val : str
152 tilde_val : str
153 The value that ~ was replaced with.
153 The value that ~ was replaced with.
154 """
154 """
155 # Default values
155 # Default values
156 tilde_expand = False
156 tilde_expand = False
157 tilde_val = ''
157 tilde_val = ''
158 newpath = path
158 newpath = path
159
159
160 if path.startswith('~'):
160 if path.startswith('~'):
161 tilde_expand = True
161 tilde_expand = True
162 rest = len(path)-1
162 rest = len(path)-1
163 newpath = os.path.expanduser(path)
163 newpath = os.path.expanduser(path)
164 if rest:
164 if rest:
165 tilde_val = newpath[:-rest]
165 tilde_val = newpath[:-rest]
166 else:
166 else:
167 tilde_val = newpath
167 tilde_val = newpath
168
168
169 return newpath, tilde_expand, tilde_val
169 return newpath, tilde_expand, tilde_val
170
170
171
171
172 def compress_user(path, tilde_expand, tilde_val):
172 def compress_user(path, tilde_expand, tilde_val):
173 """Does the opposite of expand_user, with its outputs.
173 """Does the opposite of expand_user, with its outputs.
174 """
174 """
175 if tilde_expand:
175 if tilde_expand:
176 return path.replace(tilde_val, '~')
176 return path.replace(tilde_val, '~')
177 else:
177 else:
178 return path
178 return path
179
179
180
180
181 class Bunch(object): pass
181 class Bunch(object): pass
182
182
183
183
184 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
184 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
185 GREEDY_DELIMS = ' \r\n'
185 GREEDY_DELIMS = ' \r\n'
186
186
187
187
188 class CompletionSplitter(object):
188 class CompletionSplitter(object):
189 """An object to split an input line in a manner similar to readline.
189 """An object to split an input line in a manner similar to readline.
190
190
191 By having our own implementation, we can expose readline-like completion in
191 By having our own implementation, we can expose readline-like completion in
192 a uniform manner to all frontends. This object only needs to be given the
192 a uniform manner to all frontends. This object only needs to be given the
193 line of text to be split and the cursor position on said line, and it
193 line of text to be split and the cursor position on said line, and it
194 returns the 'word' to be completed on at the cursor after splitting the
194 returns the 'word' to be completed on at the cursor after splitting the
195 entire line.
195 entire line.
196
196
197 What characters are used as splitting delimiters can be controlled by
197 What characters are used as splitting delimiters can be controlled by
198 setting the `delims` attribute (this is a property that internally
198 setting the `delims` attribute (this is a property that internally
199 automatically builds the necessary regular expression)"""
199 automatically builds the necessary regular expression)"""
200
200
201 # Private interface
201 # Private interface
202
202
203 # A string of delimiter characters. The default value makes sense for
203 # A string of delimiter characters. The default value makes sense for
204 # IPython's most typical usage patterns.
204 # IPython's most typical usage patterns.
205 _delims = DELIMS
205 _delims = DELIMS
206
206
207 # The expression (a normal string) to be compiled into a regular expression
207 # The expression (a normal string) to be compiled into a regular expression
208 # for actual splitting. We store it as an attribute mostly for ease of
208 # for actual splitting. We store it as an attribute mostly for ease of
209 # debugging, since this type of code can be so tricky to debug.
209 # debugging, since this type of code can be so tricky to debug.
210 _delim_expr = None
210 _delim_expr = None
211
211
212 # The regular expression that does the actual splitting
212 # The regular expression that does the actual splitting
213 _delim_re = None
213 _delim_re = None
214
214
215 def __init__(self, delims=None):
215 def __init__(self, delims=None):
216 delims = CompletionSplitter._delims if delims is None else delims
216 delims = CompletionSplitter._delims if delims is None else delims
217 self.delims = delims
217 self.delims = delims
218
218
219 @property
219 @property
220 def delims(self):
220 def delims(self):
221 """Return the string of delimiter characters."""
221 """Return the string of delimiter characters."""
222 return self._delims
222 return self._delims
223
223
224 @delims.setter
224 @delims.setter
225 def delims(self, delims):
225 def delims(self, delims):
226 """Set the delimiters for line splitting."""
226 """Set the delimiters for line splitting."""
227 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
227 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
228 self._delim_re = re.compile(expr)
228 self._delim_re = re.compile(expr)
229 self._delims = delims
229 self._delims = delims
230 self._delim_expr = expr
230 self._delim_expr = expr
231
231
232 def split_line(self, line, cursor_pos=None):
232 def split_line(self, line, cursor_pos=None):
233 """Split a line of text with a cursor at the given position.
233 """Split a line of text with a cursor at the given position.
234 """
234 """
235 l = line if cursor_pos is None else line[:cursor_pos]
235 l = line if cursor_pos is None else line[:cursor_pos]
236 return self._delim_re.split(l)[-1]
236 return self._delim_re.split(l)[-1]
237
237
238
238
239 class Completer(Configurable):
239 class Completer(Configurable):
240
240
241 greedy = CBool(False, config=True,
241 greedy = CBool(False, config=True,
242 help="""Activate greedy completion
242 help="""Activate greedy completion
243
243
244 This will enable completion on elements of lists, results of function calls, etc.,
244 This will enable completion on elements of lists, results of function calls, etc.,
245 but can be unsafe because the code is actually evaluated on TAB.
245 but can be unsafe because the code is actually evaluated on TAB.
246 """
246 """
247 )
247 )
248
248
249
249
250 def __init__(self, namespace=None, global_namespace=None, config=None, **kwargs):
250 def __init__(self, namespace=None, global_namespace=None, config=None, **kwargs):
251 """Create a new completer for the command line.
251 """Create a new completer for the command line.
252
252
253 Completer(namespace=ns,global_namespace=ns2) -> completer instance.
253 Completer(namespace=ns,global_namespace=ns2) -> completer instance.
254
254
255 If unspecified, the default namespace where completions are performed
255 If unspecified, the default namespace where completions are performed
256 is __main__ (technically, __main__.__dict__). Namespaces should be
256 is __main__ (technically, __main__.__dict__). Namespaces should be
257 given as dictionaries.
257 given as dictionaries.
258
258
259 An optional second namespace can be given. This allows the completer
259 An optional second namespace can be given. This allows the completer
260 to handle cases where both the local and global scopes need to be
260 to handle cases where both the local and global scopes need to be
261 distinguished.
261 distinguished.
262
262
263 Completer instances should be used as the completion mechanism of
263 Completer instances should be used as the completion mechanism of
264 readline via the set_completer() call:
264 readline via the set_completer() call:
265
265
266 readline.set_completer(Completer(my_namespace).complete)
266 readline.set_completer(Completer(my_namespace).complete)
267 """
267 """
268
268
269 # Don't bind to namespace quite yet, but flag whether the user wants a
269 # Don't bind to namespace quite yet, but flag whether the user wants a
270 # specific namespace or to use __main__.__dict__. This will allow us
270 # specific namespace or to use __main__.__dict__. This will allow us
271 # to bind to __main__.__dict__ at completion time, not now.
271 # to bind to __main__.__dict__ at completion time, not now.
272 if namespace is None:
272 if namespace is None:
273 self.use_main_ns = 1
273 self.use_main_ns = 1
274 else:
274 else:
275 self.use_main_ns = 0
275 self.use_main_ns = 0
276 self.namespace = namespace
276 self.namespace = namespace
277
277
278 # The global namespace, if given, can be bound directly
278 # The global namespace, if given, can be bound directly
279 if global_namespace is None:
279 if global_namespace is None:
280 self.global_namespace = {}
280 self.global_namespace = {}
281 else:
281 else:
282 self.global_namespace = global_namespace
282 self.global_namespace = global_namespace
283
283
284 super(Completer, self).__init__(config=config, **kwargs)
284 super(Completer, self).__init__(config=config, **kwargs)
285
285
286 def complete(self, text, state):
286 def complete(self, text, state):
287 """Return the next possible completion for 'text'.
287 """Return the next possible completion for 'text'.
288
288
289 This is called successively with state == 0, 1, 2, ... until it
289 This is called successively with state == 0, 1, 2, ... until it
290 returns None. The completion should begin with 'text'.
290 returns None. The completion should begin with 'text'.
291
291
292 """
292 """
293 if self.use_main_ns:
293 if self.use_main_ns:
294 self.namespace = __main__.__dict__
294 self.namespace = __main__.__dict__
295
295
296 if state == 0:
296 if state == 0:
297 if "." in text:
297 if "." in text:
298 self.matches = self.attr_matches(text)
298 self.matches = self.attr_matches(text)
299 else:
299 else:
300 self.matches = self.global_matches(text)
300 self.matches = self.global_matches(text)
301 try:
301 try:
302 return self.matches[state]
302 return self.matches[state]
303 except IndexError:
303 except IndexError:
304 return None
304 return None
305
305
306 def global_matches(self, text):
306 def global_matches(self, text):
307 """Compute matches when text is a simple name.
307 """Compute matches when text is a simple name.
308
308
309 Return a list of all keywords, built-in functions and names currently
309 Return a list of all keywords, built-in functions and names currently
310 defined in self.namespace or self.global_namespace that match.
310 defined in self.namespace or self.global_namespace that match.
311
311
312 """
312 """
313 #print 'Completer->global_matches, txt=%r' % text # dbg
313 #print 'Completer->global_matches, txt=%r' % text # dbg
314 matches = []
314 matches = []
315 match_append = matches.append
315 match_append = matches.append
316 n = len(text)
316 n = len(text)
317 for lst in [keyword.kwlist,
317 for lst in [keyword.kwlist,
318 __builtin__.__dict__.keys(),
318 __builtin__.__dict__.keys(),
319 self.namespace.keys(),
319 self.namespace.keys(),
320 self.global_namespace.keys()]:
320 self.global_namespace.keys()]:
321 for word in lst:
321 for word in lst:
322 if word[:n] == text and word != "__builtins__":
322 if word[:n] == text and word != "__builtins__":
323 match_append(word)
323 match_append(word)
324 return matches
324 return matches
325
325
326 def attr_matches(self, text):
326 def attr_matches(self, text):
327 """Compute matches when text contains a dot.
327 """Compute matches when text contains a dot.
328
328
329 Assuming the text is of the form NAME.NAME....[NAME], and is
329 Assuming the text is of the form NAME.NAME....[NAME], and is
330 evaluatable in self.namespace or self.global_namespace, it will be
330 evaluatable in self.namespace or self.global_namespace, it will be
331 evaluated and its attributes (as revealed by dir()) are used as
331 evaluated and its attributes (as revealed by dir()) are used as
332 possible completions. (For class instances, class members are are
332 possible completions. (For class instances, class members are are
333 also considered.)
333 also considered.)
334
334
335 WARNING: this can still invoke arbitrary C code, if an object
335 WARNING: this can still invoke arbitrary C code, if an object
336 with a __getattr__ hook is evaluated.
336 with a __getattr__ hook is evaluated.
337
337
338 """
338 """
339
339
340 #io.rprint('Completer->attr_matches, txt=%r' % text) # dbg
340 #io.rprint('Completer->attr_matches, txt=%r' % text) # dbg
341 # Another option, seems to work great. Catches things like ''.<tab>
341 # Another option, seems to work great. Catches things like ''.<tab>
342 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
342 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
343
343
344 if m:
344 if m:
345 expr, attr = m.group(1, 3)
345 expr, attr = m.group(1, 3)
346 elif self.greedy:
346 elif self.greedy:
347 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
347 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
348 if not m2:
348 if not m2:
349 return []
349 return []
350 expr, attr = m2.group(1,2)
350 expr, attr = m2.group(1,2)
351 else:
351 else:
352 return []
352 return []
353
353
354 try:
354 try:
355 obj = eval(expr, self.namespace)
355 obj = eval(expr, self.namespace)
356 except:
356 except:
357 try:
357 try:
358 obj = eval(expr, self.global_namespace)
358 obj = eval(expr, self.global_namespace)
359 except:
359 except:
360 return []
360 return []
361
361
362 if self.limit_to__all__ and hasattr(obj, '__all__'):
362 if self.limit_to__all__ and hasattr(obj, '__all__'):
363 words = get__all__entries(obj)
363 words = get__all__entries(obj)
364 else:
364 else:
365 words = dir2(obj)
365 words = dir2(obj)
366
366
367 try:
367 try:
368 words = generics.complete_object(obj, words)
368 words = generics.complete_object(obj, words)
369 except TryNext:
369 except TryNext:
370 pass
370 pass
371 except Exception:
371 except Exception:
372 # Silence errors from completion function
372 # Silence errors from completion function
373 #raise # dbg
373 #raise # dbg
374 pass
374 pass
375 # Build match list to return
375 # Build match list to return
376 n = len(attr)
376 n = len(attr)
377 res = ["%s.%s" % (expr, w) for w in words if w[:n] == attr ]
377 res = ["%s.%s" % (expr, w) for w in words if w[:n] == attr ]
378 return res
378 return res
379
379
380
380
381 def get__all__entries(obj):
381 def get__all__entries(obj):
382 """returns the strings in the __all__ attribute"""
382 """returns the strings in the __all__ attribute"""
383 try:
383 try:
384 words = getattr(obj, '__all__')
384 words = getattr(obj, '__all__')
385 except:
385 except:
386 return []
386 return []
387
387
388 return [w for w in words if isinstance(w, basestring)]
388 return [w for w in words if isinstance(w, basestring)]
389
389
390
390
391 class IPCompleter(Completer):
391 class IPCompleter(Completer):
392 """Extension of the completer class with IPython-specific features"""
392 """Extension of the completer class with IPython-specific features"""
393
393
394 def _greedy_changed(self, name, old, new):
394 def _greedy_changed(self, name, old, new):
395 """update the splitter and readline delims when greedy is changed"""
395 """update the splitter and readline delims when greedy is changed"""
396 if new:
396 if new:
397 self.splitter.delims = GREEDY_DELIMS
397 self.splitter.delims = GREEDY_DELIMS
398 else:
398 else:
399 self.splitter.delims = DELIMS
399 self.splitter.delims = DELIMS
400
400
401 if self.readline:
401 if self.readline:
402 self.readline.set_completer_delims(self.splitter.delims)
402 self.readline.set_completer_delims(self.splitter.delims)
403
403
404 merge_completions = CBool(True, config=True,
404 merge_completions = CBool(True, config=True,
405 help="""Whether to merge completion results into a single list
405 help="""Whether to merge completion results into a single list
406
406
407 If False, only the completion results from the first non-empty
407 If False, only the completion results from the first non-empty
408 completer will be returned.
408 completer will be returned.
409 """
409 """
410 )
410 )
411 omit__names = Enum((0,1,2), default_value=2, config=True,
411 omit__names = Enum((0,1,2), default_value=2, config=True,
412 help="""Instruct the completer to omit private method names
412 help="""Instruct the completer to omit private method names
413
413
414 Specifically, when completing on ``object.<tab>``.
414 Specifically, when completing on ``object.<tab>``.
415
415
416 When 2 [default]: all names that start with '_' will be excluded.
416 When 2 [default]: all names that start with '_' will be excluded.
417
417
418 When 1: all 'magic' names (``__foo__``) will be excluded.
418 When 1: all 'magic' names (``__foo__``) will be excluded.
419
419
420 When 0: nothing will be excluded.
420 When 0: nothing will be excluded.
421 """
421 """
422 )
422 )
423 limit_to__all__ = CBool(default_value=False, config=True,
423 limit_to__all__ = CBool(default_value=False, config=True,
424 help="""Instruct the completer to use __all__ for the completion
424 help="""Instruct the completer to use __all__ for the completion
425
425
426 Specifically, when completing on ``object.<tab>``.
426 Specifically, when completing on ``object.<tab>``.
427
427
428 When True: only those names in obj.__all__ will be included.
428 When True: only those names in obj.__all__ will be included.
429
429
430 When False [default]: the __all__ attribute is ignored
430 When False [default]: the __all__ attribute is ignored
431 """
431 """
432 )
432 )
433
433
434 def __init__(self, shell=None, namespace=None, global_namespace=None,
434 def __init__(self, shell=None, namespace=None, global_namespace=None,
435 alias_table=None, use_readline=True,
435 alias_table=None, use_readline=True,
436 config=None, **kwargs):
436 config=None, **kwargs):
437 """IPCompleter() -> completer
437 """IPCompleter() -> completer
438
438
439 Return a completer object suitable for use by the readline library
439 Return a completer object suitable for use by the readline library
440 via readline.set_completer().
440 via readline.set_completer().
441
441
442 Inputs:
442 Inputs:
443
443
444 - shell: a pointer to the ipython shell itself. This is needed
444 - shell: a pointer to the ipython shell itself. This is needed
445 because this completer knows about magic functions, and those can
445 because this completer knows about magic functions, and those can
446 only be accessed via the ipython instance.
446 only be accessed via the ipython instance.
447
447
448 - namespace: an optional dict where completions are performed.
448 - namespace: an optional dict where completions are performed.
449
449
450 - global_namespace: secondary optional dict for completions, to
450 - global_namespace: secondary optional dict for completions, to
451 handle cases (such as IPython embedded inside functions) where
451 handle cases (such as IPython embedded inside functions) where
452 both Python scopes are visible.
452 both Python scopes are visible.
453
453
454 - If alias_table is supplied, it should be a dictionary of aliases
454 - If alias_table is supplied, it should be a dictionary of aliases
455 to complete.
455 to complete.
456
456
457 use_readline : bool, optional
457 use_readline : bool, optional
458 If true, use the readline library. This completer can still function
458 If true, use the readline library. This completer can still function
459 without readline, though in that case callers must provide some extra
459 without readline, though in that case callers must provide some extra
460 information on each call about the current line."""
460 information on each call about the current line."""
461
461
462 self.magic_escape = ESC_MAGIC
462 self.magic_escape = ESC_MAGIC
463 self.splitter = CompletionSplitter()
463 self.splitter = CompletionSplitter()
464
464
465 # Readline configuration, only used by the rlcompleter method.
465 # Readline configuration, only used by the rlcompleter method.
466 if use_readline:
466 if use_readline:
467 # We store the right version of readline so that later code
467 # We store the right version of readline so that later code
468 import IPython.utils.rlineimpl as readline
468 import IPython.utils.rlineimpl as readline
469 self.readline = readline
469 self.readline = readline
470 else:
470 else:
471 self.readline = None
471 self.readline = None
472
472
473 # _greedy_changed() depends on splitter and readline being defined:
473 # _greedy_changed() depends on splitter and readline being defined:
474 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
474 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
475 config=config, **kwargs)
475 config=config, **kwargs)
476
476
477 # List where completion matches will be stored
477 # List where completion matches will be stored
478 self.matches = []
478 self.matches = []
479 self.shell = shell
479 self.shell = shell
480 if alias_table is None:
480 if alias_table is None:
481 alias_table = {}
481 alias_table = {}
482 self.alias_table = alias_table
482 self.alias_table = alias_table
483 # Regexp to split filenames with spaces in them
483 # Regexp to split filenames with spaces in them
484 self.space_name_re = re.compile(r'([^\\] )')
484 self.space_name_re = re.compile(r'([^\\] )')
485 # Hold a local ref. to glob.glob for speed
485 # Hold a local ref. to glob.glob for speed
486 self.glob = glob.glob
486 self.glob = glob.glob
487
487
488 # Determine if we are running on 'dumb' terminals, like (X)Emacs
488 # Determine if we are running on 'dumb' terminals, like (X)Emacs
489 # buffers, to avoid completion problems.
489 # buffers, to avoid completion problems.
490 term = os.environ.get('TERM','xterm')
490 term = os.environ.get('TERM','xterm')
491 self.dumb_terminal = term in ['dumb','emacs']
491 self.dumb_terminal = term in ['dumb','emacs']
492
492
493 # Special handling of backslashes needed in win32 platforms
493 # Special handling of backslashes needed in win32 platforms
494 if sys.platform == "win32":
494 if sys.platform == "win32":
495 self.clean_glob = self._clean_glob_win32
495 self.clean_glob = self._clean_glob_win32
496 else:
496 else:
497 self.clean_glob = self._clean_glob
497 self.clean_glob = self._clean_glob
498
498
499 # All active matcher routines for completion
499 # All active matcher routines for completion
500 self.matchers = [self.python_matches,
500 self.matchers = [self.python_matches,
501 self.file_matches,
501 self.file_matches,
502 self.magic_matches,
502 self.magic_matches,
503 self.alias_matches,
503 self.alias_matches,
504 self.python_func_kw_matches,
504 self.python_func_kw_matches,
505 ]
505 ]
506
506
507 def all_completions(self, text):
507 def all_completions(self, text):
508 """
508 """
509 Wrapper around the complete method for the benefit of emacs
509 Wrapper around the complete method for the benefit of emacs
510 and pydb.
510 and pydb.
511 """
511 """
512 return self.complete(text)[1]
512 return self.complete(text)[1]
513
513
514 def _clean_glob(self,text):
514 def _clean_glob(self,text):
515 return self.glob("%s*" % text)
515 return self.glob("%s*" % text)
516
516
517 def _clean_glob_win32(self,text):
517 def _clean_glob_win32(self,text):
518 return [f.replace("\\","/")
518 return [f.replace("\\","/")
519 for f in self.glob("%s*" % text)]
519 for f in self.glob("%s*" % text)]
520
520
521 def file_matches(self, text):
521 def file_matches(self, text):
522 """Match filenames, expanding ~USER type strings.
522 """Match filenames, expanding ~USER type strings.
523
523
524 Most of the seemingly convoluted logic in this completer is an
524 Most of the seemingly convoluted logic in this completer is an
525 attempt to handle filenames with spaces in them. And yet it's not
525 attempt to handle filenames with spaces in them. And yet it's not
526 quite perfect, because Python's readline doesn't expose all of the
526 quite perfect, because Python's readline doesn't expose all of the
527 GNU readline details needed for this to be done correctly.
527 GNU readline details needed for this to be done correctly.
528
528
529 For a filename with a space in it, the printed completions will be
529 For a filename with a space in it, the printed completions will be
530 only the parts after what's already been typed (instead of the
530 only the parts after what's already been typed (instead of the
531 full completions, as is normally done). I don't think with the
531 full completions, as is normally done). I don't think with the
532 current (as of Python 2.3) Python readline it's possible to do
532 current (as of Python 2.3) Python readline it's possible to do
533 better."""
533 better."""
534
534
535 #io.rprint('Completer->file_matches: <%r>' % text) # dbg
535 #io.rprint('Completer->file_matches: <%r>' % text) # dbg
536
536
537 # chars that require escaping with backslash - i.e. chars
537 # chars that require escaping with backslash - i.e. chars
538 # that readline treats incorrectly as delimiters, but we
538 # that readline treats incorrectly as delimiters, but we
539 # don't want to treat as delimiters in filename matching
539 # don't want to treat as delimiters in filename matching
540 # when escaped with backslash
540 # when escaped with backslash
541 if text.startswith('!'):
541 if text.startswith('!'):
542 text = text[1:]
542 text = text[1:]
543 text_prefix = '!'
543 text_prefix = '!'
544 else:
544 else:
545 text_prefix = ''
545 text_prefix = ''
546
546
547 text_until_cursor = self.text_until_cursor
547 text_until_cursor = self.text_until_cursor
548 # track strings with open quotes
548 # track strings with open quotes
549 open_quotes = has_open_quotes(text_until_cursor)
549 open_quotes = has_open_quotes(text_until_cursor)
550
550
551 if '(' in text_until_cursor or '[' in text_until_cursor:
551 if '(' in text_until_cursor or '[' in text_until_cursor:
552 lsplit = text
552 lsplit = text
553 else:
553 else:
554 try:
554 try:
555 # arg_split ~ shlex.split, but with unicode bugs fixed by us
555 # arg_split ~ shlex.split, but with unicode bugs fixed by us
556 lsplit = arg_split(text_until_cursor)[-1]
556 lsplit = arg_split(text_until_cursor)[-1]
557 except ValueError:
557 except ValueError:
558 # typically an unmatched ", or backslash without escaped char.
558 # typically an unmatched ", or backslash without escaped char.
559 if open_quotes:
559 if open_quotes:
560 lsplit = text_until_cursor.split(open_quotes)[-1]
560 lsplit = text_until_cursor.split(open_quotes)[-1]
561 else:
561 else:
562 return []
562 return []
563 except IndexError:
563 except IndexError:
564 # tab pressed on empty line
564 # tab pressed on empty line
565 lsplit = ""
565 lsplit = ""
566
566
567 if not open_quotes and lsplit != protect_filename(lsplit):
567 if not open_quotes and lsplit != protect_filename(lsplit):
568 # if protectables are found, do matching on the whole escaped name
568 # if protectables are found, do matching on the whole escaped name
569 has_protectables = True
569 has_protectables = True
570 text0,text = text,lsplit
570 text0,text = text,lsplit
571 else:
571 else:
572 has_protectables = False
572 has_protectables = False
573 text = os.path.expanduser(text)
573 text = os.path.expanduser(text)
574
574
575 if text == "":
575 if text == "":
576 return [text_prefix + protect_filename(f) for f in self.glob("*")]
576 return [text_prefix + protect_filename(f) for f in self.glob("*")]
577
577
578 # Compute the matches from the filesystem
578 # Compute the matches from the filesystem
579 m0 = self.clean_glob(text.replace('\\',''))
579 m0 = self.clean_glob(text.replace('\\',''))
580
580
581 if has_protectables:
581 if has_protectables:
582 # If we had protectables, we need to revert our changes to the
582 # If we had protectables, we need to revert our changes to the
583 # beginning of filename so that we don't double-write the part
583 # beginning of filename so that we don't double-write the part
584 # of the filename we have so far
584 # of the filename we have so far
585 len_lsplit = len(lsplit)
585 len_lsplit = len(lsplit)
586 matches = [text_prefix + text0 +
586 matches = [text_prefix + text0 +
587 protect_filename(f[len_lsplit:]) for f in m0]
587 protect_filename(f[len_lsplit:]) for f in m0]
588 else:
588 else:
589 if open_quotes:
589 if open_quotes:
590 # if we have a string with an open quote, we don't need to
590 # if we have a string with an open quote, we don't need to
591 # protect the names at all (and we _shouldn't_, as it
591 # protect the names at all (and we _shouldn't_, as it
592 # would cause bugs when the filesystem call is made).
592 # would cause bugs when the filesystem call is made).
593 matches = m0
593 matches = m0
594 else:
594 else:
595 matches = [text_prefix +
595 matches = [text_prefix +
596 protect_filename(f) for f in m0]
596 protect_filename(f) for f in m0]
597
597
598 #io.rprint('mm', matches) # dbg
598 #io.rprint('mm', matches) # dbg
599
599
600 # Mark directories in input list by appending '/' to their names.
600 # Mark directories in input list by appending '/' to their names.
601 matches = [x+'/' if os.path.isdir(x) else x for x in matches]
601 matches = [x+'/' if os.path.isdir(x) else x for x in matches]
602 return matches
602 return matches
603
603
604 def magic_matches(self, text):
604 def magic_matches(self, text):
605 """Match magics"""
605 """Match magics"""
606 #print 'Completer->magic_matches:',text,'lb',self.text_until_cursor # dbg
606 #print 'Completer->magic_matches:',text,'lb',self.text_until_cursor # dbg
607 # Get all shell magics now rather than statically, so magics loaded at
607 # Get all shell magics now rather than statically, so magics loaded at
608 # runtime show up too.
608 # runtime show up too.
609 lsm = self.shell.magics_manager.lsmagic()
609 lsm = self.shell.magics_manager.lsmagic()
610 line_magics = lsm['line']
610 line_magics = lsm['line']
611 cell_magics = lsm['cell']
611 cell_magics = lsm['cell']
612 pre = self.magic_escape
612 pre = self.magic_escape
613 pre2 = pre+pre
613 pre2 = pre+pre
614
614
615 # Completion logic:
615 # Completion logic:
616 # - user gives %%: only do cell magics
616 # - user gives %%: only do cell magics
617 # - user gives %: do both line and cell magics
617 # - user gives %: do both line and cell magics
618 # - no prefix: do both
618 # - no prefix: do both
619 # In other words, line magics are skipped if the user gives %% explicitly
619 # In other words, line magics are skipped if the user gives %% explicitly
620 bare_text = text.lstrip(pre)
620 bare_text = text.lstrip(pre)
621 comp = [ pre2+m for m in cell_magics if m.startswith(bare_text)]
621 comp = [ pre2+m for m in cell_magics if m.startswith(bare_text)]
622 if not text.startswith(pre2):
622 if not text.startswith(pre2):
623 comp += [ pre+m for m in line_magics if m.startswith(bare_text)]
623 comp += [ pre+m for m in line_magics if m.startswith(bare_text)]
624 return comp
624 return comp
625
625
626 def alias_matches(self, text):
626 def alias_matches(self, text):
627 """Match internal system aliases"""
627 """Match internal system aliases"""
628 #print 'Completer->alias_matches:',text,'lb',self.text_until_cursor # dbg
628 #print 'Completer->alias_matches:',text,'lb',self.text_until_cursor # dbg
629
629
630 # if we are not in the first 'item', alias matching
630 # if we are not in the first 'item', alias matching
631 # doesn't make sense - unless we are starting with 'sudo' command.
631 # doesn't make sense - unless we are starting with 'sudo' command.
632 main_text = self.text_until_cursor.lstrip()
632 main_text = self.text_until_cursor.lstrip()
633 if ' ' in main_text and not main_text.startswith('sudo'):
633 if ' ' in main_text and not main_text.startswith('sudo'):
634 return []
634 return []
635 text = os.path.expanduser(text)
635 text = os.path.expanduser(text)
636 aliases = self.alias_table.keys()
636 aliases = self.alias_table.keys()
637 if text == '':
637 if text == '':
638 return aliases
638 return aliases
639 else:
639 else:
640 return [a for a in aliases if a.startswith(text)]
640 return [a for a in aliases if a.startswith(text)]
641
641
642 def python_matches(self,text):
642 def python_matches(self,text):
643 """Match attributes or global python names"""
643 """Match attributes or global python names"""
644
644
645 #io.rprint('Completer->python_matches, txt=%r' % text) # dbg
645 #io.rprint('Completer->python_matches, txt=%r' % text) # dbg
646 if "." in text:
646 if "." in text:
647 try:
647 try:
648 matches = self.attr_matches(text)
648 matches = self.attr_matches(text)
649 if text.endswith('.') and self.omit__names:
649 if text.endswith('.') and self.omit__names:
650 if self.omit__names == 1:
650 if self.omit__names == 1:
651 # true if txt is _not_ a __ name, false otherwise:
651 # true if txt is _not_ a __ name, false otherwise:
652 no__name = (lambda txt:
652 no__name = (lambda txt:
653 re.match(r'.*\.__.*?__',txt) is None)
653 re.match(r'.*\.__.*?__',txt) is None)
654 else:
654 else:
655 # true if txt is _not_ a _ name, false otherwise:
655 # true if txt is _not_ a _ name, false otherwise:
656 no__name = (lambda txt:
656 no__name = (lambda txt:
657 re.match(r'.*\._.*?',txt) is None)
657 re.match(r'.*\._.*?',txt) is None)
658 matches = filter(no__name, matches)
658 matches = filter(no__name, matches)
659 except NameError:
659 except NameError:
660 # catches <undefined attributes>.<tab>
660 # catches <undefined attributes>.<tab>
661 matches = []
661 matches = []
662 else:
662 else:
663 matches = self.global_matches(text)
663 matches = self.global_matches(text)
664
664
665 return matches
665 return matches
666
666
667 def _default_arguments(self, obj):
667 def _default_arguments(self, obj):
668 """Return the list of default arguments of obj if it is callable,
668 """Return the list of default arguments of obj if it is callable,
669 or empty list otherwise."""
669 or empty list otherwise."""
670
670
671 if not (inspect.isfunction(obj) or inspect.ismethod(obj)):
671 if not (inspect.isfunction(obj) or inspect.ismethod(obj)):
672 # for classes, check for __init__,__new__
672 # for classes, check for __init__,__new__
673 if inspect.isclass(obj):
673 if inspect.isclass(obj):
674 obj = (getattr(obj,'__init__',None) or
674 obj = (getattr(obj,'__init__',None) or
675 getattr(obj,'__new__',None))
675 getattr(obj,'__new__',None))
676 # for all others, check if they are __call__able
676 # for all others, check if they are __call__able
677 elif hasattr(obj, '__call__'):
677 elif hasattr(obj, '__call__'):
678 obj = obj.__call__
678 obj = obj.__call__
679 # XXX: is there a way to handle the builtins ?
679 # XXX: is there a way to handle the builtins ?
680 try:
680 try:
681 args,_,_1,defaults = inspect.getargspec(obj)
681 args,_,_1,defaults = inspect.getargspec(obj)
682 if defaults:
682 if defaults:
683 return args[-len(defaults):]
683 return args[-len(defaults):]
684 except TypeError: pass
684 except TypeError: pass
685 return []
685 return []
686
686
687 def python_func_kw_matches(self,text):
687 def python_func_kw_matches(self,text):
688 """Match named parameters (kwargs) of the last open function"""
688 """Match named parameters (kwargs) of the last open function"""
689
689
690 if "." in text: # a parameter cannot be dotted
690 if "." in text: # a parameter cannot be dotted
691 return []
691 return []
692 try: regexp = self.__funcParamsRegex
692 try: regexp = self.__funcParamsRegex
693 except AttributeError:
693 except AttributeError:
694 regexp = self.__funcParamsRegex = re.compile(r'''
694 regexp = self.__funcParamsRegex = re.compile(r'''
695 '.*?(?<!\\)' | # single quoted strings or
695 '.*?(?<!\\)' | # single quoted strings or
696 ".*?(?<!\\)" | # double quoted strings or
696 ".*?(?<!\\)" | # double quoted strings or
697 \w+ | # identifier
697 \w+ | # identifier
698 \S # other characters
698 \S # other characters
699 ''', re.VERBOSE | re.DOTALL)
699 ''', re.VERBOSE | re.DOTALL)
700 # 1. find the nearest identifier that comes before an unclosed
700 # 1. find the nearest identifier that comes before an unclosed
701 # parenthesis before the cursor
701 # parenthesis before the cursor
702 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
702 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
703 tokens = regexp.findall(self.text_until_cursor)
703 tokens = regexp.findall(self.text_until_cursor)
704 tokens.reverse()
704 tokens.reverse()
705 iterTokens = iter(tokens); openPar = 0
705 iterTokens = iter(tokens); openPar = 0
706 for token in iterTokens:
706 for token in iterTokens:
707 if token == ')':
707 if token == ')':
708 openPar -= 1
708 openPar -= 1
709 elif token == '(':
709 elif token == '(':
710 openPar += 1
710 openPar += 1
711 if openPar > 0:
711 if openPar > 0:
712 # found the last unclosed parenthesis
712 # found the last unclosed parenthesis
713 break
713 break
714 else:
714 else:
715 return []
715 return []
716 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
716 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
717 ids = []
717 ids = []
718 isId = re.compile(r'\w+$').match
718 isId = re.compile(r'\w+$').match
719 while True:
719 while True:
720 try:
720 try:
721 ids.append(iterTokens.next())
721 ids.append(next(iterTokens))
722 if not isId(ids[-1]):
722 if not isId(ids[-1]):
723 ids.pop(); break
723 ids.pop(); break
724 if not iterTokens.next() == '.':
724 if not next(iterTokens) == '.':
725 break
725 break
726 except StopIteration:
726 except StopIteration:
727 break
727 break
728 # lookup the candidate callable matches either using global_matches
728 # lookup the candidate callable matches either using global_matches
729 # or attr_matches for dotted names
729 # or attr_matches for dotted names
730 if len(ids) == 1:
730 if len(ids) == 1:
731 callableMatches = self.global_matches(ids[0])
731 callableMatches = self.global_matches(ids[0])
732 else:
732 else:
733 callableMatches = self.attr_matches('.'.join(ids[::-1]))
733 callableMatches = self.attr_matches('.'.join(ids[::-1]))
734 argMatches = []
734 argMatches = []
735 for callableMatch in callableMatches:
735 for callableMatch in callableMatches:
736 try:
736 try:
737 namedArgs = self._default_arguments(eval(callableMatch,
737 namedArgs = self._default_arguments(eval(callableMatch,
738 self.namespace))
738 self.namespace))
739 except:
739 except:
740 continue
740 continue
741 for namedArg in namedArgs:
741 for namedArg in namedArgs:
742 if namedArg.startswith(text):
742 if namedArg.startswith(text):
743 argMatches.append("%s=" %namedArg)
743 argMatches.append("%s=" %namedArg)
744 return argMatches
744 return argMatches
745
745
746 def dispatch_custom_completer(self, text):
746 def dispatch_custom_completer(self, text):
747 #io.rprint("Custom! '%s' %s" % (text, self.custom_completers)) # dbg
747 #io.rprint("Custom! '%s' %s" % (text, self.custom_completers)) # dbg
748 line = self.line_buffer
748 line = self.line_buffer
749 if not line.strip():
749 if not line.strip():
750 return None
750 return None
751
751
752 # Create a little structure to pass all the relevant information about
752 # Create a little structure to pass all the relevant information about
753 # the current completion to any custom completer.
753 # the current completion to any custom completer.
754 event = Bunch()
754 event = Bunch()
755 event.line = line
755 event.line = line
756 event.symbol = text
756 event.symbol = text
757 cmd = line.split(None,1)[0]
757 cmd = line.split(None,1)[0]
758 event.command = cmd
758 event.command = cmd
759 event.text_until_cursor = self.text_until_cursor
759 event.text_until_cursor = self.text_until_cursor
760
760
761 #print "\ncustom:{%s]\n" % event # dbg
761 #print "\ncustom:{%s]\n" % event # dbg
762
762
763 # for foo etc, try also to find completer for %foo
763 # for foo etc, try also to find completer for %foo
764 if not cmd.startswith(self.magic_escape):
764 if not cmd.startswith(self.magic_escape):
765 try_magic = self.custom_completers.s_matches(
765 try_magic = self.custom_completers.s_matches(
766 self.magic_escape + cmd)
766 self.magic_escape + cmd)
767 else:
767 else:
768 try_magic = []
768 try_magic = []
769
769
770 for c in itertools.chain(self.custom_completers.s_matches(cmd),
770 for c in itertools.chain(self.custom_completers.s_matches(cmd),
771 try_magic,
771 try_magic,
772 self.custom_completers.flat_matches(self.text_until_cursor)):
772 self.custom_completers.flat_matches(self.text_until_cursor)):
773 #print "try",c # dbg
773 #print "try",c # dbg
774 try:
774 try:
775 res = c(event)
775 res = c(event)
776 if res:
776 if res:
777 # first, try case sensitive match
777 # first, try case sensitive match
778 withcase = [r for r in res if r.startswith(text)]
778 withcase = [r for r in res if r.startswith(text)]
779 if withcase:
779 if withcase:
780 return withcase
780 return withcase
781 # if none, then case insensitive ones are ok too
781 # if none, then case insensitive ones are ok too
782 text_low = text.lower()
782 text_low = text.lower()
783 return [r for r in res if r.lower().startswith(text_low)]
783 return [r for r in res if r.lower().startswith(text_low)]
784 except TryNext:
784 except TryNext:
785 pass
785 pass
786
786
787 return None
787 return None
788
788
789 def complete(self, text=None, line_buffer=None, cursor_pos=None):
789 def complete(self, text=None, line_buffer=None, cursor_pos=None):
790 """Find completions for the given text and line context.
790 """Find completions for the given text and line context.
791
791
792 This is called successively with state == 0, 1, 2, ... until it
792 This is called successively with state == 0, 1, 2, ... until it
793 returns None. The completion should begin with 'text'.
793 returns None. The completion should begin with 'text'.
794
794
795 Note that both the text and the line_buffer are optional, but at least
795 Note that both the text and the line_buffer are optional, but at least
796 one of them must be given.
796 one of them must be given.
797
797
798 Parameters
798 Parameters
799 ----------
799 ----------
800 text : string, optional
800 text : string, optional
801 Text to perform the completion on. If not given, the line buffer
801 Text to perform the completion on. If not given, the line buffer
802 is split using the instance's CompletionSplitter object.
802 is split using the instance's CompletionSplitter object.
803
803
804 line_buffer : string, optional
804 line_buffer : string, optional
805 If not given, the completer attempts to obtain the current line
805 If not given, the completer attempts to obtain the current line
806 buffer via readline. This keyword allows clients which are
806 buffer via readline. This keyword allows clients which are
807 requesting for text completions in non-readline contexts to inform
807 requesting for text completions in non-readline contexts to inform
808 the completer of the entire text.
808 the completer of the entire text.
809
809
810 cursor_pos : int, optional
810 cursor_pos : int, optional
811 Index of the cursor in the full line buffer. Should be provided by
811 Index of the cursor in the full line buffer. Should be provided by
812 remote frontends where kernel has no access to frontend state.
812 remote frontends where kernel has no access to frontend state.
813
813
814 Returns
814 Returns
815 -------
815 -------
816 text : str
816 text : str
817 Text that was actually used in the completion.
817 Text that was actually used in the completion.
818
818
819 matches : list
819 matches : list
820 A list of completion matches.
820 A list of completion matches.
821 """
821 """
822 #io.rprint('\nCOMP1 %r %r %r' % (text, line_buffer, cursor_pos)) # dbg
822 #io.rprint('\nCOMP1 %r %r %r' % (text, line_buffer, cursor_pos)) # dbg
823
823
824 # if the cursor position isn't given, the only sane assumption we can
824 # if the cursor position isn't given, the only sane assumption we can
825 # make is that it's at the end of the line (the common case)
825 # make is that it's at the end of the line (the common case)
826 if cursor_pos is None:
826 if cursor_pos is None:
827 cursor_pos = len(line_buffer) if text is None else len(text)
827 cursor_pos = len(line_buffer) if text is None else len(text)
828
828
829 # if text is either None or an empty string, rely on the line buffer
829 # if text is either None or an empty string, rely on the line buffer
830 if not text:
830 if not text:
831 text = self.splitter.split_line(line_buffer, cursor_pos)
831 text = self.splitter.split_line(line_buffer, cursor_pos)
832
832
833 # If no line buffer is given, assume the input text is all there was
833 # If no line buffer is given, assume the input text is all there was
834 if line_buffer is None:
834 if line_buffer is None:
835 line_buffer = text
835 line_buffer = text
836
836
837 self.line_buffer = line_buffer
837 self.line_buffer = line_buffer
838 self.text_until_cursor = self.line_buffer[:cursor_pos]
838 self.text_until_cursor = self.line_buffer[:cursor_pos]
839 #io.rprint('COMP2 %r %r %r' % (text, line_buffer, cursor_pos)) # dbg
839 #io.rprint('COMP2 %r %r %r' % (text, line_buffer, cursor_pos)) # dbg
840
840
841 # Start with a clean slate of completions
841 # Start with a clean slate of completions
842 self.matches[:] = []
842 self.matches[:] = []
843 custom_res = self.dispatch_custom_completer(text)
843 custom_res = self.dispatch_custom_completer(text)
844 if custom_res is not None:
844 if custom_res is not None:
845 # did custom completers produce something?
845 # did custom completers produce something?
846 self.matches = custom_res
846 self.matches = custom_res
847 else:
847 else:
848 # Extend the list of completions with the results of each
848 # Extend the list of completions with the results of each
849 # matcher, so we return results to the user from all
849 # matcher, so we return results to the user from all
850 # namespaces.
850 # namespaces.
851 if self.merge_completions:
851 if self.merge_completions:
852 self.matches = []
852 self.matches = []
853 for matcher in self.matchers:
853 for matcher in self.matchers:
854 try:
854 try:
855 self.matches.extend(matcher(text))
855 self.matches.extend(matcher(text))
856 except:
856 except:
857 # Show the ugly traceback if the matcher causes an
857 # Show the ugly traceback if the matcher causes an
858 # exception, but do NOT crash the kernel!
858 # exception, but do NOT crash the kernel!
859 sys.excepthook(*sys.exc_info())
859 sys.excepthook(*sys.exc_info())
860 else:
860 else:
861 for matcher in self.matchers:
861 for matcher in self.matchers:
862 self.matches = matcher(text)
862 self.matches = matcher(text)
863 if self.matches:
863 if self.matches:
864 break
864 break
865 # FIXME: we should extend our api to return a dict with completions for
865 # FIXME: we should extend our api to return a dict with completions for
866 # different types of objects. The rlcomplete() method could then
866 # different types of objects. The rlcomplete() method could then
867 # simply collapse the dict into a list for readline, but we'd have
867 # simply collapse the dict into a list for readline, but we'd have
868 # richer completion semantics in other environments.
868 # richer completion semantics in other environments.
869 self.matches = sorted(set(self.matches))
869 self.matches = sorted(set(self.matches))
870 #io.rprint('COMP TEXT, MATCHES: %r, %r' % (text, self.matches)) # dbg
870 #io.rprint('COMP TEXT, MATCHES: %r, %r' % (text, self.matches)) # dbg
871 return text, self.matches
871 return text, self.matches
872
872
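A minimal usage sketch (not part of the source) of how a frontend might drive complete() directly, passing the whole line and the cursor offset; the `completer` name stands for an assumed IPCompleter-like instance::

    line = 'import collections; collections.Ord'
    # Let the splitter work out the token: pass text=None and supply the
    # full line plus the cursor position, as a remote frontend would.
    text, matches = completer.complete(text=None,
                                       line_buffer=line,
                                       cursor_pos=len(line))
    # 'text' is the token that was actually completed and 'matches' is the
    # sorted list of candidate completions.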
873 def rlcomplete(self, text, state):
873 def rlcomplete(self, text, state):
874 """Return the state-th possible completion for 'text'.
874 """Return the state-th possible completion for 'text'.
875
875
876 This is called successively with state == 0, 1, 2, ... until it
876 This is called successively with state == 0, 1, 2, ... until it
877 returns None. The completion should begin with 'text'.
877 returns None. The completion should begin with 'text'.
878
878
879 Parameters
879 Parameters
880 ----------
880 ----------
881 text : string
881 text : string
882 Text to perform the completion on.
882 Text to perform the completion on.
883
883
884 state : int
884 state : int
885 Counter used by readline.
885 Counter used by readline.
886 """
886 """
887 if state==0:
887 if state==0:
888
888
889 self.line_buffer = line_buffer = self.readline.get_line_buffer()
889 self.line_buffer = line_buffer = self.readline.get_line_buffer()
890 cursor_pos = self.readline.get_endidx()
890 cursor_pos = self.readline.get_endidx()
891
891
892 #io.rprint("\nRLCOMPLETE: %r %r %r" %
892 #io.rprint("\nRLCOMPLETE: %r %r %r" %
893 # (text, line_buffer, cursor_pos) ) # dbg
893 # (text, line_buffer, cursor_pos) ) # dbg
894
894
895 # if there is only a tab on a line with only whitespace, instead of
895 # if there is only a tab on a line with only whitespace, instead of
896 # the mostly useless 'do you want to see all million completions'
896 # the mostly useless 'do you want to see all million completions'
897 # message, just do the right thing and give the user his tab!
897 # message, just do the right thing and give the user his tab!
898 # Incidentally, this enables pasting of tabbed text from an editor
898 # Incidentally, this enables pasting of tabbed text from an editor
899 # (as long as autoindent is off).
899 # (as long as autoindent is off).
900
900
901 # It should be noted that at least pyreadline still shows file
901 # It should be noted that at least pyreadline still shows file
902 # completions - is there a way around it?
902 # completions - is there a way around it?
903
903
904 # don't apply this on 'dumb' terminals, such as emacs buffers, so
904 # don't apply this on 'dumb' terminals, such as emacs buffers, so
905 # we don't interfere with their own tab-completion mechanism.
905 # we don't interfere with their own tab-completion mechanism.
906 if not (self.dumb_terminal or line_buffer.strip()):
906 if not (self.dumb_terminal or line_buffer.strip()):
907 self.readline.insert_text('\t')
907 self.readline.insert_text('\t')
908 sys.stdout.flush()
908 sys.stdout.flush()
909 return None
909 return None
910
910
911 # Note: debugging exceptions that may occur in completion is very
911 # Note: debugging exceptions that may occur in completion is very
912 # tricky, because readline unconditionally silences them. So if
912 # tricky, because readline unconditionally silences them. So if
913 # during development you suspect a bug in the completion code, turn
913 # during development you suspect a bug in the completion code, turn
914 # this flag on temporarily by uncommenting the second form (don't
914 # this flag on temporarily by uncommenting the second form (don't
915 # flip the value in the first line, as the '# dbg' marker can be
915 # flip the value in the first line, as the '# dbg' marker can be
916 # automatically detected and is used elsewhere).
916 # automatically detected and is used elsewhere).
917 DEBUG = False
917 DEBUG = False
918 #DEBUG = True # dbg
918 #DEBUG = True # dbg
919 if DEBUG:
919 if DEBUG:
920 try:
920 try:
921 self.complete(text, line_buffer, cursor_pos)
921 self.complete(text, line_buffer, cursor_pos)
922 except:
922 except:
923 import traceback; traceback.print_exc()
923 import traceback; traceback.print_exc()
924 else:
924 else:
925 # The normal production version is here
925 # The normal production version is here
926
926
927 # This method computes the self.matches array
927 # This method computes the self.matches array
928 self.complete(text, line_buffer, cursor_pos)
928 self.complete(text, line_buffer, cursor_pos)
929
929
930 try:
930 try:
931 return self.matches[state]
931 return self.matches[state]
932 except IndexError:
932 except IndexError:
933 return None
933 return None
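For reference, a short sketch (not from this file) of how a completer with this state-based interface is typically hooked into GNU readline; `completer` again stands for an assumed instance::

    import readline

    # readline calls the completer with state = 0, 1, 2, ... until it
    # returns None, which is exactly the contract rlcomplete() implements.
    readline.set_completer(completer.rlcomplete)
    readline.parse_and_bind('tab: complete')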
@@ -1,1900 +1,1903 b''
1 """Pexpect is a Python module for spawning child applications and controlling
1 """Pexpect is a Python module for spawning child applications and controlling
2 them automatically. Pexpect can be used for automating interactive applications
2 them automatically. Pexpect can be used for automating interactive applications
3 such as ssh, ftp, passwd, telnet, etc. It can be used to automate setup
3 such as ssh, ftp, passwd, telnet, etc. It can be used to automate setup
4 scripts for duplicating software package installations on different servers. It
4 scripts for duplicating software package installations on different servers. It
5 can be used for automated software testing. Pexpect is in the spirit of Don
5 can be used for automated software testing. Pexpect is in the spirit of Don
6 Libes' Expect, but Pexpect is pure Python. Other Expect-like modules for Python
6 Libes' Expect, but Pexpect is pure Python. Other Expect-like modules for Python
7 require TCL and Expect or require C extensions to be compiled. Pexpect does not
7 require TCL and Expect or require C extensions to be compiled. Pexpect does not
8 use C, Expect, or TCL extensions. It should work on any platform that supports
8 use C, Expect, or TCL extensions. It should work on any platform that supports
9 the standard Python pty module. The Pexpect interface focuses on ease of use so
9 the standard Python pty module. The Pexpect interface focuses on ease of use so
10 that simple tasks are easy.
10 that simple tasks are easy.
11
11
12 There are two main interfaces to the Pexpect system; these are the function,
12 There are two main interfaces to the Pexpect system; these are the function,
13 run() and the class, spawn. The spawn class is more powerful. The run()
13 run() and the class, spawn. The spawn class is more powerful. The run()
14 function is simpler than spawn, and is good for quickly calling a program. When
14 function is simpler than spawn, and is good for quickly calling a program. When
15 you call the run() function it executes a given program and then returns the
15 you call the run() function it executes a given program and then returns the
16 output. This is a handy replacement for os.system().
16 output. This is a handy replacement for os.system().
17
17
18 For example::
18 For example::
19
19
20 pexpect.run('ls -la')
20 pexpect.run('ls -la')
21
21
22 The spawn class is the more powerful interface to the Pexpect system. You can
22 The spawn class is the more powerful interface to the Pexpect system. You can
23 use this to spawn a child program then interact with it by sending input and
23 use this to spawn a child program then interact with it by sending input and
24 expecting responses (waiting for patterns in the child's output).
24 expecting responses (waiting for patterns in the child's output).
25
25
26 For example::
26 For example::
27
27
28 child = pexpect.spawn('scp foo myname@host.example.com:.')
28 child = pexpect.spawn('scp foo myname@host.example.com:.')
29 child.expect ('Password:')
29 child.expect ('Password:')
30 child.sendline (mypassword)
30 child.sendline (mypassword)
31
31
32 This works even for commands that ask for passwords or other input outside of
32 This works even for commands that ask for passwords or other input outside of
33 the normal stdio streams. For example, ssh reads input directly from the TTY
33 the normal stdio streams. For example, ssh reads input directly from the TTY
34 device which bypasses stdin.
34 device which bypasses stdin.
35
35
36 Credits: Noah Spurrier, Richard Holden, Marco Molteni, Kimberley Burchett,
36 Credits: Noah Spurrier, Richard Holden, Marco Molteni, Kimberley Burchett,
37 Robert Stone, Hartmut Goebel, Chad Schroeder, Erick Tryzelaar, Dave Kirby, Ids
37 Robert Stone, Hartmut Goebel, Chad Schroeder, Erick Tryzelaar, Dave Kirby, Ids
38 vander Molen, George Todd, Noel Taylor, Nicolas D. Cesar, Alexander Gattin,
38 vander Molen, George Todd, Noel Taylor, Nicolas D. Cesar, Alexander Gattin,
39 Jacques-Etienne Baudoux, Geoffrey Marshall, Francisco Lourenco, Glen Mabey,
39 Jacques-Etienne Baudoux, Geoffrey Marshall, Francisco Lourenco, Glen Mabey,
40 Karthik Gurusamy, Fernando Perez, Corey Minyard, Jon Cohen, Guillaume
40 Karthik Gurusamy, Fernando Perez, Corey Minyard, Jon Cohen, Guillaume
41 Chazarain, Andrew Ryan, Nick Craig-Wood, Andrew Stone, Jorgen Grahn, John
41 Chazarain, Andrew Ryan, Nick Craig-Wood, Andrew Stone, Jorgen Grahn, John
42 Spiegel, Jan Grant, Shane Kerr and Thomas Kluyver. Let me know if I forgot anyone.
42 Spiegel, Jan Grant, Shane Kerr and Thomas Kluyver. Let me know if I forgot anyone.
43
43
44 Pexpect is free, open source, and all that good stuff.
44 Pexpect is free, open source, and all that good stuff.
45
45
46 Permission is hereby granted, free of charge, to any person obtaining a copy of
46 Permission is hereby granted, free of charge, to any person obtaining a copy of
47 this software and associated documentation files (the "Software"), to deal in
47 this software and associated documentation files (the "Software"), to deal in
48 the Software without restriction, including without limitation the rights to
48 the Software without restriction, including without limitation the rights to
49 use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
49 use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
50 of the Software, and to permit persons to whom the Software is furnished to do
50 of the Software, and to permit persons to whom the Software is furnished to do
51 so, subject to the following conditions:
51 so, subject to the following conditions:
52
52
53 The above copyright notice and this permission notice shall be included in all
53 The above copyright notice and this permission notice shall be included in all
54 copies or substantial portions of the Software.
54 copies or substantial portions of the Software.
55
55
56 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
56 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
57 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
57 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
58 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
58 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
59 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
59 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
60 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
60 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
61 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
61 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
62 SOFTWARE.
62 SOFTWARE.
63
63
64 Pexpect Copyright (c) 2008-2011 Noah Spurrier
64 Pexpect Copyright (c) 2008-2011 Noah Spurrier
65 http://pexpect.sourceforge.net/
65 http://pexpect.sourceforge.net/
66 """
66 """
67
67
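Another brief usage sketch following the documented spawn()/expect() interface (host name and password are placeholders): expect() accepts a list of patterns, returns the index of the one that matched, and EOF or TIMEOUT can be listed as patterns instead of being raised::

    import pexpect

    mypassword = 'secret'                                # placeholder
    child = pexpect.spawn('ssh user@host.example.com')   # hypothetical host
    index = child.expect(['(?i)password:', pexpect.EOF, pexpect.TIMEOUT],
                         timeout=30)
    if index == 0:
        child.sendline(mypassword)
    elif index == 1:
        print('connection closed before a prompt appeared')
    else:
        print('timed out waiting for a password prompt')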
68 try:
68 try:
69 import os, sys, time
69 import os, sys, time
70 import select
70 import select
71 import re
71 import re
72 import struct
72 import struct
73 import resource
73 import resource
74 import types
74 import types
75 import pty
75 import pty
76 import tty
76 import tty
77 import termios
77 import termios
78 import fcntl
78 import fcntl
79 import errno
79 import errno
80 import traceback
80 import traceback
81 import signal
81 import signal
82 except ImportError as e:
82 except ImportError as e:
83 raise ImportError (str(e) + """
83 raise ImportError (str(e) + """
84
84
85 A critical module was not found. Probably this operating system does not
85 A critical module was not found. Probably this operating system does not
86 support it. Pexpect is intended for UNIX-like operating systems.""")
86 support it. Pexpect is intended for UNIX-like operating systems.""")
87
87
88 __version__ = '2.6.dev'
88 __version__ = '2.6.dev'
89 version = __version__
89 version = __version__
90 version_info = (2,6,'dev')
90 version_info = (2,6,'dev')
91 __all__ = ['ExceptionPexpect', 'EOF', 'TIMEOUT', 'spawn', 'spawnb', 'run', 'which',
91 __all__ = ['ExceptionPexpect', 'EOF', 'TIMEOUT', 'spawn', 'spawnb', 'run', 'which',
92 'split_command_line', '__version__']
92 'split_command_line', '__version__']
93
93
94 # Exception classes used by this module.
94 # Exception classes used by this module.
95 class ExceptionPexpect(Exception):
95 class ExceptionPexpect(Exception):
96
96
97 """Base class for all exceptions raised by this module.
97 """Base class for all exceptions raised by this module.
98 """
98 """
99
99
100 def __init__(self, value):
100 def __init__(self, value):
101
101
102 self.value = value
102 self.value = value
103
103
104 def __str__(self):
104 def __str__(self):
105
105
106 return str(self.value)
106 return str(self.value)
107
107
108 def get_trace(self):
108 def get_trace(self):
109
109
110 """This returns an abbreviated stack trace with lines that only concern
110 """This returns an abbreviated stack trace with lines that only concern
111 the caller. In other words, the stack trace inside the Pexpect module
111 the caller. In other words, the stack trace inside the Pexpect module
112 is not included. """
112 is not included. """
113
113
114 tblist = traceback.extract_tb(sys.exc_info()[2])
114 tblist = traceback.extract_tb(sys.exc_info()[2])
115 #tblist = filter(self.__filter_not_pexpect, tblist)
115 #tblist = filter(self.__filter_not_pexpect, tblist)
116 tblist = [item for item in tblist if self.__filter_not_pexpect(item)]
116 tblist = [item for item in tblist if self.__filter_not_pexpect(item)]
117 tblist = traceback.format_list(tblist)
117 tblist = traceback.format_list(tblist)
118 return ''.join(tblist)
118 return ''.join(tblist)
119
119
120 def __filter_not_pexpect(self, trace_list_item):
120 def __filter_not_pexpect(self, trace_list_item):
121
121
122 """This returns True if list item 0 does not contain the string 'pexpect.py'. """
122 """This returns True if list item 0 does not contain the string 'pexpect.py'. """
123
123
124 if trace_list_item[0].find('pexpect.py') == -1:
124 if trace_list_item[0].find('pexpect.py') == -1:
125 return True
125 return True
126 else:
126 else:
127 return False
127 return False
128
128
129 class EOF(ExceptionPexpect):
129 class EOF(ExceptionPexpect):
130
130
131 """Raised when EOF is read from a child. This usually means the child has exited."""
131 """Raised when EOF is read from a child. This usually means the child has exited."""
132
132
133 class TIMEOUT(ExceptionPexpect):
133 class TIMEOUT(ExceptionPexpect):
134
134
135 """Raised when a read time exceeds the timeout. """
135 """Raised when a read time exceeds the timeout. """
136
136
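A tiny sketch (not from the source) of catching these two exceptions around an expect() call; `child` stands for an assumed existing spawn instance::

    import pexpect

    # 'child' is assumed to be an existing pexpect.spawn instance
    try:
        child.expect('login:')
    except pexpect.TIMEOUT:
        print('no login prompt appeared within the timeout')
    except pexpect.EOF:
        print('the child exited before producing a prompt')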
137 ##class TIMEOUT_PATTERN(TIMEOUT):
137 ##class TIMEOUT_PATTERN(TIMEOUT):
138 ## """Raised when the pattern match time exceeds the timeout.
138 ## """Raised when the pattern match time exceeds the timeout.
139 ## This is different than a read TIMEOUT because the child process may
139 ## This is different than a read TIMEOUT because the child process may
140 ## give output, thus never give a TIMEOUT, but the output
140 ## give output, thus never give a TIMEOUT, but the output
141 ## may never match a pattern.
141 ## may never match a pattern.
142 ## """
142 ## """
143 ##class MAXBUFFER(ExceptionPexpect):
143 ##class MAXBUFFER(ExceptionPexpect):
144 ## """Raised when a scan buffer fills before matching an expected pattern."""
144 ## """Raised when a scan buffer fills before matching an expected pattern."""
145
145
146 PY3 = (sys.version_info[0] >= 3)
146 PY3 = (sys.version_info[0] >= 3)
147
147
148 def _cast_bytes(s, enc):
148 def _cast_bytes(s, enc):
149 if isinstance(s, unicode):
149 if isinstance(s, unicode):
150 return s.encode(enc)
150 return s.encode(enc)
151 return s
151 return s
152
152
153 def _cast_unicode(s, enc):
153 def _cast_unicode(s, enc):
154 if isinstance(s, bytes):
154 if isinstance(s, bytes):
155 return s.decode(enc)
155 return s.decode(enc)
156 return s
156 return s
157
157
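These two helpers let the rest of the module stay agnostic about str/bytes; a tiny illustration of the round trip (Python 2 semantics assumed here, where `unicode` and `bytes` are distinct types)::

    data = _cast_bytes(u'caf\xe9', 'utf-8')    # encodes to b'caf\xc3\xa9'
    text = _cast_unicode(data, 'utf-8')        # decodes back to u'caf\xe9'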
158 re_type = type(re.compile(''))
158 re_type = type(re.compile(''))
159
159
160 def run (command, timeout=-1, withexitstatus=False, events=None, extra_args=None,
160 def run (command, timeout=-1, withexitstatus=False, events=None, extra_args=None,
161 logfile=None, cwd=None, env=None, encoding='utf-8'):
161 logfile=None, cwd=None, env=None, encoding='utf-8'):
162
162
163 """
163 """
164 This function runs the given command; waits for it to finish; then
164 This function runs the given command; waits for it to finish; then
165 returns all output as a string. STDERR is included in output. If the full
165 returns all output as a string. STDERR is included in output. If the full
166 path to the command is not given then the path is searched.
166 path to the command is not given then the path is searched.
167
167
168 Note that lines are terminated by CR/LF (\\r\\n) combination even on
168 Note that lines are terminated by CR/LF (\\r\\n) combination even on
169 UNIX-like systems because this is the standard for pseudo ttys. If you set
169 UNIX-like systems because this is the standard for pseudo ttys. If you set
170 'withexitstatus' to true, then run will return a tuple of (command_output,
170 'withexitstatus' to true, then run will return a tuple of (command_output,
171 exitstatus). If 'withexitstatus' is false then this returns just
171 exitstatus). If 'withexitstatus' is false then this returns just
172 command_output.
172 command_output.
173
173
174 The run() function can often be used instead of creating a spawn instance.
174 The run() function can often be used instead of creating a spawn instance.
175 For example, the following code uses spawn::
175 For example, the following code uses spawn::
176
176
177 from pexpect import *
177 from pexpect import *
178 child = spawn('scp foo myname@host.example.com:.')
178 child = spawn('scp foo myname@host.example.com:.')
179 child.expect ('(?i)password')
179 child.expect ('(?i)password')
180 child.sendline (mypassword)
180 child.sendline (mypassword)
181
181
182 The previous code can be replaced with the following::
182 The previous code can be replaced with the following::
183
183
184 from pexpect import *
184 from pexpect import *
185 run ('scp foo myname@host.example.com:.', events={'(?i)password': mypassword})
185 run ('scp foo myname@host.example.com:.', events={'(?i)password': mypassword})
186
186
187 Examples
187 Examples
188 ========
188 ========
189
189
190 Start the apache daemon on the local machine::
190 Start the apache daemon on the local machine::
191
191
192 from pexpect import *
192 from pexpect import *
193 run ("/usr/local/apache/bin/apachectl start")
193 run ("/usr/local/apache/bin/apachectl start")
194
194
195 Check in a file using SVN::
195 Check in a file using SVN::
196
196
197 from pexpect import *
197 from pexpect import *
198 run ("svn ci -m 'automatic commit' my_file.py")
198 run ("svn ci -m 'automatic commit' my_file.py")
199
199
200 Run a command and capture exit status::
200 Run a command and capture exit status::
201
201
202 from pexpect import *
202 from pexpect import *
203 (command_output, exitstatus) = run ('ls -l /bin', withexitstatus=1)
203 (command_output, exitstatus) = run ('ls -l /bin', withexitstatus=1)
204
204
205 Tricky Examples
205 Tricky Examples
206 ===============
206 ===============
207
207
208 The following will run SSH and execute 'ls -l' on the remote machine. The
208 The following will run SSH and execute 'ls -l' on the remote machine. The
209 password 'secret' will be sent if the '(?i)password' pattern is ever seen::
209 password 'secret' will be sent if the '(?i)password' pattern is ever seen::
210
210
211 run ("ssh username@machine.example.com 'ls -l'", events={'(?i)password':'secret\\n'})
211 run ("ssh username@machine.example.com 'ls -l'", events={'(?i)password':'secret\\n'})
212
212
213 This will start mencoder to rip a video from DVD. This will also display
213 This will start mencoder to rip a video from DVD. This will also display
214 progress ticks every 5 seconds as it runs. For example::
214 progress ticks every 5 seconds as it runs. For example::
215
215
216 from pexpect import *
216 from pexpect import *
217 def print_ticks(d):
217 def print_ticks(d):
218 print d['event_count'],
218 print d['event_count'],
219 run ("mencoder dvd://1 -o video.avi -oac copy -ovc copy", events={TIMEOUT:print_ticks}, timeout=5)
219 run ("mencoder dvd://1 -o video.avi -oac copy -ovc copy", events={TIMEOUT:print_ticks}, timeout=5)
220
220
221 The 'events' argument should be a dictionary of patterns and responses.
221 The 'events' argument should be a dictionary of patterns and responses.
222 Whenever one of the patterns is seen in the command output, run() will send the
222 Whenever one of the patterns is seen in the command output, run() will send the
223 associated response string. Note that you should put newlines in your
223 associated response string. Note that you should put newlines in your
224 string if Enter is necessary. The responses may also contain callback
224 string if Enter is necessary. The responses may also contain callback
225 functions. Any callback is a function that takes a dictionary as an argument.
225 functions. Any callback is a function that takes a dictionary as an argument.
226 The dictionary contains all the locals from the run() function, so you can
226 The dictionary contains all the locals from the run() function, so you can
227 access the child spawn object or any other variable defined in run()
227 access the child spawn object or any other variable defined in run()
228 (event_count, child, and extra_args are the most useful). A callback may
228 (event_count, child, and extra_args are the most useful). A callback may
229 return True to stop the current run process; otherwise run() continues until
229 return True to stop the current run process; otherwise run() continues until
230 the next event. A callback may also return a string which will be sent to
230 the next event. A callback may also return a string which will be sent to
231 the child. 'extra_args' is not used directly by run(). It provides a way to
231 the child. 'extra_args' is not used directly by run(). It provides a way to
232 pass data to a callback function through run(), via the locals
232 pass data to a callback function through run(), via the locals
233 dictionary passed to a callback."""
233 dictionary passed to a callback."""
234
234
235 if timeout == -1:
235 if timeout == -1:
236 child = spawn(command, maxread=2000, logfile=logfile, cwd=cwd, env=env,
236 child = spawn(command, maxread=2000, logfile=logfile, cwd=cwd, env=env,
237 encoding=encoding)
237 encoding=encoding)
238 else:
238 else:
239 child = spawn(command, timeout=timeout, maxread=2000, logfile=logfile,
239 child = spawn(command, timeout=timeout, maxread=2000, logfile=logfile,
240 cwd=cwd, env=env, encoding=encoding)
240 cwd=cwd, env=env, encoding=encoding)
241 if events is not None:
241 if events is not None:
242 patterns = events.keys()
242 patterns = events.keys()
243 responses = events.values()
243 responses = events.values()
244 else:
244 else:
245 patterns=None # We assume that EOF or TIMEOUT will save us.
245 patterns=None # We assume that EOF or TIMEOUT will save us.
246 responses=None
246 responses=None
247 child_result_list = []
247 child_result_list = []
248 event_count = 0
248 event_count = 0
249 while 1:
249 while 1:
250 try:
250 try:
251 index = child.expect (patterns)
251 index = child.expect (patterns)
252 if isinstance(child.after, basestring):
252 if isinstance(child.after, basestring):
253 child_result_list.append(child.before + child.after)
253 child_result_list.append(child.before + child.after)
254 else: # child.after may have been a TIMEOUT or EOF, so don't cat those.
254 else: # child.after may have been a TIMEOUT or EOF, so don't cat those.
255 child_result_list.append(child.before)
255 child_result_list.append(child.before)
256 if isinstance(responses[index], basestring):
256 if isinstance(responses[index], basestring):
257 child.send(responses[index])
257 child.send(responses[index])
258 elif type(responses[index]) is types.FunctionType:
258 elif type(responses[index]) is types.FunctionType:
259 callback_result = responses[index](locals())
259 callback_result = responses[index](locals())
260 sys.stdout.flush()
260 sys.stdout.flush()
261 if isinstance(callback_result, basestring):
261 if isinstance(callback_result, basestring):
262 child.send(callback_result)
262 child.send(callback_result)
263 elif callback_result:
263 elif callback_result:
264 break
264 break
265 else:
265 else:
266 raise TypeError ('The callback must be a string or function type.')
266 raise TypeError ('The callback must be a string or function type.')
267 event_count = event_count + 1
267 event_count = event_count + 1
268 except TIMEOUT as e:
268 except TIMEOUT as e:
269 child_result_list.append(child.before)
269 child_result_list.append(child.before)
270 break
270 break
271 except EOF as e:
271 except EOF as e:
272 child_result_list.append(child.before)
272 child_result_list.append(child.before)
273 break
273 break
274 child_result = child._empty_buffer.join(child_result_list)
274 child_result = child._empty_buffer.join(child_result_list)
275 if withexitstatus:
275 if withexitstatus:
276 child.close()
276 child.close()
277 return (child_result, child.exitstatus)
277 return (child_result, child.exitstatus)
278 else:
278 else:
279 return child_result
279 return child_result
280
280
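A small usage sketch (not from the source) combining the documented 'events' callback mechanism with 'withexitstatus'; the command and the stop-after-a-few-timeouts cutoff are arbitrary choices for illustration::

    import pexpect

    def stop_after_a_few(d):
        # d is the locals() of run(); returning True ends run() early.
        return d['event_count'] >= 3

    output, status = pexpect.run('ping -c 10 localhost',
                                 events={pexpect.TIMEOUT: stop_after_a_few},
                                 timeout=2,
                                 withexitstatus=True)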
281 class spawnb(object):
281 class spawnb(object):
282 """Use this class to start and control child applications with a pure-bytes
282 """Use this class to start and control child applications with a pure-bytes
283 interface."""
283 interface."""
284
284
285 _buffer_type = bytes
285 _buffer_type = bytes
286 def _cast_buffer_type(self, s):
286 def _cast_buffer_type(self, s):
287 return _cast_bytes(s, self.encoding)
287 return _cast_bytes(s, self.encoding)
288 _empty_buffer = b''
288 _empty_buffer = b''
289 _pty_newline = b'\r\n'
289 _pty_newline = b'\r\n'
290
290
291 # Some code needs this to exist, but it's mainly for the spawn subclass.
291 # Some code needs this to exist, but it's mainly for the spawn subclass.
292 encoding = 'utf-8'
292 encoding = 'utf-8'
293
293
294 def __init__(self, command, args=[], timeout=30, maxread=2000, searchwindowsize=None,
294 def __init__(self, command, args=[], timeout=30, maxread=2000, searchwindowsize=None,
295 logfile=None, cwd=None, env=None):
295 logfile=None, cwd=None, env=None):
296
296
297 """This is the constructor. The command parameter may be a string that
297 """This is the constructor. The command parameter may be a string that
298 includes a command and any arguments to the command. For example::
298 includes a command and any arguments to the command. For example::
299
299
300 child = pexpect.spawn ('/usr/bin/ftp')
300 child = pexpect.spawn ('/usr/bin/ftp')
301 child = pexpect.spawn ('/usr/bin/ssh user@example.com')
301 child = pexpect.spawn ('/usr/bin/ssh user@example.com')
302 child = pexpect.spawn ('ls -latr /tmp')
302 child = pexpect.spawn ('ls -latr /tmp')
303
303
304 You may also construct it with a list of arguments like so::
304 You may also construct it with a list of arguments like so::
305
305
306 child = pexpect.spawn ('/usr/bin/ftp', [])
306 child = pexpect.spawn ('/usr/bin/ftp', [])
307 child = pexpect.spawn ('/usr/bin/ssh', ['user@example.com'])
307 child = pexpect.spawn ('/usr/bin/ssh', ['user@example.com'])
308 child = pexpect.spawn ('ls', ['-latr', '/tmp'])
308 child = pexpect.spawn ('ls', ['-latr', '/tmp'])
309
309
310 After this the child application will be created and will be ready to
310 After this the child application will be created and will be ready to
311 talk to. For normal use, see expect() and send() and sendline().
311 talk to. For normal use, see expect() and send() and sendline().
312
312
313 Remember that Pexpect does NOT interpret shell meta characters such as
313 Remember that Pexpect does NOT interpret shell meta characters such as
314 redirect, pipe, or wild cards (>, |, or *). This is a common mistake.
314 redirect, pipe, or wild cards (>, |, or *). This is a common mistake.
315 If you want to run a command and pipe it through another command then
315 If you want to run a command and pipe it through another command then
316 you must also start a shell. For example::
316 you must also start a shell. For example::
317
317
318 child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > log_list.txt"')
318 child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > log_list.txt"')
319 child.expect(pexpect.EOF)
319 child.expect(pexpect.EOF)
320
320
321 The second form of spawn (where you pass a list of arguments) is useful
321 The second form of spawn (where you pass a list of arguments) is useful
322 in situations where you wish to spawn a command and pass it its own
322 in situations where you wish to spawn a command and pass it its own
323 argument list. This can make syntax more clear. For example, the
323 argument list. This can make syntax more clear. For example, the
324 following is equivalent to the previous example::
324 following is equivalent to the previous example::
325
325
326 shell_cmd = 'ls -l | grep LOG > log_list.txt'
326 shell_cmd = 'ls -l | grep LOG > log_list.txt'
327 child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
327 child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
328 child.expect(pexpect.EOF)
328 child.expect(pexpect.EOF)
329
329
330 The maxread attribute sets the read buffer size. This is the maximum number
330 The maxread attribute sets the read buffer size. This is the maximum number
331 of bytes that Pexpect will try to read from a TTY at one time. Setting
331 of bytes that Pexpect will try to read from a TTY at one time. Setting
332 the maxread size to 1 will turn off buffering. Setting the maxread
332 the maxread size to 1 will turn off buffering. Setting the maxread
333 value higher may help performance in cases where large amounts of
333 value higher may help performance in cases where large amounts of
334 output are read back from the child. This feature is useful in
334 output are read back from the child. This feature is useful in
335 conjunction with searchwindowsize.
335 conjunction with searchwindowsize.
336
336
337 The searchwindowsize attribute sets how far back in the incoming
337 The searchwindowsize attribute sets how far back in the incoming
338 search buffer Pexpect will search for pattern matches. Every time
338 search buffer Pexpect will search for pattern matches. Every time
339 Pexpect reads some data from the child it will append the data to the
339 Pexpect reads some data from the child it will append the data to the
340 incoming buffer. The default is to search from the beginning of the
340 incoming buffer. The default is to search from the beginning of the
341 incoming buffer each time new data is read from the child. But this is
341 incoming buffer each time new data is read from the child. But this is
342 very inefficient if you are running a command that generates a large
342 very inefficient if you are running a command that generates a large
343 amount of data where you want to match. The searchwindowsize does not
343 amount of data where you want to match. The searchwindowsize does not
344 affect the size of the incoming data buffer. You will still have
344 affect the size of the incoming data buffer. You will still have
345 access to the full buffer after expect() returns.
345 access to the full buffer after expect() returns.
346
346
347 The logfile member turns on or off logging. All input and output will
347 The logfile member turns on or off logging. All input and output will
348 be copied to the given file object. Set logfile to None to stop
348 be copied to the given file object. Set logfile to None to stop
349 logging. This is the default. Set logfile to sys.stdout to echo
349 logging. This is the default. Set logfile to sys.stdout to echo
350 everything to standard output. The logfile is flushed after each write.
350 everything to standard output. The logfile is flushed after each write.
351
351
352 Example log input and output to a file::
352 Example log input and output to a file::
353
353
354 child = pexpect.spawn('some_command')
354 child = pexpect.spawn('some_command')
355 fout = open('mylog.txt','w')
355 fout = open('mylog.txt','w')
356 child.logfile = fout
356 child.logfile = fout
357
357
358 Example log to stdout::
358 Example log to stdout::
359
359
360 child = pexpect.spawn('some_command')
360 child = pexpect.spawn('some_command')
361 child.logfile = sys.stdout
361 child.logfile = sys.stdout
362
362
363 The logfile_read and logfile_send members can be used to separately log
363 The logfile_read and logfile_send members can be used to separately log
364 the input from the child and output sent to the child. Sometimes you
364 the input from the child and output sent to the child. Sometimes you
365 don't want to see everything you write to the child. You only want to
365 don't want to see everything you write to the child. You only want to
366 log what the child sends back. For example::
366 log what the child sends back. For example::
367
367
368 child = pexpect.spawn('some_command')
368 child = pexpect.spawn('some_command')
369 child.logfile_read = sys.stdout
369 child.logfile_read = sys.stdout
370
370
371 To separately log output sent to the child use logfile_send::
371 To separately log output sent to the child use logfile_send::
372
372
373 self.logfile_send = fout
373 self.logfile_send = fout
374
374
375 The delaybeforesend helps overcome a weird behavior that many users
375 The delaybeforesend helps overcome a weird behavior that many users
376 were experiencing. The typical problem was that a user would expect() a
376 were experiencing. The typical problem was that a user would expect() a
377 "Password:" prompt and then immediately call sendline() to send the
377 "Password:" prompt and then immediately call sendline() to send the
378 password. The user would then see that their password was echoed back
378 password. The user would then see that their password was echoed back
379 to them. Passwords don't normally echo. The problem is caused by the
379 to them. Passwords don't normally echo. The problem is caused by the
380 fact that most applications print out the "Password" prompt and then
380 fact that most applications print out the "Password" prompt and then
381 turn off stdin echo, but if you send your password before the
381 turn off stdin echo, but if you send your password before the
382 application turned off echo, then you get your password echoed.
382 application turned off echo, then you get your password echoed.
383 Normally this wouldn't be a problem when interacting with a human at a
383 Normally this wouldn't be a problem when interacting with a human at a
384 real keyboard. If you introduce a slight delay just before writing then
384 real keyboard. If you introduce a slight delay just before writing then
385 this seems to clear up the problem. This was such a common problem for
385 this seems to clear up the problem. This was such a common problem for
386 many users that I decided that the default pexpect behavior should be
386 many users that I decided that the default pexpect behavior should be
387 to sleep just before writing to the child application. 1/20th of a
387 to sleep just before writing to the child application. 1/20th of a
388 second (50 ms) seems to be enough to clear up the problem. You can set
388 second (50 ms) seems to be enough to clear up the problem. You can set
389 delaybeforesend to 0 to return to the old behavior. Most Linux machines
389 delaybeforesend to 0 to return to the old behavior. Most Linux machines
390 don't like this to be below 0.03. I don't know why.
390 don't like this to be below 0.03. I don't know why.
391
391
392 Note that spawn is clever about finding commands on your path.
392 Note that spawn is clever about finding commands on your path.
393 It uses the same logic that "which" uses to find executables.
393 It uses the same logic that "which" uses to find executables.
394
394
395 If you wish to get the exit status of the child you must call the
395 If you wish to get the exit status of the child you must call the
396 close() method. The exit or signal status of the child will be stored
396 close() method. The exit or signal status of the child will be stored
397 in self.exitstatus or self.signalstatus. If the child exited normally
397 in self.exitstatus or self.signalstatus. If the child exited normally
398 then exitstatus will store the exit return code and signalstatus will
398 then exitstatus will store the exit return code and signalstatus will
399 be None. If the child was terminated abnormally with a signal then
399 be None. If the child was terminated abnormally with a signal then
400 signalstatus will store the signal value and exitstatus will be None.
400 signalstatus will store the signal value and exitstatus will be None.
401 If you need more detail you can also read the self.status member which
401 If you need more detail you can also read the self.status member which
402 stores the status returned by os.waitpid. You can interpret this using
402 stores the status returned by os.waitpid. You can interpret this using
403 os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.WTERMSIG. """
403 os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.WTERMSIG. """
404
404
405 self.STDIN_FILENO = pty.STDIN_FILENO
405 self.STDIN_FILENO = pty.STDIN_FILENO
406 self.STDOUT_FILENO = pty.STDOUT_FILENO
406 self.STDOUT_FILENO = pty.STDOUT_FILENO
407 self.STDERR_FILENO = pty.STDERR_FILENO
407 self.STDERR_FILENO = pty.STDERR_FILENO
408 self.stdin = sys.stdin
408 self.stdin = sys.stdin
409 self.stdout = sys.stdout
409 self.stdout = sys.stdout
410 self.stderr = sys.stderr
410 self.stderr = sys.stderr
411
411
412 self.searcher = None
412 self.searcher = None
413 self.ignorecase = False
413 self.ignorecase = False
414 self.before = None
414 self.before = None
415 self.after = None
415 self.after = None
416 self.match = None
416 self.match = None
417 self.match_index = None
417 self.match_index = None
418 self.terminated = True
418 self.terminated = True
419 self.exitstatus = None
419 self.exitstatus = None
420 self.signalstatus = None
420 self.signalstatus = None
421 self.status = None # status returned by os.waitpid
421 self.status = None # status returned by os.waitpid
422 self.flag_eof = False
422 self.flag_eof = False
423 self.pid = None
423 self.pid = None
424 self.child_fd = -1 # initially closed
424 self.child_fd = -1 # initially closed
425 self.timeout = timeout
425 self.timeout = timeout
426 self.delimiter = EOF
426 self.delimiter = EOF
427 self.logfile = logfile
427 self.logfile = logfile
428 self.logfile_read = None # input from child (read_nonblocking)
428 self.logfile_read = None # input from child (read_nonblocking)
429 self.logfile_send = None # output to send (send, sendline)
429 self.logfile_send = None # output to send (send, sendline)
430 self.maxread = maxread # max bytes to read at one time into buffer
430 self.maxread = maxread # max bytes to read at one time into buffer
431 self.buffer = self._empty_buffer # This is the read buffer. See maxread.
431 self.buffer = self._empty_buffer # This is the read buffer. See maxread.
432 self.searchwindowsize = searchwindowsize # Anything before searchwindowsize point is preserved, but not searched.
432 self.searchwindowsize = searchwindowsize # Anything before searchwindowsize point is preserved, but not searched.
433 # Most Linux machines don't like delaybeforesend to be below 0.03 (30 ms).
433 # Most Linux machines don't like delaybeforesend to be below 0.03 (30 ms).
434 self.delaybeforesend = 0.05 # Sets sleep time used just before sending data to child. Time in seconds.
434 self.delaybeforesend = 0.05 # Sets sleep time used just before sending data to child. Time in seconds.
435 self.delayafterclose = 0.1 # Sets delay in close() method to allow kernel time to update process status. Time in seconds.
435 self.delayafterclose = 0.1 # Sets delay in close() method to allow kernel time to update process status. Time in seconds.
436 self.delayafterterminate = 0.1 # Sets delay in terminate() method to allow kernel time to update process status. Time in seconds.
436 self.delayafterterminate = 0.1 # Sets delay in terminate() method to allow kernel time to update process status. Time in seconds.
437 self.softspace = False # File-like object.
437 self.softspace = False # File-like object.
438 self.name = '<' + repr(self) + '>' # File-like object.
438 self.name = '<' + repr(self) + '>' # File-like object.
439 self.closed = True # File-like object.
439 self.closed = True # File-like object.
440 self.cwd = cwd
440 self.cwd = cwd
441 self.env = env
441 self.env = env
442 self.__irix_hack = (sys.platform.lower().find('irix')>=0) # This flags if we are running on irix
442 self.__irix_hack = (sys.platform.lower().find('irix')>=0) # This flags if we are running on irix
443 # Solaris uses internal __fork_pty(). All others use pty.fork().
443 # Solaris uses internal __fork_pty(). All others use pty.fork().
444 if 'solaris' in sys.platform.lower() or 'sunos5' in sys.platform.lower():
444 if 'solaris' in sys.platform.lower() or 'sunos5' in sys.platform.lower():
445 self.use_native_pty_fork = False
445 self.use_native_pty_fork = False
446 else:
446 else:
447 self.use_native_pty_fork = True
447 self.use_native_pty_fork = True
448
448
449
449
450 # allow dummy instances for subclasses that may not use command or args.
450 # allow dummy instances for subclasses that may not use command or args.
451 if command is None:
451 if command is None:
452 self.command = None
452 self.command = None
453 self.args = None
453 self.args = None
454 self.name = '<pexpect factory incomplete>'
454 self.name = '<pexpect factory incomplete>'
455 else:
455 else:
456 self._spawn (command, args)
456 self._spawn (command, args)
457
457
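Following up on the exit-status notes above, a short sketch (not from the source) of reading the status once the child has finished::

    import os
    import pexpect

    child = pexpect.spawn('ls -l /tmp')
    child.expect(pexpect.EOF)      # drain output until the child exits
    child.close()                  # fills in exitstatus, signalstatus, status

    if child.exitstatus is not None:
        print('exited normally with code %s' % child.exitstatus)
    else:
        print('terminated by signal %s' % child.signalstatus)

    # The raw os.waitpid() status is kept too; it can be decoded with
    # os.WIFEXITED/os.WEXITSTATUS (or os.WIFSIGNALED/os.WTERMSIG).
    if os.WIFEXITED(child.status):
        assert os.WEXITSTATUS(child.status) == child.exitstatus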
458 def __del__(self):
458 def __del__(self):
459
459
460 """This makes sure that no system resources are left open. Python only
460 """This makes sure that no system resources are left open. Python only
461 garbage collects Python objects. OS file descriptors are not Python
461 garbage collects Python objects. OS file descriptors are not Python
462 objects, so they must be handled explicitly. If the child file
462 objects, so they must be handled explicitly. If the child file
463 descriptor was opened outside of this class (passed to the constructor)
463 descriptor was opened outside of this class (passed to the constructor)
464 then this does not close it. """
464 then this does not close it. """
465
465
466 if not self.closed:
466 if not self.closed:
467 # It is possible for __del__ methods to execute during the
467 # It is possible for __del__ methods to execute during the
468 # teardown of the Python VM itself. Thus self.close() may
468 # teardown of the Python VM itself. Thus self.close() may
469 # trigger an exception because os.close may be None.
469 # trigger an exception because os.close may be None.
470 # -- Fernando Perez
470 # -- Fernando Perez
471 try:
471 try:
472 self.close()
472 self.close()
473 except:
473 except:
474 pass
474 pass
475
475
476 def __str__(self):
476 def __str__(self):
477
477
478 """This returns a human-readable string that represents the state of
478 """This returns a human-readable string that represents the state of
479 the object. """
479 the object. """
480
480
481 s = []
481 s = []
482 s.append(repr(self))
482 s.append(repr(self))
483 s.append('version: ' + __version__)
483 s.append('version: ' + __version__)
484 s.append('command: ' + str(self.command))
484 s.append('command: ' + str(self.command))
485 s.append('args: ' + str(self.args))
485 s.append('args: ' + str(self.args))
486 s.append('searcher: ' + str(self.searcher))
486 s.append('searcher: ' + str(self.searcher))
487 s.append('buffer (last 100 chars): ' + str(self.buffer)[-100:])
487 s.append('buffer (last 100 chars): ' + str(self.buffer)[-100:])
488 s.append('before (last 100 chars): ' + str(self.before)[-100:])
488 s.append('before (last 100 chars): ' + str(self.before)[-100:])
489 s.append('after: ' + str(self.after))
489 s.append('after: ' + str(self.after))
490 s.append('match: ' + str(self.match))
490 s.append('match: ' + str(self.match))
491 s.append('match_index: ' + str(self.match_index))
491 s.append('match_index: ' + str(self.match_index))
492 s.append('exitstatus: ' + str(self.exitstatus))
492 s.append('exitstatus: ' + str(self.exitstatus))
493 s.append('flag_eof: ' + str(self.flag_eof))
493 s.append('flag_eof: ' + str(self.flag_eof))
494 s.append('pid: ' + str(self.pid))
494 s.append('pid: ' + str(self.pid))
495 s.append('child_fd: ' + str(self.child_fd))
495 s.append('child_fd: ' + str(self.child_fd))
496 s.append('closed: ' + str(self.closed))
496 s.append('closed: ' + str(self.closed))
497 s.append('timeout: ' + str(self.timeout))
497 s.append('timeout: ' + str(self.timeout))
498 s.append('delimiter: ' + str(self.delimiter))
498 s.append('delimiter: ' + str(self.delimiter))
499 s.append('logfile: ' + str(self.logfile))
499 s.append('logfile: ' + str(self.logfile))
500 s.append('logfile_read: ' + str(self.logfile_read))
500 s.append('logfile_read: ' + str(self.logfile_read))
501 s.append('logfile_send: ' + str(self.logfile_send))
501 s.append('logfile_send: ' + str(self.logfile_send))
502 s.append('maxread: ' + str(self.maxread))
502 s.append('maxread: ' + str(self.maxread))
503 s.append('ignorecase: ' + str(self.ignorecase))
503 s.append('ignorecase: ' + str(self.ignorecase))
504 s.append('searchwindowsize: ' + str(self.searchwindowsize))
504 s.append('searchwindowsize: ' + str(self.searchwindowsize))
505 s.append('delaybeforesend: ' + str(self.delaybeforesend))
505 s.append('delaybeforesend: ' + str(self.delaybeforesend))
506 s.append('delayafterclose: ' + str(self.delayafterclose))
506 s.append('delayafterclose: ' + str(self.delayafterclose))
507 s.append('delayafterterminate: ' + str(self.delayafterterminate))
507 s.append('delayafterterminate: ' + str(self.delayafterterminate))
508 return '\n'.join(s)
508 return '\n'.join(s)
509
509
510 def _spawn(self,command,args=[]):
510 def _spawn(self,command,args=[]):
511
511
512 """This starts the given command in a child process. This does all the
512 """This starts the given command in a child process. This does all the
513 fork/exec type of stuff for a pty. This is called by __init__. If args
513 fork/exec type of stuff for a pty. This is called by __init__. If args
514 is empty then command will be parsed (split on spaces) and args will be
514 is empty then command will be parsed (split on spaces) and args will be
515 set to parsed arguments. """
515 set to parsed arguments. """
516
516
517 # The pid and child_fd of this object get set by this method.
517 # The pid and child_fd of this object get set by this method.
518 # Note that it is difficult for this method to fail.
518 # Note that it is difficult for this method to fail.
519 # You cannot detect if the child process cannot start.
519 # You cannot detect if the child process cannot start.
520 # So the only way you can tell if the child process started
520 # So the only way you can tell if the child process started
521 # or not is to try to read from the file descriptor. If you get
521 # or not is to try to read from the file descriptor. If you get
522 # EOF immediately then it means that the child is already dead.
522 # EOF immediately then it means that the child is already dead.
523 # That may not necessarily be bad because you may have spawned a child
523 # That may not necessarily be bad because you may have spawned a child
524 # that performs some task; creates no stdout output; and then dies.
524 # that performs some task; creates no stdout output; and then dies.
525
525
526 # If command is an int type then it may represent a file descriptor.
526 # If command is an int type then it may represent a file descriptor.
527 if type(command) == type(0):
527 if type(command) == type(0):
528 raise ExceptionPexpect ('Command is an int type. If this is a file descriptor then maybe you want to use fdpexpect.fdspawn which takes an existing file descriptor instead of a command string.')
528 raise ExceptionPexpect ('Command is an int type. If this is a file descriptor then maybe you want to use fdpexpect.fdspawn which takes an existing file descriptor instead of a command string.')
529
529
530 if type (args) != type([]):
530 if type (args) != type([]):
531 raise TypeError ('The argument, args, must be a list.')
531 raise TypeError ('The argument, args, must be a list.')
532
532
533 if args == []:
533 if args == []:
534 self.args = split_command_line(command)
534 self.args = split_command_line(command)
535 self.command = self.args[0]
535 self.command = self.args[0]
536 else:
536 else:
537 self.args = args[:] # work with a copy
537 self.args = args[:] # work with a copy
538 self.args.insert (0, command)
538 self.args.insert (0, command)
539 self.command = command
539 self.command = command
540
540
541 command_with_path = which(self.command)
541 command_with_path = which(self.command)
542 if command_with_path is None:
542 if command_with_path is None:
543 raise ExceptionPexpect ('The command was not found or was not executable: %s.' % self.command)
543 raise ExceptionPexpect ('The command was not found or was not executable: %s.' % self.command)
544 self.command = command_with_path
544 self.command = command_with_path
545 self.args[0] = self.command
545 self.args[0] = self.command
546
546
547 self.name = '<' + ' '.join (self.args) + '>'
547 self.name = '<' + ' '.join (self.args) + '>'
548
548
549 assert self.pid is None, 'The pid member should be None.'
549 assert self.pid is None, 'The pid member should be None.'
550 assert self.command is not None, 'The command member should not be None.'
550 assert self.command is not None, 'The command member should not be None.'
551
551
552 if self.use_native_pty_fork:
552 if self.use_native_pty_fork:
553 try:
553 try:
554 self.pid, self.child_fd = pty.fork()
554 self.pid, self.child_fd = pty.fork()
555 except OSError as e:
555 except OSError as e:
556 raise ExceptionPexpect('Error! pty.fork() failed: ' + str(e))
556 raise ExceptionPexpect('Error! pty.fork() failed: ' + str(e))
557 else: # Use internal __fork_pty
557 else: # Use internal __fork_pty
558 self.pid, self.child_fd = self.__fork_pty()
558 self.pid, self.child_fd = self.__fork_pty()
559
559
560 if self.pid == 0: # Child
560 if self.pid == 0: # Child
561 try:
561 try:
562 self.child_fd = sys.stdout.fileno() # used by setwinsize()
562 self.child_fd = sys.stdout.fileno() # used by setwinsize()
563 self.setwinsize(24, 80)
563 self.setwinsize(24, 80)
564 except:
564 except:
565 # Some platforms do not like setwinsize (Cygwin).
565 # Some platforms do not like setwinsize (Cygwin).
566 # This will cause problems when running applications that
566 # This will cause problems when running applications that
567 # are very picky about window size.
567 # are very picky about window size.
568 # This is a serious limitation, but not a show stopper.
568 # This is a serious limitation, but not a show stopper.
569 pass
569 pass
570 # Do not allow child to inherit open file descriptors from parent.
570 # Do not allow child to inherit open file descriptors from parent.
571 max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
571 max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
572 for i in range (3, max_fd):
572 for i in range (3, max_fd):
573 try:
573 try:
574 os.close (i)
574 os.close (i)
575 except OSError:
575 except OSError:
576 pass
576 pass
577
577
578 # I don't know why this works, but ignoring SIGHUP fixes a
578 # I don't know why this works, but ignoring SIGHUP fixes a
579 # problem when trying to start a Java daemon with sudo
579 # problem when trying to start a Java daemon with sudo
580 # (specifically, Tomcat).
580 # (specifically, Tomcat).
581 signal.signal(signal.SIGHUP, signal.SIG_IGN)
581 signal.signal(signal.SIGHUP, signal.SIG_IGN)
582
582
583 if self.cwd is not None:
583 if self.cwd is not None:
584 os.chdir(self.cwd)
584 os.chdir(self.cwd)
585 if self.env is None:
585 if self.env is None:
586 os.execv(self.command, self.args)
586 os.execv(self.command, self.args)
587 else:
587 else:
588 os.execvpe(self.command, self.args, self.env)
588 os.execvpe(self.command, self.args, self.env)
589
589
590 # Parent
590 # Parent
591 self.terminated = False
591 self.terminated = False
592 self.closed = False
592 self.closed = False
593
593
594 def __fork_pty(self):
594 def __fork_pty(self):
595
595
596 """This implements a substitute for the forkpty system call. This
596 """This implements a substitute for the forkpty system call. This
597 should be more portable than the pty.fork() function. Specifically,
597 should be more portable than the pty.fork() function. Specifically,
598 this should work on Solaris.
598 this should work on Solaris.
599
599
600 Modified 10.06.05 by Geoff Marshall: Implemented __fork_pty() method to
600 Modified 10.06.05 by Geoff Marshall: Implemented __fork_pty() method to
601 resolve the issue with Python's pty.fork() not supporting Solaris,
601 resolve the issue with Python's pty.fork() not supporting Solaris,
602 particularly ssh. Based on patch to posixmodule.c authored by Noah
602 particularly ssh. Based on patch to posixmodule.c authored by Noah
603 Spurrier::
603 Spurrier::
604
604
605 http://mail.python.org/pipermail/python-dev/2003-May/035281.html
605 http://mail.python.org/pipermail/python-dev/2003-May/035281.html
606
606
607 """
607 """
608
608
609 parent_fd, child_fd = os.openpty()
609 parent_fd, child_fd = os.openpty()
610 if parent_fd < 0 or child_fd < 0:
610 if parent_fd < 0 or child_fd < 0:
611 raise ExceptionPexpect("Error! Could not open pty with os.openpty().")
611 raise ExceptionPexpect("Error! Could not open pty with os.openpty().")
612
612
613 pid = os.fork()
613 pid = os.fork()
614 if pid < 0:
614 if pid < 0:
615 raise ExceptionPexpect("Error! Failed os.fork().")
615 raise ExceptionPexpect("Error! Failed os.fork().")
616 elif pid == 0:
616 elif pid == 0:
617 # Child.
617 # Child.
618 os.close(parent_fd)
618 os.close(parent_fd)
619 self.__pty_make_controlling_tty(child_fd)
619 self.__pty_make_controlling_tty(child_fd)
620
620
621 os.dup2(child_fd, 0)
621 os.dup2(child_fd, 0)
622 os.dup2(child_fd, 1)
622 os.dup2(child_fd, 1)
623 os.dup2(child_fd, 2)
623 os.dup2(child_fd, 2)
624
624
625 if child_fd > 2:
625 if child_fd > 2:
626 os.close(child_fd)
626 os.close(child_fd)
627 else:
627 else:
628 # Parent.
628 # Parent.
629 os.close(child_fd)
629 os.close(child_fd)
630
630
631 return pid, parent_fd
631 return pid, parent_fd
632
632
633 def __pty_make_controlling_tty(self, tty_fd):
633 def __pty_make_controlling_tty(self, tty_fd):
634
634
635 """This makes the pseudo-terminal the controlling tty. This should be
635 """This makes the pseudo-terminal the controlling tty. This should be
636 more portable than the pty.fork() function. Specifically, this should
636 more portable than the pty.fork() function. Specifically, this should
637 work on Solaris. """
637 work on Solaris. """
638
638
639 child_name = os.ttyname(tty_fd)
639 child_name = os.ttyname(tty_fd)
640
640
641 # Disconnect from controlling tty. Harmless if not already connected.
641 # Disconnect from controlling tty. Harmless if not already connected.
642 try:
642 try:
643 fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
643 fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
644 if fd >= 0:
644 if fd >= 0:
645 os.close(fd)
645 os.close(fd)
646 except:
646 except:
647 # Already disconnected. This happens if running inside cron.
647 # Already disconnected. This happens if running inside cron.
648 pass
648 pass
649
649
650 os.setsid()
650 os.setsid()
651
651
652 # Verify we are disconnected from controlling tty
652 # Verify we are disconnected from controlling tty
653 # by attempting to open it again.
653 # by attempting to open it again.
654 try:
654 try:
655 fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
655 fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
656 if fd >= 0:
656 if fd >= 0:
657 os.close(fd)
657 os.close(fd)
658 raise ExceptionPexpect("Error! Failed to disconnect from controlling tty. It is still possible to open /dev/tty.")
658 raise ExceptionPexpect("Error! Failed to disconnect from controlling tty. It is still possible to open /dev/tty.")
659 except:
659 except:
660 # Good! We are disconnected from a controlling tty.
660 # Good! We are disconnected from a controlling tty.
661 pass
661 pass
662
662
663 # Verify we can open child pty.
663 # Verify we can open child pty.
664 fd = os.open(child_name, os.O_RDWR);
664 fd = os.open(child_name, os.O_RDWR);
665 if fd < 0:
665 if fd < 0:
666 raise ExceptionPexpect("Error! Could not open child pty, " + child_name)
666 raise ExceptionPexpect("Error! Could not open child pty, " + child_name)
667 else:
667 else:
668 os.close(fd)
668 os.close(fd)
669
669
670 # Verify we now have a controlling tty.
670 # Verify we now have a controlling tty.
671 fd = os.open("/dev/tty", os.O_WRONLY)
671 fd = os.open("/dev/tty", os.O_WRONLY)
672 if fd < 0:
672 if fd < 0:
673 raise ExceptionPexpect("Error! Could not open controlling tty, /dev/tty")
673 raise ExceptionPexpect("Error! Could not open controlling tty, /dev/tty")
674 else:
674 else:
675 os.close(fd)
675 os.close(fd)
676
676
677 def fileno (self): # File-like object.
677 def fileno (self): # File-like object.
678
678
679 """This returns the file descriptor of the pty for the child.
679 """This returns the file descriptor of the pty for the child.
680 """
680 """
681
681
682 return self.child_fd
682 return self.child_fd
683
683
684 def close (self, force=True): # File-like object.
684 def close (self, force=True): # File-like object.
685
685
686 """This closes the connection with the child application. Note that
686 """This closes the connection with the child application. Note that
687 calling close() more than once is valid. This emulates standard Python
687 calling close() more than once is valid. This emulates standard Python
688 behavior with files. Set force to True if you want to make sure that
688 behavior with files. Set force to True if you want to make sure that
689 the child is terminated (SIGKILL is sent if the child ignores SIGHUP
689 the child is terminated (SIGKILL is sent if the child ignores SIGHUP
690 and SIGINT). """
690 and SIGINT). """
691
691
692 if not self.closed:
692 if not self.closed:
693 self.flush()
693 self.flush()
694 os.close (self.child_fd)
694 os.close (self.child_fd)
695 time.sleep(self.delayafterclose) # Give kernel time to update process status.
695 time.sleep(self.delayafterclose) # Give kernel time to update process status.
696 if self.isalive():
696 if self.isalive():
697 if not self.terminate(force):
697 if not self.terminate(force):
698 raise ExceptionPexpect ('close() could not terminate the child using terminate()')
698 raise ExceptionPexpect ('close() could not terminate the child using terminate()')
699 self.child_fd = -1
699 self.child_fd = -1
700 self.closed = True
700 self.closed = True
701 #self.pid = None
701 #self.pid = None
702
702
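# A minimal usage sketch of close(), assuming this module is importable as
# `pexpect` (as in the docstring examples) and that '/bin/cat' exists; both
# are illustrative choices.
#
#   child = pexpect.spawn('/bin/cat')
#   child.sendline('hello')
#   child.close(force=True)    # force=True falls back to terminate()/SIGKILL
#   assert child.closed and child.child_fd == -1
#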
703 def flush (self): # File-like object.
703 def flush (self): # File-like object.
704
704
705 """This does nothing. It is here to support the interface for a
705 """This does nothing. It is here to support the interface for a
706 File-like object. """
706 File-like object. """
707
707
708 pass
708 pass
709
709
710 def isatty (self): # File-like object.
710 def isatty (self): # File-like object.
711
711
712 """This returns True if the file descriptor is open and connected to a
712 """This returns True if the file descriptor is open and connected to a
713 tty(-like) device, else False. """
713 tty(-like) device, else False. """
714
714
715 return os.isatty(self.child_fd)
715 return os.isatty(self.child_fd)
716
716
717 def waitnoecho (self, timeout=-1):
717 def waitnoecho (self, timeout=-1):
718
718
719 """This waits until the terminal ECHO flag is set False. This returns
719 """This waits until the terminal ECHO flag is set False. This returns
720 True if the echo mode is off. This returns False if the ECHO flag was
720 True if the echo mode is off. This returns False if the ECHO flag was
721 not set False before the timeout. This can be used to detect when the
721 not set False before the timeout. This can be used to detect when the
722 child is waiting for a password. Usually a child application will turn
722 child is waiting for a password. Usually a child application will turn
723 off echo mode when it is waiting for the user to enter a password. For
723 off echo mode when it is waiting for the user to enter a password. For
724 example, instead of expecting the "password:" prompt you can wait for
724 example, instead of expecting the "password:" prompt you can wait for
725 the child to set ECHO off::
725 the child to set ECHO off::
726
726
727 p = pexpect.spawn ('ssh user@example.com')
727 p = pexpect.spawn ('ssh user@example.com')
728 p.waitnoecho()
728 p.waitnoecho()
729 p.sendline(mypassword)
729 p.sendline(mypassword)
730
730
731 If timeout==-1 then this method will use the value in self.timeout.
731 If timeout==-1 then this method will use the value in self.timeout.
732 If timeout==None then this method will block until the ECHO flag is False.
732 If timeout==None then this method will block until the ECHO flag is False.
733 """
733 """
734
734
735 if timeout == -1:
735 if timeout == -1:
736 timeout = self.timeout
736 timeout = self.timeout
737 if timeout is not None:
737 if timeout is not None:
738 end_time = time.time() + timeout
738 end_time = time.time() + timeout
739 while True:
739 while True:
740 if not self.getecho():
740 if not self.getecho():
741 return True
741 return True
742 if timeout is not None and timeout < 0: # test for None first; None < 0 raises TypeError on Python 3
742 if timeout is not None and timeout < 0: # test for None first; None < 0 raises TypeError on Python 3
743 return False
743 return False
744 if timeout is not None:
744 if timeout is not None:
745 timeout = end_time - time.time()
745 timeout = end_time - time.time()
746 time.sleep(0.1)
746 time.sleep(0.1)
747
747
748 def getecho (self):
748 def getecho (self):
749
749
750 """This returns the terminal echo mode. This returns True if echo is
750 """This returns the terminal echo mode. This returns True if echo is
751 on or False if echo is off. Child applications that are expecting you
751 on or False if echo is off. Child applications that are expecting you
752 to enter a password often set ECHO False. See waitnoecho(). """
752 to enter a password often set ECHO False. See waitnoecho(). """
753
753
754 attr = termios.tcgetattr(self.child_fd)
754 attr = termios.tcgetattr(self.child_fd)
755 if attr[3] & termios.ECHO:
755 if attr[3] & termios.ECHO:
756 return True
756 return True
757 return False
757 return False
758
758
759 def setecho (self, state):
759 def setecho (self, state):
760
760
761 """This sets the terminal echo mode on or off. Note that anything the
761 """This sets the terminal echo mode on or off. Note that anything the
762 child sent before the echo will be lost, so you should be sure that
762 child sent before the echo will be lost, so you should be sure that
763 your input buffer is empty before you call setecho(). For example, the
763 your input buffer is empty before you call setecho(). For example, the
764 following will work as expected::
764 following will work as expected::
765
765
766 p = pexpect.spawn('cat')
766 p = pexpect.spawn('cat')
767 p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
767 p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
768 p.expect (['1234'])
768 p.expect (['1234'])
769 p.expect (['1234'])
769 p.expect (['1234'])
770 p.setecho(False) # Turn off tty echo
770 p.setecho(False) # Turn off tty echo
771 p.sendline ('abcd') # We will see this only once (echoed by cat).
771 p.sendline ('abcd') # We will see this only once (echoed by cat).
772 p.sendline ('wxyz') # We will see this only once (echoed by cat).
772 p.sendline ('wxyz') # We will see this only once (echoed by cat).
773 p.expect (['abcd'])
773 p.expect (['abcd'])
774 p.expect (['wxyz'])
774 p.expect (['wxyz'])
775
775
776 The following WILL NOT WORK because the lines sent before the setecho
776 The following WILL NOT WORK because the lines sent before the setecho
777 will be lost::
777 will be lost::
778
778
779 p = pexpect.spawn('cat')
779 p = pexpect.spawn('cat')
780 p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
780 p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
781 p.setecho(False) # Turn off tty echo
781 p.setecho(False) # Turn off tty echo
782 p.sendline ('abcd') # We will see this only once (echoed by cat).
782 p.sendline ('abcd') # We will see this only once (echoed by cat).
783 p.sendline ('wxyz') # We will see this only once (echoed by cat).
783 p.sendline ('wxyz') # We will see this only once (echoed by cat).
784 p.expect (['1234'])
784 p.expect (['1234'])
785 p.expect (['1234'])
785 p.expect (['1234'])
786 p.expect (['abcd'])
786 p.expect (['abcd'])
787 p.expect (['wxyz'])
787 p.expect (['wxyz'])
788 """
788 """
789
789
790 self.child_fd
790 self.child_fd
791 attr = termios.tcgetattr(self.child_fd)
791 attr = termios.tcgetattr(self.child_fd)
792 if state:
792 if state:
793 attr[3] = attr[3] | termios.ECHO
793 attr[3] = attr[3] | termios.ECHO
794 else:
794 else:
795 attr[3] = attr[3] & ~termios.ECHO
795 attr[3] = attr[3] & ~termios.ECHO
796 # I tried TCSADRAIN and TCSAFLUSH, but these were inconsistent
796 # I tried TCSADRAIN and TCSAFLUSH, but these were inconsistent
797 # and blocked on some platforms. TCSADRAIN is probably ideal if it worked.
797 # and blocked on some platforms. TCSADRAIN is probably ideal if it worked.
798 termios.tcsetattr(self.child_fd, termios.TCSANOW, attr)
798 termios.tcsetattr(self.child_fd, termios.TCSANOW, attr)
799
799
800 def read_nonblocking (self, size = 1, timeout = -1):
800 def read_nonblocking (self, size = 1, timeout = -1):
801
801
802 """This reads at most size bytes from the child application. It
802 """This reads at most size bytes from the child application. It
803 includes a timeout. If the read does not complete within the timeout
803 includes a timeout. If the read does not complete within the timeout
804 period then a TIMEOUT exception is raised. If the end of file is read
804 period then a TIMEOUT exception is raised. If the end of file is read
805 then an EOF exception will be raised. If a log file was set using
805 then an EOF exception will be raised. If a log file was set using
806 setlog() then all data will also be written to the log file.
806 setlog() then all data will also be written to the log file.
807
807
808 If timeout is None then the read may block indefinitely. If timeout is -1
808 If timeout is None then the read may block indefinitely. If timeout is -1
809 then the self.timeout value is used. If timeout is 0 then the child is
809 then the self.timeout value is used. If timeout is 0 then the child is
810 polled and if there was no data immediately ready then this will raise
810 polled and if there was no data immediately ready then this will raise
811 a TIMEOUT exception.
811 a TIMEOUT exception.
812
812
813 The timeout refers only to the amount of time to read at least one
813 The timeout refers only to the amount of time to read at least one
814 character. This is not affected by the 'size' parameter, so if you call
814 character. This is not affected by the 'size' parameter, so if you call
815 read_nonblocking(size=100, timeout=30) and only one character is
815 read_nonblocking(size=100, timeout=30) and only one character is
816 available right away then one character will be returned immediately.
816 available right away then one character will be returned immediately.
817 It will not wait for 30 seconds for another 99 characters to come in.
817 It will not wait for 30 seconds for another 99 characters to come in.
818
818
819 This is a wrapper around os.read(). It uses select.select() to
819 This is a wrapper around os.read(). It uses select.select() to
820 implement the timeout. """
820 implement the timeout. """
821
821
822 if self.closed:
822 if self.closed:
823 raise ValueError ('I/O operation on closed file in read_nonblocking().')
823 raise ValueError ('I/O operation on closed file in read_nonblocking().')
824
824
825 if timeout == -1:
825 if timeout == -1:
826 timeout = self.timeout
826 timeout = self.timeout
827
827
828 # Note that some systems such as Solaris do not give an EOF when
828 # Note that some systems such as Solaris do not give an EOF when
829 # the child dies. In fact, you can still try to read
829 # the child dies. In fact, you can still try to read
830 # from the child_fd -- it will block forever or until TIMEOUT.
830 # from the child_fd -- it will block forever or until TIMEOUT.
831 # For this case, I test isalive() before doing any reading.
831 # For this case, I test isalive() before doing any reading.
832 # If isalive() is false, then I pretend that this is the same as EOF.
832 # If isalive() is false, then I pretend that this is the same as EOF.
833 if not self.isalive():
833 if not self.isalive():
834 r,w,e = self.__select([self.child_fd], [], [], 0) # timeout of 0 means "poll"
834 r,w,e = self.__select([self.child_fd], [], [], 0) # timeout of 0 means "poll"
835 if not r:
835 if not r:
836 self.flag_eof = True
836 self.flag_eof = True
837 raise EOF ('End Of File (EOF) in read_nonblocking(). Braindead platform.')
837 raise EOF ('End Of File (EOF) in read_nonblocking(). Braindead platform.')
838 elif self.__irix_hack:
838 elif self.__irix_hack:
839 # This is a hack for Irix. It seems that Irix requires a long delay before checking isalive.
839 # This is a hack for Irix. It seems that Irix requires a long delay before checking isalive.
840 # This adds a 2 second delay, but only when the child is terminated.
840 # This adds a 2 second delay, but only when the child is terminated.
841 r, w, e = self.__select([self.child_fd], [], [], 2)
841 r, w, e = self.__select([self.child_fd], [], [], 2)
842 if not r and not self.isalive():
842 if not r and not self.isalive():
843 self.flag_eof = True
843 self.flag_eof = True
844 raise EOF ('End Of File (EOF) in read_nonblocking(). Pokey platform.')
844 raise EOF ('End Of File (EOF) in read_nonblocking(). Pokey platform.')
845
845
846 r,w,e = self.__select([self.child_fd], [], [], timeout)
846 r,w,e = self.__select([self.child_fd], [], [], timeout)
847
847
848 if not r:
848 if not r:
849 if not self.isalive():
849 if not self.isalive():
850 # Some platforms, such as Irix, will claim that their processes are alive;
850 # Some platforms, such as Irix, will claim that their processes are alive;
851 # then timeout on the select; and then finally admit that they are not alive.
851 # then timeout on the select; and then finally admit that they are not alive.
852 self.flag_eof = True
852 self.flag_eof = True
853 raise EOF ('End of File (EOF) in read_nonblocking(). Very pokey platform.')
853 raise EOF ('End of File (EOF) in read_nonblocking(). Very pokey platform.')
854 else:
854 else:
855 raise TIMEOUT ('Timeout exceeded in read_nonblocking().')
855 raise TIMEOUT ('Timeout exceeded in read_nonblocking().')
856
856
857 if self.child_fd in r:
857 if self.child_fd in r:
858 try:
858 try:
859 s = os.read(self.child_fd, size)
859 s = os.read(self.child_fd, size)
860 except OSError as e: # Linux does this
860 except OSError as e: # Linux does this
861 self.flag_eof = True
861 self.flag_eof = True
862 raise EOF ('End Of File (EOF) in read_nonblocking(). Exception style platform.')
862 raise EOF ('End Of File (EOF) in read_nonblocking(). Exception style platform.')
863 if s == b'': # BSD style
863 if s == b'': # BSD style
864 self.flag_eof = True
864 self.flag_eof = True
865 raise EOF ('End Of File (EOF) in read_nonblocking(). Empty string style platform.')
865 raise EOF ('End Of File (EOF) in read_nonblocking(). Empty string style platform.')
866
866
867 s2 = self._cast_buffer_type(s)
867 s2 = self._cast_buffer_type(s)
868 if self.logfile is not None:
868 if self.logfile is not None:
869 self.logfile.write(s2)
869 self.logfile.write(s2)
870 self.logfile.flush()
870 self.logfile.flush()
871 if self.logfile_read is not None:
871 if self.logfile_read is not None:
872 self.logfile_read.write(s2)
872 self.logfile_read.write(s2)
873 self.logfile_read.flush()
873 self.logfile_read.flush()
874
874
875 return s
875 return s
876
876
877 raise ExceptionPexpect ('Reached an unexpected state in read_nonblocking().')
877 raise ExceptionPexpect ('Reached an unexpected state in read_nonblocking().')
878
878
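# A small sketch of the timeout behaviour described above, assuming a spawn
# instance `child`; TIMEOUT and EOF are the exception classes defined in this
# module, and the size/timeout values are illustrative:
#
#   try:
#       chunk = child.read_nonblocking(size=100, timeout=0)   # poll only
#   except TIMEOUT:
#       chunk = None    # nothing was ready right away
#   except EOF:
#       chunk = None    # the child closed its end of the pty
#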
879 def read (self, size = -1): # File-like object.
879 def read (self, size = -1): # File-like object.
880 """This reads at most "size" bytes from the file (less if the read hits
880 """This reads at most "size" bytes from the file (less if the read hits
881 EOF before obtaining size bytes). If the size argument is negative or
881 EOF before obtaining size bytes). If the size argument is negative or
882 omitted, read all data until EOF is reached. The bytes are returned as
882 omitted, read all data until EOF is reached. The bytes are returned as
883 a string object. An empty string is returned when EOF is encountered
883 a string object. An empty string is returned when EOF is encountered
884 immediately. """
884 immediately. """
885
885
886 if size == 0:
886 if size == 0:
887 return self._empty_buffer
887 return self._empty_buffer
888 if size < 0:
888 if size < 0:
889 self.expect (self.delimiter) # delimiter default is EOF
889 self.expect (self.delimiter) # delimiter default is EOF
890 return self.before
890 return self.before
891
891
892 # I could have done this more directly by not using expect(), but
892 # I could have done this more directly by not using expect(), but
893 # I deliberately decided to couple read() to expect() so that
893 # I deliberately decided to couple read() to expect() so that
894 # I would catch any bugs early and ensure consistent behavior.
894 # I would catch any bugs early and ensure consistent behavior.
895 # It's a little less efficient, but there is less for me to
895 # It's a little less efficient, but there is less for me to
896 # worry about if I have to later modify read() or expect().
896 # worry about if I have to later modify read() or expect().
897 # Note, it's OK if size==-1 in the regex. That just means it
897 # Note, it's OK if size==-1 in the regex. That just means it
898 # will never match anything in which case we stop only on EOF.
898 # will never match anything in which case we stop only on EOF.
899 if self._buffer_type is bytes:
899 if self._buffer_type is bytes:
900 pat = (u'.{%d}' % size).encode('ascii')
900 pat = (u'.{%d}' % size).encode('ascii')
901 else:
901 else:
902 pat = u'.{%d}' % size
902 pat = u'.{%d}' % size
903 cre = re.compile(pat, re.DOTALL)
903 cre = re.compile(pat, re.DOTALL)
904 index = self.expect ([cre, self.delimiter]) # delimiter default is EOF
904 index = self.expect ([cre, self.delimiter]) # delimiter default is EOF
905 if index == 0:
905 if index == 0:
906 return self.after ### self.before should be ''. Should I assert this?
906 return self.after ### self.before should be ''. Should I assert this?
907 return self.before
907 return self.before
908
908
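# A short sketch of read() as documented above, with an illustrative
# short-lived command:
#
#   child = spawn('/bin/ls')
#   everything = child.read()   # size omitted/negative: expect EOF, return all output
#   # child.read(10) on a fresh child would return at most 10 bytes,
#   # implemented via expect() on a '.{10}' pattern as the comments above explain.
#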
909 def readline(self, size = -1):
909 def readline(self, size = -1):
910 """This reads and returns one entire line. A trailing newline is kept
910 """This reads and returns one entire line. A trailing newline is kept
911 in the string, but may be absent when a file ends with an incomplete
911 in the string, but may be absent when a file ends with an incomplete
912 line. Note: This readline() looks for a \\r\\n pair even on UNIX
912 line. Note: This readline() looks for a \\r\\n pair even on UNIX
913 because this is what the pseudo tty device returns. So contrary to what
913 because this is what the pseudo tty device returns. So contrary to what
914 you may expect you will receive the newline as \\r\\n. An empty string
914 you may expect you will receive the newline as \\r\\n. An empty string
915 is returned when EOF is hit immediately. Currently, the size argument is
915 is returned when EOF is hit immediately. Currently, the size argument is
916 mostly ignored, so this behavior is not standard for a file-like
916 mostly ignored, so this behavior is not standard for a file-like
917 object. If size is 0 then an empty string is returned. """
917 object. If size is 0 then an empty string is returned. """
918
918
919 if size == 0:
919 if size == 0:
920 return self._empty_buffer
920 return self._empty_buffer
921 index = self.expect ([self._pty_newline, self.delimiter]) # delimiter default is EOF
921 index = self.expect ([self._pty_newline, self.delimiter]) # delimiter default is EOF
922 if index == 0:
922 if index == 0:
923 return self.before + self._pty_newline
923 return self.before + self._pty_newline
924 return self.before
924 return self.before
925
925
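# A short sketch of the '\r\n' behaviour noted above, using an illustrative
# command:
#
#   child = spawn('echo hello')
#   line = child.readline()
#   # On a pty the newline arrives as '\r\n', so `line` ends with '\r\n',
#   # not the plain '\n' a regular file object would return.
#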
926 def __iter__ (self): # File-like object.
926 def __iter__ (self): # File-like object.
927
927
928 """This is to support iterators over a file-like object.
928 """This is to support iterators over a file-like object.
929 """
929 """
930
930
931 return self
931 return self
932
932
933 def next (self): # File-like object.
933 def __next__ (self): # File-like object.
934
934
935 """This is to support iterators over a file-like object.
935 """This is to support iterators over a file-like object.
936 """
936 """
937
937
938 result = self.readline()
938 result = self.readline()
939 if result == self._empty_buffer:
939 if result == self._empty_buffer:
940 raise StopIteration
940 raise StopIteration
941 return result
941 return result
942
942
943 if not PY3:
944 next = __next__ # File-like object.
945
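# Because __iter__()/__next__() are defined (with a `next` alias kept for
# Python 2), a spawn instance can be looped over line by line. A minimal
# sketch with an illustrative command; `process` is a placeholder for user
# code:
#
#   child = spawn('/bin/ls')
#   for line in child:      # each item comes from readline()
#       process(line)
#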
943 def readlines (self, sizehint = -1): # File-like object.
946 def readlines (self, sizehint = -1): # File-like object.
944
947
945 """This reads until EOF using readline() and returns a list containing
948 """This reads until EOF using readline() and returns a list containing
946 the lines thus read. The optional "sizehint" argument is ignored. """
949 the lines thus read. The optional "sizehint" argument is ignored. """
947
950
948 lines = []
951 lines = []
949 while True:
952 while True:
950 line = self.readline()
953 line = self.readline()
951 if not line:
954 if not line:
952 break
955 break
953 lines.append(line)
956 lines.append(line)
954 return lines
957 return lines
955
958
956 def write(self, s): # File-like object.
959 def write(self, s): # File-like object.
957
960
958 """This is similar to send() except that there is no return value.
961 """This is similar to send() except that there is no return value.
959 """
962 """
960
963
961 self.send (s)
964 self.send (s)
962
965
963 def writelines (self, sequence): # File-like object.
966 def writelines (self, sequence): # File-like object.
964
967
965 """This calls write() for each element in the sequence. The sequence
968 """This calls write() for each element in the sequence. The sequence
966 can be any iterable object producing strings, typically a list of
969 can be any iterable object producing strings, typically a list of
967 strings. This does not add line separators. There is no return value.
970 strings. This does not add line separators. There is no return value.
968 """
971 """
969
972
970 for s in sequence:
973 for s in sequence:
971 self.write (s)
974 self.write (s)
972
975
973 def send(self, s):
976 def send(self, s):
974
977
975 """This sends a string to the child process. This returns the number of
978 """This sends a string to the child process. This returns the number of
976 bytes written. If a log file was set then the data is also written to
979 bytes written. If a log file was set then the data is also written to
977 the log. """
980 the log. """
978
981
979 time.sleep(self.delaybeforesend)
982 time.sleep(self.delaybeforesend)
980
983
981 s2 = self._cast_buffer_type(s)
984 s2 = self._cast_buffer_type(s)
982 if self.logfile is not None:
985 if self.logfile is not None:
983 self.logfile.write(s2)
986 self.logfile.write(s2)
984 self.logfile.flush()
987 self.logfile.flush()
985 if self.logfile_send is not None:
988 if self.logfile_send is not None:
986 self.logfile_send.write(s2)
989 self.logfile_send.write(s2)
987 self.logfile_send.flush()
990 self.logfile_send.flush()
988 c = os.write (self.child_fd, _cast_bytes(s, self.encoding))
991 c = os.write (self.child_fd, _cast_bytes(s, self.encoding))
989 return c
992 return c
990
993
991 def sendline(self, s=''):
994 def sendline(self, s=''):
992
995
993 """This is like send(), but it adds a line feed (os.linesep). This
996 """This is like send(), but it adds a line feed (os.linesep). This
994 returns the number of bytes written. """
997 returns the number of bytes written. """
995
998
996 n = self.send (s)
999 n = self.send (s)
997 n = n + self.send (os.linesep)
1000 n = n + self.send (os.linesep)
998 return n
1001 return n
999
1002
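# sendline() is just send() plus os.linesep, so its return value counts the
# separator bytes too. A tiny sketch, assuming a spawn instance `child` on a
# typical POSIX system:
#
#   n = child.send('abc')        # n == 3
#   m = child.sendline('abc')    # m == 3 + len(os.linesep)
#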
1000 def sendcontrol(self, char):
1003 def sendcontrol(self, char):
1001
1004
1002 """This sends a control character to the child such as Ctrl-C or
1005 """This sends a control character to the child such as Ctrl-C or
1003 Ctrl-D. For example, to send a Ctrl-G (ASCII 7)::
1006 Ctrl-D. For example, to send a Ctrl-G (ASCII 7)::
1004
1007
1005 child.sendcontrol('g')
1008 child.sendcontrol('g')
1006
1009
1007 See also, sendintr() and sendeof().
1010 See also, sendintr() and sendeof().
1008 """
1011 """
1009
1012
1010 char = char.lower()
1013 char = char.lower()
1011 a = ord(char)
1014 a = ord(char)
1012 if a>=97 and a<=122:
1015 if a>=97 and a<=122:
1013 a = a - ord('a') + 1
1016 a = a - ord('a') + 1
1014 return self.send (chr(a))
1017 return self.send (chr(a))
1015 d = {'@':0, '`':0,
1018 d = {'@':0, '`':0,
1016 '[':27, '{':27,
1019 '[':27, '{':27,
1017 '\\':28, '|':28,
1020 '\\':28, '|':28,
1018 ']':29, '}': 29,
1021 ']':29, '}': 29,
1019 '^':30, '~':30,
1022 '^':30, '~':30,
1020 '_':31,
1023 '_':31,
1021 '?':127}
1024 '?':127}
1022 if char not in d:
1025 if char not in d:
1023 return 0
1026 return 0
1024 return self.send (chr(d[char]))
1027 return self.send (chr(d[char]))
1025
1028
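# The arithmetic above maps letters onto control codes: ord('g') - ord('a') + 1
# is 7, so sendcontrol('g') transmits chr(7) (BEL), and sendcontrol('c') sends
# chr(3), the usual interrupt character. A short sketch, assuming `child`:
#
#   child.sendcontrol('c')   # same byte as pressing Ctrl-C at a terminal
#   child.sendcontrol('d')   # chr(4), i.e. Ctrl-D / EOT
#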
1026 def sendeof(self):
1029 def sendeof(self):
1027
1030
1028 """This sends an EOF to the child. This sends a character which causes
1031 """This sends an EOF to the child. This sends a character which causes
1029 the pending parent output buffer to be sent to the waiting child
1032 the pending parent output buffer to be sent to the waiting child
1030 program without waiting for end-of-line. If it is the first character
1033 program without waiting for end-of-line. If it is the first character
1031 of the line, the read() in the user program returns 0, which signifies
1034 of the line, the read() in the user program returns 0, which signifies
1032 end-of-file. This means that, to work as expected, sendeof() has to be
1035 end-of-file. This means that, to work as expected, sendeof() has to be
1033 called at the beginning of a line. This method does not send a newline.
1036 called at the beginning of a line. This method does not send a newline.
1034 It is the responsibility of the caller to ensure the eof is sent at the
1037 It is the responsibility of the caller to ensure the eof is sent at the
1035 beginning of a line. """
1038 beginning of a line. """
1036
1039
1037 ### Hmmm... how do I send an EOF?
1040 ### Hmmm... how do I send an EOF?
1038 ###C if ((m = write(pty, *buf, p - *buf)) < 0)
1041 ###C if ((m = write(pty, *buf, p - *buf)) < 0)
1039 ###C return (errno == EWOULDBLOCK) ? n : -1;
1042 ###C return (errno == EWOULDBLOCK) ? n : -1;
1040 #fd = sys.stdin.fileno()
1043 #fd = sys.stdin.fileno()
1041 #old = termios.tcgetattr(fd) # remember current state
1044 #old = termios.tcgetattr(fd) # remember current state
1042 #attr = termios.tcgetattr(fd)
1045 #attr = termios.tcgetattr(fd)
1043 #attr[3] = attr[3] | termios.ICANON # ICANON must be set to recognize EOF
1046 #attr[3] = attr[3] | termios.ICANON # ICANON must be set to recognize EOF
1044 #try: # use try/finally to ensure state gets restored
1047 #try: # use try/finally to ensure state gets restored
1045 # termios.tcsetattr(fd, termios.TCSADRAIN, attr)
1048 # termios.tcsetattr(fd, termios.TCSADRAIN, attr)
1046 # if hasattr(termios, 'CEOF'):
1049 # if hasattr(termios, 'CEOF'):
1047 # os.write (self.child_fd, '%c' % termios.CEOF)
1050 # os.write (self.child_fd, '%c' % termios.CEOF)
1048 # else:
1051 # else:
1049 # # Silly platform does not define CEOF so assume CTRL-D
1052 # # Silly platform does not define CEOF so assume CTRL-D
1050 # os.write (self.child_fd, '%c' % 4)
1053 # os.write (self.child_fd, '%c' % 4)
1051 #finally: # restore state
1054 #finally: # restore state
1052 # termios.tcsetattr(fd, termios.TCSADRAIN, old)
1055 # termios.tcsetattr(fd, termios.TCSADRAIN, old)
1053 if hasattr(termios, 'VEOF'):
1056 if hasattr(termios, 'VEOF'):
1054 char = termios.tcgetattr(self.child_fd)[6][termios.VEOF]
1057 char = termios.tcgetattr(self.child_fd)[6][termios.VEOF]
1055 else:
1058 else:
1056 # platform does not define VEOF so assume CTRL-D
1059 # platform does not define VEOF so assume CTRL-D
1057 char = chr(4)
1060 char = chr(4)
1058 self.send(char)
1061 self.send(char)
1059
1062
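# A minimal sketch of sendeof(), using 'cat' as an illustrative child: at the
# start of an input line the EOF character makes cat's read() return 0, so cat
# exits and the spawn object then sees EOF.
#
#   child = spawn('cat')
#   child.sendline('last line')    # sendline ends the input line, so...
#   child.sendeof()                # ...the EOF character is first on a new line
#   child.expect(EOF)              # cat exits once its read() returns 0
#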
1060 def sendintr(self):
1063 def sendintr(self):
1061
1064
1062 """This sends a SIGINT to the child. It does not require
1065 """This sends a SIGINT to the child. It does not require
1063 the SIGINT to be the first character on a line. """
1066 the SIGINT to be the first character on a line. """
1064
1067
1065 if hasattr(termios, 'VINTR'):
1068 if hasattr(termios, 'VINTR'):
1066 char = termios.tcgetattr(self.child_fd)[6][termios.VINTR]
1069 char = termios.tcgetattr(self.child_fd)[6][termios.VINTR]
1067 else:
1070 else:
1068 # platform does not define VINTR so assume CTRL-C
1071 # platform does not define VINTR so assume CTRL-C
1069 char = chr(3)
1072 char = chr(3)
1070 self.send (char)
1073 self.send (char)
1071
1074
1072 def eof (self):
1075 def eof (self):
1073
1076
1074 """This returns True if the EOF exception was ever raised.
1077 """This returns True if the EOF exception was ever raised.
1075 """
1078 """
1076
1079
1077 return self.flag_eof
1080 return self.flag_eof
1078
1081
1079 def terminate(self, force=False):
1082 def terminate(self, force=False):
1080
1083
1081 """This forces a child process to terminate. It starts nicely with
1084 """This forces a child process to terminate. It starts nicely with
1082 SIGHUP and SIGINT. If "force" is True then it moves on to SIGKILL. This
1085 SIGHUP and SIGINT. If "force" is True then it moves on to SIGKILL. This
1083 returns True if the child was terminated. This returns False if the
1086 returns True if the child was terminated. This returns False if the
1084 child could not be terminated. """
1087 child could not be terminated. """
1085
1088
1086 if not self.isalive():
1089 if not self.isalive():
1087 return True
1090 return True
1088 try:
1091 try:
1089 self.kill(signal.SIGHUP)
1092 self.kill(signal.SIGHUP)
1090 time.sleep(self.delayafterterminate)
1093 time.sleep(self.delayafterterminate)
1091 if not self.isalive():
1094 if not self.isalive():
1092 return True
1095 return True
1093 self.kill(signal.SIGCONT)
1096 self.kill(signal.SIGCONT)
1094 time.sleep(self.delayafterterminate)
1097 time.sleep(self.delayafterterminate)
1095 if not self.isalive():
1098 if not self.isalive():
1096 return True
1099 return True
1097 self.kill(signal.SIGINT)
1100 self.kill(signal.SIGINT)
1098 time.sleep(self.delayafterterminate)
1101 time.sleep(self.delayafterterminate)
1099 if not self.isalive():
1102 if not self.isalive():
1100 return True
1103 return True
1101 if force:
1104 if force:
1102 self.kill(signal.SIGKILL)
1105 self.kill(signal.SIGKILL)
1103 time.sleep(self.delayafterterminate)
1106 time.sleep(self.delayafterterminate)
1104 if not self.isalive():
1107 if not self.isalive():
1105 return True
1108 return True
1106 else:
1109 else:
1107 return False
1110 return False
1108 return False
1111 return False
1109 except OSError as e:
1112 except OSError as e:
1110 # I think there are kernel timing issues that sometimes cause
1113 # I think there are kernel timing issues that sometimes cause
1111 # this to happen. I think isalive() reports True, but the
1114 # this to happen. I think isalive() reports True, but the
1112 # process is dead to the kernel.
1115 # process is dead to the kernel.
1113 # Make one last attempt to see if the kernel is up to date.
1116 # Make one last attempt to see if the kernel is up to date.
1114 time.sleep(self.delayafterterminate)
1117 time.sleep(self.delayafterterminate)
1115 if not self.isalive():
1118 if not self.isalive():
1116 return True
1119 return True
1117 else:
1120 else:
1118 return False
1121 return False
1119
1122
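# terminate() escalates through SIGHUP, SIGCONT and SIGINT, and only sends
# SIGKILL when force=True. A small sketch, assuming a spawn instance `child`:
#
#   if not child.terminate():          # polite attempt first
#       child.terminate(force=True)    # last resort: SIGKILL
#   # at this point isalive() should normally be False
#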
1120 def wait(self):
1123 def wait(self):
1121
1124
1122 """This waits until the child exits. This is a blocking call. This will
1125 """This waits until the child exits. This is a blocking call. This will
1123 not read any data from the child, so this will block forever if the
1126 not read any data from the child, so this will block forever if the
1124 child has unread output and has terminated. In other words, the child
1127 child has unread output and has terminated. In other words, the child
1125 may have printed output then called exit(); but, technically, the child
1128 may have printed output then called exit(); but, technically, the child
1126 is still alive until its output is read. """
1129 is still alive until its output is read. """
1127
1130
1128 if self.isalive():
1131 if self.isalive():
1129 pid, status = os.waitpid(self.pid, 0)
1132 pid, status = os.waitpid(self.pid, 0)
1130 else:
1133 else:
1131 raise ExceptionPexpect ('Cannot wait for dead child process.')
1134 raise ExceptionPexpect ('Cannot wait for dead child process.')
1132 self.exitstatus = os.WEXITSTATUS(status)
1135 self.exitstatus = os.WEXITSTATUS(status)
1133 if os.WIFEXITED (status):
1136 if os.WIFEXITED (status):
1134 self.status = status
1137 self.status = status
1135 self.exitstatus = os.WEXITSTATUS(status)
1138 self.exitstatus = os.WEXITSTATUS(status)
1136 self.signalstatus = None
1139 self.signalstatus = None
1137 self.terminated = True
1140 self.terminated = True
1138 elif os.WIFSIGNALED (status):
1141 elif os.WIFSIGNALED (status):
1139 self.status = status
1142 self.status = status
1140 self.exitstatus = None
1143 self.exitstatus = None
1141 self.signalstatus = os.WTERMSIG(status)
1144 self.signalstatus = os.WTERMSIG(status)
1142 self.terminated = True
1145 self.terminated = True
1143 elif os.WIFSTOPPED (status):
1146 elif os.WIFSTOPPED (status):
1144 raise ExceptionPexpect ('Wait was called for a child process that is stopped. This is not supported. Is some other process attempting job control with our child pid?')
1147 raise ExceptionPexpect ('Wait was called for a child process that is stopped. This is not supported. Is some other process attempting job control with our child pid?')
1145 return self.exitstatus
1148 return self.exitstatus
1146
1149
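# wait() reads nothing from the child, so it is only safe when no unread
# output is pending. A minimal sketch with an illustrative silent command:
#
#   child = spawn('sleep 2')   # produces no output
#   status = child.wait()      # blocks until the child exits; returns exitstatus
#   # for chatty children, expect(EOF) first so the pty buffer is drained
#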
1147 def isalive(self):
1150 def isalive(self):
1148
1151
1149 """This tests if the child process is running or not. This is
1152 """This tests if the child process is running or not. This is
1150 non-blocking. If the child was terminated then this will read the
1153 non-blocking. If the child was terminated then this will read the
1151 exitstatus or signalstatus of the child. This returns True if the child
1154 exitstatus or signalstatus of the child. This returns True if the child
1152 process appears to be running or False if not. It can take literally
1155 process appears to be running or False if not. It can take literally
1153 SECONDS for Solaris to return the right status. """
1156 SECONDS for Solaris to return the right status. """
1154
1157
1155 if self.terminated:
1158 if self.terminated:
1156 return False
1159 return False
1157
1160
1158 if self.flag_eof:
1161 if self.flag_eof:
1159 # This is for Linux, which requires the blocking form of waitpid to get
1162 # This is for Linux, which requires the blocking form of waitpid to get
1160 # status of a defunct process. This is super-lame. The flag_eof would have
1163 # status of a defunct process. This is super-lame. The flag_eof would have
1161 # been set in read_nonblocking(), so this should be safe.
1164 # been set in read_nonblocking(), so this should be safe.
1162 waitpid_options = 0
1165 waitpid_options = 0
1163 else:
1166 else:
1164 waitpid_options = os.WNOHANG
1167 waitpid_options = os.WNOHANG
1165
1168
1166 try:
1169 try:
1167 pid, status = os.waitpid(self.pid, waitpid_options)
1170 pid, status = os.waitpid(self.pid, waitpid_options)
1168 except OSError as e: # No child processes
1171 except OSError as e: # No child processes
1169 if e.errno == errno.ECHILD:
1172 if e.errno == errno.ECHILD:
1170 raise ExceptionPexpect ('isalive() encountered condition where "terminated" is 0, but there was no child process. Did someone else call waitpid() on our process?')
1173 raise ExceptionPexpect ('isalive() encountered condition where "terminated" is 0, but there was no child process. Did someone else call waitpid() on our process?')
1171 else:
1174 else:
1172 raise e
1175 raise e
1173
1176
1174 # I have to do this twice for Solaris. I can't even believe that I figured this out...
1177 # I have to do this twice for Solaris. I can't even believe that I figured this out...
1175 # If waitpid() returns 0 it means that no child process wishes to
1178 # If waitpid() returns 0 it means that no child process wishes to
1176 # report, and the value of status is undefined.
1179 # report, and the value of status is undefined.
1177 if pid == 0:
1180 if pid == 0:
1178 try:
1181 try:
1179 pid, status = os.waitpid(self.pid, waitpid_options) ### os.WNOHANG) # Solaris!
1182 pid, status = os.waitpid(self.pid, waitpid_options) ### os.WNOHANG) # Solaris!
1180 except OSError as e: # This should never happen...
1183 except OSError as e: # This should never happen...
1181 if e.errno == errno.ECHILD: # e.errno, not e[0]: exceptions are not indexable on Python 3
1184 if e.errno == errno.ECHILD: # e.errno, not e[0]: exceptions are not indexable on Python 3
1182 raise ExceptionPexpect ('isalive() encountered condition that should never happen. There was no child process. Did someone else call waitpid() on our process?')
1185 raise ExceptionPexpect ('isalive() encountered condition that should never happen. There was no child process. Did someone else call waitpid() on our process?')
1183 else:
1186 else:
1184 raise e
1187 raise e
1185
1188
1186 # If pid is still 0 after two calls to waitpid() then
1189 # If pid is still 0 after two calls to waitpid() then
1187 # the process really is alive. This seems to work on all platforms, except
1190 # the process really is alive. This seems to work on all platforms, except
1188 # for Irix which seems to require a blocking call on waitpid or select, so I let read_nonblocking
1191 # for Irix which seems to require a blocking call on waitpid or select, so I let read_nonblocking
1189 # take care of this situation (unfortunately, this requires waiting through the timeout).
1192 # take care of this situation (unfortunately, this requires waiting through the timeout).
1190 if pid == 0:
1193 if pid == 0:
1191 return True
1194 return True
1192
1195
1193 if pid == 0:
1196 if pid == 0:
1194 return True
1197 return True
1195
1198
1196 if os.WIFEXITED (status):
1199 if os.WIFEXITED (status):
1197 self.status = status
1200 self.status = status
1198 self.exitstatus = os.WEXITSTATUS(status)
1201 self.exitstatus = os.WEXITSTATUS(status)
1199 self.signalstatus = None
1202 self.signalstatus = None
1200 self.terminated = True
1203 self.terminated = True
1201 elif os.WIFSIGNALED (status):
1204 elif os.WIFSIGNALED (status):
1202 self.status = status
1205 self.status = status
1203 self.exitstatus = None
1206 self.exitstatus = None
1204 self.signalstatus = os.WTERMSIG(status)
1207 self.signalstatus = os.WTERMSIG(status)
1205 self.terminated = True
1208 self.terminated = True
1206 elif os.WIFSTOPPED (status):
1209 elif os.WIFSTOPPED (status):
1207 raise ExceptionPexpect ('isalive() encountered condition where child process is stopped. This is not supported. Is some other process attempting job control with our child pid?')
1210 raise ExceptionPexpect ('isalive() encountered condition where child process is stopped. This is not supported. Is some other process attempting job control with our child pid?')
1208 return False
1211 return False
1209
1212
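# A small polling sketch built on isalive(), assuming a spawn instance
# `child`; the 0.1 second interval is an arbitrary illustrative choice:
#
#   while child.isalive():
#       time.sleep(0.1)
#   # once isalive() is False, exitstatus or signalstatus has been filled in
#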
1210 def kill(self, sig):
1213 def kill(self, sig):
1211
1214
1212 """This sends the given signal to the child application. In keeping
1215 """This sends the given signal to the child application. In keeping
1213 with UNIX tradition it has a misleading name. It does not necessarily
1216 with UNIX tradition it has a misleading name. It does not necessarily
1214 kill the child unless you send the right signal. """
1217 kill the child unless you send the right signal. """
1215
1218
1216 # Same as os.kill, but the pid is given for you.
1219 # Same as os.kill, but the pid is given for you.
1217 if self.isalive():
1220 if self.isalive():
1218 os.kill(self.pid, sig)
1221 os.kill(self.pid, sig)
1219
1222
1220 def compile_pattern_list(self, patterns):
1223 def compile_pattern_list(self, patterns):
1221
1224
1222 """This compiles a pattern-string or a list of pattern-strings.
1225 """This compiles a pattern-string or a list of pattern-strings.
1223 Patterns must be a StringType, EOF, TIMEOUT, SRE_Pattern, or a list of
1226 Patterns must be a StringType, EOF, TIMEOUT, SRE_Pattern, or a list of
1224 those. Patterns may also be None which results in an empty list (you
1227 those. Patterns may also be None which results in an empty list (you
1225 might do this if waiting for an EOF or TIMEOUT condition without
1228 might do this if waiting for an EOF or TIMEOUT condition without
1226 expecting any pattern).
1229 expecting any pattern).
1227
1230
1228 This is used by expect() when calling expect_list(). Thus expect() is
1231 This is used by expect() when calling expect_list(). Thus expect() is
1229 nothing more than::
1232 nothing more than::
1230
1233
1231 cpl = self.compile_pattern_list(pl)
1234 cpl = self.compile_pattern_list(pl)
1232 return self.expect_list(cpl, timeout)
1235 return self.expect_list(cpl, timeout)
1233
1236
1234 If you are using expect() within a loop it may be more
1237 If you are using expect() within a loop it may be more
1235 efficient to compile the patterns first and then call expect_list().
1238 efficient to compile the patterns first and then call expect_list().
1236 This avoids repeated calls to compile_pattern_list() inside the loop::
1239 This avoids repeated calls to compile_pattern_list() inside the loop::
1237
1240
1238 cpl = self.compile_pattern_list(my_pattern)
1241 cpl = self.compile_pattern_list(my_pattern)
1239 while some_condition:
1242 while some_condition:
1240 ...
1243 ...
1241 i = self.expect_list(cpl, timeout)
1244 i = self.expect_list(cpl, timeout)
1242 ...
1245 ...
1243 """
1246 """
1244
1247
1245 if patterns is None:
1248 if patterns is None:
1246 return []
1249 return []
1247 if not isinstance(patterns, list):
1250 if not isinstance(patterns, list):
1248 patterns = [patterns]
1251 patterns = [patterns]
1249
1252
1250 compile_flags = re.DOTALL # Allow dot to match \n
1253 compile_flags = re.DOTALL # Allow dot to match \n
1251 if self.ignorecase:
1254 if self.ignorecase:
1252 compile_flags = compile_flags | re.IGNORECASE
1255 compile_flags = compile_flags | re.IGNORECASE
1253 compiled_pattern_list = []
1256 compiled_pattern_list = []
1254 for p in patterns:
1257 for p in patterns:
1255 if isinstance(p, (bytes, unicode)):
1258 if isinstance(p, (bytes, unicode)):
1256 p = self._cast_buffer_type(p)
1259 p = self._cast_buffer_type(p)
1257 compiled_pattern_list.append(re.compile(p, compile_flags))
1260 compiled_pattern_list.append(re.compile(p, compile_flags))
1258 elif p is EOF:
1261 elif p is EOF:
1259 compiled_pattern_list.append(EOF)
1262 compiled_pattern_list.append(EOF)
1260 elif p is TIMEOUT:
1263 elif p is TIMEOUT:
1261 compiled_pattern_list.append(TIMEOUT)
1264 compiled_pattern_list.append(TIMEOUT)
1262 elif type(p) is re_type:
1265 elif type(p) is re_type:
1263 p = self._prepare_regex_pattern(p)
1266 p = self._prepare_regex_pattern(p)
1264 compiled_pattern_list.append(p)
1267 compiled_pattern_list.append(p)
1265 else:
1268 else:
1266 raise TypeError ('Argument must be one of StringTypes, EOF, TIMEOUT, SRE_Pattern, or a list of those types. %s' % str(type(p)))
1269 raise TypeError ('Argument must be one of StringTypes, EOF, TIMEOUT, SRE_Pattern, or a list of those types. %s' % str(type(p)))
1267
1270
1268 return compiled_pattern_list
1271 return compiled_pattern_list
1269
1272
1270 def _prepare_regex_pattern(self, p):
1273 def _prepare_regex_pattern(self, p):
1271 "Recompile unicode regexes as bytes regexes. Overridden in subclass."
1274 "Recompile unicode regexes as bytes regexes. Overridden in subclass."
1272 if isinstance(p.pattern, unicode):
1275 if isinstance(p.pattern, unicode):
1273 p = re.compile(p.pattern.encode('utf-8'), p.flags &~ re.UNICODE)
1276 p = re.compile(p.pattern.encode('utf-8'), p.flags &~ re.UNICODE)
1274 return p
1277 return p
1275
1278
1276 def expect(self, pattern, timeout = -1, searchwindowsize=-1):
1279 def expect(self, pattern, timeout = -1, searchwindowsize=-1):
1277
1280
1278 """This seeks through the stream until a pattern is matched. The
1281 """This seeks through the stream until a pattern is matched. The
1279 pattern is overloaded and may take several types. The pattern can be a
1282 pattern is overloaded and may take several types. The pattern can be a
1280 StringType, EOF, a compiled re, or a list of any of those types.
1283 StringType, EOF, a compiled re, or a list of any of those types.
1281 Strings will be compiled to re types. This returns the index into the
1284 Strings will be compiled to re types. This returns the index into the
1282 pattern list. If the pattern was not a list this returns index 0 on a
1285 pattern list. If the pattern was not a list this returns index 0 on a
1283 successful match. This may raise exceptions for EOF or TIMEOUT. To
1286 successful match. This may raise exceptions for EOF or TIMEOUT. To
1284 avoid the EOF or TIMEOUT exceptions add EOF or TIMEOUT to the pattern
1287 avoid the EOF or TIMEOUT exceptions add EOF or TIMEOUT to the pattern
1285 list. That will cause expect to match an EOF or TIMEOUT condition
1288 list. That will cause expect to match an EOF or TIMEOUT condition
1286 instead of raising an exception.
1289 instead of raising an exception.
1287
1290
1288 If you pass a list of patterns and more than one matches, the first match
1291 If you pass a list of patterns and more than one matches, the first match
1289 in the stream is chosen. If more than one pattern matches at that point,
1292 in the stream is chosen. If more than one pattern matches at that point,
1290 the leftmost in the pattern list is chosen. For example::
1293 the leftmost in the pattern list is chosen. For example::
1291
1294
1292 # the input is 'foobar'
1295 # the input is 'foobar'
1293 index = p.expect (['bar', 'foo', 'foobar'])
1296 index = p.expect (['bar', 'foo', 'foobar'])
1294 # returns 1 ('foo') even though 'foobar' is a "better" match
1297 # returns 1 ('foo') even though 'foobar' is a "better" match
1295
1298
1296 Please note, however, that buffering can affect this behavior, since
1299 Please note, however, that buffering can affect this behavior, since
1297 input arrives in unpredictable chunks. For example::
1300 input arrives in unpredictable chunks. For example::
1298
1301
1299 # the input is 'foobar'
1302 # the input is 'foobar'
1300 index = p.expect (['foobar', 'foo'])
1303 index = p.expect (['foobar', 'foo'])
1301 # returns 0 ('foobar') if all input is available at once,
1304 # returns 0 ('foobar') if all input is available at once,
1302 # but returns 1 ('foo') if parts of the final 'bar' arrive late
1305 # but returns 1 ('foo') if parts of the final 'bar' arrive late
1303
1306
1304 After a match is found the instance attributes 'before', 'after' and
1307 After a match is found the instance attributes 'before', 'after' and
1305 'match' will be set. You can see all the data read before the match in
1308 'match' will be set. You can see all the data read before the match in
1306 'before'. You can see the data that was matched in 'after'. The
1309 'before'. You can see the data that was matched in 'after'. The
1307 re.MatchObject used in the re match will be in 'match'. If an error
1310 re.MatchObject used in the re match will be in 'match'. If an error
1308 occurred then 'before' will be set to all the data read so far and
1311 occurred then 'before' will be set to all the data read so far and
1309 'after' and 'match' will be None.
1312 'after' and 'match' will be None.
1310
1313
1311 If timeout is -1 then timeout will be set to the self.timeout value.
1314 If timeout is -1 then timeout will be set to the self.timeout value.
1312
1315
1313 A list entry may be EOF or TIMEOUT instead of a string. This will
1316 A list entry may be EOF or TIMEOUT instead of a string. This will
1314 catch these exceptions and return the index of the list entry instead
1317 catch these exceptions and return the index of the list entry instead
1315 of raising the exception. The attribute 'after' will be set to the
1318 of raising the exception. The attribute 'after' will be set to the
1316 exception type. The attribute 'match' will be None. This allows you to
1319 exception type. The attribute 'match' will be None. This allows you to
1317 write code like this::
1320 write code like this::
1318
1321
1319 index = p.expect (['good', 'bad', pexpect.EOF, pexpect.TIMEOUT])
1322 index = p.expect (['good', 'bad', pexpect.EOF, pexpect.TIMEOUT])
1320 if index == 0:
1323 if index == 0:
1321 do_something()
1324 do_something()
1322 elif index == 1:
1325 elif index == 1:
1323 do_something_else()
1326 do_something_else()
1324 elif index == 2:
1327 elif index == 2:
1325 do_some_other_thing()
1328 do_some_other_thing()
1326 elif index == 3:
1329 elif index == 3:
1327 do_something_completely_different()
1330 do_something_completely_different()
1328
1331
1329 instead of code like this::
1332 instead of code like this::
1330
1333
1331 try:
1334 try:
1332 index = p.expect (['good', 'bad'])
1335 index = p.expect (['good', 'bad'])
1333 if index == 0:
1336 if index == 0:
1334 do_something()
1337 do_something()
1335 elif index == 1:
1338 elif index == 1:
1336 do_something_else()
1339 do_something_else()
1337 except EOF:
1340 except EOF:
1338 do_some_other_thing()
1341 do_some_other_thing()
1339 except TIMEOUT:
1342 except TIMEOUT:
1340 do_something_completely_different()
1343 do_something_completely_different()
1341
1344
1342 These two forms are equivalent. It all depends on what you want. You
1345 These two forms are equivalent. It all depends on what you want. You
1343 can also just expect the EOF if you are waiting for all output of a
1346 can also just expect the EOF if you are waiting for all output of a
1344 child to finish. For example::
1347 child to finish. For example::
1345
1348
1346 p = pexpect.spawn('/bin/ls')
1349 p = pexpect.spawn('/bin/ls')
1347 p.expect (pexpect.EOF)
1350 p.expect (pexpect.EOF)
1348 print(p.before)
1351 print(p.before)
1349
1352
1350 If you are trying to optimize for speed then see expect_list().
1353 If you are trying to optimize for speed then see expect_list().
1351 """
1354 """
1352
1355
1353 compiled_pattern_list = self.compile_pattern_list(pattern)
1356 compiled_pattern_list = self.compile_pattern_list(pattern)
1354 return self.expect_list(compiled_pattern_list, timeout, searchwindowsize)
1357 return self.expect_list(compiled_pattern_list, timeout, searchwindowsize)
1355
1358
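# A compact sketch of the 'before'/'after'/'match' attributes described in the
# docstring above, using an illustrative command and pattern:
#
#   child = spawn('echo hello world')
#   child.expect('wor')
#   # child.before  -> everything read up to the match ('hello ')
#   # child.after   -> the matched text ('wor')
#   # child.match   -> the underlying re match object (groups, spans, ...)
#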
1356 def expect_list(self, pattern_list, timeout = -1, searchwindowsize = -1):
1359 def expect_list(self, pattern_list, timeout = -1, searchwindowsize = -1):
1357
1360
1358 """This takes a list of compiled regular expressions and returns the
1361 """This takes a list of compiled regular expressions and returns the
1359 index into the pattern_list that matched the child output. The list may
1362 index into the pattern_list that matched the child output. The list may
1360 also contain EOF or TIMEOUT (which are not compiled regular
1363 also contain EOF or TIMEOUT (which are not compiled regular
1361 expressions). This method is similar to the expect() method except that
1364 expressions). This method is similar to the expect() method except that
1362 expect_list() does not recompile the pattern list on every call. This
1365 expect_list() does not recompile the pattern list on every call. This
1363 may help if you are trying to optimize for speed, otherwise just use
1366 may help if you are trying to optimize for speed, otherwise just use
1364 the expect() method. This is called by expect(). If timeout==-1 then
1367 the expect() method. This is called by expect(). If timeout==-1 then
1365 the self.timeout value is used. If searchwindowsize==-1 then the
1368 the self.timeout value is used. If searchwindowsize==-1 then the
1366 self.searchwindowsize value is used. """
1369 self.searchwindowsize value is used. """
1367
1370
1368 return self.expect_loop(searcher_re(pattern_list), timeout, searchwindowsize)
1371 return self.expect_loop(searcher_re(pattern_list), timeout, searchwindowsize)
1369
1372
1370 def expect_exact(self, pattern_list, timeout = -1, searchwindowsize = -1):
1373 def expect_exact(self, pattern_list, timeout = -1, searchwindowsize = -1):
1371
1374
1372 """This is similar to expect(), but uses plain string matching instead
1375 """This is similar to expect(), but uses plain string matching instead
1373 of compiled regular expressions in 'pattern_list'. The 'pattern_list'
1376 of compiled regular expressions in 'pattern_list'. The 'pattern_list'
1374 may be a string; a list or other sequence of strings; or TIMEOUT and
1377 may be a string; a list or other sequence of strings; or TIMEOUT and
1375 EOF.
1378 EOF.
1376
1379
1377 This call might be faster than expect() for two reasons: string
1380 This call might be faster than expect() for two reasons: string
1378 searching is faster than RE matching and it is possible to limit the
1381 searching is faster than RE matching and it is possible to limit the
1379 search to just the end of the input buffer.
1382 search to just the end of the input buffer.
1380
1383
1381 This method is also useful when you don't want to have to worry about
1384 This method is also useful when you don't want to have to worry about
1382 escaping regular expression characters that you want to match."""
1385 escaping regular expression characters that you want to match."""
1383
1386
1384 if isinstance(pattern_list, (bytes, unicode)) or pattern_list in (TIMEOUT, EOF):
1387 if isinstance(pattern_list, (bytes, unicode)) or pattern_list in (TIMEOUT, EOF):
1385 pattern_list = [pattern_list]
1388 pattern_list = [pattern_list]
1386 return self.expect_loop(searcher_string(pattern_list), timeout, searchwindowsize)
1389 return self.expect_loop(searcher_string(pattern_list), timeout, searchwindowsize)
1387
1390
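# expect_exact() is handy when the text to match contains regex
# metacharacters. A minimal sketch, assuming a spawn instance `child` whose
# prompt is the illustrative literal '(y/n)?':
#
#   index = child.expect_exact(['(y/n)?', TIMEOUT])
#   if index == 0:
#       child.sendline('y')
#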
1388 def expect_loop(self, searcher, timeout = -1, searchwindowsize = -1):
1391 def expect_loop(self, searcher, timeout = -1, searchwindowsize = -1):
1389
1392
1390 """This is the common loop used inside expect. The 'searcher' should be
1393 """This is the common loop used inside expect. The 'searcher' should be
1391 an instance of searcher_re or searcher_string, which describes how and what
1394 an instance of searcher_re or searcher_string, which describes how and what
1392 to search for in the input.
1395 to search for in the input.
1393
1396
1394 See expect() for other arguments, return value and exceptions. """
1397 See expect() for other arguments, return value and exceptions. """
1395
1398
1396 self.searcher = searcher
1399 self.searcher = searcher
1397
1400
1398 if timeout == -1:
1401 if timeout == -1:
1399 timeout = self.timeout
1402 timeout = self.timeout
1400 if timeout is not None:
1403 if timeout is not None:
1401 end_time = time.time() + timeout
1404 end_time = time.time() + timeout
1402 if searchwindowsize == -1:
1405 if searchwindowsize == -1:
1403 searchwindowsize = self.searchwindowsize
1406 searchwindowsize = self.searchwindowsize
1404
1407
1405 try:
1408 try:
1406 incoming = self.buffer
1409 incoming = self.buffer
1407 freshlen = len(incoming)
1410 freshlen = len(incoming)
1408 while True: # Keep reading until exception or return.
1411 while True: # Keep reading until exception or return.
1409 index = searcher.search(incoming, freshlen, searchwindowsize)
1412 index = searcher.search(incoming, freshlen, searchwindowsize)
1410 if index >= 0:
1413 if index >= 0:
1411 self.buffer = incoming[searcher.end : ]
1414 self.buffer = incoming[searcher.end : ]
1412 self.before = incoming[ : searcher.start]
1415 self.before = incoming[ : searcher.start]
1413 self.after = incoming[searcher.start : searcher.end]
1416 self.after = incoming[searcher.start : searcher.end]
1414 self.match = searcher.match
1417 self.match = searcher.match
1415 self.match_index = index
1418 self.match_index = index
1416 return self.match_index
1419 return self.match_index
1417 # No match at this point
1420 # No match at this point
1418 if timeout is not None and timeout < 0:
1421 if timeout is not None and timeout < 0:
1419 raise TIMEOUT ('Timeout exceeded in expect_any().')
1422 raise TIMEOUT ('Timeout exceeded in expect_any().')
1420 # Still have time left, so read more data
1423 # Still have time left, so read more data
1421 c = self.read_nonblocking (self.maxread, timeout)
1424 c = self.read_nonblocking (self.maxread, timeout)
1422 freshlen = len(c)
1425 freshlen = len(c)
1423 time.sleep (0.0001)
1426 time.sleep (0.0001)
1424 incoming = incoming + c
1427 incoming = incoming + c
1425 if timeout is not None:
1428 if timeout is not None:
1426 timeout = end_time - time.time()
1429 timeout = end_time - time.time()
1427 except EOF as e:
1430 except EOF as e:
1428 self.buffer = self._empty_buffer
1431 self.buffer = self._empty_buffer
1429 self.before = incoming
1432 self.before = incoming
1430 self.after = EOF
1433 self.after = EOF
1431 index = searcher.eof_index
1434 index = searcher.eof_index
1432 if index >= 0:
1435 if index >= 0:
1433 self.match = EOF
1436 self.match = EOF
1434 self.match_index = index
1437 self.match_index = index
1435 return self.match_index
1438 return self.match_index
1436 else:
1439 else:
1437 self.match = None
1440 self.match = None
1438 self.match_index = None
1441 self.match_index = None
1439 raise EOF (str(e) + '\n' + str(self))
1442 raise EOF (str(e) + '\n' + str(self))
1440 except TIMEOUT as e:
1443 except TIMEOUT as e:
1441 self.buffer = incoming
1444 self.buffer = incoming
1442 self.before = incoming
1445 self.before = incoming
1443 self.after = TIMEOUT
1446 self.after = TIMEOUT
1444 index = searcher.timeout_index
1447 index = searcher.timeout_index
1445 if index >= 0:
1448 if index >= 0:
1446 self.match = TIMEOUT
1449 self.match = TIMEOUT
1447 self.match_index = index
1450 self.match_index = index
1448 return self.match_index
1451 return self.match_index
1449 else:
1452 else:
1450 self.match = None
1453 self.match = None
1451 self.match_index = None
1454 self.match_index = None
1452 raise TIMEOUT (str(e) + '\n' + str(self))
1455 raise TIMEOUT (str(e) + '\n' + str(self))
1453 except:
1456 except:
1454 self.before = incoming
1457 self.before = incoming
1455 self.after = None
1458 self.after = None
1456 self.match = None
1459 self.match = None
1457 self.match_index = None
1460 self.match_index = None
1458 raise
1461 raise
1459
1462
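# A small sketch of the state expect_loop() leaves behind after a match,
# assuming this fork is importable as `pexpect` and `echo` is on PATH:
# `before` holds everything read up to the match, `after` the matched text,
# `match` the match object (or EOF/TIMEOUT), and `match_index` the index of
# the pattern that hit.
import pexpect

child = pexpect.spawn('echo hello world')
index = child.expect(['world', pexpect.EOF])
print(repr(child.before))    # text preceding the match, e.g. 'hello '
print(repr(child.after))     # the matched text: 'world'
print(child.match_index)     # 0 -- the first pattern matched; same as index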
1460 def getwinsize(self):
1463 def getwinsize(self):
1461
1464
1462 """This returns the terminal window size of the child tty. The return
1465 """This returns the terminal window size of the child tty. The return
1463 value is a tuple of (rows, cols). """
1466 value is a tuple of (rows, cols). """
1464
1467
1465 TIOCGWINSZ = getattr(termios, 'TIOCGWINSZ', 1074295912L)
1468 TIOCGWINSZ = getattr(termios, 'TIOCGWINSZ', 1074295912L)
1466 s = struct.pack('HHHH', 0, 0, 0, 0)
1469 s = struct.pack('HHHH', 0, 0, 0, 0)
1467 x = fcntl.ioctl(self.fileno(), TIOCGWINSZ, s)
1470 x = fcntl.ioctl(self.fileno(), TIOCGWINSZ, s)
1468 return struct.unpack('HHHH', x)[0:2]
1471 return struct.unpack('HHHH', x)[0:2]
1469
1472
1470 def setwinsize(self, r, c):
1473 def setwinsize(self, r, c):
1471
1474
1472 """This sets the terminal window size of the child tty. This will cause
1475 """This sets the terminal window size of the child tty. This will cause
1473 a SIGWINCH signal to be sent to the child. This does not change the
1476 a SIGWINCH signal to be sent to the child. This does not change the
1474 physical window size. It changes the size reported to TTY-aware
1477 physical window size. It changes the size reported to TTY-aware
1475 applications like vi or curses -- applications that respond to the
1478 applications like vi or curses -- applications that respond to the
1476 SIGWINCH signal. """
1479 SIGWINCH signal. """
1477
1480
1478 # Check for buggy platforms. Some Python versions on some platforms
1481 # Check for buggy platforms. Some Python versions on some platforms
1479 # (notably OSF1 Alpha and RedHat 7.1) truncate the value for
1482 # (notably OSF1 Alpha and RedHat 7.1) truncate the value for
1480 # termios.TIOCSWINSZ. It is not clear why this happens.
1483 # termios.TIOCSWINSZ. It is not clear why this happens.
1481 # These platforms don't seem to handle the signed int very well;
1484 # These platforms don't seem to handle the signed int very well;
1482 # yet other platforms like OpenBSD have a large negative value for
1485 # yet other platforms like OpenBSD have a large negative value for
1483 # TIOCSWINSZ and they don't have a truncate problem.
1486 # TIOCSWINSZ and they don't have a truncate problem.
1484 # Newer versions of Linux have totally different values for TIOCSWINSZ.
1487 # Newer versions of Linux have totally different values for TIOCSWINSZ.
1485 # Note that this fix is a hack.
1488 # Note that this fix is a hack.
1486 TIOCSWINSZ = getattr(termios, 'TIOCSWINSZ', -2146929561)
1489 TIOCSWINSZ = getattr(termios, 'TIOCSWINSZ', -2146929561)
1487 if TIOCSWINSZ == 2148037735L: # L is not required in Python >= 2.2.
1490 if TIOCSWINSZ == 2148037735L: # L is not required in Python >= 2.2.
1488 TIOCSWINSZ = -2146929561 # Same bits, but with sign.
1491 TIOCSWINSZ = -2146929561 # Same bits, but with sign.
1489 # Note, assume ws_xpixel and ws_ypixel are zero.
1492 # Note, assume ws_xpixel and ws_ypixel are zero.
1490 s = struct.pack('HHHH', r, c, 0, 0)
1493 s = struct.pack('HHHH', r, c, 0, 0)
1491 fcntl.ioctl(self.fileno(), TIOCSWINSZ, s)
1494 fcntl.ioctl(self.fileno(), TIOCSWINSZ, s)
1492
1495
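# A sketch of propagating the controlling terminal's size to a child,
# assuming this fork is importable as `pexpect`. getwinsize()/setwinsize()
# wrap the TIOCGWINSZ/TIOCSWINSZ ioctls used above, and setwinsize() makes
# the kernel deliver SIGWINCH to the child.
import fcntl
import struct
import sys
import termios

import pexpect

child = pexpect.spawn('/bin/bash')
packed = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ,
                     struct.pack('HHHH', 0, 0, 0, 0))
rows, cols = struct.unpack('HHHH', packed)[:2]
child.setwinsize(rows, cols)     # child tty now reports the same size
print(child.getwinsize())        # (rows, cols)
child.close()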
1493 def interact(self, escape_character = b'\x1d', input_filter = None, output_filter = None):
1496 def interact(self, escape_character = b'\x1d', input_filter = None, output_filter = None):
1494
1497
1495 """This gives control of the child process to the interactive user (the
1498 """This gives control of the child process to the interactive user (the
1496 human at the keyboard). Keystrokes are sent to the child process, and
1499 human at the keyboard). Keystrokes are sent to the child process, and
1497 the stdout and stderr output of the child process is printed. This
1500 the stdout and stderr output of the child process is printed. This
1498 simply echoes the child stdout and child stderr to the real stdout and
1501 simply echoes the child stdout and child stderr to the real stdout and
1499 it echoes the real stdin to the child stdin. When the user types the
1502 it echoes the real stdin to the child stdin. When the user types the
1500 escape_character this method will stop. The default for
1503 escape_character this method will stop. The default for
1501 escape_character is ^]. This should not be confused with ASCII 27 --
1504 escape_character is ^]. This should not be confused with ASCII 27 --
1502 the ESC character. ASCII 29 was chosen for historical merit because
1505 the ESC character. ASCII 29 was chosen for historical merit because
1503 this is the character used by 'telnet' as the escape character. The
1506 this is the character used by 'telnet' as the escape character. The
1504 escape_character will not be sent to the child process.
1507 escape_character will not be sent to the child process.
1505
1508
1506 You may pass in optional input and output filter functions. These
1509 You may pass in optional input and output filter functions. These
1507 functions should take a string and return a string. The output_filter
1510 functions should take a string and return a string. The output_filter
1508 will be passed all the output from the child process. The input_filter
1511 will be passed all the output from the child process. The input_filter
1509 will be passed all the keyboard input from the user. The input_filter
1512 will be passed all the keyboard input from the user. The input_filter
1510 is run BEFORE the check for the escape_character.
1513 is run BEFORE the check for the escape_character.
1511
1514
1512 Note that if you change the window size of the parent the SIGWINCH
1515 Note that if you change the window size of the parent the SIGWINCH
1513 signal will not be passed through to the child. If you want the child
1516 signal will not be passed through to the child. If you want the child
1514 window size to change when the parent's window size changes then do
1517 window size to change when the parent's window size changes then do
1515 something like the following example::
1518 something like the following example::
1516
1519
1517 import pexpect, struct, fcntl, termios, signal, sys
1520 import pexpect, struct, fcntl, termios, signal, sys
1518 def sigwinch_passthrough (sig, data):
1521 def sigwinch_passthrough (sig, data):
1519 s = struct.pack("HHHH", 0, 0, 0, 0)
1522 s = struct.pack("HHHH", 0, 0, 0, 0)
1520 a = struct.unpack('hhhh', fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ , s))
1523 a = struct.unpack('hhhh', fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ , s))
1521 global p
1524 global p
1522 p.setwinsize(a[0],a[1])
1525 p.setwinsize(a[0],a[1])
1523 p = pexpect.spawn('/bin/bash') # Note this is global and used in sigwinch_passthrough.
1526 p = pexpect.spawn('/bin/bash') # Note this is global and used in sigwinch_passthrough.
1524 signal.signal(signal.SIGWINCH, sigwinch_passthrough)
1527 signal.signal(signal.SIGWINCH, sigwinch_passthrough)
1525 p.interact()
1528 p.interact()
1526 """
1529 """
1527
1530
1528 # Flush the buffer.
1531 # Flush the buffer.
1529 if PY3: self.stdout.write(_cast_unicode(self.buffer, self.encoding))
1532 if PY3: self.stdout.write(_cast_unicode(self.buffer, self.encoding))
1530 else: self.stdout.write(self.buffer)
1533 else: self.stdout.write(self.buffer)
1531 self.stdout.flush()
1534 self.stdout.flush()
1532 self.buffer = self._empty_buffer
1535 self.buffer = self._empty_buffer
1533 mode = tty.tcgetattr(self.STDIN_FILENO)
1536 mode = tty.tcgetattr(self.STDIN_FILENO)
1534 tty.setraw(self.STDIN_FILENO)
1537 tty.setraw(self.STDIN_FILENO)
1535 try:
1538 try:
1536 self.__interact_copy(escape_character, input_filter, output_filter)
1539 self.__interact_copy(escape_character, input_filter, output_filter)
1537 finally:
1540 finally:
1538 tty.tcsetattr(self.STDIN_FILENO, tty.TCSAFLUSH, mode)
1541 tty.tcsetattr(self.STDIN_FILENO, tty.TCSAFLUSH, mode)
1539
1542
1540 def __interact_writen(self, fd, data):
1543 def __interact_writen(self, fd, data):
1541
1544
1542 """This is used by the interact() method.
1545 """This is used by the interact() method.
1543 """
1546 """
1544
1547
1545 while data != b'' and self.isalive():
1548 while data != b'' and self.isalive():
1546 n = os.write(fd, data)
1549 n = os.write(fd, data)
1547 data = data[n:]
1550 data = data[n:]
1548
1551
1549 def __interact_read(self, fd):
1552 def __interact_read(self, fd):
1550
1553
1551 """This is used by the interact() method.
1554 """This is used by the interact() method.
1552 """
1555 """
1553
1556
1554 return os.read(fd, 1000)
1557 return os.read(fd, 1000)
1555
1558
1556 def __interact_copy(self, escape_character = None, input_filter = None, output_filter = None):
1559 def __interact_copy(self, escape_character = None, input_filter = None, output_filter = None):
1557
1560
1558 """This is used by the interact() method.
1561 """This is used by the interact() method.
1559 """
1562 """
1560
1563
1561 while self.isalive():
1564 while self.isalive():
1562 r,w,e = self.__select([self.child_fd, self.STDIN_FILENO], [], [])
1565 r,w,e = self.__select([self.child_fd, self.STDIN_FILENO], [], [])
1563 if self.child_fd in r:
1566 if self.child_fd in r:
1564 data = self.__interact_read(self.child_fd)
1567 data = self.__interact_read(self.child_fd)
1565 if output_filter: data = output_filter(data)
1568 if output_filter: data = output_filter(data)
1566 if self.logfile is not None:
1569 if self.logfile is not None:
1567 self.logfile.write (data)
1570 self.logfile.write (data)
1568 self.logfile.flush()
1571 self.logfile.flush()
1569 os.write(self.STDOUT_FILENO, data)
1572 os.write(self.STDOUT_FILENO, data)
1570 if self.STDIN_FILENO in r:
1573 if self.STDIN_FILENO in r:
1571 data = self.__interact_read(self.STDIN_FILENO)
1574 data = self.__interact_read(self.STDIN_FILENO)
1572 if input_filter: data = input_filter(data)
1575 if input_filter: data = input_filter(data)
1573 i = data.rfind(escape_character)
1576 i = data.rfind(escape_character)
1574 if i != -1:
1577 if i != -1:
1575 data = data[:i]
1578 data = data[:i]
1576 self.__interact_writen(self.child_fd, data)
1579 self.__interact_writen(self.child_fd, data)
1577 break
1580 break
1578 self.__interact_writen(self.child_fd, data)
1581 self.__interact_writen(self.child_fd, data)
1579
1582
1580 def __select (self, iwtd, owtd, ewtd, timeout=None):
1583 def __select (self, iwtd, owtd, ewtd, timeout=None):
1581
1584
1582 """This is a wrapper around select.select() that ignores signals. If
1585 """This is a wrapper around select.select() that ignores signals. If
1583 select.select raises a select.error exception and errno is an EINTR
1586 select.select raises a select.error exception and errno is an EINTR
1584 error then it is ignored. Mainly this is used to ignore sigwinch
1587 error then it is ignored. Mainly this is used to ignore sigwinch
1585 (terminal resize). """
1588 (terminal resize). """
1586
1589
1587 # if select() is interrupted by a signal (errno==EINTR) then
1590 # if select() is interrupted by a signal (errno==EINTR) then
1588 # we loop back and enter the select() again.
1591 # we loop back and enter the select() again.
1589 if timeout is not None:
1592 if timeout is not None:
1590 end_time = time.time() + timeout
1593 end_time = time.time() + timeout
1591 while True:
1594 while True:
1592 try:
1595 try:
1593 return select.select (iwtd, owtd, ewtd, timeout)
1596 return select.select (iwtd, owtd, ewtd, timeout)
1594 except select.error as e:
1597 except select.error as e:
1595 if e.args[0] == errno.EINTR:
1598 if e.args[0] == errno.EINTR:
1596 # if we loop back we have to subtract the amount of time we already waited.
1599 # if we loop back we have to subtract the amount of time we already waited.
1597 if timeout is not None:
1600 if timeout is not None:
1598 timeout = end_time - time.time()
1601 timeout = end_time - time.time()
1599 if timeout < 0:
1602 if timeout < 0:
1600 return ([],[],[])
1603 return ([],[],[])
1601 else: # something else caused the select.error, so this really is an exception
1604 else: # something else caused the select.error, so this really is an exception
1602 raise
1605 raise
1603
1606
1604 class spawn(spawnb):
1607 class spawn(spawnb):
1605 """This is the main class interface for Pexpect. Use this class to start
1608 """This is the main class interface for Pexpect. Use this class to start
1606 and control child applications."""
1609 and control child applications."""
1607
1610
1608 _buffer_type = unicode
1611 _buffer_type = unicode
1609 def _cast_buffer_type(self, s):
1612 def _cast_buffer_type(self, s):
1610 return _cast_unicode(s, self.encoding)
1613 return _cast_unicode(s, self.encoding)
1611 _empty_buffer = u''
1614 _empty_buffer = u''
1612 _pty_newline = u'\r\n'
1615 _pty_newline = u'\r\n'
1613
1616
1614 def __init__(self, command, args=[], timeout=30, maxread=2000, searchwindowsize=None,
1617 def __init__(self, command, args=[], timeout=30, maxread=2000, searchwindowsize=None,
1615 logfile=None, cwd=None, env=None, encoding='utf-8'):
1618 logfile=None, cwd=None, env=None, encoding='utf-8'):
1616 super(spawn, self).__init__(command, args, timeout=timeout, maxread=maxread,
1619 super(spawn, self).__init__(command, args, timeout=timeout, maxread=maxread,
1617 searchwindowsize=searchwindowsize, logfile=logfile, cwd=cwd, env=env)
1620 searchwindowsize=searchwindowsize, logfile=logfile, cwd=cwd, env=env)
1618 self.encoding = encoding
1621 self.encoding = encoding
1619
1622
1620 def _prepare_regex_pattern(self, p):
1623 def _prepare_regex_pattern(self, p):
1621 "Recompile bytes regexes as unicode regexes."
1624 "Recompile bytes regexes as unicode regexes."
1622 if isinstance(p.pattern, bytes):
1625 if isinstance(p.pattern, bytes):
1623 p = re.compile(p.pattern.decode(self.encoding), p.flags)
1626 p = re.compile(p.pattern.decode(self.encoding), p.flags)
1624 return p
1627 return p
1625
1628
1626 def read_nonblocking(self, size=1, timeout=-1):
1629 def read_nonblocking(self, size=1, timeout=-1):
1627 return super(spawn, self).read_nonblocking(size=size, timeout=timeout)\
1630 return super(spawn, self).read_nonblocking(size=size, timeout=timeout)\
1628 .decode(self.encoding)
1631 .decode(self.encoding)
1629
1632
1630 read_nonblocking.__doc__ = spawnb.read_nonblocking.__doc__
1633 read_nonblocking.__doc__ = spawnb.read_nonblocking.__doc__
1631
1634
1632
1635
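# A short sketch of the unicode layer this subclass adds, assuming the fork
# is importable as `pexpect`. spawn decodes everything read from the child
# with the given encoding, so `before`/`after` are unicode strings rather
# than the bytes handled by the spawnb base class.
import pexpect

child = pexpect.spawn('echo hola', encoding='utf-8')
child.expect(pexpect.EOF)
print(type(child.before))    # <type 'unicode'> on Python 2
child.close()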
1633 ##############################################################################
1636 ##############################################################################
1634 # End of spawn class
1637 # End of spawn class
1635 ##############################################################################
1638 ##############################################################################
1636
1639
1637 class searcher_string (object):
1640 class searcher_string (object):
1638
1641
1639 """This is a plain string search helper for the spawn.expect_any() method.
1642 """This is a plain string search helper for the spawn.expect_any() method.
1640 This helper class is for speed. For more powerful regex patterns
1643 This helper class is for speed. For more powerful regex patterns
1641 see the helper class, searcher_re.
1644 see the helper class, searcher_re.
1642
1645
1643 Attributes:
1646 Attributes:
1644
1647
1645 eof_index - index of EOF, or -1
1648 eof_index - index of EOF, or -1
1646 timeout_index - index of TIMEOUT, or -1
1649 timeout_index - index of TIMEOUT, or -1
1647
1650
1648 After a successful match by the search() method the following attributes
1651 After a successful match by the search() method the following attributes
1649 are available:
1652 are available:
1650
1653
1651 start - index into the buffer, first byte of match
1654 start - index into the buffer, first byte of match
1652 end - index into the buffer, first byte after match
1655 end - index into the buffer, first byte after match
1653 match - the matching string itself
1656 match - the matching string itself
1654
1657
1655 """
1658 """
1656
1659
1657 def __init__(self, strings):
1660 def __init__(self, strings):
1658
1661
1659 """This creates an instance of searcher_string. This argument 'strings'
1662 """This creates an instance of searcher_string. This argument 'strings'
1660 may be a list; a sequence of strings; or the EOF or TIMEOUT types. """
1663 may be a list; a sequence of strings; or the EOF or TIMEOUT types. """
1661
1664
1662 self.eof_index = -1
1665 self.eof_index = -1
1663 self.timeout_index = -1
1666 self.timeout_index = -1
1664 self._strings = []
1667 self._strings = []
1665 for n, s in enumerate(strings):
1668 for n, s in enumerate(strings):
1666 if s is EOF:
1669 if s is EOF:
1667 self.eof_index = n
1670 self.eof_index = n
1668 continue
1671 continue
1669 if s is TIMEOUT:
1672 if s is TIMEOUT:
1670 self.timeout_index = n
1673 self.timeout_index = n
1671 continue
1674 continue
1672 self._strings.append((n, s))
1675 self._strings.append((n, s))
1673
1676
1674 def __str__(self):
1677 def __str__(self):
1675
1678
1676 """This returns a human-readable string that represents the state of
1679 """This returns a human-readable string that represents the state of
1677 the object."""
1680 the object."""
1678
1681
1679 ss = [ (ns[0],' %d: "%s"' % ns) for ns in self._strings ]
1682 ss = [ (ns[0],' %d: "%s"' % ns) for ns in self._strings ]
1680 ss.append((-1,'searcher_string:'))
1683 ss.append((-1,'searcher_string:'))
1681 if self.eof_index >= 0:
1684 if self.eof_index >= 0:
1682 ss.append ((self.eof_index,' %d: EOF' % self.eof_index))
1685 ss.append ((self.eof_index,' %d: EOF' % self.eof_index))
1683 if self.timeout_index >= 0:
1686 if self.timeout_index >= 0:
1684 ss.append ((self.timeout_index,' %d: TIMEOUT' % self.timeout_index))
1687 ss.append ((self.timeout_index,' %d: TIMEOUT' % self.timeout_index))
1685 ss.sort()
1688 ss.sort()
1686 return '\n'.join(a[1] for a in ss)
1689 return '\n'.join(a[1] for a in ss)
1687
1690
1688 def search(self, buffer, freshlen, searchwindowsize=None):
1691 def search(self, buffer, freshlen, searchwindowsize=None):
1689
1692
1690 """This searches 'buffer' for the first occurence of one of the search
1693 """This searches 'buffer' for the first occurence of one of the search
1691 strings. 'freshlen' must indicate the number of bytes at the end of
1694 strings. 'freshlen' must indicate the number of bytes at the end of
1692 'buffer' which have not been searched before. It helps to avoid
1695 'buffer' which have not been searched before. It helps to avoid
1693 searching the same, possibly big, buffer over and over again.
1696 searching the same, possibly big, buffer over and over again.
1694
1697
1695 See class spawn for the 'searchwindowsize' argument.
1698 See class spawn for the 'searchwindowsize' argument.
1696
1699
1697 If there is a match this returns the index of that string, and sets
1700 If there is a match this returns the index of that string, and sets
1698 'start', 'end' and 'match'. Otherwise, this returns -1. """
1701 'start', 'end' and 'match'. Otherwise, this returns -1. """
1699
1702
1700 absurd_match = len(buffer)
1703 absurd_match = len(buffer)
1701 first_match = absurd_match
1704 first_match = absurd_match
1702
1705
1703 # 'freshlen' helps a lot here. Further optimizations could
1706 # 'freshlen' helps a lot here. Further optimizations could
1704 # possibly include:
1707 # possibly include:
1705 #
1708 #
1706 # using something like the Boyer-Moore Fast String Searching
1709 # using something like the Boyer-Moore Fast String Searching
1707 # Algorithm; pre-compiling the search through a list of
1710 # Algorithm; pre-compiling the search through a list of
1708 # strings into something that can scan the input once to
1711 # strings into something that can scan the input once to
1709 # search for all N strings; realize that if we search for
1712 # search for all N strings; realize that if we search for
1710 # ['bar', 'baz'] and the input is '...foo' we need not bother
1713 # ['bar', 'baz'] and the input is '...foo' we need not bother
1711 # rescanning until we've read three more bytes.
1714 # rescanning until we've read three more bytes.
1712 #
1715 #
1713 # Sadly, I don't know enough about this interesting topic. /grahn
1716 # Sadly, I don't know enough about this interesting topic. /grahn
1714
1717
1715 for index, s in self._strings:
1718 for index, s in self._strings:
1716 if searchwindowsize is None:
1719 if searchwindowsize is None:
1717 # the match, if any, can only be in the fresh data,
1720 # the match, if any, can only be in the fresh data,
1718 # or at the very end of the old data
1721 # or at the very end of the old data
1719 offset = -(freshlen+len(s))
1722 offset = -(freshlen+len(s))
1720 else:
1723 else:
1721 # better obey searchwindowsize
1724 # better obey searchwindowsize
1722 offset = -searchwindowsize
1725 offset = -searchwindowsize
1723 n = buffer.find(s, offset)
1726 n = buffer.find(s, offset)
1724 if n >= 0 and n < first_match:
1727 if n >= 0 and n < first_match:
1725 first_match = n
1728 first_match = n
1726 best_index, best_match = index, s
1729 best_index, best_match = index, s
1727 if first_match == absurd_match:
1730 if first_match == absurd_match:
1728 return -1
1731 return -1
1729 self.match = best_match
1732 self.match = best_match
1730 self.start = first_match
1733 self.start = first_match
1731 self.end = self.start + len(self.match)
1734 self.end = self.start + len(self.match)
1732 return best_index
1735 return best_index
1733
1736
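# A standalone sketch of how expect_loop() drives searcher_string (the names
# used here are the ones defined in this module). search() reports the
# earliest match in the buffer, only rescanning the `freshlen` newest bytes
# plus enough overlap to catch matches straddling old and new data.
searcher = searcher_string(['error', 'warning', TIMEOUT])

buf = 'build ok\nwarning: deprecated call\n'
index = searcher.search(buf, freshlen=len(buf))
print(index)                              # 1 -- 'warning' matched first
print(buf[searcher.start:searcher.end])   # 'warning'
print(searcher.timeout_index)             # 2 -- TIMEOUT is tracked separately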
1734 class searcher_re (object):
1737 class searcher_re (object):
1735
1738
1736 """This is regular expression string search helper for the
1739 """This is regular expression string search helper for the
1737 spawn.expect_any() method. This helper class is for powerful
1740 spawn.expect_any() method. This helper class is for powerful
1738 pattern matching. For speed, see the helper class, searcher_string.
1741 pattern matching. For speed, see the helper class, searcher_string.
1739
1742
1740 Attributes:
1743 Attributes:
1741
1744
1742 eof_index - index of EOF, or -1
1745 eof_index - index of EOF, or -1
1743 timeout_index - index of TIMEOUT, or -1
1746 timeout_index - index of TIMEOUT, or -1
1744
1747
1745 After a successful match by the search() method the following attributes
1748 After a successful match by the search() method the following attributes
1746 are available:
1749 are available:
1747
1750
1748 start - index into the buffer, first byte of match
1751 start - index into the buffer, first byte of match
1749 end - index into the buffer, first byte after match
1752 end - index into the buffer, first byte after match
1750 match - the re.match object returned by a successful re.search
1753 match - the re.match object returned by a successful re.search
1751
1754
1752 """
1755 """
1753
1756
1754 def __init__(self, patterns):
1757 def __init__(self, patterns):
1755
1758
1756 """This creates an instance that searches for 'patterns' Where
1759 """This creates an instance that searches for 'patterns' Where
1757 'patterns' may be a list or other sequence of compiled regular
1760 'patterns' may be a list or other sequence of compiled regular
1758 expressions, or the EOF or TIMEOUT types."""
1761 expressions, or the EOF or TIMEOUT types."""
1759
1762
1760 self.eof_index = -1
1763 self.eof_index = -1
1761 self.timeout_index = -1
1764 self.timeout_index = -1
1762 self._searches = []
1765 self._searches = []
1763 for n, s in enumerate(patterns):
1766 for n, s in enumerate(patterns):
1764 if s is EOF:
1767 if s is EOF:
1765 self.eof_index = n
1768 self.eof_index = n
1766 continue
1769 continue
1767 if s is TIMEOUT:
1770 if s is TIMEOUT:
1768 self.timeout_index = n
1771 self.timeout_index = n
1769 continue
1772 continue
1770 self._searches.append((n, s))
1773 self._searches.append((n, s))
1771
1774
1772 def __str__(self):
1775 def __str__(self):
1773
1776
1774 """This returns a human-readable string that represents the state of
1777 """This returns a human-readable string that represents the state of
1775 the object."""
1778 the object."""
1776
1779
1777 ss = [ (n,' %d: re.compile("%s")' % (n,str(s.pattern))) for n,s in self._searches]
1780 ss = [ (n,' %d: re.compile("%s")' % (n,str(s.pattern))) for n,s in self._searches]
1778 ss.append((-1,'searcher_re:'))
1781 ss.append((-1,'searcher_re:'))
1779 if self.eof_index >= 0:
1782 if self.eof_index >= 0:
1780 ss.append ((self.eof_index,' %d: EOF' % self.eof_index))
1783 ss.append ((self.eof_index,' %d: EOF' % self.eof_index))
1781 if self.timeout_index >= 0:
1784 if self.timeout_index >= 0:
1782 ss.append ((self.timeout_index,' %d: TIMEOUT' % self.timeout_index))
1785 ss.append ((self.timeout_index,' %d: TIMEOUT' % self.timeout_index))
1783 ss.sort()
1786 ss.sort()
1784 return '\n'.join(a[1] for a in ss)
1787 return '\n'.join(a[1] for a in ss)
1785
1788
1786 def search(self, buffer, freshlen, searchwindowsize=None):
1789 def search(self, buffer, freshlen, searchwindowsize=None):
1787
1790
1788 """This searches 'buffer' for the first occurence of one of the regular
1791 """This searches 'buffer' for the first occurence of one of the regular
1789 expressions. 'freshlen' must indicate the number of bytes at the end of
1792 expressions. 'freshlen' must indicate the number of bytes at the end of
1790 'buffer' which have not been searched before.
1793 'buffer' which have not been searched before.
1791
1794
1792 See class spawn for the 'searchwindowsize' argument.
1795 See class spawn for the 'searchwindowsize' argument.
1793
1796
1794 If there is a match this returns the index of that string, and sets
1797 If there is a match this returns the index of that string, and sets
1795 'start', 'end' and 'match'. Otherwise, returns -1."""
1798 'start', 'end' and 'match'. Otherwise, returns -1."""
1796
1799
1797 absurd_match = len(buffer)
1800 absurd_match = len(buffer)
1798 first_match = absurd_match
1801 first_match = absurd_match
1799 # 'freshlen' doesn't help here -- we cannot predict the
1802 # 'freshlen' doesn't help here -- we cannot predict the
1800 # length of a match, and the re module provides no help.
1803 # length of a match, and the re module provides no help.
1801 if searchwindowsize is None:
1804 if searchwindowsize is None:
1802 searchstart = 0
1805 searchstart = 0
1803 else:
1806 else:
1804 searchstart = max(0, len(buffer)-searchwindowsize)
1807 searchstart = max(0, len(buffer)-searchwindowsize)
1805 for index, s in self._searches:
1808 for index, s in self._searches:
1806 match = s.search(buffer, searchstart)
1809 match = s.search(buffer, searchstart)
1807 if match is None:
1810 if match is None:
1808 continue
1811 continue
1809 n = match.start()
1812 n = match.start()
1810 if n < first_match:
1813 if n < first_match:
1811 first_match = n
1814 first_match = n
1812 the_match = match
1815 the_match = match
1813 best_index = index
1816 best_index = index
1814 if first_match == absurd_match:
1817 if first_match == absurd_match:
1815 return -1
1818 return -1
1816 self.start = first_match
1819 self.start = first_match
1817 self.match = the_match
1820 self.match = the_match
1818 self.end = self.match.end()
1821 self.end = self.match.end()
1819 return best_index
1822 return best_index
1820
1823
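# A parallel sketch for searcher_re, which takes pre-compiled regular
# expressions (again using names defined in this module). On a hit, the
# re match object itself is exposed via the `match` attribute.
import re

searcher = searcher_re([re.compile(r'\d+ errors'), EOF])

buf = 'compiling...\n3 errors, 1 warning\n'
index = searcher.search(buf, freshlen=len(buf))
print(index)                    # 0 -- the regex matched
print(searcher.match.group())   # '3 errors'
print(searcher.eof_index)       # 1 -- EOF sits at pattern index 1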
1821 def which (filename):
1824 def which (filename):
1822
1825
1823 """This takes a given filename; tries to find it in the environment path;
1826 """This takes a given filename; tries to find it in the environment path;
1824 then checks if it is executable. This returns the full path to the filename
1827 then checks if it is executable. This returns the full path to the filename
1825 if found and executable. Otherwise this returns None."""
1828 if found and executable. Otherwise this returns None."""
1826
1829
1827 # Special case where filename already contains a path.
1830 # Special case where filename already contains a path.
1828 if os.path.dirname(filename) != '':
1831 if os.path.dirname(filename) != '':
1829 if os.access (filename, os.X_OK):
1832 if os.access (filename, os.X_OK):
1830 return filename
1833 return filename
1831
1834
1832 if not os.environ.has_key('PATH') or os.environ['PATH'] == '':
1835 if not os.environ.has_key('PATH') or os.environ['PATH'] == '':
1833 p = os.defpath
1836 p = os.defpath
1834 else:
1837 else:
1835 p = os.environ['PATH']
1838 p = os.environ['PATH']
1836
1839
1837 pathlist = p.split(os.pathsep)
1840 pathlist = p.split(os.pathsep)
1838
1841
1839 for path in pathlist:
1842 for path in pathlist:
1840 f = os.path.join(path, filename)
1843 f = os.path.join(path, filename)
1841 if os.access(f, os.X_OK):
1844 if os.access(f, os.X_OK):
1842 return f
1845 return f
1843 return None
1846 return None
1844
1847
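# A quick sketch of which(): the first executable hit on PATH is returned,
# or None if nothing matches. 'sh' is assumed to exist on a POSIX system.
print(which('sh'))               # e.g. '/bin/sh'
print(which('no-such-binary'))   # None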
1845 def split_command_line(command_line):
1848 def split_command_line(command_line):
1846
1849
1847 """This splits a command line into a list of arguments. It splits arguments
1850 """This splits a command line into a list of arguments. It splits arguments
1848 on spaces, but handles embedded single quotes, double quotes, and escaped
1851 on spaces, but handles embedded single quotes, double quotes, and escaped
1849 characters. This is hard to do reliably with a regular expression, so I
1852 characters. This is hard to do reliably with a regular expression, so I
1850 wrote a little state machine to parse the command line. """
1853 wrote a little state machine to parse the command line. """
1851
1854
1852 arg_list = []
1855 arg_list = []
1853 arg = ''
1856 arg = ''
1854
1857
1855 # Constants to name the states we can be in.
1858 # Constants to name the states we can be in.
1856 state_basic = 0
1859 state_basic = 0
1857 state_esc = 1
1860 state_esc = 1
1858 state_singlequote = 2
1861 state_singlequote = 2
1859 state_doublequote = 3
1862 state_doublequote = 3
1860 state_whitespace = 4 # The state of consuming whitespace between commands.
1863 state_whitespace = 4 # The state of consuming whitespace between commands.
1861 state = state_basic
1864 state = state_basic
1862
1865
1863 for c in command_line:
1866 for c in command_line:
1864 if state == state_basic or state == state_whitespace:
1867 if state == state_basic or state == state_whitespace:
1865 if c == '\\': # Escape the next character
1868 if c == '\\': # Escape the next character
1866 state = state_esc
1869 state = state_esc
1867 elif c == r"'": # Handle single quote
1870 elif c == r"'": # Handle single quote
1868 state = state_singlequote
1871 state = state_singlequote
1869 elif c == r'"': # Handle double quote
1872 elif c == r'"': # Handle double quote
1870 state = state_doublequote
1873 state = state_doublequote
1871 elif c.isspace():
1874 elif c.isspace():
1872 # Add arg to arg_list if we aren't in the middle of whitespace.
1875 # Add arg to arg_list if we aren't in the middle of whitespace.
1873 if state == state_whitespace:
1876 if state == state_whitespace:
1874 pass # Do nothing.
1877 pass # Do nothing.
1875 else:
1878 else:
1876 arg_list.append(arg)
1879 arg_list.append(arg)
1877 arg = ''
1880 arg = ''
1878 state = state_whitespace
1881 state = state_whitespace
1879 else:
1882 else:
1880 arg = arg + c
1883 arg = arg + c
1881 state = state_basic
1884 state = state_basic
1882 elif state == state_esc:
1885 elif state == state_esc:
1883 arg = arg + c
1886 arg = arg + c
1884 state = state_basic
1887 state = state_basic
1885 elif state == state_singlequote:
1888 elif state == state_singlequote:
1886 if c == r"'":
1889 if c == r"'":
1887 state = state_basic
1890 state = state_basic
1888 else:
1891 else:
1889 arg = arg + c
1892 arg = arg + c
1890 elif state == state_doublequote:
1893 elif state == state_doublequote:
1891 if c == r'"':
1894 if c == r'"':
1892 state = state_basic
1895 state = state_basic
1893 else:
1896 else:
1894 arg = arg + c
1897 arg = arg + c
1895
1898
1896 if arg != '':
1899 if arg != '':
1897 arg_list.append(arg)
1900 arg_list.append(arg)
1898 return arg_list
1901 return arg_list
1899
1902
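# A sketch of the quoting rules implemented by the state machine above:
# whitespace splits arguments, while quotes and backslashes group characters.
print(split_command_line(r'grep -e "two words" it\'s here'))
# -> ['grep', '-e', 'two words', "it's", 'here']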
1900 # vi:set sr et ts=4 sw=4 ft=python :
1903 # vi:set sr et ts=4 sw=4 ft=python :
@@ -1,224 +1,224 b''
1 # System library imports.
1 # System library imports.
2 from IPython.external.qt import QtGui
2 from IPython.external.qt import QtGui
3 from pygments.formatters.html import HtmlFormatter
3 from pygments.formatters.html import HtmlFormatter
4 from pygments.lexer import RegexLexer, _TokenType, Text, Error
4 from pygments.lexer import RegexLexer, _TokenType, Text, Error
5 from pygments.lexers import PythonLexer
5 from pygments.lexers import PythonLexer
6 from pygments.styles import get_style_by_name
6 from pygments.styles import get_style_by_name
7
7
8
8
9 def get_tokens_unprocessed(self, text, stack=('root',)):
9 def get_tokens_unprocessed(self, text, stack=('root',)):
10 """ Split ``text`` into (tokentype, text) pairs.
10 """ Split ``text`` into (tokentype, text) pairs.
11
11
12 Monkeypatched to store the final stack on the object itself.
12 Monkeypatched to store the final stack on the object itself.
13 """
13 """
14 pos = 0
14 pos = 0
15 tokendefs = self._tokens
15 tokendefs = self._tokens
16 if hasattr(self, '_saved_state_stack'):
16 if hasattr(self, '_saved_state_stack'):
17 statestack = list(self._saved_state_stack)
17 statestack = list(self._saved_state_stack)
18 else:
18 else:
19 statestack = list(stack)
19 statestack = list(stack)
20 statetokens = tokendefs[statestack[-1]]
20 statetokens = tokendefs[statestack[-1]]
21 while 1:
21 while 1:
22 for rexmatch, action, new_state in statetokens:
22 for rexmatch, action, new_state in statetokens:
23 m = rexmatch(text, pos)
23 m = rexmatch(text, pos)
24 if m:
24 if m:
25 if type(action) is _TokenType:
25 if type(action) is _TokenType:
26 yield pos, action, m.group()
26 yield pos, action, m.group()
27 else:
27 else:
28 for item in action(self, m):
28 for item in action(self, m):
29 yield item
29 yield item
30 pos = m.end()
30 pos = m.end()
31 if new_state is not None:
31 if new_state is not None:
32 # state transition
32 # state transition
33 if isinstance(new_state, tuple):
33 if isinstance(new_state, tuple):
34 for state in new_state:
34 for state in new_state:
35 if state == '#pop':
35 if state == '#pop':
36 statestack.pop()
36 statestack.pop()
37 elif state == '#push':
37 elif state == '#push':
38 statestack.append(statestack[-1])
38 statestack.append(statestack[-1])
39 else:
39 else:
40 statestack.append(state)
40 statestack.append(state)
41 elif isinstance(new_state, int):
41 elif isinstance(new_state, int):
42 # pop
42 # pop
43 del statestack[new_state:]
43 del statestack[new_state:]
44 elif new_state == '#push':
44 elif new_state == '#push':
45 statestack.append(statestack[-1])
45 statestack.append(statestack[-1])
46 else:
46 else:
47 assert False, "wrong state def: %r" % new_state
47 assert False, "wrong state def: %r" % new_state
48 statetokens = tokendefs[statestack[-1]]
48 statetokens = tokendefs[statestack[-1]]
49 break
49 break
50 else:
50 else:
51 try:
51 try:
52 if text[pos] == '\n':
52 if text[pos] == '\n':
53 # at EOL, reset state to "root"
53 # at EOL, reset state to "root"
54 pos += 1
54 pos += 1
55 statestack = ['root']
55 statestack = ['root']
56 statetokens = tokendefs['root']
56 statetokens = tokendefs['root']
57 yield pos, Text, u'\n'
57 yield pos, Text, u'\n'
58 continue
58 continue
59 yield pos, Error, text[pos]
59 yield pos, Error, text[pos]
60 pos += 1
60 pos += 1
61 except IndexError:
61 except IndexError:
62 break
62 break
63 self._saved_state_stack = list(statestack)
63 self._saved_state_stack = list(statestack)
64
64
65 # Monkeypatch!
65 # Monkeypatch!
66 RegexLexer.get_tokens_unprocessed = get_tokens_unprocessed
66 RegexLexer.get_tokens_unprocessed = get_tokens_unprocessed
67
67
68
68
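# A small sketch of what the monkeypatch buys us, assuming a stock Pygments
# PythonLexer: because the final state stack is saved on the lexer, a block
# that ends inside a multi-line construct can be resumed when the next block
# is highlighted instead of restarting at 'root'. highlightBlock() below
# relies on exactly this.
from pygments.lexers import PythonLexer

lexer = PythonLexer()
list(lexer.get_tokens_unprocessed('s = """first line of a docstring\n'))
print(lexer._saved_state_stack)   # ends inside the triple-quoted-string state

list(lexer.get_tokens_unprocessed('still inside the string"""\n'))
print(lexer._saved_state_stack)   # back to ['root'] once the string closes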
69 class PygmentsBlockUserData(QtGui.QTextBlockUserData):
69 class PygmentsBlockUserData(QtGui.QTextBlockUserData):
70 """ Storage for the user data associated with each line.
70 """ Storage for the user data associated with each line.
71 """
71 """
72
72
73 syntax_stack = ('root',)
73 syntax_stack = ('root',)
74
74
75 def __init__(self, **kwds):
75 def __init__(self, **kwds):
76 for key, value in kwds.iteritems():
76 for key, value in kwds.iteritems():
77 setattr(self, key, value)
77 setattr(self, key, value)
78 QtGui.QTextBlockUserData.__init__(self)
78 QtGui.QTextBlockUserData.__init__(self)
79
79
80 def __repr__(self):
80 def __repr__(self):
81 attrs = ['syntax_stack']
81 attrs = ['syntax_stack']
82 kwds = ', '.join([ '%s=%r' % (attr, getattr(self, attr))
82 kwds = ', '.join([ '%s=%r' % (attr, getattr(self, attr))
83 for attr in attrs ])
83 for attr in attrs ])
84 return 'PygmentsBlockUserData(%s)' % kwds
84 return 'PygmentsBlockUserData(%s)' % kwds
85
85
86
86
87 class PygmentsHighlighter(QtGui.QSyntaxHighlighter):
87 class PygmentsHighlighter(QtGui.QSyntaxHighlighter):
88 """ Syntax highlighter that uses Pygments for parsing. """
88 """ Syntax highlighter that uses Pygments for parsing. """
89
89
90 #---------------------------------------------------------------------------
90 #---------------------------------------------------------------------------
91 # 'QSyntaxHighlighter' interface
91 # 'QSyntaxHighlighter' interface
92 #---------------------------------------------------------------------------
92 #---------------------------------------------------------------------------
93
93
94 def __init__(self, parent, lexer=None):
94 def __init__(self, parent, lexer=None):
95 super(PygmentsHighlighter, self).__init__(parent)
95 super(PygmentsHighlighter, self).__init__(parent)
96
96
97 self._document = QtGui.QTextDocument()
97 self._document = QtGui.QTextDocument()
98 self._formatter = HtmlFormatter(nowrap=True)
98 self._formatter = HtmlFormatter(nowrap=True)
99 self._lexer = lexer if lexer else PythonLexer()
99 self._lexer = lexer if lexer else PythonLexer()
100 self.set_style('default')
100 self.set_style('default')
101
101
102 def highlightBlock(self, string):
102 def highlightBlock(self, string):
103 """ Highlight a block of text.
103 """ Highlight a block of text.
104 """
104 """
105 prev_data = self.currentBlock().previous().userData()
105 prev_data = self.currentBlock().previous().userData()
106 if prev_data is not None:
106 if prev_data is not None:
107 self._lexer._saved_state_stack = prev_data.syntax_stack
107 self._lexer._saved_state_stack = prev_data.syntax_stack
108 elif hasattr(self._lexer, '_saved_state_stack'):
108 elif hasattr(self._lexer, '_saved_state_stack'):
109 del self._lexer._saved_state_stack
109 del self._lexer._saved_state_stack
110
110
111 # Lex the text using Pygments
111 # Lex the text using Pygments
112 index = 0
112 index = 0
113 for token, text in self._lexer.get_tokens(string):
113 for token, text in self._lexer.get_tokens(string):
114 length = len(text)
114 length = len(text)
115 self.setFormat(index, length, self._get_format(token))
115 self.setFormat(index, length, self._get_format(token))
116 index += length
116 index += length
117
117
118 if hasattr(self._lexer, '_saved_state_stack'):
118 if hasattr(self._lexer, '_saved_state_stack'):
119 data = PygmentsBlockUserData(
119 data = PygmentsBlockUserData(
120 syntax_stack=self._lexer._saved_state_stack)
120 syntax_stack=self._lexer._saved_state_stack)
121 self.currentBlock().setUserData(data)
121 self.currentBlock().setUserData(data)
122 # Clean up for the next go-round.
122 # Clean up for the next go-round.
123 del self._lexer._saved_state_stack
123 del self._lexer._saved_state_stack
124
124
125 #---------------------------------------------------------------------------
125 #---------------------------------------------------------------------------
126 # 'PygmentsHighlighter' interface
126 # 'PygmentsHighlighter' interface
127 #---------------------------------------------------------------------------
127 #---------------------------------------------------------------------------
128
128
129 def set_style(self, style):
129 def set_style(self, style):
130 """ Sets the style to the specified Pygments style.
130 """ Sets the style to the specified Pygments style.
131 """
131 """
132 if isinstance(style, basestring):
132 if isinstance(style, basestring):
133 style = get_style_by_name(style)
133 style = get_style_by_name(style)
134 self._style = style
134 self._style = style
135 self._clear_caches()
135 self._clear_caches()
136
136
137 def set_style_sheet(self, stylesheet):
137 def set_style_sheet(self, stylesheet):
138 """ Sets a CSS stylesheet. The classes in the stylesheet should
138 """ Sets a CSS stylesheet. The classes in the stylesheet should
139 correspond to those generated by:
139 correspond to those generated by:
140
140
141 pygmentize -S <style> -f html
141 pygmentize -S <style> -f html
142
142
143 Note that 'set_style' and 'set_style_sheet' completely override each
143 Note that 'set_style' and 'set_style_sheet' completely override each
144 other, i.e. they cannot be used in conjunction.
144 other, i.e. they cannot be used in conjunction.
145 """
145 """
146 self._document.setDefaultStyleSheet(stylesheet)
146 self._document.setDefaultStyleSheet(stylesheet)
147 self._style = None
147 self._style = None
148 self._clear_caches()
148 self._clear_caches()
149
149
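# A usage sketch for the public interface above, assuming a Qt binding is
# available via IPython.external.qt. The highlighter attaches to a text
# document and restyles it whenever set_style()/set_style_sheet() is called.
import sys

from IPython.external.qt import QtGui

app = QtGui.QApplication(sys.argv)
edit = QtGui.QPlainTextEdit()
highlighter = PygmentsHighlighter(edit.document())   # PythonLexer by default
highlighter.set_style('friendly')                    # any Pygments style name
edit.setPlainText('def f(x):\n    return x + 1\n')
edit.show()
# app.exec_() would start the event loop and display the highlighted text.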
150 #---------------------------------------------------------------------------
150 #---------------------------------------------------------------------------
151 # Protected interface
151 # Protected interface
152 #---------------------------------------------------------------------------
152 #---------------------------------------------------------------------------
153
153
154 def _clear_caches(self):
154 def _clear_caches(self):
155 """ Clear caches for brushes and formats.
155 """ Clear caches for brushes and formats.
156 """
156 """
157 self._brushes = {}
157 self._brushes = {}
158 self._formats = {}
158 self._formats = {}
159
159
160 def _get_format(self, token):
160 def _get_format(self, token):
161 """ Returns a QTextCharFormat for token or None.
161 """ Returns a QTextCharFormat for token or None.
162 """
162 """
163 if token in self._formats:
163 if token in self._formats:
164 return self._formats[token]
164 return self._formats[token]
165
165
166 if self._style is None:
166 if self._style is None:
167 result = self._get_format_from_document(token, self._document)
167 result = self._get_format_from_document(token, self._document)
168 else:
168 else:
169 result = self._get_format_from_style(token, self._style)
169 result = self._get_format_from_style(token, self._style)
170
170
171 self._formats[token] = result
171 self._formats[token] = result
172 return result
172 return result
173
173
174 def _get_format_from_document(self, token, document):
174 def _get_format_from_document(self, token, document):
175 """ Returns a QTextCharFormat for token by
175 """ Returns a QTextCharFormat for token by
176 """
176 """
177 code, html = self._formatter._format_lines([(token, u'dummy')]).next()
177 code, html = next(self._formatter._format_lines([(token, u'dummy')]))
178 self._document.setHtml(html)
178 self._document.setHtml(html)
179 return QtGui.QTextCursor(self._document).charFormat()
179 return QtGui.QTextCursor(self._document).charFormat()
180
180
181 def _get_format_from_style(self, token, style):
181 def _get_format_from_style(self, token, style):
182 """ Returns a QTextCharFormat for token by reading a Pygments style.
182 """ Returns a QTextCharFormat for token by reading a Pygments style.
183 """
183 """
184 result = QtGui.QTextCharFormat()
184 result = QtGui.QTextCharFormat()
185 for key, value in style.style_for_token(token).items():
185 for key, value in style.style_for_token(token).items():
186 if value:
186 if value:
187 if key == 'color':
187 if key == 'color':
188 result.setForeground(self._get_brush(value))
188 result.setForeground(self._get_brush(value))
189 elif key == 'bgcolor':
189 elif key == 'bgcolor':
190 result.setBackground(self._get_brush(value))
190 result.setBackground(self._get_brush(value))
191 elif key == 'bold':
191 elif key == 'bold':
192 result.setFontWeight(QtGui.QFont.Bold)
192 result.setFontWeight(QtGui.QFont.Bold)
193 elif key == 'italic':
193 elif key == 'italic':
194 result.setFontItalic(True)
194 result.setFontItalic(True)
195 elif key == 'underline':
195 elif key == 'underline':
196 result.setUnderlineStyle(
196 result.setUnderlineStyle(
197 QtGui.QTextCharFormat.SingleUnderline)
197 QtGui.QTextCharFormat.SingleUnderline)
198 elif key == 'sans':
198 elif key == 'sans':
199 result.setFontStyleHint(QtGui.QFont.SansSerif)
199 result.setFontStyleHint(QtGui.QFont.SansSerif)
200 elif key == 'roman':
200 elif key == 'roman':
201 result.setFontStyleHint(QtGui.QFont.Times)
201 result.setFontStyleHint(QtGui.QFont.Times)
202 elif key == 'mono':
202 elif key == 'mono':
203 result.setFontStyleHint(QtGui.QFont.TypeWriter)
203 result.setFontStyleHint(QtGui.QFont.TypeWriter)
204 return result
204 return result
205
205
206 def _get_brush(self, color):
206 def _get_brush(self, color):
207 """ Returns a brush for the color.
207 """ Returns a brush for the color.
208 """
208 """
209 result = self._brushes.get(color)
209 result = self._brushes.get(color)
210 if result is None:
210 if result is None:
211 qcolor = self._get_color(color)
211 qcolor = self._get_color(color)
212 result = QtGui.QBrush(qcolor)
212 result = QtGui.QBrush(qcolor)
213 self._brushes[color] = result
213 self._brushes[color] = result
214 return result
214 return result
215
215
216 def _get_color(self, color):
216 def _get_color(self, color):
217 """ Returns a QColor built from a Pygments color string.
217 """ Returns a QColor built from a Pygments color string.
218 """
218 """
219 qcolor = QtGui.QColor()
219 qcolor = QtGui.QColor()
220 qcolor.setRgb(int(color[:2], base=16),
220 qcolor.setRgb(int(color[:2], base=16),
221 int(color[2:4], base=16),
221 int(color[2:4], base=16),
222 int(color[4:6], base=16))
222 int(color[4:6], base=16))
223 return qcolor
223 return qcolor
224
224
@@ -1,97 +1,97 b''
1 """Implementation of the parametric test support for Python 2.x
1 """Implementation of the parametric test support for Python 2.x
2 """
2 """
3
3
4 #-----------------------------------------------------------------------------
4 #-----------------------------------------------------------------------------
5 # Copyright (C) 2009-2011 The IPython Development Team
5 # Copyright (C) 2009-2011 The IPython Development Team
6 #
6 #
7 # Distributed under the terms of the BSD License. The full license is in
7 # Distributed under the terms of the BSD License. The full license is in
8 # the file COPYING, distributed as part of this software.
8 # the file COPYING, distributed as part of this software.
9 #-----------------------------------------------------------------------------
9 #-----------------------------------------------------------------------------
10
10
11 #-----------------------------------------------------------------------------
11 #-----------------------------------------------------------------------------
12 # Imports
12 # Imports
13 #-----------------------------------------------------------------------------
13 #-----------------------------------------------------------------------------
14
14
15 import sys
15 import sys
16 import unittest
16 import unittest
17 from compiler.consts import CO_GENERATOR
17 from compiler.consts import CO_GENERATOR
18
18
19 #-----------------------------------------------------------------------------
19 #-----------------------------------------------------------------------------
20 # Classes and functions
20 # Classes and functions
21 #-----------------------------------------------------------------------------
21 #-----------------------------------------------------------------------------
22
22
23 def isgenerator(func):
23 def isgenerator(func):
24 try:
24 try:
25 return func.func_code.co_flags & CO_GENERATOR != 0
25 return func.func_code.co_flags & CO_GENERATOR != 0
26 except AttributeError:
26 except AttributeError:
27 return False
27 return False
28
28
29 class ParametricTestCase(unittest.TestCase):
29 class ParametricTestCase(unittest.TestCase):
30 """Write parametric tests in normal unittest testcase form.
30 """Write parametric tests in normal unittest testcase form.
31
31
32 Limitations: the last iteration does not print a trailing newline when running
32 Limitations: the last iteration does not print a trailing newline when running
33 in verbose mode.
33 in verbose mode.
34 """
34 """
35 def run_parametric(self, result, testMethod):
35 def run_parametric(self, result, testMethod):
36 # But if we have a test generator, we iterate it ourselves
36 # But if we have a test generator, we iterate it ourselves
37 testgen = testMethod()
37 testgen = testMethod()
38 while True:
38 while True:
39 try:
39 try:
40 # Initialize test
40 # Initialize test
41 result.startTest(self)
41 result.startTest(self)
42
42
43 # SetUp
43 # SetUp
44 try:
44 try:
45 self.setUp()
45 self.setUp()
46 except KeyboardInterrupt:
46 except KeyboardInterrupt:
47 raise
47 raise
48 except:
48 except:
49 result.addError(self, sys.exc_info())
49 result.addError(self, sys.exc_info())
50 return
50 return
51 # Test execution
51 # Test execution
52 ok = False
52 ok = False
53 try:
53 try:
54 testgen.next()
54 next(testgen)
55 ok = True
55 ok = True
56 except StopIteration:
56 except StopIteration:
57 # We stop the loop
57 # We stop the loop
58 break
58 break
59 except self.failureException:
59 except self.failureException:
60 result.addFailure(self, sys.exc_info())
60 result.addFailure(self, sys.exc_info())
61 except KeyboardInterrupt:
61 except KeyboardInterrupt:
62 raise
62 raise
63 except:
63 except:
64 result.addError(self, sys.exc_info())
64 result.addError(self, sys.exc_info())
65 # TearDown
65 # TearDown
66 try:
66 try:
67 self.tearDown()
67 self.tearDown()
68 except KeyboardInterrupt:
68 except KeyboardInterrupt:
69 raise
69 raise
70 except:
70 except:
71 result.addError(self, sys.exc_info())
71 result.addError(self, sys.exc_info())
72 ok = False
72 ok = False
73 if ok: result.addSuccess(self)
73 if ok: result.addSuccess(self)
74
74
75 finally:
75 finally:
76 result.stopTest(self)
76 result.stopTest(self)
77
77
78 def run(self, result=None):
78 def run(self, result=None):
79 if result is None:
79 if result is None:
80 result = self.defaultTestResult()
80 result = self.defaultTestResult()
81 testMethod = getattr(self, self._testMethodName)
81 testMethod = getattr(self, self._testMethodName)
82 # For normal tests, we just call the base class and return that
82 # For normal tests, we just call the base class and return that
83 if isgenerator(testMethod):
83 if isgenerator(testMethod):
84 return self.run_parametric(result, testMethod)
84 return self.run_parametric(result, testMethod)
85 else:
85 else:
86 return super(ParametricTestCase, self).run(result)
86 return super(ParametricTestCase, self).run(result)
87
87
88
88
89 def parametric(func):
89 def parametric(func):
90 """Decorator to make a simple function into a normal test via unittest."""
90 """Decorator to make a simple function into a normal test via unittest."""
91
91
92 class Tester(ParametricTestCase):
92 class Tester(ParametricTestCase):
93 test = staticmethod(func)
93 test = staticmethod(func)
94
94
95 Tester.__name__ = func.__name__
95 Tester.__name__ = func.__name__
96
96
97 return Tester
97 return Tester
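# A sketch of how the decorator above is meant to be used: the test body is
# a generator, and each yield marks the boundary of one parametric case, so
# every iteration is reported to the result object as its own run.
import unittest

@parametric
def test_squares():
    for n in (1, 2, 3):
        assert n * n == n ** 2
        yield     # end of one parametric case

if __name__ == '__main__':
    unittest.main()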
@@ -1,195 +1,195 b''
1 """Common utilities for the various process_* implementations.
1 """Common utilities for the various process_* implementations.
2
2
3 This file is only meant to be imported by the platform-specific implementations
3 This file is only meant to be imported by the platform-specific implementations
4 of subprocess utilities, and it contains tools that are common to all of them.
4 of subprocess utilities, and it contains tools that are common to all of them.
5 """
5 """
6
6
7 #-----------------------------------------------------------------------------
7 #-----------------------------------------------------------------------------
8 # Copyright (C) 2010-2011 The IPython Development Team
8 # Copyright (C) 2010-2011 The IPython Development Team
9 #
9 #
10 # Distributed under the terms of the BSD License. The full license is in
10 # Distributed under the terms of the BSD License. The full license is in
11 # the file COPYING, distributed as part of this software.
11 # the file COPYING, distributed as part of this software.
12 #-----------------------------------------------------------------------------
12 #-----------------------------------------------------------------------------
13
13
14 #-----------------------------------------------------------------------------
14 #-----------------------------------------------------------------------------
15 # Imports
15 # Imports
16 #-----------------------------------------------------------------------------
16 #-----------------------------------------------------------------------------
17 import subprocess
17 import subprocess
18 import shlex
18 import shlex
19 import sys
19 import sys
20
20
21 from IPython.utils import py3compat
21 from IPython.utils import py3compat
22
22
23 #-----------------------------------------------------------------------------
23 #-----------------------------------------------------------------------------
24 # Function definitions
24 # Function definitions
25 #-----------------------------------------------------------------------------
25 #-----------------------------------------------------------------------------
26
26
27 def read_no_interrupt(p):
27 def read_no_interrupt(p):
28 """Read from a pipe ignoring EINTR errors.
28 """Read from a pipe ignoring EINTR errors.
29
29
30 This is necessary because when reading from pipes with GUI event loops
30 This is necessary because when reading from pipes with GUI event loops
31 running in the background, interrupts are often raised that stop the
31 running in the background, interrupts are often raised that stop the
32 command from completing."""
32 command from completing."""
33 import errno
33 import errno
34
34
35 try:
35 try:
36 return p.read()
36 return p.read()
37 except IOError as err:
37 except IOError as err:
38 if err.errno != errno.EINTR:
38 if err.errno != errno.EINTR:
39 raise
39 raise
40
40
41
41
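A hedged usage sketch for read_no_interrupt (the module path is assumed and the command is illustrative): EINTR-triggered IOErrors are swallowed, in which case the function simply returns None, while any other IOError is re-raised.

    import subprocess
    from IPython.utils._process_common import read_no_interrupt  # assumed path

    p = subprocess.Popen('echo hello', shell=True, stdout=subprocess.PIPE)
    data = read_no_interrupt(p.stdout)  # pipe contents, or None if interrupted by EINTR
    p.wait()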
42 def process_handler(cmd, callback, stderr=subprocess.PIPE):
42 def process_handler(cmd, callback, stderr=subprocess.PIPE):
43 """Open a command in a shell subprocess and execute a callback.
43 """Open a command in a shell subprocess and execute a callback.
44
44
45 This function provides common scaffolding for creating subprocess.Popen()
45 This function provides common scaffolding for creating subprocess.Popen()
46 calls. It creates a Popen object and then calls the callback with it.
46 calls. It creates a Popen object and then calls the callback with it.
47
47
48 Parameters
48 Parameters
49 ----------
49 ----------
50 cmd : str
50 cmd : str
51 A string to be executed with the underlying system shell (by calling
51 A string to be executed with the underlying system shell (by calling
52 :func:`Popen` with ``shell=True``).
52 :func:`Popen` with ``shell=True``).
53
53
54 callback : callable
54 callback : callable
55 A one-argument function that will be called with the Popen object.
55 A one-argument function that will be called with the Popen object.
56
56
57 stderr : file descriptor number, optional
57 stderr : file descriptor number, optional
58 By default this is set to ``subprocess.PIPE``, but you can also pass the
58 By default this is set to ``subprocess.PIPE``, but you can also pass the
59 value ``subprocess.STDOUT`` to force the subprocess' stderr to go into
59 value ``subprocess.STDOUT`` to force the subprocess' stderr to go into
60 the same file descriptor as its stdout. This is useful to read stdout
60 the same file descriptor as its stdout. This is useful to read stdout
61 and stderr combined in the order they are generated.
61 and stderr combined in the order they are generated.
62
62
63 Returns
63 Returns
64 -------
64 -------
65 The return value of the provided callback is returned.
65 The return value of the provided callback is returned.
66 """
66 """
67 sys.stdout.flush()
67 sys.stdout.flush()
68 sys.stderr.flush()
68 sys.stderr.flush()
69 # On win32, close_fds can't be true when using pipes for stdin/out/err
69 # On win32, close_fds can't be true when using pipes for stdin/out/err
70 close_fds = sys.platform != 'win32'
70 close_fds = sys.platform != 'win32'
71 p = subprocess.Popen(cmd, shell=True,
71 p = subprocess.Popen(cmd, shell=True,
72 stdin=subprocess.PIPE,
72 stdin=subprocess.PIPE,
73 stdout=subprocess.PIPE,
73 stdout=subprocess.PIPE,
74 stderr=stderr,
74 stderr=stderr,
75 close_fds=close_fds)
75 close_fds=close_fds)
76
76
77 try:
77 try:
78 out = callback(p)
78 out = callback(p)
79 except KeyboardInterrupt:
79 except KeyboardInterrupt:
80 print('^C')
80 print('^C')
81 sys.stdout.flush()
81 sys.stdout.flush()
82 sys.stderr.flush()
82 sys.stderr.flush()
83 out = None
83 out = None
84 finally:
84 finally:
85 # Make really sure that we don't leave processes behind, in case the
85 # Make really sure that we don't leave processes behind, in case the
86 # call above raises an exception
86 # call above raises an exception
87 # We start by assuming the subprocess finished (to avoid NameErrors
87 # We start by assuming the subprocess finished (to avoid NameErrors
88 # later depending on the path taken)
88 # later depending on the path taken)
89 if p.returncode is None:
89 if p.returncode is None:
90 try:
90 try:
91 p.terminate()
91 p.terminate()
92 p.poll()
92 p.poll()
93 except OSError:
93 except OSError:
94 pass
94 pass
95 # One last try on our way out
95 # One last try on our way out
96 if p.returncode is None:
96 if p.returncode is None:
97 try:
97 try:
98 p.kill()
98 p.kill()
99 except OSError:
99 except OSError:
100 pass
100 pass
101
101
102 return out
102 return out
103
103
104
104
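A sketch of the callback pattern process_handler expects, mirroring how getoutput and getoutputerror below use it: the callback receives the live Popen object, and whatever it returns is handed back to the caller (or None if a KeyboardInterrupt cut the callback short). The import path is assumed and the command is illustrative.

    import subprocess
    from IPython.utils._process_common import process_handler  # assumed path

    def collect_stdout(p):
        # p is the Popen object created by process_handler; communicate()
        # waits for the command to finish and returns (stdout, stderr) bytes.
        out, _ = p.communicate()
        return out

    output = process_handler('uname -a', collect_stdout, stderr=subprocess.PIPE)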
105 def getoutput(cmd):
105 def getoutput(cmd):
106 """Return standard output of executing cmd in a shell.
106 """Return standard output of executing cmd in a shell.
107
107
108 Accepts the same arguments as os.system().
108 Accepts the same arguments as os.system().
109
109
110 Parameters
110 Parameters
111 ----------
111 ----------
112 cmd : str
112 cmd : str
113 A command to be executed in the system shell.
113 A command to be executed in the system shell.
114
114
115 Returns
115 Returns
116 -------
116 -------
117 stdout : str
117 stdout : str
118 """
118 """
119
119
120 out = process_handler(cmd, lambda p: p.communicate()[0], subprocess.STDOUT)
120 out = process_handler(cmd, lambda p: p.communicate()[0], subprocess.STDOUT)
121 if out is None:
121 if out is None:
122 return ''
122 return ''
123 return py3compat.bytes_to_str(out)
123 return py3compat.bytes_to_str(out)
124
124
125
125
126 def getoutputerror(cmd):
126 def getoutputerror(cmd):
127 """Return (standard output, standard error) of executing cmd in a shell.
127 """Return (standard output, standard error) of executing cmd in a shell.
128
128
129 Accepts the same arguments as os.system().
129 Accepts the same arguments as os.system().
130
130
131 Parameters
131 Parameters
132 ----------
132 ----------
133 cmd : str
133 cmd : str
134 A command to be executed in the system shell.
134 A command to be executed in the system shell.
135
135
136 Returns
136 Returns
137 -------
137 -------
138 stdout : str
138 stdout : str
139 stderr : str
139 stderr : str
140 """
140 """
141
141
142 out_err = process_handler(cmd, lambda p: p.communicate())
142 out_err = process_handler(cmd, lambda p: p.communicate())
143 if out_err is None:
143 if out_err is None:
144 return '', ''
144 return '', ''
145 out, err = out_err
145 out, err = out_err
146 return py3compat.bytes_to_str(out), py3compat.bytes_to_str(err)
146 return py3compat.bytes_to_str(out), py3compat.bytes_to_str(err)
147
147
148
148
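Usage sketch for the two wrappers above (import path assumed, commands illustrative). Both return decoded strings via py3compat.bytes_to_str; getoutput folds stderr into stdout by passing subprocess.STDOUT, while getoutputerror keeps the two streams separate.

    from IPython.utils._process_common import getoutput, getoutputerror  # assumed path

    listing = getoutput('ls')                    # stdout (with stderr folded in) as one str
    out, err = getoutputerror('ls nosuchfile')   # stdout and stderr returned separately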
149 def arg_split(s, posix=False, strict=True):
149 def arg_split(s, posix=False, strict=True):
150 """Split a command line's arguments in a shell-like manner.
150 """Split a command line's arguments in a shell-like manner.
151
151
152 This is a modified version of the standard library's shlex.split()
152 This is a modified version of the standard library's shlex.split()
153 function, but with a default of posix=False for splitting, so that quotes
153 function, but with a default of posix=False for splitting, so that quotes
154 in inputs are respected.
154 in inputs are respected.
155
155
156 If strict=False, then any errors shlex.split would raise will result in the
156 If strict=False, then any errors shlex.split would raise will result in the
157 unparsed remainder being the last element of the list, rather than raising.
157 unparsed remainder being the last element of the list, rather than raising.
158 This is because we sometimes use arg_split to parse things other than
158 This is because we sometimes use arg_split to parse things other than
159 command-line args.
159 command-line args.
160 """
160 """
161
161
162 # Unfortunately, python's shlex module is buggy with unicode input:
162 # Unfortunately, python's shlex module is buggy with unicode input:
163 # http://bugs.python.org/issue1170
163 # http://bugs.python.org/issue1170
164 # At least encoding the input when it's unicode seems to help, but there
164 # At least encoding the input when it's unicode seems to help, but there
165 # may be more problems lurking. Apparently this is fixed in python3.
165 # may be more problems lurking. Apparently this is fixed in python3.
166 is_unicode = False
166 is_unicode = False
167 if (not py3compat.PY3) and isinstance(s, unicode):
167 if (not py3compat.PY3) and isinstance(s, unicode):
168 is_unicode = True
168 is_unicode = True
169 s = s.encode('utf-8')
169 s = s.encode('utf-8')
170 lex = shlex.shlex(s, posix=posix)
170 lex = shlex.shlex(s, posix=posix)
171 lex.whitespace_split = True
171 lex.whitespace_split = True
172 # Extract tokens, ensuring that things like leaving open quotes
172 # Extract tokens, ensuring that things like leaving open quotes
173 # do not cause this to raise. This is important, because we
173 # do not cause this to raise. This is important, because we
174 # sometimes pass Python source through this (e.g. %timeit f(" ")),
174 # sometimes pass Python source through this (e.g. %timeit f(" ")),
175 # and it shouldn't raise an exception.
175 # and it shouldn't raise an exception.
176 # It may be a bad idea to parse things that are not command-line args
176 # It may be a bad idea to parse things that are not command-line args
177 # through this function, but we do, so let's be safe about it.
177 # through this function, but we do, so let's be safe about it.
178 lex.commenters='' #fix for GH-1269
178 lex.commenters='' #fix for GH-1269
179 tokens = []
179 tokens = []
180 while True:
180 while True:
181 try:
181 try:
182 tokens.append(lex.next())
182 tokens.append(next(lex))
183 except StopIteration:
183 except StopIteration:
184 break
184 break
185 except ValueError:
185 except ValueError:
186 if strict:
186 if strict:
187 raise
187 raise
188 # couldn't parse, get remaining blob as last token
188 # couldn't parse, get remaining blob as last token
189 tokens.append(lex.token)
189 tokens.append(lex.token)
190 break
190 break
191
191
192 if is_unicode:
192 if is_unicode:
193 # Convert the tokens back to unicode.
193 # Convert the tokens back to unicode.
194 tokens = [x.decode('utf-8') for x in tokens]
194 tokens = [x.decode('utf-8') for x in tokens]
195 return tokens
195 return tokens
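A short sketch of how the posix and strict flags play out (the results shown are what shlex-style splitting typically produces; import path assumed).

    from IPython.utils._process_common import arg_split  # assumed path

    arg_split('grep "a b" file.txt')               # ['grep', '"a b"', 'file.txt'] -- quotes kept (posix=False)
    arg_split('grep "a b" file.txt', posix=True)   # ['grep', 'a b', 'file.txt']   -- quotes interpreted
    arg_split('echo "unterminated', strict=False)  # unparsed remainder kept as the last token instead of raising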
@@ -1,91 +1,95 b''
1 import sys
1 import sys
2 import time
2 import time
3 from io import StringIO
3 from io import StringIO
4
4
5 from session import extract_header, Message
5 from session import extract_header, Message
6
6
7 from IPython.utils import io, text, encoding
7 from IPython.utils import io, text, encoding
8 from IPython.utils import py3compat
8
9
9 #-----------------------------------------------------------------------------
10 #-----------------------------------------------------------------------------
10 # Globals
11 # Globals
11 #-----------------------------------------------------------------------------
12 #-----------------------------------------------------------------------------
12
13
13 #-----------------------------------------------------------------------------
14 #-----------------------------------------------------------------------------
14 # Stream classes
15 # Stream classes
15 #-----------------------------------------------------------------------------
16 #-----------------------------------------------------------------------------
16
17
17 class OutStream(object):
18 class OutStream(object):
18 """A file like object that publishes the stream to a 0MQ PUB socket."""
19 """A file like object that publishes the stream to a 0MQ PUB socket."""
19
20
20 # The time interval between automatic flushes, in seconds.
21 # The time interval between automatic flushes, in seconds.
21 flush_interval = 0.05
22 flush_interval = 0.05
22 topic=None
23 topic=None
23
24
24 def __init__(self, session, pub_socket, name):
25 def __init__(self, session, pub_socket, name):
25 self.session = session
26 self.session = session
26 self.pub_socket = pub_socket
27 self.pub_socket = pub_socket
27 self.name = name
28 self.name = name
28 self.parent_header = {}
29 self.parent_header = {}
29 self._new_buffer()
30 self._new_buffer()
30
31
31 def set_parent(self, parent):
32 def set_parent(self, parent):
32 self.parent_header = extract_header(parent)
33 self.parent_header = extract_header(parent)
33
34
34 def close(self):
35 def close(self):
35 self.pub_socket = None
36 self.pub_socket = None
36
37
37 def flush(self):
38 def flush(self):
38 #io.rprint('>>>flushing output buffer: %s<<<' % self.name) # dbg
39 #io.rprint('>>>flushing output buffer: %s<<<' % self.name) # dbg
39 if self.pub_socket is None:
40 if self.pub_socket is None:
40 raise ValueError(u'I/O operation on closed file')
41 raise ValueError(u'I/O operation on closed file')
41 else:
42 else:
42 data = self._buffer.getvalue()
43 data = self._buffer.getvalue()
43 if data:
44 if data:
44 content = {u'name':self.name, u'data':data}
45 content = {u'name':self.name, u'data':data}
45 msg = self.session.send(self.pub_socket, u'stream', content=content,
46 msg = self.session.send(self.pub_socket, u'stream', content=content,
46 parent=self.parent_header, ident=self.topic)
47 parent=self.parent_header, ident=self.topic)
47
48
48 if hasattr(self.pub_socket, 'flush'):
49 if hasattr(self.pub_socket, 'flush'):
49 # socket itself has flush (presumably ZMQStream)
50 # socket itself has flush (presumably ZMQStream)
50 self.pub_socket.flush()
51 self.pub_socket.flush()
51 self._buffer.close()
52 self._buffer.close()
52 self._new_buffer()
53 self._new_buffer()
53
54
54 def isatty(self):
55 def isatty(self):
55 return False
56 return False
56
57
57 def next(self):
58 def __next__(self):
58 raise IOError('Read not supported on a write only stream.')
59 raise IOError('Read not supported on a write only stream.')
59
60
61 if not py3compat.PY3:
62 next = __next__
63
60 def read(self, size=-1):
64 def read(self, size=-1):
61 raise IOError('Read not supported on a write only stream.')
65 raise IOError('Read not supported on a write only stream.')
62
66
63 def readline(self, size=-1):
67 def readline(self, size=-1):
64 raise IOError('Read not supported on a write only stream.')
68 raise IOError('Read not supported on a write only stream.')
65
69
66 def write(self, string):
70 def write(self, string):
67 if self.pub_socket is None:
71 if self.pub_socket is None:
68 raise ValueError('I/O operation on closed file')
72 raise ValueError('I/O operation on closed file')
69 else:
73 else:
70 # Make sure that we're handling unicode
74 # Make sure that we're handling unicode
71 if not isinstance(string, unicode):
75 if not isinstance(string, unicode):
72 enc = encoding.DEFAULT_ENCODING
76 enc = encoding.DEFAULT_ENCODING
73 string = string.decode(enc, 'replace')
77 string = string.decode(enc, 'replace')
74
78
75 self._buffer.write(string)
79 self._buffer.write(string)
76 current_time = time.time()
80 current_time = time.time()
77 if self._start <= 0:
81 if self._start <= 0:
78 self._start = current_time
82 self._start = current_time
79 elif current_time - self._start > self.flush_interval:
83 elif current_time - self._start > self.flush_interval:
80 self.flush()
84 self.flush()
81
85
82 def writelines(self, sequence):
86 def writelines(self, sequence):
83 if self.pub_socket is None:
87 if self.pub_socket is None:
84 raise ValueError('I/O operation on closed file')
88 raise ValueError('I/O operation on closed file')
85 else:
89 else:
86 for string in sequence:
90 for string in sequence:
87 self.write(string)
91 self.write(string)
88
92
89 def _new_buffer(self):
93 def _new_buffer(self):
90 self._buffer = StringIO()
94 self._buffer = StringIO()
91 self._start = -1
95 self._start = -1
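The hunk above is the heart of this changeset: rather than letting 2to3's fix_next rename the method at build time, the Python 3 protocol method __next__ is written out explicitly and aliased back to next for Python 2. A standalone sketch of the same pattern, with sys.version_info standing in for py3compat.PY3:

    import sys

    class WriteOnlyStream(object):
        """Minimal illustration of the cross-version iterator-protocol stub."""

        def __next__(self):
            raise IOError('Read not supported on a write only stream.')

        if sys.version_info[0] < 3:
            # Python 2 looks for .next(), so alias it to the same method.
            next = __next__

    # Call sites use the builtin, which works on both versions:
    #     next(stream)  ->  raises IOError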
@@ -1,298 +1,299 b''
1 #!/usr/bin/env python
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
2 # -*- coding: utf-8 -*-
3 """Setup script for IPython.
3 """Setup script for IPython.
4
4
5 Under Posix environments it works like a typical setup.py script.
5 Under Posix environments it works like a typical setup.py script.
6 Under Windows, the command sdist is not supported, since IPython
6 Under Windows, the command sdist is not supported, since IPython
7 requires utilities which are not available under Windows."""
7 requires utilities which are not available under Windows."""
8
8
9 #-----------------------------------------------------------------------------
9 #-----------------------------------------------------------------------------
10 # Copyright (c) 2008-2011, IPython Development Team.
10 # Copyright (c) 2008-2011, IPython Development Team.
11 # Copyright (c) 2001-2007, Fernando Perez <fernando.perez@colorado.edu>
11 # Copyright (c) 2001-2007, Fernando Perez <fernando.perez@colorado.edu>
12 # Copyright (c) 2001, Janko Hauser <jhauser@zscout.de>
12 # Copyright (c) 2001, Janko Hauser <jhauser@zscout.de>
13 # Copyright (c) 2001, Nathaniel Gray <n8gray@caltech.edu>
13 # Copyright (c) 2001, Nathaniel Gray <n8gray@caltech.edu>
14 #
14 #
15 # Distributed under the terms of the Modified BSD License.
15 # Distributed under the terms of the Modified BSD License.
16 #
16 #
17 # The full license is in the file COPYING.txt, distributed with this software.
17 # The full license is in the file COPYING.txt, distributed with this software.
18 #-----------------------------------------------------------------------------
18 #-----------------------------------------------------------------------------
19
19
20 #-----------------------------------------------------------------------------
20 #-----------------------------------------------------------------------------
21 # Minimal Python version sanity check
21 # Minimal Python version sanity check
22 #-----------------------------------------------------------------------------
22 #-----------------------------------------------------------------------------
23 from __future__ import print_function
23 from __future__ import print_function
24
24
25 import sys
25 import sys
26
26
27 # This check is also made in IPython/__init__, don't forget to update both when
27 # This check is also made in IPython/__init__, don't forget to update both when
28 # changing Python version requirements.
28 # changing Python version requirements.
29 #~ if sys.version[0:3] < '2.6':
29 #~ if sys.version[0:3] < '2.6':
30 #~ error = """\
30 #~ error = """\
31 #~ ERROR: 'IPython requires Python Version 2.6 or above.'
31 #~ ERROR: 'IPython requires Python Version 2.6 or above.'
32 #~ Exiting."""
32 #~ Exiting."""
33 #~ print >> sys.stderr, error
33 #~ print >> sys.stderr, error
34 #~ sys.exit(1)
34 #~ sys.exit(1)
35
35
36 PY3 = (sys.version_info[0] >= 3)
36 PY3 = (sys.version_info[0] >= 3)
37
37
38 # At least we're on the python version we need, move on.
38 # At least we're on the python version we need, move on.
39
39
40 #-------------------------------------------------------------------------------
40 #-------------------------------------------------------------------------------
41 # Imports
41 # Imports
42 #-------------------------------------------------------------------------------
42 #-------------------------------------------------------------------------------
43
43
44 # Stdlib imports
44 # Stdlib imports
45 import os
45 import os
46 import shutil
46 import shutil
47
47
48 from glob import glob
48 from glob import glob
49
49
50 # BEFORE importing distutils, remove MANIFEST. distutils doesn't properly
50 # BEFORE importing distutils, remove MANIFEST. distutils doesn't properly
51 # update it when the contents of directories change.
51 # update it when the contents of directories change.
52 if os.path.exists('MANIFEST'): os.remove('MANIFEST')
52 if os.path.exists('MANIFEST'): os.remove('MANIFEST')
53
53
54 from distutils.core import setup
54 from distutils.core import setup
55
55
56 # On Python 3, we need distribute (new setuptools) to do the 2to3 conversion
56 # On Python 3, we need distribute (new setuptools) to do the 2to3 conversion
57 if PY3:
57 if PY3:
58 import setuptools
58 import setuptools
59
59
60 # Our own imports
60 # Our own imports
61 from setupbase import target_update
61 from setupbase import target_update
62
62
63 from setupbase import (
63 from setupbase import (
64 setup_args,
64 setup_args,
65 find_packages,
65 find_packages,
66 find_package_data,
66 find_package_data,
67 find_scripts,
67 find_scripts,
68 find_data_files,
68 find_data_files,
69 check_for_dependencies,
69 check_for_dependencies,
70 record_commit_info,
70 record_commit_info,
71 )
71 )
72 from setupext import setupext
72 from setupext import setupext
73
73
74 isfile = os.path.isfile
74 isfile = os.path.isfile
75 pjoin = os.path.join
75 pjoin = os.path.join
76
76
77 #-----------------------------------------------------------------------------
77 #-----------------------------------------------------------------------------
78 # Function definitions
78 # Function definitions
79 #-----------------------------------------------------------------------------
79 #-----------------------------------------------------------------------------
80
80
81 def cleanup():
81 def cleanup():
82 """Clean up the junk left around by the build process"""
82 """Clean up the junk left around by the build process"""
83 if "develop" not in sys.argv:
83 if "develop" not in sys.argv:
84 try:
84 try:
85 shutil.rmtree('ipython.egg-info')
85 shutil.rmtree('ipython.egg-info')
86 except:
86 except:
87 try:
87 try:
88 os.unlink('ipython.egg-info')
88 os.unlink('ipython.egg-info')
89 except:
89 except:
90 pass
90 pass
91
91
92 #-------------------------------------------------------------------------------
92 #-------------------------------------------------------------------------------
93 # Handle OS specific things
93 # Handle OS specific things
94 #-------------------------------------------------------------------------------
94 #-------------------------------------------------------------------------------
95
95
96 if os.name == 'posix':
96 if os.name == 'posix':
97 os_name = 'posix'
97 os_name = 'posix'
98 elif os.name in ['nt','dos']:
98 elif os.name in ['nt','dos']:
99 os_name = 'windows'
99 os_name = 'windows'
100 else:
100 else:
101 print('Unsupported operating system:',os.name)
101 print('Unsupported operating system:',os.name)
102 sys.exit(1)
102 sys.exit(1)
103
103
104 # Under Windows, 'sdist' has not been supported. Now that the docs build with
104 # Under Windows, 'sdist' has not been supported. Now that the docs build with
105 # Sphinx it might work, but let's not turn it on until someone confirms that it
105 # Sphinx it might work, but let's not turn it on until someone confirms that it
106 # actually works.
106 # actually works.
107 if os_name == 'windows' and 'sdist' in sys.argv:
107 if os_name == 'windows' and 'sdist' in sys.argv:
108 print('The sdist command is not available under Windows. Exiting.')
108 print('The sdist command is not available under Windows. Exiting.')
109 sys.exit(1)
109 sys.exit(1)
110
110
111 #-------------------------------------------------------------------------------
111 #-------------------------------------------------------------------------------
112 # Things related to the IPython documentation
112 # Things related to the IPython documentation
113 #-------------------------------------------------------------------------------
113 #-------------------------------------------------------------------------------
114
114
115 # update the manuals when building a source dist
115 # update the manuals when building a source dist
116 if len(sys.argv) >= 2 and sys.argv[1] in ('sdist','bdist_rpm'):
116 if len(sys.argv) >= 2 and sys.argv[1] in ('sdist','bdist_rpm'):
117 import textwrap
117 import textwrap
118
118
119 # List of things to be updated. Each entry is a triplet of args for
119 # List of things to be updated. Each entry is a triplet of args for
120 # target_update()
120 # target_update()
121 to_update = [
121 to_update = [
122 # FIXME - Disabled for now: we need to redo an automatic way
122 # FIXME - Disabled for now: we need to redo an automatic way
123 # of generating the magic info inside the rst.
123 # of generating the magic info inside the rst.
124 #('docs/magic.tex',
124 #('docs/magic.tex',
125 #['IPython/Magic.py'],
125 #['IPython/Magic.py'],
126 #"cd doc && ./update_magic.sh" ),
126 #"cd doc && ./update_magic.sh" ),
127
127
128 ('docs/man/ipcluster.1.gz',
128 ('docs/man/ipcluster.1.gz',
129 ['docs/man/ipcluster.1'],
129 ['docs/man/ipcluster.1'],
130 'cd docs/man && gzip -9c ipcluster.1 > ipcluster.1.gz'),
130 'cd docs/man && gzip -9c ipcluster.1 > ipcluster.1.gz'),
131
131
132 ('docs/man/ipcontroller.1.gz',
132 ('docs/man/ipcontroller.1.gz',
133 ['docs/man/ipcontroller.1'],
133 ['docs/man/ipcontroller.1'],
134 'cd docs/man && gzip -9c ipcontroller.1 > ipcontroller.1.gz'),
134 'cd docs/man && gzip -9c ipcontroller.1 > ipcontroller.1.gz'),
135
135
136 ('docs/man/ipengine.1.gz',
136 ('docs/man/ipengine.1.gz',
137 ['docs/man/ipengine.1'],
137 ['docs/man/ipengine.1'],
138 'cd docs/man && gzip -9c ipengine.1 > ipengine.1.gz'),
138 'cd docs/man && gzip -9c ipengine.1 > ipengine.1.gz'),
139
139
140 ('docs/man/iplogger.1.gz',
140 ('docs/man/iplogger.1.gz',
141 ['docs/man/iplogger.1'],
141 ['docs/man/iplogger.1'],
142 'cd docs/man && gzip -9c iplogger.1 > iplogger.1.gz'),
142 'cd docs/man && gzip -9c iplogger.1 > iplogger.1.gz'),
143
143
144 ('docs/man/ipython.1.gz',
144 ('docs/man/ipython.1.gz',
145 ['docs/man/ipython.1'],
145 ['docs/man/ipython.1'],
146 'cd docs/man && gzip -9c ipython.1 > ipython.1.gz'),
146 'cd docs/man && gzip -9c ipython.1 > ipython.1.gz'),
147
147
148 ('docs/man/irunner.1.gz',
148 ('docs/man/irunner.1.gz',
149 ['docs/man/irunner.1'],
149 ['docs/man/irunner.1'],
150 'cd docs/man && gzip -9c irunner.1 > irunner.1.gz'),
150 'cd docs/man && gzip -9c irunner.1 > irunner.1.gz'),
151
151
152 ('docs/man/pycolor.1.gz',
152 ('docs/man/pycolor.1.gz',
153 ['docs/man/pycolor.1'],
153 ['docs/man/pycolor.1'],
154 'cd docs/man && gzip -9c pycolor.1 > pycolor.1.gz'),
154 'cd docs/man && gzip -9c pycolor.1 > pycolor.1.gz'),
155 ]
155 ]
156
156
157
157
158 [ target_update(*t) for t in to_update ]
158 [ target_update(*t) for t in to_update ]
159
159
160 #---------------------------------------------------------------------------
160 #---------------------------------------------------------------------------
161 # Find all the packages, package data, and data_files
161 # Find all the packages, package data, and data_files
162 #---------------------------------------------------------------------------
162 #---------------------------------------------------------------------------
163
163
164 packages = find_packages()
164 packages = find_packages()
165 package_data = find_package_data()
165 package_data = find_package_data()
166 data_files = find_data_files()
166 data_files = find_data_files()
167
167
168 setup_args['packages'] = packages
168 setup_args['packages'] = packages
169 setup_args['package_data'] = package_data
169 setup_args['package_data'] = package_data
170 setup_args['data_files'] = data_files
170 setup_args['data_files'] = data_files
171
171
172 #---------------------------------------------------------------------------
172 #---------------------------------------------------------------------------
173 # custom distutils commands
173 # custom distutils commands
174 #---------------------------------------------------------------------------
174 #---------------------------------------------------------------------------
175 # imports here, so they are after setuptools import if there was one
175 # imports here, so they are after setuptools import if there was one
176 from distutils.command.sdist import sdist
176 from distutils.command.sdist import sdist
177 from distutils.command.upload import upload
177 from distutils.command.upload import upload
178
178
179 class UploadWindowsInstallers(upload):
179 class UploadWindowsInstallers(upload):
180
180
181 description = "Upload Windows installers to PyPI (only used from tools/release_windows.py)"
181 description = "Upload Windows installers to PyPI (only used from tools/release_windows.py)"
182 user_options = upload.user_options + [
182 user_options = upload.user_options + [
183 ('files=', 'f', 'exe file (or glob) to upload')
183 ('files=', 'f', 'exe file (or glob) to upload')
184 ]
184 ]
185 def initialize_options(self):
185 def initialize_options(self):
186 upload.initialize_options(self)
186 upload.initialize_options(self)
187 meta = self.distribution.metadata
187 meta = self.distribution.metadata
188 base = '{name}-{version}'.format(
188 base = '{name}-{version}'.format(
189 name=meta.get_name(),
189 name=meta.get_name(),
190 version=meta.get_version()
190 version=meta.get_version()
191 )
191 )
192 self.files = os.path.join('dist', '%s.*.exe' % base)
192 self.files = os.path.join('dist', '%s.*.exe' % base)
193
193
194 def run(self):
194 def run(self):
195 for dist_file in glob(self.files):
195 for dist_file in glob(self.files):
196 self.upload_file('bdist_wininst', 'any', dist_file)
196 self.upload_file('bdist_wininst', 'any', dist_file)
197
197
198 setup_args['cmdclass'] = {
198 setup_args['cmdclass'] = {
199 'build_py': record_commit_info('IPython'),
199 'build_py': record_commit_info('IPython'),
200 'sdist' : record_commit_info('IPython', sdist),
200 'sdist' : record_commit_info('IPython', sdist),
201 'upload_wininst' : UploadWindowsInstallers,
201 'upload_wininst' : UploadWindowsInstallers,
202 }
202 }
203
203
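With this cmdclass registered, the custom command is presumably invoked as python setup.py upload_wininst --files=dist/&lt;pattern&gt;.exe (the --files/-f option comes from user_options above). By default it globs for the dist/&lt;name&gt;-&lt;version&gt;.*.exe installers produced by bdist_wininst and, as the description notes, it is only meant to be driven from tools/release_windows.py.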
204 #---------------------------------------------------------------------------
204 #---------------------------------------------------------------------------
205 # Handle scripts, dependencies, and setuptools specific things
205 # Handle scripts, dependencies, and setuptools specific things
206 #---------------------------------------------------------------------------
206 #---------------------------------------------------------------------------
207
207
208 # For some commands, use setuptools. Note that we do NOT list install here!
208 # For some commands, use setuptools. Note that we do NOT list install here!
209 # If you want a setuptools-enhanced install, just run 'setupegg.py install'
209 # If you want a setuptools-enhanced install, just run 'setupegg.py install'
210 needs_setuptools = set(('develop', 'release', 'bdist_egg', 'bdist_rpm',
210 needs_setuptools = set(('develop', 'release', 'bdist_egg', 'bdist_rpm',
211 'bdist', 'bdist_dumb', 'bdist_wininst', 'install_egg_info',
211 'bdist', 'bdist_dumb', 'bdist_wininst', 'install_egg_info',
212 'egg_info', 'easy_install', 'upload',
212 'egg_info', 'easy_install', 'upload',
213 ))
213 ))
214 if sys.platform == 'win32':
214 if sys.platform == 'win32':
215 # Depend on setuptools for install on *Windows only*
215 # Depend on setuptools for install on *Windows only*
216 # If we get script-installation working without setuptools,
216 # If we get script-installation working without setuptools,
217 # then we can back off, but until then use it.
217 # then we can back off, but until then use it.
218 # See Issue #369 on GitHub for more
218 # See Issue #369 on GitHub for more
219 needs_setuptools.add('install')
219 needs_setuptools.add('install')
220
220
221 if len(needs_setuptools.intersection(sys.argv)) > 0:
221 if len(needs_setuptools.intersection(sys.argv)) > 0:
222 import setuptools
222 import setuptools
223
223
224 # This dict is used for passing extra arguments that are setuptools
224 # This dict is used for passing extra arguments that are setuptools
225 # specific to setup
225 # specific to setup
226 setuptools_extra_args = {}
226 setuptools_extra_args = {}
227
227
228 if 'setuptools' in sys.modules:
228 if 'setuptools' in sys.modules:
229 setuptools_extra_args['zip_safe'] = False
229 setuptools_extra_args['zip_safe'] = False
230 setuptools_extra_args['entry_points'] = find_scripts(True)
230 setuptools_extra_args['entry_points'] = find_scripts(True)
231 setup_args['extras_require'] = dict(
231 setup_args['extras_require'] = dict(
232 parallel = 'pyzmq>=2.1.4',
232 parallel = 'pyzmq>=2.1.4',
233 zmq = 'pyzmq>=2.1.4',
233 zmq = 'pyzmq>=2.1.4',
234 doc = 'Sphinx>=0.3',
234 doc = 'Sphinx>=0.3',
235 test = 'nose>=0.10.1',
235 test = 'nose>=0.10.1',
236 notebook = 'tornado>=2.0'
236 notebook = 'tornado>=2.0'
237 )
237 )
238 requires = setup_args.setdefault('install_requires', [])
238 requires = setup_args.setdefault('install_requires', [])
239 setupext.display_status = False
239 setupext.display_status = False
240 if not setupext.check_for_readline():
240 if not setupext.check_for_readline():
241 if sys.platform == 'darwin':
241 if sys.platform == 'darwin':
242 requires.append('readline')
242 requires.append('readline')
243 elif sys.platform.startswith('win'):
243 elif sys.platform.startswith('win'):
244 # Pyreadline 64 bit windows issue solved in versions >=1.7.1
244 # Pyreadline 64 bit windows issue solved in versions >=1.7.1
245 # Also solves issues with some older versions of pyreadline that
245 # Also solves issues with some older versions of pyreadline that
246 # satisfy the unconstrained dependency.
246 # satisfy the unconstrained dependency.
247 requires.append('pyreadline>=1.7.1')
247 requires.append('pyreadline>=1.7.1')
248 else:
248 else:
249 pass
249 pass
250 # do we want to install readline here?
250 # do we want to install readline here?
251
251
252 # Script to be run by the windows binary installer after the default setup
252 # Script to be run by the windows binary installer after the default setup
253 # routine, to add shortcuts and similar windows-only things. Windows
253 # routine, to add shortcuts and similar windows-only things. Windows
254 # post-install scripts MUST reside in the scripts/ dir, otherwise distutils
254 # post-install scripts MUST reside in the scripts/ dir, otherwise distutils
255 # doesn't find them.
255 # doesn't find them.
256 if 'bdist_wininst' in sys.argv:
256 if 'bdist_wininst' in sys.argv:
257 if len(sys.argv) > 2 and \
257 if len(sys.argv) > 2 and \
258 ('sdist' in sys.argv or 'bdist_rpm' in sys.argv):
258 ('sdist' in sys.argv or 'bdist_rpm' in sys.argv):
259 print("ERROR: bdist_wininst must be run alone. Exiting.", file=sys.stderr)
259 print("ERROR: bdist_wininst must be run alone. Exiting.", file=sys.stderr)
260 sys.exit(1)
260 sys.exit(1)
261 setup_args['scripts'] = [pjoin('scripts','ipython_win_post_install.py')]
261 setup_args['scripts'] = [pjoin('scripts','ipython_win_post_install.py')]
262 setup_args['options'] = {"bdist_wininst":
262 setup_args['options'] = {"bdist_wininst":
263 {"install_script":
263 {"install_script":
264 "ipython_win_post_install.py"}}
264 "ipython_win_post_install.py"}}
265
265
266 if PY3:
266 if PY3:
267 setuptools_extra_args['use_2to3'] = True
267 setuptools_extra_args['use_2to3'] = True
268 # we try to keep the code compatible with Python 2.6, 2.7, and 3.1 to 3.3,
268 # we try to keep the code compatible with Python 2.6, 2.7, and 3.1 to 3.3,
269 # so we explicitly disable some 2to3 fixers to be sure we aren't forgetting
269 # so we explicitly disable some 2to3 fixers to be sure we aren't forgetting
270 # anything.
270 # anything.
271 setuptools_extra_args['use_2to3_exclude_fixers'] = [
271 setuptools_extra_args['use_2to3_exclude_fixers'] = [
272 'lib2to3.fixes.fix_except',
272 'lib2to3.fixes.fix_except',
273 'lib2to3.fixes.fix_apply',
273 'lib2to3.fixes.fix_apply',
274 'lib2to3.fixes.fix_repr',
274 'lib2to3.fixes.fix_repr',
275 'lib2to3.fixes.fix_next',
275 ]
276 ]
276 from setuptools.command.build_py import build_py
277 from setuptools.command.build_py import build_py
277 setup_args['cmdclass'] = {'build_py': record_commit_info('IPython', build_cmd=build_py)}
278 setup_args['cmdclass'] = {'build_py': record_commit_info('IPython', build_cmd=build_py)}
278 setuptools_extra_args['entry_points'] = find_scripts(True, suffix='3')
279 setuptools_extra_args['entry_points'] = find_scripts(True, suffix='3')
279 setuptools._dont_write_bytecode = True
280 setuptools._dont_write_bytecode = True
280 else:
281 else:
281 # If we are running without setuptools, call this function which will
282 # If we are running without setuptools, call this function which will
282 # check for dependencies and inform the user what is needed. This is
283 # check for dependencies and inform the user what is needed. This is
283 # just to make life easy for users.
284 # just to make life easy for users.
284 check_for_dependencies()
285 check_for_dependencies()
285 setup_args['scripts'] = find_scripts(False)
286 setup_args['scripts'] = find_scripts(False)
286
287
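For reference, a minimal sketch of how the distribute-era 2to3 hooks used above fit together (the package name is a placeholder). Excluding lib2to3.fixes.fix_next is only safe because this commit already applies the rewrite by hand, as the earlier hunks in this changeset show.

    from setuptools import setup

    setup(
        name='example',
        py_modules=['example'],
        use_2to3=True,
        # fix_next would rewrite obj.next() -> next(obj) and
        # def next(self) -> def __next__(self); both are now done in the source.
        use_2to3_exclude_fixers=['lib2to3.fixes.fix_next'],
    )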
287 #---------------------------------------------------------------------------
288 #---------------------------------------------------------------------------
288 # Do the actual setup now
289 # Do the actual setup now
289 #---------------------------------------------------------------------------
290 #---------------------------------------------------------------------------
290
291
291 setup_args.update(setuptools_extra_args)
292 setup_args.update(setuptools_extra_args)
292
293
293 def main():
294 def main():
294 setup(**setup_args)
295 setup(**setup_args)
295 cleanup()
296 cleanup()
296
297
297 if __name__ == '__main__':
298 if __name__ == '__main__':
298 main()
299 main()