spelling: fixes from spell checker
Mads Kiilerich
r21024:7731a228 default

Note: the requested changes are too big and the listing below is truncated.

# color.py color output for the status and qseries commands
#
# Copyright (C) 2007 Kevin Christen <kevin.christen@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''colorize output from some commands

This extension modifies the status and resolve commands to add color
to their output to reflect file status, the qseries command to add
color to reflect patch status (applied, unapplied, missing), and to
diff-related commands to highlight additions, removals, diff headers,
and trailing whitespace.

Other effects in addition to color, like bold and underlined text, are
also available. By default, the terminfo database is used to find the
terminal codes used to change color and effect. If terminfo is not
available, then effects are rendered with the ECMA-48 SGR control
function (aka ANSI escape codes).

Default effects may be overridden from your configuration file::

  [color]
  status.modified = blue bold underline red_background
  status.added = green bold
  status.removed = red bold blue_background
  status.deleted = cyan bold underline
  status.unknown = magenta bold underline
  status.ignored = black bold

  # 'none' turns off all effects
  status.clean = none
  status.copied = none

  qseries.applied = blue bold underline
  qseries.unapplied = black bold
  qseries.missing = red bold

  diff.diffline = bold
  diff.extended = cyan bold
  diff.file_a = red bold
  diff.file_b = green bold
  diff.hunk = magenta
  diff.deleted = red
  diff.inserted = green
  diff.changed = white
  diff.trailingwhitespace = bold red_background

  resolve.unresolved = red bold
  resolve.resolved = green bold

  bookmarks.current = green

  branches.active = none
  branches.closed = black bold
  branches.current = green
  branches.inactive = none

  tags.normal = green
  tags.local = black bold

  rebase.rebased = blue
  rebase.remaining = red bold

  shelve.age = cyan
  shelve.newest = green bold
  shelve.name = blue bold

  histedit.remaining = red bold

The available effects in terminfo mode are 'blink', 'bold', 'dim',
'inverse', 'invisible', 'italic', 'standout', and 'underline'; in
ECMA-48 mode, the options are 'bold', 'inverse', 'italic', and
'underline'. How each is rendered depends on the terminal emulator.
Some may not be available for a given terminal type, and will be
silently ignored.

Note that on some systems, terminfo mode may cause problems when using
color with the pager extension and less -R. less with the -R option
will only display ECMA-48 color codes, and terminfo mode may sometimes
emit codes that less doesn't understand. You can work around this by
either using ansi mode (or auto mode), or by using less -r (which will
pass through all terminal control codes, not just color control
codes).

Because there are only eight standard colors, this module allows you
to define color names for other color slots which might be available
for your terminal type, assuming terminfo mode. For instance::

  color.brightblue = 12
  color.pink = 207
  color.orange = 202

to set 'brightblue' to color slot 12 (useful for 16 color terminals
that have brighter colors defined in the upper eight) and 'pink' and
'orange' to colors in 256-color xterm's default color cube. These
defined colors may then be used as any of the pre-defined eight,
including appending '_background' to set the background to that color.

By default, the color extension will use ANSI mode (or win32 mode on
Windows) if it detects a terminal. To override auto mode (to enable
terminfo mode, for example), set the following configuration option::

  [color]
  mode = terminfo

Any value other than 'ansi', 'win32', 'terminfo', or 'auto' will
disable color.
'''
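The ECMA-48 SGR rendering the docstring mentions can be illustrated with a standalone sketch (independent of Mercurial; `SGR` and `sgr_wrap` are hypothetical names, and the table below is only a subset of the real `_effects` map): each effect maps to a numeric parameter, the parameters are joined into a single `\033[...m` escape sequence, and `0` resets all effects first.

```python
# Standalone sketch of ECMA-48 SGR rendering: each effect is a numeric
# parameter, joined with ';' into one escape sequence. Subset of the
# extension's _effects table, for illustration only.
SGR = {'none': 0, 'bold': 1, 'underline': 4, 'red': 31, 'green': 32,
       'red_background': 41}

def sgr_wrap(text, effects):
    """Wrap text in an SGR start sequence and a reset sequence."""
    params = [str(SGR[e]) for e in ['none'] + effects.split()]
    start = '\033[' + ';'.join(params) + 'm'
    stop = '\033[0m'  # 'none' resets every active effect
    return start + text + stop

# 'status.added = green bold' from the example config would render as:
print(repr(sgr_wrap('A file.txt', 'green bold')))
# -> '\x1b[0;32;1mA file.txt\x1b[0m'
```

This mirrors the shape of the escape sequences the extension emits in ansi mode; the real code additionally supports terminfo and win32 back ends.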

import os

from mercurial import commands, dispatch, extensions, ui as uimod, util
from mercurial import templater, error
from mercurial.i18n import _

testedwith = 'internal'

# start and stop parameters for effects
_effects = {'none': 0, 'black': 30, 'red': 31, 'green': 32, 'yellow': 33,
            'blue': 34, 'magenta': 35, 'cyan': 36, 'white': 37, 'bold': 1,
            'italic': 3, 'underline': 4, 'inverse': 7,
            'black_background': 40, 'red_background': 41,
            'green_background': 42, 'yellow_background': 43,
            'blue_background': 44, 'purple_background': 45,
            'cyan_background': 46, 'white_background': 47}

def _terminfosetup(ui, mode):
    '''Initialize terminfo data and the terminal if we're in terminfo mode.'''

    global _terminfo_params
    # If we failed to load curses, we go ahead and return.
    if not _terminfo_params:
        return
    # Otherwise, see what the config file says.
    if mode not in ('auto', 'terminfo'):
        return

    _terminfo_params.update((key[6:], (False, int(val)))
                            for key, val in ui.configitems('color')
                            if key.startswith('color.'))

    try:
        curses.setupterm()
    except curses.error:
        _terminfo_params = {}
        return

    for key, (b, e) in _terminfo_params.items():
        if not b:
            continue
        if not curses.tigetstr(e):
            # Most terminals don't support dim, invis, etc, so don't be
            # noisy and use ui.debug().
            ui.debug("no terminfo entry for %s\n" % e)
            del _terminfo_params[key]
    if not curses.tigetstr('setaf') or not curses.tigetstr('setab'):
        # Only warn about missing terminfo entries if we explicitly asked for
        # terminfo mode.
        if mode == "terminfo":
            ui.warn(_("no terminfo entry for setab/setaf: reverting to "
                      "ECMA-48 color\n"))
        _terminfo_params = {}

def _modesetup(ui, coloropt):
    global _terminfo_params

    auto = coloropt == 'auto'
    always = not auto and util.parsebool(coloropt)
    if not always and not auto:
        return None

    formatted = always or (os.environ.get('TERM') != 'dumb' and ui.formatted())

    mode = ui.config('color', 'mode', 'auto')
    realmode = mode
    if mode == 'auto':
        if os.name == 'nt' and 'TERM' not in os.environ:
            # looks like a cmd.exe console, use win32 API or nothing
            realmode = 'win32'
        else:
            realmode = 'ansi'

    if realmode == 'win32':
        _terminfo_params = {}
        if not w32effects:
            if mode == 'win32':
                # only warn if color.mode is explicitly set to win32
                ui.warn(_('warning: failed to set color mode to %s\n') % mode)
            return None
        _effects.update(w32effects)
    elif realmode == 'ansi':
        _terminfo_params = {}
    elif realmode == 'terminfo':
        _terminfosetup(ui, mode)
        if not _terminfo_params:
            ## FIXME Shouldn't we return None in this case too?
            if mode == 'terminfo':
                # only warn if color.mode is explicitly set to terminfo
                ui.warn(_('warning: failed to set color mode to %s\n') % mode)
            realmode = 'ansi'
    else:
        return None

    if always or (auto and formatted):
        return realmode
    return None

try:
    import curses
    # Mapping from effect name to terminfo attribute name or color number.
    # This will also force-load the curses module.
    _terminfo_params = {'none': (True, 'sgr0'),
                        'standout': (True, 'smso'),
                        'underline': (True, 'smul'),
                        'reverse': (True, 'rev'),
                        'inverse': (True, 'rev'),
                        'blink': (True, 'blink'),
                        'dim': (True, 'dim'),
                        'bold': (True, 'bold'),
                        'invisible': (True, 'invis'),
                        'italic': (True, 'sitm'),
                        'black': (False, curses.COLOR_BLACK),
                        'red': (False, curses.COLOR_RED),
                        'green': (False, curses.COLOR_GREEN),
                        'yellow': (False, curses.COLOR_YELLOW),
                        'blue': (False, curses.COLOR_BLUE),
                        'magenta': (False, curses.COLOR_MAGENTA),
                        'cyan': (False, curses.COLOR_CYAN),
                        'white': (False, curses.COLOR_WHITE)}
except ImportError:
    _terminfo_params = False

_styles = {'grep.match': 'red bold',
           'grep.linenumber': 'green',
           'grep.rev': 'green',
           'grep.change': 'green',
           'grep.sep': 'cyan',
           'grep.filename': 'magenta',
           'grep.user': 'magenta',
           'grep.date': 'magenta',
           'bookmarks.current': 'green',
           'branches.active': 'none',
           'branches.closed': 'black bold',
           'branches.current': 'green',
           'branches.inactive': 'none',
           'diff.changed': 'white',
           'diff.deleted': 'red',
           'diff.diffline': 'bold',
           'diff.extended': 'cyan bold',
           'diff.file_a': 'red bold',
           'diff.file_b': 'green bold',
           'diff.hunk': 'magenta',
           'diff.inserted': 'green',
           'diff.trailingwhitespace': 'bold red_background',
           'diffstat.deleted': 'red',
           'diffstat.inserted': 'green',
           'histedit.remaining': 'red bold',
           'ui.prompt': 'yellow',
           'log.changeset': 'yellow',
           'rebase.rebased': 'blue',
           'rebase.remaining': 'red bold',
           'resolve.resolved': 'green bold',
           'resolve.unresolved': 'red bold',
           'shelve.age': 'cyan',
           'shelve.newest': 'green bold',
           'shelve.name': 'blue bold',
           'status.added': 'green bold',
           'status.clean': 'none',
           'status.copied': 'none',
           'status.deleted': 'cyan bold underline',
           'status.ignored': 'black bold',
           'status.modified': 'blue bold',
           'status.removed': 'red bold',
           'status.unknown': 'magenta bold underline',
           'tags.normal': 'green',
           'tags.local': 'black bold'}


def _effect_str(effect):
    '''Helper function for render_effects().'''

    bg = False
    if effect.endswith('_background'):
        bg = True
        effect = effect[:-11]
    attr, val = _terminfo_params[effect]
    if attr:
        return curses.tigetstr(val)
    elif bg:
        return curses.tparm(curses.tigetstr('setab'), val)
    else:
        return curses.tparm(curses.tigetstr('setaf'), val)

def render_effects(text, effects):
    'Wrap text in commands to turn on each effect.'
    if not text:
        return text
    if not _terminfo_params:
        start = [str(_effects[e]) for e in ['none'] + effects.split()]
        start = '\033[' + ';'.join(start) + 'm'
        stop = '\033[' + str(_effects['none']) + 'm'
    else:
        start = ''.join(_effect_str(effect)
                        for effect in ['none'] + effects.split())
        stop = _effect_str('none')
    return ''.join([start, text, stop])
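One detail of `_effect_str()` worth spelling out: the `'_background'` suffix is 11 characters long, so `effect[:-11]` recovers the base color name before the terminfo lookup, and the flag decides between the `setab` (background) and `setaf` (foreground) capabilities. A tiny illustrative sketch (`split_effect` is a hypothetical helper, not part of the extension):

```python
# Illustration of the '_background' suffix handling in _effect_str():
# strip the 11-character suffix and remember whether this effect sets
# the background rather than the foreground.
def split_effect(effect):
    if effect.endswith('_background'):
        return effect[:-len('_background')], True
    return effect, False

print(split_effect('red_background'))  # -> ('red', True)
print(split_effect('bold'))            # -> ('bold', False)
```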

def extstyles():
    for name, ext in extensions.extensions():
        _styles.update(getattr(ext, 'colortable', {}))

def valideffect(effect):
    'Determine if the effect is valid or not.'
    good = False
    if not _terminfo_params and effect in _effects:
        good = True
    elif effect in _terminfo_params or effect[:-11] in _terminfo_params:
        good = True
    return good

def configstyles(ui):
    for status, cfgeffects in ui.configitems('color'):
        if '.' not in status or status.startswith('color.'):
            continue
        cfgeffects = ui.configlist('color', status)
        if cfgeffects:
            good = []
            for e in cfgeffects:
                if valideffect(e):
                    good.append(e)
                else:
                    ui.warn(_("ignoring unknown color/effect %r "
                              "(configured in color.%s)\n")
                            % (e, status))
            _styles[status] = ' '.join(good)
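The validation performed by `valideffect()`/`configstyles()` can be sketched in isolation for the ansi case: a configured value such as `status.modified = blue bold` is split on whitespace, unknown effect names are dropped (the real code also warns), and `_background` variants are valid whenever their base color is. Here `ANSI_EFFECTS` and `parse_style` are hypothetical stand-ins for the extension's `_effects` table and config handling:

```python
# Standalone sketch of effect validation in ansi mode: keep known
# effects (including their '_background' variants), drop the rest.
ANSI_EFFECTS = {'none', 'bold', 'underline', 'inverse', 'italic',
                'black', 'red', 'green', 'yellow', 'blue', 'magenta',
                'cyan', 'white'}

def parse_style(value):
    good, bad = [], []
    for effect in value.split():
        if effect.endswith('_background'):
            base = effect[:-len('_background')]
        else:
            base = effect
        (good if base in ANSI_EFFECTS else bad).append(effect)
    return ' '.join(good), bad

print(parse_style('blue bold underline red_background blurple'))
# -> ('blue bold underline red_background', ['blurple'])
```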

class colorui(uimod.ui):
    def popbuffer(self, labeled=False):
        if self._colormode is None:
            return super(colorui, self).popbuffer(labeled)

        if labeled:
            return ''.join(self.label(a, label) for a, label
                           in self._buffers.pop())
        return ''.join(a for a, label in self._buffers.pop())

    _colormode = 'ansi'
    def write(self, *args, **opts):
        if self._colormode is None:
            return super(colorui, self).write(*args, **opts)

        label = opts.get('label', '')
        if self._buffers:
            self._buffers[-1].extend([(str(a), label) for a in args])
        elif self._colormode == 'win32':
            for a in args:
                win32print(a, super(colorui, self).write, **opts)
        else:
            return super(colorui, self).write(
                *[self.label(str(a), label) for a in args], **opts)

    def write_err(self, *args, **opts):
        if self._colormode is None:
            return super(colorui, self).write_err(*args, **opts)

        label = opts.get('label', '')
        if self._colormode == 'win32':
            for a in args:
                win32print(a, super(colorui, self).write_err, **opts)
        else:
            return super(colorui, self).write_err(
                *[self.label(str(a), label) for a in args], **opts)

    def label(self, msg, label):
        if self._colormode is None:
            return super(colorui, self).label(msg, label)

        effects = []
        for l in label.split():
            s = _styles.get(l, '')
            if s:
                effects.append(s)
            elif valideffect(l):
                effects.append(l)
        effects = ' '.join(effects)
        if effects:
            return '\n'.join([render_effects(s, effects)
                              for s in msg.split('\n')])
        return msg

def templatelabel(context, mapping, args):
    if len(args) != 2:
        # i18n: "label" is a keyword
        raise error.ParseError(_("label expects two arguments"))

    thing = templater._evalifliteral(args[1], context, mapping)

    # apparently, repo could be a string that is the favicon?
    repo = mapping.get('repo', '')
    if isinstance(repo, str):
        return thing

    label = templater._evalifliteral(args[0], context, mapping)

    thing = templater.stringify(thing)
    label = templater.stringify(label)

    return repo.ui.label(thing, label)

def uisetup(ui):
    if ui.plain():
        return
    if not isinstance(ui, colorui):
        colorui.__bases__ = (ui.__class__,)
        ui.__class__ = colorui
    def colorcmd(orig, ui_, opts, cmd, cmdfunc):
        mode = _modesetup(ui_, opts['color'])
        colorui._colormode = mode
        if mode:
            extstyles()
            configstyles(ui_)
        return orig(ui_, opts, cmd, cmdfunc)
    extensions.wrapfunction(dispatch, '_runcommand', colorcmd)
    templater.funcs['label'] = templatelabel

def extsetup(ui):
    commands.globalopts.append(
        ('', 'color', 'auto',
         # i18n: 'always', 'auto', and 'never' are keywords and should
         # not be translated
         _("when to colorize (boolean, always, auto, or never)"),
         _('TYPE')))

def debugcolor(ui, repo, **opts):
    global _styles
    _styles = {}
    for effect in _effects.keys():
        _styles[effect] = effect
    ui.write(('color mode: %s\n') % ui._colormode)
    ui.write(_('available colors:\n'))
    for label, colors in _styles.items():
        ui.write(('%s\n') % colors, label=label)

if os.name != 'nt':
    w32effects = None
else:
    import re, ctypes

    _kernel32 = ctypes.windll.kernel32

    _WORD = ctypes.c_ushort

    _INVALID_HANDLE_VALUE = -1

    class _COORD(ctypes.Structure):
        _fields_ = [('X', ctypes.c_short),
                    ('Y', ctypes.c_short)]

    class _SMALL_RECT(ctypes.Structure):
        _fields_ = [('Left', ctypes.c_short),
                    ('Top', ctypes.c_short),
                    ('Right', ctypes.c_short),
                    ('Bottom', ctypes.c_short)]

    class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
        _fields_ = [('dwSize', _COORD),
                    ('dwCursorPosition', _COORD),
                    ('wAttributes', _WORD),
                    ('srWindow', _SMALL_RECT),
                    ('dwMaximumWindowSize', _COORD)]

    _STD_OUTPUT_HANDLE = 0xfffffff5L # (DWORD)-11
    _STD_ERROR_HANDLE = 0xfffffff4L # (DWORD)-12

    _FOREGROUND_BLUE = 0x0001
    _FOREGROUND_GREEN = 0x0002
    _FOREGROUND_RED = 0x0004
    _FOREGROUND_INTENSITY = 0x0008

    _BACKGROUND_BLUE = 0x0010
    _BACKGROUND_GREEN = 0x0020
    _BACKGROUND_RED = 0x0040
    _BACKGROUND_INTENSITY = 0x0080

    _COMMON_LVB_REVERSE_VIDEO = 0x4000
    _COMMON_LVB_UNDERSCORE = 0x8000

    # http://msdn.microsoft.com/en-us/library/ms682088%28VS.85%29.aspx
    w32effects = {
        'none': -1,
        'black': 0,
        'red': _FOREGROUND_RED,
        'green': _FOREGROUND_GREEN,
        'yellow': _FOREGROUND_RED | _FOREGROUND_GREEN,
        'blue': _FOREGROUND_BLUE,
        'magenta': _FOREGROUND_BLUE | _FOREGROUND_RED,
        'cyan': _FOREGROUND_BLUE | _FOREGROUND_GREEN,
        'white': _FOREGROUND_RED | _FOREGROUND_GREEN | _FOREGROUND_BLUE,
        'bold': _FOREGROUND_INTENSITY,
        'black_background': 0x100, # unused value > 0x0f
        'red_background': _BACKGROUND_RED,
        'green_background': _BACKGROUND_GREEN,
        'yellow_background': _BACKGROUND_RED | _BACKGROUND_GREEN,
        'blue_background': _BACKGROUND_BLUE,
        'purple_background': _BACKGROUND_BLUE | _BACKGROUND_RED,
        'cyan_background': _BACKGROUND_BLUE | _BACKGROUND_GREEN,
        'white_background': (_BACKGROUND_RED | _BACKGROUND_GREEN |
                             _BACKGROUND_BLUE),
        'bold_background': _BACKGROUND_INTENSITY,
        'underline': _COMMON_LVB_UNDERSCORE, # double-byte charsets only
        'inverse': _COMMON_LVB_REVERSE_VIDEO, # double-byte charsets only
    }
515
515
516 passthrough = set([_FOREGROUND_INTENSITY,
516 passthrough = set([_FOREGROUND_INTENSITY,
517 _BACKGROUND_INTENSITY,
517 _BACKGROUND_INTENSITY,
518 _COMMON_LVB_UNDERSCORE,
518 _COMMON_LVB_UNDERSCORE,
519 _COMMON_LVB_REVERSE_VIDEO])
519 _COMMON_LVB_REVERSE_VIDEO])
520
520
521 stdout = _kernel32.GetStdHandle(
521 stdout = _kernel32.GetStdHandle(
522 _STD_OUTPUT_HANDLE) # don't close the handle returned
522 _STD_OUTPUT_HANDLE) # don't close the handle returned
523 if stdout is None or stdout == _INVALID_HANDLE_VALUE:
523 if stdout is None or stdout == _INVALID_HANDLE_VALUE:
524 w32effects = None
524 w32effects = None
525 else:
525 else:
526 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
526 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
527 if not _kernel32.GetConsoleScreenBufferInfo(
527 if not _kernel32.GetConsoleScreenBufferInfo(
528 stdout, ctypes.byref(csbi)):
528 stdout, ctypes.byref(csbi)):
529 # stdout may not support GetConsoleScreenBufferInfo()
529 # stdout may not support GetConsoleScreenBufferInfo()
530 # when called from subprocess or redirected
530 # when called from subprocess or redirected
531 w32effects = None
531 w32effects = None
532 else:
532 else:
533 origattr = csbi.wAttributes
533 origattr = csbi.wAttributes
534 ansire = re.compile('\033\[([^m]*)m([^\033]*)(.*)',
534 ansire = re.compile('\033\[([^m]*)m([^\033]*)(.*)',
535 re.MULTILINE | re.DOTALL)
535 re.MULTILINE | re.DOTALL)
536
536
537 def win32print(text, orig, **opts):
537 def win32print(text, orig, **opts):
538 label = opts.get('label', '')
538 label = opts.get('label', '')
539 attr = origattr
539 attr = origattr
540
540
541 def mapcolor(val, attr):
541 def mapcolor(val, attr):
542 if val == -1:
542 if val == -1:
543 return origattr
543 return origattr
544 elif val in passthrough:
544 elif val in passthrough:
545 return attr | val
545 return attr | val
546 elif val > 0x0f:
546 elif val > 0x0f:
547 return (val & 0x70) | (attr & 0x8f)
547 return (val & 0x70) | (attr & 0x8f)
548 else:
548 else:
549 return (val & 0x07) | (attr & 0xf8)
549 return (val & 0x07) | (attr & 0xf8)
550
550
551 # determine console attributes based on labels
551 # determine console attributes based on labels
552 for l in label.split():
552 for l in label.split():
553 style = _styles.get(l, '')
553 style = _styles.get(l, '')
554 for effect in style.split():
554 for effect in style.split():
555 attr = mapcolor(w32effects[effect], attr)
555 attr = mapcolor(w32effects[effect], attr)
556
556
557 # hack to ensure regexp finds data
557 # hack to ensure regexp finds data
558 if not text.startswith('\033['):
558 if not text.startswith('\033['):
559 text = '\033[m' + text
559 text = '\033[m' + text
560
560
561 # Look for ANSI-like codes embedded in text
561 # Look for ANSI-like codes embedded in text
562 m = re.match(ansire, text)
562 m = re.match(ansire, text)
563
563
564 try:
564 try:
565 while m:
565 while m:
566 for sattr in m.group(1).split(';'):
566 for sattr in m.group(1).split(';'):
567 if sattr:
567 if sattr:
568 attr = mapcolor(int(sattr), attr)
568 attr = mapcolor(int(sattr), attr)
569 _kernel32.SetConsoleTextAttribute(stdout, attr)
569 _kernel32.SetConsoleTextAttribute(stdout, attr)
570 orig(m.group(2), **opts)
570 orig(m.group(2), **opts)
571 m = re.match(ansire, m.group(3))
571 m = re.match(ansire, m.group(3))
572 finally:
572 finally:
573 # Explicitly reset original attributes
573 # Explicitly reset original attributes
574 _kernel32.SetConsoleTextAttribute(stdout, origattr)
574 _kernel32.SetConsoleTextAttribute(stdout, origattr)
575
575
576 cmdtable = {
576 cmdtable = {
577 'debugcolor':
577 'debugcolor':
578 (debugcolor, [], ('hg debugcolor'))
578 (debugcolor, [], ('hg debugcolor'))
579 }
579 }
@@ -1,395 +1,395 b''
# convert.py Foreign SCM converter
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''import revisions from foreign VCS repositories into Mercurial'''

import convcmd
import cvsps
import subversion
from mercurial import commands, templatekw
from mercurial.i18n import _

testedwith = 'internal'

# Commands definition was moved elsewhere to ease demandload job.

def convert(ui, src, dest=None, revmapfile=None, **opts):
    """convert a foreign SCM repository to a Mercurial one.

    Accepted source formats [identifiers]:

    - Mercurial [hg]
    - CVS [cvs]
    - Darcs [darcs]
    - git [git]
    - Subversion [svn]
    - Monotone [mtn]
    - GNU Arch [gnuarch]
    - Bazaar [bzr]
    - Perforce [p4]

    Accepted destination formats [identifiers]:

    - Mercurial [hg]
    - Subversion [svn] (history on branches is not preserved)

    If no revision is given, all revisions will be converted.
    Otherwise, convert will only import up to the named revision
    (given in a format understood by the source).

    If no destination directory name is specified, it defaults to the
    basename of the source with ``-hg`` appended. If the destination
    repository doesn't exist, it will be created.

    By default, all sources except Mercurial will use --branchsort.
    Mercurial uses --sourcesort to preserve original revision numbers
    order. Sort modes have the following effects:

    --branchsort  convert from parent to child revision when possible,
                  which means branches are usually converted one after
                  the other. It generates more compact repositories.

    --datesort    sort revisions by date. Converted repositories have
                  good-looking changelogs but are often an order of
                  magnitude larger than the same ones generated by
                  --branchsort.

    --sourcesort  try to preserve source revisions order, only
                  supported by Mercurial sources.

    --closesort   try to move closed revisions as close as possible
                  to parent branches, only supported by Mercurial
                  sources.

    If ``REVMAP`` isn't given, it will be put in a default location
    (``<dest>/.hg/shamap`` by default). The ``REVMAP`` is a simple
    text file that maps each source commit ID to the destination ID
    for that revision, like so::

      <source ID> <destination ID>

    If the file doesn't exist, it's automatically created. It's
    updated on each commit copied, so :hg:`convert` can be interrupted
    and can be run repeatedly to copy new commits.

    The authormap is a simple text file that maps each source commit
    author to a destination commit author. It is handy for source SCMs
    that use unix logins to identify authors (e.g.: CVS). One line per
    author mapping and the line format is::

      source author = destination author

    Empty lines and lines starting with a ``#`` are ignored.

    The filemap is a file that allows filtering and remapping of files
    and directories. Each line can contain one of the following
    directives::

      include path/to/file-or-dir

      exclude path/to/file-or-dir

      rename path/to/source path/to/destination

    Comment lines start with ``#``. A specified path matches if it
    equals the full relative name of a file or one of its parent
    directories. The ``include`` or ``exclude`` directive with the
    longest matching path applies, so line order does not matter.

    The ``include`` directive causes a file, or all files under a
    directory, to be included in the destination repository. The default
    if there are no ``include`` statements is to include everything.
    If there are any ``include`` statements, nothing else is included.
    The ``exclude`` directive causes files or directories to
    be omitted. The ``rename`` directive renames a file or directory if
    it is converted. To rename from a subdirectory into the root of
    the repository, use ``.`` as the path to rename to.

    The splicemap is a file that allows insertion of synthetic
    history, letting you specify the parents of a revision. This is
    useful if you want to e.g. give a Subversion merge two parents, or
    graft two disconnected series of history together. Each entry
    contains a key, followed by a space, followed by one or two
    comma-separated values::

      key parent1, parent2

    The key is the revision ID in the source
    revision control system whose parents should be modified (same
    format as a key in .hg/shamap). The values are the revision IDs
    (in either the source or destination revision control system) that
    should be used as the new parents for that node. For example, if
    you have merged "release-1.0" into "trunk", then you should
    specify the revision on "trunk" as the first parent and the one on
    the "release-1.0" branch as the second.

    The branchmap is a file that allows you to rename a branch when it is
    being brought in from whatever external repository. When used in
    conjunction with a splicemap, it allows for a powerful combination
    to help fix even the most badly mismanaged repositories and turn them
    into nicely structured Mercurial repositories. The branchmap contains
    lines of the form::

      original_branch_name new_branch_name

    where "original_branch_name" is the name of the branch in the
    source repository, and "new_branch_name" is the name of the branch
    in the destination repository. No whitespace is allowed in the
    branch names. This can be used to (for instance) move code in one
    repository from "default" to a named branch.

    The closemap is a file that allows closing of a branch. Each entry
    contains a revision or hash separated by white space.

    The tagmap is a file that is exactly analogous to the branchmap. This will
    rename tags on the fly and prevent the 'update tags' commit usually found
    at the end of a convert process.

    Mercurial Source
    ################

    The Mercurial source recognizes the following configuration
    options, which you can set on the command line with ``--config``:

    :convert.hg.ignoreerrors: ignore integrity errors when reading.
        Use it to fix Mercurial repositories with missing revlogs, by
        converting from and to Mercurial. Default is False.

    :convert.hg.saverev: store original revision ID in changeset
        (forces target IDs to change). It takes a boolean argument and
        defaults to False.

    :convert.hg.revs: revset specifying the source revisions to convert.

    CVS Source
    ##########

    CVS source will use a sandbox (i.e. a checked-out copy) from CVS
    to indicate the starting point of what will be converted. Direct
    access to the repository files is not needed, unless of course the
    repository is ``:local:``. The conversion uses the top level
    directory in the sandbox to find the CVS repository, and then uses
    CVS rlog commands to find files to convert. This means that unless
    a filemap is given, all files under the starting directory will be
    converted, and that any directory reorganization in the CVS
    sandbox is ignored.

    The following options can be used with ``--config``:

    :convert.cvsps.cache: Set to False to disable remote log caching,
        for testing and debugging purposes. Default is True.

    :convert.cvsps.fuzz: Specify the maximum time (in seconds) that is
        allowed between commits with identical user and log message in
        a single changeset. When very large files were checked in as
        part of a changeset then the default may not be long enough.
        The default is 60.

    :convert.cvsps.mergeto: Specify a regular expression to which
        commit log messages are matched. If a match occurs, then the
        conversion process will insert a dummy revision merging the
        branch on which this log message occurs to the branch
        indicated in the regex. Default is ``{{mergetobranch
        ([-\\w]+)}}``

    :convert.cvsps.mergefrom: Specify a regular expression to which
        commit log messages are matched. If a match occurs, then the
        conversion process will add the most recent revision on the
        branch indicated in the regex as the second parent of the
        changeset. Default is ``{{mergefrombranch ([-\\w]+)}}``

    :convert.localtimezone: use local time (as determined by the TZ
        environment variable) for changeset date/times. The default
        is False (use UTC).

    :hooks.cvslog: Specify a Python function to be called at the end of
        gathering the CVS log. The function is passed a list with the
        log entries, and can modify the entries in-place, or add or
        delete them.

    :hooks.cvschangesets: Specify a Python function to be called after
        the changesets are calculated from the CVS log. The
        function is passed a list with the changeset entries, and can
        modify the changesets in-place, or add or delete them.

    An additional "debugcvsps" Mercurial command allows the builtin
    changeset merging code to be run without doing a conversion. Its
    parameters and output are similar to that of cvsps 2.1. Please see
    the command help for more details.

    Subversion Source
    #################

    Subversion source detects classical trunk/branches/tags layouts.
    By default, the supplied ``svn://repo/path/`` source URL is
    converted as a single branch. If ``svn://repo/path/trunk`` exists
    it replaces the default branch. If ``svn://repo/path/branches``
    exists, its subdirectories are listed as possible branches. If
    ``svn://repo/path/tags`` exists, it is searched for tags referencing
    converted branches. Default ``trunk``, ``branches`` and ``tags``
    values can be overridden with the following options. Set them to
    paths relative to the source URL, or leave them blank to disable
    auto detection.

    The following options can be set with ``--config``:

    :convert.svn.branches: specify the directory containing branches.
        The default is ``branches``.

    :convert.svn.tags: specify the directory containing tags. The
        default is ``tags``.

    :convert.svn.trunk: specify the name of the trunk branch. The
        default is ``trunk``.

    :convert.localtimezone: use local time (as determined by the TZ
        environment variable) for changeset date/times. The default
        is False (use UTC).

    Source history can be retrieved starting at a specific revision,
    instead of being integrally converted. Only single branch
    conversions are supported.

    :convert.svn.startrev: specify start Subversion revision number.
        The default is 0.

    Perforce Source
    ###############

    The Perforce (P4) importer can be given a p4 depot path or a
    client specification as source. It will convert all files in the
    source to a flat Mercurial repository, ignoring labels, branches
    and integrations. Note that when a depot path is given you then
    usually should specify a target directory, because otherwise the
    target may be named ``...-hg``.

    It is possible to limit the amount of source history to be
    converted by specifying an initial Perforce revision:

    :convert.p4.startrev: specify initial Perforce revision (a
        Perforce changelist number).

    Mercurial Destination
    #####################

    The following options are supported:

    :convert.hg.clonebranches: dispatch source branches in separate
        clones. The default is False.

    :convert.hg.tagsbranch: branch name for tag revisions, defaults to
        ``default``.

    :convert.hg.usebranchnames: preserve branch names. The default is
        True.
    """
    return convcmd.convert(ui, src, dest, revmapfile, **opts)

def debugsvnlog(ui, **opts):
    return subversion.debugsvnlog(ui, **opts)

def debugcvsps(ui, *args, **opts):
    '''create changeset information from CVS

    This command is intended as a debugging tool for the CVS to
    Mercurial converter, and can be used as a direct replacement for
    cvsps.

    Hg debugcvsps reads the CVS rlog for current directory (or any
    named directory) in the CVS repository, and converts the log to a
    series of changesets based on matching commit log entries and
    dates.'''
    return cvsps.debugcvsps(ui, *args, **opts)

commands.norepo += " convert debugsvnlog debugcvsps"

cmdtable = {
    "convert":
        (convert,
         [('', 'authors', '',
           _('username mapping filename (DEPRECATED, use --authormap instead)'),
           _('FILE')),
          ('s', 'source-type', '',
           _('source repository type'), _('TYPE')),
          ('d', 'dest-type', '',
           _('destination repository type'), _('TYPE')),
          ('r', 'rev', '',
           _('import up to source revision REV'), _('REV')),
          ('A', 'authormap', '',
           _('remap usernames using this file'), _('FILE')),
          ('', 'filemap', '',
           _('remap file names using contents of file'), _('FILE')),
          ('', 'splicemap', '',
           _('splice synthesized history into place'), _('FILE')),
          ('', 'branchmap', '',
           _('change branch names while converting'), _('FILE')),
          ('', 'closemap', '',
           _('closes given revs'), _('FILE')),
          ('', 'tagmap', '',
           _('change tag names while converting'), _('FILE')),
          ('', 'branchsort', None, _('try to sort changesets by branches')),
          ('', 'datesort', None, _('try to sort changesets by date')),
          ('', 'sourcesort', None, _('preserve source changesets order')),
          ('', 'closesort', None, _('try to reorder closed revisions'))],
         _('hg convert [OPTION]... SOURCE [DEST [REVMAP]]')),
    "debugsvnlog":
        (debugsvnlog,
         [],
         'hg debugsvnlog'),
    "debugcvsps":
        (debugcvsps,
         [
          # Main options shared with cvsps-2.1
          ('b', 'branches', [], _('only return changes on specified branches')),
          ('p', 'prefix', '', _('prefix to remove from file names')),
          ('r', 'revisions', [],
           _('only return changes after or between specified tags')),
          ('u', 'update-cache', None, _("update cvs log cache")),
          ('x', 'new-cache', None, _("create new cvs log cache")),
          ('z', 'fuzz', 60, _('set commit time fuzz in seconds')),
          ('', 'root', '', _('specify cvsroot')),
356 # Options specific to builtin cvsps
356 # Options specific to builtin cvsps
357 ('', 'parents', '', _('show parent changesets')),
357 ('', 'parents', '', _('show parent changesets')),
358 ('', 'ancestors', '',
358 ('', 'ancestors', '',
359 _('show current changeset in ancestor branches')),
359 _('show current changeset in ancestor branches')),
360 # Options that are ignored for compatibility with cvsps-2.1
360 # Options that are ignored for compatibility with cvsps-2.1
361 ('A', 'cvs-direct', None, _('ignored for compatibility')),
361 ('A', 'cvs-direct', None, _('ignored for compatibility')),
362 ],
362 ],
363 _('hg debugcvsps [OPTION]... [PATH]...')),
363 _('hg debugcvsps [OPTION]... [PATH]...')),
364 }
364 }
365
365
366 def kwconverted(ctx, name):
366 def kwconverted(ctx, name):
367 rev = ctx.extra().get('convert_revision', '')
367 rev = ctx.extra().get('convert_revision', '')
368 if rev.startswith('svn:'):
368 if rev.startswith('svn:'):
369 if name == 'svnrev':
369 if name == 'svnrev':
370 return str(subversion.revsplit(rev)[2])
370 return str(subversion.revsplit(rev)[2])
371 elif name == 'svnpath':
371 elif name == 'svnpath':
372 return subversion.revsplit(rev)[1]
372 return subversion.revsplit(rev)[1]
373 elif name == 'svnuuid':
373 elif name == 'svnuuid':
374 return subversion.revsplit(rev)[0]
374 return subversion.revsplit(rev)[0]
375 return rev
375 return rev
376
376
377 def kwsvnrev(repo, ctx, **args):
377 def kwsvnrev(repo, ctx, **args):
378 """:svnrev: String. Converted subversion revision number."""
378 """:svnrev: String. Converted subversion revision number."""
379 return kwconverted(ctx, 'svnrev')
379 return kwconverted(ctx, 'svnrev')
380
380
381 def kwsvnpath(repo, ctx, **args):
381 def kwsvnpath(repo, ctx, **args):
382 """:svnpath: String. Converted subversion revision project path."""
382 """:svnpath: String. Converted subversion revision project path."""
383 return kwconverted(ctx, 'svnpath')
383 return kwconverted(ctx, 'svnpath')
384
384
385 def kwsvnuuid(repo, ctx, **args):
385 def kwsvnuuid(repo, ctx, **args):
386 """:svnuuid: String. Converted subversion revision repository identifier."""
386 """:svnuuid: String. Converted subversion revision repository identifier."""
387 return kwconverted(ctx, 'svnuuid')
387 return kwconverted(ctx, 'svnuuid')
388
388
389 def extsetup(ui):
389 def extsetup(ui):
390 templatekw.keywords['svnrev'] = kwsvnrev
390 templatekw.keywords['svnrev'] = kwsvnrev
391 templatekw.keywords['svnpath'] = kwsvnpath
391 templatekw.keywords['svnpath'] = kwsvnpath
392 templatekw.keywords['svnuuid'] = kwsvnuuid
392 templatekw.keywords['svnuuid'] = kwsvnuuid
393
393
394 # tell hggettext to extract docstrings from these functions:
394 # tell hggettext to extract docstrings from these functions:
395 i18nfunctions = [kwsvnrev, kwsvnpath, kwsvnuuid]
395 i18nfunctions = [kwsvnrev, kwsvnpath, kwsvnuuid]
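The `extsetup` hook above registers the keyword functions in `templatekw.keywords`, a plain name-to-function dict. Below is a toy, self-contained sketch of that registration pattern; the decorator, dict name, and the dict-based `ctx` stand-in are invented for illustration and differ from Mercurial's real API:

```python
# Toy stand-in for the template-keyword registry used in extsetup():
# keyword functions live in a dict keyed by keyword name.
keywords = {}

def register(name):
    """Decorator that stores a keyword function under `name`."""
    def deco(func):
        keywords[name] = func
        return func
    return deco

@register('svnrev')
def kwsvnrev(ctx):
    # take everything after the last '@' of the convert_revision value,
    # e.g. 'svn:uuid/trunk@1234' -> '1234' (simplified vs. revsplit)
    return ctx.get('convert_revision', '').rpartition('@')[2]

result = keywords['svnrev']({'convert_revision': 'svn:uuid/trunk@1234'})
```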
@@ -1,659 +1,659 b''
1 # bundle2.py - generic container format to transmit arbitrary data.
1 # bundle2.py - generic container format to transmit arbitrary data.
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 """Handling of the new bundle2 format
7 """Handling of the new bundle2 format
8
8
9 The goal of bundle2 is to act as an atomic packet to transmit a set of
9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 payloads in an application agnostic way. It consists of a sequence of "parts"
10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 that will be handed to and processed by the application layer.
11 that will be handed to and processed by the application layer.
12
12
13
13
14 General format architecture
14 General format architecture
15 ===========================
15 ===========================
16
16
17 The format is structured as follows
17 The format is structured as follows
18
18
19 - magic string
19 - magic string
20 - stream level parameters
20 - stream level parameters
21 - payload parts (any number)
21 - payload parts (any number)
22 - end of stream marker.
22 - end of stream marker.
23
23
24 The binary format
24 The binary format
25 ============================
25 ============================
26
26
27 All numbers are unsigned and big endian.
27 All numbers are unsigned and big-endian.
28
28
29 stream level parameters
29 stream level parameters
30 ------------------------
30 ------------------------
31
31
32 The binary format is as follows
32 The binary format is as follows
33
33
34 :params size: (16-bit integer)
34 :params size: (16-bit integer)
35
35
36 The total number of bytes used by the parameters
36 The total number of bytes used by the parameters
37
37
38 :params value: arbitrary number of bytes
38 :params value: arbitrary number of bytes
39
39
40 A blob of `params size` containing the serialized version of all stream level
40 A blob of `params size` containing the serialized version of all stream level
41 parameters.
41 parameters.
42
42
43 The blob contains a space separated list of parameters. parameter with value
43 The blob contains a space separated list of parameters. Parameters with value
44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45
45
46 Empty names are obviously forbidden.
46 Empty names are obviously forbidden.
47
47
48 A parameter name MUST start with a letter. If this first letter is lower case, the
48 A parameter name MUST start with a letter. If this first letter is lower case, the
49 parameter is advisory and can be safefly ignored. However when the first
49 parameter is advisory and can be safely ignored. However when the first
50 letter is capital, the parameter is mandatory and the bundling process MUST
50 letter is capital, the parameter is mandatory and the bundling process MUST
51 stop if it is not able to process it.
51 stop if it is not able to process it.
52
52
53 Stream parameters use a simple textual format for two main reasons:
53 Stream parameters use a simple textual format for two main reasons:
54
54
55 - Stream level parameters should remains simple and we want to discourage any
55 - Stream level parameters should remain simple and we want to discourage any
56 crazy usage.
56 crazy usage.
57 - Textual data allow easy human inspection of a the bundle2 header in case of
57 - Textual data allow easy human inspection of a bundle2 header in case of
58 troubles.
58 troubles.
59
59
60 Any Applicative level options MUST go into a bundle2 part instead.
60 Any Applicative level options MUST go into a bundle2 part instead.
61
61
62 Payload part
62 Payload part
63 ------------------------
63 ------------------------
64
64
65 The binary format is as follows
65 The binary format is as follows
66
66
67 :header size: (16-bit integer)
67 :header size: (16-bit integer)
68
68
69 The total number of bytes used by the part headers. When the header is empty
69 The total number of bytes used by the part headers. When the header is empty
70 (size = 0) this is interpreted as the end of stream marker.
70 (size = 0) this is interpreted as the end of stream marker.
71
71
72 :header:
72 :header:
73
73
74 The header defines how to interpret the part. It contains two pieces of
74 The header defines how to interpret the part. It contains two pieces of
75 data: the part type, and the part parameters.
75 data: the part type, and the part parameters.
76
76
77 The part type is used to route to an application level handler that can
77 The part type is used to route to an application level handler that can
78 interpret the payload.
78 interpret the payload.
79
79
80 Part parameters are passed to the application level handler. They are
80 Part parameters are passed to the application level handler. They are
81 meant to convey information that will help the application level object to
81 meant to convey information that will help the application level object to
82 interpret the part payload.
82 interpret the part payload.
83
83
84 The binary format of the header is as follows
84 The binary format of the header is as follows
85
85
86 :typesize: (one byte)
86 :typesize: (one byte)
87
87
88 :typename: alphanumerical part name
88 :parttype: alphanumerical part name
89
89
90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 to this part.
91 to this part.
92
92
93 :parameters:
93 :parameters:
94
94
95 Part's parameter may have arbitraty content, the binary structure is::
95 Part's parameter may have arbitrary content, the binary structure is::
96
96
97 <mandatory-count><advisory-count><param-sizes><param-data>
97 <mandatory-count><advisory-count><param-sizes><param-data>
98
98
99 :mandatory-count: 1 byte, number of mandatory parameters
99 :mandatory-count: 1 byte, number of mandatory parameters
100
100
101 :advisory-count: 1 byte, number of advisory parameters
101 :advisory-count: 1 byte, number of advisory parameters
102
102
103 :param-sizes:
103 :param-sizes:
104
104
105 N couples of bytes, where N is the total number of parameters. Each
105 N couples of bytes, where N is the total number of parameters. Each
106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
107
107
108 :param-data:
108 :param-data:
109
109
110 A blob of bytes from which each parameter key and value can be
110 A blob of bytes from which each parameter key and value can be
111 retrieved using the list of size couples stored in the previous
111 retrieved using the list of size couples stored in the previous
112 field.
112 field.
113
113
114 Mandatory parameters come first, then the advisory ones.
114 Mandatory parameters come first, then the advisory ones.
115
115
116 :payload:
116 :payload:
117
117
118 payload is a series of `<chunksize><chunkdata>`.
118 payload is a series of `<chunksize><chunkdata>`.
119
119
120 `chunksize` is a 32-bit integer, `chunkdata` are plain bytes (as many as
120 `chunksize` is a 32-bit integer, `chunkdata` are plain bytes (as many as
121 `chunksize` says). The payload part is concluded by a zero-size chunk.
121 `chunksize` says). The payload part is concluded by a zero-size chunk.
122
122
123 The current implementation always produces either zero or one chunk.
123 The current implementation always produces either zero or one chunk.
124 This is an implementation limitation that will ultimatly be lifted.
124 This is an implementation limitation that will ultimately be lifted.
125
125
126 Bundle processing
126 Bundle processing
127 ============================
127 ============================
128
128
129 Each part is processed in order using a "part handler". Handlers are registered
129 Each part is processed in order using a "part handler". Handlers are registered
130 for a certain part type.
130 for a certain part type.
131
131
132 The matching of a part to its handler is case insensitive. The case of the
132 The matching of a part to its handler is case insensitive. The case of the
133 part type is used to know if a part is mandatory or advisory. If the Part type
133 part type is used to know if a part is mandatory or advisory. If the Part type
134 contains any uppercase char it is considered mandatory. When no handler is
134 contains any uppercase char it is considered mandatory. When no handler is
135 known for a Mandatory part, the process is aborted and an exception is raised.
135 known for a Mandatory part, the process is aborted and an exception is raised.
136 If the part is advisory and no handler is known, the part is ignored. When the
136 If the part is advisory and no handler is known, the part is ignored. When the
137 process is aborted, the full bundle is still read from the stream to keep the
137 process is aborted, the full bundle is still read from the stream to keep the
138 channel usable. But none of the parts read after an abort are processed. In the
138 channel usable. But none of the parts read after an abort are processed. In the
139 future, dropping the stream may become an option for channels we do not care to
139 future, dropping the stream may become an option for channels we do not care to
140 preserve.
140 preserve.
141 """
141 """
142
142
143 import util
143 import util
144 import struct
144 import struct
145 import urllib
145 import urllib
146 import string
146 import string
147
147
148 import changegroup
148 import changegroup
149 from i18n import _
149 from i18n import _
150
150
151 _pack = struct.pack
151 _pack = struct.pack
152 _unpack = struct.unpack
152 _unpack = struct.unpack
153
153
154 _magicstring = 'HG20'
154 _magicstring = 'HG20'
155
155
156 _fstreamparamsize = '>H'
156 _fstreamparamsize = '>H'
157 _fpartheadersize = '>H'
157 _fpartheadersize = '>H'
158 _fparttypesize = '>B'
158 _fparttypesize = '>B'
159 _fpartid = '>I'
159 _fpartid = '>I'
160 _fpayloadsize = '>I'
160 _fpayloadsize = '>I'
161 _fpartparamcount = '>BB'
161 _fpartparamcount = '>BB'
162
162
163 preferedchunksize = 4096
163 preferedchunksize = 4096
164
164
165 def _makefpartparamsizes(nbparams):
165 def _makefpartparamsizes(nbparams):
166 """return a struct format to read part parameter sizes
166 """return a struct format to read part parameter sizes
167
167
168 The number of parameters is variable, so we need to build that format
168 The number of parameters is variable, so we need to build that format
169 dynamically.
169 dynamically.
170 """
170 """
171 return '>'+('BB'*nbparams)
171 return '>'+('BB'*nbparams)
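For example, with two part parameters the generated format string is `'>BBBB'`: four unsigned bytes, one (key size, value size) couple per parameter. A standalone sketch mirroring the helper above:

```python
import struct

def makefpartparamsizes(nbparams):
    # one (<size-of-key>, <size-of-value>) unsigned-byte couple per
    # parameter, big-endian -- same shape as _makefpartparamsizes above
    return '>' + ('BB' * nbparams)

# two parameters with key/value sizes (3, 5) and (7, 0)
fmt = makefpartparamsizes(2)        # '>BBBB'
packed = struct.pack(fmt, 3, 5, 7, 0)
sizes = struct.unpack(fmt, packed)  # (3, 5, 7, 0)
```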
172
172
173 parthandlermapping = {}
173 parthandlermapping = {}
174
174
175 def parthandler(parttype):
175 def parthandler(parttype):
176 """decorator that register a function as a bundle2 part handler
176 """decorator that register a function as a bundle2 part handler
177
177
178 eg::
178 eg::
179
179
180 @parthandler('myparttype')
180 @parthandler('myparttype')
181 def myparttypehandler(...):
181 def myparttypehandler(...):
182 '''process a part of type "my part".'''
182 '''process a part of type "my part".'''
183 ...
183 ...
184 """
184 """
185 def _decorator(func):
185 def _decorator(func):
186 lparttype = parttype.lower() # enforce lower case matching.
186 lparttype = parttype.lower() # enforce lower case matching.
187 assert lparttype not in parthandlermapping
187 assert lparttype not in parthandlermapping
188 parthandlermapping[lparttype] = func
188 parthandlermapping[lparttype] = func
189 return func
189 return func
190 return _decorator
190 return _decorator
191
191
192 class unbundlerecords(object):
192 class unbundlerecords(object):
193 """keep record of what happens during and unbundle
193 """keep record of what happens during and unbundle
194
194
195 New records are added using `records.add('cat', obj)`. Where 'cat' is a
195 New records are added using `records.add('cat', obj)`. Where 'cat' is a
196 category of record and obj is an arbitraty object.
196 category of record and obj is an arbitrary object.
197
197
198 `records['cat']` will return all entries of this category 'cat'.
198 `records['cat']` will return all entries of this category 'cat'.
199
199
200 Iterating on the object itself will yield `('category', obj)` tuples
200 Iterating on the object itself will yield `('category', obj)` tuples
201 for all entries.
201 for all entries.
202
202
203 All iterations happen in chronological order.
203 All iterations happen in chronological order.
204 """
204 """
205
205
206 def __init__(self):
206 def __init__(self):
207 self._categories = {}
207 self._categories = {}
208 self._sequences = []
208 self._sequences = []
209 self._replies = {}
209 self._replies = {}
210
210
211 def add(self, category, entry, inreplyto=None):
211 def add(self, category, entry, inreplyto=None):
212 """add a new record of a given category.
212 """add a new record of a given category.
213
213
214 The entry can then be retrieved in the list returned by
214 The entry can then be retrieved in the list returned by
215 self['category']."""
215 self['category']."""
216 self._categories.setdefault(category, []).append(entry)
216 self._categories.setdefault(category, []).append(entry)
217 self._sequences.append((category, entry))
217 self._sequences.append((category, entry))
218 if inreplyto is not None:
218 if inreplyto is not None:
219 self.getreplies(inreplyto).add(category, entry)
219 self.getreplies(inreplyto).add(category, entry)
220
220
221 def getreplies(self, partid):
221 def getreplies(self, partid):
222 """get the subrecords that replies to a specific part"""
222 """get the subrecords that replies to a specific part"""
223 return self._replies.setdefault(partid, unbundlerecords())
223 return self._replies.setdefault(partid, unbundlerecords())
224
224
225 def __getitem__(self, cat):
225 def __getitem__(self, cat):
226 return tuple(self._categories.get(cat, ()))
226 return tuple(self._categories.get(cat, ()))
227
227
228 def __iter__(self):
228 def __iter__(self):
229 return iter(self._sequences)
229 return iter(self._sequences)
230
230
231 def __len__(self):
231 def __len__(self):
232 return len(self._sequences)
232 return len(self._sequences)
233
233
234 def __nonzero__(self):
234 def __nonzero__(self):
235 return bool(self._sequences)
235 return bool(self._sequences)
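A small usage sketch of the record-keeping pattern above. This is a trimmed-down stand-in class (the real `unbundlerecords` also tracks per-part replies), written for illustration only:

```python
class Records:
    """Trimmed stand-in for unbundlerecords: per-category lists plus a
    chronological (category, entry) sequence."""
    def __init__(self):
        self._categories = {}
        self._sequences = []

    def add(self, category, entry):
        # append to the category bucket and to the global chronology
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

records = Records()
records.add('changegroup', {'return': 1})
records.add('error', 'unknown part')
records.add('changegroup', {'return': 0})
```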
236
236
237 class bundleoperation(object):
237 class bundleoperation(object):
238 """an object that represents a single bundling process
238 """an object that represents a single bundling process
239
239
240 Its purpose is to carry unbundle-related objects and states.
240 Its purpose is to carry unbundle-related objects and states.
241
241
242 A new object should be created at the beginning of each bundle processing.
242 A new object should be created at the beginning of each bundle processing.
243 The object is to be returned by the processing function.
243 The object is to be returned by the processing function.
244
244
245 The object has very little content now; it will ultimately contain:
245 The object has very little content now; it will ultimately contain:
246 * an access to the repo the bundle is applied to,
246 * an access to the repo the bundle is applied to,
247 * a ui object,
247 * a ui object,
248 * a way to retrieve a transaction to add changes to the repo,
248 * a way to retrieve a transaction to add changes to the repo,
249 * a way to record the result of processing each part,
249 * a way to record the result of processing each part,
250 * a way to construct a bundle response when applicable.
250 * a way to construct a bundle response when applicable.
251 """
251 """
252
252
253 def __init__(self, repo, transactiongetter):
253 def __init__(self, repo, transactiongetter):
254 self.repo = repo
254 self.repo = repo
255 self.ui = repo.ui
255 self.ui = repo.ui
256 self.records = unbundlerecords()
256 self.records = unbundlerecords()
257 self.gettransaction = transactiongetter
257 self.gettransaction = transactiongetter
258 self.reply = None
258 self.reply = None
259
259
260 class TransactionUnavailable(RuntimeError):
260 class TransactionUnavailable(RuntimeError):
261 pass
261 pass
262
262
263 def _notransaction():
263 def _notransaction():
264 """default method to get a transaction while processing a bundle
264 """default method to get a transaction while processing a bundle
265
265
266 Raise an exception to highlight the fact that no transaction was expected
266 Raise an exception to highlight the fact that no transaction was expected
267 to be created"""
267 to be created"""
268 raise TransactionUnavailable()
268 raise TransactionUnavailable()
269
269
270 def processbundle(repo, unbundler, transactiongetter=_notransaction):
270 def processbundle(repo, unbundler, transactiongetter=_notransaction):
271 """This function process a bundle, apply effect to/from a repo
271 """This function process a bundle, apply effect to/from a repo
272
272
273 It iterates over each part then searches for and uses the proper handling
273 It iterates over each part then searches for and uses the proper handling
274 code to process the part. Parts are processed in order.
274 code to process the part. Parts are processed in order.
275
275
276 This is a very early version of this function that will be strongly reworked
276 This is a very early version of this function that will be strongly reworked
277 before final usage.
277 before final usage.
278
278
279 Unknown Mandatory parts will abort the process.
279 Unknown Mandatory parts will abort the process.
280 """
280 """
281 op = bundleoperation(repo, transactiongetter)
281 op = bundleoperation(repo, transactiongetter)
282 # todo:
282 # todo:
283 # - only create reply bundle if requested.
283 # - only create reply bundle if requested.
284 op.reply = bundle20(op.ui)
284 op.reply = bundle20(op.ui)
285 # todo:
285 # todo:
286 # - replace this with an init function soon.
286 # - replace this with an init function soon.
287 # - exception catching
287 # - exception catching
288 unbundler.params
288 unbundler.params
289 iterparts = iter(unbundler)
289 iterparts = iter(unbundler)
290 part = None
290 part = None
291 try:
291 try:
292 for part in iterparts:
292 for part in iterparts:
293 parttype = part.type
293 parttype = part.type
294 # part keys are matched in lower case
294 # part keys are matched in lower case
295 key = parttype.lower()
295 key = parttype.lower()
296 try:
296 try:
297 handler = parthandlermapping[key]
297 handler = parthandlermapping[key]
298 op.ui.debug('found a handler for part %r\n' % parttype)
298 op.ui.debug('found a handler for part %r\n' % parttype)
299 except KeyError:
299 except KeyError:
300 if key != parttype: # mandatory parts
300 if key != parttype: # mandatory parts
301 # todo:
301 # todo:
302 # - use a more precise exception
302 # - use a more precise exception
303 raise
303 raise
304 op.ui.debug('ignoring unknown advisory part %r\n' % key)
304 op.ui.debug('ignoring unknown advisory part %r\n' % key)
305 # consuming the part
305 # consuming the part
306 part.read()
306 part.read()
307 continue
307 continue
308
308
309 # handler is called outside the above try block so that we don't
309 # handler is called outside the above try block so that we don't
310 # risk catching KeyErrors from anything other than the
310 # risk catching KeyErrors from anything other than the
311 # parthandlermapping lookup (any KeyError raised by handler()
311 # parthandlermapping lookup (any KeyError raised by handler()
312 # itself represents a defect of a different variety).
312 # itself represents a defect of a different variety).
313 handler(op, part)
313 handler(op, part)
314 part.read()
314 part.read()
315 except Exception:
315 except Exception:
316 if part is not None:
316 if part is not None:
317 # consume the bundle content
317 # consume the bundle content
318 part.read()
318 part.read()
319 for part in iterparts:
319 for part in iterparts:
320 # consume the bundle content
320 # consume the bundle content
321 part.read()
321 part.read()
322 raise
322 raise
323 return op
323 return op
324
324
325 class bundle20(object):
325 class bundle20(object):
326 """represent an outgoing bundle2 container
326 """represent an outgoing bundle2 container
327
327
328 Use the `addparam` method to add stream level parameters, and `addpart` to
328 Use the `addparam` method to add stream level parameters, and `addpart` to
329 populate it. Then call `getchunks` to retrieve all the binary chunks of
329 populate it. Then call `getchunks` to retrieve all the binary chunks of
330 datathat compose the bundle2 container."""
330 data that compose the bundle2 container."""
331
331
332 def __init__(self, ui):
332 def __init__(self, ui):
333 self.ui = ui
333 self.ui = ui
334 self._params = []
334 self._params = []
335 self._parts = []
335 self._parts = []
336
336
337 def addparam(self, name, value=None):
337 def addparam(self, name, value=None):
338 """add a stream level parameter"""
338 """add a stream level parameter"""
339 if not name:
339 if not name:
340 raise ValueError('empty parameter name')
340 raise ValueError('empty parameter name')
341 if name[0] not in string.letters:
341 if name[0] not in string.letters:
342 raise ValueError('non letter first character: %r' % name)
342 raise ValueError('non letter first character: %r' % name)
343 self._params.append((name, value))
343 self._params.append((name, value))
344
344
345 def addpart(self, part):
345 def addpart(self, part):
346 """add a new part to the bundle2 container
346 """add a new part to the bundle2 container
347
347
348 Parts contains the actuall applicative payload."""
348 Parts contains the actual applicative payload."""
349 assert part.id is None
349 assert part.id is None
350 part.id = len(self._parts) # very cheap counter
350 part.id = len(self._parts) # very cheap counter
351 self._parts.append(part)
351 self._parts.append(part)
352
352
353 def getchunks(self):
353 def getchunks(self):
354 self.ui.debug('start emission of %s stream\n' % _magicstring)
354 self.ui.debug('start emission of %s stream\n' % _magicstring)
355 yield _magicstring
355 yield _magicstring
356 param = self._paramchunk()
356 param = self._paramchunk()
357 self.ui.debug('bundle parameter: %s\n' % param)
357 self.ui.debug('bundle parameter: %s\n' % param)
358 yield _pack(_fstreamparamsize, len(param))
358 yield _pack(_fstreamparamsize, len(param))
359 if param:
359 if param:
360 yield param
360 yield param
361
361
362 self.ui.debug('start of parts\n')
362 self.ui.debug('start of parts\n')
363 for part in self._parts:
363 for part in self._parts:
364 self.ui.debug('bundle part: "%s"\n' % part.type)
364 self.ui.debug('bundle part: "%s"\n' % part.type)
365 for chunk in part.getchunks():
365 for chunk in part.getchunks():
366 yield chunk
366 yield chunk
367 self.ui.debug('end of bundle\n')
367 self.ui.debug('end of bundle\n')
368 yield '\0\0'
368 yield '\0\0'
369
369
370 def _paramchunk(self):
370 def _paramchunk(self):
371 """return a encoded version of all stream parameters"""
371 """return a encoded version of all stream parameters"""
372 blocks = []
372 blocks = []
373 for par, value in self._params:
373 for par, value in self._params:
374 par = urllib.quote(par)
374 par = urllib.quote(par)
375 if value is not None:
375 if value is not None:
376 value = urllib.quote(value)
376 value = urllib.quote(value)
377 par = '%s=%s' % (par, value)
377 par = '%s=%s' % (par, value)
378 blocks.append(par)
378 blocks.append(par)
379 return ' '.join(blocks)
379 return ' '.join(blocks)
380
380
381 class unpackermixin(object):
381 class unpackermixin(object):
382 """A mixin to extract bytes and struct data from a stream"""
382 """A mixin to extract bytes and struct data from a stream"""
383
383
384 def __init__(self, fp):
384 def __init__(self, fp):
385 self._fp = fp
385 self._fp = fp
386
386
387 def _unpack(self, format):
387 def _unpack(self, format):
388 """unpack this struct format from the stream"""
388 """unpack this struct format from the stream"""
389 data = self._readexact(struct.calcsize(format))
389 data = self._readexact(struct.calcsize(format))
390 return _unpack(format, data)
390 return _unpack(format, data)
391
391
392 def _readexact(self, size):
392 def _readexact(self, size):
393 """read exactly <size> bytes from the stream"""
393 """read exactly <size> bytes from the stream"""
394 return changegroup.readexactly(self._fp, size)
394 return changegroup.readexactly(self._fp, size)
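The payload framing described in the module docstring (`<chunksize><chunkdata>` repeated, terminated by a zero-size chunk) can be consumed with a loop like the one below. This is a hypothetical standalone sketch built on the same `_readexact`-style exact reads, not the actual part-reading implementation:

```python
import io
import struct

def iter_payload_chunks(fp):
    """Yield payload chunks framed as <32-bit big-endian size><data>,
    stopping at the zero-size terminator chunk."""
    while True:
        (size,) = struct.unpack('>I', fp.read(4))
        if size == 0:
            return
        yield fp.read(size)

# build a framed payload: 'hello', 'abc', then the zero-size terminator
framed = io.BytesIO(
    struct.pack('>I', 5) + b'hello' +
    struct.pack('>I', 3) + b'abc' +
    struct.pack('>I', 0))
chunks = list(iter_payload_chunks(framed))
```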
395
395
396
396
397 class unbundle20(unpackermixin):
397 class unbundle20(unpackermixin):
398 """interpret a bundle2 stream
398 """interpret a bundle2 stream
399
399
400 (this will eventually yield parts)"""
400 (this will eventually yield parts)"""
401
401
402 def __init__(self, ui, fp):
402 def __init__(self, ui, fp):
403 self.ui = ui
403 self.ui = ui
404 super(unbundle20, self).__init__(fp)
404 super(unbundle20, self).__init__(fp)
405 header = self._readexact(4)
405 header = self._readexact(4)
406 magic, version = header[0:2], header[2:4]
406 magic, version = header[0:2], header[2:4]
407 if magic != 'HG':
407 if magic != 'HG':
408 raise util.Abort(_('not a Mercurial bundle'))
408 raise util.Abort(_('not a Mercurial bundle'))
409 if version != '20':
409 if version != '20':
410 raise util.Abort(_('unknown bundle version %s') % version)
410 raise util.Abort(_('unknown bundle version %s') % version)
411 self.ui.debug('start processing of %s stream\n' % header)
411 self.ui.debug('start processing of %s stream\n' % header)
412
412
413 @util.propertycache
413 @util.propertycache
414 def params(self):
414 def params(self):
415 """dictionnary of stream level parameters"""
415 """dictionary of stream level parameters"""
416 self.ui.debug('reading bundle2 stream parameters\n')
416 self.ui.debug('reading bundle2 stream parameters\n')
417 params = {}
417 params = {}
418 paramssize = self._unpack(_fstreamparamsize)[0]
418 paramssize = self._unpack(_fstreamparamsize)[0]
419 if paramssize:
419 if paramssize:
420 for p in self._readexact(paramssize).split(' '):
420 for p in self._readexact(paramssize).split(' '):
421 p = p.split('=', 1)
421 p = p.split('=', 1)
422 p = [urllib.unquote(i) for i in p]
422 p = [urllib.unquote(i) for i in p]
423 if len(p) < 2:
423 if len(p) < 2:
424 p.append(None)
424 p.append(None)
425 self._processparam(*p)
425 self._processparam(*p)
426 params[p[0]] = p[1]
426 params[p[0]] = p[1]
427 return params
427 return params
428
428
429 def _processparam(self, name, value):
429 def _processparam(self, name, value):
430 """process a parameter, applying its effect if needed
430 """process a parameter, applying its effect if needed
431
431
432 Parameters starting with a lower case letter are advisory and will be
432 Parameters starting with a lower case letter are advisory and will be
433 ignored when unknown. Those starting with an upper case letter are
433 ignored when unknown. Those starting with an upper case letter are
434 mandatory; this function will raise a KeyError when they are unknown.
434 mandatory; this function will raise a KeyError when they are unknown.
435
435
436 Note: no options are currently supported. Any input will be either
436 Note: no options are currently supported. Any input will be either
437 ignored or will fail.
437 ignored or will fail.
438 """
438 """
439 if not name:
439 if not name:
440 raise ValueError('empty parameter name')
440 raise ValueError('empty parameter name')
441 if name[0] not in string.letters:
441 if name[0] not in string.letters:
442 raise ValueError('non letter first character: %r' % name)
442 raise ValueError('non letter first character: %r' % name)
443 # Some logic will later be added here to try to process the option
443 # Some logic will later be added here to try to process the option
444 # against a dict of known parameters.
444 # against a dict of known parameters.
445 if name[0].islower():
445 if name[0].islower():
446 self.ui.debug("ignoring unknown parameter %r\n" % name)
446 self.ui.debug("ignoring unknown parameter %r\n" % name)
447 else:
447 else:
448 raise KeyError(name)
448 raise KeyError(name)
449
449
450
450
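The `params` property and `_processparam` above imply a simple wire encoding for stream-level parameters: urlquoted `name=value` pairs joined by spaces, with a bare name meaning a value of None, and the case of the first letter deciding advisory vs. mandatory. A round-trip sketch of that encoding (the helper names here are illustrative, not part of the file):

```python
import urllib.parse

def encodeparams(params):
    # params is a sequence of (name, value) pairs; value may be None.
    # Names starting lower-case are advisory, upper-case mandatory,
    # as _processparam above interprets them.
    chunks = []
    for name, value in params:
        chunk = urllib.parse.quote(name)
        if value is not None:
            chunk += '=' + urllib.parse.quote(value)
        chunks.append(chunk)
    return ' '.join(chunks)

def decodeparams(blob):
    # mirror the parsing loop in the params property above
    params = {}
    for p in blob.split(' '):
        parts = p.split('=', 1)
        parts = [urllib.parse.unquote(i) for i in parts]
        if len(parts) < 2:
            parts.append(None)
        params[parts[0]] = parts[1]
    return params
```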
451 def __iter__(self):
451 def __iter__(self):
452 """yield all parts contained in the stream"""
452 """yield all parts contained in the stream"""
453 # make sure params have been loaded
453 # make sure params have been loaded
454 self.params
454 self.params
455 self.ui.debug('start extraction of bundle2 parts\n')
455 self.ui.debug('start extraction of bundle2 parts\n')
456 headerblock = self._readpartheader()
456 headerblock = self._readpartheader()
457 while headerblock is not None:
457 while headerblock is not None:
458 part = unbundlepart(self.ui, headerblock, self._fp)
458 part = unbundlepart(self.ui, headerblock, self._fp)
459 yield part
459 yield part
460 headerblock = self._readpartheader()
460 headerblock = self._readpartheader()
461 self.ui.debug('end of bundle2 stream\n')
461 self.ui.debug('end of bundle2 stream\n')
462
462
463 def _readpartheader(self):
463 def _readpartheader(self):
464 """reads a part header size and return the bytes blob
464 """reads a part header size and return the bytes blob
465
465
466 returns None if empty"""
466 returns None if empty"""
467 headersize = self._unpack(_fpartheadersize)[0]
467 headersize = self._unpack(_fpartheadersize)[0]
468 self.ui.debug('part header size: %i\n' % headersize)
468 self.ui.debug('part header size: %i\n' % headersize)
469 if headersize:
469 if headersize:
470 return self._readexact(headersize)
470 return self._readexact(headersize)
471 return None
471 return None
472
472
473
473
474 class bundlepart(object):
474 class bundlepart(object):
475 """A bundle2 part contains application level payload
475 """A bundle2 part contains application level payload
476
476
477 The part `type` is used to route the part to the application level
477 The part `type` is used to route the part to the application level
478 handler.
478 handler.
479 """
479 """
480
480
481 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
481 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
482 data=''):
482 data=''):
483 self.id = None
483 self.id = None
484 self.type = parttype
484 self.type = parttype
485 self.data = data
485 self.data = data
486 self.mandatoryparams = mandatoryparams
486 self.mandatoryparams = mandatoryparams
487 self.advisoryparams = advisoryparams
487 self.advisoryparams = advisoryparams
488
488
489 def getchunks(self):
489 def getchunks(self):
490 #### header
490 #### header
491 ## parttype
491 ## parttype
492 header = [_pack(_fparttypesize, len(self.type)),
492 header = [_pack(_fparttypesize, len(self.type)),
493 self.type, _pack(_fpartid, self.id),
493 self.type, _pack(_fpartid, self.id),
494 ]
494 ]
495 ## parameters
495 ## parameters
496 # count
496 # count
497 manpar = self.mandatoryparams
497 manpar = self.mandatoryparams
498 advpar = self.advisoryparams
498 advpar = self.advisoryparams
499 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
499 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
500 # size
500 # size
501 parsizes = []
501 parsizes = []
502 for key, value in manpar:
502 for key, value in manpar:
503 parsizes.append(len(key))
503 parsizes.append(len(key))
504 parsizes.append(len(value))
504 parsizes.append(len(value))
505 for key, value in advpar:
505 for key, value in advpar:
506 parsizes.append(len(key))
506 parsizes.append(len(key))
507 parsizes.append(len(value))
507 parsizes.append(len(value))
508 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
508 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
509 header.append(paramsizes)
509 header.append(paramsizes)
510 # key, value
510 # key, value
511 for key, value in manpar:
511 for key, value in manpar:
512 header.append(key)
512 header.append(key)
513 header.append(value)
513 header.append(value)
514 for key, value in advpar:
514 for key, value in advpar:
515 header.append(key)
515 header.append(key)
516 header.append(value)
516 header.append(value)
517 ## finalize header
517 ## finalize header
518 headerchunk = ''.join(header)
518 headerchunk = ''.join(header)
519 yield _pack(_fpartheadersize, len(headerchunk))
519 yield _pack(_fpartheadersize, len(headerchunk))
520 yield headerchunk
520 yield headerchunk
521 ## payload
521 ## payload
522 for chunk in self._payloadchunks():
522 for chunk in self._payloadchunks():
523 yield _pack(_fpayloadsize, len(chunk))
523 yield _pack(_fpayloadsize, len(chunk))
524 yield chunk
524 yield chunk
525 # end of payload
525 # end of payload
526 yield _pack(_fpayloadsize, 0)
526 yield _pack(_fpayloadsize, 0)
527
527
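`getchunks` frames a part as a size-prefixed header block followed by size-prefixed payload chunks and a zero-size terminator. A condensed sketch of that framing, assuming the `'>H'`/`'>I'` struct formats suggested by the `_fpartheadersize` and `_fpayloadsize` names (not shown in this hunk):

```python
import struct

_fpartheadersize = '>H'  # assumed format for the header-size prefix
_fpayloadsize = '>I'     # assumed format for each payload-size prefix

def framepart(headerchunk, payload, chunksize=4):
    # size-prefixed header, then size-prefixed payload chunks,
    # then a zero-size chunk marking end of payload
    out = [struct.pack(_fpartheadersize, len(headerchunk)), headerchunk]
    for i in range(0, len(payload), chunksize):
        chunk = payload[i:i + chunksize]
        out.append(struct.pack(_fpayloadsize, len(chunk)))
        out.append(chunk)
    out.append(struct.pack(_fpayloadsize, 0))
    return b''.join(out)
```

The zero-size terminator is what lets a reader consume the payload without knowing its total length in advance.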
528 def _payloadchunks(self):
528 def _payloadchunks(self):
529 """yield chunks of a the part payload
529 """yield chunks of a the part payload
530
530
531 Exists to handle the different methods to provide data to a part."""
531 Exists to handle the different methods to provide data to a part."""
532 # we only support fixed size data now.
532 # we only support fixed size data now.
533 # This will be improved in the future.
533 # This will be improved in the future.
534 if util.safehasattr(self.data, 'next'):
534 if util.safehasattr(self.data, 'next'):
535 buff = util.chunkbuffer(self.data)
535 buff = util.chunkbuffer(self.data)
536 chunk = buff.read(preferedchunksize)
536 chunk = buff.read(preferedchunksize)
537 while chunk:
537 while chunk:
538 yield chunk
538 yield chunk
539 chunk = buff.read(preferedchunksize)
539 chunk = buff.read(preferedchunksize)
540 elif len(self.data):
540 elif len(self.data):
541 yield self.data
541 yield self.data
542
542
543 class unbundlepart(unpackermixin):
543 class unbundlepart(unpackermixin):
544 """a bundle part read from a bundle"""
544 """a bundle part read from a bundle"""
545
545
546 def __init__(self, ui, header, fp):
546 def __init__(self, ui, header, fp):
547 super(unbundlepart, self).__init__(fp)
547 super(unbundlepart, self).__init__(fp)
548 self.ui = ui
548 self.ui = ui
549 # unbundle state attr
549 # unbundle state attr
550 self._headerdata = header
550 self._headerdata = header
551 self._headeroffset = 0
551 self._headeroffset = 0
552 self._initialized = False
552 self._initialized = False
553 self.consumed = False
553 self.consumed = False
554 # part data
554 # part data
555 self.id = None
555 self.id = None
556 self.type = None
556 self.type = None
557 self.mandatoryparams = None
557 self.mandatoryparams = None
558 self.advisoryparams = None
558 self.advisoryparams = None
559 self._payloadstream = None
559 self._payloadstream = None
560 self._readheader()
560 self._readheader()
561
561
562 def _fromheader(self, size):
562 def _fromheader(self, size):
563 """return the next <size> byte from the header"""
563 """return the next <size> byte from the header"""
564 offset = self._headeroffset
564 offset = self._headeroffset
565 data = self._headerdata[offset:(offset + size)]
565 data = self._headerdata[offset:(offset + size)]
566 self._headeroffset = offset + size
566 self._headeroffset = offset + size
567 return data
567 return data
568
568
569 def _unpackheader(self, format):
569 def _unpackheader(self, format):
570 """read given format from header
570 """read given format from header
571
571
572 This automatically computes the size of the format to read."""
572 This automatically computes the size of the format to read."""
573 data = self._fromheader(struct.calcsize(format))
573 data = self._fromheader(struct.calcsize(format))
574 return _unpack(format, data)
574 return _unpack(format, data)
575
575
576 def _readheader(self):
576 def _readheader(self):
577 """read the header and setup the object"""
577 """read the header and setup the object"""
578 typesize = self._unpackheader(_fparttypesize)[0]
578 typesize = self._unpackheader(_fparttypesize)[0]
579 self.type = self._fromheader(typesize)
579 self.type = self._fromheader(typesize)
580 self.ui.debug('part type: "%s"\n' % self.type)
580 self.ui.debug('part type: "%s"\n' % self.type)
581 self.id = self._unpackheader(_fpartid)[0]
581 self.id = self._unpackheader(_fpartid)[0]
582 self.ui.debug('part id: "%s"\n' % self.id)
582 self.ui.debug('part id: "%s"\n' % self.id)
583 ## reading parameters
583 ## reading parameters
584 # param count
584 # param count
585 mancount, advcount = self._unpackheader(_fpartparamcount)
585 mancount, advcount = self._unpackheader(_fpartparamcount)
586 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
586 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
587 # param size
587 # param size
588 fparamsizes = _makefpartparamsizes(mancount + advcount)
588 fparamsizes = _makefpartparamsizes(mancount + advcount)
589 paramsizes = self._unpackheader(fparamsizes)
589 paramsizes = self._unpackheader(fparamsizes)
590 # make it a list of couples again
590 # make it a list of couples again
591 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
591 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
592 # split mandatory from advisory
592 # split mandatory from advisory
593 mansizes = paramsizes[:mancount]
593 mansizes = paramsizes[:mancount]
594 advsizes = paramsizes[mancount:]
594 advsizes = paramsizes[mancount:]
595 # retrieve param values
595 # retrieve param values
596 manparams = []
596 manparams = []
597 for key, value in mansizes:
597 for key, value in mansizes:
598 manparams.append((self._fromheader(key), self._fromheader(value)))
598 manparams.append((self._fromheader(key), self._fromheader(value)))
599 advparams = []
599 advparams = []
600 for key, value in advsizes:
600 for key, value in advsizes:
601 advparams.append((self._fromheader(key), self._fromheader(value)))
601 advparams.append((self._fromheader(key), self._fromheader(value)))
602 self.mandatoryparams = manparams
602 self.mandatoryparams = manparams
603 self.advisoryparams = advparams
603 self.advisoryparams = advparams
604 ## part payload
604 ## part payload
605 def payloadchunks():
605 def payloadchunks():
606 payloadsize = self._unpack(_fpayloadsize)[0]
606 payloadsize = self._unpack(_fpayloadsize)[0]
607 self.ui.debug('payload chunk size: %i\n' % payloadsize)
607 self.ui.debug('payload chunk size: %i\n' % payloadsize)
608 while payloadsize:
608 while payloadsize:
609 yield self._readexact(payloadsize)
609 yield self._readexact(payloadsize)
610 payloadsize = self._unpack(_fpayloadsize)[0]
610 payloadsize = self._unpack(_fpayloadsize)[0]
611 self.ui.debug('payload chunk size: %i\n' % payloadsize)
611 self.ui.debug('payload chunk size: %i\n' % payloadsize)
612 self._payloadstream = util.chunkbuffer(payloadchunks())
612 self._payloadstream = util.chunkbuffer(payloadchunks())
613 # the header data has been fully read; record it
613 # the header data has been fully read; record it
614 self._initialized = True
614 self._initialized = True
615
615
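The nested `payloadchunks()` generator above is the reading counterpart of the zero-terminated framing: read a size, read that many bytes, stop at a zero size. A standalone sketch, again assuming `'>I'` for the size prefix:

```python
import io
import struct

_fpayloadsize = '>I'  # assumed format, matching the name used above

def payloadchunks(fp):
    # read a size prefix, then that many bytes, until a zero-size
    # prefix terminates the payload
    size = struct.unpack(_fpayloadsize, fp.read(4))[0]
    while size:
        yield fp.read(size)
        size = struct.unpack(_fpayloadsize, fp.read(4))[0]
```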
616 def read(self, size=None):
616 def read(self, size=None):
617 """read payload data"""
617 """read payload data"""
618 if not self._initialized:
618 if not self._initialized:
619 self._readheader()
619 self._readheader()
620 if size is None:
620 if size is None:
621 data = self._payloadstream.read()
621 data = self._payloadstream.read()
622 else:
622 else:
623 data = self._payloadstream.read(size)
623 data = self._payloadstream.read(size)
624 if size is None or len(data) < size:
624 if size is None or len(data) < size:
625 self.consumed = True
625 self.consumed = True
626 return data
626 return data
627
627
628
628
629 @parthandler('changegroup')
629 @parthandler('changegroup')
630 def handlechangegroup(op, inpart):
630 def handlechangegroup(op, inpart):
631 """apply a changegroup part on the repo
631 """apply a changegroup part on the repo
632
632
633 This is a very early implementation that will see massive rework before
633 This is a very early implementation that will see massive rework before
634 being inflicted on any end-user.
634 being inflicted on any end-user.
635 """
635 """
636 # Make sure we trigger a transaction creation
636 # Make sure we trigger a transaction creation
637 #
637 #
638 # The addchangegroup function will get a transaction object by itself, but
638 # The addchangegroup function will get a transaction object by itself, but
639 # we need to make sure we trigger the creation of a transaction object used
639 # we need to make sure we trigger the creation of a transaction object used
640 # for the whole processing scope.
640 # for the whole processing scope.
641 op.gettransaction()
641 op.gettransaction()
642 cg = changegroup.readbundle(inpart, 'bundle2part')
642 cg = changegroup.readbundle(inpart, 'bundle2part')
643 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
643 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
644 op.records.add('changegroup', {'return': ret})
644 op.records.add('changegroup', {'return': ret})
645 if op.reply is not None:
645 if op.reply is not None:
646 # This is definitely not the final form of this
646 # This is definitely not the final form of this
647 # return. But one needs to start somewhere.
647 # return. But one needs to start somewhere.
648 part = bundlepart('reply:changegroup', (),
648 part = bundlepart('reply:changegroup', (),
649 [('in-reply-to', str(inpart.id)),
649 [('in-reply-to', str(inpart.id)),
650 ('return', '%i' % ret)])
650 ('return', '%i' % ret)])
651 op.reply.addpart(part)
651 op.reply.addpart(part)
652 assert not inpart.read()
652 assert not inpart.read()
653
653
654 @parthandler('reply:changegroup')
654 @parthandler('reply:changegroup')
655 def handlereplychangegroup(op, inpart):
655 def handlereplychangegroup(op, inpart):
656 p = dict(inpart.advisoryparams)
656 p = dict(inpart.advisoryparams)
657 ret = int(p['return'])
657 ret = int(p['return'])
658 op.records.add('changegroup', {'return': ret}, int(p['in-reply-to']))
658 op.records.add('changegroup', {'return': ret}, int(p['in-reply-to']))
659
659
@@ -1,2361 +1,2361 b''
1 # cmdutil.py - help for command processing in mercurial
1 # cmdutil.py - help for command processing in mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from node import hex, nullid, nullrev, short
8 from node import hex, nullid, nullrev, short
9 from i18n import _
9 from i18n import _
10 import os, sys, errno, re, tempfile
10 import os, sys, errno, re, tempfile
11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
12 import match as matchmod
12 import match as matchmod
13 import context, repair, graphmod, revset, phases, obsolete, pathutil
13 import context, repair, graphmod, revset, phases, obsolete, pathutil
14 import changelog
14 import changelog
15 import bookmarks
15 import bookmarks
16 import lock as lockmod
16 import lock as lockmod
17
17
18 def parsealiases(cmd):
18 def parsealiases(cmd):
19 return cmd.lstrip("^").split("|")
19 return cmd.lstrip("^").split("|")
20
20
21 def findpossible(cmd, table, strict=False):
21 def findpossible(cmd, table, strict=False):
22 """
22 """
23 Return cmd -> (aliases, command table entry)
23 Return cmd -> (aliases, command table entry)
24 for each matching command.
24 for each matching command.
25 Return debug commands (or their aliases) only if no normal command matches.
25 Return debug commands (or their aliases) only if no normal command matches.
26 """
26 """
27 choice = {}
27 choice = {}
28 debugchoice = {}
28 debugchoice = {}
29
29
30 if cmd in table:
30 if cmd in table:
31 # short-circuit exact matches, "log" alias beats "^log|history"
31 # short-circuit exact matches, "log" alias beats "^log|history"
32 keys = [cmd]
32 keys = [cmd]
33 else:
33 else:
34 keys = table.keys()
34 keys = table.keys()
35
35
36 for e in keys:
36 for e in keys:
37 aliases = parsealiases(e)
37 aliases = parsealiases(e)
38 found = None
38 found = None
39 if cmd in aliases:
39 if cmd in aliases:
40 found = cmd
40 found = cmd
41 elif not strict:
41 elif not strict:
42 for a in aliases:
42 for a in aliases:
43 if a.startswith(cmd):
43 if a.startswith(cmd):
44 found = a
44 found = a
45 break
45 break
46 if found is not None:
46 if found is not None:
47 if aliases[0].startswith("debug") or found.startswith("debug"):
47 if aliases[0].startswith("debug") or found.startswith("debug"):
48 debugchoice[found] = (aliases, table[e])
48 debugchoice[found] = (aliases, table[e])
49 else:
49 else:
50 choice[found] = (aliases, table[e])
50 choice[found] = (aliases, table[e])
51
51
52 if not choice and debugchoice:
52 if not choice and debugchoice:
53 choice = debugchoice
53 choice = debugchoice
54
54
55 return choice
55 return choice
56
56
57 def findcmd(cmd, table, strict=True):
57 def findcmd(cmd, table, strict=True):
58 """Return (aliases, command table entry) for command string."""
58 """Return (aliases, command table entry) for command string."""
59 choice = findpossible(cmd, table, strict)
59 choice = findpossible(cmd, table, strict)
60
60
61 if cmd in choice:
61 if cmd in choice:
62 return choice[cmd]
62 return choice[cmd]
63
63
64 if len(choice) > 1:
64 if len(choice) > 1:
65 clist = choice.keys()
65 clist = choice.keys()
66 clist.sort()
66 clist.sort()
67 raise error.AmbiguousCommand(cmd, clist)
67 raise error.AmbiguousCommand(cmd, clist)
68
68
69 if choice:
69 if choice:
70 return choice.values()[0]
70 return choice.values()[0]
71
71
72 raise error.UnknownCommand(cmd)
72 raise error.UnknownCommand(cmd)
73
73
74 def findrepo(p):
74 def findrepo(p):
75 while not os.path.isdir(os.path.join(p, ".hg")):
75 while not os.path.isdir(os.path.join(p, ".hg")):
76 oldp, p = p, os.path.dirname(p)
76 oldp, p = p, os.path.dirname(p)
77 if p == oldp:
77 if p == oldp:
78 return None
78 return None
79
79
80 return p
80 return p
81
81
82 def bailifchanged(repo):
82 def bailifchanged(repo):
83 if repo.dirstate.p2() != nullid:
83 if repo.dirstate.p2() != nullid:
84 raise util.Abort(_('outstanding uncommitted merge'))
84 raise util.Abort(_('outstanding uncommitted merge'))
85 modified, added, removed, deleted = repo.status()[:4]
85 modified, added, removed, deleted = repo.status()[:4]
86 if modified or added or removed or deleted:
86 if modified or added or removed or deleted:
87 raise util.Abort(_('uncommitted changes'))
87 raise util.Abort(_('uncommitted changes'))
88 ctx = repo[None]
88 ctx = repo[None]
89 for s in sorted(ctx.substate):
89 for s in sorted(ctx.substate):
90 if ctx.sub(s).dirty():
90 if ctx.sub(s).dirty():
91 raise util.Abort(_("uncommitted changes in subrepo %s") % s)
91 raise util.Abort(_("uncommitted changes in subrepo %s") % s)
92
92
93 def logmessage(ui, opts):
93 def logmessage(ui, opts):
94 """ get the log message according to -m and -l option """
94 """ get the log message according to -m and -l option """
95 message = opts.get('message')
95 message = opts.get('message')
96 logfile = opts.get('logfile')
96 logfile = opts.get('logfile')
97
97
98 if message and logfile:
98 if message and logfile:
99 raise util.Abort(_('options --message and --logfile are mutually '
99 raise util.Abort(_('options --message and --logfile are mutually '
100 'exclusive'))
100 'exclusive'))
101 if not message and logfile:
101 if not message and logfile:
102 try:
102 try:
103 if logfile == '-':
103 if logfile == '-':
104 message = ui.fin.read()
104 message = ui.fin.read()
105 else:
105 else:
106 message = '\n'.join(util.readfile(logfile).splitlines())
106 message = '\n'.join(util.readfile(logfile).splitlines())
107 except IOError, inst:
107 except IOError, inst:
108 raise util.Abort(_("can't read commit message '%s': %s") %
108 raise util.Abort(_("can't read commit message '%s': %s") %
109 (logfile, inst.strerror))
109 (logfile, inst.strerror))
110 return message
110 return message
111
111
112 def loglimit(opts):
112 def loglimit(opts):
113 """get the log limit according to option -l/--limit"""
113 """get the log limit according to option -l/--limit"""
114 limit = opts.get('limit')
114 limit = opts.get('limit')
115 if limit:
115 if limit:
116 try:
116 try:
117 limit = int(limit)
117 limit = int(limit)
118 except ValueError:
118 except ValueError:
119 raise util.Abort(_('limit must be a positive integer'))
119 raise util.Abort(_('limit must be a positive integer'))
120 if limit <= 0:
120 if limit <= 0:
121 raise util.Abort(_('limit must be positive'))
121 raise util.Abort(_('limit must be positive'))
122 else:
122 else:
123 limit = None
123 limit = None
124 return limit
124 return limit
125
125
126 def makefilename(repo, pat, node, desc=None,
126 def makefilename(repo, pat, node, desc=None,
127 total=None, seqno=None, revwidth=None, pathname=None):
127 total=None, seqno=None, revwidth=None, pathname=None):
128 node_expander = {
128 node_expander = {
129 'H': lambda: hex(node),
129 'H': lambda: hex(node),
130 'R': lambda: str(repo.changelog.rev(node)),
130 'R': lambda: str(repo.changelog.rev(node)),
131 'h': lambda: short(node),
131 'h': lambda: short(node),
132 'm': lambda: re.sub('[^\w]', '_', str(desc))
132 'm': lambda: re.sub('[^\w]', '_', str(desc))
133 }
133 }
134 expander = {
134 expander = {
135 '%': lambda: '%',
135 '%': lambda: '%',
136 'b': lambda: os.path.basename(repo.root),
136 'b': lambda: os.path.basename(repo.root),
137 }
137 }
138
138
139 try:
139 try:
140 if node:
140 if node:
141 expander.update(node_expander)
141 expander.update(node_expander)
142 if node:
142 if node:
143 expander['r'] = (lambda:
143 expander['r'] = (lambda:
144 str(repo.changelog.rev(node)).zfill(revwidth or 0))
144 str(repo.changelog.rev(node)).zfill(revwidth or 0))
145 if total is not None:
145 if total is not None:
146 expander['N'] = lambda: str(total)
146 expander['N'] = lambda: str(total)
147 if seqno is not None:
147 if seqno is not None:
148 expander['n'] = lambda: str(seqno)
148 expander['n'] = lambda: str(seqno)
149 if total is not None and seqno is not None:
149 if total is not None and seqno is not None:
150 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
150 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
151 if pathname is not None:
151 if pathname is not None:
152 expander['s'] = lambda: os.path.basename(pathname)
152 expander['s'] = lambda: os.path.basename(pathname)
153 expander['d'] = lambda: os.path.dirname(pathname) or '.'
153 expander['d'] = lambda: os.path.dirname(pathname) or '.'
154 expander['p'] = lambda: pathname
154 expander['p'] = lambda: pathname
155
155
156 newname = []
156 newname = []
157 patlen = len(pat)
157 patlen = len(pat)
158 i = 0
158 i = 0
159 while i < patlen:
159 while i < patlen:
160 c = pat[i]
160 c = pat[i]
161 if c == '%':
161 if c == '%':
162 i += 1
162 i += 1
163 c = pat[i]
163 c = pat[i]
164 c = expander[c]()
164 c = expander[c]()
165 newname.append(c)
165 newname.append(c)
166 i += 1
166 i += 1
167 return ''.join(newname)
167 return ''.join(newname)
168 except KeyError, inst:
168 except KeyError, inst:
169 raise util.Abort(_("invalid format spec '%%%s' in output filename") %
169 raise util.Abort(_("invalid format spec '%%%s' in output filename") %
170 inst.args[0])
170 inst.args[0])
171
171
172 def makefileobj(repo, pat, node=None, desc=None, total=None,
172 def makefileobj(repo, pat, node=None, desc=None, total=None,
173 seqno=None, revwidth=None, mode='wb', modemap=None,
173 seqno=None, revwidth=None, mode='wb', modemap=None,
174 pathname=None):
174 pathname=None):
175
175
176 writable = mode not in ('r', 'rb')
176 writable = mode not in ('r', 'rb')
177
177
178 if not pat or pat == '-':
178 if not pat or pat == '-':
179 fp = writable and repo.ui.fout or repo.ui.fin
179 fp = writable and repo.ui.fout or repo.ui.fin
180 if util.safehasattr(fp, 'fileno'):
180 if util.safehasattr(fp, 'fileno'):
181 return os.fdopen(os.dup(fp.fileno()), mode)
181 return os.fdopen(os.dup(fp.fileno()), mode)
182 else:
182 else:
183 # if this fp can't be duped properly, return
183 # if this fp can't be duped properly, return
184 # a dummy object that can be closed
184 # a dummy object that can be closed
185 class wrappedfileobj(object):
185 class wrappedfileobj(object):
186 noop = lambda x: None
186 noop = lambda x: None
187 def __init__(self, f):
187 def __init__(self, f):
188 self.f = f
188 self.f = f
189 def __getattr__(self, attr):
189 def __getattr__(self, attr):
190 if attr == 'close':
190 if attr == 'close':
191 return self.noop
191 return self.noop
192 else:
192 else:
193 return getattr(self.f, attr)
193 return getattr(self.f, attr)
194
194
195 return wrappedfileobj(fp)
195 return wrappedfileobj(fp)
196 if util.safehasattr(pat, 'write') and writable:
196 if util.safehasattr(pat, 'write') and writable:
197 return pat
197 return pat
198 if util.safehasattr(pat, 'read') and 'r' in mode:
198 if util.safehasattr(pat, 'read') and 'r' in mode:
199 return pat
199 return pat
200 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
200 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
201 if modemap is not None:
201 if modemap is not None:
202 mode = modemap.get(fn, mode)
202 mode = modemap.get(fn, mode)
203 if mode == 'wb':
203 if mode == 'wb':
204 modemap[fn] = 'ab'
204 modemap[fn] = 'ab'
205 return open(fn, mode)
205 return open(fn, mode)
206
206
207 def openrevlog(repo, cmd, file_, opts):
207 def openrevlog(repo, cmd, file_, opts):
208 """opens the changelog, manifest, a filelog or a given revlog"""
208 """opens the changelog, manifest, a filelog or a given revlog"""
209 cl = opts['changelog']
209 cl = opts['changelog']
210 mf = opts['manifest']
210 mf = opts['manifest']
211 msg = None
211 msg = None
212 if cl and mf:
212 if cl and mf:
213 msg = _('cannot specify --changelog and --manifest at the same time')
213 msg = _('cannot specify --changelog and --manifest at the same time')
214 elif cl or mf:
214 elif cl or mf:
215 if file_:
215 if file_:
216 msg = _('cannot specify filename with --changelog or --manifest')
216 msg = _('cannot specify filename with --changelog or --manifest')
217 elif not repo:
217 elif not repo:
218 msg = _('cannot specify --changelog or --manifest '
218 msg = _('cannot specify --changelog or --manifest '
219 'without a repository')
219 'without a repository')
220 if msg:
220 if msg:
221 raise util.Abort(msg)
221 raise util.Abort(msg)
222
222
223 r = None
223 r = None
224 if repo:
224 if repo:
225 if cl:
225 if cl:
226 r = repo.changelog
226 r = repo.changelog
227 elif mf:
227 elif mf:
228 r = repo.manifest
228 r = repo.manifest
229 elif file_:
229 elif file_:
230 filelog = repo.file(file_)
230 filelog = repo.file(file_)
231 if len(filelog):
231 if len(filelog):
232 r = filelog
232 r = filelog
233 if not r:
233 if not r:
234 if not file_:
234 if not file_:
235 raise error.CommandError(cmd, _('invalid arguments'))
235 raise error.CommandError(cmd, _('invalid arguments'))
236 if not os.path.isfile(file_):
236 if not os.path.isfile(file_):
237 raise util.Abort(_("revlog '%s' not found") % file_)
237 raise util.Abort(_("revlog '%s' not found") % file_)
238 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
238 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
239 file_[:-2] + ".i")
239 file_[:-2] + ".i")
240 return r
240 return r
241
241
242 def copy(ui, repo, pats, opts, rename=False):
242 def copy(ui, repo, pats, opts, rename=False):
243 # called with the repo lock held
243 # called with the repo lock held
244 #
244 #
245 # hgsep => pathname that uses "/" to separate directories
245 # hgsep => pathname that uses "/" to separate directories
246 # ossep => pathname that uses os.sep to separate directories
246 # ossep => pathname that uses os.sep to separate directories
247 cwd = repo.getcwd()
247 cwd = repo.getcwd()
248 targets = {}
248 targets = {}
249 after = opts.get("after")
249 after = opts.get("after")
250 dryrun = opts.get("dry_run")
250 dryrun = opts.get("dry_run")
251 wctx = repo[None]
251 wctx = repo[None]
252
252
253 def walkpat(pat):
253 def walkpat(pat):
254 srcs = []
254 srcs = []
255 badstates = after and '?' or '?r'
255 badstates = after and '?' or '?r'
256 m = scmutil.match(repo[None], [pat], opts, globbed=True)
256 m = scmutil.match(repo[None], [pat], opts, globbed=True)
257 for abs in repo.walk(m):
257 for abs in repo.walk(m):
258 state = repo.dirstate[abs]
258 state = repo.dirstate[abs]
259 rel = m.rel(abs)
259 rel = m.rel(abs)
260 exact = m.exact(abs)
260 exact = m.exact(abs)
261 if state in badstates:
261 if state in badstates:
262 if exact and state == '?':
262 if exact and state == '?':
263 ui.warn(_('%s: not copying - file is not managed\n') % rel)
                    ui.warn(_('%s: not copying - file is not managed\n') % rel)
                if exact and state == 'r':
                    ui.warn(_('%s: not copying - file has been marked for'
                              ' remove\n') % rel)
                continue
            # abs: hgsep
            # rel: ossep
            srcs.append((abs, rel, exact))
        return srcs

    # abssrc: hgsep
    # relsrc: ossep
    # otarget: ossep
    def copyfile(abssrc, relsrc, otarget, exact):
        abstarget = pathutil.canonpath(repo.root, cwd, otarget)
        if '/' in abstarget:
            # We cannot normalize abstarget itself, this would prevent
            # case only renames, like a => A.
            abspath, absname = abstarget.rsplit('/', 1)
            abstarget = repo.dirstate.normalize(abspath) + '/' + absname
        reltarget = repo.pathto(abstarget, cwd)
        target = repo.wjoin(abstarget)
        src = repo.wjoin(abssrc)
        state = repo.dirstate[abstarget]

        scmutil.checkportable(ui, abstarget)

        # check for collisions
        prevsrc = targets.get(abstarget)
        if prevsrc is not None:
            ui.warn(_('%s: not overwriting - %s collides with %s\n') %
                    (reltarget, repo.pathto(abssrc, cwd),
                     repo.pathto(prevsrc, cwd)))
            return

        # check for overwrites
        exists = os.path.lexists(target)
        samefile = False
        if exists and abssrc != abstarget:
            if (repo.dirstate.normalize(abssrc) ==
                repo.dirstate.normalize(abstarget)):
                if not rename:
                    ui.warn(_("%s: can't copy - same file\n") % reltarget)
                    return
                exists = False
                samefile = True

        if not after and exists or after and state in 'mn':
            if not opts['force']:
                ui.warn(_('%s: not overwriting - file exists\n') %
                        reltarget)
                return

        if after:
            if not exists:
                if rename:
                    ui.warn(_('%s: not recording move - %s does not exist\n') %
                            (relsrc, reltarget))
                else:
                    ui.warn(_('%s: not recording copy - %s does not exist\n') %
                            (relsrc, reltarget))
                return
        elif not dryrun:
            try:
                if exists:
                    os.unlink(target)
                targetdir = os.path.dirname(target) or '.'
                if not os.path.isdir(targetdir):
                    os.makedirs(targetdir)
                if samefile:
                    tmp = target + "~hgrename"
                    os.rename(src, tmp)
                    os.rename(tmp, target)
                else:
                    util.copyfile(src, target)
                srcexists = True
            except IOError, inst:
                if inst.errno == errno.ENOENT:
                    ui.warn(_('%s: deleted in working copy\n') % relsrc)
                    srcexists = False
                else:
                    ui.warn(_('%s: cannot copy - %s\n') %
                            (relsrc, inst.strerror))
                    return True # report a failure

        if ui.verbose or not exact:
            if rename:
                ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
            else:
                ui.status(_('copying %s to %s\n') % (relsrc, reltarget))

        targets[abstarget] = abssrc

        # fix up dirstate
        scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
                             dryrun=dryrun, cwd=cwd)
        if rename and not dryrun:
            if not after and srcexists and not samefile:
                util.unlinkpath(repo.wjoin(abssrc))
            wctx.forget([abssrc])

    # pat: ossep
    # dest ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathfn(pat, dest, srcs):
        if os.path.isdir(pat):
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            abspfx = util.localpath(abspfx)
            if destdirexists:
                striplen = len(os.path.split(abspfx)[0])
            else:
                striplen = len(abspfx)
            if striplen:
                striplen += len(os.sep)
            res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
        elif destdirexists:
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            res = lambda p: dest
        return res

    # pat: ossep
    # dest ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathafterfn(pat, dest, srcs):
        if matchmod.patkind(pat):
            # a mercurial pattern
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            if len(abspfx) < len(srcs[0][0]):
                # A directory. Either the target path contains the last
                # component of the source path or it does not.
                def evalpath(striplen):
                    score = 0
                    for s in srcs:
                        t = os.path.join(dest, util.localpath(s[0])[striplen:])
                        if os.path.lexists(t):
                            score += 1
                    return score

                abspfx = util.localpath(abspfx)
                striplen = len(abspfx)
                if striplen:
                    striplen += len(os.sep)
                if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
                    score = evalpath(striplen)
                    striplen1 = len(os.path.split(abspfx)[0])
                    if striplen1:
                        striplen1 += len(os.sep)
                    if evalpath(striplen1) > score:
                        striplen = striplen1
                res = lambda p: os.path.join(dest,
                                             util.localpath(p)[striplen:])
            else:
                # a file
                if destdirexists:
                    res = lambda p: os.path.join(dest,
                                        os.path.basename(util.localpath(p)))
                else:
                    res = lambda p: dest
        return res


    pats = scmutil.expandpats(pats)
    if not pats:
        raise util.Abort(_('no source or destination specified'))
    if len(pats) == 1:
        raise util.Abort(_('no destination specified'))
    dest = pats.pop()
    destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
    if not destdirexists:
        if len(pats) > 1 or matchmod.patkind(pats[0]):
            raise util.Abort(_('with multiple sources, destination must be an '
                               'existing directory'))
        if util.endswithsep(dest):
            raise util.Abort(_('destination %s is not a directory') % dest)

    tfn = targetpathfn
    if after:
        tfn = targetpathafterfn
    copylist = []
    for pat in pats:
        srcs = walkpat(pat)
        if not srcs:
            continue
        copylist.append((tfn(pat, dest, srcs), srcs))
    if not copylist:
        raise util.Abort(_('no files to copy'))

    errors = 0
    for targetpath, srcs in copylist:
        for abssrc, relsrc, exact in srcs:
            if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
                errors += 1

    if errors:
        ui.warn(_('(consider using --after)\n'))

    return errors != 0

def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
            runargs=None, appendpid=False):
    '''Run a command as a service.'''

    def writepid(pid):
        if opts['pid_file']:
            mode = appendpid and 'a' or 'w'
            fp = open(opts['pid_file'], mode)
            fp.write(str(pid) + '\n')
            fp.close()

    if opts['daemon'] and not opts['daemon_pipefds']:
        # Signal child process startup with file removal
        lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
        os.close(lockfd)
        try:
            if not runargs:
                runargs = util.hgcmd() + sys.argv[1:]
            runargs.append('--daemon-pipefds=%s' % lockpath)
            # Don't pass --cwd to the child process, because we've already
            # changed directory.
            for i in xrange(1, len(runargs)):
                if runargs[i].startswith('--cwd='):
                    del runargs[i]
                    break
                elif runargs[i].startswith('--cwd'):
                    del runargs[i:i + 2]
                    break
            def condfn():
                return not os.path.exists(lockpath)
            pid = util.rundetached(runargs, condfn)
            if pid < 0:
                raise util.Abort(_('child process failed to start'))
            writepid(pid)
        finally:
            try:
                os.unlink(lockpath)
            except OSError, e:
                if e.errno != errno.ENOENT:
                    raise
        if parentfn:
            return parentfn(pid)
        else:
            return

    if initfn:
        initfn()

    if not opts['daemon']:
        writepid(os.getpid())

    if opts['daemon_pipefds']:
        lockpath = opts['daemon_pipefds']
        try:
            os.setsid()
        except AttributeError:
            pass
        os.unlink(lockpath)
        util.hidewindow()
        sys.stdout.flush()
        sys.stderr.flush()

        nullfd = os.open(os.devnull, os.O_RDWR)
        logfilefd = nullfd
        if logfile:
            logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
        os.dup2(nullfd, 0)
        os.dup2(logfilefd, 1)
        os.dup2(logfilefd, 2)
        if nullfd not in (0, 1, 2):
            os.close(nullfd)
        if logfile and logfilefd not in (0, 1, 2):
            os.close(logfilefd)

    if runfn:
        return runfn()

def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
    """Utility function used by commands.import to import a single patch

    This function is explicitly defined here to help the evolve extension to
    wrap this part of the import logic.

    The API is currently a bit ugly because it is a simple code translation
    from the import command. Feel free to make it better.

    :hunk: a patch (as a binary string)
    :parents: nodes that will be parent of the created commit
    :opts: the full dict of option passed to the import command
    :msgs: list to save commit message to.
           (used in case we need to save it when failing)
    :updatefunc: a function that updates a repo to a given node
                 updatefunc(<repo>, <node>)
    """
    tmpname, message, user, date, branch, nodeid, p1, p2 = \
        patch.extract(ui, hunk)

    editor = commiteditor
    if opts.get('edit'):
        editor = commitforceeditor
    update = not opts.get('bypass')
    strip = opts["strip"]
    sim = float(opts.get('similarity') or 0)
    if not tmpname:
        return (None, None)
    msg = _('applied to working directory')

    try:
        cmdline_message = logmessage(ui, opts)
        if cmdline_message:
            # pickup the cmdline msg
            message = cmdline_message
        elif message:
            # pickup the patch msg
            message = message.strip()
        else:
            # launch the editor
            message = None
        ui.debug('message:\n%s\n' % message)

        if len(parents) == 1:
            parents.append(repo[nullid])
        if opts.get('exact'):
            if not nodeid or not p1:
                raise util.Abort(_('not a Mercurial patch'))
            p1 = repo[p1]
            p2 = repo[p2 or nullid]
        elif p2:
            try:
                p1 = repo[p1]
                p2 = repo[p2]
                # Without any options, consider p2 only if the
                # patch is being applied on top of the recorded
                # first parent.
                if p1 != parents[0]:
                    p1 = parents[0]
                    p2 = repo[nullid]
            except error.RepoError:
                p1, p2 = parents
        else:
            p1, p2 = parents

        n = None
        if update:
            if p1 != parents[0]:
                updatefunc(repo, p1.node())
            if p2 != parents[1]:
                repo.setparents(p1.node(), p2.node())

            if opts.get('exact') or opts.get('import_branch'):
                repo.dirstate.setbranch(branch or 'default')

            files = set()
            patch.patch(ui, repo, tmpname, strip=strip, files=files,
                        eolmode=None, similarity=sim / 100.0)
            files = list(files)
            if opts.get('no_commit'):
                if message:
                    msgs.append(message)
            else:
                if opts.get('exact') or p2:
                    # If you got here, you either use --force and know what
                    # you are doing or used --exact or a merge patch while
                    # being updated to its first parent.
                    m = None
                else:
                    m = scmutil.matchfiles(repo, files or [])
                n = repo.commit(message, opts.get('user') or user,
                                opts.get('date') or date, match=m,
                                editor=editor)
        else:
            if opts.get('exact') or opts.get('import_branch'):
                branch = branch or 'default'
            else:
                branch = p1.branch()
            store = patch.filestore()
            try:
                files = set()
                try:
                    patch.patchrepo(ui, repo, p1, store, tmpname, strip,
                                    files, eolmode=None)
                except patch.PatchError, e:
                    raise util.Abort(str(e))
                memctx = context.makememctx(repo, (p1.node(), p2.node()),
                                            message,
                                            opts.get('user') or user,
                                            opts.get('date') or date,
                                            branch, files, store,
                                            editor=commiteditor)
                repo.savecommitmessage(memctx.description())
                n = memctx.commit()
            finally:
                store.close()
        if opts.get('exact') and hex(n) != nodeid:
            raise util.Abort(_('patch is damaged or loses information'))
        if n:
            # i18n: refers to a short changeset id
            msg = _('created %s') % short(n)
        return (msg, n)
    finally:
        os.unlink(tmpname)

def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
           opts=None):
    '''export changesets as hg patches.'''

    total = len(revs)
    revwidth = max([len(str(rev)) for rev in revs])
    filemode = {}

    def single(rev, seqno, fp):
        ctx = repo[rev]
        node = ctx.node()
        parents = [p.node() for p in ctx.parents() if p]
        branch = ctx.branch()
        if switch_parent:
            parents.reverse()
        prev = (parents and parents[0]) or nullid

        shouldclose = False
        if not fp and len(template) > 0:
            desc_lines = ctx.description().rstrip().split('\n')
            desc = desc_lines[0] # Commit always has a first line.
            fp = makefileobj(repo, template, node, desc=desc, total=total,
                             seqno=seqno, revwidth=revwidth, mode='wb',
                             modemap=filemode)
            if fp != template:
                shouldclose = True
        if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
            repo.ui.note("%s\n" % fp.name)

        if not fp:
            write = repo.ui.write
        else:
            def write(s, **kw):
                fp.write(s)


        write("# HG changeset patch\n")
        write("# User %s\n" % ctx.user())
        write("# Date %d %d\n" % ctx.date())
        write("# %s\n" % util.datestr(ctx.date()))
        if branch and branch != 'default':
            write("# Branch %s\n" % branch)
        write("# Node ID %s\n" % hex(node))
        write("# Parent %s\n" % hex(prev))
        if len(parents) > 1:
            write("# Parent %s\n" % hex(parents[1]))
        write(ctx.description().rstrip())
        write("\n\n")

        for chunk, label in patch.diffui(repo, prev, node, opts=opts):
            write(chunk, label=label)

        if shouldclose:
            fp.close()

    for seqno, rev in enumerate(revs):
        single(rev, seqno + 1, fp)

def diffordiffstat(ui, repo, diffopts, node1, node2, match,
                   changes=None, stat=False, fp=None, prefix='',
                   listsubrepos=False):
    '''show diff or diffstat.'''
    if fp is None:
        write = ui.write
    else:
        def write(s, **kw):
            fp.write(s)

    if stat:
        diffopts = diffopts.copy(context=0)
        width = 80
        if not ui.plain():
            width = ui.termwidth()
        chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
                            prefix=prefix)
        for chunk, label in patch.diffstatui(util.iterlines(chunks),
                                             width=width,
                                             git=diffopts.git):
            write(chunk, label=label)
    else:
        for chunk, label in patch.diffui(repo, node1, node2, match,
                                         changes, diffopts, prefix=prefix):
            write(chunk, label=label)

    if listsubrepos:
        ctx1 = repo[node1]
        ctx2 = repo[node2]
        for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
            tempnode2 = node2
            try:
                if node2 is not None:
                    tempnode2 = ctx2.substate[subpath][1]
            except KeyError:
                # A subrepo that existed in node1 was deleted between node1 and
                # node2 (inclusive). Thus, ctx2's substate won't contain that
                # subpath. The best we can do is to ignore it.
                tempnode2 = None
            submatch = matchmod.narrowmatcher(subpath, match)
            sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
                     stat=stat, fp=fp, prefix=prefix)

class changeset_printer(object):
    '''show changeset information when templating not requested.'''

    def __init__(self, ui, repo, patch, diffopts, buffered):
        self.ui = ui
        self.repo = repo
        self.buffered = buffered
        self.patch = patch
        self.diffopts = diffopts
        self.header = {}
        self.hunk = {}
        self.lastheader = None
        self.footer = None

    def flush(self, rev):
        if rev in self.header:
            h = self.header[rev]
            if h != self.lastheader:
                self.lastheader = h
                self.ui.write(h)
            del self.header[rev]
        if rev in self.hunk:
            self.ui.write(self.hunk[rev])
            del self.hunk[rev]
            return 1
        return 0

    def close(self):
        if self.footer:
            self.ui.write(self.footer)

    def show(self, ctx, copies=None, matchfn=None, **props):
        if self.buffered:
            self.ui.pushbuffer()
            self._show(ctx, copies, matchfn, props)
            self.hunk[ctx.rev()] = self.ui.popbuffer(labeled=True)
        else:
            self._show(ctx, copies, matchfn, props)

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        changenode = ctx.node()
        rev = ctx.rev()

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % (rev, short(changenode)),
                          label='log.node')
            return

        log = self.repo.changelog
        date = util.datestr(ctx.date())

        hexfunc = self.ui.debugflag and hex or short

        parents = [(p, hexfunc(log.node(p)))
                   for p in self._meaningful_parentrevs(log, rev)]

        # i18n: column positioning for "hg log"
        self.ui.write(_("changeset:   %d:%s\n") % (rev, hexfunc(changenode)),
                      label='log.changeset changeset.%s' % ctx.phasestr())

        branch = ctx.branch()
        # don't show the default branch name
        if branch != 'default':
            # i18n: column positioning for "hg log"
            self.ui.write(_("branch:      %s\n") % branch,
                          label='log.branch')
        for bookmark in self.repo.nodebookmarks(changenode):
            # i18n: column positioning for "hg log"
            self.ui.write(_("bookmark:    %s\n") % bookmark,
                          label='log.bookmark')
        for tag in self.repo.nodetags(changenode):
            # i18n: column positioning for "hg log"
            self.ui.write(_("tag:         %s\n") % tag,
                          label='log.tag')
        if self.ui.debugflag and ctx.phase():
            # i18n: column positioning for "hg log"
            self.ui.write(_("phase:       %s\n") % _(ctx.phasestr()),
                          label='log.phase')
        for parent in parents:
            # i18n: column positioning for "hg log"
            self.ui.write(_("parent:      %d:%s\n") % parent,
                          label='log.parent changeset.%s' % ctx.phasestr())

        if self.ui.debugflag:
            mnode = ctx.manifestnode()
            # i18n: column positioning for "hg log"
            self.ui.write(_("manifest:    %d:%s\n") %
                          (self.repo.manifest.rev(mnode), hex(mnode)),
                          label='ui.debug log.manifest')
        # i18n: column positioning for "hg log"
        self.ui.write(_("user:        %s\n") % ctx.user(),
                      label='log.user')
        # i18n: column positioning for "hg log"
        self.ui.write(_("date:        %s\n") % date,
                      label='log.date')

        if self.ui.debugflag:
            files = self.repo.status(log.parents(changenode)[0], changenode)[:3]
            for key, value in zip([# i18n: column positioning for "hg log"
                                   _("files:"),
872 # i18n: column positioning for "hg log"
872 # i18n: column positioning for "hg log"
873 _("files+:"),
873 _("files+:"),
874 # i18n: column positioning for "hg log"
874 # i18n: column positioning for "hg log"
875 _("files-:")], files):
875 _("files-:")], files):
876 if value:
876 if value:
877 self.ui.write("%-12s %s\n" % (key, " ".join(value)),
877 self.ui.write("%-12s %s\n" % (key, " ".join(value)),
878 label='ui.debug log.files')
878 label='ui.debug log.files')
879 elif ctx.files() and self.ui.verbose:
879 elif ctx.files() and self.ui.verbose:
880 # i18n: column positioning for "hg log"
880 # i18n: column positioning for "hg log"
881 self.ui.write(_("files: %s\n") % " ".join(ctx.files()),
881 self.ui.write(_("files: %s\n") % " ".join(ctx.files()),
882 label='ui.note log.files')
882 label='ui.note log.files')
883 if copies and self.ui.verbose:
883 if copies and self.ui.verbose:
884 copies = ['%s (%s)' % c for c in copies]
884 copies = ['%s (%s)' % c for c in copies]
885 # i18n: column positioning for "hg log"
885 # i18n: column positioning for "hg log"
886 self.ui.write(_("copies: %s\n") % ' '.join(copies),
886 self.ui.write(_("copies: %s\n") % ' '.join(copies),
887 label='ui.note log.copies')
887 label='ui.note log.copies')
888
888
889 extra = ctx.extra()
889 extra = ctx.extra()
890 if extra and self.ui.debugflag:
890 if extra and self.ui.debugflag:
891 for key, value in sorted(extra.items()):
891 for key, value in sorted(extra.items()):
892 # i18n: column positioning for "hg log"
892 # i18n: column positioning for "hg log"
893 self.ui.write(_("extra: %s=%s\n")
893 self.ui.write(_("extra: %s=%s\n")
894 % (key, value.encode('string_escape')),
894 % (key, value.encode('string_escape')),
895 label='ui.debug log.extra')
895 label='ui.debug log.extra')
896
896
897 description = ctx.description().strip()
897 description = ctx.description().strip()
898 if description:
898 if description:
899 if self.ui.verbose:
899 if self.ui.verbose:
900 self.ui.write(_("description:\n"),
900 self.ui.write(_("description:\n"),
901 label='ui.note log.description')
901 label='ui.note log.description')
902 self.ui.write(description,
902 self.ui.write(description,
903 label='ui.note log.description')
903 label='ui.note log.description')
904 self.ui.write("\n\n")
904 self.ui.write("\n\n")
905 else:
905 else:
906 # i18n: column positioning for "hg log"
906 # i18n: column positioning for "hg log"
907 self.ui.write(_("summary: %s\n") %
907 self.ui.write(_("summary: %s\n") %
908 description.splitlines()[0],
908 description.splitlines()[0],
909 label='log.summary')
909 label='log.summary')
910 self.ui.write("\n")
910 self.ui.write("\n")
911
911
912 self.showpatch(changenode, matchfn)
912 self.showpatch(changenode, matchfn)
913
913
914 def showpatch(self, node, matchfn):
914 def showpatch(self, node, matchfn):
915 if not matchfn:
915 if not matchfn:
916 matchfn = self.patch
916 matchfn = self.patch
917 if matchfn:
917 if matchfn:
918 stat = self.diffopts.get('stat')
918 stat = self.diffopts.get('stat')
919 diff = self.diffopts.get('patch')
919 diff = self.diffopts.get('patch')
920 diffopts = patch.diffopts(self.ui, self.diffopts)
920 diffopts = patch.diffopts(self.ui, self.diffopts)
921 prev = self.repo.changelog.parents(node)[0]
921 prev = self.repo.changelog.parents(node)[0]
922 if stat:
922 if stat:
923 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
923 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
924 match=matchfn, stat=True)
924 match=matchfn, stat=True)
925 if diff:
925 if diff:
926 if stat:
926 if stat:
927 self.ui.write("\n")
927 self.ui.write("\n")
928 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
928 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
929 match=matchfn, stat=False)
929 match=matchfn, stat=False)
930 self.ui.write("\n")
930 self.ui.write("\n")
931
931
932 def _meaningful_parentrevs(self, log, rev):
932 def _meaningful_parentrevs(self, log, rev):
933 """Return list of meaningful (or all if debug) parentrevs for rev.
933 """Return list of meaningful (or all if debug) parentrevs for rev.
934
934
935 For merges (two non-nullrev revisions) both parents are meaningful.
935 For merges (two non-nullrev revisions) both parents are meaningful.
936 Otherwise the first parent revision is considered meaningful if it
936 Otherwise the first parent revision is considered meaningful if it
937 is not the preceding revision.
937 is not the preceding revision.
938 """
938 """
939 parents = log.parentrevs(rev)
939 parents = log.parentrevs(rev)
940 if not self.ui.debugflag and parents[1] == nullrev:
940 if not self.ui.debugflag and parents[1] == nullrev:
941 if parents[0] >= rev - 1:
941 if parents[0] >= rev - 1:
942 parents = []
942 parents = []
943 else:
943 else:
944 parents = [parents[0]]
944 parents = [parents[0]]
945 return parents
945 return parents
946
946
947
947
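The parent-filtering rule above can be checked in isolation on plain `(p1, p2)` parent-revision tuples. This is a hypothetical standalone sketch (`meaningful_parents`, `NULLREV`), not Mercurial API; it only mirrors the branching logic of `_meaningful_parentrevs`:

```python
NULLREV = -1  # sentinel for "no parent", as in Mercurial's nullrev

def meaningful_parents(rev, parents, debug=False):
    """parents is the (p1, p2) parent-revision tuple of rev."""
    if debug:
        return list(parents)     # debug mode: always show both slots
    if parents[1] != NULLREV:
        return list(parents)     # a real merge: both parents matter
    if parents[0] >= rev - 1:
        return []                # parent is the preceding rev: implied
    return [parents[0]]          # a "jump": first parent is meaningful

# linear child of rev 4 -> nothing shown; child of rev 2 -> show it
print(meaningful_parents(5, (4, NULLREV)))  # []
print(meaningful_parents(5, (2, NULLREV)))  # [2]
print(meaningful_parents(5, (3, 4)))        # [3, 4]
```

This is why `hg log` only prints "parent:" lines for merges and for changesets whose first parent is not the immediately preceding revision.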
class changeset_templater(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, patch, diffopts, tmpl, mapfile, buffered):
        changeset_printer.__init__(self, ui, repo, patch, diffopts, buffered)
        formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
        defaulttempl = {
            'parent': '{rev}:{node|formatnode} ',
            'manifest': '{rev}:{node|formatnode}',
            'file_copy': '{name} ({source})',
            'extra': '{key}={value|stringescape}'
            }
        # filecopy is preserved for compatibility reasons
        defaulttempl['filecopy'] = defaulttempl['file_copy']
        self.t = templater.templater(mapfile, {'formatnode': formatnode},
                                     cache=defaulttempl)
        if tmpl:
            self.t.cache['changeset'] = tmpl

        self.cache = {}

    def _meaningful_parentrevs(self, ctx):
        """Return list of meaningful (or all if debug) parentrevs for rev.
        """
        parents = ctx.parents()
        if len(parents) > 1:
            return parents
        if self.ui.debugflag:
            return [parents[0], self.repo['null']]
        if parents[0].rev() >= ctx.rev() - 1:
            return []
        return parents

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''

        showlist = templatekw.showlist

        # showparents() behaviour depends on ui trace level which
        # causes unexpected behaviours at templating level and makes
        # it harder to extract it in a standalone function. Its
        # behaviour cannot be changed so leave it here for now.
        def showparents(**args):
            ctx = args['ctx']
            parents = [[('rev', p.rev()), ('node', p.hex())]
                       for p in self._meaningful_parentrevs(ctx)]
            return showlist('parent', parents, **args)

        props = props.copy()
        props.update(templatekw.keywords)
        props['parents'] = showparents
        props['templ'] = self.t
        props['ctx'] = ctx
        props['repo'] = self.repo
        props['revcache'] = {'copies': copies}
        props['cache'] = self.cache

        # find correct templates for current mode

        tmplmodes = [
            (True, None),
            (self.ui.verbose, 'verbose'),
            (self.ui.quiet, 'quiet'),
            (self.ui.debugflag, 'debug'),
            ]

        types = {'header': '', 'footer': '', 'changeset': 'changeset'}
        for mode, postfix in tmplmodes:
            for type in types:
                cur = postfix and ('%s_%s' % (type, postfix)) or type
                if mode and cur in self.t:
                    types[type] = cur

        try:

            # write header
            if types['header']:
                h = templater.stringify(self.t(types['header'], **props))
                if self.buffered:
                    self.header[ctx.rev()] = h
                else:
                    if self.lastheader != h:
                        self.lastheader = h
                        self.ui.write(h)

            # write changeset metadata, then patch if requested
            key = types['changeset']
            self.ui.write(templater.stringify(self.t(key, **props)))
            self.showpatch(ctx.node(), matchfn)

            if types['footer']:
                if not self.footer:
                    self.footer = templater.stringify(self.t(types['footer'],
                                                             **props))

        except KeyError, inst:
            msg = _("%s: no key named '%s'")
            raise util.Abort(msg % (self.t.mapfile, inst.args[0]))
        except SyntaxError, inst:
            raise util.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))

def gettemplate(ui, tmpl, style):
    """
    Find the template matching the given template spec or style.
    """

    # ui settings
    if not tmpl and not style:
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl:
            try:
                tmpl = templater.parsestring(tmpl)
            except SyntaxError:
                tmpl = templater.parsestring(tmpl, quoted=False)
            return tmpl, None
        else:
            style = util.expandpath(ui.config('ui', 'style', ''))

    if style:
        mapfile = style
        if not os.path.split(mapfile)[0]:
            mapname = (templater.templatepath('map-cmdline.' + mapfile)
                       or templater.templatepath(mapfile))
            if mapname:
                mapfile = mapname
        return None, mapfile

    if not tmpl:
        return None, None

    # looks like a literal template?
    if '{' in tmpl:
        return tmpl, None

    # perhaps a stock style?
    if not os.path.split(tmpl)[0]:
        mapname = (templater.templatepath('map-cmdline.' + tmpl)
                   or templater.templatepath(tmpl))
        if mapname and os.path.isfile(mapname):
            return None, mapname

    # perhaps it's a reference to [templates]
    t = ui.config('templates', tmpl)
    if t:
        try:
            tmpl = templater.parsestring(t)
        except SyntaxError:
            tmpl = templater.parsestring(t, quoted=False)
        return tmpl, None

    # perhaps it's a path to a map or a template
    if ('/' in tmpl or '\\' in tmpl) and os.path.isfile(tmpl):
        # is it a mapfile for a style?
        if os.path.basename(tmpl).startswith("map-"):
            return None, os.path.realpath(tmpl)
        tmpl = open(tmpl).read()
        return tmpl, None

    # constant string?
    return tmpl, None

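The precedence that `gettemplate` implements ("first non-empty hit" of the template option, the style option, then the `[ui]` config entries) can be sketched as a tiny pure function. This is a hypothetical helper (`pick_display_spec`) for illustration, not Mercurial code:

```python
def pick_display_spec(opt_template, opt_style, cfg_logtemplate, cfg_style):
    """Return ('template', value) or ('style', value) for the first
    non-empty spec, or None if everything is unset/empty."""
    for kind, value in (('template', opt_template),   # --template
                        ('style', opt_style),          # --style
                        ('template', cfg_logtemplate), # [ui] logtemplate
                        ('style', cfg_style)):         # [ui] style
        if value:
            return kind, value
    return None

print(pick_display_spec('', 'compact', '{rev}\n', ''))  # ('style', 'compact')
```

When nothing matches, the caller falls back to the plain `changeset_printer` display.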
def show_changeset(ui, repo, opts, buffered=False):
    """show one changeset using template or regular display.

    Display format will be the first non-empty hit of:
    1. option 'template'
    2. option 'style'
    3. [ui] setting 'logtemplate'
    4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changeset_printer() is done.
    """
    # options
    patch = None
    if opts.get('patch') or opts.get('stat'):
        patch = scmutil.matchall(repo)

    tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))

    if not tmpl and not mapfile:
        return changeset_printer(ui, repo, patch, opts, buffered)

    try:
        t = changeset_templater(ui, repo, patch, opts, tmpl, mapfile, buffered)
    except SyntaxError, inst:
        raise util.Abort(inst.args[0])
    return t

def showmarker(ui, marker):
    """utility function to display an obsolescence marker in a readable way

    To be used by debug functions."""
    ui.write(hex(marker.precnode()))
    for repl in marker.succnodes():
        ui.write(' ')
        ui.write(hex(repl))
    ui.write(' %X ' % marker._data[2])
    ui.write('{%s}' % (', '.join('%r: %r' % t for t in
                                 sorted(marker.metadata().items()))))
    ui.write('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = util.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, util.datestr(results[rev])))
            return str(rev)

    raise util.Abort(_("revision matching date not found"))

def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

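`increasingwindows` is an infinite generator whose values double until they hit the cap, so early windows are cheap and later ones amortize the walk. A quick standalone check (reproducing the generator above and using `itertools.islice` to take finitely many values):

```python
from itertools import islice

def increasingwindows(windowsize=8, sizelimit=512):
    # same shape as the generator above: 8, 16, 32, ... capped at 512
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

print(list(islice(increasingwindows(), 9)))
# [8, 16, 32, 64, 128, 256, 512, 512, 512]
```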
class FileWalkError(Exception):
    pass

def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)

    def filerevgen(filelog, last):
        """
        Only files, no patterns. Check the history of each file.

        Examines filelog entries within the minrev, maxrev linkrev range.
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order.
        """
        cl_count = len(repo)
        revs = []
        for j in xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield rev for which we have the changelog, it can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)

    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise util.Abort(_('cannot follow file not in parent '
                                       'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise util.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = set([filelog.linkrev(last)])

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

def walkchangerevs(repo, match, opts, prepare):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an iterator yielding contexts. Before
    yielding each context, the iterator will first call the prepare
    function on each context in the window in forward order.'''

    follow = opts.get('follow') or opts.get('follow_first')

    if opts.get('rev'):
        revs = scmutil.revrange(repo, opts.get('rev'))
    elif follow:
        revs = repo.revs('reverse(:.)')
    else:
        revs = revset.spanset(repo)
        revs.reverse()
    if not revs:
        return []
    wanted = set()
    slowpath = match.anypats() or (match.files() and opts.get('removed'))
    fncache = {}
    change = repo.changectx

    # First step is to fill wanted, the set of revisions that we want to yield.
    # When it does not induce extra cost, we also fill fncache for revisions in
    # wanted: a cache of filenames that were changed (ctx.files()) and that
    # match the file filtering conditions.

    if not slowpath and not match.files():
        # No files, no patterns. Display all revs.
        wanted = revs

    if not slowpath and match.files():
        # We only have to read through the filelog to find wanted revisions

        try:
            wanted = walkfilerevs(repo, match, follow, revs, fncache)
        except FileWalkError:
            slowpath = True

            # We decided to fall back to the slowpath because at least one
            # of the paths was not a file. Check to see if at least one of them
            # existed in history, otherwise simply return
            for path in match.files():
                if path == '.' or path in repo.store:
                    break
            else:
                return []

    if slowpath:
        # We have to read the changelog to match filenames against
        # changed files

        if follow:
            raise util.Abort(_('can only follow copies/renames for explicit '
                               'filenames'))

        # The slow path checks files modified in every changeset.
        # This is really slow on large repos, so compute the set lazily.
        class lazywantedset(object):
            def __init__(self):
                self.set = set()
                self.revs = set(revs)

            # No need to worry about locality here because it will be accessed
            # in the same order as the increasing window below.
            def __contains__(self, value):
                if value in self.set:
                    return True
                elif not value in self.revs:
                    return False
                else:
                    self.revs.discard(value)
                    ctx = change(value)
                    matches = filter(match, ctx.files())
                    if matches:
                        fncache[value] = matches
                        self.set.add(value)
                        return True
                    return False

            def discard(self, value):
                self.revs.discard(value)
                self.set.discard(value)

        wanted = lazywantedset()

1372 class followfilter(object):
1372 class followfilter(object):
1373 def __init__(self, onlyfirst=False):
1373 def __init__(self, onlyfirst=False):
1374 self.startrev = nullrev
1374 self.startrev = nullrev
1375 self.roots = set()
1375 self.roots = set()
1376 self.onlyfirst = onlyfirst
1376 self.onlyfirst = onlyfirst
1377
1377
1378 def match(self, rev):
1378 def match(self, rev):
1379 def realparents(rev):
1379 def realparents(rev):
1380 if self.onlyfirst:
1380 if self.onlyfirst:
1381 return repo.changelog.parentrevs(rev)[0:1]
1381 return repo.changelog.parentrevs(rev)[0:1]
1382 else:
1382 else:
1383 return filter(lambda x: x != nullrev,
1383 return filter(lambda x: x != nullrev,
1384 repo.changelog.parentrevs(rev))
1384 repo.changelog.parentrevs(rev))
1385
1385
1386 if self.startrev == nullrev:
1386 if self.startrev == nullrev:
1387 self.startrev = rev
1387 self.startrev = rev
1388 return True
1388 return True
1389
1389
1390 if rev > self.startrev:
1390 if rev > self.startrev:
1391 # forward: all descendants
1391 # forward: all descendants
1392 if not self.roots:
1392 if not self.roots:
1393 self.roots.add(self.startrev)
1393 self.roots.add(self.startrev)
1394 for parent in realparents(rev):
1394 for parent in realparents(rev):
1395 if parent in self.roots:
1395 if parent in self.roots:
1396 self.roots.add(rev)
1396 self.roots.add(rev)
1397 return True
1397 return True
1398 else:
1398 else:
1399 # backwards: all parents
1399 # backwards: all parents
1400 if not self.roots:
1400 if not self.roots:
1401 self.roots.update(realparents(self.startrev))
1401 self.roots.update(realparents(self.startrev))
1402 if rev in self.roots:
1402 if rev in self.roots:
1403 self.roots.remove(rev)
1403 self.roots.remove(rev)
1404 self.roots.update(realparents(rev))
1404 self.roots.update(realparents(rev))
1405 return True
1405 return True
1406
1406
1407 return False
1407 return False
1408
1408
1409 # it might be worthwhile to do this in the iterator if the rev range
1409 # it might be worthwhile to do this in the iterator if the rev range
1410 # is descending and the prune args are all within that range
1410 # is descending and the prune args are all within that range
1411 for rev in opts.get('prune', ()):
1411 for rev in opts.get('prune', ()):
1412 rev = repo[rev].rev()
1412 rev = repo[rev].rev()
1413 ff = followfilter()
1413 ff = followfilter()
1414 stop = min(revs[0], revs[-1])
1414 stop = min(revs[0], revs[-1])
1415 for x in xrange(rev, stop - 1, -1):
1415 for x in xrange(rev, stop - 1, -1):
1416 if ff.match(x):
1416 if ff.match(x):
1417 wanted = wanted - [x]
1417 wanted = wanted - [x]
1418
1418
1419 # Now that wanted is correctly initialized, we can iterate over the
1419 # Now that wanted is correctly initialized, we can iterate over the
1420 # revision range, yielding only revisions in wanted.
1420 # revision range, yielding only revisions in wanted.
1421 def iterate():
1421 def iterate():
1422 if follow and not match.files():
1422 if follow and not match.files():
1423 ff = followfilter(onlyfirst=opts.get('follow_first'))
1423 ff = followfilter(onlyfirst=opts.get('follow_first'))
1424 def want(rev):
1424 def want(rev):
1425 return ff.match(rev) and rev in wanted
1425 return ff.match(rev) and rev in wanted
1426 else:
1426 else:
1427 def want(rev):
1427 def want(rev):
1428 return rev in wanted
1428 return rev in wanted
1429
1429
1430 it = iter(revs)
1430 it = iter(revs)
1431 stopiteration = False
1431 stopiteration = False
1432 for windowsize in increasingwindows():
1432 for windowsize in increasingwindows():
1433 nrevs = []
1433 nrevs = []
1434 for i in xrange(windowsize):
1434 for i in xrange(windowsize):
1435 try:
1435 try:
1436 rev = it.next()
1436 rev = it.next()
1437 if want(rev):
1437 if want(rev):
1438 nrevs.append(rev)
1438 nrevs.append(rev)
1439 except (StopIteration):
1439 except (StopIteration):
1440 stopiteration = True
1440 stopiteration = True
1441 break
1441 break
1442 for rev in sorted(nrevs):
1442 for rev in sorted(nrevs):
1443 fns = fncache.get(rev)
1443 fns = fncache.get(rev)
1444 ctx = change(rev)
1444 ctx = change(rev)
1445 if not fns:
1445 if not fns:
1446 def fns_generator():
1446 def fns_generator():
1447 for f in ctx.files():
1447 for f in ctx.files():
1448 if match(f):
1448 if match(f):
1449 yield f
1449 yield f
1450 fns = fns_generator()
1450 fns = fns_generator()
1451 prepare(ctx, fns)
1451 prepare(ctx, fns)
1452 for rev in nrevs:
1452 for rev in nrevs:
1453 yield change(rev)
1453 yield change(rev)
1454
1454
1455 if stopiteration:
1455 if stopiteration:
1456 break
1456 break
1457
1457
1458 return iterate()
1458 return iterate()
1459
1459
def _makegraphfilematcher(repo, pats, followfirst):
    # When displaying a revision with --patch --follow FILE, we have
    # to know which file of the revision must be diffed. With
    # --follow, we want the names of the ancestors of FILE in the
    # revision, stored in "fcache". "fcache" is populated by
    # reproducing the graph traversal already done by --follow revset
    # and relating linkrevs to file names (which is not "correct" but
    # good enough).
    fcache = {}
    fcacheready = [False]
    pctx = repo['.']
    wctx = repo[None]

    def populate():
        for fn in pats:
            for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
                for c in i:
                    fcache.setdefault(c.linkrev(), set()).add(c.path())

    def filematcher(rev):
        if not fcacheready[0]:
            # Lazy initialization
            fcacheready[0] = True
            populate()
        return scmutil.match(wctx, fcache.get(rev, []), default='path')

    return filematcher

def _makegraphlogrevset(repo, pats, opts, revs):
    """Return (expr, filematcher) where expr is a revset string built
    from log options and file patterns or None. If --stat or --patch
    are not passed filematcher is None. Otherwise it is a callable
    taking a revision number and returning a match object filtering
    the files to be detailed when displaying the revision.
    """
    opt2revset = {
        'no_merges': ('not merge()', None),
        'only_merges': ('merge()', None),
        '_ancestors': ('ancestors(%(val)s)', None),
        '_fancestors': ('_firstancestors(%(val)s)', None),
        '_descendants': ('descendants(%(val)s)', None),
        '_fdescendants': ('_firstdescendants(%(val)s)', None),
        '_matchfiles': ('_matchfiles(%(val)s)', None),
        'date': ('date(%(val)r)', None),
        'branch': ('branch(%(val)r)', ' or '),
        '_patslog': ('filelog(%(val)r)', ' or '),
        '_patsfollow': ('follow(%(val)r)', ' or '),
        '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
        'keyword': ('keyword(%(val)r)', ' or '),
        'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
        'user': ('user(%(val)r)', ' or '),
        }

    opts = dict(opts)
    # follow or not follow?
    follow = opts.get('follow') or opts.get('follow_first')
    followfirst = opts.get('follow_first') and 1 or 0
    # --follow with FILE behaviour depends on revs...
    it = iter(revs)
    startrev = it.next()
    try:
        followdescendants = startrev < it.next()
    except StopIteration:
        followdescendants = False

    # branch and only_branch are really aliases and must be handled at
    # the same time
    opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
    opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
    # pats/include/exclude are passed to match.match() directly in
    # _matchfiles() revset but walkchangerevs() builds its matcher with
    # scmutil.match(). The difference is input pats are globbed on
    # platforms without shell expansion (windows).
    pctx = repo[None]
    match, pats = scmutil.matchandpats(pctx, pats, opts)
    slowpath = match.anypats() or (match.files() and opts.get('removed'))
    if not slowpath:
        for f in match.files():
            if follow and f not in pctx:
                raise util.Abort(_('cannot follow file not in parent '
                                   'revision: "%s"') % f)
            filelog = repo.file(f)
            if not filelog:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise util.Abort(
                        _('cannot follow nonexistent file: "%s"') % f)
                slowpath = True

    # We decided to fall back to the slowpath because at least one
    # of the paths was not a file. Check to see if at least one of them
    # existed in history - in that case, we'll continue down the
    # slowpath; otherwise, we can turn off the slowpath
    if slowpath:
        for path in match.files():
            if path == '.' or path in repo.store:
                break
        else:
            slowpath = False

    if slowpath:
        # See walkchangerevs() slow path.
        #
        if follow:
            raise util.Abort(_('can only follow copies/renames for explicit '
                               'filenames'))
        # pats/include/exclude cannot be represented as separate
        # revset expressions as their filtering logic applies at file
        # level. For instance "-I a -X a" matches a revision touching
        # "a" and "b" while "file(a) and not file(b)" does
        # not. Besides, filesets are evaluated against the working
        # directory.
        matchargs = ['r:', 'd:relpath']
        for p in pats:
            matchargs.append('p:' + p)
        for p in opts.get('include', []):
            matchargs.append('i:' + p)
        for p in opts.get('exclude', []):
            matchargs.append('x:' + p)
        matchargs = ','.join(('%r' % p) for p in matchargs)
        opts['_matchfiles'] = matchargs
    else:
        if follow:
            fpats = ('_patsfollow', '_patsfollowfirst')
            fnopats = (('_ancestors', '_fancestors'),
                       ('_descendants', '_fdescendants'))
            if pats:
                # follow() revset interprets its file argument as a
                # manifest entry, so use match.files(), not pats.
                opts[fpats[followfirst]] = list(match.files())
            else:
                opts[fnopats[followdescendants][followfirst]] = str(startrev)
        else:
            opts['_patslog'] = list(pats)

    filematcher = None
    if opts.get('patch') or opts.get('stat'):
        if follow:
            filematcher = _makegraphfilematcher(repo, pats, followfirst)
        else:
            filematcher = lambda rev: match

    expr = []
    for op, val in opts.iteritems():
        if not val:
            continue
        if op not in opt2revset:
            continue
        revop, andor = opt2revset[op]
        if '%(val)' not in revop:
            expr.append(revop)
        else:
            if not isinstance(val, list):
                e = revop % {'val': val}
            else:
                e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
            expr.append(e)

    if expr:
        expr = '(' + ' and '.join(expr) + ')'
    else:
        expr = None
    return expr, filematcher

def getgraphlogrevs(repo, pats, opts):
    """Return (revs, expr, filematcher) where revs is an iterable of
    revision numbers, expr is a revset string built from log options
    and file patterns or None, and used to filter 'revs'. If --stat or
    --patch are not passed filematcher is None. Otherwise it is a
    callable taking a revision number and returning a match object
    filtering the files to be detailed when displaying the revision.
    """
    if not len(repo):
        return [], None, None
    limit = loglimit(opts)
    # Default --rev value depends on --follow but --follow behaviour
    # depends on revisions resolved from --rev...
    follow = opts.get('follow') or opts.get('follow_first')
    possiblyunsorted = False # whether revs might need sorting
    if opts.get('rev'):
        revs = scmutil.revrange(repo, opts['rev'])
        # Don't sort here because _makegraphlogrevset might depend on the
        # order of revs
        possiblyunsorted = True
    else:
        if follow and len(repo) > 0:
            revs = repo.revs('reverse(:.)')
        else:
            revs = revset.spanset(repo)
            revs.reverse()
    if not revs:
        return revset.baseset(), None, None
    expr, filematcher = _makegraphlogrevset(repo, pats, opts, revs)
    if possiblyunsorted:
        revs.sort(reverse=True)
    if expr:
        # Revset matchers often operate faster on revisions in changelog
        # order, because most filters deal with the changelog.
        revs.reverse()
        matcher = revset.match(repo.ui, expr)
        # Revset matches can reorder revisions. "A or B" typically
        # returns the revision matching A then the revision matching B.
        # Sort again to fix that.
        revs = matcher(repo, revs)
        revs.sort(reverse=True)
    if limit is not None:
        limitedrevs = revset.baseset()
        for idx, rev in enumerate(revs):
            if idx >= limit:
                break
            limitedrevs.append(rev)
        revs = limitedrevs

    return revs, expr, filematcher

def displaygraph(ui, dag, displayer, showparents, edgefn, getrenamed=None,
                 filematcher=None):
    seen, state = [], graphmod.asciistate()
    for rev, type, ctx, parents in dag:
        char = 'o'
        if ctx.node() in showparents:
            char = '@'
        elif ctx.obsolete():
            char = 'x'
        copies = None
        if getrenamed and ctx.rev():
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, ctx.rev())
                if rename:
                    copies.append((fn, rename[0]))
        revmatchfn = None
        if filematcher is not None:
            revmatchfn = filematcher(ctx.rev())
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        lines = displayer.hunk.pop(rev).split('\n')
        if not lines[-1]:
            del lines[-1]
        displayer.flush(rev)
        edges = edgefn(type, char, lines, seen, rev, parents)
        for type, char, lines, coldata in edges:
            graphmod.ascii(ui, state, type, char, lines, coldata)
    displayer.close()

def graphlog(ui, repo, *pats, **opts):
    # Parameters are identical to log command ones
    revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
    revdag = graphmod.dagwalker(repo, revs)

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
    displayer = show_changeset(ui, repo, opts, buffered=True)
    showparents = [ctx.node() for ctx in repo[None].parents()]
    displaygraph(ui, revdag, displayer, showparents,
                 graphmod.asciiedges, getrenamed, filematcher)

def checkunsupportedgraphflags(pats, opts):
    for op in ["newest_first"]:
        if op in opts and opts[op]:
            raise util.Abort(_("-G/--graph option is incompatible with --%s")
                             % op.replace("_", "-"))

def graphrevs(repo, nodes, opts):
    limit = loglimit(opts)
    nodes.reverse()
    if limit is not None:
        nodes = nodes[:limit]
    return graphmod.nodes(repo, nodes)

1734 def add(ui, repo, match, dryrun, listsubrepos, prefix, explicitonly):
1734 def add(ui, repo, match, dryrun, listsubrepos, prefix, explicitonly):
1735 join = lambda f: os.path.join(prefix, f)
1735 join = lambda f: os.path.join(prefix, f)
1736 bad = []
1736 bad = []
1737 oldbad = match.bad
1737 oldbad = match.bad
1738 match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
1738 match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
1739 names = []
1739 names = []
1740 wctx = repo[None]
1740 wctx = repo[None]
1741 cca = None
1741 cca = None
1742 abort, warn = scmutil.checkportabilityalert(ui)
1742 abort, warn = scmutil.checkportabilityalert(ui)
1743 if abort or warn:
1743 if abort or warn:
1744 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
1744 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
1745 for f in repo.walk(match):
1745 for f in repo.walk(match):
1746 exact = match.exact(f)
1746 exact = match.exact(f)
1747 if exact or not explicitonly and f not in repo.dirstate:
1747 if exact or not explicitonly and f not in repo.dirstate:
1748 if cca:
1748 if cca:
1749 cca(f)
1749 cca(f)
1750 names.append(f)
1750 names.append(f)
1751 if ui.verbose or not exact:
1751 if ui.verbose or not exact:
1752 ui.status(_('adding %s\n') % match.rel(join(f)))
1752 ui.status(_('adding %s\n') % match.rel(join(f)))
1753
1753
1754 for subpath in sorted(wctx.substate):
1754 for subpath in sorted(wctx.substate):
1755 sub = wctx.sub(subpath)
1755 sub = wctx.sub(subpath)
1756 try:
1756 try:
1757 submatch = matchmod.narrowmatcher(subpath, match)
1757 submatch = matchmod.narrowmatcher(subpath, match)
1758 if listsubrepos:
1758 if listsubrepos:
1759 bad.extend(sub.add(ui, submatch, dryrun, listsubrepos, prefix,
1759 bad.extend(sub.add(ui, submatch, dryrun, listsubrepos, prefix,
1760 False))
1760 False))
1761 else:
1761 else:
1762 bad.extend(sub.add(ui, submatch, dryrun, listsubrepos, prefix,
1762 bad.extend(sub.add(ui, submatch, dryrun, listsubrepos, prefix,
1763 True))
1763 True))
1764 except error.LookupError:
1764 except error.LookupError:
1765 ui.status(_("skipping missing subrepository: %s\n")
1765 ui.status(_("skipping missing subrepository: %s\n")
1766 % join(subpath))
1766 % join(subpath))
1767
1767
1768 if not dryrun:
1768 if not dryrun:
1769 rejected = wctx.add(names, prefix)
1769 rejected = wctx.add(names, prefix)
1770 bad.extend(f for f in rejected if f in match.files())
1770 bad.extend(f for f in rejected if f in match.files())
1771 return bad
1771 return bad
1772
1772
1773 def forget(ui, repo, match, prefix, explicitonly):
1773 def forget(ui, repo, match, prefix, explicitonly):
1774 join = lambda f: os.path.join(prefix, f)
1774 join = lambda f: os.path.join(prefix, f)
1775 bad = []
1775 bad = []
1776 oldbad = match.bad
1776 oldbad = match.bad
1777 match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
1777 match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
1778 wctx = repo[None]
1778 wctx = repo[None]
1779 forgot = []
1779 forgot = []
1780 s = repo.status(match=match, clean=True)
1780 s = repo.status(match=match, clean=True)
1781 forget = sorted(s[0] + s[1] + s[3] + s[6])
1781 forget = sorted(s[0] + s[1] + s[3] + s[6])
1782 if explicitonly:
1782 if explicitonly:
1783 forget = [f for f in forget if match.exact(f)]
1783 forget = [f for f in forget if match.exact(f)]
1784
1784
1785 for subpath in sorted(wctx.substate):
1785 for subpath in sorted(wctx.substate):
1786 sub = wctx.sub(subpath)
1786 sub = wctx.sub(subpath)
1787 try:
1787 try:
1788 submatch = matchmod.narrowmatcher(subpath, match)
1788 submatch = matchmod.narrowmatcher(subpath, match)
1789 subbad, subforgot = sub.forget(ui, submatch, prefix)
1789 subbad, subforgot = sub.forget(ui, submatch, prefix)
1790 bad.extend([subpath + '/' + f for f in subbad])
1790 bad.extend([subpath + '/' + f for f in subbad])
1791 forgot.extend([subpath + '/' + f for f in subforgot])
1791 forgot.extend([subpath + '/' + f for f in subforgot])
1792 except error.LookupError:
1792 except error.LookupError:
1793 ui.status(_("skipping missing subrepository: %s\n")
1793 ui.status(_("skipping missing subrepository: %s\n")
1794 % join(subpath))
1794 % join(subpath))
1795
1795
1796 if not explicitonly:
1796 if not explicitonly:
1797 for f in match.files():
1797 for f in match.files():
1798 if f not in repo.dirstate and not os.path.isdir(match.rel(join(f))):
1798 if f not in repo.dirstate and not os.path.isdir(match.rel(join(f))):
1799 if f not in forgot:
1799 if f not in forgot:
1800 if os.path.exists(match.rel(join(f))):
1800 if os.path.exists(match.rel(join(f))):
1801 ui.warn(_('not removing %s: '
1801 ui.warn(_('not removing %s: '
1802 'file is already untracked\n')
1802 'file is already untracked\n')
1803 % match.rel(join(f)))
1803 % match.rel(join(f)))
1804 bad.append(f)
1804 bad.append(f)
1805
1805
1806 for f in forget:
1806 for f in forget:
1807 if ui.verbose or not match.exact(f):
1807 if ui.verbose or not match.exact(f):
1808 ui.status(_('removing %s\n') % match.rel(join(f)))
1808 ui.status(_('removing %s\n') % match.rel(join(f)))
1809
1809
1810 rejected = wctx.forget(forget, prefix)
1810 rejected = wctx.forget(forget, prefix)
1811 bad.extend(f for f in rejected if f in match.files())
1811 bad.extend(f for f in rejected if f in match.files())
1812 forgot.extend(forget)
1812 forgot.extend(forget)
1813 return bad, forgot
1813 return bad, forgot

def duplicatecopies(repo, rev, fromrev):
    '''reproduce copies from fromrev to rev in the dirstate'''
    for dst, src in copies.pathcopies(repo[fromrev], repo[rev]).iteritems():
        # copies.pathcopies returns backward renames, so dst might not
        # actually be in the dirstate
        if repo.dirstate[dst] in "nma":
            repo.dirstate.copy(src, dst)

def commit(ui, repo, commitfunc, pats, opts):
    '''commit the specified files or all outstanding changes'''
    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)
    message = logmessage(ui, opts)

    # extract addremove carefully -- this function can be called from a command
    # that doesn't support addremove
    if opts.get('addremove'):
        scmutil.addremove(repo, pats, opts)

    return commitfunc(ui, repo, message,
                      scmutil.match(repo[None], pats, opts), opts)

def amend(ui, repo, commitfunc, old, extra, pats, opts):
    ui.note(_('amending changeset %s\n') % old)
    base = old.p1()

    wlock = lock = newid = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        tr = repo.transaction('amend')
        try:
            # See if we got a message from -m or -l, if not, open the editor
            # with the message of the changeset to amend
            message = logmessage(ui, opts)
            # ensure logfile does not conflict with later enforcement of the
            # message. potential logfile content has been processed by
            # `logmessage` anyway.
            opts.pop('logfile')
            # First, do a regular commit to record all changes in the working
            # directory (if there are any)
            ui.callhooks = False
            currentbookmark = repo._bookmarkcurrent
            try:
                repo._bookmarkcurrent = None
                opts['message'] = 'temporary amend commit for %s' % old
                node = commit(ui, repo, commitfunc, pats, opts)
            finally:
                repo._bookmarkcurrent = currentbookmark
                ui.callhooks = True
            ctx = repo[node]

            # Participating changesets:
            #
            # node/ctx o - new (intermediate) commit that contains changes
            #          |   from working dir to go into amending commit
            #          |   (or a workingctx if there were no changes)
            #          |
            # old      o - changeset to amend
            #          |
            # base     o - parent of amending changeset

            # Update extra dict from amended commit (e.g. to preserve graft
            # source)
            extra.update(old.extra())

            # Also update it from the intermediate commit or from the wctx
            extra.update(ctx.extra())

            if len(old.parents()) > 1:
                # ctx.files() isn't reliable for merges, so fall back to the
                # slower repo.status() method
                files = set([fn for st in repo.status(base, old)[:3]
                             for fn in st])
            else:
                files = set(old.files())

            # Second, we use either the commit we just did, or if there were no
            # changes the parent of the working directory as the version of the
            # files in the final amend commit
            if node:
                ui.note(_('copying changeset %s to %s\n') % (ctx, base))

                user = ctx.user()
                date = ctx.date()
                # Recompute copies (avoid recording a -> b -> a)
                copied = copies.pathcopies(base, ctx)

                # Prune files which were reverted by the updates: if old
                # introduced file X and our intermediate commit, node,
                # renamed that file, then those two files are the same and
                # we can discard X from our list of files. Likewise if X
                # was deleted, it's no longer relevant
                files.update(ctx.files())

                def samefile(f):
                    if f in ctx.manifest():
                        a = ctx.filectx(f)
                        if f in base.manifest():
                            b = base.filectx(f)
                            return (not a.cmp(b)
                                    and a.flags() == b.flags())
                        else:
                            return False
                    else:
                        return f not in base.manifest()
                files = [f for f in files if not samefile(f)]

                def filectxfn(repo, ctx_, path):
                    try:
                        fctx = ctx[path]
                        flags = fctx.flags()
                        mctx = context.memfilectx(fctx.path(), fctx.data(),
                                                  islink='l' in flags,
                                                  isexec='x' in flags,
                                                  copied=copied.get(path))
                        return mctx
                    except KeyError:
                        raise IOError
            else:
                ui.note(_('copying changeset %s to %s\n') % (old, base))

                # Use version of files as in the old cset
                def filectxfn(repo, ctx_, path):
                    try:
                        return old.filectx(path)
                    except KeyError:
                        raise IOError

                user = opts.get('user') or old.user()
                date = opts.get('date') or old.date()
            editmsg = False
            if not message:
                editmsg = True
                message = old.description()

            pureextra = extra.copy()
            extra['amend_source'] = old.hex()

            new = context.memctx(repo,
                                 parents=[base.node(), old.p2().node()],
                                 text=message,
                                 files=files,
                                 filectxfn=filectxfn,
                                 user=user,
                                 date=date,
                                 extra=extra)
            if editmsg:
                new._text = commitforceeditor(repo, new, [])
            repo.savecommitmessage(new.description())

            newdesc = changelog.stripdesc(new.description())
            if ((not node)
                and newdesc == old.description()
                and user == old.user()
                and date == old.date()
                and pureextra == old.extra()):
                # nothing changed. continuing here would create a new node
                # anyway because of the amend_source noise.
                #
                # This is not what we expect from amend.
                return old.node()

            ph = repo.ui.config('phases', 'new-commit', phases.draft)
            try:
                if opts.get('secret'):
                    commitphase = 'secret'
                else:
                    commitphase = old.phase()
                repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
                newid = repo.commitctx(new)
            finally:
                repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
            if newid != old.node():
                # Reroute the working copy parent to the new changeset
                repo.setparents(newid, nullid)

                # Move bookmarks from old parent to amend commit
                bms = repo.nodebookmarks(old.node())
                if bms:
                    marks = repo._bookmarks
                    for bm in bms:
                        marks[bm] = newid
                    marks.write()
            # commit the whole amend process
            if obsolete._enabled and newid != old.node():
                # mark the new changeset as successor of the rewritten one
                new = repo[newid]
                obs = [(old, (new,))]
                if node:
                    obs.append((ctx, ()))

                obsolete.createmarkers(repo, obs)
            tr.close()
        finally:
            tr.release()
        if (not obsolete._enabled) and newid != old.node():
            # Strip the intermediate commit (if there was one) and the amended
            # commit
            if node:
                ui.note(_('stripping intermediate changeset %s\n') % ctx)
            ui.note(_('stripping amended changeset %s\n') % old)
            repair.strip(ui, repo, old.node(), topic='amend-backup')
    finally:
        if newid is None:
            repo.dirstate.invalidate()
        lockmod.release(lock, wlock)
    return newid

def commiteditor(repo, ctx, subs):
    if ctx.description():
        return ctx.description()
    return commitforceeditor(repo, ctx, subs)

def commitforceeditor(repo, ctx, subs):
    edittext = []
    modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
    if ctx.description():
        edittext.append(ctx.description())
    edittext.append("")
    edittext.append("") # Empty line between message and comments.
    edittext.append(_("HG: Enter commit message."
                      " Lines beginning with 'HG:' are removed."))
    edittext.append(_("HG: Leave message empty to abort commit."))
    edittext.append("HG: --")
    edittext.append(_("HG: user: %s") % ctx.user())
    if ctx.p2():
        edittext.append(_("HG: branch merge"))
    if ctx.branch():
        edittext.append(_("HG: branch '%s'") % ctx.branch())
    if bookmarks.iscurrent(repo):
        edittext.append(_("HG: bookmark '%s'") % repo._bookmarkcurrent)
    edittext.extend([_("HG: subrepo %s") % s for s in subs])
    edittext.extend([_("HG: added %s") % f for f in added])
    edittext.extend([_("HG: changed %s") % f for f in modified])
    edittext.extend([_("HG: removed %s") % f for f in removed])
    if not added and not modified and not removed:
        edittext.append(_("HG: no files changed"))
    edittext.append("")
    # run editor in the repository root
    olddir = os.getcwd()
    os.chdir(repo.root)
    text = repo.ui.edit("\n".join(edittext), ctx.user(), ctx.extra())
    text = re.sub("(?m)^HG:.*(\n|$)", "", text)
    os.chdir(olddir)

    if not text.strip():
        raise util.Abort(_("empty commit message"))

    return text

def commitstatus(repo, node, branch, bheads=None, opts={}):
    ctx = repo[node]
    parents = ctx.parents()

    if (not opts.get('amend') and bheads and node not in bheads and not
        [x for x in parents if x.node() in bheads and x.branch() == branch]):
        repo.ui.status(_('created new head\n'))
        # The message is not printed for initial roots. For the other
        # changesets, it is printed in the following situations:
        #
        # Par column: for the 2 parents with ...
        #   N: null or no parent
        #   B: parent is on another named branch
        #   C: parent is a regular non head changeset
        #   H: parent was a branch head of the current branch
        # Msg column: whether we print "created new head" message
        # In the following, it is assumed that there already exists some
        # initial branch heads of the current branch, otherwise nothing is
        # printed anyway.
        #
        # Par Msg Comment
        # N N   y  additional topo root
        #
        # B N   y  additional branch root
        # C N   y  additional topo head
        # H N   n  usual case
        #
        # B B   y  weird additional branch root
        # C B   y  branch merge
        # H B   n  merge with named branch
        #
        # C C   y  additional head from merge
        # C H   n  merge with a head
        #
        # H H   n  head merge: head count decreases

    if not opts.get('close_branch'):
        for r in parents:
            if r.closesbranch() and r.branch() == branch:
                repo.ui.status(_('reopening closed branch head %d\n') % r)

    if repo.ui.debugflag:
        repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
    elif repo.ui.verbose:
        repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))

def revert(ui, repo, ctx, parents, *pats, **opts):
    parent, p2 = parents
    node = ctx.node()

    mf = ctx.manifest()
    if node == parent:
        pmf = mf
    else:
        pmf = None

    # need all matching names in dirstate and manifest of target rev,
    # so have to walk both. do not print errors if files exist in one
    # but not other.

    names = {}

    wlock = repo.wlock()
    try:
        # walk dirstate.

        m = scmutil.match(repo[None], pats, opts)
        m.bad = lambda x, y: False
        for abs in repo.walk(m):
            names[abs] = m.rel(abs), m.exact(abs)

        # walk target manifest.

        def badfn(path, msg):
            if path in names:
                return
            if path in ctx.substate:
                return
            path_ = path + '/'
            for f in names:
                if f.startswith(path_):
                    return
            ui.warn("%s: %s\n" % (m.rel(path), msg))

        m = scmutil.match(ctx, pats, opts)
        m.bad = badfn
        for abs in ctx.walk(m):
            if abs not in names:
                names[abs] = m.rel(abs), m.exact(abs)

        # get the list of subrepos that must be reverted
        targetsubs = sorted(s for s in ctx.substate if m(s))
        m = scmutil.matchfiles(repo, names)
        changes = repo.status(match=m)[:4]
        modified, added, removed, deleted = map(set, changes)

        # if f is a rename, also revert the source
        cwd = repo.getcwd()
        for f in added:
            src = repo.dirstate.copied(f)
            if src and src not in names and repo.dirstate[src] == 'r':
                removed.add(src)
                names[src] = (repo.pathto(src, cwd), True)

        def removeforget(abs):
            if repo.dirstate[abs] == 'a':
                return _('forgetting %s\n')
            return _('removing %s\n')

        revert = ([], _('reverting %s\n'))
        add = ([], _('adding %s\n'))
        remove = ([], removeforget)
        undelete = ([], _('undeleting %s\n'))

        disptable = (
            # dispatch table:
            #   file state
            #   action if in target manifest
            #   action if not in target manifest
            #   make backup if in target manifest
            #   make backup if not in target manifest
            (modified, revert, remove, True, True),
            (added, revert, remove, True, False),
            (removed, undelete, None, True, False),
            (deleted, revert, remove, False, False),
            )

        for abs, (rel, exact) in sorted(names.items()):
            mfentry = mf.get(abs)
            target = repo.wjoin(abs)
            def handle(xlist, dobackup):
                xlist[0].append(abs)
                if (dobackup and not opts.get('no_backup') and
                    os.path.lexists(target) and
                    abs in ctx and repo[None][abs].cmp(ctx[abs])):
                    bakname = "%s.orig" % rel
                    ui.note(_('saving current version of %s as %s\n') %
                            (rel, bakname))
                    if not opts.get('dry_run'):
                        util.rename(target, bakname)
                if ui.verbose or not exact:
                    msg = xlist[1]
                    if not isinstance(msg, basestring):
                        msg = msg(abs)
                    ui.status(msg % rel)
            for table, hitlist, misslist, backuphit, backupmiss in disptable:
                if abs not in table:
                    continue
                # file has changed in dirstate
                if mfentry:
                    handle(hitlist, backuphit)
                elif misslist is not None:
                    handle(misslist, backupmiss)
                break
            else:
                if abs not in repo.dirstate:
                    if mfentry:
                        handle(add, True)
                    elif exact:
                        ui.warn(_('file not managed: %s\n') % rel)
                    continue
                # file has not changed in dirstate
                if node == parent:
                    if exact:
                        ui.warn(_('no changes needed to %s\n') % rel)
                    continue
                if pmf is None:
                    # only need parent manifest in this unlikely case,
                    # so do not read by default
                    pmf = repo[parent].manifest()
                if abs in pmf and mfentry:
                    # if version of file is same in parent and target
                    # manifests, do nothing
                    if (pmf[abs] != mfentry or
                        pmf.flags(abs) != mf.flags(abs)):
                        handle(revert, False)
                else:
                    handle(remove, False)
        if not opts.get('dry_run'):
            _performrevert(repo, parents, ctx, revert, add, remove, undelete)

        if targetsubs:
            # Revert the subrepos on the revert list
            for sub in targetsubs:
                ctx.sub(sub).revert(ui, ctx.substate[sub], *pats, **opts)
    finally:
        wlock.release()

def _performrevert(repo, parents, ctx, revert, add, remove, undelete):
    """function that actually performs all the actions computed for revert

    This is an independent function to let extensions plug in and react to
    the imminent revert.

    Make sure you have the working directory locked when calling this function.
    """
    parent, p2 = parents
    node = ctx.node()
    def checkout(f):
        fc = ctx[f]
        repo.wwrite(f, fc.data(), fc.flags())

    audit_path = pathutil.pathauditor(repo.root)
    for f in remove[0]:
        if repo.dirstate[f] == 'a':
            repo.dirstate.drop(f)
            continue
        audit_path(f)
        try:
            util.unlinkpath(repo.wjoin(f))
        except OSError:
            pass
        repo.dirstate.remove(f)

    normal = None
    if node == parent:
        # We're reverting to our parent. If possible, we'd like status
        # to report the file as clean. We have to use normallookup for
        # merges to avoid losing information about merged/dirty files.
        if p2 != nullid:
            normal = repo.dirstate.normallookup
        else:
            normal = repo.dirstate.normal
    for f in revert[0]:
        checkout(f)
        if normal:
            normal(f)

    for f in add[0]:
        checkout(f)
        repo.dirstate.add(f)

    normal = repo.dirstate.normallookup
    if node == parent and p2 == nullid:
        normal = repo.dirstate.normal
    for f in undelete[0]:
        checkout(f)
        normal(f)

    copied = copies.pathcopies(repo[parent], ctx)

    for f in add[0] + undelete[0] + revert[0]:
        if f in copied:
            repo.dirstate.copy(copied[f], f)

2312 def command(table):
2312 def command(table):
2313 '''returns a function object bound to table which can be used as
2313 '''returns a function object bound to table which can be used as
2314 a decorator for populating table as a command table'''
2314 a decorator for populating table as a command table'''
2315
2315
2316 def cmd(name, options=(), synopsis=None):
2316 def cmd(name, options=(), synopsis=None):
2317 def decorator(func):
2317 def decorator(func):
2318 if synopsis:
2318 if synopsis:
2319 table[name] = func, list(options), synopsis
2319 table[name] = func, list(options), synopsis
2320 else:
2320 else:
2321 table[name] = func, list(options)
2321 table[name] = func, list(options)
2322 return func
2322 return func
2323 return decorator
2323 return decorator
2324
2324
2325 return cmd
2325 return cmd
2326
2326
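The `command(table)` factory above is a decorator-registry pattern: calling it binds a table, and the returned `cmd` decorator populates that table as commands are defined. A minimal standalone sketch (the `cmdtable` and `hello` names are illustrative, not Mercurial's actual command table):

```python
def command(table):
    """Return a decorator bound to `table` for registering commands."""
    def cmd(name, options=(), synopsis=None):
        def decorator(func):
            if synopsis:
                table[name] = func, list(options), synopsis
            else:
                table[name] = func, list(options)
            return func
        return decorator
    return cmd

# hypothetical usage -- cmdtable and hello are illustrative names
cmdtable = {}
cmd = command(cmdtable)

@cmd('hello', options=[('g', 'greeting', 'Hi', 'greeting to use')],
     synopsis='[-g TEXT]')
def hello(ui, repo, **opts):
    return '%s, world' % opts.get('greeting', 'Hi')
```

Because the decorator returns `func` unchanged, the decorated function stays directly callable while also being registered in the table.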
2327 # a list of (ui, repo) functions called by commands.summary
2327 # a list of (ui, repo) functions called by commands.summary
2328 summaryhooks = util.hooks()
2328 summaryhooks = util.hooks()
2329
2329
2330 # A list of state files kept by multistep operations like graft.
2330 # A list of state files kept by multistep operations like graft.
2331 # Since graft cannot be aborted, it is considered 'clearable' by update.
2331 # Since graft cannot be aborted, it is considered 'clearable' by update.
2332 # note: bisect is intentionally excluded
2332 # note: bisect is intentionally excluded
2333 # (state file, clearable, allowcommit, error, hint)
2333 # (state file, clearable, allowcommit, error, hint)
2334 unfinishedstates = [
2334 unfinishedstates = [
2335 ('graftstate', True, False, _('graft in progress'),
2335 ('graftstate', True, False, _('graft in progress'),
2336 _("use 'hg graft --continue' or 'hg update' to abort")),
2336 _("use 'hg graft --continue' or 'hg update' to abort")),
2337 ('updatestate', True, False, _('last update was interrupted'),
2337 ('updatestate', True, False, _('last update was interrupted'),
2338 _("use 'hg update' to get a consistent checkout"))
2338 _("use 'hg update' to get a consistent checkout"))
2339 ]
2339 ]
2340
2340
2341 def checkunfinished(repo, commit=False):
2341 def checkunfinished(repo, commit=False):
2342 '''Look for an unfinished multistep operation, like graft, and abort
2342 '''Look for an unfinished multistep operation, like graft, and abort
2343 if found. It's probably good to check this right before
2343 if found. It's probably good to check this right before
2344 bailifchanged().
2344 bailifchanged().
2345 '''
2345 '''
2346 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2346 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2347 if commit and allowcommit:
2347 if commit and allowcommit:
2348 continue
2348 continue
2349 if repo.vfs.exists(f):
2349 if repo.vfs.exists(f):
2350 raise util.Abort(msg, hint=hint)
2350 raise util.Abort(msg, hint=hint)
2351
2351
2352 def clearunfinished(repo):
2352 def clearunfinished(repo):
2353 '''Check for unfinished operations (as above), and clear the ones
2353 '''Check for unfinished operations (as above), and clear the ones
2354 that are clearable.
2354 that are clearable.
2355 '''
2355 '''
2356 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2356 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2357 if not clearable and repo.vfs.exists(f):
2357 if not clearable and repo.vfs.exists(f):
2358 raise util.Abort(msg, hint=hint)
2358 raise util.Abort(msg, hint=hint)
2359 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2359 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2360 if clearable and repo.vfs.exists(f):
2360 if clearable and repo.vfs.exists(f):
2361 util.unlink(repo.join(f))
2361 util.unlink(repo.join(f))
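The two functions above share one scan over `unfinishedstates`: `checkunfinished` aborts on any state file present (unless commits are allowed for it), while `clearunfinished` first aborts on non-clearable state files and then deletes the clearable ones. A standalone sketch using an in-memory set in place of the vfs (the `Abort` class and set-based storage are stand-ins, not Mercurial's API):

```python
class Abort(Exception):
    """Stand-in for util.Abort."""

# (state file, clearable, allowcommit, error message)
unfinishedstates = [
    ('graftstate', True, False, 'graft in progress'),
    ('updatestate', True, False, 'last update was interrupted'),
]

def checkunfinished(statefiles, commit=False):
    """Abort if an unfinished multistep operation left a state file."""
    for f, clearable, allowcommit, msg in unfinishedstates:
        if commit and allowcommit:
            continue
        if f in statefiles:
            raise Abort(msg)

def clearunfinished(statefiles):
    """Abort on non-clearable state files, then delete clearable ones."""
    for f, clearable, allowcommit, msg in unfinishedstates:
        if not clearable and f in statefiles:
            raise Abort(msg)
    for f, clearable, allowcommit, msg in unfinishedstates:
        if clearable and f in statefiles:
            statefiles.discard(f)
```

The two-pass structure of `clearunfinished` matters: nothing is deleted if any non-clearable operation is in progress.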
@@ -1,644 +1,644 b''
1 # exchange.py - utily to exchange data between repo.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from i18n import _
8 from i18n import _
9 from node import hex, nullid
9 from node import hex, nullid
10 import errno
10 import errno
11 import util, scmutil, changegroup, base85
11 import util, scmutil, changegroup, base85
12 import discovery, phases, obsolete, bookmarks, bundle2
12 import discovery, phases, obsolete, bookmarks, bundle2
13
13
14
14
15 class pushoperation(object):
15 class pushoperation(object):
16 """A object that represent a single push operation
16 """A object that represent a single push operation
17
17
18 Its purpose is to carry push-related state and very common operations.
18 Its purpose is to carry push-related state and very common operations.
19
19
20 A new should be created at the begining of each push and discarded
20 A new should be created at the beginning of each push and discarded
21 afterward.
21 afterward.
22 """
22 """
23
23
24 def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
24 def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
25 # repo we push from
25 # repo we push from
26 self.repo = repo
26 self.repo = repo
27 self.ui = repo.ui
27 self.ui = repo.ui
28 # repo we push to
28 # repo we push to
29 self.remote = remote
29 self.remote = remote
30 # force option provided
30 # force option provided
31 self.force = force
31 self.force = force
32 # revs to be pushed (None is "all")
32 # revs to be pushed (None is "all")
33 self.revs = revs
33 self.revs = revs
34 # allow push of new branch
34 # allow push of new branch
35 self.newbranch = newbranch
35 self.newbranch = newbranch
36 # did a local lock get acquired?
36 # did a local lock get acquired?
37 self.locallocked = None
37 self.locallocked = None
38 # Integer version of the push result
38 # Integer version of the push result
39 # - None means nothing to push
39 # - None means nothing to push
40 # - 0 means HTTP error
40 # - 0 means HTTP error
41 # - 1 means we pushed and remote head count is unchanged *or*
41 # - 1 means we pushed and remote head count is unchanged *or*
42 # we have outgoing changesets but refused to push
42 # we have outgoing changesets but refused to push
43 # - other values as described by addchangegroup()
43 # - other values as described by addchangegroup()
44 self.ret = None
44 self.ret = None
45 # discover.outgoing object (contains common and outgoin data)
45 # discover.outgoing object (contains common and outgoing data)
46 self.outgoing = None
46 self.outgoing = None
47 # all remote heads before the push
47 # all remote heads before the push
48 self.remoteheads = None
48 self.remoteheads = None
49 # testable as a boolean indicating if any nodes are missing locally.
49 # testable as a boolean indicating if any nodes are missing locally.
50 self.incoming = None
50 self.incoming = None
51 # set of all heads common after changeset bundle push
51 # set of all heads common after changeset bundle push
52 self.commonheads = None
52 self.commonheads = None
53
53
54 def push(repo, remote, force=False, revs=None, newbranch=False):
54 def push(repo, remote, force=False, revs=None, newbranch=False):
55 '''Push outgoing changesets (limited by revs) from a local
55 '''Push outgoing changesets (limited by revs) from a local
56 repository to remote. Return an integer:
56 repository to remote. Return an integer:
57 - None means nothing to push
57 - None means nothing to push
58 - 0 means HTTP error
58 - 0 means HTTP error
59 - 1 means we pushed and remote head count is unchanged *or*
59 - 1 means we pushed and remote head count is unchanged *or*
60 we have outgoing changesets but refused to push
60 we have outgoing changesets but refused to push
61 - other values as described by addchangegroup()
61 - other values as described by addchangegroup()
62 '''
62 '''
63 pushop = pushoperation(repo, remote, force, revs, newbranch)
63 pushop = pushoperation(repo, remote, force, revs, newbranch)
64 if pushop.remote.local():
64 if pushop.remote.local():
65 missing = (set(pushop.repo.requirements)
65 missing = (set(pushop.repo.requirements)
66 - pushop.remote.local().supported)
66 - pushop.remote.local().supported)
67 if missing:
67 if missing:
68 msg = _("required features are not"
68 msg = _("required features are not"
69 " supported in the destination:"
69 " supported in the destination:"
70 " %s") % (', '.join(sorted(missing)))
70 " %s") % (', '.join(sorted(missing)))
71 raise util.Abort(msg)
71 raise util.Abort(msg)
72
72
73 # there are two ways to push to remote repo:
73 # there are two ways to push to remote repo:
74 #
74 #
75 # addchangegroup assumes local user can lock remote
75 # addchangegroup assumes local user can lock remote
76 # repo (local filesystem, old ssh servers).
76 # repo (local filesystem, old ssh servers).
77 #
77 #
78 # unbundle assumes local user cannot lock remote repo (new ssh
78 # unbundle assumes local user cannot lock remote repo (new ssh
79 # servers, http servers).
79 # servers, http servers).
80
80
81 if not pushop.remote.canpush():
81 if not pushop.remote.canpush():
82 raise util.Abort(_("destination does not support push"))
82 raise util.Abort(_("destination does not support push"))
83 # get local lock as we might write phase data
83 # get local lock as we might write phase data
84 locallock = None
84 locallock = None
85 try:
85 try:
86 locallock = pushop.repo.lock()
86 locallock = pushop.repo.lock()
87 pushop.locallocked = True
87 pushop.locallocked = True
88 except IOError, err:
88 except IOError, err:
89 pushop.locallocked = False
89 pushop.locallocked = False
90 if err.errno != errno.EACCES:
90 if err.errno != errno.EACCES:
91 raise
91 raise
92 # source repo cannot be locked.
92 # source repo cannot be locked.
93 # We do not abort the push, but just disable the local phase
93 # We do not abort the push, but just disable the local phase
94 # synchronisation.
94 # synchronisation.
95 msg = 'cannot lock source repository: %s\n' % err
95 msg = 'cannot lock source repository: %s\n' % err
96 pushop.ui.debug(msg)
96 pushop.ui.debug(msg)
97 try:
97 try:
98 pushop.repo.checkpush(pushop)
98 pushop.repo.checkpush(pushop)
99 lock = None
99 lock = None
100 unbundle = pushop.remote.capable('unbundle')
100 unbundle = pushop.remote.capable('unbundle')
101 if not unbundle:
101 if not unbundle:
102 lock = pushop.remote.lock()
102 lock = pushop.remote.lock()
103 try:
103 try:
104 _pushdiscovery(pushop)
104 _pushdiscovery(pushop)
105 if _pushcheckoutgoing(pushop):
105 if _pushcheckoutgoing(pushop):
106 _pushchangeset(pushop)
106 _pushchangeset(pushop)
107 _pushcomputecommonheads(pushop)
107 _pushcomputecommonheads(pushop)
108 _pushsyncphase(pushop)
108 _pushsyncphase(pushop)
109 _pushobsolete(pushop)
109 _pushobsolete(pushop)
110 finally:
110 finally:
111 if lock is not None:
111 if lock is not None:
112 lock.release()
112 lock.release()
113 finally:
113 finally:
114 if locallock is not None:
114 if locallock is not None:
115 locallock.release()
115 locallock.release()
116
116
117 _pushbookmark(pushop)
117 _pushbookmark(pushop)
118 return pushop.ret
118 return pushop.ret
119
119
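The integer result of `push()` encodes several distinct outcomes, as listed in its docstring. A small hypothetical helper makes the mapping explicit (`describe_push_result` is illustrative, not part of Mercurial):

```python
def describe_push_result(ret):
    # Interpret push()'s return value as documented in its docstring.
    if ret is None:
        return 'nothing to push'
    if ret == 0:
        return 'HTTP error'
    if ret == 1:
        return 'pushed, remote head count unchanged (or push refused)'
    return 'remote head count changed (addchangegroup returned %d)' % ret
```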
120 def _pushdiscovery(pushop):
120 def _pushdiscovery(pushop):
121 # discovery
121 # discovery
122 unfi = pushop.repo.unfiltered()
122 unfi = pushop.repo.unfiltered()
123 fci = discovery.findcommonincoming
123 fci = discovery.findcommonincoming
124 commoninc = fci(unfi, pushop.remote, force=pushop.force)
124 commoninc = fci(unfi, pushop.remote, force=pushop.force)
125 common, inc, remoteheads = commoninc
125 common, inc, remoteheads = commoninc
126 fco = discovery.findcommonoutgoing
126 fco = discovery.findcommonoutgoing
127 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
127 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
128 commoninc=commoninc, force=pushop.force)
128 commoninc=commoninc, force=pushop.force)
129 pushop.outgoing = outgoing
129 pushop.outgoing = outgoing
130 pushop.remoteheads = remoteheads
130 pushop.remoteheads = remoteheads
131 pushop.incoming = inc
131 pushop.incoming = inc
132
132
133 def _pushcheckoutgoing(pushop):
133 def _pushcheckoutgoing(pushop):
134 outgoing = pushop.outgoing
134 outgoing = pushop.outgoing
135 unfi = pushop.repo.unfiltered()
135 unfi = pushop.repo.unfiltered()
136 if not outgoing.missing:
136 if not outgoing.missing:
137 # nothing to push
137 # nothing to push
138 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
138 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
139 return False
139 return False
140 # something to push
140 # something to push
141 if not pushop.force:
141 if not pushop.force:
142 # if repo.obsstore == False --> no obsolete
142 # if repo.obsstore == False --> no obsolete
143 # then, save the iteration
143 # then, save the iteration
144 if unfi.obsstore:
144 if unfi.obsstore:
145 # this message are here for 80 char limit reason
145 # this message are here for 80 char limit reason
146 mso = _("push includes obsolete changeset: %s!")
146 mso = _("push includes obsolete changeset: %s!")
147 mst = "push includes %s changeset: %s!"
147 mst = "push includes %s changeset: %s!"
148 # plain versions for i18n tool to detect them
148 # plain versions for i18n tool to detect them
149 _("push includes unstable changeset: %s!")
149 _("push includes unstable changeset: %s!")
150 _("push includes bumped changeset: %s!")
150 _("push includes bumped changeset: %s!")
151 _("push includes divergent changeset: %s!")
151 _("push includes divergent changeset: %s!")
152 # If there is at least one obsolete or unstable
152 # If there is at least one obsolete or unstable
153 # changeset in missing, then at least one of the
153 # changeset in missing, then at least one of the
154 # missing heads will be obsolete or unstable. So
154 # missing heads will be obsolete or unstable. So
155 # checking heads only is ok
155 # checking heads only is ok
156 for node in outgoing.missingheads:
156 for node in outgoing.missingheads:
157 ctx = unfi[node]
157 ctx = unfi[node]
158 if ctx.obsolete():
158 if ctx.obsolete():
159 raise util.Abort(mso % ctx)
159 raise util.Abort(mso % ctx)
160 elif ctx.troubled():
160 elif ctx.troubled():
161 raise util.Abort(_(mst)
161 raise util.Abort(_(mst)
162 % (ctx.troubles()[0],
162 % (ctx.troubles()[0],
163 ctx))
163 ctx))
164 newbm = pushop.ui.configlist('bookmarks', 'pushing')
164 newbm = pushop.ui.configlist('bookmarks', 'pushing')
165 discovery.checkheads(unfi, pushop.remote, outgoing,
165 discovery.checkheads(unfi, pushop.remote, outgoing,
166 pushop.remoteheads,
166 pushop.remoteheads,
167 pushop.newbranch,
167 pushop.newbranch,
168 bool(pushop.incoming),
168 bool(pushop.incoming),
169 newbm)
169 newbm)
170 return True
170 return True
171
171
172 def _pushchangeset(pushop):
172 def _pushchangeset(pushop):
173 """Make the actual push of changeset bundle to remote repo"""
173 """Make the actual push of changeset bundle to remote repo"""
174 outgoing = pushop.outgoing
174 outgoing = pushop.outgoing
175 unbundle = pushop.remote.capable('unbundle')
175 unbundle = pushop.remote.capable('unbundle')
176 # TODO: get bundlecaps from remote
176 # TODO: get bundlecaps from remote
177 bundlecaps = None
177 bundlecaps = None
178 # create a changegroup from local
178 # create a changegroup from local
179 if pushop.revs is None and not (outgoing.excluded
179 if pushop.revs is None and not (outgoing.excluded
180 or pushop.repo.changelog.filteredrevs):
180 or pushop.repo.changelog.filteredrevs):
181 # push everything,
181 # push everything,
182 # use the fast path, no race possible on push
182 # use the fast path, no race possible on push
183 bundler = changegroup.bundle10(pushop.repo, bundlecaps)
183 bundler = changegroup.bundle10(pushop.repo, bundlecaps)
184 cg = changegroup.getsubset(pushop.repo,
184 cg = changegroup.getsubset(pushop.repo,
185 outgoing,
185 outgoing,
186 bundler,
186 bundler,
187 'push',
187 'push',
188 fastpath=True)
188 fastpath=True)
189 else:
189 else:
190 cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
190 cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
191 bundlecaps)
191 bundlecaps)
192
192
193 # apply changegroup to remote
193 # apply changegroup to remote
194 if unbundle:
194 if unbundle:
195 # local repo finds heads on server, finds out what
195 # local repo finds heads on server, finds out what
196 # revs it must push. once revs transferred, if server
196 # revs it must push. once revs transferred, if server
197 # finds it has different heads (someone else won
197 # finds it has different heads (someone else won
198 # commit/push race), server aborts.
198 # commit/push race), server aborts.
199 if pushop.force:
199 if pushop.force:
200 remoteheads = ['force']
200 remoteheads = ['force']
201 else:
201 else:
202 remoteheads = pushop.remoteheads
202 remoteheads = pushop.remoteheads
203 # ssh: return remote's addchangegroup()
203 # ssh: return remote's addchangegroup()
204 # http: return remote's addchangegroup() or 0 for error
204 # http: return remote's addchangegroup() or 0 for error
205 pushop.ret = pushop.remote.unbundle(cg, remoteheads,
205 pushop.ret = pushop.remote.unbundle(cg, remoteheads,
206 'push')
206 'push')
207 else:
207 else:
208 # we return an integer indicating remote head count
208 # we return an integer indicating remote head count
209 # change
209 # change
210 pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())
210 pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())
211
211
212 def _pushcomputecommonheads(pushop):
212 def _pushcomputecommonheads(pushop):
213 unfi = pushop.repo.unfiltered()
213 unfi = pushop.repo.unfiltered()
214 if pushop.ret:
214 if pushop.ret:
215 # push succeeded, synchronize target of the push
215 # push succeeded, synchronize target of the push
216 cheads = pushop.outgoing.missingheads
216 cheads = pushop.outgoing.missingheads
217 elif pushop.revs is None:
217 elif pushop.revs is None:
218 # All-out push failed. synchronize all common
218 # All-out push failed. synchronize all common
219 cheads = pushop.outgoing.commonheads
219 cheads = pushop.outgoing.commonheads
220 else:
220 else:
221 # I want cheads = heads(::missingheads and ::commonheads)
221 # I want cheads = heads(::missingheads and ::commonheads)
222 # (missingheads is revs with secret changeset filtered out)
222 # (missingheads is revs with secret changeset filtered out)
223 #
223 #
224 # This can be expressed as:
224 # This can be expressed as:
225 # cheads = ( (missingheads and ::commonheads)
225 # cheads = ( (missingheads and ::commonheads)
226 # + (commonheads and ::missingheads))"
226 # + (commonheads and ::missingheads))"
227 # )
227 # )
228 #
228 #
229 # while trying to push we already computed the following:
229 # while trying to push we already computed the following:
230 # common = (::commonheads)
230 # common = (::commonheads)
231 # missing = ((commonheads::missingheads) - commonheads)
231 # missing = ((commonheads::missingheads) - commonheads)
232 #
232 #
233 # We can pick:
233 # We can pick:
234 # * missingheads part of common (::commonheads)
234 # * missingheads part of common (::commonheads)
235 common = set(pushop.outgoing.common)
235 common = set(pushop.outgoing.common)
236 nm = pushop.repo.changelog.nodemap
236 nm = pushop.repo.changelog.nodemap
237 cheads = [node for node in pushop.revs if nm[node] in common]
237 cheads = [node for node in pushop.revs if nm[node] in common]
238 # and
238 # and
239 # * commonheads parents on missing
239 # * commonheads parents on missing
240 revset = unfi.set('%ln and parents(roots(%ln))',
240 revset = unfi.set('%ln and parents(roots(%ln))',
241 pushop.outgoing.commonheads,
241 pushop.outgoing.commonheads,
242 pushop.outgoing.missing)
242 pushop.outgoing.missing)
243 cheads.extend(c.node() for c in revset)
243 cheads.extend(c.node() for c in revset)
244 pushop.commonheads = cheads
244 pushop.commonheads = cheads
245
245
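The `cheads` reasoning above is all in terms of ancestor sets (`::x`). On a toy DAG the common/missing split it relies on can be sketched as follows (the `parents` dict and `ancestors` helper are illustrative stand-ins, not Mercurial's revlog API):

```python
# Toy sketch of the ancestor-set reasoning used by _pushcomputecommonheads.

def ancestors(parents, heads):
    """All nodes reachable from `heads` through parent links, inclusive."""
    seen = set()
    stack = list(heads)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents.get(n, []))
    return seen

# linear chain a <- b <- c, plus a side branch a <- x
parents = {'b': ['a'], 'c': ['b'], 'x': ['a']}
common = ancestors(parents, ['b'])             # common = (::commonheads)
missing = ancestors(parents, ['c']) - common   # missing = (::missingheads) - common
```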
246 def _pushsyncphase(pushop):
246 def _pushsyncphase(pushop):
247 """synchronise phase information locally and remotly"""
247 """synchronise phase information locally and remotely"""
248 unfi = pushop.repo.unfiltered()
248 unfi = pushop.repo.unfiltered()
249 cheads = pushop.commonheads
249 cheads = pushop.commonheads
250 if pushop.ret:
250 if pushop.ret:
251 # push succeeded, synchronize target of the push
251 # push succeeded, synchronize target of the push
252 cheads = pushop.outgoing.missingheads
252 cheads = pushop.outgoing.missingheads
253 elif pushop.revs is None:
253 elif pushop.revs is None:
254 # All-out push failed. synchronize all common
254 # All-out push failed. synchronize all common
255 cheads = pushop.outgoing.commonheads
255 cheads = pushop.outgoing.commonheads
256 else:
256 else:
257 # I want cheads = heads(::missingheads and ::commonheads)
257 # I want cheads = heads(::missingheads and ::commonheads)
258 # (missingheads is revs with secret changeset filtered out)
258 # (missingheads is revs with secret changeset filtered out)
259 #
259 #
260 # This can be expressed as:
260 # This can be expressed as:
261 # cheads = ( (missingheads and ::commonheads)
261 # cheads = ( (missingheads and ::commonheads)
262 # + (commonheads and ::missingheads))"
262 # + (commonheads and ::missingheads))"
263 # )
263 # )
264 #
264 #
265 # while trying to push we already computed the following:
265 # while trying to push we already computed the following:
266 # common = (::commonheads)
266 # common = (::commonheads)
267 # missing = ((commonheads::missingheads) - commonheads)
267 # missing = ((commonheads::missingheads) - commonheads)
268 #
268 #
269 # We can pick:
269 # We can pick:
270 # * missingheads part of common (::commonheads)
270 # * missingheads part of common (::commonheads)
271 common = set(pushop.outgoing.common)
271 common = set(pushop.outgoing.common)
272 nm = pushop.repo.changelog.nodemap
272 nm = pushop.repo.changelog.nodemap
273 cheads = [node for node in pushop.revs if nm[node] in common]
273 cheads = [node for node in pushop.revs if nm[node] in common]
274 # and
274 # and
275 # * commonheads parents on missing
275 # * commonheads parents on missing
276 revset = unfi.set('%ln and parents(roots(%ln))',
276 revset = unfi.set('%ln and parents(roots(%ln))',
277 pushop.outgoing.commonheads,
277 pushop.outgoing.commonheads,
278 pushop.outgoing.missing)
278 pushop.outgoing.missing)
279 cheads.extend(c.node() for c in revset)
279 cheads.extend(c.node() for c in revset)
280 pushop.commonheads = cheads
280 pushop.commonheads = cheads
281 # even when we don't push, exchanging phase data is useful
281 # even when we don't push, exchanging phase data is useful
282 remotephases = pushop.remote.listkeys('phases')
282 remotephases = pushop.remote.listkeys('phases')
283 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
283 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
284 and remotephases # server supports phases
284 and remotephases # server supports phases
285 and pushop.ret is None # nothing was pushed
285 and pushop.ret is None # nothing was pushed
286 and remotephases.get('publishing', False)):
286 and remotephases.get('publishing', False)):
287 # When:
287 # When:
288 # - this is a subrepo push
288 # - this is a subrepo push
289 # - and remote support phase
289 # - and remote support phase
290 # - and no changeset was pushed
290 # - and no changeset was pushed
291 # - and remote is publishing
291 # - and remote is publishing
292 # We may be in issue 3871 case!
292 # We may be in issue 3871 case!
293 # We drop the possible phase synchronisation done by
293 # We drop the possible phase synchronisation done by
294 # courtesy to publish changesets possibly locally draft
294 # courtesy to publish changesets possibly locally draft
295 # on the remote.
295 # on the remote.
296 remotephases = {'publishing': 'True'}
296 remotephases = {'publishing': 'True'}
297 if not remotephases: # old server or public only reply from non-publishing
297 if not remotephases: # old server or public only reply from non-publishing
298 _localphasemove(pushop, cheads)
298 _localphasemove(pushop, cheads)
299 # don't push any phase data as there is nothing to push
299 # don't push any phase data as there is nothing to push
300 else:
300 else:
301 ana = phases.analyzeremotephases(pushop.repo, cheads,
301 ana = phases.analyzeremotephases(pushop.repo, cheads,
302 remotephases)
302 remotephases)
303 pheads, droots = ana
303 pheads, droots = ana
304 ### Apply remote phase on local
304 ### Apply remote phase on local
305 if remotephases.get('publishing', False):
305 if remotephases.get('publishing', False):
306 _localphasemove(pushop, cheads)
306 _localphasemove(pushop, cheads)
307 else: # publish = False
307 else: # publish = False
308 _localphasemove(pushop, pheads)
308 _localphasemove(pushop, pheads)
309 _localphasemove(pushop, cheads, phases.draft)
309 _localphasemove(pushop, cheads, phases.draft)
310 ### Apply local phase on remote
310 ### Apply local phase on remote
311
311
312 # Get the list of all revs draft on remote by public here.
312 # Get the list of all revs draft on remote by public here.
313 # XXX Beware that the revset breaks if droots is not strictly
313 # XXX Beware that the revset breaks if droots is not strictly
314 # XXX roots; we may want to ensure it is, but that is costly
314 # XXX roots; we may want to ensure it is, but that is costly
315 outdated = unfi.set('heads((%ln::%ln) and public())',
315 outdated = unfi.set('heads((%ln::%ln) and public())',
316 droots, cheads)
316 droots, cheads)
317 for newremotehead in outdated:
317 for newremotehead in outdated:
318 r = pushop.remote.pushkey('phases',
318 r = pushop.remote.pushkey('phases',
319 newremotehead.hex(),
319 newremotehead.hex(),
320 str(phases.draft),
320 str(phases.draft),
321 str(phases.public))
321 str(phases.public))
322 if not r:
322 if not r:
323 pushop.ui.warn(_('updating %s to public failed!\n')
323 pushop.ui.warn(_('updating %s to public failed!\n')
324 % newremotehead)
324 % newremotehead)
325
325
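The loop above advances each outdated remote head from draft to public via `pushkey`, warning when the remote refuses. A stand-in sketch of that control flow (`FakeRemote` and `advance_to_public` are hypothetical, not Mercurial's peer API; `'1'`/`'0'` stand for the draft and public phase numbers):

```python
class FakeRemote(object):
    """Stand-in remote that accepts phase moves for known heads only."""
    def __init__(self, known):
        self.known = set(known)
        self.phases = {}
    def pushkey(self, namespace, key, old, new):
        if namespace == 'phases' and key in self.known:
            self.phases[key] = new
            return True
        return False

def advance_to_public(remote, outdated, ui_warn):
    # mirror of the pushkey loop above: draft ('1') -> public ('0')
    for head in outdated:
        if not remote.pushkey('phases', head, '1', '0'):
            ui_warn('updating %s to public failed!\n' % head)

remote = FakeRemote(['abc123'])
failures = []
advance_to_public(remote, ['abc123', 'def456'], failures.append)
```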
326 def _localphasemove(pushop, nodes, phase=phases.public):
326 def _localphasemove(pushop, nodes, phase=phases.public):
327 """move <nodes> to <phase> in the local source repo"""
327 """move <nodes> to <phase> in the local source repo"""
328 if pushop.locallocked:
328 if pushop.locallocked:
329 phases.advanceboundary(pushop.repo, phase, nodes)
329 phases.advanceboundary(pushop.repo, phase, nodes)
330 else:
330 else:
331 # repo is not locked, do not change any phases!
331 # repo is not locked, do not change any phases!
332 # Informs the user that phases should have been moved when
332 # Informs the user that phases should have been moved when
333 # applicable.
333 # applicable.
334 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
334 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
335 phasestr = phases.phasenames[phase]
335 phasestr = phases.phasenames[phase]
336 if actualmoves:
336 if actualmoves:
337 pushop.ui.status(_('cannot lock source repo, skipping '
337 pushop.ui.status(_('cannot lock source repo, skipping '
338 'local %s phase update\n') % phasestr)
338 'local %s phase update\n') % phasestr)
339
339
340 def _pushobsolete(pushop):
340 def _pushobsolete(pushop):
341 """utility function to push obsolete markers to a remote"""
341 """utility function to push obsolete markers to a remote"""
342 pushop.ui.debug('try to push obsolete markers to remote\n')
342 pushop.ui.debug('try to push obsolete markers to remote\n')
343 repo = pushop.repo
343 repo = pushop.repo
344 remote = pushop.remote
344 remote = pushop.remote
345 if (obsolete._enabled and repo.obsstore and
345 if (obsolete._enabled and repo.obsstore and
346 'obsolete' in remote.listkeys('namespaces')):
346 'obsolete' in remote.listkeys('namespaces')):
347 rslts = []
347 rslts = []
348 remotedata = repo.listkeys('obsolete')
348 remotedata = repo.listkeys('obsolete')
349 for key in sorted(remotedata, reverse=True):
349 for key in sorted(remotedata, reverse=True):
350 # reverse sort to ensure we end with dump0
350 # reverse sort to ensure we end with dump0
351 data = remotedata[key]
351 data = remotedata[key]
352 rslts.append(remote.pushkey('obsolete', key, '', data))
352 rslts.append(remote.pushkey('obsolete', key, '', data))
353 if [r for r in rslts if not r]:
353 if [r for r in rslts if not r]:
354 msg = _('failed to push some obsolete markers!\n')
354 msg = _('failed to push some obsolete markers!\n')
355 repo.ui.warn(msg)
355 repo.ui.warn(msg)
356
356
357 def _pushbookmark(pushop):
357 def _pushbookmark(pushop):
358 """Update bookmark position on remote"""
358 """Update bookmark position on remote"""
359 ui = pushop.ui
359 ui = pushop.ui
360 repo = pushop.repo.unfiltered()
360 repo = pushop.repo.unfiltered()
361 remote = pushop.remote
361 remote = pushop.remote
362 ui.debug("checking for updated bookmarks\n")
362 ui.debug("checking for updated bookmarks\n")
363 revnums = map(repo.changelog.rev, pushop.revs or [])
363 revnums = map(repo.changelog.rev, pushop.revs or [])
364 ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
364 ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
365 (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
365 (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
366 ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
366 ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
367 srchex=hex)
367 srchex=hex)
368
368
369 for b, scid, dcid in advsrc:
369 for b, scid, dcid in advsrc:
370 if ancestors and repo[scid].rev() not in ancestors:
370 if ancestors and repo[scid].rev() not in ancestors:
371 continue
371 continue
372 if remote.pushkey('bookmarks', b, dcid, scid):
372 if remote.pushkey('bookmarks', b, dcid, scid):
373 ui.status(_("updating bookmark %s\n") % b)
373 ui.status(_("updating bookmark %s\n") % b)
374 else:
374 else:
375 ui.warn(_('updating bookmark %s failed!\n') % b)
375 ui.warn(_('updating bookmark %s failed!\n') % b)
376
376
377 class pulloperation(object):
377 class pulloperation(object):
378 """A object that represent a single pull operation
378 """A object that represent a single pull operation
379
379
380 Its purpose is to carry pull-related state and very common operations.
380 Its purpose is to carry pull-related state and very common operations.
381
381
382 A new should be created at the begining of each pull and discarded
382 A new should be created at the beginning of each pull and discarded
383 afterward.
383 afterward.
384 """
384 """
385
385
386 def __init__(self, repo, remote, heads=None, force=False):
386 def __init__(self, repo, remote, heads=None, force=False):
387 # repo we pull into
387 # repo we pull into
388 self.repo = repo
388 self.repo = repo
389 # repo we pull from
389 # repo we pull from
390 self.remote = remote
390 self.remote = remote
391 # revision we try to pull (None is "all")
391 # revision we try to pull (None is "all")
392 self.heads = heads
392 self.heads = heads
393 # do we force pull?
393 # do we force pull?
394 self.force = force
394 self.force = force
395 # the name of the pull transaction
395 # the name of the pull transaction
396 self._trname = 'pull\n' + util.hidepassword(remote.url())
396 self._trname = 'pull\n' + util.hidepassword(remote.url())
397 # hold the transaction once created
397 # hold the transaction once created
398 self._tr = None
398 self._tr = None
399 # set of common changeset between local and remote before pull
399 # set of common changeset between local and remote before pull
400 self.common = None
400 self.common = None
401 # set of pulled head
401 # set of pulled head
402 self.rheads = None
402 self.rheads = None
403 # list of missing changeset to fetch remotly
403 # list of missing changeset to fetch remotely
404 self.fetch = None
404 self.fetch = None
405 # result of changegroup pulling (used as returng code by pull)
405 # result of changegroup pulling (used as return code by pull)
406 self.cgresult = None
406 self.cgresult = None
407 # list of steps remaining to do (related to future bundle2 usage)
407 # list of steps remaining to do (related to future bundle2 usage)
408 self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])
408 self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])
409
409
410 @util.propertycache
410 @util.propertycache
411 def pulledsubset(self):
411 def pulledsubset(self):
412 """heads of the set of changesets targeted by the pull"""
412 """heads of the set of changesets targeted by the pull"""
413 # compute target subset
413 # compute target subset
414 if self.heads is None:
414 if self.heads is None:
415 # We pulled everything possible
415 # We pulled everything possible
416 # sync on everything common
416 # sync on everything common
417 c = set(self.common)
417 c = set(self.common)
418 ret = list(self.common)
418 ret = list(self.common)
419 for n in self.rheads:
419 for n in self.rheads:
420 if n not in c:
420 if n not in c:
421 ret.append(n)
421 ret.append(n)
422 return ret
422 return ret
423 else:
423 else:
424 # We pulled a specific subset
424 # We pulled a specific subset
425 # sync on this subset
425 # sync on this subset
426 return self.heads
426 return self.heads
427
427
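The `pulledsubset` property above combines the common set with any remote head not already known to be common, preserving order, unless a specific subset of heads was requested. A standalone sketch of that logic (the `pulled_subset` name is hypothetical):

```python
def pulled_subset(common, rheads, heads=None):
    """Sketch of pulloperation.pulledsubset: heads of the pulled set."""
    if heads is not None:
        # a specific subset was requested; sync on exactly that subset
        return heads
    # we pulled everything possible: sync on everything common,
    # appending remote heads not already in the common set
    seen = set(common)
    ret = list(common)
    for n in rheads:
        if n not in seen:
            ret.append(n)
    return ret
```
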
428 def gettransaction(self):
428 def gettransaction(self):
429 """get appropriate pull transaction, creating it if needed"""
429 """get appropriate pull transaction, creating it if needed"""
430 if self._tr is None:
430 if self._tr is None:
431 self._tr = self.repo.transaction(self._trname)
431 self._tr = self.repo.transaction(self._trname)
432 return self._tr
432 return self._tr
433
433
434 def closetransaction(self):
434 def closetransaction(self):
435 """close transaction if created"""
435 """close transaction if created"""
436 if self._tr is not None:
436 if self._tr is not None:
437 self._tr.close()
437 self._tr.close()
438
438
439 def releasetransaction(self):
439 def releasetransaction(self):
440 """release transaction if created"""
440 """release transaction if created"""
441 if self._tr is not None:
441 if self._tr is not None:
442 self._tr.release()
442 self._tr.release()
443
443
444 def pull(repo, remote, heads=None, force=False):
444 def pull(repo, remote, heads=None, force=False):
445 pullop = pulloperation(repo, remote, heads, force)
445 pullop = pulloperation(repo, remote, heads, force)
446 if pullop.remote.local():
446 if pullop.remote.local():
447 missing = set(pullop.remote.requirements) - pullop.repo.supported
447 missing = set(pullop.remote.requirements) - pullop.repo.supported
448 if missing:
448 if missing:
449 msg = _("required features are not"
449 msg = _("required features are not"
450 " supported in the destination:"
450 " supported in the destination:"
451 " %s") % (', '.join(sorted(missing)))
451 " %s") % (', '.join(sorted(missing)))
452 raise util.Abort(msg)
452 raise util.Abort(msg)
453
453
454 lock = pullop.repo.lock()
454 lock = pullop.repo.lock()
455 try:
455 try:
456 _pulldiscovery(pullop)
456 _pulldiscovery(pullop)
457 if pullop.remote.capable('bundle2'):
457 if pullop.remote.capable('bundle2'):
458 _pullbundle2(pullop)
458 _pullbundle2(pullop)
459 if 'changegroup' in pullop.todosteps:
459 if 'changegroup' in pullop.todosteps:
460 _pullchangeset(pullop)
460 _pullchangeset(pullop)
461 if 'phases' in pullop.todosteps:
461 if 'phases' in pullop.todosteps:
462 _pullphase(pullop)
462 _pullphase(pullop)
463 if 'obsmarkers' in pullop.todosteps:
463 if 'obsmarkers' in pullop.todosteps:
464 _pullobsolete(pullop)
464 _pullobsolete(pullop)
465 pullop.closetransaction()
465 pullop.closetransaction()
466 finally:
466 finally:
467 pullop.releasetransaction()
467 pullop.releasetransaction()
468 lock.release()
468 lock.release()
469
469
470 return pullop.cgresult
470 return pullop.cgresult
471
471
472 def _pulldiscovery(pullop):
472 def _pulldiscovery(pullop):
473 """discovery phase for the pull
473 """discovery phase for the pull
474
474
475 Currently handles changeset discovery only; this will change to handle all discovery
475 Currently handles changeset discovery only; this will change to handle all discovery
476 at some point."""
476 at some point."""
477 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
477 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
478 pullop.remote,
478 pullop.remote,
479 heads=pullop.heads,
479 heads=pullop.heads,
480 force=pullop.force)
480 force=pullop.force)
481 pullop.common, pullop.fetch, pullop.rheads = tmp
481 pullop.common, pullop.fetch, pullop.rheads = tmp
482
482
483 def _pullbundle2(pullop):
483 def _pullbundle2(pullop):
484 """pull data using bundle2
484 """pull data using bundle2
485
485
486 For now, the only supported data are changegroups."""
486 For now, the only supported data are changegroups."""
487 kwargs = {'bundlecaps': set(['HG20'])}
487 kwargs = {'bundlecaps': set(['HG20'])}
488 # pulling changegroup
488 # pulling changegroup
489 pullop.todosteps.remove('changegroup')
489 pullop.todosteps.remove('changegroup')
490 if not pullop.fetch:
490 if not pullop.fetch:
491 pullop.repo.ui.status(_("no changes found\n"))
491 pullop.repo.ui.status(_("no changes found\n"))
492 pullop.cgresult = 0
492 pullop.cgresult = 0
493 else:
493 else:
494 kwargs['common'] = pullop.common
494 kwargs['common'] = pullop.common
495 kwargs['heads'] = pullop.heads or pullop.rheads
495 kwargs['heads'] = pullop.heads or pullop.rheads
496 if pullop.heads is None and list(pullop.common) == [nullid]:
496 if pullop.heads is None and list(pullop.common) == [nullid]:
497 pullop.repo.ui.status(_("requesting all changes\n"))
497 pullop.repo.ui.status(_("requesting all changes\n"))
498 if kwargs.keys() == ['format']:
498 if kwargs.keys() == ['format']:
499 return # nothing to pull
499 return # nothing to pull
500 bundle = pullop.remote.getbundle('pull', **kwargs)
500 bundle = pullop.remote.getbundle('pull', **kwargs)
501 try:
501 try:
502 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
502 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
503 except KeyError, exc:
503 except KeyError, exc:
504 raise util.Abort('missing support for %s' % exc)
504 raise util.Abort('missing support for %s' % exc)
505 assert len(op.records['changegroup']) == 1
505 assert len(op.records['changegroup']) == 1
506 pullop.cgresult = op.records['changegroup'][0]['return']
506 pullop.cgresult = op.records['changegroup'][0]['return']
507
507
508 def _pullchangeset(pullop):
508 def _pullchangeset(pullop):
509 """pull changeset from unbundle into the local repo"""
509 """pull changeset from unbundle into the local repo"""
510 # We delay opening the transaction as late as possible so we
510 # We delay opening the transaction as late as possible so we
511 # don't open a transaction for nothing and don't break a future useful
511 # don't open a transaction for nothing and don't break a future useful
512 # rollback call
512 # rollback call
513 pullop.todosteps.remove('changegroup')
513 pullop.todosteps.remove('changegroup')
514 if not pullop.fetch:
514 if not pullop.fetch:
515 pullop.repo.ui.status(_("no changes found\n"))
515 pullop.repo.ui.status(_("no changes found\n"))
516 pullop.cgresult = 0
516 pullop.cgresult = 0
517 return
517 return
518 pullop.gettransaction()
518 pullop.gettransaction()
519 if pullop.heads is None and list(pullop.common) == [nullid]:
519 if pullop.heads is None and list(pullop.common) == [nullid]:
520 pullop.repo.ui.status(_("requesting all changes\n"))
520 pullop.repo.ui.status(_("requesting all changes\n"))
521 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
521 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
522 # issue1320, avoid a race if remote changed after discovery
522 # issue1320, avoid a race if remote changed after discovery
523 pullop.heads = pullop.rheads
523 pullop.heads = pullop.rheads
524
524
525 if pullop.remote.capable('getbundle'):
525 if pullop.remote.capable('getbundle'):
526 # TODO: get bundlecaps from remote
526 # TODO: get bundlecaps from remote
527 cg = pullop.remote.getbundle('pull', common=pullop.common,
527 cg = pullop.remote.getbundle('pull', common=pullop.common,
528 heads=pullop.heads or pullop.rheads)
528 heads=pullop.heads or pullop.rheads)
529 elif pullop.heads is None:
529 elif pullop.heads is None:
530 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
530 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
531 elif not pullop.remote.capable('changegroupsubset'):
531 elif not pullop.remote.capable('changegroupsubset'):
532 raise util.Abort(_("partial pull cannot be done because "
532 raise util.Abort(_("partial pull cannot be done because "
533 "other repository doesn't support "
533 "other repository doesn't support "
534 "changegroupsubset."))
534 "changegroupsubset."))
535 else:
535 else:
536 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
536 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
537 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
537 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
538 pullop.remote.url())
538 pullop.remote.url())
539
539
540 def _pullphase(pullop):
540 def _pullphase(pullop):
541 # Get remote phases data from remote
541 # Get remote phases data from remote
542 pullop.todosteps.remove('phases')
542 pullop.todosteps.remove('phases')
543 remotephases = pullop.remote.listkeys('phases')
543 remotephases = pullop.remote.listkeys('phases')
544 publishing = bool(remotephases.get('publishing', False))
544 publishing = bool(remotephases.get('publishing', False))
545 if remotephases and not publishing:
545 if remotephases and not publishing:
546 # remote is new and unpublishing
546 # remote is new and unpublishing
547 pheads, _dr = phases.analyzeremotephases(pullop.repo,
547 pheads, _dr = phases.analyzeremotephases(pullop.repo,
548 pullop.pulledsubset,
548 pullop.pulledsubset,
549 remotephases)
549 remotephases)
550 phases.advanceboundary(pullop.repo, phases.public, pheads)
550 phases.advanceboundary(pullop.repo, phases.public, pheads)
551 phases.advanceboundary(pullop.repo, phases.draft,
551 phases.advanceboundary(pullop.repo, phases.draft,
552 pullop.pulledsubset)
552 pullop.pulledsubset)
553 else:
553 else:
554 # Remote is old or publishing all common changesets
554 # Remote is old or publishing all common changesets
555 # should be seen as public
555 # should be seen as public
556 phases.advanceboundary(pullop.repo, phases.public,
556 phases.advanceboundary(pullop.repo, phases.public,
557 pullop.pulledsubset)
557 pullop.pulledsubset)
558
558
559 def _pullobsolete(pullop):
559 def _pullobsolete(pullop):
560 """utility function to pull obsolete markers from a remote
560 """utility function to pull obsolete markers from a remote
561
561
562 `gettransaction` is a function that returns the pull transaction, creating
562 `gettransaction` is a function that returns the pull transaction, creating
563 one if necessary. We return the transaction to inform the calling code that
563 one if necessary. We return the transaction to inform the calling code that
564 a new transaction has been created (when applicable).
564 a new transaction has been created (when applicable).
565
565
566 Exists mostly to allow overriding for experimentation purposes"""
566 Exists mostly to allow overriding for experimentation purposes"""
567 pullop.todosteps.remove('obsmarkers')
567 pullop.todosteps.remove('obsmarkers')
568 tr = None
568 tr = None
569 if obsolete._enabled:
569 if obsolete._enabled:
570 pullop.repo.ui.debug('fetching remote obsolete markers\n')
570 pullop.repo.ui.debug('fetching remote obsolete markers\n')
571 remoteobs = pullop.remote.listkeys('obsolete')
571 remoteobs = pullop.remote.listkeys('obsolete')
572 if 'dump0' in remoteobs:
572 if 'dump0' in remoteobs:
573 tr = pullop.gettransaction()
573 tr = pullop.gettransaction()
574 for key in sorted(remoteobs, reverse=True):
574 for key in sorted(remoteobs, reverse=True):
575 if key.startswith('dump'):
575 if key.startswith('dump'):
576 data = base85.b85decode(remoteobs[key])
576 data = base85.b85decode(remoteobs[key])
577 pullop.repo.obsstore.mergemarkers(tr, data)
577 pullop.repo.obsstore.mergemarkers(tr, data)
578 pullop.repo.invalidatevolatilesets()
578 pullop.repo.invalidatevolatilesets()
579 return tr
579 return tr
580
580
581 def getbundle(repo, source, heads=None, common=None, bundlecaps=None):
581 def getbundle(repo, source, heads=None, common=None, bundlecaps=None):
582 """return a full bundle (with potentially multiple kinds of parts)
582 """return a full bundle (with potentially multiple kinds of parts)
583
583
584 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
584 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
585 passed. For now, the bundle can contain only a changegroup, but this will
585 passed. For now, the bundle can contain only a changegroup, but this will
586 change when more part types become available for bundle2.
586 change when more part types become available for bundle2.
587
587
588 This is different from changegroup.getbundle, which only returns an HG10
588 This is different from changegroup.getbundle, which only returns an HG10
589 changegroup bundle. They may eventually get reunited in the future once we
589 changegroup bundle. They may eventually get reunited in the future once we
590 have a clearer idea of the API we want to use to query different data.
590 have a clearer idea of the API we want to use to query different data.
591
591
592 The implementation is at a very early stage and will get massive rework
592 The implementation is at a very early stage and will get massive rework
593 when the API of bundle is refined.
593 when the API of bundle is refined.
594 """
594 """
595 # build bundle here.
595 # build bundle here.
596 cg = changegroup.getbundle(repo, source, heads=heads,
596 cg = changegroup.getbundle(repo, source, heads=heads,
597 common=common, bundlecaps=bundlecaps)
597 common=common, bundlecaps=bundlecaps)
598 if bundlecaps is None or 'HG20' not in bundlecaps:
598 if bundlecaps is None or 'HG20' not in bundlecaps:
599 return cg
599 return cg
600 # very crude first implementation,
600 # very crude first implementation,
601 # the bundle API will change and the generation will be done lazily.
601 # the bundle API will change and the generation will be done lazily.
602 bundler = bundle2.bundle20(repo.ui)
602 bundler = bundle2.bundle20(repo.ui)
603 def cgchunks(cg=cg):
603 def cgchunks(cg=cg):
604 yield 'HG10UN'
604 yield 'HG10UN'
605 for c in cg.getchunks():
605 for c in cg.getchunks():
606 yield c
606 yield c
607 part = bundle2.bundlepart('changegroup', data=cgchunks())
607 part = bundle2.bundlepart('changegroup', data=cgchunks())
608 bundler.addpart(part)
608 bundler.addpart(part)
609 return bundle2.unbundle20(repo.ui, util.chunkbuffer(bundler.getchunks()))
609 return bundle2.unbundle20(repo.ui, util.chunkbuffer(bundler.getchunks()))
610
610
611 class PushRaced(RuntimeError):
611 class PushRaced(RuntimeError):
612 """An exception raised during unbunding that indicates a push race"""
612 """An exception raised during unbundling that indicates a push race"""
613
613
614 def check_heads(repo, their_heads, context):
614 def check_heads(repo, their_heads, context):
615 """check if the heads of a repo have been modified
615 """check if the heads of a repo have been modified
616
616
617 Used by peer for unbundling.
617 Used by peer for unbundling.
618 """
618 """
619 heads = repo.heads()
619 heads = repo.heads()
620 heads_hash = util.sha1(''.join(sorted(heads))).digest()
620 heads_hash = util.sha1(''.join(sorted(heads))).digest()
621 if not (their_heads == ['force'] or their_heads == heads or
621 if not (their_heads == ['force'] or their_heads == heads or
622 their_heads == ['hashed', heads_hash]):
622 their_heads == ['hashed', heads_hash]):
623 # someone else committed/pushed/unbundled while we
623 # someone else committed/pushed/unbundled while we
624 # were transferring data
624 # were transferring data
625 raise PushRaced('repository changed while %s - '
625 raise PushRaced('repository changed while %s - '
626 'please try again' % context)
626 'please try again' % context)
627
627
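The `check_heads` comparison above accepts three forms of `their_heads`: the literal `'force'` marker, the exact local head list, or a SHA-1 digest over the sorted, concatenated binary head nodes (`util.sha1` wraps the standard SHA-1). A self-contained sketch of that check using only the standard library (the `heads_hash` and `raced` names are hypothetical):

```python
import hashlib

def heads_hash(heads):
    # same scheme as check_heads above: SHA-1 of the sorted,
    # concatenated binary head nodes
    return hashlib.sha1(b''.join(sorted(heads))).digest()

def raced(local_heads, their_heads):
    # the push is raced unless the client forced, sent matching heads,
    # or sent a matching hash of the heads it saw at discovery time
    return not (their_heads == [b'force'] or
                their_heads == local_heads or
                their_heads == [b'hashed', heads_hash(local_heads)])
```
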
628 def unbundle(repo, cg, heads, source, url):
628 def unbundle(repo, cg, heads, source, url):
629 """Apply a bundle to a repo.
629 """Apply a bundle to a repo.
630
630
631 this function makes sure the repo is locked during the application and has a
631 this function makes sure the repo is locked during the application and has a
632 mechanism to check that no push race occured between the creation of the
632 mechanism to check that no push race occurred between the creation of the
633 bundle and its application.
633 bundle and its application.
634
634
635 If the push was raced, a PushRaced exception is raised.
635 If the push was raced, a PushRaced exception is raised.
636 r = 0
636 r = 0
637 lock = repo.lock()
637 lock = repo.lock()
638 try:
638 try:
639 check_heads(repo, heads, 'uploading changes')
639 check_heads(repo, heads, 'uploading changes')
640 # push can proceed
640 # push can proceed
641 r = changegroup.addchangegroup(repo, cg, source, url)
641 r = changegroup.addchangegroup(repo, cg, source, url)
642 finally:
642 finally:
643 lock.release()
643 lock.release()
644 return r
644 return r
@@ -1,1020 +1,1020 b''
1 # merge.py - directory-level update/merge handling for Mercurial
1 # merge.py - directory-level update/merge handling for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 import struct
8 import struct
9
9
10 from node import nullid, nullrev, hex, bin
10 from node import nullid, nullrev, hex, bin
11 from i18n import _
11 from i18n import _
12 from mercurial import obsolete
12 from mercurial import obsolete
13 import error, util, filemerge, copies, subrepo, worker, dicthelpers
13 import error, util, filemerge, copies, subrepo, worker, dicthelpers
14 import errno, os, shutil
14 import errno, os, shutil
15
15
16 _pack = struct.pack
16 _pack = struct.pack
17 _unpack = struct.unpack
17 _unpack = struct.unpack
18
18
19 def _droponode(data):
19 def _droponode(data):
20 # used for compatibility for v1
20 # used for compatibility for v1
21 bits = data.split("\0")
21 bits = data.split("\0")
22 bits = bits[:-2] + bits[-1:]
22 bits = bits[:-2] + bits[-1:]
23 return "\0".join(bits)
23 return "\0".join(bits)
24
24
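`_droponode` above converts a v2 "F" record into its v1 shape by removing the second-to-last NUL-separated field (the "other node" entry, which the v1 format lacks). A standalone sketch of the same transformation (the `drop_onode` name is hypothetical):

```python
def drop_onode(data):
    # mirrors _droponode above: split on NUL, drop the second-to-last
    # field, and rejoin the remaining fields
    bits = data.split("\0")
    return "\0".join(bits[:-2] + bits[-1:])
```
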
25 class mergestate(object):
25 class mergestate(object):
26 '''track 3-way merge state of individual files
26 '''track 3-way merge state of individual files
27
27
28 it is stored on disk when needed. Two files are used, one with an old
28 it is stored on disk when needed. Two files are used, one with an old
29 format, one with a new format. Both contain similar data, but the new
29 format, one with a new format. Both contain similar data, but the new
30 format can store new kinds of fields.
30 format can store new kinds of fields.
31
31
32 The current new format is a list of arbitrary records of the form:
32 The current new format is a list of arbitrary records of the form:
33
33
34 [type][length][content]
34 [type][length][content]
35
35
36 Type is a single character, length is a 4-byte integer, content is an
36 Type is a single character, length is a 4-byte integer, content is an
37 arbitrary suite of bytes of length `length`.
37 arbitrary suite of bytes of length `length`.
38
38
39 Type should be a letter. Capital letters are mandatory records; Mercurial
39 Type should be a letter. Capital letters are mandatory records; Mercurial
40 should abort if they are unknown. Lower case records can be safely ignored.
40 should abort if they are unknown. Lower case records can be safely ignored.
41
41
42 Currently known record:
42 Currently known record:
43
43
44 L: the node of the "local" part of the merge (hexified version)
44 L: the node of the "local" part of the merge (hexified version)
45 O: the node of the "other" part of the merge (hexified version)
45 O: the node of the "other" part of the merge (hexified version)
46 F: a file to be merged entry
46 F: a file to be merged entry
47 '''
47 '''
48 statepathv1 = "merge/state"
48 statepathv1 = "merge/state"
49 statepathv2 = "merge/state2"
49 statepathv2 = "merge/state2"
50
50
51 def __init__(self, repo):
51 def __init__(self, repo):
52 self._repo = repo
52 self._repo = repo
53 self._dirty = False
53 self._dirty = False
54 self._read()
54 self._read()
55
55
56 def reset(self, node=None, other=None):
56 def reset(self, node=None, other=None):
57 self._state = {}
57 self._state = {}
58 if node:
58 if node:
59 self._local = node
59 self._local = node
60 self._other = other
60 self._other = other
61 shutil.rmtree(self._repo.join("merge"), True)
61 shutil.rmtree(self._repo.join("merge"), True)
62 self._dirty = False
62 self._dirty = False
63
63
64 def _read(self):
64 def _read(self):
65 """Analyse each record content to restore a serialized state from disk
65 """Analyse each record content to restore a serialized state from disk
66
66
67 This function processes "record" entries produced by the de-serialization
67 This function processes "record" entries produced by the de-serialization
68 of the on-disk file.
68 of the on-disk file.
69 """
69 """
70 self._state = {}
70 self._state = {}
71 records = self._readrecords()
71 records = self._readrecords()
72 for rtype, record in records:
72 for rtype, record in records:
73 if rtype == 'L':
73 if rtype == 'L':
74 self._local = bin(record)
74 self._local = bin(record)
75 elif rtype == 'O':
75 elif rtype == 'O':
76 self._other = bin(record)
76 self._other = bin(record)
77 elif rtype == "F":
77 elif rtype == "F":
78 bits = record.split("\0")
78 bits = record.split("\0")
79 self._state[bits[0]] = bits[1:]
79 self._state[bits[0]] = bits[1:]
80 elif not rtype.islower():
80 elif not rtype.islower():
81 raise util.Abort(_('unsupported merge state record: %s')
81 raise util.Abort(_('unsupported merge state record: %s')
82 % rtype)
82 % rtype)
83 self._dirty = False
83 self._dirty = False
84
84
85 def _readrecords(self):
85 def _readrecords(self):
86 """Read merge state from disk and return a list of record (TYPE, data)
86 """Read merge state from disk and return a list of record (TYPE, data)
87
87
88 We read data from both V1 and Ve files decide which on to use.
88 We read data from both v1 and v2 files and decide which one to use.
89
89
90 V1 have been used by versions prior to 2.9.1 and contains less data than
90 V1 has been used by versions prior to 2.9.1 and contains less data than
91 v2. We read both version and check if no data in v2 contradict one in
91 v2. We read both versions and check if no data in v2 contradicts
92 v1. If there is no contradiction we can safely assume that both v1
92 v1. If there is no contradiction we can safely assume that both v1
93 and v2 were written at the same time and use the extracted data in v2. If
93 and v2 were written at the same time and use the extracted data in v2. If
94 there is a contradiction we ignore v2 content as we assume an old version
94 there is a contradiction we ignore v2 content as we assume an old version
95 of Mercurial have over written the mergstate file and left an old v2
95 of Mercurial has overwritten the mergestate file and left an old v2
96 file around.
96 file around.
97
97
98 returns list of record [(TYPE, data), ...]"""
98 returns list of record [(TYPE, data), ...]"""
99 v1records = self._readrecordsv1()
99 v1records = self._readrecordsv1()
100 v2records = self._readrecordsv2()
100 v2records = self._readrecordsv2()
101 oldv2 = set() # old format version of v2 record
101 oldv2 = set() # old format version of v2 record
102 for rec in v2records:
102 for rec in v2records:
103 if rec[0] == 'L':
103 if rec[0] == 'L':
104 oldv2.add(rec)
104 oldv2.add(rec)
105 elif rec[0] == 'F':
105 elif rec[0] == 'F':
106 # drop the onode data (not contained in v1)
106 # drop the onode data (not contained in v1)
107 oldv2.add(('F', _droponode(rec[1])))
107 oldv2.add(('F', _droponode(rec[1])))
108 for rec in v1records:
108 for rec in v1records:
109 if rec not in oldv2:
109 if rec not in oldv2:
110 # v1 file is newer than v2 file, use it
110 # v1 file is newer than v2 file, use it
111 # we have to infer the "other" changeset of the merge
111 # we have to infer the "other" changeset of the merge
112 # we cannot do better than that with v1 of the format
112 # we cannot do better than that with v1 of the format
113 mctx = self._repo[None].parents()[-1]
113 mctx = self._repo[None].parents()[-1]
114 v1records.append(('O', mctx.hex()))
114 v1records.append(('O', mctx.hex()))
115 # add placeholder "other" file node information
115 # add placeholder "other" file node information
116 # nobody is using it yet so we do not need to fetch the data
116 # nobody is using it yet so we do not need to fetch the data
117 # if mctx was wrong `mctx[bits[-2]]` may fail.
117 # if mctx was wrong `mctx[bits[-2]]` may fail.
118 for idx, r in enumerate(v1records):
118 for idx, r in enumerate(v1records):
119 if r[0] == 'F':
119 if r[0] == 'F':
120 bits = r[1].split("\0")
120 bits = r[1].split("\0")
121 bits.insert(-2, '')
121 bits.insert(-2, '')
122 v1records[idx] = (r[0], "\0".join(bits))
122 v1records[idx] = (r[0], "\0".join(bits))
123 return v1records
123 return v1records
124 else:
124 else:
125 return v2records
125 return v2records
126
126
127 def _readrecordsv1(self):
127 def _readrecordsv1(self):
128 """read on disk merge state for version 1 file
128 """read on disk merge state for version 1 file
129
129
130 returns list of record [(TYPE, data), ...]
130 returns list of record [(TYPE, data), ...]
131
131
132 Note: the "F" data from this file are one entry short
132 Note: the "F" data from this file are one entry short
133 (no "other file node" entry)
133 (no "other file node" entry)
134 """
134 """
135 records = []
135 records = []
136 try:
136 try:
137 f = self._repo.opener(self.statepathv1)
137 f = self._repo.opener(self.statepathv1)
138 for i, l in enumerate(f):
138 for i, l in enumerate(f):
139 if i == 0:
139 if i == 0:
140 records.append(('L', l[:-1]))
140 records.append(('L', l[:-1]))
141 else:
141 else:
142 records.append(('F', l[:-1]))
142 records.append(('F', l[:-1]))
143 f.close()
143 f.close()
144 except IOError, err:
144 except IOError, err:
145 if err.errno != errno.ENOENT:
145 if err.errno != errno.ENOENT:
146 raise
146 raise
147 return records
147 return records
148
148
149 def _readrecordsv2(self):
149 def _readrecordsv2(self):
150 """read on disk merge state for version 2 file
150 """read on disk merge state for version 2 file
151
151
152 returns list of record [(TYPE, data), ...]
152 returns list of record [(TYPE, data), ...]
153 """
153 """
154 records = []
154 records = []
155 try:
155 try:
156 f = self._repo.opener(self.statepathv2)
156 f = self._repo.opener(self.statepathv2)
157 data = f.read()
157 data = f.read()
158 off = 0
158 off = 0
159 end = len(data)
159 end = len(data)
160 while off < end:
160 while off < end:
161 rtype = data[off]
161 rtype = data[off]
162 off += 1
162 off += 1
163 length = _unpack('>I', data[off:(off + 4)])[0]
163 length = _unpack('>I', data[off:(off + 4)])[0]
164 off += 4
164 off += 4
165 record = data[off:(off + length)]
165 record = data[off:(off + length)]
166 off += length
166 off += length
167 records.append((rtype, record))
167 records.append((rtype, record))
168 f.close()
168 f.close()
169 except IOError, err:
169 except IOError, err:
170 if err.errno != errno.ENOENT:
170 if err.errno != errno.ENOENT:
171 raise
171 raise
172 return records
172 return records
173
173
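`_readrecordsv2` above walks the `[type][length][content]` stream described in the class docstring, and `_writerecordsv2` further down produces it with the `>sI%is` struct format. A round-trip sketch of that record layout (the helper names are hypothetical):

```python
import struct

def pack_record(rtype, data):
    # one [type][length][content] record: 1-byte type, big-endian
    # 4-byte length, then `length` bytes of content
    return struct.pack('>sI%is' % len(data), rtype, len(data), data)

def parse_records(blob):
    # inverse of pack_record, mirroring the _readrecordsv2 loop
    records, off = [], 0
    while off < len(blob):
        rtype = blob[off:off + 1]
        off += 1
        (length,) = struct.unpack('>I', blob[off:off + 4])
        off += 4
        records.append((rtype, blob[off:off + length]))
        off += length
    return records
```
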
174 def commit(self):
174 def commit(self):
175 """Write current state on disk (if necessary)"""
175 """Write current state on disk (if necessary)"""
176 if self._dirty:
176 if self._dirty:
177 records = []
177 records = []
178 records.append(("L", hex(self._local)))
178 records.append(("L", hex(self._local)))
179 records.append(("O", hex(self._other)))
179 records.append(("O", hex(self._other)))
180 for d, v in self._state.iteritems():
180 for d, v in self._state.iteritems():
181 records.append(("F", "\0".join([d] + v)))
181 records.append(("F", "\0".join([d] + v)))
182 self._writerecords(records)
182 self._writerecords(records)
183 self._dirty = False
183 self._dirty = False
184
184
185 def _writerecords(self, records):
185 def _writerecords(self, records):
186 """Write current state on disk (both v1 and v2)"""
186 """Write current state on disk (both v1 and v2)"""
187 self._writerecordsv1(records)
187 self._writerecordsv1(records)
188 self._writerecordsv2(records)
188 self._writerecordsv2(records)
189
189
190 def _writerecordsv1(self, records):
190 def _writerecordsv1(self, records):
191 """Write current state on disk in a version 1 file"""
191 """Write current state on disk in a version 1 file"""
192 f = self._repo.opener(self.statepathv1, "w")
192 f = self._repo.opener(self.statepathv1, "w")
193 irecords = iter(records)
193 irecords = iter(records)
194 lrecords = irecords.next()
194 lrecords = irecords.next()
195 assert lrecords[0] == 'L'
195 assert lrecords[0] == 'L'
196 f.write(hex(self._local) + "\n")
196 f.write(hex(self._local) + "\n")
197 for rtype, data in irecords:
197 for rtype, data in irecords:
198 if rtype == "F":
198 if rtype == "F":
199 f.write("%s\n" % _droponode(data))
199 f.write("%s\n" % _droponode(data))
200 f.close()
200 f.close()
201
201
202 def _writerecordsv2(self, records):
202 def _writerecordsv2(self, records):
203 """Write current state on disk in a version 2 file"""
203 """Write current state on disk in a version 2 file"""
204 f = self._repo.opener(self.statepathv2, "w")
204 f = self._repo.opener(self.statepathv2, "w")
205 for key, data in records:
205 for key, data in records:
206 assert len(key) == 1
206 assert len(key) == 1
207 format = ">sI%is" % len(data)
207 format = ">sI%is" % len(data)
208 f.write(_pack(format, key, len(data), data))
208 f.write(_pack(format, key, len(data), data))
209 f.close()
209 f.close()
210
210
211 def add(self, fcl, fco, fca, fd):
211 def add(self, fcl, fco, fca, fd):
212 add a new (potentially?) conflicting file to the merge state
212 add a new (potentially?) conflicting file to the merge state
213 fcl: file context for local,
213 fcl: file context for local,
214 fco: file context for remote,
214 fco: file context for remote,
215 fca: file context for ancestors,
215 fca: file context for ancestors,
216 fd: file path of the resulting merge.
216 fd: file path of the resulting merge.
217
217
218 note: also write the local version to the `.hg/merge` directory.
218 note: also write the local version to the `.hg/merge` directory.
219 """
219 """
220 hash = util.sha1(fcl.path()).hexdigest()
220 hash = util.sha1(fcl.path()).hexdigest()
221 self._repo.opener.write("merge/" + hash, fcl.data())
221 self._repo.opener.write("merge/" + hash, fcl.data())
222 self._state[fd] = ['u', hash, fcl.path(),
222 self._state[fd] = ['u', hash, fcl.path(),
223 fca.path(), hex(fca.filenode()),
223 fca.path(), hex(fca.filenode()),
224 fco.path(), hex(fco.filenode()),
224 fco.path(), hex(fco.filenode()),
225 fcl.flags()]
225 fcl.flags()]
226 self._dirty = True
226 self._dirty = True
227
227
228 def __contains__(self, dfile):
228 def __contains__(self, dfile):
229 return dfile in self._state
229 return dfile in self._state
230
230
231 def __getitem__(self, dfile):
231 def __getitem__(self, dfile):
232 return self._state[dfile][0]
232 return self._state[dfile][0]
233
233
234 def __iter__(self):
234 def __iter__(self):
235 l = self._state.keys()
235 l = self._state.keys()
236 l.sort()
236 l.sort()
237 for f in l:
237 for f in l:
238 yield f
238 yield f
239
239
240 def files(self):
240 def files(self):
241 return self._state.keys()
241 return self._state.keys()
242
242
243 def mark(self, dfile, state):
243 def mark(self, dfile, state):
244 self._state[dfile][0] = state
244 self._state[dfile][0] = state
245 self._dirty = True
245 self._dirty = True
246
246
247 def resolve(self, dfile, wctx):
247 def resolve(self, dfile, wctx):
248 """rerun merge process for file path `dfile`"""
248 """rerun merge process for file path `dfile`"""
249 if self[dfile] == 'r':
249 if self[dfile] == 'r':
250 return 0
250 return 0
251 stateentry = self._state[dfile]
251 stateentry = self._state[dfile]
252 state, hash, lfile, afile, anode, ofile, onode, flags = stateentry
252 state, hash, lfile, afile, anode, ofile, onode, flags = stateentry
253 octx = self._repo[self._other]
253 octx = self._repo[self._other]
254 fcd = wctx[dfile]
254 fcd = wctx[dfile]
255 fco = octx[ofile]
255 fco = octx[ofile]
256 fca = self._repo.filectx(afile, fileid=anode)
256 fca = self._repo.filectx(afile, fileid=anode)
257 # "premerge" x flags
257 # "premerge" x flags
258 flo = fco.flags()
258 flo = fco.flags()
259 fla = fca.flags()
259 fla = fca.flags()
260 if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
260 if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
261 if fca.node() == nullid:
261 if fca.node() == nullid:
262 self._repo.ui.warn(_('warning: cannot merge flags for %s\n') %
262 self._repo.ui.warn(_('warning: cannot merge flags for %s\n') %
263 afile)
263 afile)
264 elif flags == fla:
264 elif flags == fla:
265 flags = flo
265 flags = flo
266 # restore local
266 # restore local
267 f = self._repo.opener("merge/" + hash)
267 f = self._repo.opener("merge/" + hash)
268 self._repo.wwrite(dfile, f.read(), flags)
268 self._repo.wwrite(dfile, f.read(), flags)
269 f.close()
269 f.close()
270 r = filemerge.filemerge(self._repo, self._local, lfile, fcd, fco, fca)
270 r = filemerge.filemerge(self._repo, self._local, lfile, fcd, fco, fca)
271 if r is None:
271 if r is None:
272 # no real conflict
272 # no real conflict
273 del self._state[dfile]
273 del self._state[dfile]
274 self._dirty = True
274 self._dirty = True
275 elif not r:
275 elif not r:
276 self.mark(dfile, 'r')
276 self.mark(dfile, 'r')
277 return r
277 return r
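For orientation, the bookkeeping that `add` performs can be sketched standalone: each preserved local file is keyed by the SHA-1 of its repository path, and an 8-field record is kept per conflicting file. This is a hypothetical sketch, not Mercurial's API: `hashlib` stands in for `util.sha1`, plain dicts stand in for the opener and the state file, and the ancestor/other paths and nodes are placeholder values.

```python
import hashlib

def add_record(state, backups, path, data, flags=''):
    # key the preserved local version by the sha1 of its path,
    # mirroring what mergestate.add does above
    h = hashlib.sha1(path.encode()).hexdigest()
    backups["merge/" + h] = data
    # 8-field record: state ('u' = unresolved), backup hash, local path,
    # ancestor path/node, other path/node, flags (placeholders here)
    state[path] = ['u', h, path, 'apath', '11' * 20, 'opath', '22' * 20,
                   flags]
    return h

state, backups = {}, {}
h = add_record(state, backups, 'dir/file.txt', b'local contents')
assert state['dir/file.txt'][0] == 'u'
assert backups['merge/' + h] == b'local contents'
```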

def _checkunknownfile(repo, wctx, mctx, f):
    return (not repo.dirstate._ignore(f)
        and os.path.isfile(repo.wjoin(f))
        and repo.wopener.audit.check(f)
        and repo.dirstate.normalize(f) not in repo.dirstate
        and mctx[f].cmp(wctx[f]))

def _checkunknown(repo, wctx, mctx):
    "check for collisions between unknown files and files in mctx"

    error = False
    for f in mctx:
        if f not in wctx and _checkunknownfile(repo, wctx, mctx, f):
            error = True
            wctx._repo.ui.warn(_("%s: untracked file differs\n") % f)
    if error:
        raise util.Abort(_("untracked files in working directory differ "
                           "from files in requested revision"))

def _forgetremoved(wctx, mctx, branchmerge):
    """
    Forget removed files

    If we're jumping between revisions (as opposed to merging), and if
    neither the working directory nor the target rev has the file,
    then we need to remove it from the dirstate, to prevent the
    dirstate from listing the file when it is no longer in the
    manifest.

    If we're merging, and the other revision has removed a file
    that is not present in the working directory, we need to mark it
    as removed.
    """

    actions = []
    state = branchmerge and 'r' or 'f'
    for f in wctx.deleted():
        if f not in mctx:
            actions.append((f, state, None, "forget deleted"))

    if not branchmerge:
        for f in wctx.removed():
            if f not in mctx:
                actions.append((f, "f", None, "forget removed"))

    return actions

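The decision in `_forgetremoved` is small enough to restate as a self-contained sketch, with plain sets standing in for the working and target contexts (a hypothetical rewrite for illustration, not the real function): on a branch merge, deleted files are marked removed (`'r'`); on a plain update, both deleted and removed files are simply forgotten (`'f'`).

```python
def forgetremoved(deleted, removed, in_target, branchmerge):
    # sketch of _forgetremoved: contexts replaced by plain sets
    actions = []
    state = 'r' if branchmerge else 'f'   # mark removed vs. forget
    for f in deleted:
        if f not in in_target:
            actions.append((f, state, None, "forget deleted"))
    if not branchmerge:                   # plain update: also forget removed
        for f in removed:
            if f not in in_target:
                actions.append((f, "f", None, "forget removed"))
    return actions

# jumping revisions (no branch merge): everything is forgotten ('f')
assert forgetremoved({'a'}, {'b'}, set(), False) == [
    ('a', 'f', None, 'forget deleted'), ('b', 'f', None, 'forget removed')]
# branch merge: deleted files are marked removed ('r'), removed ones kept
assert forgetremoved({'a'}, {'b'}, set(), True) == [
    ('a', 'r', None, 'forget deleted')]
```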
def _checkcollision(repo, wmf, actions):
    # build up a provisional merged manifest
    pmmf = set(wmf)

    def addop(f, args):
        pmmf.add(f)
    def removeop(f, args):
        pmmf.discard(f)
    def nop(f, args):
        pass

    def renamemoveop(f, args):
        f2, flags = args
        pmmf.discard(f2)
        pmmf.add(f)
    def renamegetop(f, args):
        f2, flags = args
        pmmf.add(f)
    def mergeop(f, args):
        f1, f2, fa, move, anc = args
        if move:
            pmmf.discard(f1)
        pmmf.add(f)

    opmap = {
        "a": addop,
        "dm": renamemoveop,
        "dg": renamegetop,
        "dr": nop,
        "e": nop,
        "f": addop, # untracked file should be kept in working directory
        "g": addop,
        "m": mergeop,
        "r": removeop,
        "rd": nop,
        "cd": addop,
        "dc": addop,
    }
    for f, m, args, msg in actions:
        op = opmap.get(m)
        assert op, m
        op(f, args)

    # check case-folding collision in provisional merged manifest
    foldmap = {}
    for f in sorted(pmmf):
        fold = util.normcase(f)
        if fold in foldmap:
            raise util.Abort(_("case-folding collision between %s and %s")
                             % (f, foldmap[fold]))
        foldmap[fold] = f

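The final loop above is the core of the collision check: fold every path and abort on the first duplicate fold. A minimal standalone sketch, with `str.lower` standing in for the platform-dependent `util.normcase` and `ValueError` for `util.Abort` (both substitutions are assumptions for illustration):

```python
def check_casefold_collisions(paths, normcase=str.lower):
    # abort if two distinct paths fold to the same case-insensitive name
    foldmap = {}
    for f in sorted(paths):
        fold = normcase(f)
        if fold in foldmap:
            raise ValueError("case-folding collision between %s and %s"
                             % (f, foldmap[fold]))
        foldmap[fold] = f

check_casefold_collisions({'README', 'src/a.py'})   # distinct folds: fine
try:
    check_casefold_collisions({'README', 'readme'})
    collision = None
except ValueError as e:
    collision = str(e)
assert collision is not None and 'readme' in collision
```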
def manifestmerge(repo, wctx, p2, pa, branchmerge, force, partial,
                  acceptremote=False):
    """
    Merge p1 and p2 with ancestor pa and generate merge action list

    branchmerge and force are as passed in to update
    partial = function to filter file lists
    acceptremote = accept the incoming changes without prompting
    """

    overwrite = force and not branchmerge
    actions, copy, movewithdir = [], {}, {}

    followcopies = False
    if overwrite:
        pa = wctx
    elif pa == p2: # backwards
        pa = wctx.p1()
    elif not branchmerge and not wctx.dirty(missing=True):
        pass
    elif pa and repo.ui.configbool("merge", "followcopies", True):
        followcopies = True

    # manifests fetched in order are going to be faster, so prime the caches
    [x.manifest() for x in
     sorted(wctx.parents() + [p2, pa], key=lambda x: x.rev())]

    if followcopies:
        ret = copies.mergecopies(repo, wctx, p2, pa)
        copy, movewithdir, diverge, renamedelete = ret
        for of, fl in diverge.iteritems():
            actions.append((of, "dr", (fl,), "divergent renames"))
        for of, fl in renamedelete.iteritems():
            actions.append((of, "rd", (fl,), "rename and delete"))

    repo.ui.note(_("resolving manifests\n"))
    repo.ui.debug(" branchmerge: %s, force: %s, partial: %s\n"
                  % (bool(branchmerge), bool(force), bool(partial)))
    repo.ui.debug(" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2))

    m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest()
    copied = set(copy.values())
    copied.update(movewithdir.values())

    if '.hgsubstate' in m1:
        # check whether sub state is modified
        for s in sorted(wctx.substate):
            if wctx.sub(s).dirty():
                m1['.hgsubstate'] += "+"
                break

    aborts = []
    # Compare manifests
    fdiff = dicthelpers.diff(m1, m2)
    flagsdiff = m1.flagsdiff(m2)
    diff12 = dicthelpers.join(fdiff, flagsdiff)

    for f, (n12, fl12) in diff12.iteritems():
        if n12:
            n1, n2 = n12
        else: # file contents didn't change, but flags did
            n1 = n2 = m1.get(f, None)
            if n1 is None:
                # Since n1 == n2, the file isn't present in m2 either. This
                # means that the file was removed or deleted locally and
                # removed remotely, but that residual entries remain in flags.
                # This can happen in manifests generated by workingctx.
                continue
        if fl12:
            fl1, fl2 = fl12
        else: # flags didn't change, file contents did
            fl1 = fl2 = m1.flags(f)

        if partial and not partial(f):
            continue
        if n1 and n2:
            fa = f
            a = ma.get(f, nullid)
            if a == nullid:
                fa = copy.get(f, f)
                # Note: f as default is wrong - we can't really make a 3-way
                # merge without an ancestor file.
            fla = ma.flags(fa)
            nol = 'l' not in fl1 + fl2 + fla
            if n2 == a and fl2 == fla:
                pass # remote unchanged - keep local
            elif n1 == a and fl1 == fla: # local unchanged - use remote
                if n1 == n2: # optimization: keep local content
                    actions.append((f, "e", (fl2,), "update permissions"))
                else:
                    actions.append((f, "g", (fl2,), "remote is newer"))
            elif nol and n2 == a: # remote only changed 'x'
                actions.append((f, "e", (fl2,), "update permissions"))
            elif nol and n1 == a: # local only changed 'x'
                actions.append((f, "g", (fl1,), "remote is newer"))
            else: # both changed something
                actions.append((f, "m", (f, f, fa, False, pa.node()),
                                "versions differ"))
        elif f in copied: # files we'll deal with on m2 side
            pass
        elif n1 and f in movewithdir: # directory rename, move local
            f2 = movewithdir[f]
            actions.append((f2, "dm", (f, fl1),
                            "remote directory rename - move from " + f))
        elif n1 and f in copy:
            f2 = copy[f]
            actions.append((f, "m", (f, f2, f2, False, pa.node()),
                            "local copied/moved from " + f2))
        elif n1 and f in ma: # clean, a different, no remote
            if n1 != ma[f]:
                if acceptremote:
                    actions.append((f, "r", None, "remote delete"))
                else:
                    actions.append((f, "cd", None, "prompt changed/deleted"))
            elif n1[20:] == "a": # added, no remote
                actions.append((f, "f", None, "remote deleted"))
            else:
                actions.append((f, "r", None, "other deleted"))
        elif n2 and f in movewithdir:
            f2 = movewithdir[f]
            actions.append((f2, "dg", (f, fl2),
                            "local directory rename - get from " + f))
        elif n2 and f in copy:
            f2 = copy[f]
            if f2 in m2:
                actions.append((f, "m", (f2, f, f2, False, pa.node()),
                                "remote copied from " + f2))
            else:
                actions.append((f, "m", (f2, f, f2, True, pa.node()),
                                "remote moved from " + f2))
        elif n2 and f not in ma:
            # local unknown, remote created: the logic is described by the
            # following table:
            #
            # force  branchmerge  different  |  action
            #   n         *           n      |   get
            #   n         *           y      |   abort
            #   y         n           *      |   get
            #   y         y           n      |   get
            #   y         y           y      |   merge
            #
            # Checking whether the files are different is expensive, so we
            # don't do that when we can avoid it.
            if force and not branchmerge:
                actions.append((f, "g", (fl2,), "remote created"))
            else:
                different = _checkunknownfile(repo, wctx, p2, f)
                if force and branchmerge and different:
                    # FIXME: This is wrong - f is not in ma ...
                    actions.append((f, "m", (f, f, f, False, pa.node()),
                                    "remote differs from untracked local"))
                elif not force and different:
                    aborts.append((f, "ud"))
                else:
                    actions.append((f, "g", (fl2,), "remote created"))
        elif n2 and n2 != ma[f]:
            different = _checkunknownfile(repo, wctx, p2, f)
            if not force and different:
                aborts.append((f, "ud"))
            else:
                # if different: old untracked f may be overwritten and lost
                if acceptremote:
                    actions.append((f, "g", (m2.flags(f),),
                                    "remote recreating"))
                else:
                    actions.append((f, "dc", (m2.flags(f),),
                                    "prompt deleted/changed"))

    for f, m in sorted(aborts):
        if m == "ud":
            repo.ui.warn(_("%s: untracked file differs\n") % f)
        else: assert False, m
    if aborts:
        raise util.Abort(_("untracked files in working directory differ "
                           "from files in requested revision"))

    if not util.checkcase(repo.path):
        # check collision between files only in p2 for clean update
        if (not branchmerge and
            (force or not wctx.dirty(missing=True, branch=False))):
            _checkcollision(repo, m2, [])
        else:
            _checkcollision(repo, m1, actions)

    return actions

def actionkey(a):
    return a[1] in "rf" and -1 or 0, a
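`actionkey` uses the old `and`/`or` idiom to return a two-element sort key: `-1` for removes (`'r'`) and forgets (`'f'`), `0` for everything else, so destructive actions sort first and free up paths before files are written. A Python 3 transliteration makes the intent explicit:

```python
def actionkey(a):
    # removes ('r') and forgets ('f') sort ahead of all other actions
    return (-1 if a[1] in "rf" else 0, a)

actions = [('x', 'g', ('',), 'remote created'),
           ('y', 'r', None, 'other deleted'),
           ('z', 'm', None, 'versions differ')]
actions.sort(key=actionkey)
assert [m for _, m, *_ in actions] == ['r', 'g', 'm']
```

Note that in Python 2 the original single-character membership test `a[1] in "rf"` also matches only the exact strings `'r'` and `'f'`, not multi-letter action codes like `'rd'`.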

def getremove(repo, mctx, overwrite, args):
    """apply usually-non-interactive updates to the working directory

    mctx is the context to be merged into the working copy

    yields tuples for progress updates
    """
    verbose = repo.ui.verbose
    unlink = util.unlinkpath
    wjoin = repo.wjoin
    fctx = mctx.filectx
    wwrite = repo.wwrite
    audit = repo.wopener.audit
    i = 0
    for arg in args:
        f = arg[0]
        if arg[1] == 'r':
            if verbose:
                repo.ui.note(_("removing %s\n") % f)
            audit(f)
            try:
                unlink(wjoin(f), ignoremissing=True)
            except OSError, inst:
                repo.ui.warn(_("update failed to remove %s: %s!\n") %
                             (f, inst.strerror))
        else:
            if verbose:
                repo.ui.note(_("getting %s\n") % f)
            wwrite(f, fctx(f).data(), arg[2][0])
        if i == 100:
            yield i, f
            i = 0
        i += 1
    if i > 0:
        yield i, f

def applyupdates(repo, actions, wctx, mctx, overwrite):
    """apply the merge action list to the working directory

    wctx is the working copy context
    mctx is the context to be merged into the working copy

    Return a tuple of counts (updated, merged, removed, unresolved) that
    describes how many files were affected by the update.
    """

    updated, merged, removed, unresolved = 0, 0, 0, 0
    ms = mergestate(repo)
    ms.reset(wctx.p1().node(), mctx.node())
    moves = []
    actions.sort(key=actionkey)

    # prescan for merges
    for a in actions:
        f, m, args, msg = a
        repo.ui.debug(" %s: %s -> %s\n" % (f, msg, m))
        if m == "m": # merge
            f1, f2, fa, move, anc = args
            if f == '.hgsubstate': # merged internally
                continue
            repo.ui.debug(" preserving %s for resolve of %s\n" % (f1, f))
            fcl = wctx[f1]
            fco = mctx[f2]
            actx = repo[anc]
            if fa in actx:
                fca = actx[fa]
            else:
                fca = repo.filectx(f1, fileid=nullrev)
            ms.add(fcl, fco, fca, f)
            if f1 != f and move:
                moves.append(f1)

    audit = repo.wopener.audit

    # remove renamed files after safely stored
    for f in moves:
        if os.path.lexists(repo.wjoin(f)):
            repo.ui.debug("removing %s\n" % f)
            audit(f)
            util.unlinkpath(repo.wjoin(f))

    numupdates = len(actions)
    workeractions = [a for a in actions if a[1] in 'gr']
    updateactions = [a for a in workeractions if a[1] == 'g']
    updated = len(updateactions)
    removeactions = [a for a in workeractions if a[1] == 'r']
    removed = len(removeactions)
    actions = [a for a in actions if a[1] not in 'gr']

    hgsub = [a[1] for a in workeractions if a[0] == '.hgsubstate']
    if hgsub and hgsub[0] == 'r':
        subrepo.submerge(repo, wctx, mctx, wctx, overwrite)

    z = 0
    prog = worker.worker(repo.ui, 0.001, getremove, (repo, mctx, overwrite),
                         removeactions)
    for i, item in prog:
        z += i
        repo.ui.progress(_('updating'), z, item=item, total=numupdates,
                         unit=_('files'))
    prog = worker.worker(repo.ui, 0.001, getremove, (repo, mctx, overwrite),
                         updateactions)
    for i, item in prog:
        z += i
        repo.ui.progress(_('updating'), z, item=item, total=numupdates,
                         unit=_('files'))

    if hgsub and hgsub[0] == 'g':
        subrepo.submerge(repo, wctx, mctx, wctx, overwrite)

    _updating = _('updating')
    _files = _('files')
    progress = repo.ui.progress

    for i, a in enumerate(actions):
        f, m, args, msg = a
        progress(_updating, z + i + 1, item=f, total=numupdates, unit=_files)
        if m == "m": # merge
            f1, f2, fa, move, anc = args
            if f == '.hgsubstate': # subrepo states need updating
                subrepo.submerge(repo, wctx, mctx, wctx.ancestor(mctx),
                                 overwrite)
                continue
            audit(f)
            r = ms.resolve(f, wctx)
            if r is not None and r > 0:
                unresolved += 1
            else:
                if r is None:
                    updated += 1
                else:
                    merged += 1
        elif m == "dm": # directory rename, move local
            f0, flags = args
            repo.ui.note(_("moving %s to %s\n") % (f0, f))
            audit(f)
            repo.wwrite(f, wctx.filectx(f0).data(), flags)
            util.unlinkpath(repo.wjoin(f0))
            updated += 1
        elif m == "dg": # local directory rename, get
            f0, flags = args
            repo.ui.note(_("getting %s to %s\n") % (f0, f))
            repo.wwrite(f, mctx.filectx(f0).data(), flags)
            updated += 1
        elif m == "dr": # divergent renames
            fl, = args
            repo.ui.warn(_("note: possible conflict - %s was renamed "
                           "multiple times to:\n") % f)
            for nf in fl:
                repo.ui.warn(" %s\n" % nf)
        elif m == "rd": # rename and delete
            fl, = args
            repo.ui.warn(_("note: possible conflict - %s was deleted "
                           "and renamed to:\n") % f)
            for nf in fl:
                repo.ui.warn(" %s\n" % nf)
        elif m == "e": # exec
            flags, = args
            audit(f)
            util.setflags(repo.wjoin(f), 'l' in flags, 'x' in flags)
            updated += 1
    ms.commit()
    progress(_updating, None, total=numupdates, unit=_files)

    return updated, merged, removed, unresolved

732
732
733 def calculateupdates(repo, tctx, mctx, ancestor, branchmerge, force, partial,
733 def calculateupdates(repo, tctx, mctx, ancestor, branchmerge, force, partial,
734 acceptremote=False):
734 acceptremote=False):
735 "Calculate the actions needed to merge mctx into tctx"
735 "Calculate the actions needed to merge mctx into tctx"
736 actions = []
736 actions = []
737 actions += manifestmerge(repo, tctx, mctx,
737 actions += manifestmerge(repo, tctx, mctx,
738 ancestor,
738 ancestor,
739 branchmerge, force,
739 branchmerge, force,
740 partial, acceptremote)
740 partial, acceptremote)
741
741
742 # Filter out prompts.
742 # Filter out prompts.
743 newactions, prompts = [], []
743 newactions, prompts = [], []
744 for a in actions:
744 for a in actions:
745 if a[1] in ("cd", "dc"):
745 if a[1] in ("cd", "dc"):
746 prompts.append(a)
746 prompts.append(a)
747 else:
747 else:
748 newactions.append(a)
748 newactions.append(a)
749 # Prompt and create actions. TODO: Move this towards resolve phase.
749 # Prompt and create actions. TODO: Move this towards resolve phase.
750 for f, m, args, msg in sorted(prompts):
750 for f, m, args, msg in sorted(prompts):
751 if m == "cd":
751 if m == "cd":
752 if repo.ui.promptchoice(
752 if repo.ui.promptchoice(
753 _("local changed %s which remote deleted\n"
753 _("local changed %s which remote deleted\n"
754 "use (c)hanged version or (d)elete?"
754 "use (c)hanged version or (d)elete?"
755 "$$ &Changed $$ &Delete") % f, 0):
755 "$$ &Changed $$ &Delete") % f, 0):
756 newactions.append((f, "r", None, "prompt delete"))
756 newactions.append((f, "r", None, "prompt delete"))
757 else:
757 else:
758 newactions.append((f, "a", None, "prompt keep"))
758 newactions.append((f, "a", None, "prompt keep"))
759 elif m == "dc":
759 elif m == "dc":
760 flags, = args
760 flags, = args
761 if repo.ui.promptchoice(
761 if repo.ui.promptchoice(
762 _("remote changed %s which local deleted\n"
762 _("remote changed %s which local deleted\n"
763 "use (c)hanged version or leave (d)eleted?"
763 "use (c)hanged version or leave (d)eleted?"
764 "$$ &Changed $$ &Deleted") % f, 0) == 0:
764 "$$ &Changed $$ &Deleted") % f, 0) == 0:
765 newactions.append((f, "g", (flags,), "prompt recreating"))
765 newactions.append((f, "g", (flags,), "prompt recreating"))
766 else: assert False, m
766 else: assert False, m
767
767
768 if tctx.rev() is None:
768 if tctx.rev() is None:
769 newactions += _forgetremoved(tctx, mctx, branchmerge)
769 newactions += _forgetremoved(tctx, mctx, branchmerge)
770
770
771 return newactions
771 return newactions
772
772
def recordupdates(repo, actions, branchmerge):
    "record merge actions to the dirstate"

    for a in actions:
        f, m, args, msg = a
        if m == "r": # remove
            if branchmerge:
                repo.dirstate.remove(f)
            else:
                repo.dirstate.drop(f)
        elif m == "a": # re-add
            if not branchmerge:
                repo.dirstate.add(f)
        elif m == "f": # forget
            repo.dirstate.drop(f)
        elif m == "e": # exec change
            repo.dirstate.normallookup(f)
        elif m == "g": # get
            if branchmerge:
                repo.dirstate.otherparent(f)
            else:
                repo.dirstate.normal(f)
        elif m == "m": # merge
            f1, f2, fa, move, anc = args
            if branchmerge:
                # We've done a branch merge, mark this file as merged
                # so that we properly record the merger later
                repo.dirstate.merge(f)
                if f1 != f2: # copy/rename
                    if move:
                        repo.dirstate.remove(f1)
                    if f1 != f:
                        repo.dirstate.copy(f1, f)
                    else:
                        repo.dirstate.copy(f2, f)
            else:
                # We've update-merged a locally modified file, so
                # we set the dirstate to emulate a normal checkout
                # of that file some time in the past. Thus our
                # merge will appear as a normal local file
                # modification.
                if f2 == f: # file not locally copied/moved
                    repo.dirstate.normallookup(f)
                if move:
                    repo.dirstate.drop(f1)
        elif m == "dm": # directory rename, move local
            f0, flag = args
            if f0 not in repo.dirstate:
                # untracked file moved
                continue
            if branchmerge:
                repo.dirstate.add(f)
                repo.dirstate.remove(f0)
                repo.dirstate.copy(f0, f)
            else:
                repo.dirstate.normal(f)
                repo.dirstate.drop(f0)
        elif m == "dg": # directory rename, get
            f0, flag = args
            if branchmerge:
                repo.dirstate.add(f)
                repo.dirstate.copy(f0, f)
            else:
                repo.dirstate.normal(f)

def update(repo, node, branchmerge, force, partial, ancestor=None,
           mergeancestor=False):
    """
    Perform a merge between the working directory and the given node

    node = the node to update to, or None if unspecified
    branchmerge = whether to merge between branches
    force = whether to force branch merging or file overwriting
    partial = a function to filter file lists (dirstate not updated)
    mergeancestor = whether it is merging with an ancestor. If true,
      we should accept the incoming changes for any prompts that occur.
      If false, merging with an ancestor (fast-forward) is only allowed
      between different named branches. This flag is used by the rebase
      extension as a temporary fix and should be avoided in general.

    The table below shows all the behaviors of the update command
    given the -c and -C or no options, whether the working directory
    is dirty, whether a revision is specified, and the relationship of
    the parent rev to the target rev (linear, on the same named
    branch, or on another named branch).

    This logic is tested by test-update-branches.t.

    -c  -C  dirty  rev  |  linear   same   cross
     n   n    n     n   |    ok     (1)      x
     n   n    n     y   |    ok     ok      ok
     n   n    y     n   |   merge   (2)     (2)
     n   n    y     y   |   merge   (3)     (3)
     n   y    *     *   |   --- discard ---
     y   n    y     *   |   ---   (4)   ---
     y   n    n     *   |   ---   ok    ---
     y   y    *     *   |   ---   (5)   ---

    x = can't happen
    * = don't-care
    1 = abort: not a linear update (merge or update --check to force update)
    2 = abort: uncommitted changes (commit and merge, or update --clean to
                 discard changes)
    3 = abort: uncommitted changes (commit or update --clean to discard changes)
    4 = abort: uncommitted changes (checked in commands.py)
    5 = incompatible options (checked in commands.py)

    Return the same tuple as applyupdates().
    """

    onode = node
    wlock = repo.wlock()
    try:
        wc = repo[None]
        pl = wc.parents()
        p1 = pl[0]
        pa = None
        if ancestor:
            pa = repo[ancestor]

        if node is None:
            # Here is where we should consider bookmarks, divergent bookmarks,
            # foreground changesets (successors), and tip of current branch;
            # but currently we are only checking the branch tips.
            try:
                node = repo.branchtip(wc.branch())
            except error.RepoLookupError:
                if wc.branch() == "default": # no default branch!
                    node = repo.lookup("tip") # update to tip
                else:
                    raise util.Abort(_("branch %s not found") % wc.branch())

            if p1.obsolete() and not p1.children():
                # allow updating to successors
                successors = obsolete.successorssets(repo, p1.node())

                # behavior of certain cases is as follows,
                #
                # divergent changesets: update to highest rev, similar to what
                #     is currently done when there are more than one head
                #     (i.e. 'tip')
                #
                # replaced changesets: same as divergent except we know there
                # is no conflict
                #
                # pruned changeset: no update is done; though, we could
                #     consider updating to the first non-obsolete parent,
                #     similar to what is currently done for 'hg prune'

                if successors:
                    # flattening the list here handles both divergent (len > 1)
                    # and the usual case (len = 1)
                    successors = [n for sub in successors for n in sub]

                    # get the max revision for the given successors set,
                    # i.e. the 'tip' of a set
                    node = repo.revs("max(%ln)", successors)[0]
                    pa = p1

        overwrite = force and not branchmerge

        p2 = repo[node]
        if pa is None:
            pa = p1.ancestor(p2)

        fp1, fp2, xp1, xp2 = p1.node(), p2.node(), str(p1), str(p2)

        ### check phase
        if not overwrite and len(pl) > 1:
            raise util.Abort(_("outstanding uncommitted merges"))
        if branchmerge:
            if pa == p2:
                raise util.Abort(_("merging with a working directory ancestor"
                                   " has no effect"))
            elif pa == p1:
                if not mergeancestor and p1.branch() == p2.branch():
                    raise util.Abort(_("nothing to merge"),
                                     hint=_("use 'hg update' "
                                            "or check 'hg heads'"))
            if not force and (wc.files() or wc.deleted()):
                raise util.Abort(_("uncommitted changes"),
                                 hint=_("use 'hg status' to list changes"))
            for s in sorted(wc.substate):
                if wc.sub(s).dirty():
                    raise util.Abort(_("uncommitted changes in "
                                       "subrepository '%s'") % s)

        elif not overwrite:
            if p1 == p2: # no-op update
                # call the hooks and exit early
                repo.hook('preupdate', throw=True, parent1=xp2, parent2='')
                repo.hook('update', parent1=xp2, parent2='', error=0)
                return 0, 0, 0, 0

            if pa not in (p1, p2): # nonlinear
                dirty = wc.dirty(missing=True)
                if dirty or onode is None:
                    # The branching here is a bit strange to ensure we make
                    # the minimal number of calls to obsolete.foreground.
                    foreground = obsolete.foreground(repo, [p1.node()])
                    # note: the <node> variable contains a random identifier
                    if repo[node].node() in foreground:
                        pa = p1 # allow updating to successors
                    elif dirty:
                        msg = _("uncommitted changes")
                        if onode is None:
                            hint = _("commit and merge, or update --clean to"
                                     " discard changes")
                        else:
                            hint = _("commit or update --clean to discard"
                                     " changes")
                        raise util.Abort(msg, hint=hint)
                    else: # node is none
                        msg = _("not a linear update")
                        hint = _("merge or update --check to force update")
                        raise util.Abort(msg, hint=hint)
                else:
                    # Allow jumping branches if clean and specific rev given
                    pa = p1

        ### calculate phase
        actions = calculateupdates(repo, wc, p2, pa,
                                   branchmerge, force, partial, mergeancestor)

        ### apply phase
        if not branchmerge: # just jump to the new rev
            fp1, fp2, xp1, xp2 = fp2, nullid, xp2, ''
        if not partial:
            repo.hook('preupdate', throw=True, parent1=xp1, parent2=xp2)
            # note that we're in the middle of an update
            repo.vfs.write('updatestate', p2.hex())

        stats = applyupdates(repo, actions, wc, p2, overwrite)

        if not partial:
            repo.setparents(fp1, fp2)
            recordupdates(repo, actions, branchmerge)
            # update completed, clear state
            util.unlink(repo.join('updatestate'))

            if not branchmerge:
                repo.dirstate.setbranch(p2.branch())
    finally:
        wlock.release()

    if not partial:
        repo.hook('update', parent1=xp1, parent2=xp2, error=stats[3])
    return stats
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete markers handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewriting operations, and help
building new tools to reconcile conflicting rewriting actions. To
facilitate conflict resolution, markers include various annotations
besides the old and new changeset identifiers, such as creation date
or author name.

The old obsoleted changeset is called the "precursor" and possible
replacements are called "successors". Markers that use changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by a changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the
  "divergence" case. If two independent operations rewrite the same
  changeset A into A' and A'', we have an error case: divergent
  rewriting. We can detect it because two markers will be created
  independently:

    (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", can be zero.

- 1 unsigned 32-bits integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.
"""
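The version-0 fixed-part layout described above can be exercised with a small standalone sketch. This is an illustration, not part of the module; the `_fmfixed` format string is copied from the constants defined below, and the node value is a fake identifier:

```python
import struct

# Version-0 fixed part: successor count (B), metadata size (I),
# flags (B), 20-byte precursor node (20s); big-endian, no padding.
_fmfixed = '>BIB20s'

pre = b'\x11' * 20       # fake 20-byte changeset identifier
metadata = b'date:0 0'   # one 'key:value' metadata entry
record = struct.pack(_fmfixed, 0, len(metadata), 0, pre) + metadata

# Round-trip the fixed part: 1 + 4 + 1 + 20 = 26 bytes.
fixed = record[:struct.calcsize(_fmfixed)]
nbsuc, mdsize, flags, node = struct.unpack(_fmfixed, fixed)
assert (nbsuc, mdsize, flags, node) == (0, 8, 0, pre)
```

A reader of such a record would then consume `nbsuc * 20` bytes of successor nodes (zero here, i.e. a prune marker) followed by `mdsize` bytes of metadata, which is exactly what `_readmarkers` below does.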
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
# o    A' (bumped)
# |`:
# | o  A
# |/
# o    Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
# o   Ad
# |`:
# | x A'
# |'|
# o | A
# |/
# o   Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped versions and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

def encodemeta(meta):
    """Return an encoded version of a string to string metadata mapping.

    Assumes no ':' in keys and no '\0' in either keys or values."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return a string to string dictionary from an encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':')
            d[key] = value
    return d

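The encode/decode pair above implements a simple '\0'-separated 'key:value' layout. A minimal standalone sketch of that round trip (illustrative only, not code from this module; it uses `split(':', 1)` so a value may itself contain ':'):

```python
# Sketch of the metadata encoding used by encodemeta/decodemeta:
# entries are 'key:value' strings joined by NUL bytes, keys sorted.
def _encode(meta):
    return '\0'.join('%s:%s' % (k, meta[k]) for k in sorted(meta))

def _decode(data):
    d = {}
    for item in data.split('\0'):
        if item:
            # split on the first ':' only, so values may contain ':'
            key, value = item.split(':', 1)
            d[key] = value
    return d

blob = _encode({'user': 'alice', 'date': '0 0'})
assert blob == 'date:0 0\x00user:alice'
assert _decode(blob) == {'user': 'alice', 'date': '0 0'}
```

Sorting the keys makes the encoding deterministic, which matters because the raw marker bytes are hashed and compared for equality (see `marker.__hash__` below).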
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

227 class obsstore(object):
227 class obsstore(object):
228 """Store obsolete markers
228 """Store obsolete markers
229
229
230 Markers can be accessed with two mappings:
230 Markers can be accessed with two mappings:
231 - precursors[x] -> set(markers on precursors edges of x)
231 - precursors[x] -> set(markers on precursors edges of x)
232 - successors[x] -> set(markers on successors edges of x)
232 - successors[x] -> set(markers on successors edges of x)
233 """
233 """
234
234
235 def __init__(self, sopener):
235 def __init__(self, sopener):
236 # caches for various obsolescence-related sets
236 # caches for various obsolescence-related sets
237 self.caches = {}
237 self.caches = {}
238 self._all = []
238 self._all = []
239 # new markers to serialize
239 # new markers to serialize
240 self.precursors = {}
240 self.precursors = {}
241 self.successors = {}
241 self.successors = {}
242 self.sopener = sopener
242 self.sopener = sopener
243 data = sopener.tryread('obsstore')
243 data = sopener.tryread('obsstore')
244 if data:
244 if data:
245 self._load(_readmarkers(data))
245 self._load(_readmarkers(data))
246
246
247 def __iter__(self):
247 def __iter__(self):
248 return iter(self._all)
248 return iter(self._all)
249
249
250 def __len__(self):
250 def __len__(self):
251 return len(self._all)
251 return len(self._all)
252
252
253 def __nonzero__(self):
253 def __nonzero__(self):
254 return bool(self._all)
254 return bool(self._all)
255
255
256 def create(self, transaction, prec, succs=(), flag=0, metadata=None):
256 def create(self, transaction, prec, succs=(), flag=0, metadata=None):
257 """obsolete: add a new obsolete marker
257 """obsolete: add a new obsolete marker
258
258
259 * ensure it is hashable
259 * ensure it is hashable
260 * check mandatory metadata
260 * check mandatory metadata
261 * encode metadata
261 * encode metadata
262
262
263 If you are a human writing code that creates markers, you want to use the
263 If you are a human writing code that creates markers, you want to use the
264 `createmarkers` function in this module instead.
264 `createmarkers` function in this module instead.
265
265
266 Return True if a new marker has been added, False if the marker
266 Return True if a new marker has been added, False if the marker
267 already existed (no-op).
267 already existed (no-op).
268 """
268 """
269 if metadata is None:
269 if metadata is None:
270 metadata = {}
270 metadata = {}
271 if 'date' not in metadata:
271 if 'date' not in metadata:
272 metadata['date'] = "%d %d" % util.makedate()
272 metadata['date'] = "%d %d" % util.makedate()
273 if len(prec) != 20:
273 if len(prec) != 20:
274 raise ValueError(prec)
274 raise ValueError(prec)
275 for succ in succs:
275 for succ in succs:
276 if len(succ) != 20:
276 if len(succ) != 20:
277 raise ValueError(succ)
277 raise ValueError(succ)
278 marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
278 marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
279 return bool(self.add(transaction, [marker]))
279 return bool(self.add(transaction, [marker]))
280
280
281 def add(self, transaction, markers):
281 def add(self, transaction, markers):
282 """Add new markers to the store
282 """Add new markers to the store
283
283
284 Take care of filtering out duplicates.
284 Take care of filtering out duplicates.
285 Return the number of new markers."""
285 Return the number of new markers."""
286 if not _enabled:
286 if not _enabled:
287 raise util.Abort('obsolete feature is not enabled on this repo')
287 raise util.Abort('obsolete feature is not enabled on this repo')
288 known = set(self._all)
288 known = set(self._all)
289 new = []
289 new = []
290 for m in markers:
290 for m in markers:
291 if m not in known:
291 if m not in known:
292 known.add(m)
292 known.add(m)
293 new.append(m)
293 new.append(m)
294 if new:
294 if new:
295 f = self.sopener('obsstore', 'ab')
295 f = self.sopener('obsstore', 'ab')
296 try:
296 try:
297 # Whether the file's current position is at the beginning or at
297 # Whether the file's current position is at the beginning or at
298 # the end after opening a file for appending is implementation
298 # the end after opening a file for appending is implementation
299 # defined. So we must seek to the end before calling tell(),
299 # defined. So we must seek to the end before calling tell(),
300 # or we may get a zero offset for non-zero sized files on
300 # or we may get a zero offset for non-zero sized files on
301 # some platforms (issue3543).
301 # some platforms (issue3543).
302 f.seek(0, _SEEK_END)
302 f.seek(0, _SEEK_END)
303 offset = f.tell()
303 offset = f.tell()
304 transaction.add('obsstore', offset)
304 transaction.add('obsstore', offset)
305 # offset == 0: new file - add the version header
305 # offset == 0: new file - add the version header
306 for bytes in _encodemarkers(new, offset == 0):
306 for bytes in _encodemarkers(new, offset == 0):
307 f.write(bytes)
307 f.write(bytes)
308 finally:
308 finally:
309 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
309 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
310 # call 'filecacheentry.refresh()' here
310 # call 'filecacheentry.refresh()' here
311 f.close()
311 f.close()
312 self._load(new)
312 self._load(new)
313 # new markers *may* have changed several sets. Invalidate the cache.
313 # new markers *may* have changed several sets. Invalidate the cache.
314 self.caches.clear()
314 self.caches.clear()
315 return len(new)
315 return len(new)
316
316
317 def mergemarkers(self, transaction, data):
317 def mergemarkers(self, transaction, data):
318 markers = _readmarkers(data)
318 markers = _readmarkers(data)
319 self.add(transaction, markers)
319 self.add(transaction, markers)
320
320
321 def _load(self, markers):
321 def _load(self, markers):
322 for mark in markers:
322 for mark in markers:
323 self._all.append(mark)
323 self._all.append(mark)
324 pre, sucs = mark[:2]
324 pre, sucs = mark[:2]
325 self.successors.setdefault(pre, set()).add(mark)
325 self.successors.setdefault(pre, set()).add(mark)
326 for suc in sucs:
326 for suc in sucs:
327 self.precursors.setdefault(suc, set()).add(mark)
327 self.precursors.setdefault(suc, set()).add(mark)
328 if node.nullid in self.precursors:
328 if node.nullid in self.precursors:
329 raise util.Abort(_('bad obsolescence marker detected: '
329 raise util.Abort(_('bad obsolescence marker detected: '
330 'invalid successors nullid'))
330 'invalid successors nullid'))
331
331
332 def _encodemarkers(markers, addheader=False):
332 def _encodemarkers(markers, addheader=False):
333 # Kept separate from flushmarkers(), it will be reused for
333 # Kept separate from flushmarkers(), it will be reused for
334 # markers exchange.
334 # markers exchange.
335 if addheader:
335 if addheader:
336 yield _pack('>B', _fmversion)
336 yield _pack('>B', _fmversion)
337 for marker in markers:
337 for marker in markers:
338 yield _encodeonemarker(marker)
338 yield _encodeonemarker(marker)
339
339
340
340
341 def _encodeonemarker(marker):
341 def _encodeonemarker(marker):
342 pre, sucs, flags, metadata = marker
342 pre, sucs, flags, metadata = marker
343 nbsuc = len(sucs)
343 nbsuc = len(sucs)
344 format = _fmfixed + (_fmnode * nbsuc)
344 format = _fmfixed + (_fmnode * nbsuc)
345 data = [nbsuc, len(metadata), flags, pre]
345 data = [nbsuc, len(metadata), flags, pre]
346 data.extend(sucs)
346 data.extend(sucs)
347 return _pack(format, *data) + metadata
347 return _pack(format, *data) + metadata
348
348
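`_encodeonemarker` packs a marker into a fixed header plus one 20-byte node per successor, followed by the raw metadata. A self-contained sketch using `struct` directly; the `'>BIB20s'` layout is an assumption standing in for the `_fmfixed`/`_fmnode` constants, which are defined outside this excerpt:

```python
import struct

# Assumed format-zero field layout (mirrors _fmfixed/_fmnode, defined
# elsewhere in this module): uint8 successor count, uint32 metadata
# length, uint8 flags, 20-byte precursor node, then the successor nodes.
FM_FIXED = '>BIB20s'
FM_NODE = '20s'

def encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    fmt = FM_FIXED + FM_NODE * len(sucs)
    data = [len(sucs), len(metadata), flags, pre]
    data.extend(sucs)
    return struct.pack(fmt, *data) + metadata
```

Because the successor count is in the fixed part, a decoder can read the header first and then know how many node fields and metadata bytes follow.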
349 # arbitrarily picked to fit into the 8K limit from HTTP server
349 # arbitrarily picked to fit into the 8K limit from HTTP server
350 # you have to take into account:
350 # you have to take into account:
351 # - the version header
351 # - the version header
352 # - the base85 encoding
352 # - the base85 encoding
353 _maxpayload = 5300
353 _maxpayload = 5300
354
354
355 def _pushkeyescape(markers):
355 def _pushkeyescape(markers):
356 """encode markers into a dict suitable for pushkey exchange
356 """encode markers into a dict suitable for pushkey exchange
357
357
358 - binary data is base86 encoded
358 - binary data is base85 encoded
359 - splitted in chunks less than 5300 bytes"""
359 - split in chunks smaller than 5300 bytes"""
360 keys = {}
360 keys = {}
361 parts = []
361 parts = []
362 currentlen = _maxpayload * 2 # ensure we create a new part
362 currentlen = _maxpayload * 2 # ensure we create a new part
363 for marker in markers:
363 for marker in markers:
364 nextdata = _encodeonemarker(marker)
364 nextdata = _encodeonemarker(marker)
365 if (len(nextdata) + currentlen > _maxpayload):
365 if (len(nextdata) + currentlen > _maxpayload):
366 currentpart = []
366 currentpart = []
367 currentlen = 0
367 currentlen = 0
368 parts.append(currentpart)
368 parts.append(currentpart)
369 currentpart.append(nextdata)
369 currentpart.append(nextdata)
370 currentlen += len(nextdata)
370 currentlen += len(nextdata)
371 for idx, part in enumerate(reversed(parts)):
371 for idx, part in enumerate(reversed(parts)):
372 data = ''.join([_pack('>B', _fmversion)] + part)
372 data = ''.join([_pack('>B', _fmversion)] + part)
373 keys['dump%i' % idx] = base85.b85encode(data)
373 keys['dump%i' % idx] = base85.b85encode(data)
374 return keys
374 return keys
375
375
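The size-capped splitting in `_pushkeyescape` can be illustrated on its own. A sketch of the greedy chunking with marker payloads replaced by plain byte strings (the function name is illustrative):

```python
# Greedy chunking as in _pushkeyescape above: start a new part whenever
# appending the next payload would push the running length past the cap.
def chunkpayloads(payloads, maxpayload=5300):
    parts = []
    currentpart = []
    currentlen = maxpayload * 2  # oversize sentinel forces a first part
    for data in payloads:
        if len(data) + currentlen > maxpayload:
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(data)
        currentlen += len(data)
    return parts
```

Like the original, a single payload larger than the cap still gets a part of its own rather than being rejected.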
376 def listmarkers(repo):
376 def listmarkers(repo):
377 """List markers over pushkey"""
377 """List markers over pushkey"""
378 if not repo.obsstore:
378 if not repo.obsstore:
379 return {}
379 return {}
380 return _pushkeyescape(repo.obsstore)
380 return _pushkeyescape(repo.obsstore)
381
381
382 def pushmarker(repo, key, old, new):
382 def pushmarker(repo, key, old, new):
383 """Push markers over pushkey"""
383 """Push markers over pushkey"""
384 if not key.startswith('dump'):
384 if not key.startswith('dump'):
385 repo.ui.warn(_('unknown key: %r') % key)
385 repo.ui.warn(_('unknown key: %r') % key)
386 return 0
386 return 0
387 if old:
387 if old:
388 repo.ui.warn(_('unexpected old value for %s') % key)
388 repo.ui.warn(_('unexpected old value for %s') % key)
389 return 0
389 return 0
390 data = base85.b85decode(new)
390 data = base85.b85decode(new)
391 lock = repo.lock()
391 lock = repo.lock()
392 try:
392 try:
393 tr = repo.transaction('pushkey: obsolete markers')
393 tr = repo.transaction('pushkey: obsolete markers')
394 try:
394 try:
395 repo.obsstore.mergemarkers(tr, data)
395 repo.obsstore.mergemarkers(tr, data)
396 tr.close()
396 tr.close()
397 return 1
397 return 1
398 finally:
398 finally:
399 tr.release()
399 tr.release()
400 finally:
400 finally:
401 lock.release()
401 lock.release()
402
402
403 def allmarkers(repo):
403 def allmarkers(repo):
404 """all obsolete markers known in a repository"""
404 """all obsolete markers known in a repository"""
405 for markerdata in repo.obsstore:
405 for markerdata in repo.obsstore:
406 yield marker(repo, markerdata)
406 yield marker(repo, markerdata)
407
407
408 def precursormarkers(ctx):
408 def precursormarkers(ctx):
409 """obsolete marker marking this changeset as a successor"""
409 """obsolete marker marking this changeset as a successor"""
410 for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
410 for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
411 yield marker(ctx._repo, data)
411 yield marker(ctx._repo, data)
412
412
413 def successormarkers(ctx):
413 def successormarkers(ctx):
414 """obsolete marker making this changeset obsolete"""
414 """obsolete marker making this changeset obsolete"""
415 for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
415 for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
416 yield marker(ctx._repo, data)
416 yield marker(ctx._repo, data)
417
417
418 def allsuccessors(obsstore, nodes, ignoreflags=0):
418 def allsuccessors(obsstore, nodes, ignoreflags=0):
419 """Yield node for every successor of <nodes>.
419 """Yield node for every successor of <nodes>.
420
420
421 Some successors may be unknown locally.
421 Some successors may be unknown locally.
422
422
423 This is a linear yield unsuited to detecting split changesets. It includes
423 This is a linear yield unsuited to detecting split changesets. It includes
424 initial nodes too."""
424 initial nodes too."""
425 remaining = set(nodes)
425 remaining = set(nodes)
426 seen = set(remaining)
426 seen = set(remaining)
427 while remaining:
427 while remaining:
428 current = remaining.pop()
428 current = remaining.pop()
429 yield current
429 yield current
430 for mark in obsstore.successors.get(current, ()):
430 for mark in obsstore.successors.get(current, ()):
431 # ignore marker flagged with specified flag
431 # ignore marker flagged with specified flag
432 if mark[2] & ignoreflags:
432 if mark[2] & ignoreflags:
433 continue
433 continue
434 for suc in mark[1]:
434 for suc in mark[1]:
435 if suc not in seen:
435 if suc not in seen:
436 seen.add(suc)
436 seen.add(suc)
437 remaining.add(suc)
437 remaining.add(suc)
438
438
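The traversal above only needs the successors mapping that `obsstore._load` builds. A toy run with hand-built markers, tuples of `(precursor, successors, flags)` as elsewhere in this module; the short strings stand in for real 20-byte node identifiers:

```python
# Restatement of the allsuccessors walk over a plain dict so it can be
# exercised without a repository. Markers are (precursor, successors, flags).
def allsuccessors(successors, nodes, ignoreflags=0):
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in successors.get(current, ()):
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

# A was rewritten as B, then B was split into C and D:
markers = {
    'A': {('A', ('B',), 0)},
    'B': {('B', ('C', 'D'), 0)},
}
```

`sorted(allsuccessors(markers, ['A']))` yields the initial node plus every transitive successor, here `['A', 'B', 'C', 'D']`.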
439 def allprecursors(obsstore, nodes, ignoreflags=0):
439 def allprecursors(obsstore, nodes, ignoreflags=0):
440 """Yield node for every precursor of <nodes>.
440 """Yield node for every precursor of <nodes>.
441
441
442 Some precursors may be unknown locally.
442 Some precursors may be unknown locally.
443
443
444 This is a linear yield unsuited to detecting folded changesets. It includes
444 This is a linear yield unsuited to detecting folded changesets. It includes
445 initial nodes too."""
445 initial nodes too."""
446
446
447 remaining = set(nodes)
447 remaining = set(nodes)
448 seen = set(remaining)
448 seen = set(remaining)
449 while remaining:
449 while remaining:
450 current = remaining.pop()
450 current = remaining.pop()
451 yield current
451 yield current
452 for mark in obsstore.precursors.get(current, ()):
452 for mark in obsstore.precursors.get(current, ()):
453 # ignore marker flagged with specified flag
453 # ignore marker flagged with specified flag
454 if mark[2] & ignoreflags:
454 if mark[2] & ignoreflags:
455 continue
455 continue
456 suc = mark[0]
456 suc = mark[0]
457 if suc not in seen:
457 if suc not in seen:
458 seen.add(suc)
458 seen.add(suc)
459 remaining.add(suc)
459 remaining.add(suc)
460
460
461 def foreground(repo, nodes):
461 def foreground(repo, nodes):
462 """return all nodes in the "foreground" of the given nodes
462 """return all nodes in the "foreground" of the given nodes
463
463
464 The foreground of a revision is anything reachable using parent -> children
464 The foreground of a revision is anything reachable using parent -> children
465 or precursor -> successor relation. It is very similar to "descendant" but
465 or precursor -> successor relation. It is very similar to "descendant" but
466 augmented with obsolescence information.
466 augmented with obsolescence information.
467
467
468 Beware that obsolescence cycles may appear in complex situations.
468 Beware that obsolescence cycles may appear in complex situations.
469 """
469 """
470 repo = repo.unfiltered()
470 repo = repo.unfiltered()
471 foreground = set(repo.set('%ln::', nodes))
471 foreground = set(repo.set('%ln::', nodes))
472 if repo.obsstore:
472 if repo.obsstore:
473 # We only need this complicated logic if there is obsolescence
473 # We only need this complicated logic if there is obsolescence
474 # XXX will probably deserve an optimised revset.
474 # XXX will probably deserve an optimised revset.
475 nm = repo.changelog.nodemap
475 nm = repo.changelog.nodemap
476 plen = -1
476 plen = -1
477 # compute the whole set of successors or descendants
477 # compute the whole set of successors or descendants
478 while len(foreground) != plen:
478 while len(foreground) != plen:
479 plen = len(foreground)
479 plen = len(foreground)
480 succs = set(c.node() for c in foreground)
480 succs = set(c.node() for c in foreground)
481 mutable = [c.node() for c in foreground if c.mutable()]
481 mutable = [c.node() for c in foreground if c.mutable()]
482 succs.update(allsuccessors(repo.obsstore, mutable))
482 succs.update(allsuccessors(repo.obsstore, mutable))
483 known = (n for n in succs if n in nm)
483 known = (n for n in succs if n in nm)
484 foreground = set(repo.set('%ln::', known))
484 foreground = set(repo.set('%ln::', known))
485 return set(c.node() for c in foreground)
485 return set(c.node() for c in foreground)
486
486
487
487
488 def successorssets(repo, initialnode, cache=None):
488 def successorssets(repo, initialnode, cache=None):
489 """Return all successors sets of the initial node
489 """Return all successors sets of the initial node
490
490
491 The successors set of a changeset A is a group of revisions that succeed
491 The successors set of a changeset A is a group of revisions that succeed
492 A. It succeeds A as a consistent whole, each revision being only a partial
492 A. It succeeds A as a consistent whole, each revision being only a partial
493 replacement. The successors set contains non-obsolete changesets only.
493 replacement. The successors set contains non-obsolete changesets only.
494
494
495 This function returns the full list of successor sets which is why it
495 This function returns the full list of successor sets which is why it
496 returns a list of tuples and not just a single tuple. Each tuple is a valid
496 returns a list of tuples and not just a single tuple. Each tuple is a valid
497 successors set. Note that (A,) may be a valid successors set for changeset A
497 successors set. Note that (A,) may be a valid successors set for changeset A
498 (see below).
498 (see below).
499
499
500 In most cases, a changeset A will have a single element (e.g. the changeset
500 In most cases, a changeset A will have a single element (e.g. the changeset
501 A is replaced by A') in its successors set. Though, it is also common for a
501 A is replaced by A') in its successors set. Though, it is also common for a
502 changeset A to have no elements in its successor set (e.g. the changeset
502 changeset A to have no elements in its successor set (e.g. the changeset
503 has been pruned). Therefore, the returned list of successors sets will be
503 has been pruned). Therefore, the returned list of successors sets will be
504 [(A',)] or [], respectively.
504 [(A',)] or [], respectively.
505
505
506 When a changeset A is split into A' and B', however, it will result in a
506 When a changeset A is split into A' and B', however, it will result in a
507 successors set containing more than a single element, i.e. [(A',B')].
507 successors set containing more than a single element, i.e. [(A',B')].
508 Divergent changesets will result in multiple successors sets, i.e. [(A',),
508 Divergent changesets will result in multiple successors sets, i.e. [(A',),
509 (A'')].
509 (A'')].
510
510
511 If a changeset A is not obsolete, then it will conceptually have no
511 If a changeset A is not obsolete, then it will conceptually have no
512 successors set. To distinguish this from a pruned changeset, the successor
512 successors set. To distinguish this from a pruned changeset, the successor
513 set will only contain itself, i.e. [(A,)].
513 set will only contain itself, i.e. [(A,)].
514
514
515 Finally, successors unknown locally are considered to be pruned (obsoleted
515 Finally, successors unknown locally are considered to be pruned (obsoleted
516 without any successors).
516 without any successors).
517
517
518 The optional `cache` parameter is a dictionary that may contain precomputed
518 The optional `cache` parameter is a dictionary that may contain precomputed
519 successors sets. It is meant to reuse the computation of a previous call to
519 successors sets. It is meant to reuse the computation of a previous call to
520 `successorssets` when multiple calls are made at the same time. The cache
520 `successorssets` when multiple calls are made at the same time. The cache
521 dictionary is updated in place. The caller is responsible for its life
521 dictionary is updated in place. The caller is responsible for its life
522 span. Code that makes multiple calls to `successorssets` *must* use this
522 span. Code that makes multiple calls to `successorssets` *must* use this
523 cache mechanism or suffer terrible performance.
523 cache mechanism or suffer terrible performance.
524
524
525 """
525 """
526
526
527 succmarkers = repo.obsstore.successors
527 succmarkers = repo.obsstore.successors
528
528
529 # Stack of nodes we search successors sets for
529 # Stack of nodes we search successors sets for
530 toproceed = [initialnode]
530 toproceed = [initialnode]
531 # set version of above list for fast loop detection
531 # set version of above list for fast loop detection
532 # element added to "toproceed" must be added here
532 # element added to "toproceed" must be added here
533 stackedset = set(toproceed)
533 stackedset = set(toproceed)
534 if cache is None:
534 if cache is None:
535 cache = {}
535 cache = {}
536
536
537 # This while loop is the flattened version of a recursive search for
537 # This while loop is the flattened version of a recursive search for
538 # successors sets
538 # successors sets
539 #
539 #
540 # def successorssets(x):
540 # def successorssets(x):
541 # successors = directsuccessors(x)
541 # successors = directsuccessors(x)
542 # ss = [[]]
542 # ss = [[]]
543 # for succ in directsuccessors(x):
543 # for succ in directsuccessors(x):
544 # # product as in itertools cartesian product
544 # # product as in itertools cartesian product
545 # ss = product(ss, successorssets(succ))
545 # ss = product(ss, successorssets(succ))
546 # return ss
546 # return ss
547 #
547 #
548 # But we can not use plain recursive calls here:
548 # But we can not use plain recursive calls here:
549 # - that would blow the python call stack
549 # - that would blow the python call stack
550 # - obsolescence markers may have cycles, we need to handle them.
550 # - obsolescence markers may have cycles, we need to handle them.
551 #
551 #
552 # The `toproceed` list acts as our call stack. Every node we search
552 # The `toproceed` list acts as our call stack. Every node we search
553 # successors sets for is stacked there.
553 # successors sets for is stacked there.
554 #
554 #
555 # The `stackedset` is a set version of this stack, used to check if a node is
555 # The `stackedset` is a set version of this stack, used to check if a node is
556 # already stacked. This check is used to detect cycles and prevent infinite
556 # already stacked. This check is used to detect cycles and prevent infinite
557 # loops.
557 # loops.
558 #
558 #
559 # successors sets of all nodes are stored in the `cache` dictionary.
559 # successors sets of all nodes are stored in the `cache` dictionary.
560 #
560 #
561 # After this while loop ends we use the cache to return the successors sets
561 # After this while loop ends we use the cache to return the successors sets
562 # for the node requested by the caller.
562 # for the node requested by the caller.
563 while toproceed:
563 while toproceed:
564 # Every iteration tries to compute the successors sets of the topmost
564 # Every iteration tries to compute the successors sets of the topmost
565 # node of the stack: CURRENT.
565 # node of the stack: CURRENT.
566 #
566 #
567 # There are four possible outcomes:
567 # There are four possible outcomes:
568 #
568 #
569 # 1) We already know the successors sets of CURRENT:
569 # 1) We already know the successors sets of CURRENT:
570 # -> mission accomplished, pop it from the stack.
570 # -> mission accomplished, pop it from the stack.
571 # 2) Node is not obsolete:
571 # 2) Node is not obsolete:
572 # -> the node is its own successors sets. Add it to the cache.
572 # -> the node is its own successors sets. Add it to the cache.
573 # 3) We do not know successors set of direct successors of CURRENT:
573 # 3) We do not know successors set of direct successors of CURRENT:
574 # -> We add those successors to the stack.
574 # -> We add those successors to the stack.
575 # 4) We know successors sets of all direct successors of CURRENT:
575 # 4) We know successors sets of all direct successors of CURRENT:
576 # -> We can compute CURRENT successors set and add it to the
576 # -> We can compute CURRENT successors set and add it to the
577 # cache.
577 # cache.
578 #
578 #
579 current = toproceed[-1]
579 current = toproceed[-1]
580 if current in cache:
580 if current in cache:
581 # case (1): We already know the successors sets
581 # case (1): We already know the successors sets
582 stackedset.remove(toproceed.pop())
582 stackedset.remove(toproceed.pop())
583 elif current not in succmarkers:
583 elif current not in succmarkers:
584 # case (2): The node is not obsolete.
584 # case (2): The node is not obsolete.
585 if current in repo:
585 if current in repo:
586 # We have a valid last successors.
586 # We have a valid last successors.
587 cache[current] = [(current,)]
587 cache[current] = [(current,)]
588 else:
588 else:
589 # Final obsolete version is unknown locally.
589 # Final obsolete version is unknown locally.
590 # Do not count that as a valid successors
590 # Do not count that as a valid successors
591 cache[current] = []
591 cache[current] = []
592 else:
592 else:
593 # cases (3) and (4)
593 # cases (3) and (4)
594 #
594 #
595 # We proceed in two phases. Phase 1 aims to distinguish case (3)
595 # We proceed in two phases. Phase 1 aims to distinguish case (3)
596 # from case (4):
596 # from case (4):
597 #
597 #
598 # For each direct successor of CURRENT, we check whether its
598 # For each direct successor of CURRENT, we check whether its
599 # successors sets are known. If they are not, we stack the
599 # successors sets are known. If they are not, we stack the
600 # unknown node and proceed to the next iteration of the while
600 # unknown node and proceed to the next iteration of the while
601 # loop. (case 3)
601 # loop. (case 3)
602 #
602 #
603 # During this step, we may detect obsolescence cycles: a node
603 # During this step, we may detect obsolescence cycles: a node
604 # with unknown successors sets but already in the call stack.
604 # with unknown successors sets but already in the call stack.
605 # In such a situation, we arbitrarily set the successors sets of
605 # In such a situation, we arbitrarily set the successors sets of
606 # the node to nothing (node pruned) to break the cycle.
606 # the node to nothing (node pruned) to break the cycle.
607 #
607 #
608 # If no break was encountered we proceed to phase 2.
608 # If no break was encountered we proceed to phase 2.
609 #
609 #
610 # Phase 2 computes successors sets of CURRENT (case 4); see details
610 # Phase 2 computes successors sets of CURRENT (case 4); see details
611 # in phase 2 itself.
611 # in phase 2 itself.
612 #
612 #
613 # Note the two levels of iteration in each phase.
613 # Note the two levels of iteration in each phase.
614 # - The first one handles obsolescence markers using CURRENT as
614 # - The first one handles obsolescence markers using CURRENT as
615 # precursor (successors markers of CURRENT).
615 # precursor (successors markers of CURRENT).
616 #
616 #
617 # Having multiple entries here means divergence.
617 # Having multiple entries here means divergence.
618 #
618 #
619 # - The second one handles successors defined in each marker.
619 # - The second one handles successors defined in each marker.
620 #
620 #
621 # Having none means a pruned node, multiple successors mean a split,
621 # Having none means a pruned node, multiple successors mean a split,
622 # and a single successor is a standard replacement.
622 # and a single successor is a standard replacement.
623 #
623 #
624 for mark in sorted(succmarkers[current]):
624 for mark in sorted(succmarkers[current]):
625 for suc in mark[1]:
625 for suc in mark[1]:
626 if suc not in cache:
626 if suc not in cache:
627 if suc in stackedset:
627 if suc in stackedset:
628 # cycle breaking
628 # cycle breaking
629 cache[suc] = []
629 cache[suc] = []
630 else:
630 else:
631 # case (3) If we have not computed successors sets
631 # case (3) If we have not computed successors sets
632 # of one of those successors we add it to the
632 # of one of those successors we add it to the
633 # `toproceed` stack and stop all work for this
633 # `toproceed` stack and stop all work for this
634 # iteration.
634 # iteration.
635 toproceed.append(suc)
635 toproceed.append(suc)
636 stackedset.add(suc)
636 stackedset.add(suc)
637 break
637 break
638 else:
638 else:
639 continue
639 continue
640 break
640 break
641 else:
641 else:
642 # case (4): we know all successors sets of all direct
642 # case (4): we know all successors sets of all direct
643 # successors
643 # successors
644 #
644 #
645 # Successors set contributed by each marker depends on the
645 # Successors set contributed by each marker depends on the
646 # successors sets of all its "successors" nodes.
646 # successors sets of all its "successors" nodes.
647 #
647 #
648 # Each different marker is a divergence in the obsolescence
648 # Each different marker is a divergence in the obsolescence
649 # history. It contributes successors sets distinct from other
649 # history. It contributes successors sets distinct from other
650 # markers.
650 # markers.
651 #
651 #
652 # Within a marker, a successor may have divergent successors
652 # Within a marker, a successor may have divergent successors
653 # sets. In such a case, the marker will contribute multiple
653 # sets. In such a case, the marker will contribute multiple
654 # divergent successors sets. If multiple successors have
654 # divergent successors sets. If multiple successors have
655 # divergent successors sets, a cartesian product is used.
655 # divergent successors sets, a Cartesian product is used.
656 #
656 #
657 # At the end we post-process successors sets to remove
657 # At the end we post-process successors sets to remove
658 # duplicated entry and successors set that are strict subset of
658 # duplicated entry and successors set that are strict subset of
659 # another one.
659 # another one.
660 succssets = []
660 succssets = []
661 for mark in sorted(succmarkers[current]):
661 for mark in sorted(succmarkers[current]):
662 # successors sets contributed by this marker
662 # successors sets contributed by this marker
663 markss = [[]]
663 markss = [[]]
664 for suc in mark[1]:
664 for suc in mark[1]:
665 # Cartesian product with previous successors
665 # Cartesian product with previous successors
666 productresult = []
666 productresult = []
667 for prefix in markss:
667 for prefix in markss:
668 for suffix in cache[suc]:
668 for suffix in cache[suc]:
669 newss = list(prefix)
669 newss = list(prefix)
670 for part in suffix:
670 for part in suffix:
671 # do not duplicate entries in the successors set;
671 # do not duplicate entries in the successors set;
672 # the first entry wins.
672 # the first entry wins.
673 if part not in newss:
673 if part not in newss:
674 newss.append(part)
674 newss.append(part)
675 productresult.append(newss)
675 productresult.append(newss)
676 markss = productresult
676 markss = productresult
677 succssets.extend(markss)
677 succssets.extend(markss)
678 # remove duplicates and subsets
678 # remove duplicates and subsets
679 seen = []
679 seen = []
680 final = []
680 final = []
681 candidate = sorted(((set(s), s) for s in succssets if s),
681 candidate = sorted(((set(s), s) for s in succssets if s),
682 key=lambda x: len(x[1]), reverse=True)
682 key=lambda x: len(x[1]), reverse=True)
683 for setversion, listversion in candidate:
683 for setversion, listversion in candidate:
684 for seenset in seen:
684 for seenset in seen:
685 if setversion.issubset(seenset):
685 if setversion.issubset(seenset):
686 break
686 break
687 else:
687 else:
688 final.append(listversion)
688 final.append(listversion)
689 seen.append(setversion)
689 seen.append(setversion)
690 final.reverse() # put small successors set first
690 final.reverse() # put small successors set first
691 cache[current] = final
691 cache[current] = final
692 return cache[initialnode]
692 return cache[initialnode]
693
693
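The case (4) combination step is the subtle part of the loop above. Isolated as a standalone function (the name is hypothetical), it takes the successors sets of each successor of one marker and combines them pairwise, deduplicating within each combined set:

```python
# Per-marker combination from case (4) above: a Cartesian product of the
# successors sets contributed by each successor, where duplicate entries
# inside a combined set are dropped (first occurrence wins).
def markerproduct(sets_per_successor):
    markss = [[]]
    for succsets in sets_per_successor:
        productresult = []
        for prefix in markss:
            for suffix in succsets:
                newss = list(prefix)
                for part in suffix:
                    if part not in newss:
                        newss.append(part)
                productresult.append(newss)
        markss = productresult
    return markss
```

A marker with two successors, where the second is itself divergent, therefore contributes one combined set per divergent branch.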
694 def _knownrevs(repo, nodes):
694 def _knownrevs(repo, nodes):
695 """yield revision numbers of known nodes passed as parameters
695 """yield revision numbers of known nodes passed as parameters
696
696
697 Unknown revisions are silently ignored."""
697 Unknown revisions are silently ignored."""
698 torev = repo.changelog.nodemap.get
698 torev = repo.changelog.nodemap.get
699 for n in nodes:
699 for n in nodes:
700 rev = torev(n)
700 rev = torev(n)
701 if rev is not None:
701 if rev is not None:
702 yield rev
702 yield rev
703
703
704 # mapping of 'set-name' -> <function to compute this set>
704 # mapping of 'set-name' -> <function to compute this set>
705 cachefuncs = {}
705 cachefuncs = {}
706 def cachefor(name):
706 def cachefor(name):
707 """Decorator to register a function as computing the cache for a set"""
707 """Decorator to register a function as computing the cache for a set"""
708 def decorator(func):
708 def decorator(func):
709 assert name not in cachefuncs
709 assert name not in cachefuncs
710 cachefuncs[name] = func
710 cachefuncs[name] = func
711 return func
711 return func
712 return decorator
712 return decorator
713
713
714 def getrevs(repo, name):
714 def getrevs(repo, name):
715 """Return the set of revision that belong to the <name> set
715 """Return the set of revision that belong to the <name> set
716
716
717 Such access may compute the set and cache it for future use"""
717 Such access may compute the set and cache it for future use"""
718 repo = repo.unfiltered()
718 repo = repo.unfiltered()
719 if not repo.obsstore:
719 if not repo.obsstore:
720 return ()
720 return ()
721 if name not in repo.obsstore.caches:
721 if name not in repo.obsstore.caches:
722 repo.obsstore.caches[name] = cachefuncs[name](repo)
722 repo.obsstore.caches[name] = cachefuncs[name](repo)
723 return repo.obsstore.caches[name]
723 return repo.obsstore.caches[name]
724
724
725 # To be simple we need to invalidate obsolescence cache when:
726 #
727 # - a new changeset is added
728 # - public phase is changed
729 # - obsolescence markers are added
730 # - strip is used on a repo
731 def clearobscaches(repo):
732 """Remove all obsolescence-related caches from a repo
733
734 This removes all caches in obsstore if the obsstore already exists on the
735 repo.
736
737 (We could be smarter here given the exact event that triggers the cache
738 clearing)"""
739 # only clear caches if there is obsstore data in this repo
740 if 'obsstore' in repo._filecache:
741 repo.obsstore.caches.clear()
742
743 @cachefor('obsolete')
744 def _computeobsoleteset(repo):
745 """the set of obsolete revisions"""
746 obs = set()
747 getrev = repo.changelog.nodemap.get
748 getphase = repo._phasecache.phase
749 for node in repo.obsstore.successors:
750 rev = getrev(node)
751 if rev is not None and getphase(repo, rev):
752 obs.add(rev)
753 return obs
754
755 @cachefor('unstable')
756 def _computeunstableset(repo):
757 """the set of non obsolete revisions with obsolete parents"""
758 # revset is not efficient enough here
759 # we do (obsolete()::) - obsolete() by hand
760 obs = getrevs(repo, 'obsolete')
761 if not obs:
762 return set()
763 cl = repo.changelog
764 return set(r for r in cl.descendants(obs) if r not in obs)
765
766 @cachefor('suspended')
767 def _computesuspendedset(repo):
768 """the set of obsolete parents with non obsolete descendants"""
769 suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
770 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
771
772 @cachefor('extinct')
773 def _computeextinctset(repo):
774 """the set of obsolete parents without non obsolete descendants"""
775 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
776
777
778 @cachefor('bumped')
779 def _computebumpedset(repo):
780 """the set of revs trying to obsolete public revisions"""
781 bumped = set()
782 # util function (avoid attribute lookup in the loop)
783 phase = repo._phasecache.phase # would be faster to grab the full list
784 public = phases.public
785 cl = repo.changelog
786 torev = cl.nodemap.get
787 obs = getrevs(repo, 'obsolete')
788 for rev in repo:
789 # We only evaluate mutable, non-obsolete revisions
790 if (public < phase(repo, rev)) and (rev not in obs):
791 node = cl.node(rev)
792 # (future) A cache of precursors may be worth it if split is very common
793 for pnode in allprecursors(repo.obsstore, [node],
794 ignoreflags=bumpedfix):
795 prev = torev(pnode) # unfiltered! but so is phasecache
796 if (prev is not None) and (phase(repo, prev) <= public):
797 # we have a public precursor
798 bumped.add(rev)
799 break # Next draft!
800 return bumped
801
802 @cachefor('divergent')
803 def _computedivergentset(repo):
804 """the set of revs that compete to be the final successors of some revision.
805 """
806 divergent = set()
807 obsstore = repo.obsstore
808 newermap = {}
809 for ctx in repo.set('(not public()) - obsolete()'):
810 mark = obsstore.precursors.get(ctx.node(), ())
811 toprocess = set(mark)
812 while toprocess:
813 prec = toprocess.pop()[0]
814 if prec not in newermap:
815 successorssets(repo, prec, newermap)
816 newer = [n for n in newermap[prec] if n]
817 if len(newer) > 1:
818 divergent.add(ctx.rev())
819 break
820 toprocess.update(obsstore.precursors.get(prec, ()))
821 return divergent
822
823
824 def createmarkers(repo, relations, flag=0, metadata=None):
825 """Add obsolete markers between changesets in a repo
826
827 <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
828 tuples. `old` and `news` are changectx. metadata is an optional dictionary
829 containing metadata for this marker only. It is merged with the global
830 metadata specified through the `metadata` argument of this function.
831
832 Trying to obsolete a public changeset will raise an exception.
833
834 Current user and date are used except if specified otherwise in the
835 metadata attribute.
836
837 This function operates within a transaction of its own, but does
838 not take any lock on the repo.
839 """
840 # prepare metadata
841 if metadata is None:
842 metadata = {}
843 if 'date' not in metadata:
844 metadata['date'] = '%i %i' % util.makedate()
845 if 'user' not in metadata:
846 metadata['user'] = repo.ui.username()
847 tr = repo.transaction('add-obsolescence-marker')
848 try:
849 for rel in relations:
850 prec = rel[0]
851 sucs = rel[1]
852 localmetadata = metadata.copy()
853 if 2 < len(rel):
854 localmetadata.update(rel[2])
855
856 if not prec.mutable():
857 raise util.Abort("cannot obsolete immutable changeset: %s"
858 % prec)
859 nprec = prec.node()
860 nsucs = tuple(s.node() for s in sucs)
861 if nprec in nsucs:
862 raise util.Abort("changeset %s cannot obsolete itself" % prec)
863 repo.obsstore.create(tr, nprec, nsucs, flag, localmetadata)
864 repo.filteredrevcache.clear()
865 tr.close()
866 finally:
867 tr.release()
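The pruning step at lines 678-690 above is worth seeing in isolation: candidate successors sets are sorted largest-first, any set contained in an already-kept set is dropped, and the survivors are reversed so small sets come first. A minimal standalone sketch (`prunesubsets` is a hypothetical name for this inlined logic, not a Mercurial function):

```python
def prunesubsets(succssets):
    # Sort candidate sets largest-first, so any subset is always
    # examined after a superset that could subsume it.
    candidate = sorted(((set(s), s) for s in succssets if s),
                       key=lambda x: len(x[1]), reverse=True)
    seen, final = [], []
    for setversion, listversion in candidate:
        # keep only sets not contained in an already-kept set
        if not any(setversion.issubset(seenset) for seenset in seen):
            final.append(listversion)
            seen.append(setversion)
    final.reverse()  # put small successors sets first
    return final
```

Duplicates, subsets, and empty sets all disappear: `prunesubsets([['a', 'b'], ['a'], ['a', 'b'], ['c']])` keeps only `['c']` and `['a', 'b']`, smallest first.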
@@ -1,2859 +1,2859 b''
1 # revset.py - revision set queries for mercurial
2 #
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7
8 import re
9 import parser, util, error, discovery, hbisect, phases
10 import node
11 import heapq
12 import match as matchmod
13 import ancestor as ancestormod
14 from i18n import _
15 import encoding
16 import obsolete as obsmod
17 import pathutil
18 import repoview
19
20 def _revancestors(repo, revs, followfirst):
21 """Like revlog.ancestors(), but supports followfirst."""
22 cut = followfirst and 1 or None
23 cl = repo.changelog
24
25 def iterate():
26 revqueue, revsnode = None, None
27 h = []
28
29 revs.descending()
30 revqueue = util.deque(revs)
31 if revqueue:
32 revsnode = revqueue.popleft()
33 heapq.heappush(h, -revsnode)
34
35 seen = set([node.nullrev])
36 while h:
37 current = -heapq.heappop(h)
38 if current not in seen:
39 if revsnode and current == revsnode:
40 if revqueue:
41 revsnode = revqueue.popleft()
42 heapq.heappush(h, -revsnode)
43 seen.add(current)
44 yield current
45 for parent in cl.parentrevs(current)[:cut]:
46 if parent != node.nullrev:
47 heapq.heappush(h, -parent)
48
49 return _descgeneratorset(iterate())
50
51 def _revdescendants(repo, revs, followfirst):
52 """Like revlog.descendants() but supports followfirst."""
53 cut = followfirst and 1 or None
54
55 def iterate():
56 cl = repo.changelog
57 first = min(revs)
58 nullrev = node.nullrev
59 if first == nullrev:
60 # Are there nodes with a null first parent and a non-null
61 # second one? Maybe. Do we care? Probably not.
62 for i in cl:
63 yield i
64 else:
65 seen = set(revs)
66 for i in cl.revs(first + 1):
67 for x in cl.parentrevs(i)[:cut]:
68 if x != nullrev and x in seen:
69 seen.add(i)
70 yield i
71 break
72
73 return _ascgeneratorset(iterate())
74
75 def _revsbetween(repo, roots, heads):
76 """Return all paths between roots and heads, inclusive of both endpoint
77 sets."""
78 if not roots:
79 return baseset([])
80 parentrevs = repo.changelog.parentrevs
81 visit = baseset(heads)
82 reachable = set()
83 seen = {}
84 minroot = min(roots)
85 roots = set(roots)
86 # open-code the post-order traversal due to the tiny size of
87 # sys.getrecursionlimit()
88 while visit:
89 rev = visit.pop()
90 if rev in roots:
91 reachable.add(rev)
92 parents = parentrevs(rev)
93 seen[rev] = parents
94 for parent in parents:
95 if parent >= minroot and parent not in seen:
96 visit.append(parent)
97 if not reachable:
98 return baseset([])
99 for rev in sorted(seen):
100 for parent in seen[rev]:
101 if parent in reachable:
102 reachable.add(rev)
103 return baseset(sorted(reachable))
104
105 elements = {
106 "(": (20, ("group", 1, ")"), ("func", 1, ")")),
107 "~": (18, None, ("ancestor", 18)),
108 "^": (18, None, ("parent", 18), ("parentpost", 18)),
109 "-": (5, ("negate", 19), ("minus", 5)),
110 "::": (17, ("dagrangepre", 17), ("dagrange", 17),
111 ("dagrangepost", 17)),
112 "..": (17, ("dagrangepre", 17), ("dagrange", 17),
113 ("dagrangepost", 17)),
114 ":": (15, ("rangepre", 15), ("range", 15), ("rangepost", 15)),
115 "not": (10, ("not", 10)),
116 "!": (10, ("not", 10)),
117 "and": (5, None, ("and", 5)),
118 "&": (5, None, ("and", 5)),
119 "or": (4, None, ("or", 4)),
120 "|": (4, None, ("or", 4)),
121 "+": (4, None, ("or", 4)),
122 ",": (2, None, ("list", 2)),
123 ")": (0, None, None),
124 "symbol": (0, ("symbol",), None),
125 "string": (0, ("string",), None),
126 "end": (0, None, None),
127 }
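The `elements` table above feeds Mercurial's generic Pratt (top-down operator precedence) parser in `parser.py`: each entry records a binding strength plus optional prefix and infix actions. A toy sketch of how such a table drives precedence climbing, using only the `and` (5) and `or` (4) strengths from the table; this simplified `parse` is illustrative, not Mercurial's actual parser:

```python
def parse(tokens):
    # tokens: list of ('symbol', value) / (op, None) pairs ending in ('end', None)
    binding = {'or': 4, 'and': 5, 'end': 0}  # subset of the elements table
    pos = [0]

    def advance():
        t = tokens[pos[0]]
        pos[0] += 1
        return t

    def parseexpr(bind):
        kind, val = advance()
        assert kind == 'symbol'
        left = ('symbol', val)
        # keep folding infix operators that bind tighter than `bind`
        while binding[tokens[pos[0]][0]] > bind:
            op, _unused = advance()
            left = (op, left, parseexpr(binding[op]))
        return left

    return parseexpr(0)
```

With this table, `a and b or c` groups as `(a and b) or c` because `and` binds tighter, and same-strength operators associate to the left.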
128
129 keywords = set(['and', 'or', 'not'])
130
131 def tokenize(program, lookup=None):
132 '''
133 Parse a revset statement into a stream of tokens
134
135 Check that @ is a valid unquoted token character (issue3686):
136 >>> list(tokenize("@::"))
137 [('symbol', '@', 0), ('::', None, 1), ('end', None, 3)]
138
139 '''
140
141 pos, l = 0, len(program)
142 while pos < l:
143 c = program[pos]
144 if c.isspace(): # skip inter-token whitespace
145 pass
146 elif c == ':' and program[pos:pos + 2] == '::': # look ahead carefully
147 yield ('::', None, pos)
148 pos += 1 # skip ahead
149 elif c == '.' and program[pos:pos + 2] == '..': # look ahead carefully
150 yield ('..', None, pos)
151 pos += 1 # skip ahead
152 elif c in "():,-|&+!~^": # handle simple operators
153 yield (c, None, pos)
154 elif (c in '"\'' or c == 'r' and
155 program[pos:pos + 2] in ("r'", 'r"')): # handle quoted strings
156 if c == 'r':
157 pos += 1
158 c = program[pos]
159 decode = lambda x: x
160 else:
161 decode = lambda x: x.decode('string-escape')
162 pos += 1
163 s = pos
164 while pos < l: # find closing quote
165 d = program[pos]
166 if d == '\\': # skip over escaped characters
167 pos += 2
168 continue
169 if d == c:
170 yield ('string', decode(program[s:pos]), s)
171 break
172 pos += 1
173 else:
174 raise error.ParseError(_("unterminated string"), s)
175 # gather up a symbol/keyword
176 elif c.isalnum() or c in '._@' or ord(c) > 127:
177 s = pos
178 pos += 1
179 while pos < l: # find end of symbol
180 d = program[pos]
181 if not (d.isalnum() or d in "-._/@" or ord(d) > 127):
182 break
183 if d == '.' and program[pos - 1] == '.': # special case for ..
184 pos -= 1
185 break
186 pos += 1
187 sym = program[s:pos]
188 if sym in keywords: # operator keywords
189 yield (sym, None, s)
190 elif '-' in sym:
191 # some jerk gave us foo-bar-baz, try to check if it's a symbol
192 if lookup and lookup(sym):
193 # looks like a real symbol
194 yield ('symbol', sym, s)
195 else:
196 # looks like an expression
197 parts = sym.split('-')
198 for p in parts[:-1]:
199 if p: # possible consecutive -
200 yield ('symbol', p, s)
201 s += len(p)
202 yield ('-', None, pos)
203 s += 1
204 if parts[-1]: # possible trailing -
205 yield ('symbol', parts[-1], s)
206 else:
207 yield ('symbol', sym, s)
208 pos -= 1
209 else:
210 raise error.ParseError(_("syntax error"), pos)
211 pos += 1
212 yield ('end', None, pos)
213
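One subtlety in `tokenize` above is the closing-quote scan (lines 164-174), which must jump two characters at a backslash so an escaped quote does not end the string. Isolated into a hypothetical standalone helper, the same loop looks like this:

```python
def scanstring(program, pos, quote):
    # pos points just past the opening quote; returns (raw body, pos after
    # the closing quote). Escapes are left undecoded, as in tokenize().
    s = pos
    while pos < len(program):
        d = program[pos]
        if d == '\\':      # skip over escaped characters
            pos += 2
            continue
        if d == quote:
            return program[s:pos], pos + 1
        pos += 1
    raise ValueError('unterminated string')
```

For the input `'a\'b' x` the scan returns the raw body `a\'b` and stops just past the real closing quote, not the escaped one.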
214 # helpers
215
216 def getstring(x, err):
217 if x and (x[0] == 'string' or x[0] == 'symbol'):
218 return x[1]
219 raise error.ParseError(err)
220
221 def getlist(x):
222 if not x:
223 return []
224 if x[0] == 'list':
225 return getlist(x[1]) + [x[2]]
226 return [x]
227
228 def getargs(x, min, max, err):
229 l = getlist(x)
230 if len(l) < min or (max >= 0 and len(l) > max):
231 raise error.ParseError(err)
232 return l
233
234 def getset(repo, subset, x):
235 if not x:
236 raise error.ParseError(_("missing argument"))
237 s = methods[x[0]](repo, subset, *x[1:])
238 if util.safehasattr(s, 'set'):
239 return s
240 return baseset(s)
241
242 def _getrevsource(repo, r):
243 extra = repo[r].extra()
244 for label in ('source', 'transplant_source', 'rebase_source'):
245 if label in extra:
246 try:
247 return repo[extra[label]].rev()
248 except error.RepoLookupError:
249 pass
250 return None
251
252 # operator methods
253
254 def stringset(repo, subset, x):
255 x = repo[x].rev()
256 if x == -1 and len(subset) == len(repo):
257 return baseset([-1])
258 if len(subset) == len(repo) or x in subset:
259 return baseset([x])
260 return baseset([])
261
262 def symbolset(repo, subset, x):
263 if x in symbols:
264 raise error.ParseError(_("can't use %s here") % x)
265 return stringset(repo, subset, x)
266
267 def rangeset(repo, subset, x, y):
268 cl = baseset(repo.changelog)
269 m = getset(repo, cl, x)
270 n = getset(repo, cl, y)
271
272 if not m or not n:
273 return baseset([])
274 m, n = m[0], n[-1]
275
276 if m < n:
277 r = spanset(repo, m, n + 1)
278 else:
279 r = spanset(repo, m, n - 1)
280 return r & subset
281
282 def dagrange(repo, subset, x, y):
283 r = spanset(repo)
284 xs = _revsbetween(repo, getset(repo, r, x), getset(repo, r, y))
285 s = subset.set()
286 return xs.filter(lambda r: r in s)
287
288 def andset(repo, subset, x, y):
289 return getset(repo, getset(repo, subset, x), y)
290
291 def orset(repo, subset, x, y):
292 xl = getset(repo, subset, x)
293 yl = getset(repo, subset - xl, y)
294 return xl + yl
295
296 def notset(repo, subset, x):
297 return subset - getset(repo, subset, x)
298
299 def listset(repo, subset, a, b):
300 raise error.ParseError(_("can't use a list in this context"))
301
302 def func(repo, subset, a, b):
303 if a[0] == 'symbol' and a[1] in symbols:
304 return symbols[a[1]](repo, subset, b)
305 raise error.ParseError(_("not a function: %s") % a[1])
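As context for the helpers above: the comma operator (binding strength 2 in `elements`) builds left-nested `('list', rest, item)` trees, which `getlist` flattens into a plain Python list. A self-contained sketch that repeats the helper verbatim so the worked example for the argument list `a, b, c` runs on its own:

```python
def getlist(x):
    # flatten a left-nested ('list', rest, item) parse tree into a list
    if not x:
        return []
    if x[0] == 'list':
        return getlist(x[1]) + [x[2]]
    return [x]

# "a, b, c" parses as ('list', ('list', symbol-a, symbol-b), symbol-c)
tree = ('list', ('list', ('symbol', 'a'), ('symbol', 'b')), ('symbol', 'c'))
```

Flattening `tree` yields the three symbol nodes in source order, which is what `getargs` then length-checks.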
306
306
307 # functions
307 # functions
308
308
309 def adds(repo, subset, x):
309 def adds(repo, subset, x):
310 """``adds(pattern)``
310 """``adds(pattern)``
311 Changesets that add a file matching pattern.
311 Changesets that add a file matching pattern.
312
312
313 The pattern without explicit kind like ``glob:`` is expected to be
313 The pattern without explicit kind like ``glob:`` is expected to be
314 relative to the current directory and match against a file or a
314 relative to the current directory and match against a file or a
315 directory.
315 directory.
316 """
316 """
317 # i18n: "adds" is a keyword
317 # i18n: "adds" is a keyword
318 pat = getstring(x, _("adds requires a pattern"))
318 pat = getstring(x, _("adds requires a pattern"))
319 return checkstatus(repo, subset, pat, 1)
319 return checkstatus(repo, subset, pat, 1)
320
320
321 def ancestor(repo, subset, x):
321 def ancestor(repo, subset, x):
322 """``ancestor(*changeset)``
322 """``ancestor(*changeset)``
323 A greatest common ancestor of the changesets.
323 A greatest common ancestor of the changesets.
324
324
325 Accepts 0 or more changesets.
325 Accepts 0 or more changesets.
326 Will return empty list when passed no args.
326 Will return empty list when passed no args.
327 Greatest common ancestor of a single changeset is that changeset.
327 Greatest common ancestor of a single changeset is that changeset.
328 """
328 """
329 # i18n: "ancestor" is a keyword
329 # i18n: "ancestor" is a keyword
330 l = getlist(x)
330 l = getlist(x)
331 rl = spanset(repo)
331 rl = spanset(repo)
332 anc = None
332 anc = None
333
333
334 # (getset(repo, rl, i) for i in l) generates a list of lists
334 # (getset(repo, rl, i) for i in l) generates a list of lists
335 for revs in (getset(repo, rl, i) for i in l):
335 for revs in (getset(repo, rl, i) for i in l):
336 for r in revs:
336 for r in revs:
337 if anc is None:
337 if anc is None:
338 anc = repo[r]
338 anc = repo[r]
339 else:
339 else:
340 anc = anc.ancestor(repo[r])
        anc = anc.ancestor(repo[r])

    if anc is not None and anc.rev() in subset:
        return baseset([anc.rev()])
    return baseset([])

def _ancestors(repo, subset, x, followfirst=False):
    args = getset(repo, spanset(repo), x)
    if not args:
        return baseset([])
    s = _revancestors(repo, args, followfirst)
    return subset.filter(lambda r: r in s)

def ancestors(repo, subset, x):
    """``ancestors(set)``
    Changesets that are ancestors of a changeset in set.
    """
    return _ancestors(repo, subset, x)

def _firstancestors(repo, subset, x):
    # ``_firstancestors(set)``
    # Like ``ancestors(set)`` but follows only the first parents.
    return _ancestors(repo, subset, x, followfirst=True)

def ancestorspec(repo, subset, x, n):
    """``set~n``
    Changesets that are the Nth ancestor (first parents only) of a changeset
    in set.
    """
    try:
        n = int(n[1])
    except (TypeError, ValueError):
        raise error.ParseError(_("~ expects a number"))
    ps = set()
    cl = repo.changelog
    for r in getset(repo, baseset(cl), x):
        for i in range(n):
            r = cl.parentrevs(r)[0]
        ps.add(r)
    return subset.filter(lambda r: r in ps)
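The ``set~n`` logic in ``ancestorspec`` is simply n hops along first parents. A minimal standalone sketch of that walk, using a hypothetical toy parent table in place of Mercurial's ``changelog.parentrevs`` (names and data here are illustrative, not Mercurial's API):

```python
# Toy parent table: rev -> (p1, p2); -1 stands in for the null revision,
# mirroring the shape of changelog.parentrevs(rev).
TOY_PARENTREVS = {0: (-1, -1), 1: (0, -1), 2: (1, -1), 3: (1, 2)}

def nth_first_parent(rev, n):
    """Follow only the first parent n times, like the ``set~n`` revset."""
    for _ in range(n):
        rev = TOY_PARENTREVS[rev][0]
    return rev
```

With the table above, rev 3's first parent is 1 and its grandparent along first parents is 0; merges (rev 3 here) never follow their second parent.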

def author(repo, subset, x):
    """``author(string)``
    Alias for ``user(string)``.
    """
    # i18n: "author" is a keyword
    n = encoding.lower(getstring(x, _("author requires a string")))
    kind, pattern, matcher = _substringmatcher(n)
    return subset.filter(lambda x: matcher(encoding.lower(repo[x].user())))

def only(repo, subset, x):
    """``only(set, [set])``
    Changesets that are ancestors of the first set that are not ancestors
    of any other head in the repo. If a second set is specified, the result
    is ancestors of the first set that are not ancestors of the second set
    (i.e. ::<set1> - ::<set2>).
    """
    cl = repo.changelog
    args = getargs(x, 1, 2, _('only takes one or two arguments'))
    include = getset(repo, spanset(repo), args[0]).set()
    if len(args) == 1:
        descendants = set(_revdescendants(repo, include, False))
        exclude = [rev for rev in cl.headrevs()
                   if not rev in descendants and not rev in include]
    else:
        exclude = getset(repo, spanset(repo), args[1])

    results = set(ancestormod.missingancestors(include, exclude, cl.parentrevs))
    return lazyset(subset, lambda x: x in results)
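The two-argument form of ``only`` computes ``::<set1> - ::<set2>``. A simplified standalone sketch of that set difference over a hypothetical parent table — the real implementation uses the incremental ``missingancestors`` algorithm rather than materializing both ancestor sets:

```python
# Toy DAG: rev -> tuple of parent revs (empty tuple for roots).
TOY_DAG = {0: (), 1: (0,), 2: (1,), 3: (0,), 4: (3,)}

def ancestors_of(revs):
    """All ancestors of revs, inclusive (the ``::set`` closure)."""
    seen = set()
    stack = list(revs)
    while stack:
        r = stack.pop()
        if r in seen:
            continue
        seen.add(r)
        stack.extend(TOY_DAG[r])
    return seen

def only_revs(include, exclude):
    """Naive ::include - ::exclude, the semantics of only(set, set)."""
    return ancestors_of(include) - ancestors_of(exclude)
```

In the toy DAG, revs 1-2 and 3-4 form two branches off rev 0, so ``only_revs({2}, {4})`` keeps exactly the branch-local revisions 1 and 2.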

def bisect(repo, subset, x):
    """``bisect(string)``
    Changesets marked in the specified bisect status:

    - ``good``, ``bad``, ``skip``: csets explicitly marked as good/bad/skip
    - ``goods``, ``bads``        : csets topologically good/bad
    - ``range``                  : csets taking part in the bisection
    - ``pruned``                 : csets that are goods, bads or skipped
    - ``untested``               : csets whose fate is yet unknown
    - ``ignored``                : csets ignored due to DAG topology
    - ``current``                : the cset currently being bisected
    """
    # i18n: "bisect" is a keyword
    status = getstring(x, _("bisect requires a string")).lower()
    state = set(hbisect.get(repo, status))
    return subset.filter(lambda r: r in state)

# Backward-compatibility
# - no help entry so that we do not advertise it any more
def bisected(repo, subset, x):
    return bisect(repo, subset, x)

def bookmark(repo, subset, x):
    """``bookmark([name])``
    The named bookmark or all bookmarks.

    If `name` starts with `re:`, the remainder of the name is treated as
    a regular expression. To match a bookmark that actually starts with `re:`,
    use the prefix `literal:`.
    """
    # i18n: "bookmark" is a keyword
    args = getargs(x, 0, 1, _('bookmark takes one or no arguments'))
    if args:
        bm = getstring(args[0],
                       # i18n: "bookmark" is a keyword
                       _('the argument to bookmark must be a string'))
        kind, pattern, matcher = _stringmatcher(bm)
        if kind == 'literal':
            bmrev = repo._bookmarks.get(bm, None)
            if not bmrev:
                raise util.Abort(_("bookmark '%s' does not exist") % bm)
            bmrev = repo[bmrev].rev()
            return subset.filter(lambda r: r == bmrev)
        else:
            matchrevs = set()
            for name, bmrev in repo._bookmarks.iteritems():
                if matcher(name):
                    matchrevs.add(bmrev)
            if not matchrevs:
                raise util.Abort(_("no bookmarks exist that match '%s'")
                                 % pattern)
            bmrevs = set()
            for bmrev in matchrevs:
                bmrevs.add(repo[bmrev].rev())
            return subset & bmrevs

    bms = set([repo[r].rev()
               for r in repo._bookmarks.values()])
    return subset.filter(lambda r: r in bms)
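Several predicates here (``bookmark``, ``branch``, ``extra``) share the ``re:``/``literal:`` prefix convention through ``_stringmatcher``, which returns a ``(kind, pattern, matcher)`` triple. A simplified sketch of that contract — the helper's actual internals differ, and the function name below is local to this example:

```python
import re

def stringmatcher(pattern):
    """Sketch of the re:/literal: prefix convention used by revset
    predicates: 're:' yields a regex search, 'literal:' (or no prefix)
    yields exact string equality."""
    if pattern.startswith('re:'):
        regex = re.compile(pattern[3:])
        return 're', pattern[3:], lambda s: regex.search(s) is not None
    if pattern.startswith('literal:'):
        pattern = pattern[len('literal:'):]
    return 'literal', pattern, pattern.__eq__
```

Note the escape hatch the docstrings describe: ``literal:re:odd`` matches a name that really is ``re:odd``, whereas ``re:odd`` would be compiled as a regular expression.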

def branch(repo, subset, x):
    """``branch(string or set)``
    All changesets belonging to the given branch or the branches of the given
    changesets.

    If `string` starts with `re:`, the remainder of the name is treated as
    a regular expression. To match a branch that actually starts with `re:`,
    use the prefix `literal:`.
    """
    try:
        b = getstring(x, '')
    except error.ParseError:
        # not a string, but another revspec, e.g. tip()
        pass
    else:
        kind, pattern, matcher = _stringmatcher(b)
        if kind == 'literal':
            # note: falls through to the revspec case if no branch with
            # this name exists
            if pattern in repo.branchmap():
                return subset.filter(lambda r: matcher(repo[r].branch()))
        else:
            return subset.filter(lambda r: matcher(repo[r].branch()))

    s = getset(repo, spanset(repo), x)
    b = set()
    for r in s:
        b.add(repo[r].branch())
    s = s.set()
    return subset.filter(lambda r: r in s or repo[r].branch() in b)

def bumped(repo, subset, x):
    """``bumped()``
    Mutable changesets marked as successors of public changesets.

    Only non-public and non-obsolete changesets can be `bumped`.
    """
    # i18n: "bumped" is a keyword
    getargs(x, 0, 0, _("bumped takes no arguments"))
    bumped = obsmod.getrevs(repo, 'bumped')
    return subset & bumped

def bundle(repo, subset, x):
    """``bundle()``
    Changesets in the bundle.

    Bundle must be specified by the -R option."""

    try:
        bundlerevs = repo.changelog.bundlerevs
    except AttributeError:
        raise util.Abort(_("no bundle provided - specify with -R"))
    return subset & bundlerevs

def checkstatus(repo, subset, pat, field):
    hasset = matchmod.patkind(pat) == 'set'

    def matches(x):
        m = None
        fname = None
        c = repo[x]
        if not m or hasset:
            m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=c)
            if not m.anypats() and len(m.files()) == 1:
                fname = m.files()[0]
        if fname is not None:
            if fname not in c.files():
                return False
        else:
            for f in c.files():
                if m(f):
                    break
            else:
                return False
        files = repo.status(c.p1().node(), c.node())[field]
        if fname is not None:
            if fname in files:
                return True
        else:
            for f in files:
                if m(f):
                    return True

    return subset.filter(matches)

def _children(repo, narrow, parentset):
    cs = set()
    if not parentset:
        return baseset(cs)
    pr = repo.changelog.parentrevs
    minrev = min(parentset)
    for r in narrow:
        if r <= minrev:
            continue
        for p in pr(r):
            if p in parentset:
                cs.add(r)
    return baseset(cs)
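``_children`` avoids any ancestor computation: it scans the candidate revisions and keeps those with at least one parent in the set, skipping anything at or below the smallest parent rev (a child's rev number is always greater than its parents'). A standalone sketch over a hypothetical parent table:

```python
# Toy parent table: rev -> (p1, p2); -1 is the null revision.
TOY_PR = {0: (-1, -1), 1: (0, -1), 2: (0, -1), 3: (1, 2)}

def children_of(parentset, narrow):
    """Mirror of _children's scan: keep revs in narrow whose parent
    is in parentset; revs <= min(parentset) cannot be children."""
    if not parentset:
        return set()
    minrev = min(parentset)
    cs = set()
    for r in narrow:
        if r <= minrev:
            continue
        if any(p in parentset for p in TOY_PR[r]):
            cs.add(r)
    return cs
```

In the toy table, revs 1 and 2 both descend directly from 0, and the merge rev 3 has both of them as parents.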

def children(repo, subset, x):
    """``children(set)``
    Child changesets of changesets in set.
    """
    s = getset(repo, baseset(repo), x).set()
    cs = _children(repo, subset, s)
    return subset & cs

def closed(repo, subset, x):
    """``closed()``
    Changeset is closed.
    """
    # i18n: "closed" is a keyword
    getargs(x, 0, 0, _("closed takes no arguments"))
    return subset.filter(lambda r: repo[r].closesbranch())

def contains(repo, subset, x):
    """``contains(pattern)``
    Revision contains a file matching pattern. See :hg:`help patterns`
    for information about file patterns.

    The pattern without explicit kind like ``glob:`` is expected to be
    relative to the current directory and match against a file exactly
    for efficiency.
    """
    # i18n: "contains" is a keyword
    pat = getstring(x, _("contains requires a pattern"))

    def matches(x):
        if not matchmod.patkind(pat):
            pats = pathutil.canonpath(repo.root, repo.getcwd(), pat)
            if pats in repo[x]:
                return True
        else:
            c = repo[x]
            m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=c)
            for f in c.manifest():
                if m(f):
                    return True
        return False

    return subset.filter(matches)

def converted(repo, subset, x):
    """``converted([id])``
    Changesets converted from the given identifier in the old repository if
    present, or all converted changesets if no identifier is specified.
    """

    # There is exactly no chance of resolving the revision, so do a simple
    # string compare and hope for the best

    rev = None
    # i18n: "converted" is a keyword
    l = getargs(x, 0, 1, _('converted takes one or no arguments'))
    if l:
        # i18n: "converted" is a keyword
        rev = getstring(l[0], _('converted requires a revision'))

    def _matchvalue(r):
        source = repo[r].extra().get('convert_revision', None)
        return source is not None and (rev is None or source.startswith(rev))

    return subset.filter(lambda r: _matchvalue(r))

def date(repo, subset, x):
    """``date(interval)``
    Changesets within the interval, see :hg:`help dates`.
    """
    # i18n: "date" is a keyword
    ds = getstring(x, _("date requires a string"))
    dm = util.matchdate(ds)
    return subset.filter(lambda x: dm(repo[x].date()[0]))

def desc(repo, subset, x):
    """``desc(string)``
    Search commit message for string. The match is case-insensitive.
    """
    # i18n: "desc" is a keyword
    ds = encoding.lower(getstring(x, _("desc requires a string")))

    def matches(x):
        c = repo[x]
        return ds in encoding.lower(c.description())

    return subset.filter(matches)

def _descendants(repo, subset, x, followfirst=False):
    args = getset(repo, spanset(repo), x)
    if not args:
        return baseset([])
    s = _revdescendants(repo, args, followfirst)

    # Both sets need to be ascending in order to lazily return the union
    # in the correct order.
    args.ascending()

    subsetset = subset.set()
    result = (orderedlazyset(s, subsetset.__contains__, ascending=True) +
              orderedlazyset(args, subsetset.__contains__, ascending=True))

    # Wrap result in a lazyset since it's an _addset, which doesn't implement
    # all the necessary functions to be consumed by callers.
    return orderedlazyset(result, lambda r: True, ascending=True)

def descendants(repo, subset, x):
    """``descendants(set)``
    Changesets which are descendants of changesets in set.
    """
    return _descendants(repo, subset, x)

def _firstdescendants(repo, subset, x):
    # ``_firstdescendants(set)``
    # Like ``descendants(set)`` but follows only the first parents.
    return _descendants(repo, subset, x, followfirst=True)

def destination(repo, subset, x):
    """``destination([set])``
    Changesets that were created by a graft, transplant or rebase operation,
    with the given revisions specified as the source. Omitting the optional set
    is the same as passing all().
    """
    if x is not None:
        args = getset(repo, spanset(repo), x).set()
    else:
        args = getall(repo, spanset(repo), x).set()

    dests = set()

    # subset contains all of the possible destinations that can be returned, so
    # iterate over them and see if their source(s) were provided in the args.
    # Even if the immediate src of r is not in the args, src's source (or
    # further back) may be. Scanning back further than the immediate src allows
    # transitive transplants and rebases to yield the same results as transitive
    # grafts.
    for r in subset:
        src = _getrevsource(repo, r)
        lineage = None

        while src is not None:
            if lineage is None:
                lineage = list()

            lineage.append(r)

            # The visited lineage is a match if the current source is in the arg
            # set. Since every candidate dest is visited by way of iterating
            # subset, any dests further back in the lineage will be tested by a
            # different iteration over subset. Likewise, if the src was already
            # selected, the current lineage can be selected without going back
            # further.
            if src in args or src in dests:
                dests.update(lineage)
                break

            r = src
            src = _getrevsource(repo, r)

    return subset.filter(lambda r: r in dests)
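The transitive scan in ``destination`` can be illustrated standalone: each candidate walks its chain of recorded sources, and the whole visited lineage is selected as soon as any source lands in the argument set (or in an already-selected destination). The source map below is a hypothetical stand-in for ``_getrevsource``:

```python
# Toy "grafted/transplanted from" links: rev -> its source rev, or None.
TOY_SOURCE = {5: 2, 6: 5, 7: None}

def destinations(args, subset):
    """Mirror of destination()'s loop: select every rev whose transitive
    source chain reaches a rev in args (or an already-selected dest)."""
    dests = set()
    for r in subset:
        lineage = []
        src = TOY_SOURCE.get(r)
        while src is not None:
            lineage.append(r)
            if src in args or src in dests:
                dests.update(lineage)
                break
            r = src
            src = TOY_SOURCE.get(r)
    return dests
```

Here rev 5 was grafted from 2 and rev 6 from 5, so asking for destinations of rev 2 picks up both 5 and its transitive copy 6, matching the comment about transitive transplants and rebases above.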
728
728
729 def divergent(repo, subset, x):
729 def divergent(repo, subset, x):
730 """``divergent()``
730 """``divergent()``
731 Final successors of changesets with an alternative set of final successors.
731 Final successors of changesets with an alternative set of final successors.
732 """
732 """
733 # i18n: "divergent" is a keyword
733 # i18n: "divergent" is a keyword
734 getargs(x, 0, 0, _("divergent takes no arguments"))
734 getargs(x, 0, 0, _("divergent takes no arguments"))
735 divergent = obsmod.getrevs(repo, 'divergent')
735 divergent = obsmod.getrevs(repo, 'divergent')
736 return subset.filter(lambda r: r in divergent)
736 return subset.filter(lambda r: r in divergent)
737
737
738 def draft(repo, subset, x):
738 def draft(repo, subset, x):
739 """``draft()``
739 """``draft()``
740 Changeset in draft phase."""
740 Changeset in draft phase."""
741 # i18n: "draft" is a keyword
741 # i18n: "draft" is a keyword
742 getargs(x, 0, 0, _("draft takes no arguments"))
742 getargs(x, 0, 0, _("draft takes no arguments"))
743 pc = repo._phasecache
743 pc = repo._phasecache
744 return subset.filter(lambda r: pc.phase(repo, r) == phases.draft)
744 return subset.filter(lambda r: pc.phase(repo, r) == phases.draft)
745
745
746 def extinct(repo, subset, x):
746 def extinct(repo, subset, x):
747 """``extinct()``
747 """``extinct()``
748 Obsolete changesets with obsolete descendants only.
748 Obsolete changesets with obsolete descendants only.
749 """
749 """
750 # i18n: "extinct" is a keyword
750 # i18n: "extinct" is a keyword
751 getargs(x, 0, 0, _("extinct takes no arguments"))
751 getargs(x, 0, 0, _("extinct takes no arguments"))
752 extincts = obsmod.getrevs(repo, 'extinct')
752 extincts = obsmod.getrevs(repo, 'extinct')
753 return subset & extincts
753 return subset & extincts
754
754
755 def extra(repo, subset, x):
755 def extra(repo, subset, x):
756 """``extra(label, [value])``
756 """``extra(label, [value])``
757 Changesets with the given label in the extra metadata, with the given
757 Changesets with the given label in the extra metadata, with the given
758 optional value.
758 optional value.
759
759
760 If `value` starts with `re:`, the remainder of the value is treated as
760 If `value` starts with `re:`, the remainder of the value is treated as
761 a regular expression. To match a value that actually starts with `re:`,
761 a regular expression. To match a value that actually starts with `re:`,
762 use the prefix `literal:`.
762 use the prefix `literal:`.
763 """
763 """
764
764
765 # i18n: "extra" is a keyword
765 # i18n: "extra" is a keyword
766 l = getargs(x, 1, 2, _('extra takes at least 1 and at most 2 arguments'))
766 l = getargs(x, 1, 2, _('extra takes at least 1 and at most 2 arguments'))
767 # i18n: "extra" is a keyword
767 # i18n: "extra" is a keyword
768 label = getstring(l[0], _('first argument to extra must be a string'))
768 label = getstring(l[0], _('first argument to extra must be a string'))
769 value = None
769 value = None
770
770
771 if len(l) > 1:
771 if len(l) > 1:
772 # i18n: "extra" is a keyword
772 # i18n: "extra" is a keyword
773 value = getstring(l[1], _('second argument to extra must be a string'))
773 value = getstring(l[1], _('second argument to extra must be a string'))
774 kind, value, matcher = _stringmatcher(value)
774 kind, value, matcher = _stringmatcher(value)
775
775
776 def _matchvalue(r):
776 def _matchvalue(r):
777 extra = repo[r].extra()
777 extra = repo[r].extra()
778 return label in extra and (value is None or matcher(extra[label]))
778 return label in extra and (value is None or matcher(extra[label]))
779
779
780 return subset.filter(lambda r: _matchvalue(r))
780 return subset.filter(lambda r: _matchvalue(r))
781
781
782 def filelog(repo, subset, x):
782 def filelog(repo, subset, x):
783 """``filelog(pattern)``
783 """``filelog(pattern)``
784 Changesets connected to the specified filelog.
784 Changesets connected to the specified filelog.
785
785
786 For performance reasons, ``filelog()`` does not show every changeset
786 For performance reasons, ``filelog()`` does not show every changeset
787 that affects the requested file(s). See :hg:`help log` for details. For
787 that affects the requested file(s). See :hg:`help log` for details. For
788 a slower, more accurate result, use ``file()``.
788 a slower, more accurate result, use ``file()``.
789
789
790 The pattern without explicit kind like ``glob:`` is expected to be
790 The pattern without explicit kind like ``glob:`` is expected to be
791 relative to the current directory and match against a file exactly
791 relative to the current directory and match against a file exactly
792 for efficiency.
792 for efficiency.
793 """
793 """
794
794
795 # i18n: "filelog" is a keyword
795 # i18n: "filelog" is a keyword
796 pat = getstring(x, _("filelog requires a pattern"))
796 pat = getstring(x, _("filelog requires a pattern"))
797 s = set()
797 s = set()
798
798
799 if not matchmod.patkind(pat):
799 if not matchmod.patkind(pat):
800 f = pathutil.canonpath(repo.root, repo.getcwd(), pat)
800 f = pathutil.canonpath(repo.root, repo.getcwd(), pat)
801 fl = repo.file(f)
801 fl = repo.file(f)
802 for fr in fl:
802 for fr in fl:
803 s.add(fl.linkrev(fr))
803 s.add(fl.linkrev(fr))
804 else:
804 else:
805 m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=repo[None])
805 m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=repo[None])
806 for f in repo[None]:
806 for f in repo[None]:
807 if m(f):
807 if m(f):
808 fl = repo.file(f)
808 fl = repo.file(f)
809 for fr in fl:
809 for fr in fl:
810 s.add(fl.linkrev(fr))
810 s.add(fl.linkrev(fr))
811
811
812 return subset.filter(lambda r: r in s)
812 return subset.filter(lambda r: r in s)
813
813
814 def first(repo, subset, x):
814 def first(repo, subset, x):
815 """``first(set, [n])``
815 """``first(set, [n])``
816 An alias for limit().
816 An alias for limit().
817 """
817 """
818 return limit(repo, subset, x)
818 return limit(repo, subset, x)
819
819
820 def _follow(repo, subset, x, name, followfirst=False):
820 def _follow(repo, subset, x, name, followfirst=False):
821 l = getargs(x, 0, 1, _("%s takes no arguments or a filename") % name)
821 l = getargs(x, 0, 1, _("%s takes no arguments or a filename") % name)
822 c = repo['.']
822 c = repo['.']
823 if l:
823 if l:
824 x = getstring(l[0], _("%s expected a filename") % name)
824 x = getstring(l[0], _("%s expected a filename") % name)
825 if x in c:
825 if x in c:
826 cx = c[x]
826 cx = c[x]
827 s = set(ctx.rev() for ctx in cx.ancestors(followfirst=followfirst))
827 s = set(ctx.rev() for ctx in cx.ancestors(followfirst=followfirst))
828 # include the revision responsible for the most recent version
828 # include the revision responsible for the most recent version
829 s.add(cx.linkrev())
829 s.add(cx.linkrev())
830 else:
830 else:
831 return baseset([])
831 return baseset([])
832 else:
832 else:
    s = _revancestors(repo, baseset([c.rev()]), followfirst)

    return subset.filter(lambda r: r in s)

def follow(repo, subset, x):
    """``follow([file])``
    An alias for ``::.`` (ancestors of the working copy's first parent).
    If a filename is specified, the history of the given file is followed,
    including copies.
    """
    return _follow(repo, subset, x, 'follow')

def _followfirst(repo, subset, x):
    # ``followfirst([file])``
    # Like ``follow([file])`` but follows only the first parent of
    # every revision or file revision.
    return _follow(repo, subset, x, '_followfirst', followfirst=True)

def getall(repo, subset, x):
    """``all()``
    All changesets, the same as ``0:tip``.
    """
    # i18n: "all" is a keyword
    getargs(x, 0, 0, _("all takes no arguments"))
    return subset

def grep(repo, subset, x):
    """``grep(regex)``
    Like ``keyword(string)`` but accepts a regex. Use ``grep(r'...')``
    to ensure special escape characters are handled correctly. Unlike
    ``keyword(string)``, the match is case-sensitive.
    """
    try:
        # i18n: "grep" is a keyword
        gr = re.compile(getstring(x, _("grep requires a string")))
    except re.error, e:
        raise error.ParseError(_('invalid match pattern: %s') % e)

    def matches(x):
        c = repo[x]
        for e in c.files() + [c.user(), c.description()]:
            if gr.search(e):
                return True
        return False

    return subset.filter(matches)

def _matchfiles(repo, subset, x):
    # _matchfiles takes a revset list of prefixed arguments:
    #
    #   [p:foo, i:bar, x:baz]
    #
    # builds a match object from them and filters subset. Allowed
    # prefixes are 'p:' for regular patterns, 'i:' for include
    # patterns and 'x:' for exclude patterns. Use 'r:' prefix to pass
    # a revision identifier, or the empty string to reference the
    # working directory, from which the match object is
    # initialized. Use 'd:' to set the default matching mode, default
    # to 'glob'. At most one 'r:' and 'd:' argument can be passed.

    # i18n: "_matchfiles" is a keyword
    l = getargs(x, 1, -1, _("_matchfiles requires at least one argument"))
    pats, inc, exc = [], [], []
    hasset = False
    rev, default = None, None
    for arg in l:
        # i18n: "_matchfiles" is a keyword
        s = getstring(arg, _("_matchfiles requires string arguments"))
        prefix, value = s[:2], s[2:]
        if prefix == 'p:':
            pats.append(value)
        elif prefix == 'i:':
            inc.append(value)
        elif prefix == 'x:':
            exc.append(value)
        elif prefix == 'r:':
            if rev is not None:
                # i18n: "_matchfiles" is a keyword
                raise error.ParseError(_('_matchfiles expected at most one '
                                         'revision'))
            rev = value
        elif prefix == 'd:':
            if default is not None:
                # i18n: "_matchfiles" is a keyword
                raise error.ParseError(_('_matchfiles expected at most one '
                                         'default mode'))
            default = value
        else:
            # i18n: "_matchfiles" is a keyword
            raise error.ParseError(_('invalid _matchfiles prefix: %s') % prefix)
        if not hasset and matchmod.patkind(value) == 'set':
            hasset = True
    if not default:
        default = 'glob'

    def matches(x):
        m = None
        c = repo[x]
        if not m or (hasset and rev is None):
            ctx = c
            if rev is not None:
                ctx = repo[rev or None]
            m = matchmod.match(repo.root, repo.getcwd(), pats, include=inc,
                               exclude=exc, ctx=ctx, default=default)
        for f in c.files():
            if m(f):
                return True
        return False

    return subset.filter(matches)

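# The prefix dispatch above ('p:' / 'i:' / 'x:') can be sketched in plain
# Python. This is an illustration only: the `parseargs` helper and the sample
# values below are hypothetical and not part of Mercurial's API.

```python
def parseargs(args):
    # Split each "Xy:value"-style argument on its two-character prefix,
    # mirroring the p:/i:/x: handling in _matchfiles above.
    pats, inc, exc = [], [], []
    for s in args:
        prefix, value = s[:2], s[2:]
        if prefix == 'p:':
            pats.append(value)
        elif prefix == 'i:':
            inc.append(value)
        elif prefix == 'x:':
            exc.append(value)
        else:
            raise ValueError('invalid prefix: %s' % prefix)
    return pats, inc, exc

# pats == ['foo'], inc == ['bar'], exc == ['baz']
pats, inc, exc = parseargs(['p:foo', 'i:bar', 'x:baz'])
```

# As in _matchfiles, an unrecognized prefix is a hard error rather than being
# silently skipped, so malformed revset arguments surface immediately.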
def hasfile(repo, subset, x):
    """``file(pattern)``
    Changesets affecting files matched by pattern.

    For a faster but less accurate result, consider using ``filelog()``
    instead.

    This predicate uses ``glob:`` as the default kind of pattern.
    """
    # i18n: "file" is a keyword
    pat = getstring(x, _("file requires a pattern"))
    return _matchfiles(repo, subset, ('string', 'p:' + pat))

def head(repo, subset, x):
    """``head()``
    Changeset is a named branch head.
    """
    # i18n: "head" is a keyword
    getargs(x, 0, 0, _("head takes no arguments"))
    hs = set()
    for b, ls in repo.branchmap().iteritems():
        hs.update(repo[h].rev() for h in ls)
    return baseset(hs).filter(subset.__contains__)

def heads(repo, subset, x):
    """``heads(set)``
    Members of set with no children in set.
    """
    s = getset(repo, subset, x)
    ps = parents(repo, subset, x)
    return s - ps

def hidden(repo, subset, x):
    """``hidden()``
    Hidden changesets.
    """
    # i18n: "hidden" is a keyword
    getargs(x, 0, 0, _("hidden takes no arguments"))
    hiddenrevs = repoview.filterrevs(repo, 'visible')
    return subset & hiddenrevs

def keyword(repo, subset, x):
    """``keyword(string)``
    Search commit message, user name, and names of changed files for
    string. The match is case-insensitive.
    """
    # i18n: "keyword" is a keyword
    kw = encoding.lower(getstring(x, _("keyword requires a string")))

    def matches(r):
        c = repo[r]
        return util.any(kw in encoding.lower(t) for t in c.files() + [c.user(),
            c.description()])

    return subset.filter(matches)

def limit(repo, subset, x):
    """``limit(set, [n])``
    First n members of set, defaulting to 1.
    """
    # i18n: "limit" is a keyword
    l = getargs(x, 1, 2, _("limit requires one or two arguments"))
    try:
        lim = 1
        if len(l) == 2:
            # i18n: "limit" is a keyword
            lim = int(getstring(l[1], _("limit requires a number")))
    except (TypeError, ValueError):
        # i18n: "limit" is a keyword
        raise error.ParseError(_("limit expects a number"))
    ss = subset.set()
    os = getset(repo, spanset(repo), l[0])
    bs = baseset([])
    it = iter(os)
    for x in xrange(lim):
        try:
            y = it.next()
            if y in ss:
                bs.append(y)
        except (StopIteration):
            break
    return bs

def last(repo, subset, x):
    """``last(set, [n])``
    Last n members of set, defaulting to 1.
    """
    # i18n: "last" is a keyword
    l = getargs(x, 1, 2, _("last requires one or two arguments"))
    try:
        lim = 1
        if len(l) == 2:
            # i18n: "last" is a keyword
            lim = int(getstring(l[1], _("last requires a number")))
    except (TypeError, ValueError):
        # i18n: "last" is a keyword
        raise error.ParseError(_("last expects a number"))
    ss = subset.set()
    os = getset(repo, spanset(repo), l[0])
    os.reverse()
    bs = baseset([])
    it = iter(os)
    for x in xrange(lim):
        try:
            y = it.next()
            if y in ss:
                bs.append(y)
        except (StopIteration):
            break
    return bs

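# The shared loop in limit() and last() above examines at most n items from an
# ordered revision sequence and keeps only those present in the subset (items
# not in the subset still consume an iteration slot). A minimal plain-Python
# sketch of that pattern; `takefirst` and its arguments are illustrative, not
# Mercurial API:

```python
def takefirst(revs, members, n):
    # Pull at most n items from revs, in order, keeping those in members.
    # Mirrors the xrange/it.next() loop in limit(); last() is the same
    # loop run over the reversed sequence.
    out = []
    it = iter(revs)
    for _ in range(n):
        try:
            y = next(it)
        except StopIteration:
            break
        if y in members:
            out.append(y)
    return out
```

# Note the asymmetry this sketch preserves: n bounds how many revisions are
# examined, not how many are returned, so a sparse subset can yield fewer
# than n results.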
def maxrev(repo, subset, x):
    """``max(set)``
    Changeset with highest revision number in set.
    """
    os = getset(repo, spanset(repo), x)
    if os:
        m = os.max()
        if m in subset:
            return baseset([m])
    return baseset([])

def merge(repo, subset, x):
    """``merge()``
    Changeset is a merge changeset.
    """
    # i18n: "merge" is a keyword
    getargs(x, 0, 0, _("merge takes no arguments"))
    cl = repo.changelog
    return subset.filter(lambda r: cl.parentrevs(r)[1] != -1)

def branchpoint(repo, subset, x):
    """``branchpoint()``
    Changesets with more than one child.
    """
    # i18n: "branchpoint" is a keyword
    getargs(x, 0, 0, _("branchpoint takes no arguments"))
    cl = repo.changelog
    if not subset:
        return baseset([])
    baserev = min(subset)
    parentscount = [0]*(len(repo) - baserev)
    for r in cl.revs(start=baserev + 1):
        for p in cl.parentrevs(r):
            if p >= baserev:
                parentscount[p - baserev] += 1
    return subset.filter(lambda r: parentscount[r - baserev] > 1)

def minrev(repo, subset, x):
    """``min(set)``
    Changeset with lowest revision number in set.
    """
    os = getset(repo, spanset(repo), x)
    if os:
        m = os.min()
        if m in subset:
            return baseset([m])
    return baseset([])

def _missingancestors(repo, subset, x):
    # i18n: "_missingancestors" is a keyword
    revs, bases = getargs(x, 2, 2,
                          _("_missingancestors requires two arguments"))
    rs = baseset(repo)
    revs = getset(repo, rs, revs)
    bases = getset(repo, rs, bases)
    missing = set(repo.changelog.findmissingrevs(bases, revs))
    return baseset([r for r in subset if r in missing])

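# findmissingrevs() as used by _missingancestors above computes the ancestors
# of revs (inclusive) that are not also ancestors of bases. A toy-DAG sketch
# of that set difference; `missingrevs` and the dict-based parent map are
# illustrative assumptions, not the changelog's actual interface:

```python
def missingrevs(parents, bases, revs):
    # parents maps each rev to a list of parent revs (a toy DAG).
    # Returns ancestors-of-revs minus ancestors-of-bases, both inclusive,
    # which is the shape of the set findmissingrevs() yields.
    def ancestors(heads):
        seen = set()
        stack = list(heads)
        while stack:
            r = stack.pop()
            if r not in seen:
                seen.add(r)
                stack.extend(parents.get(r, []))
        return seen
    return ancestors(revs) - ancestors(bases)
```

# With parents = {1: [0], 2: [1], 3: [1]}, missingrevs(parents, [2], [3])
# is {3}: revision 3's other ancestors (1 and 0) are already reachable
# from base 2.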
def modifies(repo, subset, x):
    """``modifies(pattern)``
    Changesets modifying files matched by pattern.

    The pattern without explicit kind like ``glob:`` is expected to be
    relative to the current directory and match against a file or a
    directory.
    """
    # i18n: "modifies" is a keyword
    pat = getstring(x, _("modifies requires a pattern"))
    return checkstatus(repo, subset, pat, 0)

def node_(repo, subset, x):
    """``id(string)``
    Revision non-ambiguously specified by the given hex string prefix.
    """
    # i18n: "id" is a keyword
    l = getargs(x, 1, 1, _("id requires one argument"))
    # i18n: "id" is a keyword
    n = getstring(l[0], _("id requires a string"))
    if len(n) == 40:
        rn = repo[n].rev()
    else:
        rn = None
        pm = repo.changelog._partialmatch(n)
        if pm is not None:
            rn = repo.changelog.rev(pm)

    return subset.filter(lambda r: r == rn)

def obsolete(repo, subset, x):
    """``obsolete()``
    Mutable changeset with a newer version."""
    # i18n: "obsolete" is a keyword
    getargs(x, 0, 0, _("obsolete takes no arguments"))
    obsoletes = obsmod.getrevs(repo, 'obsolete')
    return subset & obsoletes

def origin(repo, subset, x):
    """``origin([set])``
    Changesets that were specified as a source for the grafts, transplants or
    rebases that created the given revisions. Omitting the optional set is the
    same as passing all(). If a changeset created by these operations is itself
    specified as a source for one of these operations, only the source changeset
    for the first operation is selected.
    """
    if x is not None:
        args = getset(repo, spanset(repo), x).set()
    else:
        args = getall(repo, spanset(repo), x).set()

    def _firstsrc(rev):
        src = _getrevsource(repo, rev)
        if src is None:
            return None

        while True:
            prev = _getrevsource(repo, src)

            if prev is None:
                return src
            src = prev

    o = set([_firstsrc(r) for r in args])
    return subset.filter(lambda r: r in o)

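# _firstsrc() in origin() above walks graft/transplant/rebase source links
# back until it reaches a revision that has no recorded source. A dict-based
# sketch of that chain walk; `firstsrc` and the `source` mapping are
# illustrative stand-ins for _getrevsource(), not Mercurial API:

```python
def firstsrc(source, rev):
    # source maps a rev to the rev it was grafted/transplanted/rebased
    # from; original revisions are absent from the mapping.
    src = source.get(rev)
    if src is None:
        # rev was not created by one of these operations.
        return None
    while True:
        prev = source.get(src)
        if prev is None:
            # src has no recorded source: it is the original changeset.
            return src
        src = prev
```

# With source = {5: 3, 3: 1}, both 5 and 3 resolve to original revision 1,
# matching the docstring's rule that only the first operation's source
# changeset is selected.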
def outgoing(repo, subset, x):
    """``outgoing([path])``
    Changesets not found in the specified destination repository, or the
    default push location.
    """
    import hg # avoid start-up nasties
    # i18n: "outgoing" is a keyword
    l = getargs(x, 0, 1, _("outgoing takes one or no arguments"))
    # i18n: "outgoing" is a keyword
    dest = l and getstring(l[0], _("outgoing requires a repository path")) or ''
    dest = repo.ui.expandpath(dest or 'default-push', dest or 'default')
    dest, branches = hg.parseurl(dest)
    revs, checkout = hg.addbranchrevs(repo, repo, branches, [])
    if revs:
        revs = [repo.lookup(rev) for rev in revs]
    other = hg.peer(repo, {}, dest)
    repo.ui.pushbuffer()
    outgoing = discovery.findcommonoutgoing(repo, other, onlyheads=revs)
    repo.ui.popbuffer()
    cl = repo.changelog
    o = set([cl.rev(r) for r in outgoing.missing])
    return subset.filter(lambda r: r in o)

def p1(repo, subset, x):
    """``p1([set])``
    First parent of changesets in set, or the working directory.
    """
    if x is None:
        p = repo[x].p1().rev()
        return subset.filter(lambda r: r == p)

    ps = set()
    cl = repo.changelog
    for r in getset(repo, spanset(repo), x):
        ps.add(cl.parentrevs(r)[0])
    return subset & ps

def p2(repo, subset, x):
    """``p2([set])``
    Second parent of changesets in set, or the working directory.
    """
    if x is None:
        ps = repo[x].parents()
        try:
            p = ps[1].rev()
            return subset.filter(lambda r: r == p)
        except IndexError:
            return baseset([])

    ps = set()
    cl = repo.changelog
    for r in getset(repo, spanset(repo), x):
        ps.add(cl.parentrevs(r)[1])
    return subset & ps

def parents(repo, subset, x):
    """``parents([set])``
    The set of all parents for all changesets in set, or the working directory.
    """
    if x is None:
        ps = tuple(p.rev() for p in repo[x].parents())
        return subset & ps

    ps = set()
    cl = repo.changelog
    for r in getset(repo, spanset(repo), x):
        ps.update(cl.parentrevs(r))
    return subset & ps

def parentspec(repo, subset, x, n):
    """``set^0``
    The set.
    ``set^1`` (or ``set^``), ``set^2``
    First or second parent, respectively, of all changesets in set.
    """
    try:
        n = int(n[1])
        if n not in (0, 1, 2):
            raise ValueError
    except (TypeError, ValueError):
        raise error.ParseError(_("^ expects a number 0, 1, or 2"))
    ps = set()
    cl = repo.changelog
    for r in getset(repo, baseset(cl), x):
        if n == 0:
            ps.add(r)
        elif n == 1:
            ps.add(cl.parentrevs(r)[0])
        elif n == 2:
            parents = cl.parentrevs(r)
            if len(parents) > 1:
                ps.add(parents[1])
    return subset & ps

def present(repo, subset, x):
    """``present(set)``
    An empty set, if any revision in set isn't found; otherwise,
    all revisions in set.

    If any of the specified revisions is not present in the local repository,
    the query is normally aborted. But this predicate allows the query
    to continue even in such cases.
    """
    try:
        return getset(repo, subset, x)
    except error.RepoLookupError:
        return baseset([])

def public(repo, subset, x):
    """``public()``
    Changeset in public phase."""
    # i18n: "public" is a keyword
    getargs(x, 0, 0, _("public takes no arguments"))
    pc = repo._phasecache
    return subset.filter(lambda r: pc.phase(repo, r) == phases.public)

def remote(repo, subset, x):
    """``remote([id [,path]])``
    Local revision that corresponds to the given identifier in a
    remote repository, if present. Here, the '.' identifier is a
    synonym for the current local branch.
    """

    import hg # avoid start-up nasties
    # i18n: "remote" is a keyword
    l = getargs(x, 0, 2, _("remote takes one, two or no arguments"))

    q = '.'
    if len(l) > 0:
        # i18n: "remote" is a keyword
        q = getstring(l[0], _("remote requires a string id"))
    if q == '.':
        q = repo['.'].branch()

    dest = ''
    if len(l) > 1:
        # i18n: "remote" is a keyword
        dest = getstring(l[1], _("remote requires a repository path"))
    dest = repo.ui.expandpath(dest or 'default')
    dest, branches = hg.parseurl(dest)
    revs, checkout = hg.addbranchrevs(repo, repo, branches, [])
    if revs:
        revs = [repo.lookup(rev) for rev in revs]
    other = hg.peer(repo, {}, dest)
    n = other.lookup(q)
    if n in repo:
        r = repo[n].rev()
        if r in subset:
            return baseset([r])
    return baseset([])

1330 def removes(repo, subset, x):
1330 def removes(repo, subset, x):
1331 """``removes(pattern)``
1331 """``removes(pattern)``
1332 Changesets which remove files matching pattern.
1332 Changesets which remove files matching pattern.
1333
1333
1334 The pattern without explicit kind like ``glob:`` is expected to be
1334 The pattern without explicit kind like ``glob:`` is expected to be
1335 relative to the current directory and match against a file or a
1335 relative to the current directory and match against a file or a
1336 directory.
1336 directory.
1337 """
1337 """
1338 # i18n: "removes" is a keyword
1338 # i18n: "removes" is a keyword
1339 pat = getstring(x, _("removes requires a pattern"))
1339 pat = getstring(x, _("removes requires a pattern"))
1340 return checkstatus(repo, subset, pat, 2)
1340 return checkstatus(repo, subset, pat, 2)
1341
1341
1342 def rev(repo, subset, x):
1342 def rev(repo, subset, x):
1343 """``rev(number)``
1343 """``rev(number)``
1344 Revision with the given numeric identifier.
1344 Revision with the given numeric identifier.
1345 """
1345 """
1346 # i18n: "rev" is a keyword
1346 # i18n: "rev" is a keyword
1347 l = getargs(x, 1, 1, _("rev requires one argument"))
1347 l = getargs(x, 1, 1, _("rev requires one argument"))
1348 try:
1348 try:
1349 # i18n: "rev" is a keyword
1349 # i18n: "rev" is a keyword
1350 l = int(getstring(l[0], _("rev requires a number")))
1350 l = int(getstring(l[0], _("rev requires a number")))
1351 except (TypeError, ValueError):
1351 except (TypeError, ValueError):
1352 # i18n: "rev" is a keyword
1352 # i18n: "rev" is a keyword
1353 raise error.ParseError(_("rev expects a number"))
1353 raise error.ParseError(_("rev expects a number"))
1354 return subset.filter(lambda r: r == l)
1354 return subset.filter(lambda r: r == l)
1355
1355
def matching(repo, subset, x):
    """``matching(revision [, field])``
    Changesets in which a given set of fields match the set of fields in the
    selected revision or set.

    To match more than one field pass the list of fields to match separated
    by spaces (e.g. ``author description``).

    Valid fields are most regular revision fields and some special fields.

    Regular revision fields are ``description``, ``author``, ``branch``,
    ``date``, ``files``, ``phase``, ``parents``, ``substate``, ``user``
    and ``diff``.
    Note that ``author`` and ``user`` are synonyms. ``diff`` refers to the
    contents of the revision. Two revisions matching their ``diff`` will
    also match their ``files``.

    Special fields are ``summary`` and ``metadata``:
    ``summary`` matches the first line of the description.
    ``metadata`` is equivalent to matching ``description user date``
    (i.e. it matches the main metadata fields).

    ``metadata`` is the default field which is used when no fields are
    specified. You can match more than one field at a time.
    """
    # i18n: "matching" is a keyword
    l = getargs(x, 1, 2, _("matching takes 1 or 2 arguments"))

    revs = getset(repo, baseset(repo.changelog), l[0])

    fieldlist = ['metadata']
    if len(l) > 1:
        fieldlist = getstring(l[1],
                              # i18n: "matching" is a keyword
                              _("matching requires a string "
                                "as its second argument")).split()

    # Make sure that there are no repeated fields,
    # expand the 'special' 'metadata' field type
    # and check the 'files' whenever we check the 'diff'
    fields = []
    for field in fieldlist:
        if field == 'metadata':
            fields += ['user', 'description', 'date']
        elif field == 'diff':
            # a revision matching the diff must also match the files
            # since matching the diff is very costly, make sure to
            # also match the files first
            fields += ['files', 'diff']
        else:
            if field == 'author':
                field = 'user'
            fields.append(field)
    fields = set(fields)
    if 'summary' in fields and 'description' in fields:
        # If a revision matches its description it also matches its summary
        fields.discard('summary')

    # We may want to match more than one field
    # Not all fields take the same amount of time to be matched
    # Sort the selected fields in order of increasing matching cost
    fieldorder = ['phase', 'parents', 'user', 'date', 'branch', 'summary',
        'files', 'description', 'substate', 'diff']
    def fieldkeyfunc(f):
        try:
            return fieldorder.index(f)
        except ValueError:
            # assume an unknown field is very costly
            return len(fieldorder)
    fields = list(fields)
    fields.sort(key=fieldkeyfunc)

    # Each field will be matched with its own "getfield" function
    # which will be added to the getfieldfuncs array of functions
    getfieldfuncs = []
    _funcs = {
        'user': lambda r: repo[r].user(),
        'branch': lambda r: repo[r].branch(),
        'date': lambda r: repo[r].date(),
        'description': lambda r: repo[r].description(),
        'files': lambda r: repo[r].files(),
        'parents': lambda r: repo[r].parents(),
        'phase': lambda r: repo[r].phase(),
        'substate': lambda r: repo[r].substate,
        'summary': lambda r: repo[r].description().splitlines()[0],
        'diff': lambda r: list(repo[r].diff(git=True)),
    }
    for info in fields:
        getfield = _funcs.get(info, None)
        if getfield is None:
            raise error.ParseError(
                # i18n: "matching" is a keyword
                _("unexpected field name passed to matching: %s") % info)
        getfieldfuncs.append(getfield)
    # convert the getfield array of functions into a "getinfo" function
    # which returns an array of field values (or a single value if there
    # is only one field to match)
    getinfo = lambda r: [f(r) for f in getfieldfuncs]

    def matches(x):
        for rev in revs:
            target = getinfo(rev)
            match = True
            for n, f in enumerate(getfieldfuncs):
                if target[n] != f(x):
                    match = False
            if match:
                return True
        return False

    return subset.filter(matches)
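# The cost-ordered field sort in matching() can be exercised standalone.
# The sketch below (Python 3, unlike the Python 2 module above) copies only
# fieldorder and fieldkeyfunc from the function; the sample field list is
# illustrative.
#
# ```python
# # Known fields sort by their position in fieldorder (cheapest first);
# # unknown fields get the maximum key and therefore sort last.
# fieldorder = ['phase', 'parents', 'user', 'date', 'branch', 'summary',
#               'files', 'description', 'substate', 'diff']
#
# def fieldkeyfunc(f):
#     try:
#         return fieldorder.index(f)
#     except ValueError:
#         # assume an unknown field is very costly
#         return len(fieldorder)
#
# fields = ['diff', 'user', 'files', 'mystery']
# fields.sort(key=fieldkeyfunc)
# print(fields)  # ['user', 'files', 'diff', 'mystery']
# ```

```python
# Known fields sort by their position in fieldorder (cheapest first);
# unknown fields get the maximum key and therefore sort last.
fieldorder = ['phase', 'parents', 'user', 'date', 'branch', 'summary',
              'files', 'description', 'substate', 'diff']

def fieldkeyfunc(f):
    try:
        return fieldorder.index(f)
    except ValueError:
        # assume an unknown field is very costly
        return len(fieldorder)

fields = ['diff', 'user', 'files', 'mystery']  # 'mystery' is a made-up field
fields.sort(key=fieldkeyfunc)
print(fields)  # ['user', 'files', 'diff', 'mystery']
```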

def reverse(repo, subset, x):
    """``reverse(set)``
    Reverse order of set.
    """
    l = getset(repo, subset, x)
    l.reverse()
    return l

def roots(repo, subset, x):
    """``roots(set)``
    Changesets in set with no parent changeset in set.
    """
    s = getset(repo, spanset(repo), x).set()
    subset = baseset([r for r in s if r in subset.set()])
    cs = _children(repo, subset, s)
    return subset - cs

def secret(repo, subset, x):
    """``secret()``
    Changeset in secret phase."""
    # i18n: "secret" is a keyword
    getargs(x, 0, 0, _("secret takes no arguments"))
    pc = repo._phasecache
    return subset.filter(lambda x: pc.phase(repo, x) == phases.secret)

def sort(repo, subset, x):
    """``sort(set[, [-]key...])``
    Sort set by keys. The default sort order is ascending, specify a key
    as ``-key`` to sort in descending order.

    The keys can be:

    - ``rev`` for the revision number,
    - ``branch`` for the branch name,
    - ``desc`` for the commit message (description),
    - ``user`` for user name (``author`` can be used as an alias),
    - ``date`` for the commit date
    """
    # i18n: "sort" is a keyword
    l = getargs(x, 1, 2, _("sort requires one or two arguments"))
    keys = "rev"
    if len(l) == 2:
        # i18n: "sort" is a keyword
        keys = getstring(l[1], _("sort spec must be a string"))

    s = l[0]
    keys = keys.split()
    l = []
    def invert(s):
        return "".join(chr(255 - ord(c)) for c in s)
    revs = getset(repo, subset, s)
    if keys == ["rev"]:
        revs.sort()
        return revs
    elif keys == ["-rev"]:
        revs.sort(reverse=True)
        return revs
    for r in revs:
        c = repo[r]
        e = []
        for k in keys:
            if k == 'rev':
                e.append(r)
            elif k == '-rev':
                e.append(-r)
            elif k == 'branch':
                e.append(c.branch())
            elif k == '-branch':
                e.append(invert(c.branch()))
            elif k == 'desc':
                e.append(c.description())
            elif k == '-desc':
                e.append(invert(c.description()))
            elif k in 'user author':
                e.append(c.user())
            elif k in '-user -author':
                e.append(invert(c.user()))
            elif k == 'date':
                e.append(c.date()[0])
            elif k == '-date':
                e.append(-c.date()[0])
            else:
                raise error.ParseError(_("unknown sort key %r") % k)
        e.append(r)
        l.append(e)
    l.sort()
    return baseset([e[-1] for e in l])
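# The invert() helper inside sort() reverses lexicographic order for string
# keys by complementing each byte, so a plain ascending sort on the inverted
# key yields a descending sort on the original (for keys that are not
# prefixes of one another). A standalone Python 3 sketch with made-up branch
# names:

```python
# Complementing each character code flips the comparison between any two
# distinct leading characters, turning ascending order into descending.
def invert(s):
    return "".join(chr(255 - ord(c)) for c in s)

names = ['stable', 'default', 'release']  # illustrative branch names
asc = sorted(names)
desc = sorted(names, key=invert)
print(asc)   # ['default', 'release', 'stable']
print(desc)  # ['stable', 'release', 'default']
```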

def _stringmatcher(pattern):
    """
    accepts a string, possibly starting with 're:' or 'literal:' prefix.
    returns the matcher name, pattern, and matcher function.
    missing or unknown prefixes are treated as literal matches.

    helper for tests:
    >>> def test(pattern, *tests):
    ...     kind, pattern, matcher = _stringmatcher(pattern)
    ...     return (kind, pattern, [bool(matcher(t)) for t in tests])

    exact matching (no prefix):
    >>> test('abcdefg', 'abc', 'def', 'abcdefg')
    ('literal', 'abcdefg', [False, False, True])

    regex matching ('re:' prefix)
    >>> test('re:a.+b', 'nomatch', 'fooadef', 'fooadefbar')
    ('re', 'a.+b', [False, False, True])

    force exact matches ('literal:' prefix)
    >>> test('literal:re:foobar', 'foobar', 're:foobar')
    ('literal', 're:foobar', [False, True])

    unknown prefixes are ignored and treated as literals
    >>> test('foo:bar', 'foo', 'bar', 'foo:bar')
    ('literal', 'foo:bar', [False, False, True])
    """
    if pattern.startswith('re:'):
        pattern = pattern[3:]
        try:
            regex = re.compile(pattern)
        except re.error, e:
            raise error.ParseError(_('invalid regular expression: %s')
                                   % e)
        return 're', pattern, regex.search
    elif pattern.startswith('literal:'):
        pattern = pattern[8:]
    return 'literal', pattern, pattern.__eq__
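# The prefix dispatch of _stringmatcher can be run outside the module. The
# Python 3 sketch below mirrors that logic but swaps the Mercurial-specific
# error.ParseError for ValueError and drops the i18n wrapper, purely so the
# snippet is self-contained:

```python
import re

def stringmatcher(pattern):
    # 're:' prefix: compile the rest as a regex and match with search()
    if pattern.startswith('re:'):
        pattern = pattern[3:]
        try:
            regex = re.compile(pattern)
        except re.error as e:
            # stand-in for error.ParseError in the real module
            raise ValueError('invalid regular expression: %s' % e)
        return 're', pattern, regex.search
    # 'literal:' prefix: strip it and fall through to an exact match
    elif pattern.startswith('literal:'):
        pattern = pattern[8:]
    # missing or unknown prefixes are treated as literal matches
    return 'literal', pattern, pattern.__eq__

print(stringmatcher('re:a.+b')[0])         # 're'
print(stringmatcher('literal:re:foo')[1])  # 're:foo'
print(stringmatcher('plain')[2]('plain'))  # True
```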

def _substringmatcher(pattern):
    kind, pattern, matcher = _stringmatcher(pattern)
    if kind == 'literal':
        matcher = lambda s: pattern in s
    return kind, pattern, matcher

def tag(repo, subset, x):
    """``tag([name])``
    The specified tag by name, or all tagged revisions if no name is given.

    If `name` starts with `re:`, the remainder of the name is treated as
    a regular expression. To match a tag that actually starts with `re:`,
    use the prefix `literal:`.
    """
    # i18n: "tag" is a keyword
    args = getargs(x, 0, 1, _("tag takes one or no arguments"))
    cl = repo.changelog
    if args:
        pattern = getstring(args[0],
                            # i18n: "tag" is a keyword
                            _('the argument to tag must be a string'))
        kind, pattern, matcher = _stringmatcher(pattern)
        if kind == 'literal':
            # avoid resolving all tags
            tn = repo._tagscache.tags.get(pattern, None)
            if tn is None:
                raise util.Abort(_("tag '%s' does not exist") % pattern)
            s = set([repo[tn].rev()])
        else:
            s = set([cl.rev(n) for t, n in repo.tagslist() if matcher(t)])
    else:
        s = set([cl.rev(n) for t, n in repo.tagslist() if t != 'tip'])
    return subset & s

def tagged(repo, subset, x):
    return tag(repo, subset, x)

def unstable(repo, subset, x):
    """``unstable()``
    Non-obsolete changesets with obsolete ancestors.
    """
    # i18n: "unstable" is a keyword
    getargs(x, 0, 0, _("unstable takes no arguments"))
    unstables = obsmod.getrevs(repo, 'unstable')
    return subset & unstables


def user(repo, subset, x):
    """``user(string)``
    User name contains string. The match is case-insensitive.

    If `string` starts with `re:`, the remainder of the string is treated as
    a regular expression. To match a user that actually contains `re:`, use
    the prefix `literal:`.
    """
    return author(repo, subset, x)

# for internal use
def _list(repo, subset, x):
    s = getstring(x, "internal error")
    if not s:
        return baseset([])
    ls = [repo[r].rev() for r in s.split('\0')]
    s = subset.set()
    return baseset([r for r in ls if r in s])

# for internal use
def _intlist(repo, subset, x):
    s = getstring(x, "internal error")
    if not s:
        return baseset([])
    ls = [int(r) for r in s.split('\0')]
    s = subset.set()
    return baseset([r for r in ls if r in s])

# for internal use
def _hexlist(repo, subset, x):
    s = getstring(x, "internal error")
    if not s:
        return baseset([])
    cl = repo.changelog
    ls = [cl.rev(node.bin(r)) for r in s.split('\0')]
    s = subset.set()
    return baseset([r for r in ls if r in s])

symbols = {
    "adds": adds,
    "all": getall,
    "ancestor": ancestor,
    "ancestors": ancestors,
    "_firstancestors": _firstancestors,
    "author": author,
    "only": only,
    "bisect": bisect,
    "bisected": bisected,
    "bookmark": bookmark,
    "branch": branch,
    "branchpoint": branchpoint,
    "bumped": bumped,
    "bundle": bundle,
    "children": children,
    "closed": closed,
    "contains": contains,
    "converted": converted,
    "date": date,
    "desc": desc,
    "descendants": descendants,
    "_firstdescendants": _firstdescendants,
    "destination": destination,
    "divergent": divergent,
    "draft": draft,
    "extinct": extinct,
    "extra": extra,
    "file": hasfile,
    "filelog": filelog,
    "first": first,
    "follow": follow,
    "_followfirst": _followfirst,
    "grep": grep,
    "head": head,
    "heads": heads,
    "hidden": hidden,
    "id": node_,
    "keyword": keyword,
    "last": last,
    "limit": limit,
    "_matchfiles": _matchfiles,
    "max": maxrev,
    "merge": merge,
    "min": minrev,
    "_missingancestors": _missingancestors,
    "modifies": modifies,
    "obsolete": obsolete,
    "origin": origin,
    "outgoing": outgoing,
    "p1": p1,
    "p2": p2,
    "parents": parents,
    "present": present,
    "public": public,
    "remote": remote,
    "removes": removes,
    "rev": rev,
    "reverse": reverse,
    "roots": roots,
    "sort": sort,
    "secret": secret,
    "matching": matching,
    "tag": tag,
    "tagged": tagged,
    "user": user,
    "unstable": unstable,
    "_list": _list,
    "_intlist": _intlist,
    "_hexlist": _hexlist,
}

# symbols which can't be used for a DoS attack for any given input
# (e.g. those which accept regexes as plain strings shouldn't be included)
# functions that just return a lot of changesets (like all) don't count here
safesymbols = set([
    "adds",
    "all",
    "ancestor",
    "ancestors",
    "_firstancestors",
    "author",
    "bisect",
    "bisected",
    "bookmark",
    "branch",
    "branchpoint",
    "bumped",
    "bundle",
    "children",
    "closed",
    "converted",
    "date",
    "desc",
    "descendants",
    "_firstdescendants",
    "destination",
    "divergent",
    "draft",
    "extinct",
    "extra",
    "file",
    "filelog",
    "first",
    "follow",
    "_followfirst",
    "head",
    "heads",
    "hidden",
    "id",
    "keyword",
    "last",
    "limit",
    "_matchfiles",
    "max",
    "merge",
    "min",
    "_missingancestors",
    "modifies",
    "obsolete",
    "origin",
    "outgoing",
    "p1",
    "p2",
    "parents",
    "present",
    "public",
    "remote",
    "removes",
    "rev",
    "reverse",
    "roots",
    "sort",
    "secret",
    "matching",
    "tag",
    "tagged",
    "user",
    "unstable",
    "_list",
    "_intlist",
    "_hexlist",
    ])

methods = {
    "range": rangeset,
    "dagrange": dagrange,
    "string": stringset,
    "symbol": symbolset,
    "and": andset,
    "or": orset,
    "not": notset,
    "list": listset,
    "func": func,
    "ancestor": ancestorspec,
    "parent": parentspec,
    "parentpost": p1,
}

def optimize(x, small):
    if x is None:
        return 0, x

    smallbonus = 1
    if small:
        smallbonus = .5

    op = x[0]
    if op == 'minus':
        return optimize(('and', x[1], ('not', x[2])), small)
    elif op == 'dagrangepre':
        return optimize(('func', ('symbol', 'ancestors'), x[1]), small)
    elif op == 'dagrangepost':
        return optimize(('func', ('symbol', 'descendants'), x[1]), small)
    elif op == 'rangepre':
        return optimize(('range', ('string', '0'), x[1]), small)
    elif op == 'rangepost':
        return optimize(('range', x[1], ('string', 'tip')), small)
    elif op == 'negate':
        return optimize(('string',
                         '-' + getstring(x[1], _("can't negate that"))), small)
    elif op in 'string symbol negate':
        return smallbonus, x # single revisions are small
    elif op == 'and':
        wa, ta = optimize(x[1], True)
        wb, tb = optimize(x[2], True)

        # (::x and not ::y)/(not ::y and ::x) have a fast path
        def ismissingancestors(revs, bases):
            return (
                revs[0] == 'func'
                and getstring(revs[1], _('not a symbol')) == 'ancestors'
                and bases[0] == 'not'
                and bases[1][0] == 'func'
                and getstring(bases[1][1], _('not a symbol')) == 'ancestors')

        w = min(wa, wb)
        if ismissingancestors(ta, tb):
            return w, ('func', ('symbol', '_missingancestors'),
                       ('list', ta[2], tb[1][2]))
        if ismissingancestors(tb, ta):
            return w, ('func', ('symbol', '_missingancestors'),
                       ('list', tb[2], ta[1][2]))

        if wa > wb:
            return w, (op, tb, ta)
        return w, (op, ta, tb)
    elif op == 'or':
        wa, ta = optimize(x[1], False)
        wb, tb = optimize(x[2], False)
        if wb < wa:
            wb, wa = wa, wb
        return max(wa, wb), (op, ta, tb)
    elif op == 'not':
        o = optimize(x[1], not small)
        return o[0], (op, o[1])
    elif op == 'parentpost':
        o = optimize(x[1], small)
        return o[0], (op, o[1])
    elif op == 'group':
        return optimize(x[1], small)
    elif op in 'dagrange range list parent ancestorspec':
        if op == 'parent':
            # x^:y means (x^) : y, not x ^ (:y)
            post = ('parentpost', x[1])
            if x[2][0] == 'dagrangepre':
                return optimize(('dagrange', post, x[2][1]), small)
            elif x[2][0] == 'rangepre':
                return optimize(('range', post, x[2][1]), small)

        wa, ta = optimize(x[1], small)
        wb, tb = optimize(x[2], small)
        return wa + wb, (op, ta, tb)
    elif op == 'func':
        f = getstring(x[1], _("not a symbol"))
        wa, ta = optimize(x[2], small)
        if f in ("author branch closed date desc file grep keyword "
                 "outgoing user"):
            w = 10 # slow
        elif f in "modifies adds removes":
            w = 30 # slower
        elif f == "contains":
            w = 100 # very slow
        elif f == "ancestor":
            w = 1 * smallbonus
        elif f in "reverse limit first":
            w = 0
        elif f in "sort":
            w = 10 # assume most sorts look at changelog
        else:
            w = 1
        return w + wa, (op, x[1], ta)
    return 1, x
1933
1933
_aliasarg = ('func', ('symbol', '_aliasarg'))
def _getaliasarg(tree):
    """If tree matches ('func', ('symbol', '_aliasarg'), ('string', X)),
    return X. Return None otherwise.
    """
    if (len(tree) == 3 and tree[:2] == _aliasarg
        and tree[2][0] == 'string'):
        return tree[2][1]
    return None

def _checkaliasarg(tree, known=None):
    """Check that tree contains no _aliasarg construct, or only ones whose
    value is in known. Used to avoid alias placeholder injection.
    """
    if isinstance(tree, tuple):
        arg = _getaliasarg(tree)
        if arg is not None and (not known or arg not in known):
            raise error.ParseError(_("not a function: %s") % '_aliasarg')
        for t in tree:
            _checkaliasarg(t, known)

class revsetalias(object):
    funcre = re.compile(r'^([^(]+)\(([^)]+)\)$')
    args = None

    def __init__(self, name, value):
        '''Aliases like:

        h = heads(default)
        b($1) = ancestors($1) - ancestors(default)
        '''
        m = self.funcre.search(name)
        if m:
            self.name = m.group(1)
            self.tree = ('func', ('symbol', m.group(1)))
            self.args = [x.strip() for x in m.group(2).split(',')]
            for arg in self.args:
                # _aliasarg() is an unknown symbol only used to separate
                # alias argument placeholders from regular strings.
                value = value.replace(arg, '_aliasarg(%r)' % (arg,))
        else:
            self.name = name
            self.tree = ('symbol', name)

        self.replacement, pos = parse(value)
        if pos != len(value):
            raise error.ParseError(_('invalid token'), pos)
        # Check for placeholder injection
        _checkaliasarg(self.replacement, self.args)

def _getalias(aliases, tree):
    """If tree looks like an unexpanded alias, return it. Return None
    otherwise.
    """
    if isinstance(tree, tuple) and tree:
        if tree[0] == 'symbol' and len(tree) == 2:
            name = tree[1]
            alias = aliases.get(name)
            if alias and alias.args is None and alias.tree == tree:
                return alias
        if tree[0] == 'func' and len(tree) > 1:
            if tree[1][0] == 'symbol' and len(tree[1]) == 2:
                name = tree[1][1]
                alias = aliases.get(name)
                if alias and alias.args is not None and alias.tree == tree[:2]:
                    return alias
    return None

def _expandargs(tree, args):
    """Replace _aliasarg instances with the substitution value of the
    same name in args, recursively.
    """
    if not tree or not isinstance(tree, tuple):
        return tree
    arg = _getaliasarg(tree)
    if arg is not None:
        return args[arg]
    return tuple(_expandargs(t, args) for t in tree)

def _expandaliases(aliases, tree, expanding, cache):
    """Expand aliases in tree, recursively.

    'aliases' is a dictionary mapping user defined aliases to
    revsetalias objects.
    """
    if not isinstance(tree, tuple):
        # Do not expand raw strings
        return tree
    alias = _getalias(aliases, tree)
    if alias is not None:
        if alias in expanding:
            raise error.ParseError(_('infinite expansion of revset alias "%s" '
                                     'detected') % alias.name)
        expanding.append(alias)
        if alias.name not in cache:
            cache[alias.name] = _expandaliases(aliases, alias.replacement,
                                               expanding, cache)
        result = cache[alias.name]
        expanding.pop()
        if alias.args is not None:
            l = getlist(tree[2])
            if len(l) != len(alias.args):
                raise error.ParseError(
                    _('invalid number of arguments: %s') % len(l))
            l = [_expandaliases(aliases, a, [], cache) for a in l]
            result = _expandargs(result, dict(zip(alias.args, l)))
    else:
        result = tuple(_expandaliases(aliases, t, expanding, cache)
                       for t in tree)
    return result

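The placeholder substitution performed by `_expandargs()` can be shown standalone on plain tuples, without the rest of the parser. The sketch below mirrors the node shapes used in this file (`('func', ('symbol', '_aliasarg'), ('string', name))` as the placeholder); the function names are local re-implementations, not imports from the module.

```python
# Standalone re-implementation of the _aliasarg substitution idea:
# walk a tuple tree and splice caller-supplied subtrees in place of
# placeholder nodes, as for an alias like
#   b($1) = ancestors($1) - ancestors(default)
ALIASARG = ('func', ('symbol', '_aliasarg'))

def getaliasarg(tree):
    # Return the placeholder name if tree is an _aliasarg node.
    if (isinstance(tree, tuple) and len(tree) == 3
            and tree[:2] == ALIASARG and tree[2][0] == 'string'):
        return tree[2][1]
    return None

def expandargs(tree, args):
    if not tree or not isinstance(tree, tuple):
        return tree                      # raw strings pass through
    name = getaliasarg(tree)
    if name is not None:
        return args[name]                # splice in the argument subtree
    return tuple(expandargs(t, args) for t in tree)

body = ('minus',
        ('func', ('symbol', 'ancestors'),
         ('func', ('symbol', '_aliasarg'), ('string', '$1'))),
        ('func', ('symbol', 'ancestors'), ('symbol', 'default')))
print(expandargs(body, {'$1': ('symbol', 'stable')}))
# → ('minus', ('func', ('symbol', 'ancestors'), ('symbol', 'stable')),
#    ('func', ('symbol', 'ancestors'), ('symbol', 'default')))
```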
def findaliases(ui, tree):
    _checkaliasarg(tree)
    aliases = {}
    for k, v in ui.configitems('revsetalias'):
        alias = revsetalias(k, v)
        aliases[alias.name] = alias
    return _expandaliases(aliases, tree, [], {})

def parse(spec, lookup=None):
    p = parser.parser(tokenize, elements)
    return p.parse(spec, lookup=lookup)

def match(ui, spec, repo=None):
    if not spec:
        raise error.ParseError(_("empty query"))
    lookup = None
    if repo:
        lookup = repo.__contains__
    tree, pos = parse(spec, lookup)
    if pos != len(spec):
        raise error.ParseError(_("invalid token"), pos)
    if ui:
        tree = findaliases(ui, tree)
    weight, tree = optimize(tree, True)
    def mfunc(repo, subset):
        if util.safehasattr(subset, 'set'):
            return getset(repo, subset, tree)
        return getset(repo, baseset(subset), tree)
    return mfunc

def formatspec(expr, *args):
    '''
    This is a convenience function for using revsets internally, and
    escapes arguments appropriately. Aliases are intentionally ignored
    so that intended expression behavior isn't accidentally subverted.

    Supported arguments:

    %r = revset expression, parenthesized
    %d = int(arg), no quoting
    %s = string(arg), escaped and single-quoted
    %b = arg.branch(), escaped and single-quoted
    %n = hex(arg), single-quoted
    %% = a literal '%'

    Prefixing the type with 'l' specifies a parenthesized list of that type.

    >>> formatspec('%r:: and %lr', '10 or 11', ("this()", "that()"))
    '(10 or 11):: and ((this()) or (that()))'
    >>> formatspec('%d:: and not %d::', 10, 20)
    '10:: and not 20::'
    >>> formatspec('%ld or %ld', [], [1])
    "_list('') or 1"
    >>> formatspec('keyword(%s)', 'foo\\xe9')
    "keyword('foo\\\\xe9')"
    >>> b = lambda: 'default'
    >>> b.branch = b
    >>> formatspec('branch(%b)', b)
    "branch('default')"
    >>> formatspec('root(%ls)', ['a', 'b', 'c', 'd'])
    "root(_list('a\\x00b\\x00c\\x00d'))"
    '''

    def quote(s):
        return repr(str(s))

    def argtype(c, arg):
        if c == 'd':
            return str(int(arg))
        elif c == 's':
            return quote(arg)
        elif c == 'r':
            parse(arg) # make sure syntax errors are confined
            return '(%s)' % arg
        elif c == 'n':
            return quote(node.hex(arg))
        elif c == 'b':
            return quote(arg.branch())

    def listexp(s, t):
        l = len(s)
        if l == 0:
            return "_list('')"
        elif l == 1:
            return argtype(t, s[0])
        elif t == 'd':
            return "_intlist('%s')" % "\0".join(str(int(a)) for a in s)
        elif t == 's':
            return "_list('%s')" % "\0".join(s)
        elif t == 'n':
            return "_hexlist('%s')" % "\0".join(node.hex(a) for a in s)
        elif t == 'b':
            return "_list('%s')" % "\0".join(a.branch() for a in s)

        m = l // 2
        return '(%s or %s)' % (listexp(s[:m], t), listexp(s[m:], t))

    ret = ''
    pos = 0
    arg = 0
    while pos < len(expr):
        c = expr[pos]
        if c == '%':
            pos += 1
            d = expr[pos]
            if d == '%':
                ret += d
            elif d in 'dsnbr':
                ret += argtype(d, args[arg])
                arg += 1
            elif d == 'l':
                # a list of some type
                pos += 1
                d = expr[pos]
                ret += listexp(list(args[arg]), d)
                arg += 1
            else:
                raise util.Abort('unexpected revspec format character %s' % d)
        else:
            ret += c
        pos += 1

    return ret

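Note the fallthrough at the end of `listexp()` above: list types without a flat `_list()`-style encoding (e.g. `%lr`) are joined by recursively splitting the list in half, producing a *balanced* tree of `or` terms. A hedged standalone sketch of just that splitting step, with the depth benefit it buys:

```python
# Divide-and-conquer join, as in the tail of listexp() above: a
# balanced '(x or y)' tree keeps parser recursion depth at O(log n)
# instead of the O(n) a left-to-right fold would produce.
def balanced_or(items):
    if len(items) == 1:
        return items[0]
    m = len(items) // 2
    return '(%s or %s)' % (balanced_or(items[:m]), balanced_or(items[m:]))

print(balanced_or(['a', 'b', 'c', 'd']))
# → ((a or b) or (c or d))
```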
def prettyformat(tree):
    def _prettyformat(tree, level, lines):
        if not isinstance(tree, tuple) or tree[0] in ('string', 'symbol'):
            lines.append((level, str(tree)))
        else:
            lines.append((level, '(%s' % tree[0]))
            for s in tree[1:]:
                _prettyformat(s, level + 1, lines)
            lines[-1:] = [(lines[-1][0], lines[-1][1] + ')')]

    lines = []
    _prettyformat(tree, 0, lines)
    output = '\n'.join((' '*l + s) for l, s in lines)
    return output

def depth(tree):
    if isinstance(tree, tuple):
        return max(map(depth, tree)) + 1
    else:
        return 0

def funcsused(tree):
    if not isinstance(tree, tuple) or tree[0] in ('string', 'symbol'):
        return set()
    else:
        funcs = set()
        for s in tree[1:]:
            funcs |= funcsused(s)
        if tree[0] == 'func':
            funcs.add(tree[1][1])
        return funcs

class baseset(list):
    """Basic data structure that represents a revset and contains the basic
    operations that it should be able to perform.

    Every method in this class should be implemented by any smartset class.
    """
    def __init__(self, data=()):
        super(baseset, self).__init__(data)
        self._set = None

    def ascending(self):
        """Sorts the set in ascending order (in place).

        This is part of the mandatory API for smartset."""
        self.sort()

    def descending(self):
        """Sorts the set in descending order (in place).

        This is part of the mandatory API for smartset."""
        self.sort(reverse=True)

    def min(self):
        return min(self)

    def max(self):
        return max(self)

    def set(self):
        """Returns a set or a smartset containing all the elements.

        The returned structure should be the fastest option for membership
        testing.

        This is part of the mandatory API for smartset."""
        if not self._set:
            self._set = set(self)
        return self._set

    def __sub__(self, other):
        """Returns a new object with the subtraction of the two collections.

        This is part of the mandatory API for smartset."""
        if isinstance(other, baseset):
            s = other.set()
        else:
            s = set(other)
        return baseset(self.set() - s)

    def __and__(self, other):
        """Returns a new object with the intersection of the two collections.

        This is part of the mandatory API for smartset."""
        if isinstance(other, baseset):
            other = other.set()
        return baseset([y for y in self if y in other])

    def __add__(self, other):
        """Returns a new object with the union of the two collections.

        This is part of the mandatory API for smartset."""
        s = self.set()
        l = [r for r in other if r not in s]
        return baseset(list(self) + l)

    def isascending(self):
        """Returns True if the collection is in ascending order, False if not.

        This is part of the mandatory API for smartset."""
        return False

    def isdescending(self):
        """Returns True if the collection is in descending order, False if not.

        This is part of the mandatory API for smartset."""
        return False

    def filter(self, condition):
        """Returns this smartset filtered by condition as a new smartset.

        `condition` is a callable which takes a revision number and returns a
        boolean.

        This is part of the mandatory API for smartset."""
        return lazyset(self, condition)

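The core trick in `baseset` above is carrying list order and a lazily built `set()` cache side by side: iteration stays ordered, while membership tests and set arithmetic hit the cached set. A minimal standalone sketch of that pattern (illustrative names, not the class's full API; unlike `baseset.__sub__`, this version keeps the left operand's order):

```python
# A list subclass that builds its membership set() once, on demand,
# then reuses it for difference and intersection.
class tinyset(list):
    def __init__(self, data=()):
        super(tinyset, self).__init__(data)
        self._set = None

    def set(self):
        if self._set is None:            # build the cache only once
            self._set = set(self)
        return self._set

    def __sub__(self, other):
        s = other.set() if isinstance(other, tinyset) else set(other)
        return tinyset(r for r in self if r not in s)

    def __and__(self, other):
        s = other.set() if isinstance(other, tinyset) else set(other)
        return tinyset(r for r in self if r in s)

a = tinyset([1, 2, 3, 4])
b = tinyset([3, 4, 5])
print(list(a - b), list(a & b))
# → [1, 2] [3, 4]
```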
class _orderedsetmixin(object):
    """Mixin class with utility methods for smartsets

    This should be extended by smartsets which have the isascending(),
    isdescending() and reverse() methods"""

    def _first(self):
        """return the first revision in the set"""
        for r in self:
            return r
        raise ValueError('arg is an empty sequence')

    def _last(self):
        """return the last revision in the set"""
        self.reverse()
        m = self._first()
        self.reverse()
        return m

    def min(self):
        """return the smallest element in the set"""
        if self.isascending():
            return self._first()
        return self._last()

    def max(self):
        """return the largest element in the set"""
        if self.isascending():
            return self._last()
        return self._first()

class lazyset(object):
    """Duck type for baseset class which iterates lazily over the revisions in
    the subset and contains a function which tests for membership in the
    revset
    """
    def __init__(self, subset, condition=lambda x: True):
        """
        condition: a function that decides whether a revision in the subset
        belongs to the revset or not.
        """
        self._subset = subset
        self._condition = condition
        self._cache = {}

    def ascending(self):
        self._subset.sort()

    def descending(self):
        self._subset.sort(reverse=True)

    def min(self):
        return min(self)

    def max(self):
        return max(self)

    def __contains__(self, x):
        c = self._cache
        if x not in c:
            c[x] = x in self._subset and self._condition(x)
        return c[x]

    def __iter__(self):
        cond = self._condition
        for x in self._subset:
            if cond(x):
                yield x

    def __and__(self, x):
        return lazyset(self, lambda r: r in x)

    def __sub__(self, x):
        return lazyset(self, lambda r: r not in x)

    def __add__(self, x):
        return _addset(self, x)

    def __nonzero__(self):
        for r in self:
            return True
        return False

    def __len__(self):
        # Basic implementation to be changed in future patches.
        l = baseset([r for r in self])
        return len(l)

    def __getitem__(self, x):
        # Basic implementation to be changed in future patches.
        l = baseset([r for r in self])
        return l[x]

    def sort(self, reverse=False):
        if not util.safehasattr(self._subset, 'sort'):
            self._subset = baseset(self._subset)
        self._subset.sort(reverse=reverse)

    def reverse(self):
        self._subset.reverse()

    def set(self):
        return set([r for r in self])

    def isascending(self):
        return False

    def isdescending(self):
        return False

    def filter(self, l):
        return lazyset(self, l)

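Two properties of `lazyset` above are worth isolating: iteration applies the predicate on demand (no up-front materialization), and `__contains__` memoizes each answer so repeated membership tests never re-run the predicate. A hedged standalone sketch with a call counter to make the caching visible (names are illustrative):

```python
# Minimal lazy, memoizing filter in the shape of lazyset above.
class tinylazyset(object):
    def __init__(self, subset, condition):
        self._subset = subset
        self._condition = condition
        self._cache = {}
        self.calls = 0                   # count predicate evaluations

    def _cond(self, x):
        self.calls += 1
        return self._condition(x)

    def __contains__(self, x):
        if x not in self._cache:         # memoized membership test
            self._cache[x] = x in self._subset and self._cond(x)
        return self._cache[x]

    def __iter__(self):                  # filter lazily, on demand
        for x in self._subset:
            if self._cond(x):
                yield x

s = tinylazyset([1, 2, 3, 4], lambda r: r % 2 == 0)
print(list(s))                           # → [2, 4]
print(2 in s, 2 in s)                    # → True True
print(s.calls)                           # → 5 (4 for the iteration, 1 cached test)
```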
class orderedlazyset(_orderedsetmixin, lazyset):
    """Subclass of lazyset whose subset can be ordered either ascending or
    descending
    """
    def __init__(self, subset, condition, ascending=True):
        super(orderedlazyset, self).__init__(subset, condition)
        self._ascending = ascending

    def filter(self, l):
        return orderedlazyset(self, l, ascending=self._ascending)

    def ascending(self):
        if not self._ascending:
            self.reverse()

    def descending(self):
        if self._ascending:
            self.reverse()

    def __and__(self, x):
        return orderedlazyset(self, lambda r: r in x,
                              ascending=self._ascending)

    def __sub__(self, x):
        return orderedlazyset(self, lambda r: r not in x,
                              ascending=self._ascending)

    def __add__(self, x):
        kwargs = {}
        if self.isascending() and x.isascending():
            kwargs['ascending'] = True
        if self.isdescending() and x.isdescending():
            kwargs['ascending'] = False
        return _addset(self, x, **kwargs)

    def sort(self, reverse=False):
        if reverse:
            if self._ascending:
                self._subset.sort(reverse=reverse)
        else:
            if not self._ascending:
                self._subset.sort(reverse=reverse)
        self._ascending = not reverse

    def isascending(self):
        return self._ascending

    def isdescending(self):
        return not self._ascending

    def reverse(self):
        self._subset.reverse()
        self._ascending = not self._ascending

class _addset(_orderedsetmixin):
    """Represent the addition of two sets

    Wrapper structure for lazily adding two structures without losing much
    performance on the __contains__ method

    If the ascending attribute is set, that means the two structures are
    ordered in either an ascending or descending way. Therefore, we can add
    them maintaining the order by iterating over both at the same time

    This class does not duck-type baseset and is only supposed to be used
    internally
    """
    def __init__(self, revs1, revs2, ascending=None):
        self._r1 = revs1
        self._r2 = revs2
        self._iter = None
        self._ascending = ascending
        self._genlist = None

    def __len__(self):
        return len(self._list)

    @util.propertycache
    def _list(self):
        if not self._genlist:
            self._genlist = baseset(self._iterator())
        return self._genlist

    def filter(self, condition):
        if self._ascending is not None:
            return orderedlazyset(self, condition, ascending=self._ascending)
        return lazyset(self, condition)

    def ascending(self):
        if self._ascending is None:
            self.sort()
            self._ascending = True
        else:
            if not self._ascending:
                self.reverse()

    def descending(self):
        if self._ascending is None:
2497 if self._ascending is None:
2498 self.sort(reverse=True)
2498 self.sort(reverse=True)
2499 self._ascending = False
2499 self._ascending = False
2500 else:
2500 else:
2501 if self._ascending:
2501 if self._ascending:
2502 self.reverse()
2502 self.reverse()
2503
2503
2504 def __and__(self, other):
2504 def __and__(self, other):
2505 filterfunc = other.__contains__
2505 filterfunc = other.__contains__
2506 if self._ascending is not None:
2506 if self._ascending is not None:
2507 return orderedlazyset(self, filterfunc, ascending=self._ascending)
2507 return orderedlazyset(self, filterfunc, ascending=self._ascending)
2508 return lazyset(self, filterfunc)
2508 return lazyset(self, filterfunc)
2509
2509
2510 def __sub__(self, other):
2510 def __sub__(self, other):
2511 filterfunc = lambda r: r not in other
2511 filterfunc = lambda r: r not in other
2512 if self._ascending is not None:
2512 if self._ascending is not None:
2513 return orderedlazyset(self, filterfunc, ascending=self._ascending)
2513 return orderedlazyset(self, filterfunc, ascending=self._ascending)
2514 return lazyset(self, filterfunc)
2514 return lazyset(self, filterfunc)
2515
2515
2516 def __add__(self, other):
2516 def __add__(self, other):
2517 """When both collections are ascending or descending, preserve the order
2517 """When both collections are ascending or descending, preserve the order
2518 """
2518 """
2519 kwargs = {}
2519 kwargs = {}
2520 if self._ascending is not None:
2520 if self._ascending is not None:
2521 if self.isascending() and other.isascending():
2521 if self.isascending() and other.isascending():
2522 kwargs['ascending'] = True
2522 kwargs['ascending'] = True
2523 if self.isdescending() and other.isdescending():
2523 if self.isdescending() and other.isdescending():
2524 kwargs['ascending'] = False
2524 kwargs['ascending'] = False
2525 return _addset(self, other, **kwargs)
2525 return _addset(self, other, **kwargs)
2526
2526
2527 def _iterator(self):
2527 def _iterator(self):
2528 """Iterate over both collections without repeating elements
2528 """Iterate over both collections without repeating elements
2529
2529
2530 If the ascending attribute is not set, iterate over the first one and
2530 If the ascending attribute is not set, iterate over the first one and
2531 then over the second one checking for membership on the first one so we
2531 then over the second one checking for membership on the first one so we
2532 don't yield any duplicates.
2532 don't yield any duplicates.
2533
2533
2534 If the ascending attribute is set, iterate over both collections at the
2534 If the ascending attribute is set, iterate over both collections at the
2535 same time, yielding only one value at a time in the given order.
2535 same time, yielding only one value at a time in the given order.
2536 """
2536 """
2537 if not self._iter:
2537 if not self._iter:
2538 def gen():
2538 def gen():
2539 if self._ascending is None:
2539 if self._ascending is None:
2540 for r in self._r1:
2540 for r in self._r1:
2541 yield r
2541 yield r
2542 s = self._r1.set()
2542 s = self._r1.set()
2543 for r in self._r2:
2543 for r in self._r2:
2544 if r not in s:
2544 if r not in s:
2545 yield r
2545 yield r
2546 else:
2546 else:
2547 iter1 = iter(self._r1)
2547 iter1 = iter(self._r1)
2548 iter2 = iter(self._r2)
2548 iter2 = iter(self._r2)
2549
2549
2550 val1 = None
2550 val1 = None
2551 val2 = None
2551 val2 = None
2552
2552
2553 choice = max
2553 choice = max
2554 if self._ascending:
2554 if self._ascending:
2555 choice = min
2555 choice = min
2556 try:
2556 try:
2557 # Consume both iterators in an ordered way until one is
2557 # Consume both iterators in an ordered way until one is
2558 # empty
2558 # empty
2559 while True:
2559 while True:
2560 if val1 is None:
2560 if val1 is None:
2561 val1 = iter1.next()
2561 val1 = iter1.next()
2562 if val2 is None:
2562 if val2 is None:
2563 val2 = iter2.next()
2563 val2 = iter2.next()
2564 next = choice(val1, val2)
2564 next = choice(val1, val2)
2565 yield next
2565 yield next
2566 if val1 == next:
2566 if val1 == next:
2567 val1 = None
2567 val1 = None
2568 if val2 == next:
2568 if val2 == next:
2569 val2 = None
2569 val2 = None
2570 except StopIteration:
2570 except StopIteration:
2571 # Flush any remaining values and consume the other one
2571 # Flush any remaining values and consume the other one
2572 it = iter2
2572 it = iter2
2573 if val1 is not None:
2573 if val1 is not None:
2574 yield val1
2574 yield val1
2575 it = iter1
2575 it = iter1
2576 elif val2 is not None:
2576 elif val2 is not None:
2577 # might have been equality and both are empty
2577 # might have been equality and both are empty
2578 yield val2
2578 yield val2
2579 for val in it:
2579 for val in it:
2580 yield val
2580 yield val
2581
2581
2582 self._iter = _generatorset(gen())
2582 self._iter = _generatorset(gen())
2583
2583
2584 return self._iter
2584 return self._iter
2585
2585
2586 def __iter__(self):
2586 def __iter__(self):
2587 if self._genlist:
2587 if self._genlist:
2588 return iter(self._genlist)
2588 return iter(self._genlist)
2589 return iter(self._iterator())
2589 return iter(self._iterator())
2590
2590
2591 def __contains__(self, x):
2591 def __contains__(self, x):
2592 return x in self._r1 or x in self._r2
2592 return x in self._r1 or x in self._r2
2593
2593
2594 def set(self):
2594 def set(self):
2595 return self
2595 return self
2596
2596
2597 def sort(self, reverse=False):
2597 def sort(self, reverse=False):
2598 """Sort the added set
2598 """Sort the added set
2599
2599
2600 For this we use the cached list with all the generated values and if we
2600 For this we use the cached list with all the generated values and if we
2601 know they are ascending or descending we can sort them in a smart way.
2601 know they are ascending or descending we can sort them in a smart way.
2602 """
2602 """
2603 if self._ascending is None:
2603 if self._ascending is None:
2604 self._list.sort(reverse=reverse)
2604 self._list.sort(reverse=reverse)
2605 self._ascending = not reverse
2605 self._ascending = not reverse
2606 else:
2606 else:
2607 if bool(self._ascending) == bool(reverse):
2607 if bool(self._ascending) == bool(reverse):
2608 self.reverse()
2608 self.reverse()
2609
2609
2610 def isascending(self):
2610 def isascending(self):
2611 return self._ascending is not None and self._ascending
2611 return self._ascending is not None and self._ascending
2612
2612
2613 def isdescending(self):
2613 def isdescending(self):
2614 return self._ascending is not None and not self._ascending
2614 return self._ascending is not None and not self._ascending
2615
2615
2616 def reverse(self):
2616 def reverse(self):
2617 self._list.reverse()
2617 self._list.reverse()
2618 if self._ascending is not None:
2618 if self._ascending is not None:
2619 self._ascending = not self._ascending
2619 self._ascending = not self._ascending
2620
2620
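The ordered branch of `_addset._iterator` above is the classic merge of two sorted streams with duplicate suppression. A standalone Python 3 sketch of the same walk (the original is Python 2 and calls `iterator.next()` directly):

```python
def merge_ascending(revs1, revs2):
    """Yield the union of two ascending iterables in ascending order,
    without duplicates: the same min()-driven walk that
    _addset._iterator performs when both sides are ascending."""
    iter1, iter2 = iter(revs1), iter(revs2)
    val1 = val2 = None
    try:
        # consume both iterators in an ordered way until one is empty
        while True:
            if val1 is None:
                val1 = next(iter1)
            if val2 is None:
                val2 = next(iter2)
            smallest = min(val1, val2)
            yield smallest
            # clearing both slots when the values are equal is what
            # suppresses duplicates
            if val1 == smallest:
                val1 = None
            if val2 == smallest:
                val2 = None
    except StopIteration:
        # flush any pending value, then drain whichever side remains
        rest = iter2
        if val1 is not None:
            yield val1
            rest = iter1
        elif val2 is not None:
            yield val2
        for val in rest:
            yield val
```

For a descending merge, `_addset` simply swaps `min` for `max`.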
2621 class _generatorset(object):
2621 class _generatorset(object):
2622 """Wrap a generator for lazy iteration
2622 """Wrap a generator for lazy iteration
2623
2623
2624 Wrapper structure for generators that provides lazy membership and can
2624 Wrapper structure for generators that provides lazy membership and can
2625 be iterated more than once.
2625 be iterated more than once.
2626 When asked for membership it generates values until either it finds the
2626 When asked for membership it generates values until either it finds the
2627 requested one or has gone through all the elements in the generator
2627 requested one or has gone through all the elements in the generator
2628
2628
2629 This class does not duck-type baseset and it's only supposed to be used
2629 This class does not duck-type baseset and it's only supposed to be used
2630 internally
2630 internally
2631 """
2631 """
2632 def __init__(self, gen):
2632 def __init__(self, gen):
2633 """
2633 """
2634 gen: a generator producing the values for the generatorset.
2634 gen: a generator producing the values for the generatorset.
2635 """
2635 """
2636 self._gen = gen
2636 self._gen = gen
2637 self._cache = {}
2637 self._cache = {}
2638 self._genlist = baseset([])
2638 self._genlist = baseset([])
2639 self._finished = False
2639 self._finished = False
2640
2640
2641 def __contains__(self, x):
2641 def __contains__(self, x):
2642 if x in self._cache:
2642 if x in self._cache:
2643 return self._cache[x]
2643 return self._cache[x]
2644
2644
2645 # Use new values only, as existing values would be cached.
2645 # Use new values only, as existing values would be cached.
2646 for l in self._consumegen():
2646 for l in self._consumegen():
2647 if l == x:
2647 if l == x:
2648 return True
2648 return True
2649
2649
2650 self._cache[x] = False
2650 self._cache[x] = False
2651 return False
2651 return False
2652
2652
2653 def __iter__(self):
2653 def __iter__(self):
2654 if self._finished:
2654 if self._finished:
2655 for x in self._genlist:
2655 for x in self._genlist:
2656 yield x
2656 yield x
2657 return
2657 return
2658
2658
2659 i = 0
2659 i = 0
2660 genlist = self._genlist
2660 genlist = self._genlist
2661 consume = self._consumegen()
2661 consume = self._consumegen()
2662 while True:
2662 while True:
2663 if i < len(genlist):
2663 if i < len(genlist):
2664 yield genlist[i]
2664 yield genlist[i]
2665 else:
2665 else:
2666 yield consume.next()
2666 yield consume.next()
2667 i += 1
2667 i += 1
2668
2668
2669 def _consumegen(self):
2669 def _consumegen(self):
2670 for item in self._gen:
2670 for item in self._gen:
2671 self._cache[item] = True
2671 self._cache[item] = True
2672 self._genlist.append(item)
2672 self._genlist.append(item)
2673 yield item
2673 yield item
2674 self._finished = True
2674 self._finished = True
2675
2675
2676 def set(self):
2676 def set(self):
2677 return self
2677 return self
2678
2678
2679 def sort(self, reverse=False):
2679 def sort(self, reverse=False):
2680 if not self._finished:
2680 if not self._finished:
2681 for i in self:
2681 for i in self:
2682 continue
2682 continue
2683 self._genlist.sort(reverse=reverse)
2683 self._genlist.sort(reverse=reverse)
2684
2684
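A minimal Python 3 sketch of the `_generatorset` idea, reduced to membership plus caching; the names are illustrative, and the real class additionally supports repeated iteration and sorting:

```python
class lazymembership:
    """Wrap a generator so that membership tests consume only as much of
    it as needed; everything consumed is cached for later checks."""
    def __init__(self, gen):
        self._gen = gen
        self._cache = {}      # value -> membership verdict
        self._seen = []       # values produced so far, in order
        self._finished = False

    def _consume(self):
        # advance the shared underlying generator, recording each value
        for item in self._gen:
            self._cache[item] = True
            self._seen.append(item)
            yield item
        self._finished = True

    def __contains__(self, x):
        if x in self._cache:
            return self._cache[x]
        # use new values only, as existing values are already cached
        for item in self._consume():
            if item == x:
                return True
        self._cache[x] = False
        return False
```

A hit stops the scan early, so a later test resumes the generator where the previous one left off.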
2685 class _ascgeneratorset(_generatorset):
2685 class _ascgeneratorset(_generatorset):
2686 """Wrap a generator of ascending elements for lazy iteration
2686 """Wrap a generator of ascending elements for lazy iteration
2687
2687
2688 Same structure as _generatorset, but when asked for membership it stops
2688 Same structure as _generatorset, but when asked for membership it stops
2689 iterating as soon as it has gone past the requested value
2689 iterating as soon as it has gone past the requested value
2690
2690
2691 This class does not duck-type baseset and it's only supposed to be used
2691 This class does not duck-type baseset and it's only supposed to be used
2692 internally
2692 internally
2693 """
2693 """
2694 def __contains__(self, x):
2694 def __contains__(self, x):
2695 if x in self._cache:
2695 if x in self._cache:
2696 return self._cache[x]
2696 return self._cache[x]
2697
2697
2698 # Use new values only, as existing values would be cached.
2698 # Use new values only, as existing values would be cached.
2699 for l in self._consumegen():
2699 for l in self._consumegen():
2700 if l == x:
2700 if l == x:
2701 return True
2701 return True
2702 if l > x:
2702 if l > x:
2703 break
2703 break
2704
2704
2705 self._cache[x] = False
2705 self._cache[x] = False
2706 return False
2706 return False
2707
2707
2708 class _descgeneratorset(_generatorset):
2708 class _descgeneratorset(_generatorset):
2709 """Wrap a generator of descending elements for lazy iteration
2709 """Wrap a generator of descending elements for lazy iteration
2710
2710
2711 Same structure as _generatorset, but when asked for membership it stops
2711 Same structure as _generatorset, but when asked for membership it stops
2712 iterating as soon as it has gone past the requested value
2712 iterating as soon as it has gone past the requested value
2713
2713
2714 This class does not duck-type baseset and it's only supposed to be used
2714 This class does not duck-type baseset and it's only supposed to be used
2715 internally
2715 internally
2716 """
2716 """
2717 def __contains__(self, x):
2717 def __contains__(self, x):
2718 if x in self._cache:
2718 if x in self._cache:
2719 return self._cache[x]
2719 return self._cache[x]
2720
2720
2721 # Use new values only, as existing values would be cached.
2721 # Use new values only, as existing values would be cached.
2722 for l in self._consumegen():
2722 for l in self._consumegen():
2723 if l == x:
2723 if l == x:
2724 return True
2724 return True
2725 if l < x:
2725 if l < x:
2726 break
2726 break
2727
2727
2728 self._cache[x] = False
2728 self._cache[x] = False
2729 return False
2729 return False
2730
2730
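The only thing the two subclasses add is an early exit in `__contains__`. The ascending variant, sketched as a plain function over a sorted iterator:

```python
def in_ascending(it, x):
    """Membership test with the early exit used by
    _ascgeneratorset.__contains__: since values only grow, the scan can
    stop as soon as one goes past x."""
    for v in it:
        if v == x:
            return True
        if v > x:
            break  # x can no longer appear
    return False
```

The descending variant is identical with the comparison flipped to `v < x`.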
2731 class spanset(_orderedsetmixin):
2731 class spanset(_orderedsetmixin):
2732 """Duck type for baseset class which represents a range of revisions and
2732 """Duck type for baseset class which represents a range of revisions and
2733 can work lazily and without having all the range in memory
2733 can work lazily and without having all the range in memory
2734
2734
2735 Note that spanset(x, y) behaves almost like xrange(x, y) except for two
2735 Note that spanset(x, y) behaves almost like xrange(x, y) except for two
2736 notable points:
2736 notable points:
2737 - when y < x it will be automatically descending,
2737 - when y < x it will be automatically descending,
2738 - revisions filtered with this repoview will be skipped.
2738 - revisions filtered with this repoview will be skipped.
2739
2739
2740 """
2740 """
2741 def __init__(self, repo, start=0, end=None):
2741 def __init__(self, repo, start=0, end=None):
2742 """
2742 """
2743 start: first revision included in the set
2743 start: first revision included in the set
2744 (defaults to 0)
2744 (defaults to 0)
2745 end: first revision excluded (last+1)
2745 end: first revision excluded (last+1)
2746 (defaults to len(repo))
2746 (defaults to len(repo))
2747
2747
2748 Spanset will be descending if `end` < `start`.
2748 Spanset will be descending if `end` < `start`.
2749 """
2749 """
2750 self._start = start
2750 self._start = start
2751 if end is not None:
2751 if end is not None:
2752 self._end = end
2752 self._end = end
2753 else:
2753 else:
2754 self._end = len(repo)
2754 self._end = len(repo)
2755 self._hiddenrevs = repo.changelog.filteredrevs
2755 self._hiddenrevs = repo.changelog.filteredrevs
2756
2756
2757 def ascending(self):
2757 def ascending(self):
2758 if self._start > self._end:
2758 if self._start > self._end:
2759 self.reverse()
2759 self.reverse()
2760
2760
2761 def descending(self):
2761 def descending(self):
2762 if self._start < self._end:
2762 if self._start < self._end:
2763 self.reverse()
2763 self.reverse()
2764
2764
2765 def _contained(self, rev):
2765 def _contained(self, rev):
2766 return (rev <= self._start and rev > self._end) or (rev >= self._start
2766 return (rev <= self._start and rev > self._end) or (rev >= self._start
2767 and rev < self._end)
2767 and rev < self._end)
2768
2768
2769 def __iter__(self):
2769 def __iter__(self):
2770 if self._start <= self._end:
2770 if self._start <= self._end:
2771 iterrange = xrange(self._start, self._end)
2771 iterrange = xrange(self._start, self._end)
2772 else:
2772 else:
2773 iterrange = xrange(self._start, self._end, -1)
2773 iterrange = xrange(self._start, self._end, -1)
2774
2774
2775 if self._hiddenrevs:
2775 if self._hiddenrevs:
2776 s = self._hiddenrevs
2776 s = self._hiddenrevs
2777 for r in iterrange:
2777 for r in iterrange:
2778 if r not in s:
2778 if r not in s:
2779 yield r
2779 yield r
2780 else:
2780 else:
2781 for r in iterrange:
2781 for r in iterrange:
2782 yield r
2782 yield r
2783
2783
2784 def __contains__(self, x):
2784 def __contains__(self, x):
2785 return self._contained(x) and not (self._hiddenrevs and x in
2785 return self._contained(x) and not (self._hiddenrevs and x in
2786 self._hiddenrevs)
2786 self._hiddenrevs)
2787
2787
2788 def __nonzero__(self):
2788 def __nonzero__(self):
2789 for r in self:
2789 for r in self:
2790 return True
2790 return True
2791 return False
2791 return False
2792
2792
2793 def __and__(self, x):
2793 def __and__(self, x):
2794 if isinstance(x, baseset):
2794 if isinstance(x, baseset):
2795 x = x.set()
2795 x = x.set()
2796 if self._start <= self._end:
2796 if self._start <= self._end:
2797 return orderedlazyset(self, lambda r: r in x)
2797 return orderedlazyset(self, lambda r: r in x)
2798 else:
2798 else:
2799 return orderedlazyset(self, lambda r: r in x, ascending=False)
2799 return orderedlazyset(self, lambda r: r in x, ascending=False)
2800
2800
2801 def __sub__(self, x):
2801 def __sub__(self, x):
2802 if isinstance(x, baseset):
2802 if isinstance(x, baseset):
2803 x = x.set()
2803 x = x.set()
2804 if self._start <= self._end:
2804 if self._start <= self._end:
2805 return orderedlazyset(self, lambda r: r not in x)
2805 return orderedlazyset(self, lambda r: r not in x)
2806 else:
2806 else:
2807 return orderedlazyset(self, lambda r: r not in x, ascending=False)
2807 return orderedlazyset(self, lambda r: r not in x, ascending=False)
2808
2808
2809 def __add__(self, x):
2809 def __add__(self, x):
2810 kwargs = {}
2810 kwargs = {}
2811 if self.isascending() and x.isascending():
2811 if self.isascending() and x.isascending():
2812 kwargs['ascending'] = True
2812 kwargs['ascending'] = True
2813 if self.isdescending() and x.isdescending():
2813 if self.isdescending() and x.isdescending():
2814 kwargs['ascending'] = False
2814 kwargs['ascending'] = False
2815 return _addset(self, x, **kwargs)
2815 return _addset(self, x, **kwargs)
2816
2816
2817 def __len__(self):
2817 def __len__(self):
2818 if not self._hiddenrevs:
2818 if not self._hiddenrevs:
2819 return abs(self._end - self._start)
2819 return abs(self._end - self._start)
2820 else:
2820 else:
2821 count = 0
2821 count = 0
2822 for rev in self._hiddenrevs:
2822 for rev in self._hiddenrevs:
2823 if self._contained(rev):
2823 if self._contained(rev):
2824 count += 1
2824 count += 1
2825 return abs(self._end - self._start) - count
2825 return abs(self._end - self._start) - count
2826
2826
2827 def __getitem__(self, x):
2827 def __getitem__(self, x):
2828 # Basic implementation to be changed in future patches.
2828 # Basic implementation to be changed in future patches.
2829 l = baseset([r for r in self])
2829 l = baseset([r for r in self])
2830 return l[x]
2830 return l[x]
2831
2831
2832 def sort(self, reverse=False):
2832 def sort(self, reverse=False):
2833 if bool(reverse) != (self._start > self._end):
2833 if bool(reverse) != (self._start > self._end):
2834 self.reverse()
2834 self.reverse()
2835
2835
2836 def reverse(self):
2836 def reverse(self):
2837 # Just switch the _start and _end parameters
2837 # Just switch the _start and _end parameters
2838 if self._start <= self._end:
2838 if self._start <= self._end:
2839 self._start, self._end = self._end - 1, self._start - 1
2839 self._start, self._end = self._end - 1, self._start - 1
2840 else:
2840 else:
2841 self._start, self._end = self._end + 1, self._start + 1
2841 self._start, self._end = self._end + 1, self._start + 1
2842
2842
2843 def set(self):
2843 def set(self):
2844 return self
2844 return self
2845
2845
2846 def isascending(self):
2846 def isascending(self):
2847 return self._start < self._end
2847 return self._start < self._end
2848
2848
2849 def isdescending(self):
2849 def isdescending(self):
2850 return self._start > self._end
2850 return self._start > self._end
2851
2851
2852 def filter(self, l):
2852 def filter(self, l):
2853 if self._start <= self._end:
2853 if self._start <= self._end:
2854 return orderedlazyset(self, l)
2854 return orderedlazyset(self, l)
2855 else:
2855 else:
2856 return orderedlazyset(self, l, ascending=False)
2856 return orderedlazyset(self, l, ascending=False)
2857
2857
2858 # tell hggettext to extract docstrings from these functions:
2858 # tell hggettext to extract docstrings from these functions:
2859 i18nfunctions = symbols.values()
2859 i18nfunctions = symbols.values()
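`spanset.reverse` swaps the endpoints with a one-off shift because the span is half-open. The arithmetic in isolation (a sketch, not Mercurial API):

```python
def reverse_span(start, end):
    """Flip a half-open span the way spanset.reverse() does: an ascending
    [start, end) becomes the descending range with the same members, and
    vice versa."""
    if start <= end:
        # ascending -> descending: last member becomes the new start
        return end - 1, start - 1
    # descending -> ascending
    return end + 1, start + 1
```

For example, `reverse_span(2, 6)` gives `(5, 1)`, i.e. the same members 5, 4, 3, 2 counted down, and applying it twice restores the original endpoints.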
@@ -1,785 +1,785 b''
1 # wireproto.py - generic wire protocol support functions
1 # wireproto.py - generic wire protocol support functions
2 #
2 #
3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 import urllib, tempfile, os, sys
8 import urllib, tempfile, os, sys
9 from i18n import _
9 from i18n import _
10 from node import bin, hex
10 from node import bin, hex
11 import changegroup as changegroupmod
11 import changegroup as changegroupmod
12 import peer, error, encoding, util, store, exchange
12 import peer, error, encoding, util, store, exchange
13
13
14
14
15 class abstractserverproto(object):
15 class abstractserverproto(object):
16 """abstract class that summarizes the protocol API
16 """abstract class that summarizes the protocol API
17
17
18 Used as reference and documentation.
18 Used as reference and documentation.
19 """
19 """
20
20
21 def getargs(self, args):
21 def getargs(self, args):
22 """return the value for arguments in <args>
22 """return the value for arguments in <args>
23
23
24 returns a list of values (same order as <args>)"""
24 returns a list of values (same order as <args>)"""
25 raise NotImplementedError()
25 raise NotImplementedError()
26
26
27 def getfile(self, fp):
27 def getfile(self, fp):
28 """write the whole content of a file into a file like object
28 """write the whole content of a file into a file like object
29
29
30 The file is in the form::
30 The file is in the form::
31
31
32 (<chunk-size>\n<chunk>)+0\n
32 (<chunk-size>\n<chunk>)+0\n
33
33
34 chunk size is the ascii version of the int.
34 chunk size is the ascii version of the int.
35 """
35 """
36 raise NotImplementedError()
36 raise NotImplementedError()
37
37
38 def redirect(self):
38 def redirect(self):
39 """may setup interception for stdout and stderr
39 """may setup interception for stdout and stderr
40
40
41 See also the `restore` method."""
41 See also the `restore` method."""
42 raise NotImplementedError()
42 raise NotImplementedError()
43
43
44 # If the `redirect` function does install interception, the `restore`
44 # If the `redirect` function does install interception, the `restore`
45 # function MUST be defined. If interception is not used, this function
45 # function MUST be defined. If interception is not used, this function
46 # MUST NOT be defined.
46 # MUST NOT be defined.
47 #
47 #
48 # left commented here on purpose
48 # left commented here on purpose
49 #
49 #
50 #def restore(self):
50 #def restore(self):
51 # """reinstall previous stdout and stderr and return intercepted stdout
51 # """reinstall previous stdout and stderr and return intercepted stdout
52 # """
52 # """
53 # raise NotImplementedError()
53 # raise NotImplementedError()
54
54
55 def groupchunks(self, cg):
55 def groupchunks(self, cg):
56 """return 4096 chunks from a changegroup object
56 """return 4096 chunks from a changegroup object
57
57
58 Some protocols may have compressed the contents."""
58 Some protocols may have compressed the contents."""
59 raise NotImplementedError()
59 raise NotImplementedError()
60
60
61 # abstract batching support
61 # abstract batching support
62
62
63 class future(object):
63 class future(object):
64 '''placeholder for a value to be set later'''
64 '''placeholder for a value to be set later'''
65 def set(self, value):
65 def set(self, value):
66 if util.safehasattr(self, 'value'):
66 if util.safehasattr(self, 'value'):
67 raise error.RepoError("future is already set")
67 raise error.RepoError("future is already set")
68 self.value = value
68 self.value = value
69
69
70 class batcher(object):
70 class batcher(object):
71 '''base class for batches of commands submittable in a single request
71 '''base class for batches of commands submittable in a single request
72
72
73 All methods invoked on instances of this class are simply queued and
73 All methods invoked on instances of this class are simply queued and
74 return a a future for the result. Once you call submit(), all the queued
74 return a a future for the result. Once you call submit(), all the queued
75 calls are performed and the results set in their respective futures.
75 calls are performed and the results set in their respective futures.
76 '''
76 '''
77 def __init__(self):
77 def __init__(self):
78 self.calls = []
78 self.calls = []
79 def __getattr__(self, name):
79 def __getattr__(self, name):
80 def call(*args, **opts):
80 def call(*args, **opts):
81 resref = future()
81 resref = future()
82 self.calls.append((name, args, opts, resref,))
82 self.calls.append((name, args, opts, resref,))
83 return resref
83 return resref
84 return call
84 return call
85 def submit(self):
85 def submit(self):
86 pass
86 pass
87
87
88 class localbatch(batcher):
88 class localbatch(batcher):
89 '''performs the queued calls directly'''
89 '''performs the queued calls directly'''
90 def __init__(self, local):
90 def __init__(self, local):
91 batcher.__init__(self)
91 batcher.__init__(self)
92 self.local = local
92 self.local = local
93 def submit(self):
93 def submit(self):
94 for name, args, opts, resref in self.calls:
94 for name, args, opts, resref in self.calls:
95 resref.set(getattr(self.local, name)(*args, **opts))
95 resref.set(getattr(self.local, name)(*args, **opts))
96
96
97 class remotebatch(batcher):
97 class remotebatch(batcher):
98 '''batches the queued calls; uses as few roundtrips as possible'''
98 '''batches the queued calls; uses as few roundtrips as possible'''
99 def __init__(self, remote):
99 def __init__(self, remote):
100 '''remote must support _submitbatch(encbatch) and
100 '''remote must support _submitbatch(encbatch) and
101 _submitone(op, encargs)'''
101 _submitone(op, encargs)'''
102 batcher.__init__(self)
102 batcher.__init__(self)
103 self.remote = remote
103 self.remote = remote
104 def submit(self):
104 def submit(self):
105 req, rsp = [], []
105 req, rsp = [], []
106 for name, args, opts, resref in self.calls:
106 for name, args, opts, resref in self.calls:
107 mtd = getattr(self.remote, name)
107 mtd = getattr(self.remote, name)
108 batchablefn = getattr(mtd, 'batchable', None)
108 batchablefn = getattr(mtd, 'batchable', None)
109 if batchablefn is not None:
109 if batchablefn is not None:
110 batchable = batchablefn(mtd.im_self, *args, **opts)
110 batchable = batchablefn(mtd.im_self, *args, **opts)
111 encargsorres, encresref = batchable.next()
111 encargsorres, encresref = batchable.next()
112 if encresref:
112 if encresref:
113 req.append((name, encargsorres,))
113 req.append((name, encargsorres,))
114 rsp.append((batchable, encresref, resref,))
114 rsp.append((batchable, encresref, resref,))
115 else:
115 else:
116 resref.set(encargsorres)
116 resref.set(encargsorres)
117 else:
117 else:
118 if req:
118 if req:
119 self._submitreq(req, rsp)
119 self._submitreq(req, rsp)
120 req, rsp = [], []
120 req, rsp = [], []
121 resref.set(mtd(*args, **opts))
121 resref.set(mtd(*args, **opts))
122 if req:
122 if req:
123 self._submitreq(req, rsp)
123 self._submitreq(req, rsp)
124 def _submitreq(self, req, rsp):
124 def _submitreq(self, req, rsp):
125 encresults = self.remote._submitbatch(req)
125 encresults = self.remote._submitbatch(req)
126 for encres, r in zip(encresults, rsp):
126 for encres, r in zip(encresults, rsp):
127 batchable, encresref, resref = r
127 batchable, encresref, resref = r
128 encresref.set(encres)
128 encresref.set(encres)
129 resref.set(batchable.next())
129 resref.set(batchable.next())
130
130
131 def batchable(f):
131 def batchable(f):
    '''annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Handle locally computable results first:
        if not one:
            yield "a local result", None
        # Build list of encoded arguments suitable for your wire protocol:
        encargs = [('one', encode(one),), ('two', encode(two),)]
        # Create future for injection of encoded result:
        encresref = future()
        # Return encoded arguments and future:
        yield encargs, encresref
        # Assuming the future to be filled with the result from the batched
        # request now. Decode it:
        yield decode(encresref.value)

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    '''
    def plain(*args, **opts):
        batchable = f(*args, **opts)
        encargsorres, encresref = batchable.next()
        if not encresref:
            return encargsorres # a local result in this case
        self = args[0]
        encresref.set(self._submitone(f.func_name, encargsorres))
        return batchable.next()
    setattr(plain, 'batchable', f)
    return plain

# list of nodes encoding / decoding

def decodelist(l, sep=' '):
    if l:
        return map(bin, l.split(sep))
    return []

def encodelist(l, sep=' '):
    return sep.join(map(hex, l))
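
# Illustration only (hypothetical helper names, not part of the wire
# protocol implementation): encodelist/decodelist above map between binary
# node ids and the space-separated hex form used on the wire. The same
# round trip, sketched with the stdlib binascii module standing in for
# Mercurial's node.hex/node.bin:
import binascii

def _encodelistdemo(nodes, sep=' '):
    # binary node ids -> 'hex hex hex ...'
    return sep.join(binascii.hexlify(n).decode('ascii') for n in nodes)

def _decodelistdemo(data, sep=' '):
    # 'hex hex hex ...' -> list of binary node ids; '' decodes to []
    if data:
        return [binascii.unhexlify(h) for h in data.split(sep)]
    return []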

# batched call argument encoding

def escapearg(plain):
    return (plain
            .replace(':', '::')
            .replace(',', ':,')
            .replace(';', ':;')
            .replace('=', ':='))

def unescapearg(escaped):
    return (escaped
            .replace(':=', '=')
            .replace(':;', ';')
            .replace(':,', ',')
            .replace('::', ':'))
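
# Illustration only (hypothetical helper names): the ':'-based escaping
# above must round-trip through unescapearg so that batched argument
# values can safely contain the batch separators (',', ';', '='). A
# self-contained sketch of the same property, with local copies of the
# two helpers:
def _escapedemo(s):
    return (s.replace(':', '::').replace(',', ':,')
             .replace(';', ':;').replace('=', ':='))

def _unescapedemo(s):
    # note the reverse order: '::' -> ':' must come last
    return (s.replace(':=', '=').replace(':;', ';')
             .replace(':,', ',').replace('::', ':'))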

# client side

class wirepeer(peer.peerrepository):

    def batch(self):
        return remotebatch(self)
    def _submitbatch(self, req):
        cmds = []
        for op, argsdict in req:
            args = ','.join('%s=%s' % p for p in argsdict.iteritems())
            cmds.append('%s %s' % (op, args))
        rsp = self._call("batch", cmds=';'.join(cmds))
        return rsp.split(';')
    def _submitone(self, op, args):
        return self._call(op, **args)

    @batchable
    def lookup(self, key):
        self.requirecap('lookup', _('look up remote revision'))
        f = future()
        yield {'key': encoding.fromlocal(key)}, f
        d = f.value
        success, data = d[:-1].split(" ", 1)
        if int(success):
            yield bin(data)
        self._abort(error.RepoError(data))

    @batchable
    def heads(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            yield decodelist(d[:-1])
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def known(self, nodes):
        f = future()
        yield {'nodes': encodelist(nodes)}, f
        d = f.value
        try:
            yield [bool(int(f)) for f in d]
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def branchmap(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            branchmap = {}
            for branchpart in d.splitlines():
                branchname, branchheads = branchpart.split(' ', 1)
                branchname = encoding.tolocal(urllib.unquote(branchname))
                branchheads = decodelist(branchheads)
                branchmap[branchname] = branchheads
            yield branchmap
        except TypeError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    def branches(self, nodes):
        n = encodelist(nodes)
        d = self._call("branches", nodes=n)
        try:
            br = [tuple(decodelist(b)) for b in d.splitlines()]
            return br
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    def between(self, pairs):
        batch = 8 # avoid giant requests
        r = []
        for i in xrange(0, len(pairs), batch):
            n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
            d = self._call("between", pairs=n)
            try:
                r.extend(l and decodelist(l) or [] for l in d.splitlines())
            except ValueError:
                self._abort(error.ResponseError(_("unexpected response:"), d))
        return r

    @batchable
    def pushkey(self, namespace, key, old, new):
        if not self.capable('pushkey'):
            yield False, None
        f = future()
        self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
        yield {'namespace': encoding.fromlocal(namespace),
               'key': encoding.fromlocal(key),
               'old': encoding.fromlocal(old),
               'new': encoding.fromlocal(new)}, f
        d = f.value
        d, output = d.split('\n', 1)
        try:
            d = bool(int(d))
        except ValueError:
            raise error.ResponseError(
                _('push failed (unexpected response):'), d)
        for l in output.splitlines(True):
            self.ui.status(_('remote: '), l)
        yield d

    @batchable
    def listkeys(self, namespace):
        if not self.capable('pushkey'):
            yield {}, None
        f = future()
        self.ui.debug('preparing listkeys for "%s"\n' % namespace)
        yield {'namespace': encoding.fromlocal(namespace)}, f
        d = f.value
        r = {}
        for l in d.splitlines():
            k, v = l.split('\t')
            r[encoding.tolocal(k)] = encoding.tolocal(v)
        yield r

    def stream_out(self):
        return self._callstream('stream_out')

    def changegroup(self, nodes, kind):
        n = encodelist(nodes)
        f = self._callcompressable("changegroup", roots=n)
        return changegroupmod.unbundle10(f, 'UN')

    def changegroupsubset(self, bases, heads, kind):
        self.requirecap('changegroupsubset', _('look up remote changes'))
        bases = encodelist(bases)
        heads = encodelist(heads)
        f = self._callcompressable("changegroupsubset",
                                   bases=bases, heads=heads)
        return changegroupmod.unbundle10(f, 'UN')

    def getbundle(self, source, heads=None, common=None, bundlecaps=None):
        self.requirecap('getbundle', _('look up remote changes'))
        opts = {}
        if heads is not None:
            opts['heads'] = encodelist(heads)
        if common is not None:
            opts['common'] = encodelist(common)
        if bundlecaps is not None:
            opts['bundlecaps'] = ','.join(bundlecaps)
        f = self._callcompressable("getbundle", **opts)
        return changegroupmod.unbundle10(f, 'UN')

    def unbundle(self, cg, heads, source):
        '''Send cg (a readable file-like object representing the
        changegroup to push, typically a chunkbuffer object) to the
        remote server as a bundle. Return an integer indicating the
        result of the push (see localrepository.addchangegroup()).'''

        if heads != ['force'] and self.capable('unbundlehash'):
            heads = encodelist(['hashed',
                                util.sha1(''.join(sorted(heads))).digest()])
        else:
            heads = encodelist(heads)

        ret, output = self._callpush("unbundle", cg, heads=heads)
        if ret == "":
            raise error.ResponseError(
                _('push failed:'), output)
        try:
            ret = int(ret)
        except ValueError:
            raise error.ResponseError(
                _('push failed (unexpected response):'), ret)

        for l in output.splitlines(True):
            self.ui.status(_('remote: '), l)
        return ret

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        # don't pass optional arguments left at their default value
        opts = {}
        if three is not None:
            opts['three'] = three
        if four is not None:
            opts['four'] = four
        return self._call('debugwireargs', one=one, two=two, **opts)

    def _call(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a simple string.

        returns the server reply as a string."""
        raise NotImplementedError()

    def _callstream(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        returns the server reply as a file-like object."""
        raise NotImplementedError()

    def _callcompressable(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        The stream may have been compressed in some implementations. This
        function takes care of the decompression. This is the only difference
        with _callstream.

        returns the server reply as a file-like object.
        """
        raise NotImplementedError()

    def _callpush(self, cmd, fp, **args):
        """execute a <cmd> on the server

        The command is expected to be related to a push. Push has a special
        return method.

        returns the server reply as a (ret, output) tuple. ret is either
        empty (error) or a stringified int.
        """
        raise NotImplementedError()

    def _abort(self, exception):
        """cleanly abort the wire protocol connection and raise the exception
        """
        raise NotImplementedError()
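
# Illustration only (hypothetical helper name): _submitbatch above
# flattens a list of (op, argsdict) pairs into the single-line batch
# request "op1 k1=v1,k2=v2;op2 ...". A self-contained sketch of that
# encoding step (argument values are assumed to be escapearg()-escaped
# already; sorted() is used here only to make the output deterministic):
def _encodebatchdemo(req):
    cmds = []
    for op, argsdict in req:
        args = ','.join('%s=%s' % (k, v) for k, v in sorted(argsdict.items()))
        cmds.append('%s %s' % (op, args))
    return ';'.join(cmds)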

# server side

# wire protocol commands can either return a string or one of these classes.
class streamres(object):
    """wireproto reply: binary stream

    The call was successful and the result is a stream.
    Iterate on the `self.gen` attribute to retrieve chunks.
    """
    def __init__(self, gen):
        self.gen = gen

class pushres(object):
    """wireproto reply: success with simple integer return

    The call was successful and returned an integer contained in `self.res`.
    """
    def __init__(self, res):
        self.res = res

class pusherr(object):
    """wireproto reply: failure

    The call failed. The `self.res` attribute contains the error message.
    """
    def __init__(self, res):
        self.res = res

class ooberror(object):
    """wireproto reply: failure of a batch of operations

    Something failed during a batch call. The error message is stored in
    `self.message`.
    """
    def __init__(self, message):
        self.message = message

def dispatch(repo, proto, command):
    repo = repo.filtered("served")
    func, spec = commands[command]
    args = proto.getargs(spec)
    return func(repo, proto, *args)

def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]
    if others:
        sys.stderr.write("abort: %s got unexpected arguments %s\n"
                         % (cmd, ",".join(others)))
    return opts

# list of commands
commands = {}

def wireprotocommand(name, args=''):
    """decorator for wire protocol command"""
    def register(func):
        commands[name] = (func, args)
        return func
    return register

@wireprotocommand('batch', 'cmds *')
def batch(repo, proto, cmds, others):
    repo = repo.filtered("served")
    res = []
    for pair in cmds.split(';'):
        op, args = pair.split(' ', 1)
        vals = {}
        for a in args.split(','):
            if a:
                n, v = a.split('=')
                vals[n] = unescapearg(v)
        func, spec = commands[op]
        if spec:
            keys = spec.split()
            data = {}
            for k in keys:
                if k == '*':
                    star = {}
                    for key in vals.keys():
                        if key not in keys:
                            star[key] = vals[key]
                    data['*'] = star
                else:
                    data[k] = vals[k]
            result = func(repo, proto, *[data[k] for k in keys])
        else:
            result = func(repo, proto)
        if isinstance(result, ooberror):
            return result
        res.append(escapearg(result))
    return ';'.join(res)

@wireprotocommand('between', 'pairs')
def between(repo, proto, pairs):
    pairs = [decodelist(p, '-') for p in pairs.split(" ")]
    r = []
    for b in repo.between(pairs):
        r.append(encodelist(b) + "\n")
    return "".join(r)

@wireprotocommand('branchmap')
def branchmap(repo, proto):
    branchmap = repo.branchmap()
    heads = []
    for branch, nodes in branchmap.iteritems():
        branchname = urllib.quote(encoding.fromlocal(branch))
        branchnodes = encodelist(nodes)
        heads.append('%s %s' % (branchname, branchnodes))
    return '\n'.join(heads)

@wireprotocommand('branches', 'nodes')
def branches(repo, proto, nodes):
    nodes = decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(encodelist(b) + "\n")
    return "".join(r)


wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
                 'known', 'getbundle', 'unbundlehash', 'batch']

def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation

    - returns a list: easy to alter
    - changes made here are propagated to both the `capabilities` and
      `hello` commands without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)
    if _allowstream(repo.ui):
        if repo.ui.configbool('server', 'preferuncompressed', False):
            caps.append('stream-preferred')
        requiredformats = repo.requirements & repo.supportedformats
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - set(('revlogv1',)):
            caps.append('stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append('streamreqs=%s' % ','.join(requiredformats))
    caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
    caps.append('httpheader=1024')
    return caps

# If you are writing an extension and considering wrapping this function,
# wrap `_capabilities` instead.
@wireprotocommand('capabilities')
def capabilities(repo, proto):
    return ' '.join(_capabilities(repo, proto))

@wireprotocommand('changegroup', 'roots')
def changegroup(repo, proto, roots):
    nodes = decodelist(roots)
    cg = changegroupmod.changegroup(repo, nodes, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('changegroupsubset', 'bases heads')
def changegroupsubset(repo, proto, bases, heads):
    bases = decodelist(bases)
    heads = decodelist(heads)
    cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('debugwireargs', 'one two *')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options('debugwireargs', ['three', 'four'], others)
    return repo.debugwireargs(one, two, **opts)

@wireprotocommand('getbundle', '*')
def getbundle(repo, proto, others):
    opts = options('getbundle', ['heads', 'common', 'bundlecaps'], others)
    for k, v in opts.iteritems():
        if k in ('heads', 'common'):
            opts[k] = decodelist(v)
        elif k == 'bundlecaps':
            opts[k] = set(v.split(','))
    cg = changegroupmod.getbundle(repo, 'serve', **opts)
    return streamres(proto.groupchunks(cg))

@wireprotocommand('heads')
def heads(repo, proto):
    h = repo.heads()
    return encodelist(h) + "\n"

@wireprotocommand('hello')
def hello(repo, proto):
    '''the hello command returns a set of lines describing various
    interesting things about the server, in an RFC822-like format.
    Currently the only one defined is "capabilities", which
    consists of a line in the form:

    capabilities: space separated list of tokens
    '''
    return "capabilities: %s\n" % (capabilities(repo, proto))
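
# Illustration only (hypothetical helper name, not part of the protocol
# code): a client can parse the RFC822-like hello reply above into a
# field dictionary. Sketch:
def _parsehellodemo(text):
    fields = {}
    for line in text.splitlines():
        key, value = line.split(': ', 1)
        fields[key] = value
    return fields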

@wireprotocommand('listkeys', 'namespace')
def listkeys(repo, proto, namespace):
    d = repo.listkeys(encoding.tolocal(namespace)).items()
    t = '\n'.join(['%s\t%s' % (encoding.fromlocal(k), encoding.fromlocal(v))
                   for k, v in d])
    return t

@wireprotocommand('lookup', 'key')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        c = repo[k]
        r = c.hex()
        success = 1
    except Exception, inst:
        r = str(inst)
        success = 0
    return "%s %s\n" % (success, r)

@wireprotocommand('known', 'nodes *')
def known(repo, proto, nodes, others):
    return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))

@wireprotocommand('pushkey', 'namespace key old new')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and new.encode('string-escape') != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new) # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass # binary, leave unmodified
    else:
        new = encoding.tolocal(new) # normal path

    if util.safehasattr(proto, 'restore'):

        proto.redirect()

        try:
            r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                             encoding.tolocal(old), new) or False
        except util.Abort:
            r = False

        output = proto.restore()

        return '%s\n%s' % (int(r), output)

    r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                     encoding.tolocal(old), new)
    return '%s\n' % int(r)
678
678
679 def _allowstream(ui):
679 def _allowstream(ui):
680 return ui.configbool('server', 'uncompressed', True, untrusted=True)
680 return ui.configbool('server', 'uncompressed', True, untrusted=True)
681
681
682 def _walkstreamfiles(repo):
682 def _walkstreamfiles(repo):
683 # this is its own function so extensions can override it
683 # this is its own function so extensions can override it
684 return repo.store.walk()
684 return repo.store.walk()
685
685
686 @wireprotocommand('stream_out')
686 @wireprotocommand('stream_out')
687 def stream(repo, proto):
687 def stream(repo, proto):
688 '''If the server supports streaming clone, it advertises the "stream"
688 '''If the server supports streaming clone, it advertises the "stream"
689 capability with a value representing the version and flags of the repo
689 capability with a value representing the version and flags of the repo
690 it is serving. The client checks to see if it understands the format.
690 it is serving. The client checks to see if it understands the format.
691
691
692 The format is simple: the server writes out a line with the number
692 The format is simple: the server writes out a line with the number
693 of files, then the total number of bytes to be transferred (separated
693 of files, then the total number of bytes to be transferred (separated
694 by a space). Then, for each file, the server first writes the filename
694 by a space). Then, for each file, the server first writes the filename
695 and filesize (separated by the null character), then the file contents.
695 and file size (separated by the null character), then the file contents.
696 '''
696 '''
697
697
698 if not _allowstream(repo.ui):
698 if not _allowstream(repo.ui):
699 return '1\n'
699 return '1\n'
700
700
701 entries = []
701 entries = []
702 total_bytes = 0
702 total_bytes = 0
703 try:
703 try:
704 # get consistent snapshot of repo, lock during scan
704 # get consistent snapshot of repo, lock during scan
705 lock = repo.lock()
705 lock = repo.lock()
706 try:
706 try:
707 repo.ui.debug('scanning\n')
707 repo.ui.debug('scanning\n')
708 for name, ename, size in _walkstreamfiles(repo):
708 for name, ename, size in _walkstreamfiles(repo):
709 if size:
709 if size:
710 entries.append((name, size))
710 entries.append((name, size))
711 total_bytes += size
711 total_bytes += size
712 finally:
712 finally:
713 lock.release()
713 lock.release()
714 except error.LockError:
714 except error.LockError:
715 return '2\n' # error: 2
715 return '2\n' # error: 2
716
716
717 def streamer(repo, entries, total):
717 def streamer(repo, entries, total):
718 '''stream out all metadata files in the repository.'''
718 '''stream out all metadata files in the repository.'''
719 yield '0\n' # success
719 yield '0\n' # success
720 repo.ui.debug('%d files, %d bytes to transfer\n' %
720 repo.ui.debug('%d files, %d bytes to transfer\n' %
721 (len(entries), total_bytes))
721 (len(entries), total_bytes))
722 yield '%d %d\n' % (len(entries), total_bytes)
722 yield '%d %d\n' % (len(entries), total_bytes)
723
723
724 sopener = repo.sopener
724 sopener = repo.sopener
725 oldaudit = sopener.mustaudit
725 oldaudit = sopener.mustaudit
726 debugflag = repo.ui.debugflag
726 debugflag = repo.ui.debugflag
727 sopener.mustaudit = False
727 sopener.mustaudit = False
728
728
729 try:
729 try:
730 for name, size in entries:
730 for name, size in entries:
731 if debugflag:
731 if debugflag:
732 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
732 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
733 # partially encode name over the wire for backwards compat
733 # partially encode name over the wire for backwards compat
734 yield '%s\0%d\n' % (store.encodedir(name), size)
734 yield '%s\0%d\n' % (store.encodedir(name), size)
735 if size <= 65536:
735 if size <= 65536:
736 fp = sopener(name)
736 fp = sopener(name)
737 try:
737 try:
738 data = fp.read(size)
738 data = fp.read(size)
739 finally:
739 finally:
740 fp.close()
740 fp.close()
741 yield data
741 yield data
742 else:
742 else:
743 for chunk in util.filechunkiter(sopener(name), limit=size):
743 for chunk in util.filechunkiter(sopener(name), limit=size):
744 yield chunk
744 yield chunk
745 # replace with "finally:" when support for python 2.4 has been dropped
745 # replace with "finally:" when support for python 2.4 has been dropped
746 except Exception:
746 except Exception:
747 sopener.mustaudit = oldaudit
747 sopener.mustaudit = oldaudit
748 raise
748 raise
749 sopener.mustaudit = oldaudit
749 sopener.mustaudit = oldaudit
750
750
751 return streamres(streamer(repo, entries, total_bytes))
751 return streamres(streamer(repo, entries, total_bytes))
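The wire format described in the `stream_out` docstring above (a count line, then `name\0size` headers followed by raw file contents) can be consumed on the receiving side roughly as follows. This is an illustrative sketch, not Mercurial's actual client code; `parse_stream_body` is a hypothetical helper name.

```python
import io

def parse_stream_body(stream):
    """Parse a stream_out payload from a binary file-like object
    positioned after the status line; return (name, data) pairs."""
    # header: "<number of files> <total bytes>\n"
    nfiles, total = map(int, stream.readline().split())
    files = []
    for _ in range(nfiles):
        # per-file header: "<name>\0<size>\n"
        header = stream.readline().rstrip(b'\n')
        name, size = header.split(b'\0')
        files.append((name.decode('ascii'), stream.read(int(size))))
    return files

# tiny synthetic payload: one file "data/foo.i" holding 3 bytes
payload = io.BytesIO(b'1 3\ndata/foo.i\x003\nabc')
print(parse_stream_body(payload))  # [('data/foo.i', b'abc')]
```

Note that `total` is only advisory (useful for progress reporting); the per-file sizes drive the actual reads.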
752
752
753 @wireprotocommand('unbundle', 'heads')
753 @wireprotocommand('unbundle', 'heads')
754 def unbundle(repo, proto, heads):
754 def unbundle(repo, proto, heads):
755 their_heads = decodelist(heads)
755 their_heads = decodelist(heads)
756
756
757 try:
757 try:
758 proto.redirect()
758 proto.redirect()
759
759
760 exchange.check_heads(repo, their_heads, 'preparing changes')
760 exchange.check_heads(repo, their_heads, 'preparing changes')
761
761
762 # write bundle data to temporary file because it can be big
762 # write bundle data to temporary file because it can be big
763 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
763 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
764 fp = os.fdopen(fd, 'wb+')
764 fp = os.fdopen(fd, 'wb+')
765 r = 0
765 r = 0
766 try:
766 try:
767 proto.getfile(fp)
767 proto.getfile(fp)
768 fp.seek(0)
768 fp.seek(0)
769 gen = changegroupmod.readbundle(fp, None)
769 gen = changegroupmod.readbundle(fp, None)
770 r = exchange.unbundle(repo, gen, their_heads, 'serve',
770 r = exchange.unbundle(repo, gen, their_heads, 'serve',
771 proto._client())
771 proto._client())
772 return pushres(r)
772 return pushres(r)
773
773
774 finally:
774 finally:
775 fp.close()
775 fp.close()
776 os.unlink(tempname)
776 os.unlink(tempname)
777 except util.Abort, inst:
777 except util.Abort, inst:
778 # The old code we moved used sys.stderr directly.
778 # The old code we moved used sys.stderr directly.
779 # We did not changed it to minise code change.
779 # We did not change it to minimise code change.
780 # This needs to be moved to something proper.
780 # This needs to be moved to something proper.
781 # Feel free to do it.
781 # Feel free to do it.
782 sys.stderr.write("abort: %s\n" % inst)
782 sys.stderr.write("abort: %s\n" % inst)
783 return pushres(0)
783 return pushres(0)
784 except exchange.PushRaced, exc:
784 except exchange.PushRaced, exc:
785 return pusherr(str(exc))
785 return pusherr(str(exc))
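`unbundle` above spools the incoming bundle to a temporary file before processing it, because the payload can be too big to hold in memory. The pattern (mkstemp, process, unlink in a `finally`) can be sketched in isolation like this; the processing step here just counts bytes, where the real code feeds the file to the bundle reader.

```python
import os
import tempfile

def spool_to_tempfile(data):
    """Write a payload to a temp file, process it there, and always
    clean up -- the same shape as unbundle() above (sketch only)."""
    fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
    fp = os.fdopen(fd, 'wb+')
    try:
        fp.write(data)
        fp.seek(0)          # rewind so the consumer reads from the start
        return len(fp.read())
    finally:
        fp.close()
        os.unlink(tempname)  # temp file never outlives the request

print(spool_to_tempfile(b'bundle bytes'))  # 12
```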
@@ -1,1321 +1,1321 b''
1 #!/usr/bin/env python
1 #!/usr/bin/env python
2 #
2 #
3 # run-tests.py - Run a set of tests on Mercurial
3 # run-tests.py - Run a set of tests on Mercurial
4 #
4 #
5 # Copyright 2006 Matt Mackall <mpm@selenic.com>
5 # Copyright 2006 Matt Mackall <mpm@selenic.com>
6 #
6 #
7 # This software may be used and distributed according to the terms of the
7 # This software may be used and distributed according to the terms of the
8 # GNU General Public License version 2 or any later version.
8 # GNU General Public License version 2 or any later version.
9
9
10 # Modifying this script is tricky because it has many modes:
10 # Modifying this script is tricky because it has many modes:
11 # - serial (default) vs parallel (-jN, N > 1)
11 # - serial (default) vs parallel (-jN, N > 1)
12 # - no coverage (default) vs coverage (-c, -C, -s)
12 # - no coverage (default) vs coverage (-c, -C, -s)
13 # - temp install (default) vs specific hg script (--with-hg, --local)
13 # - temp install (default) vs specific hg script (--with-hg, --local)
14 # - tests are a mix of shell scripts and Python scripts
14 # - tests are a mix of shell scripts and Python scripts
15 #
15 #
16 # If you change this script, it is recommended that you ensure you
16 # If you change this script, it is recommended that you ensure you
17 # haven't broken it by running it in various modes with a representative
17 # haven't broken it by running it in various modes with a representative
18 # sample of test scripts. For example:
18 # sample of test scripts. For example:
19 #
19 #
20 # 1) serial, no coverage, temp install:
20 # 1) serial, no coverage, temp install:
21 # ./run-tests.py test-s*
21 # ./run-tests.py test-s*
22 # 2) serial, no coverage, local hg:
22 # 2) serial, no coverage, local hg:
23 # ./run-tests.py --local test-s*
23 # ./run-tests.py --local test-s*
24 # 3) serial, coverage, temp install:
24 # 3) serial, coverage, temp install:
25 # ./run-tests.py -c test-s*
25 # ./run-tests.py -c test-s*
26 # 4) serial, coverage, local hg:
26 # 4) serial, coverage, local hg:
27 # ./run-tests.py -c --local test-s* # unsupported
27 # ./run-tests.py -c --local test-s* # unsupported
28 # 5) parallel, no coverage, temp install:
28 # 5) parallel, no coverage, temp install:
29 # ./run-tests.py -j2 test-s*
29 # ./run-tests.py -j2 test-s*
30 # 6) parallel, no coverage, local hg:
30 # 6) parallel, no coverage, local hg:
31 # ./run-tests.py -j2 --local test-s*
31 # ./run-tests.py -j2 --local test-s*
32 # 7) parallel, coverage, temp install:
32 # 7) parallel, coverage, temp install:
33 # ./run-tests.py -j2 -c test-s* # currently broken
33 # ./run-tests.py -j2 -c test-s* # currently broken
34 # 8) parallel, coverage, local install:
34 # 8) parallel, coverage, local install:
35 # ./run-tests.py -j2 -c --local test-s* # unsupported (and broken)
35 # ./run-tests.py -j2 -c --local test-s* # unsupported (and broken)
36 # 9) parallel, custom tmp dir:
36 # 9) parallel, custom tmp dir:
37 # ./run-tests.py -j2 --tmpdir /tmp/myhgtests
37 # ./run-tests.py -j2 --tmpdir /tmp/myhgtests
38 #
38 #
39 # (You could use any subset of the tests: test-s* happens to match
39 # (You could use any subset of the tests: test-s* happens to match
40 # enough that it's worth doing parallel runs, few enough that it
40 # enough that it's worth doing parallel runs, few enough that it
41 # completes fairly quickly, includes both shell and Python scripts, and
41 # completes fairly quickly, includes both shell and Python scripts, and
42 # includes some scripts that run daemon processes.)
42 # includes some scripts that run daemon processes.)
43
43
44 from distutils import version
44 from distutils import version
45 import difflib
45 import difflib
46 import errno
46 import errno
47 import optparse
47 import optparse
48 import os
48 import os
49 import shutil
49 import shutil
50 import subprocess
50 import subprocess
51 import signal
51 import signal
52 import sys
52 import sys
53 import tempfile
53 import tempfile
54 import time
54 import time
55 import random
55 import random
56 import re
56 import re
57 import threading
57 import threading
58 import killdaemons as killmod
58 import killdaemons as killmod
59 import Queue as queue
59 import Queue as queue
60
60
61 processlock = threading.Lock()
61 processlock = threading.Lock()
62
62
63 # subprocess._cleanup can race with any Popen.wait or Popen.poll on py24
63 # subprocess._cleanup can race with any Popen.wait or Popen.poll on py24
64 # http://bugs.python.org/issue1731717 for details. We shouldn't be producing
64 # http://bugs.python.org/issue1731717 for details. We shouldn't be producing
65 # zombies but it's pretty harmless even if we do.
65 # zombies but it's pretty harmless even if we do.
66 if sys.version_info < (2, 5):
66 if sys.version_info < (2, 5):
67 subprocess._cleanup = lambda: None
67 subprocess._cleanup = lambda: None
68
68
69 closefds = os.name == 'posix'
69 closefds = os.name == 'posix'
70 def Popen4(cmd, wd, timeout, env=None):
70 def Popen4(cmd, wd, timeout, env=None):
71 processlock.acquire()
71 processlock.acquire()
72 p = subprocess.Popen(cmd, shell=True, bufsize=-1, cwd=wd, env=env,
72 p = subprocess.Popen(cmd, shell=True, bufsize=-1, cwd=wd, env=env,
73 close_fds=closefds,
73 close_fds=closefds,
74 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
74 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
75 stderr=subprocess.STDOUT)
75 stderr=subprocess.STDOUT)
76 processlock.release()
76 processlock.release()
77
77
78 p.fromchild = p.stdout
78 p.fromchild = p.stdout
79 p.tochild = p.stdin
79 p.tochild = p.stdin
80 p.childerr = p.stderr
80 p.childerr = p.stderr
81
81
82 p.timeout = False
82 p.timeout = False
83 if timeout:
83 if timeout:
84 def t():
84 def t():
85 start = time.time()
85 start = time.time()
86 while time.time() - start < timeout and p.returncode is None:
86 while time.time() - start < timeout and p.returncode is None:
87 time.sleep(.1)
87 time.sleep(.1)
88 p.timeout = True
88 p.timeout = True
89 if p.returncode is None:
89 if p.returncode is None:
90 terminate(p)
90 terminate(p)
91 threading.Thread(target=t).start()
91 threading.Thread(target=t).start()
92
92
93 return p
93 return p
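The watchdog in `Popen4` above polls the child in a background thread and terminates it once the timeout elapses. A self-contained version of that pattern, using `poll()` so the return code is actually refreshed, might look like this (a sketch; the real code also wires up the pipes and serializes spawning with a process lock):

```python
import subprocess
import threading
import time

def run_with_watchdog(cmd, timeout):
    """Run `cmd` through the shell, killing it after `timeout` seconds,
    using the polling-thread pattern from Popen4 above."""
    p = subprocess.Popen(cmd, shell=True)
    p.timed_out = False  # extra attribute, like p.timeout in Popen4

    def watchdog():
        start = time.time()
        while time.time() - start < timeout and p.poll() is None:
            time.sleep(0.1)
        if p.poll() is None:      # still running: timeout hit
            p.timed_out = True
            p.terminate()

    t = threading.Thread(target=watchdog)
    t.start()
    p.wait()
    t.join()
    return p.returncode, p.timed_out

# a fast command finishes well before the timeout fires
rc, timed_out = run_with_watchdog('exit 0', timeout=5)
print(rc, timed_out)  # 0 False
```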
94
94
95 # reserved exit code to skip test (used by hghave)
95 # reserved exit code to skip test (used by hghave)
96 SKIPPED_STATUS = 80
96 SKIPPED_STATUS = 80
97 SKIPPED_PREFIX = 'skipped: '
97 SKIPPED_PREFIX = 'skipped: '
98 FAILED_PREFIX = 'hghave check failed: '
98 FAILED_PREFIX = 'hghave check failed: '
99 PYTHON = sys.executable.replace('\\', '/')
99 PYTHON = sys.executable.replace('\\', '/')
100 IMPL_PATH = 'PYTHONPATH'
100 IMPL_PATH = 'PYTHONPATH'
101 if 'java' in sys.platform:
101 if 'java' in sys.platform:
102 IMPL_PATH = 'JYTHONPATH'
102 IMPL_PATH = 'JYTHONPATH'
103
103
104 requiredtools = [os.path.basename(sys.executable), "diff", "grep", "unzip",
104 requiredtools = [os.path.basename(sys.executable), "diff", "grep", "unzip",
105 "gunzip", "bunzip2", "sed"]
105 "gunzip", "bunzip2", "sed"]
106 createdfiles = []
106 createdfiles = []
107
107
108 defaults = {
108 defaults = {
109 'jobs': ('HGTEST_JOBS', 1),
109 'jobs': ('HGTEST_JOBS', 1),
110 'timeout': ('HGTEST_TIMEOUT', 180),
110 'timeout': ('HGTEST_TIMEOUT', 180),
111 'port': ('HGTEST_PORT', 20059),
111 'port': ('HGTEST_PORT', 20059),
112 'shell': ('HGTEST_SHELL', 'sh'),
112 'shell': ('HGTEST_SHELL', 'sh'),
113 }
113 }
114
114
115 def parselistfiles(files, listtype, warn=True):
115 def parselistfiles(files, listtype, warn=True):
116 entries = dict()
116 entries = dict()
117 for filename in files:
117 for filename in files:
118 try:
118 try:
119 path = os.path.expanduser(os.path.expandvars(filename))
119 path = os.path.expanduser(os.path.expandvars(filename))
120 f = open(path, "r")
120 f = open(path, "r")
121 except IOError, err:
121 except IOError, err:
122 if err.errno != errno.ENOENT:
122 if err.errno != errno.ENOENT:
123 raise
123 raise
124 if warn:
124 if warn:
125 print "warning: no such %s file: %s" % (listtype, filename)
125 print "warning: no such %s file: %s" % (listtype, filename)
126 continue
126 continue
127
127
128 for line in f.readlines():
128 for line in f.readlines():
129 line = line.split('#', 1)[0].strip()
129 line = line.split('#', 1)[0].strip()
130 if line:
130 if line:
131 entries[line] = filename
131 entries[line] = filename
132
132
133 f.close()
133 f.close()
134 return entries
134 return entries
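The blacklist/whitelist files read by `parselistfiles` above are plain text: one test name per line, `#` starts a comment, blank lines are ignored, and each surviving entry is mapped to the file it came from. The parsing core, separated from the file I/O, is just:

```python
def parseentries(lines, filename):
    """Strip '#' comments and blanks from blacklist/whitelist-style
    lines, mapping each entry to its source file (simplified sketch
    of parselistfiles() working on in-memory lines)."""
    entries = {}
    for line in lines:
        line = line.split('#', 1)[0].strip()  # drop trailing comment
        if line:
            entries[line] = filename
    return entries

lines = ['test-a.t  # flaky', '', '# whole-line comment', 'test-b.t']
print(parseentries(lines, 'blacklist'))
# {'test-a.t': 'blacklist', 'test-b.t': 'blacklist'}
```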
135
135
136 def getparser():
136 def getparser():
137 parser = optparse.OptionParser("%prog [options] [tests]")
137 parser = optparse.OptionParser("%prog [options] [tests]")
138
138
139 # keep these sorted
139 # keep these sorted
140 parser.add_option("--blacklist", action="append",
140 parser.add_option("--blacklist", action="append",
141 help="skip tests listed in the specified blacklist file")
141 help="skip tests listed in the specified blacklist file")
142 parser.add_option("--whitelist", action="append",
142 parser.add_option("--whitelist", action="append",
143 help="always run tests listed in the specified whitelist file")
143 help="always run tests listed in the specified whitelist file")
144 parser.add_option("--changed", type="string",
144 parser.add_option("--changed", type="string",
145 help="run tests that are changed in parent rev or working directory")
145 help="run tests that are changed in parent rev or working directory")
146 parser.add_option("-C", "--annotate", action="store_true",
146 parser.add_option("-C", "--annotate", action="store_true",
147 help="output files annotated with coverage")
147 help="output files annotated with coverage")
148 parser.add_option("-c", "--cover", action="store_true",
148 parser.add_option("-c", "--cover", action="store_true",
149 help="print a test coverage report")
149 help="print a test coverage report")
150 parser.add_option("-d", "--debug", action="store_true",
150 parser.add_option("-d", "--debug", action="store_true",
151 help="debug mode: write output of test scripts to console"
151 help="debug mode: write output of test scripts to console"
152 " rather than capturing and diff'ing it (disables timeout)")
152 " rather than capturing and diffing it (disables timeout)")
153 parser.add_option("-f", "--first", action="store_true",
153 parser.add_option("-f", "--first", action="store_true",
154 help="exit on the first test failure")
154 help="exit on the first test failure")
155 parser.add_option("-H", "--htmlcov", action="store_true",
155 parser.add_option("-H", "--htmlcov", action="store_true",
156 help="create an HTML report of the coverage of the files")
156 help="create an HTML report of the coverage of the files")
157 parser.add_option("-i", "--interactive", action="store_true",
157 parser.add_option("-i", "--interactive", action="store_true",
158 help="prompt to accept changed output")
158 help="prompt to accept changed output")
159 parser.add_option("-j", "--jobs", type="int",
159 parser.add_option("-j", "--jobs", type="int",
160 help="number of jobs to run in parallel"
160 help="number of jobs to run in parallel"
161 " (default: $%s or %d)" % defaults['jobs'])
161 " (default: $%s or %d)" % defaults['jobs'])
162 parser.add_option("--keep-tmpdir", action="store_true",
162 parser.add_option("--keep-tmpdir", action="store_true",
163 help="keep temporary directory after running tests")
163 help="keep temporary directory after running tests")
164 parser.add_option("-k", "--keywords",
164 parser.add_option("-k", "--keywords",
165 help="run tests matching keywords")
165 help="run tests matching keywords")
166 parser.add_option("-l", "--local", action="store_true",
166 parser.add_option("-l", "--local", action="store_true",
167 help="shortcut for --with-hg=<testdir>/../hg")
167 help="shortcut for --with-hg=<testdir>/../hg")
168 parser.add_option("--loop", action="store_true",
168 parser.add_option("--loop", action="store_true",
169 help="loop tests repeatedly")
169 help="loop tests repeatedly")
170 parser.add_option("-n", "--nodiff", action="store_true",
170 parser.add_option("-n", "--nodiff", action="store_true",
171 help="skip showing test changes")
171 help="skip showing test changes")
172 parser.add_option("-p", "--port", type="int",
172 parser.add_option("-p", "--port", type="int",
173 help="port on which servers should listen"
173 help="port on which servers should listen"
174 " (default: $%s or %d)" % defaults['port'])
174 " (default: $%s or %d)" % defaults['port'])
175 parser.add_option("--compiler", type="string",
175 parser.add_option("--compiler", type="string",
176 help="compiler to build with")
176 help="compiler to build with")
177 parser.add_option("--pure", action="store_true",
177 parser.add_option("--pure", action="store_true",
178 help="use pure Python code instead of C extensions")
178 help="use pure Python code instead of C extensions")
179 parser.add_option("-R", "--restart", action="store_true",
179 parser.add_option("-R", "--restart", action="store_true",
180 help="restart at last error")
180 help="restart at last error")
181 parser.add_option("-r", "--retest", action="store_true",
181 parser.add_option("-r", "--retest", action="store_true",
182 help="retest failed tests")
182 help="retest failed tests")
183 parser.add_option("-S", "--noskips", action="store_true",
183 parser.add_option("-S", "--noskips", action="store_true",
184 help="don't report skip tests verbosely")
184 help="don't report skip tests verbosely")
185 parser.add_option("--shell", type="string",
185 parser.add_option("--shell", type="string",
186 help="shell to use (default: $%s or %s)" % defaults['shell'])
186 help="shell to use (default: $%s or %s)" % defaults['shell'])
187 parser.add_option("-t", "--timeout", type="int",
187 parser.add_option("-t", "--timeout", type="int",
188 help="kill errant tests after TIMEOUT seconds"
188 help="kill errant tests after TIMEOUT seconds"
189 " (default: $%s or %d)" % defaults['timeout'])
189 " (default: $%s or %d)" % defaults['timeout'])
190 parser.add_option("--time", action="store_true",
190 parser.add_option("--time", action="store_true",
191 help="time how long each test takes")
191 help="time how long each test takes")
192 parser.add_option("--tmpdir", type="string",
192 parser.add_option("--tmpdir", type="string",
193 help="run tests in the given temporary directory"
193 help="run tests in the given temporary directory"
194 " (implies --keep-tmpdir)")
194 " (implies --keep-tmpdir)")
195 parser.add_option("-v", "--verbose", action="store_true",
195 parser.add_option("-v", "--verbose", action="store_true",
196 help="output verbose messages")
196 help="output verbose messages")
197 parser.add_option("--view", type="string",
197 parser.add_option("--view", type="string",
198 help="external diff viewer")
198 help="external diff viewer")
199 parser.add_option("--with-hg", type="string",
199 parser.add_option("--with-hg", type="string",
200 metavar="HG",
200 metavar="HG",
201 help="test using specified hg script rather than a "
201 help="test using specified hg script rather than a "
202 "temporary installation")
202 "temporary installation")
203 parser.add_option("-3", "--py3k-warnings", action="store_true",
203 parser.add_option("-3", "--py3k-warnings", action="store_true",
204 help="enable Py3k warnings on Python 2.6+")
204 help="enable Py3k warnings on Python 2.6+")
205 parser.add_option('--extra-config-opt', action="append",
205 parser.add_option('--extra-config-opt', action="append",
206 help='set the given config opt in the test hgrc')
206 help='set the given config opt in the test hgrc')
207 parser.add_option('--random', action="store_true",
207 parser.add_option('--random', action="store_true",
208 help='run tests in random order')
208 help='run tests in random order')
209
209
210 for option, (envvar, default) in defaults.items():
210 for option, (envvar, default) in defaults.items():
211 defaults[option] = type(default)(os.environ.get(envvar, default))
211 defaults[option] = type(default)(os.environ.get(envvar, default))
212 parser.set_defaults(**defaults)
212 parser.set_defaults(**defaults)
213
213
214 return parser
214 return parser
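The `defaults` loop at the end of `getparser` above resolves each option's default from an environment variable when one is set, coercing it to the type of the hard-coded fallback (so `HGTEST_JOBS` becomes an `int`, `HGTEST_SHELL` stays a `str`). In isolation:

```python
import os

# the env-var default pattern used by getparser() above
defaults = {
    'jobs': ('HGTEST_JOBS', 1),
    'shell': ('HGTEST_SHELL', 'sh'),
}

os.environ['HGTEST_JOBS'] = '4'        # pretend the user exported this
os.environ.pop('HGTEST_SHELL', None)   # this one stays at its fallback

resolved = {}
for option, (envvar, default) in defaults.items():
    # type(default) is int for 'jobs', str for 'shell'
    resolved[option] = type(default)(os.environ.get(envvar, default))

print(resolved)  # {'jobs': 4, 'shell': 'sh'}
```

A malformed value (e.g. `HGTEST_JOBS=abc`) would raise `ValueError` at the coercion, which is also how the real script behaves.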
215
215
216 def parseargs(args, parser):
216 def parseargs(args, parser):
217 (options, args) = parser.parse_args(args)
217 (options, args) = parser.parse_args(args)
218
218
219 # jython is always pure
219 # jython is always pure
220 if 'java' in sys.platform or '__pypy__' in sys.modules:
220 if 'java' in sys.platform or '__pypy__' in sys.modules:
221 options.pure = True
221 options.pure = True
222
222
223 if options.with_hg:
223 if options.with_hg:
224 options.with_hg = os.path.expanduser(options.with_hg)
224 options.with_hg = os.path.expanduser(options.with_hg)
225 if not (os.path.isfile(options.with_hg) and
225 if not (os.path.isfile(options.with_hg) and
226 os.access(options.with_hg, os.X_OK)):
226 os.access(options.with_hg, os.X_OK)):
227 parser.error('--with-hg must specify an executable hg script')
227 parser.error('--with-hg must specify an executable hg script')
228 if not os.path.basename(options.with_hg) == 'hg':
228 if not os.path.basename(options.with_hg) == 'hg':
229 sys.stderr.write('warning: --with-hg should specify an hg script\n')
229 sys.stderr.write('warning: --with-hg should specify an hg script\n')
230 if options.local:
230 if options.local:
231 testdir = os.path.dirname(os.path.realpath(sys.argv[0]))
231 testdir = os.path.dirname(os.path.realpath(sys.argv[0]))
232 hgbin = os.path.join(os.path.dirname(testdir), 'hg')
232 hgbin = os.path.join(os.path.dirname(testdir), 'hg')
233 if os.name != 'nt' and not os.access(hgbin, os.X_OK):
233 if os.name != 'nt' and not os.access(hgbin, os.X_OK):
234 parser.error('--local specified, but %r not found or not executable'
234 parser.error('--local specified, but %r not found or not executable'
235 % hgbin)
235 % hgbin)
236 options.with_hg = hgbin
236 options.with_hg = hgbin
237
237
238 options.anycoverage = options.cover or options.annotate or options.htmlcov
238 options.anycoverage = options.cover or options.annotate or options.htmlcov
239 if options.anycoverage:
239 if options.anycoverage:
240 try:
240 try:
241 import coverage
241 import coverage
242 covver = version.StrictVersion(coverage.__version__).version
242 covver = version.StrictVersion(coverage.__version__).version
243 if covver < (3, 3):
243 if covver < (3, 3):
244 parser.error('coverage options require coverage 3.3 or later')
244 parser.error('coverage options require coverage 3.3 or later')
245 except ImportError:
245 except ImportError:
246 parser.error('coverage options now require the coverage package')
246 parser.error('coverage options now require the coverage package')
247
247
248 if options.anycoverage and options.local:
248 if options.anycoverage and options.local:
249 # this needs some path mangling somewhere, I guess
249 # this needs some path mangling somewhere, I guess
250 parser.error("sorry, coverage options do not work when --local "
250 parser.error("sorry, coverage options do not work when --local "
251 "is specified")
251 "is specified")
252
252
253 global verbose
253 global verbose
254 if options.verbose:
254 if options.verbose:
255 verbose = ''
255 verbose = ''
256
256
257 if options.tmpdir:
257 if options.tmpdir:
258 options.tmpdir = os.path.expanduser(options.tmpdir)
258 options.tmpdir = os.path.expanduser(options.tmpdir)
259
259
260 if options.jobs < 1:
260 if options.jobs < 1:
261 parser.error('--jobs must be positive')
261 parser.error('--jobs must be positive')
262 if options.interactive and options.debug:
262 if options.interactive and options.debug:
263 parser.error("-i/--interactive and -d/--debug are incompatible")
263 parser.error("-i/--interactive and -d/--debug are incompatible")
264 if options.debug:
264 if options.debug:
265 if options.timeout != defaults['timeout']:
265 if options.timeout != defaults['timeout']:
266 sys.stderr.write(
266 sys.stderr.write(
267 'warning: --timeout option ignored with --debug\n')
267 'warning: --timeout option ignored with --debug\n')
268 options.timeout = 0
268 options.timeout = 0
269 if options.py3k_warnings:
269 if options.py3k_warnings:
270 if sys.version_info[:2] < (2, 6) or sys.version_info[:2] >= (3, 0):
270 if sys.version_info[:2] < (2, 6) or sys.version_info[:2] >= (3, 0):
271 parser.error('--py3k-warnings can only be used on Python 2.6+')
271 parser.error('--py3k-warnings can only be used on Python 2.6+')
272 if options.blacklist:
272 if options.blacklist:
273 options.blacklist = parselistfiles(options.blacklist, 'blacklist')
273 options.blacklist = parselistfiles(options.blacklist, 'blacklist')
274 if options.whitelist:
274 if options.whitelist:
275 options.whitelisted = parselistfiles(options.whitelist, 'whitelist')
275 options.whitelisted = parselistfiles(options.whitelist, 'whitelist')
276 else:
276 else:
277 options.whitelisted = {}
277 options.whitelisted = {}
278
278
279 return (options, args)
279 return (options, args)
280
280
281 def rename(src, dst):
281 def rename(src, dst):
282 """Like os.rename(), trade atomicity and opened files friendliness
282 """Like os.rename(), trade atomicity and opened files friendliness
283 for existing destination support.
283 for existing destination support.
284 """
284 """
285 shutil.copy(src, dst)
285 shutil.copy(src, dst)
286 os.remove(src)
286 os.remove(src)
287
287
288 def parsehghaveoutput(lines):
288 def parsehghaveoutput(lines):
289 '''Parse hghave log lines.
289 '''Parse hghave log lines.
290 Return tuple of lists (missing, failed):
290 Return tuple of lists (missing, failed):
291 * the missing/unknown features
291 * the missing/unknown features
292 * the features for which existence check failed'''
292 * the features for which existence check failed'''
293 missing = []
293 missing = []
294 failed = []
294 failed = []
295 for line in lines:
295 for line in lines:
296 if line.startswith(SKIPPED_PREFIX):
296 if line.startswith(SKIPPED_PREFIX):
297 line = line.splitlines()[0]
297 line = line.splitlines()[0]
298 missing.append(line[len(SKIPPED_PREFIX):])
298 missing.append(line[len(SKIPPED_PREFIX):])
299 elif line.startswith(FAILED_PREFIX):
299 elif line.startswith(FAILED_PREFIX):
300 line = line.splitlines()[0]
300 line = line.splitlines()[0]
301 failed.append(line[len(FAILED_PREFIX):])
301 failed.append(line[len(FAILED_PREFIX):])
302
302
303 return missing, failed
303 return missing, failed
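`parsehghaveoutput` above classifies hghave log lines purely by their prefix constants. A standalone version of the same logic, with the two prefixes inlined:

```python
SKIPPED_PREFIX = 'skipped: '
FAILED_PREFIX = 'hghave check failed: '

def parsehghave(lines):
    """Split hghave output into (missing, failed) feature lists by
    line prefix, as parsehghaveoutput() does above (sketch)."""
    missing, failed = [], []
    for line in lines:
        if line.startswith(SKIPPED_PREFIX):
            missing.append(line.splitlines()[0][len(SKIPPED_PREFIX):])
        elif line.startswith(FAILED_PREFIX):
            failed.append(line.splitlines()[0][len(FAILED_PREFIX):])
    return missing, failed

out = ['skipped: missing feature: git', 'hghave check failed: docutils']
print(parsehghave(out))  # (['missing feature: git'], ['docutils'])
```

`splitlines()[0]` guards against a trailing newline (or embedded newlines) in a captured log line without disturbing the payload.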
304
304
305 def showdiff(expected, output, ref, err):
305 def showdiff(expected, output, ref, err):
306 print
306 print
307 servefail = False
307 servefail = False
308 for line in difflib.unified_diff(expected, output, ref, err):
308 for line in difflib.unified_diff(expected, output, ref, err):
309 sys.stdout.write(line)
309 sys.stdout.write(line)
310 if not servefail and line.startswith(
310 if not servefail and line.startswith(
311 '+ abort: child process failed to start'):
311 '+ abort: child process failed to start'):
312 servefail = True
312 servefail = True
313 return {'servefail': servefail}
313 return {'servefail': servefail}
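`showdiff` above leans on `difflib.unified_diff`, which yields ready-to-print diff lines (newlines included) given the expected and actual output plus the two labels. A minimal standalone use:

```python
import difflib

# expected vs. actual test output, keepends-style (trailing '\n' kept),
# which is the form unified_diff wants for verbatim printing
expected = ['a\n', 'b\n']
output = ['a\n', 'c\n']

diff = list(difflib.unified_diff(expected, output, 'test.out', 'test.err'))
print(''.join(diff))
```

The generator emits the `---`/`+++` header pair first, then hunks; `showdiff` scans those lines as they stream by to flag the "child process failed to start" case.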
314
314
315
315
316 verbose = False
316 verbose = False
317 def vlog(*msg):
317 def vlog(*msg):
318 if verbose is not False:
318 if verbose is not False:
319 iolock.acquire()
319 iolock.acquire()
320 if verbose:
320 if verbose:
321 print verbose,
321 print verbose,
322 for m in msg:
322 for m in msg:
323 print m,
323 print m,
324 print
324 print
325 sys.stdout.flush()
325 sys.stdout.flush()
326 iolock.release()
326 iolock.release()
327
327
328 def log(*msg):
328 def log(*msg):
329 iolock.acquire()
329 iolock.acquire()
330 if verbose:
330 if verbose:
331 print verbose,
331 print verbose,
332 for m in msg:
332 for m in msg:
333 print m,
333 print m,
334 print
334 print
335 sys.stdout.flush()
335 sys.stdout.flush()
336 iolock.release()
336 iolock.release()
337
337
338 def findprogram(program):
338 def findprogram(program):
339 """Search PATH for a executable program"""
339 """Search PATH for a executable program"""
340 for p in os.environ.get('PATH', os.defpath).split(os.pathsep):
340 for p in os.environ.get('PATH', os.defpath).split(os.pathsep):
341 name = os.path.join(p, program)
341 name = os.path.join(p, program)
342 if os.name == 'nt' or os.access(name, os.X_OK):
342 if os.name == 'nt' or os.access(name, os.X_OK):
343 return name
343 return name
344 return None
344 return None
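`findprogram` above is a hand-rolled PATH search (on modern Python, `shutil.which()` covers the same ground). A self-contained version, with the search path injectable for testing:

```python
import os

def findprogram(program, path=None):
    """Search a PATH-style string for an executable, mirroring
    findprogram() above.  On Windows the X_OK check is skipped, as in
    the original, since it is unreliable there."""
    for p in (path or os.environ.get('PATH', os.defpath)).split(os.pathsep):
        name = os.path.join(p, program)
        if os.name == 'nt' or os.access(name, os.X_OK):
            return name
    return None

print(findprogram('sh') is not None)  # True on typical Unix systems
```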
345
345
346 def createhgrc(path, options):
346 def createhgrc(path, options):
347 # create a fresh hgrc
347 # create a fresh hgrc
348 hgrc = open(path, 'w')
348 hgrc = open(path, 'w')
349 hgrc.write('[ui]\n')
349 hgrc.write('[ui]\n')
350 hgrc.write('slash = True\n')
350 hgrc.write('slash = True\n')
351 hgrc.write('interactive = False\n')
351 hgrc.write('interactive = False\n')
352 hgrc.write('[defaults]\n')
352 hgrc.write('[defaults]\n')
353 hgrc.write('backout = -d "0 0"\n')
353 hgrc.write('backout = -d "0 0"\n')
354 hgrc.write('commit = -d "0 0"\n')
354 hgrc.write('commit = -d "0 0"\n')
355 hgrc.write('shelve = --date "0 0"\n')
355 hgrc.write('shelve = --date "0 0"\n')
356 hgrc.write('tag = -d "0 0"\n')
356 hgrc.write('tag = -d "0 0"\n')
357 if options.extra_config_opt:
357 if options.extra_config_opt:
358 for opt in options.extra_config_opt:
358 for opt in options.extra_config_opt:
359 section, key = opt.split('.', 1)
359 section, key = opt.split('.', 1)
360 assert '=' in key, ('extra config opt %s must '
360 assert '=' in key, ('extra config opt %s must '
361 'have an = for assignment' % opt)
361 'have an = for assignment' % opt)
362 hgrc.write('[%s]\n%s\n' % (section, key))
362 hgrc.write('[%s]\n%s\n' % (section, key))
363 hgrc.close()
363 hgrc.close()
364
364
def createenv(options, testtmp, threadtmp, port):
    env = os.environ.copy()
    env['TESTTMP'] = testtmp
    env['HOME'] = testtmp
    env["HGPORT"] = str(port)
    env["HGPORT1"] = str(port + 1)
    env["HGPORT2"] = str(port + 2)
    env["HGRCPATH"] = os.path.join(threadtmp, '.hgrc')
    env["DAEMON_PIDS"] = os.path.join(threadtmp, 'daemon.pids')
    env["HGEDITOR"] = sys.executable + ' -c "import sys; sys.exit(0)"'
    env["HGMERGE"] = "internal:merge"
    env["HGUSER"] = "test"
    env["HGENCODING"] = "ascii"
    env["HGENCODINGMODE"] = "strict"

    # Reset some environment variables to well-known values so that
    # the tests produce repeatable output.
    env['LANG'] = env['LC_ALL'] = env['LANGUAGE'] = 'C'
    env['TZ'] = 'GMT'
    env["EMAIL"] = "Foo Bar <foo.bar@example.com>"
    env['COLUMNS'] = '80'
    env['TERM'] = 'xterm'

    for k in ('HG HGPROF CDPATH GREP_OPTIONS http_proxy no_proxy ' +
              'NO_PROXY').split():
        if k in env:
            del env[k]

    # unset env related to hooks
    for k in env.keys():
        if k.startswith('HG_'):
            del env[k]

    return env
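
The proxy/hook scrubbing performed in createenv can be exercised on a plain dict in isolation; this is a sketch, and the helper name `scrubenv` is ours, not part of run-tests.py:

```python
def scrubenv(env):
    # drop variables that would leak host configuration into the tests,
    # mirroring the two cleanup loops in createenv above
    env = dict(env)
    for k in ('HG HGPROF CDPATH GREP_OPTIONS http_proxy no_proxy '
              'NO_PROXY').split():
        env.pop(k, None)
    for k in list(env):
        if k.startswith('HG_'):  # variables exported by hooks
            del env[k]
    return env

clean = scrubenv({'PATH': '/bin', 'HG_NODE': 'abc', 'http_proxy': 'x'})
```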

def checktools():
    # Before we go any further, check for pre-requisite tools
    # stuff from coreutils (cat, rm, etc) is not tested
    for p in requiredtools:
        if os.name == 'nt' and not p.endswith('.exe'):
            p += '.exe'
        found = findprogram(p)
        if found:
            vlog("# Found prerequisite", p, "at", found)
        else:
            print "WARNING: Did not find prerequisite tool: " + p

def terminate(proc):
    """Terminate subprocess (with fallback for Python versions < 2.6)"""
    vlog('# Terminating process %d' % proc.pid)
    try:
        getattr(proc, 'terminate', lambda: os.kill(proc.pid, signal.SIGTERM))()
    except OSError:
        pass

def killdaemons(pidfile):
    return killmod.killdaemons(pidfile, tryhard=False, remove=True,
                               logfn=vlog)

def cleanup(options):
    if not options.keep_tmpdir:
        vlog("# Cleaning up HGTMP", HGTMP)
        shutil.rmtree(HGTMP, True)
        for f in createdfiles:
            try:
                os.remove(f)
            except OSError:
                pass

def usecorrectpython():
    # Some tests run the Python interpreter. They must use the same
    # interpreter we use, or bad things will happen.
    pyexename = sys.platform == 'win32' and 'python.exe' or 'python'
    if getattr(os, 'symlink', None):
        vlog("# Making python executable in test path a symlink to '%s'" %
             sys.executable)
        mypython = os.path.join(TMPBINDIR, pyexename)
        try:
            if os.readlink(mypython) == sys.executable:
                return
            os.unlink(mypython)
        except OSError, err:
            if err.errno != errno.ENOENT:
                raise
        if findprogram(pyexename) != sys.executable:
            try:
                os.symlink(sys.executable, mypython)
                createdfiles.append(mypython)
            except OSError, err:
                # child processes may race, which is harmless
                if err.errno != errno.EEXIST:
                    raise
    else:
        exedir, exename = os.path.split(sys.executable)
        vlog("# Modifying search path to find %s as %s in '%s'" %
             (exename, pyexename, exedir))
        path = os.environ['PATH'].split(os.pathsep)
        while exedir in path:
            path.remove(exedir)
        os.environ['PATH'] = os.pathsep.join([exedir] + path)
        if not findprogram(pyexename):
            print "WARNING: Cannot find %s in search path" % pyexename

def installhg(options):
    vlog("# Performing temporary installation of HG")
    installerrs = os.path.join("tests", "install.err")
    compiler = ''
    if options.compiler:
        compiler = '--compiler ' + options.compiler
    pure = options.pure and "--pure" or ""
    py3 = ''
    if sys.version_info[0] == 3:
        py3 = '--c2to3'

    # Run installer in hg root
    script = os.path.realpath(sys.argv[0])
    hgroot = os.path.dirname(os.path.dirname(script))
    os.chdir(hgroot)
    nohome = '--home=""'
    if os.name == 'nt':
        # The --home="" trick works only on OSes where os.sep == '/'
        # because of a distutils convert_path() fast path. Avoid it at
        # least on Windows for now, and deal with .pydistutils.cfg bugs
        # when they happen.
        nohome = ''
    cmd = ('%(exe)s setup.py %(py3)s %(pure)s clean --all'
           ' build %(compiler)s --build-base="%(base)s"'
           ' install --force --prefix="%(prefix)s" --install-lib="%(libdir)s"'
           ' --install-scripts="%(bindir)s" %(nohome)s >%(logfile)s 2>&1'
           % {'exe': sys.executable, 'py3': py3, 'pure': pure,
              'compiler': compiler, 'base': os.path.join(HGTMP, "build"),
              'prefix': INST, 'libdir': PYTHONDIR, 'bindir': BINDIR,
              'nohome': nohome, 'logfile': installerrs})
    vlog("# Running", cmd)
    if os.system(cmd) == 0:
        if not options.verbose:
            os.remove(installerrs)
    else:
        f = open(installerrs)
        for line in f:
            print line,
        f.close()
        sys.exit(1)
    os.chdir(TESTDIR)

    usecorrectpython()

    if options.py3k_warnings and not options.anycoverage:
        vlog("# Updating hg command to enable Py3k Warnings switch")
        f = open(os.path.join(BINDIR, 'hg'), 'r')
        lines = [line.rstrip() for line in f]
        lines[0] += ' -3'
        f.close()
        f = open(os.path.join(BINDIR, 'hg'), 'w')
        for line in lines:
            f.write(line + '\n')
        f.close()

    hgbat = os.path.join(BINDIR, 'hg.bat')
    if os.path.isfile(hgbat):
        # hg.bat expects to be put in bin/scripts while the run-tests.py
        # installation layout puts it in bin/ directly. Fix it.
        f = open(hgbat, 'rb')
        data = f.read()
        f.close()
        if '"%~dp0..\python" "%~dp0hg" %*' in data:
            data = data.replace('"%~dp0..\python" "%~dp0hg" %*',
                                '"%~dp0python" "%~dp0hg" %*')
            f = open(hgbat, 'wb')
            f.write(data)
            f.close()
        else:
            print 'WARNING: cannot fix hg.bat reference to python.exe'

    if options.anycoverage:
        custom = os.path.join(TESTDIR, 'sitecustomize.py')
        target = os.path.join(PYTHONDIR, 'sitecustomize.py')
        vlog('# Installing coverage trigger to %s' % target)
        shutil.copyfile(custom, target)
        rc = os.path.join(TESTDIR, '.coveragerc')
        vlog('# Installing coverage rc to %s' % rc)
        os.environ['COVERAGE_PROCESS_START'] = rc
        fn = os.path.join(INST, '..', '.coverage')
        os.environ['COVERAGE_FILE'] = fn

def outputtimes(options):
    vlog('# Producing time report')
    times.sort(key=lambda t: (t[1], t[0]), reverse=True)
    cols = '%7.3f %s'
    print '\n%-7s %s' % ('Time', 'Test')
    for test, timetaken in times:
        print cols % (timetaken, test)

def outputcoverage(options):

    vlog('# Producing coverage report')
    os.chdir(PYTHONDIR)

    def covrun(*args):
        cmd = 'coverage %s' % ' '.join(args)
        vlog('# Running: %s' % cmd)
        os.system(cmd)

    covrun('-c')
    omit = ','.join(os.path.join(x, '*') for x in [BINDIR, TESTDIR])
    covrun('-i', '-r', '"--omit=%s"' % omit) # report
    if options.htmlcov:
        htmldir = os.path.join(TESTDIR, 'htmlcov')
        covrun('-i', '-b', '"--directory=%s"' % htmldir, '"--omit=%s"' % omit)
    if options.annotate:
        adir = os.path.join(TESTDIR, 'annotated')
        if not os.path.isdir(adir):
            os.mkdir(adir)
        covrun('-i', '-a', '"--directory=%s"' % adir, '"--omit=%s"' % omit)

def pytest(test, wd, options, replacements, env):
    py3kswitch = options.py3k_warnings and ' -3' or ''
    cmd = '%s%s "%s"' % (PYTHON, py3kswitch, test)
    vlog("# Running", cmd)
    if os.name == 'nt':
        replacements.append((r'\r\n', '\n'))
    return run(cmd, wd, options, replacements, env)

needescape = re.compile(r'[\x00-\x08\x0b-\x1f\x7f-\xff]').search
escapesub = re.compile(r'[\x00-\x08\x0b-\x1f\\\x7f-\xff]').sub
escapemap = dict((chr(i), r'\x%02x' % i) for i in range(256))
escapemap.update({'\\': '\\\\', '\r': r'\r'})
def escapef(m):
    return escapemap[m.group(0)]
def stringescape(s):
    return escapesub(escapef, s)
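
A short standalone check of the escaping above (the same regex and map, restated so it runs on its own): unprintable bytes in test output are rewritten as literal \xNN sequences, which is what produces the " (esc)" lines.

```python
import re

escapesub = re.compile(r'[\x00-\x08\x0b-\x1f\\\x7f-\xff]').sub
escapemap = dict((chr(i), r'\x%02x' % i) for i in range(256))
escapemap.update({'\\': '\\\\', '\r': r'\r'})

def stringescape(s):
    # replace each unprintable byte (and backslash) via the map
    return escapesub(lambda m: escapemap[m.group(0)], s)

escaped = stringescape('\x1b[31mred\x1b[0m')  # ANSI color codes
```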

def rematch(el, l):
    try:
        # use \Z to ensure that the regex matches to the end of the string
        if os.name == 'nt':
            return re.match(el + r'\r?\n\Z', l)
        return re.match(el + r'\n\Z', l)
    except re.error:
        # el is an invalid regex
        return False

def globmatch(el, l):
    # The only supported special characters are * and ? plus / which also
    # matches \ on windows. Escaping of these characters is supported.
    if el + '\n' == l:
        if os.altsep:
            # matching on "/" is not needed for this line
            return '-glob'
        return True
    i, n = 0, len(el)
    res = ''
    while i < n:
        c = el[i]
        i += 1
        if c == '\\' and el[i] in '*?\\/':
            res += el[i - 1:i + 1]
            i += 1
        elif c == '*':
            res += '.*'
        elif c == '?':
            res += '.'
        elif c == '/' and os.altsep:
            res += '[/\\\\]'
        else:
            res += re.escape(c)
    return rematch(res, l)
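
The glob-to-regex translation loop can be sketched as a standalone function (the name `globtore` is hypothetical, introduced here for illustration) and checked against a typical "(glob)" expectation line:

```python
import os
import re

def globtore(el):
    # translate the limited glob syntax: * -> .*, ? -> ., escaped
    # specials pass through, / also matches \ when os.altsep is set
    i, n = 0, len(el)
    res = ''
    while i < n:
        c = el[i]
        i += 1
        if c == '\\' and el[i] in '*?\\/':
            res += el[i - 1:i + 1]
            i += 1
        elif c == '*':
            res += '.*'
        elif c == '?':
            res += '.'
        elif c == '/' and os.altsep:
            res += '[/\\\\]'
        else:
            res += re.escape(c)
    return res

pat = globtore('adding file?.txt *')
ok = re.match(pat + r'\n\Z', 'adding fileA.txt 12 changes\n')
```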

def linematch(el, l):
    if el == l: # perfect match (fast)
        return True
    if el:
        if el.endswith(" (esc)\n"):
            el = el[:-7].decode('string-escape') + '\n'
        if el == l or os.name == 'nt' and el[:-1] + '\r\n' == l:
            return True
        if el.endswith(" (re)\n"):
            return rematch(el[:-6], l)
        if el.endswith(" (glob)\n"):
            return globmatch(el[:-8], l)
        if os.altsep and l.replace('\\', '/') == el:
            return '+glob'
    return False

def tsttest(test, wd, options, replacements, env):
    # We generate a shell script which outputs unique markers to line
    # up script results with our source. These markers include input
    # line number and the last return code.
    salt = "SALT" + str(time.time())
    def addsalt(line, inpython):
        if inpython:
            script.append('%s %d 0\n' % (salt, line))
        else:
            script.append('echo %s %s $?\n' % (salt, line))

    # After we run the shell script, we re-unify the script output
    # with non-active parts of the source, with synchronization by our
    # SALT line number markers. The after table contains the
    # non-active components, ordered by line number.
    after = {}
    pos = prepos = -1

    # Expected shell script output
    expected = {}

    # We keep track of whether or not we're in a Python block so we
    # can generate the surrounding doctest magic.
    inpython = False

    # True or False when in a true or false conditional section
    skipping = None

    def hghave(reqs):
        # TODO: do something smarter when all other uses of hghave are gone
        tdir = TESTDIR.replace('\\', '/')
        proc = Popen4('%s -c "%s/hghave %s"' %
                      (options.shell, tdir, ' '.join(reqs)), wd, 0)
        stdout, stderr = proc.communicate()
        ret = proc.wait()
        if wifexited(ret):
            ret = os.WEXITSTATUS(ret)
        if ret == 2:
            print stdout
            sys.exit(1)
        return ret == 0

    f = open(test)
    t = f.readlines()
    f.close()

    script = []
    if options.debug:
        script.append('set -x\n')
    if os.getenv('MSYSTEM'):
        script.append('alias pwd="pwd -W"\n')
    n = 0
    for n, l in enumerate(t):
        if not l.endswith('\n'):
            l += '\n'
        if l.startswith('#if'):
            lsplit = l.split()
            if len(lsplit) < 2 or lsplit[0] != '#if':
                after.setdefault(pos, []).append('  !!! invalid #if\n')
            if skipping is not None:
                after.setdefault(pos, []).append('  !!! nested #if\n')
            skipping = not hghave(lsplit[1:])
            after.setdefault(pos, []).append(l)
        elif l.startswith('#else'):
            if skipping is None:
                after.setdefault(pos, []).append('  !!! missing #if\n')
            skipping = not skipping
            after.setdefault(pos, []).append(l)
        elif l.startswith('#endif'):
            if skipping is None:
                after.setdefault(pos, []).append('  !!! missing #if\n')
            skipping = None
            after.setdefault(pos, []).append(l)
        elif skipping:
            after.setdefault(pos, []).append(l)
        elif l.startswith('  >>> '): # python inlines
            after.setdefault(pos, []).append(l)
            prepos = pos
            pos = n
            if not inpython:
                # we've just entered a Python block, add the header
                inpython = True
                addsalt(prepos, False) # make sure we report the exit code
                script.append('%s -m heredoctest <<EOF\n' % PYTHON)
            addsalt(n, True)
            script.append(l[2:])
        elif l.startswith('  ... '): # python inlines
            after.setdefault(prepos, []).append(l)
            script.append(l[2:])
        elif l.startswith('  $ '): # commands
            if inpython:
                script.append("EOF\n")
                inpython = False
            after.setdefault(pos, []).append(l)
            prepos = pos
            pos = n
            addsalt(n, False)
            cmd = l[4:].split()
            if len(cmd) == 2 and cmd[0] == 'cd':
                l = '  $ cd %s || exit 1\n' % cmd[1]
            script.append(l[4:])
        elif l.startswith('  > '): # continuations
            after.setdefault(prepos, []).append(l)
            script.append(l[4:])
        elif l.startswith('  '): # results
            # queue up a list of expected results
            expected.setdefault(pos, []).append(l[2:])
        else:
            if inpython:
                script.append("EOF\n")
                inpython = False
            # non-command/result - queue up for merged output
            after.setdefault(pos, []).append(l)

    if inpython:
        script.append("EOF\n")
    if skipping is not None:
        after.setdefault(pos, []).append('  !!! missing #endif\n')
    addsalt(n + 1, False)

    # Write out the script and execute it
    name = wd + '.sh'
    f = open(name, 'w')
    for l in script:
        f.write(l)
    f.close()

    cmd = '%s "%s"' % (options.shell, name)
    vlog("# Running", cmd)
    exitcode, output = run(cmd, wd, options, replacements, env)
    # do not merge output if skipped, return hghave message instead
    # similarly, with --debug, output is None
    if exitcode == SKIPPED_STATUS or output is None:
        return exitcode, output

    # Merge the script output back into a unified test

    warnonly = 1 # 1: not yet, 2: yes, 3: for sure not
    if exitcode != 0: # failure has been reported
        warnonly = 3 # set to "for sure not"
    pos = -1
    postout = []
    for l in output:
        lout, lcmd = l, None
        if salt in l:
            lout, lcmd = l.split(salt, 1)

        if lout:
            if not lout.endswith('\n'):
                lout += ' (no-eol)\n'

            # find the expected output at the current position
            el = None
            if pos in expected and expected[pos]:
                el = expected[pos].pop(0)

            r = linematch(el, lout)
            if isinstance(r, str):
                if r == '+glob':
                    lout = el[:-1] + ' (glob)\n'
                    r = '' # warn only this line
                elif r == '-glob':
                    lout = ''.join(el.rsplit(' (glob)', 1))
                    r = '' # warn only this line
                else:
                    log('\ninfo, unknown linematch result: %r\n' % r)
                    r = False
            if r:
                postout.append("  " + el)
            else:
                if needescape(lout):
                    lout = stringescape(lout.rstrip('\n')) + " (esc)\n"
                postout.append("  " + lout) # let diff deal with it
                if r != '': # if line failed
                    warnonly = 3 # set to "for sure not"
                elif warnonly == 1: # is "not yet" (and line is warn only)
                    warnonly = 2 # set to "yes" do warn

        if lcmd:
            # add on last return code
            ret = int(lcmd.split()[1])
            if ret != 0:
                postout.append("  [%s]\n" % ret)
            if pos in after:
                # merge in non-active test bits
                postout += after.pop(pos)
            pos = int(lcmd.split()[0])

    if pos in after:
        postout += after.pop(pos)

    if warnonly == 2:
        exitcode = False # set exitcode to warned
    return exitcode, postout

wifexited = getattr(os, "WIFEXITED", lambda x: False)
def run(cmd, wd, options, replacements, env):
    """Run command in a sub-process, capturing the output (stdout and stderr).
    Return a tuple (exitcode, output). output is None in debug mode."""
    # TODO: Use subprocess.Popen if we're running on Python 2.4
    if options.debug:
        proc = subprocess.Popen(cmd, shell=True, cwd=wd, env=env)
        ret = proc.wait()
        return (ret, None)

    proc = Popen4(cmd, wd, options.timeout, env)
    def cleanup():
        terminate(proc)
        ret = proc.wait()
        if ret == 0:
            ret = signal.SIGTERM << 8
        killdaemons(env['DAEMON_PIDS'])
        return ret

    output = ''
    proc.tochild.close()

    try:
        output = proc.fromchild.read()
    except KeyboardInterrupt:
        vlog('# Handling keyboard interrupt')
        cleanup()
        raise

    ret = proc.wait()
    if wifexited(ret):
        ret = os.WEXITSTATUS(ret)

    if proc.timeout:
        ret = 'timeout'

    if ret:
        killdaemons(env['DAEMON_PIDS'])

    if abort:
        raise KeyboardInterrupt()

    for s, r in replacements:
        output = re.sub(s, r, output)
    return ret, output.splitlines(True)
889
889
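The tail of `run()` above normalizes volatile strings (port numbers, temp paths) in captured output via `re.sub` before the runner diffs it against reference output. A minimal sketch of that step; `apply_replacements` is a hypothetical helper name, not part of run-tests.py:

```python
import re

def apply_replacements(output, replacements):
    """Rewrite volatile substrings in captured test output so it can be
    compared against stable reference output (mirrors the loop in run())."""
    for pattern, repl in replacements:
        # Each entry is a (regex, replacement) pair, e.g. a live port
        # number mapped to the $HGPORT placeholder.
        output = re.sub(pattern, repl, output)
    return output.splitlines(True)
```

For example, output containing a real port is rewritten to the placeholder form used by the `.t`/`.out` reference files.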
890 def runone(options, test, count):
890 def runone(options, test, count):
891 '''returns a result element: (code, test, msg)'''
891 '''returns a result element: (code, test, msg)'''
892
892
893 def skip(msg):
893 def skip(msg):
894 if options.verbose:
894 if options.verbose:
895 log("\nSkipping %s: %s" % (testpath, msg))
895 log("\nSkipping %s: %s" % (testpath, msg))
896 return 's', test, msg
896 return 's', test, msg
897
897
898 def fail(msg, ret):
898 def fail(msg, ret):
899 warned = ret is False
899 warned = ret is False
900 if not options.nodiff:
900 if not options.nodiff:
901 log("\n%s: %s %s" % (warned and 'Warning' or 'ERROR', test, msg))
901 log("\n%s: %s %s" % (warned and 'Warning' or 'ERROR', test, msg))
902 if (not ret and options.interactive
902 if (not ret and options.interactive
903 and os.path.exists(testpath + ".err")):
903 and os.path.exists(testpath + ".err")):
904 iolock.acquire()
904 iolock.acquire()
905 print "Accept this change? [n] ",
905 print "Accept this change? [n] ",
906 answer = sys.stdin.readline().strip()
906 answer = sys.stdin.readline().strip()
907 iolock.release()
907 iolock.release()
908 if answer.lower() in "y yes".split():
908 if answer.lower() in "y yes".split():
909 if test.endswith(".t"):
909 if test.endswith(".t"):
910 rename(testpath + ".err", testpath)
910 rename(testpath + ".err", testpath)
911 else:
911 else:
912 rename(testpath + ".err", testpath + ".out")
912 rename(testpath + ".err", testpath + ".out")
913 return '.', test, ''
913 return '.', test, ''
914 return warned and '~' or '!', test, msg
914 return warned and '~' or '!', test, msg
915
915
916 def success():
916 def success():
917 return '.', test, ''
917 return '.', test, ''
918
918
919 def ignore(msg):
919 def ignore(msg):
920 return 'i', test, msg
920 return 'i', test, msg
921
921
922 def describe(ret):
922 def describe(ret):
923 if ret < 0:
923 if ret < 0:
924 return 'killed by signal %d' % -ret
924 return 'killed by signal %d' % -ret
925 return 'returned error code %d' % ret
925 return 'returned error code %d' % ret
926
926
927 testpath = os.path.join(TESTDIR, test)
927 testpath = os.path.join(TESTDIR, test)
928 err = os.path.join(TESTDIR, test + ".err")
928 err = os.path.join(TESTDIR, test + ".err")
929 lctest = test.lower()
929 lctest = test.lower()
930
930
931 if not os.path.exists(testpath):
931 if not os.path.exists(testpath):
932 return skip("doesn't exist")
932 return skip("doesn't exist")
933
933
934 if not (options.whitelisted and test in options.whitelisted):
934 if not (options.whitelisted and test in options.whitelisted):
935 if options.blacklist and test in options.blacklist:
935 if options.blacklist and test in options.blacklist:
936 return skip("blacklisted")
936 return skip("blacklisted")
937
937
938 if options.retest and not os.path.exists(test + ".err"):
938 if options.retest and not os.path.exists(test + ".err"):
939 return ignore("not retesting")
939 return ignore("not retesting")
940
940
941 if options.keywords:
941 if options.keywords:
942 fp = open(test)
942 fp = open(test)
943 t = fp.read().lower() + test.lower()
943 t = fp.read().lower() + test.lower()
944 fp.close()
944 fp.close()
945 for k in options.keywords.lower().split():
945 for k in options.keywords.lower().split():
946 if k in t:
946 if k in t:
947 break
947 break
948 else:
948 else:
949 return ignore("doesn't match keyword")
949 return ignore("doesn't match keyword")
950
950
951 if not os.path.basename(lctest).startswith("test-"):
951 if not os.path.basename(lctest).startswith("test-"):
952 return skip("not a test file")
952 return skip("not a test file")
953 for ext, func, out in testtypes:
953 for ext, func, out in testtypes:
954 if lctest.endswith(ext):
954 if lctest.endswith(ext):
955 runner = func
955 runner = func
956 ref = os.path.join(TESTDIR, test + out)
956 ref = os.path.join(TESTDIR, test + out)
957 break
957 break
958 else:
958 else:
959 return skip("unknown test type")
959 return skip("unknown test type")
960
960
961 vlog("# Test", test)
961 vlog("# Test", test)
962
962
963 if os.path.exists(err):
963 if os.path.exists(err):
964 os.remove(err) # Remove any previous output files
964 os.remove(err) # Remove any previous output files
965
965
966 # Make a tmp subdirectory to work in
966 # Make a tmp subdirectory to work in
967 threadtmp = os.path.join(HGTMP, "child%d" % count)
967 threadtmp = os.path.join(HGTMP, "child%d" % count)
968 testtmp = os.path.join(threadtmp, os.path.basename(test))
968 testtmp = os.path.join(threadtmp, os.path.basename(test))
969 os.mkdir(threadtmp)
969 os.mkdir(threadtmp)
970 os.mkdir(testtmp)
970 os.mkdir(testtmp)
971
971
972 port = options.port + count * 3
972 port = options.port + count * 3
973 replacements = [
973 replacements = [
974 (r':%s\b' % port, ':$HGPORT'),
974 (r':%s\b' % port, ':$HGPORT'),
975 (r':%s\b' % (port + 1), ':$HGPORT1'),
975 (r':%s\b' % (port + 1), ':$HGPORT1'),
976 (r':%s\b' % (port + 2), ':$HGPORT2'),
976 (r':%s\b' % (port + 2), ':$HGPORT2'),
977 ]
977 ]
978 if os.name == 'nt':
978 if os.name == 'nt':
979 replacements.append(
979 replacements.append(
980 (''.join(c.isalpha() and '[%s%s]' % (c.lower(), c.upper()) or
980 (''.join(c.isalpha() and '[%s%s]' % (c.lower(), c.upper()) or
981 c in '/\\' and r'[/\\]' or
981 c in '/\\' and r'[/\\]' or
982 c.isdigit() and c or
982 c.isdigit() and c or
983 '\\' + c
983 '\\' + c
984 for c in testtmp), '$TESTTMP'))
984 for c in testtmp), '$TESTTMP'))
985 else:
985 else:
986 replacements.append((re.escape(testtmp), '$TESTTMP'))
986 replacements.append((re.escape(testtmp), '$TESTTMP'))
987
987
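The `os.name == 'nt'` branch above builds, character by character, a regex for `$TESTTMP` that is case-insensitive for letters and accepts either path separator (Windows paths can surface in test output in mixed case and with `/` or `\`). The same construction as a standalone sketch; `pathregex` is a hypothetical name for illustration:

```python
import re

def pathregex(path):
    """Build a regex matching `path` case-insensitively for letters and
    with either separator; digits pass through, everything else is escaped."""
    parts = []
    for c in path:
        if c.isalpha():
            parts.append('[%s%s]' % (c.lower(), c.upper()))
        elif c in '/\\':
            parts.append(r'[/\\]')
        elif c.isdigit():
            parts.append(c)
        else:
            parts.append('\\' + c)
    return ''.join(parts)
```

On POSIX systems the `else` branch applies instead: a plain `re.escape(testtmp)` suffices because paths there are case-sensitive with a single separator.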
988 env = createenv(options, testtmp, threadtmp, port)
988 env = createenv(options, testtmp, threadtmp, port)
989 createhgrc(env['HGRCPATH'], options)
989 createhgrc(env['HGRCPATH'], options)
990
990
991 starttime = time.time()
991 starttime = time.time()
992 try:
992 try:
993 ret, out = runner(testpath, testtmp, options, replacements, env)
993 ret, out = runner(testpath, testtmp, options, replacements, env)
994 except KeyboardInterrupt:
994 except KeyboardInterrupt:
995 endtime = time.time()
995 endtime = time.time()
996 log('INTERRUPTED: %s (after %d seconds)' % (test, endtime - starttime))
996 log('INTERRUPTED: %s (after %d seconds)' % (test, endtime - starttime))
997 raise
997 raise
998 endtime = time.time()
998 endtime = time.time()
999 times.append((test, endtime - starttime))
999 times.append((test, endtime - starttime))
1000 vlog("# Ret was:", ret)
1000 vlog("# Ret was:", ret)
1001
1001
1002 killdaemons(env['DAEMON_PIDS'])
1002 killdaemons(env['DAEMON_PIDS'])
1003
1003
1004 skipped = (ret == SKIPPED_STATUS)
1004 skipped = (ret == SKIPPED_STATUS)
1005
1005
1006 # If we're not in --debug mode and a reference output file exists,
1006 # If we're not in --debug mode and a reference output file exists,
1007 # check test output against it.
1007 # check test output against it.
1008 if options.debug:
1008 if options.debug:
1009 refout = None # to match "out is None"
1009 refout = None # to match "out is None"
1010 elif os.path.exists(ref):
1010 elif os.path.exists(ref):
1011 f = open(ref, "r")
1011 f = open(ref, "r")
1012 refout = f.read().splitlines(True)
1012 refout = f.read().splitlines(True)
1013 f.close()
1013 f.close()
1014 else:
1014 else:
1015 refout = []
1015 refout = []
1016
1016
1017 if (ret != 0 or out != refout) and not skipped and not options.debug:
1017 if (ret != 0 or out != refout) and not skipped and not options.debug:
1018 # Save errors to a file for diagnosis
1018 # Save errors to a file for diagnosis
1019 f = open(err, "wb")
1019 f = open(err, "wb")
1020 for line in out:
1020 for line in out:
1021 f.write(line)
1021 f.write(line)
1022 f.close()
1022 f.close()
1023
1023
1024 if skipped:
1024 if skipped:
1025 if out is None: # debug mode: nothing to parse
1025 if out is None: # debug mode: nothing to parse
1026 missing = ['unknown']
1026 missing = ['unknown']
1027 failed = None
1027 failed = None
1028 else:
1028 else:
1029 missing, failed = parsehghaveoutput(out)
1029 missing, failed = parsehghaveoutput(out)
1030 if not missing:
1030 if not missing:
1031 missing = ['irrelevant']
1031 missing = ['irrelevant']
1032 if failed:
1032 if failed:
1033 result = fail("hghave failed checking for %s" % failed[-1], ret)
1033 result = fail("hghave failed checking for %s" % failed[-1], ret)
1034 skipped = False
1034 skipped = False
1035 else:
1035 else:
1036 result = skip(missing[-1])
1036 result = skip(missing[-1])
1037 elif ret == 'timeout':
1037 elif ret == 'timeout':
1038 result = fail("timed out", ret)
1038 result = fail("timed out", ret)
1039 elif out != refout:
1039 elif out != refout:
1040 info = {}
1040 info = {}
1041 if not options.nodiff:
1041 if not options.nodiff:
1042 iolock.acquire()
1042 iolock.acquire()
1043 if options.view:
1043 if options.view:
1044 os.system("%s %s %s" % (options.view, ref, err))
1044 os.system("%s %s %s" % (options.view, ref, err))
1045 else:
1045 else:
1046 info = showdiff(refout, out, ref, err)
1046 info = showdiff(refout, out, ref, err)
1047 iolock.release()
1047 iolock.release()
1048 msg = ""
1048 msg = ""
1049 if info.get('servefail'): msg += "serve failed and "
1049 if info.get('servefail'): msg += "serve failed and "
1050 if ret:
1050 if ret:
1051 msg += "output changed and " + describe(ret)
1051 msg += "output changed and " + describe(ret)
1052 else:
1052 else:
1053 msg += "output changed"
1053 msg += "output changed"
1054 result = fail(msg, ret)
1054 result = fail(msg, ret)
1055 elif ret:
1055 elif ret:
1056 result = fail(describe(ret), ret)
1056 result = fail(describe(ret), ret)
1057 else:
1057 else:
1058 result = success()
1058 result = success()
1059
1059
1060 if not options.verbose:
1060 if not options.verbose:
1061 iolock.acquire()
1061 iolock.acquire()
1062 sys.stdout.write(result[0])
1062 sys.stdout.write(result[0])
1063 sys.stdout.flush()
1063 sys.stdout.flush()
1064 iolock.release()
1064 iolock.release()
1065
1065
1066 if not options.keep_tmpdir:
1066 if not options.keep_tmpdir:
1067 shutil.rmtree(threadtmp, True)
1067 shutil.rmtree(threadtmp, True)
1068 return result
1068 return result
1069
1069
1070 _hgpath = None
1070 _hgpath = None
1071
1071
1072 def _gethgpath():
1072 def _gethgpath():
1073 """Return the path to the mercurial package that is actually found by
1073 """Return the path to the mercurial package that is actually found by
1074 the current Python interpreter."""
1074 the current Python interpreter."""
1075 global _hgpath
1075 global _hgpath
1076 if _hgpath is not None:
1076 if _hgpath is not None:
1077 return _hgpath
1077 return _hgpath
1078
1078
1079 cmd = '%s -c "import mercurial; print (mercurial.__path__[0])"'
1079 cmd = '%s -c "import mercurial; print (mercurial.__path__[0])"'
1080 pipe = os.popen(cmd % PYTHON)
1080 pipe = os.popen(cmd % PYTHON)
1081 try:
1081 try:
1082 _hgpath = pipe.read().strip()
1082 _hgpath = pipe.read().strip()
1083 finally:
1083 finally:
1084 pipe.close()
1084 pipe.close()
1085 return _hgpath
1085 return _hgpath
1086
1086
1087 def _checkhglib(verb):
1087 def _checkhglib(verb):
1088 """Ensure that the 'mercurial' package imported by python is
1088 """Ensure that the 'mercurial' package imported by python is
1089 the one we expect it to be. If not, print a warning to stderr."""
1089 the one we expect it to be. If not, print a warning to stderr."""
1090 expecthg = os.path.join(PYTHONDIR, 'mercurial')
1090 expecthg = os.path.join(PYTHONDIR, 'mercurial')
1091 actualhg = _gethgpath()
1091 actualhg = _gethgpath()
1092 if os.path.abspath(actualhg) != os.path.abspath(expecthg):
1092 if os.path.abspath(actualhg) != os.path.abspath(expecthg):
1093 sys.stderr.write('warning: %s with unexpected mercurial lib: %s\n'
1093 sys.stderr.write('warning: %s with unexpected mercurial lib: %s\n'
1094 ' (expected %s)\n'
1094 ' (expected %s)\n'
1095 % (verb, actualhg, expecthg))
1095 % (verb, actualhg, expecthg))
1096
1096
1097 results = {'.':[], '!':[], '~': [], 's':[], 'i':[]}
1097 results = {'.':[], '!':[], '~': [], 's':[], 'i':[]}
1098 times = []
1098 times = []
1099 iolock = threading.Lock()
1099 iolock = threading.Lock()
1100 abort = False
1100 abort = False
1101
1101
1102 def scheduletests(options, tests):
1102 def scheduletests(options, tests):
1103 jobs = options.jobs
1103 jobs = options.jobs
1104 done = queue.Queue()
1104 done = queue.Queue()
1105 running = 0
1105 running = 0
1106 count = 0
1106 count = 0
1107 global abort
1107 global abort
1108
1108
1109 def job(test, count):
1109 def job(test, count):
1110 try:
1110 try:
1111 done.put(runone(options, test, count))
1111 done.put(runone(options, test, count))
1112 except KeyboardInterrupt:
1112 except KeyboardInterrupt:
1113 pass
1113 pass
1114 except: # re-raises
1114 except: # re-raises
1115 done.put(('!', test, 'run-test raised an error, see traceback'))
1115 done.put(('!', test, 'run-test raised an error, see traceback'))
1116 raise
1116 raise
1117
1117
1118 try:
1118 try:
1119 while tests or running:
1119 while tests or running:
1120 if not done.empty() or running == jobs or not tests:
1120 if not done.empty() or running == jobs or not tests:
1121 try:
1121 try:
1122 code, test, msg = done.get(True, 1)
1122 code, test, msg = done.get(True, 1)
1123 results[code].append((test, msg))
1123 results[code].append((test, msg))
1124 if options.first and code not in '.si':
1124 if options.first and code not in '.si':
1125 break
1125 break
1126 except queue.Empty:
1126 except queue.Empty:
1127 continue
1127 continue
1128 running -= 1
1128 running -= 1
1129 if tests and not running == jobs:
1129 if tests and not running == jobs:
1130 test = tests.pop(0)
1130 test = tests.pop(0)
1131 if options.loop:
1131 if options.loop:
1132 tests.append(test)
1132 tests.append(test)
1133 t = threading.Thread(target=job, name=test, args=(test, count))
1133 t = threading.Thread(target=job, name=test, args=(test, count))
1134 t.start()
1134 t.start()
1135 running += 1
1135 running += 1
1136 count += 1
1136 count += 1
1137 except KeyboardInterrupt:
1137 except KeyboardInterrupt:
1138 abort = True
1138 abort = True
1139
1139
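`scheduletests` above interleaves a bounded pool of worker threads with result collection through a shared queue. A simplified, self-contained sketch of the same producer/consumer pattern; `schedule` is a hypothetical helper that omits `--first`, `--loop`, and abort handling:

```python
import queue
import threading

def schedule(tests, jobs, runone):
    """Run `tests` with at most `jobs` concurrent worker threads,
    collecting (code, test, msg) tuples as workers finish.
    Completion order is not guaranteed to match submission order."""
    done = queue.Queue()
    results = []
    running = 0
    pending = list(tests)

    def job(test):
        done.put(runone(test))

    while pending or running:
        # Drain a result when the pool is full or nothing is left to start.
        if not done.empty() or running == jobs or not pending:
            try:
                results.append(done.get(True, 1))
                running -= 1
            except queue.Empty:
                continue
        if pending and running < jobs:
            t = threading.Thread(target=job, args=(pending.pop(0),))
            t.start()
            running += 1
    return results
```

The one-second timeout on `done.get` mirrors the original: it lets the loop wake up periodically instead of blocking forever on a stalled worker.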
1140 def runtests(options, tests):
1140 def runtests(options, tests):
1141 try:
1141 try:
1142 if INST:
1142 if INST:
1143 installhg(options)
1143 installhg(options)
1144 _checkhglib("Testing")
1144 _checkhglib("Testing")
1145 else:
1145 else:
1146 usecorrectpython()
1146 usecorrectpython()
1147
1147
1148 if options.restart:
1148 if options.restart:
1149 orig = list(tests)
1149 orig = list(tests)
1150 while tests:
1150 while tests:
1151 if os.path.exists(tests[0] + ".err"):
1151 if os.path.exists(tests[0] + ".err"):
1152 break
1152 break
1153 tests.pop(0)
1153 tests.pop(0)
1154 if not tests:
1154 if not tests:
1155 print "running all tests"
1155 print "running all tests"
1156 tests = orig
1156 tests = orig
1157
1157
1158 scheduletests(options, tests)
1158 scheduletests(options, tests)
1159
1159
1160 failed = len(results['!'])
1160 failed = len(results['!'])
1161 warned = len(results['~'])
1161 warned = len(results['~'])
1162 tested = len(results['.']) + failed + warned
1162 tested = len(results['.']) + failed + warned
1163 skipped = len(results['s'])
1163 skipped = len(results['s'])
1164 ignored = len(results['i'])
1164 ignored = len(results['i'])
1165
1165
1166 print
1166 print
1167 if not options.noskips:
1167 if not options.noskips:
1168 for s in results['s']:
1168 for s in results['s']:
1169 print "Skipped %s: %s" % s
1169 print "Skipped %s: %s" % s
1170 for s in results['~']:
1170 for s in results['~']:
1171 print "Warned %s: %s" % s
1171 print "Warned %s: %s" % s
1172 for s in results['!']:
1172 for s in results['!']:
1173 print "Failed %s: %s" % s
1173 print "Failed %s: %s" % s
1174 _checkhglib("Tested")
1174 _checkhglib("Tested")
1175 print "# Ran %d tests, %d skipped, %d warned, %d failed." % (
1175 print "# Ran %d tests, %d skipped, %d warned, %d failed." % (
1176 tested, skipped + ignored, warned, failed)
1176 tested, skipped + ignored, warned, failed)
1177 if results['!']:
1177 if results['!']:
1178 print 'python hash seed:', os.environ['PYTHONHASHSEED']
1178 print 'python hash seed:', os.environ['PYTHONHASHSEED']
1179 if options.time:
1179 if options.time:
1180 outputtimes(options)
1180 outputtimes(options)
1181
1181
1182 if options.anycoverage:
1182 if options.anycoverage:
1183 outputcoverage(options)
1183 outputcoverage(options)
1184 except KeyboardInterrupt:
1184 except KeyboardInterrupt:
1185 failed = True
1185 failed = True
1186 print "\ninterrupted!"
1186 print "\ninterrupted!"
1187
1187
1188 if failed:
1188 if failed:
1189 return 1
1189 return 1
1190 if warned:
1190 if warned:
1191 return 80
1191 return 80
1192
1192
1193 testtypes = [('.py', pytest, '.out'),
1193 testtypes = [('.py', pytest, '.out'),
1194 ('.t', tsttest, '')]
1194 ('.t', tsttest, '')]
1195
1195
1196 def main(args, parser=None):
1196 def main(args, parser=None):
1197 parser = parser or getparser()
1197 parser = parser or getparser()
1198 (options, args) = parseargs(args, parser)
1198 (options, args) = parseargs(args, parser)
1199 os.umask(022)
1199 os.umask(022)
1200
1200
1201 checktools()
1201 checktools()
1202
1202
1203 if not args:
1203 if not args:
1204 if options.changed:
1204 if options.changed:
1205 proc = Popen4('hg st --rev "%s" -man0 .' % options.changed,
1205 proc = Popen4('hg st --rev "%s" -man0 .' % options.changed,
1206 None, 0)
1206 None, 0)
1207 stdout, stderr = proc.communicate()
1207 stdout, stderr = proc.communicate()
1208 args = stdout.strip('\0').split('\0')
1208 args = stdout.strip('\0').split('\0')
1209 else:
1209 else:
1210 args = os.listdir(".")
1210 args = os.listdir(".")
1211
1211
1212 tests = [t for t in args
1212 tests = [t for t in args
1213 if os.path.basename(t).startswith("test-")
1213 if os.path.basename(t).startswith("test-")
1214 and (t.endswith(".py") or t.endswith(".t"))]
1214 and (t.endswith(".py") or t.endswith(".t"))]
1215
1215
1216 if options.random:
1216 if options.random:
1217 random.shuffle(tests)
1217 random.shuffle(tests)
1218 else:
1218 else:
1219 # keywords for slow tests
1219 # keywords for slow tests
1220 slow = 'svn gendoc check-code-hg'.split()
1220 slow = 'svn gendoc check-code-hg'.split()
1221 def sortkey(f):
1221 def sortkey(f):
1222 # run largest tests first, as they tend to take the longest
1222 # run largest tests first, as they tend to take the longest
1223 try:
1223 try:
1224 val = -os.stat(f).st_size
1224 val = -os.stat(f).st_size
1225 except OSError, e:
1225 except OSError, e:
1226 if e.errno != errno.ENOENT:
1226 if e.errno != errno.ENOENT:
1227 raise
1227 raise
1228 return -1e9 # file does not exist, report it early
1228 return -1e9 # file does not exist, report it early
1229 for kw in slow:
1229 for kw in slow:
1230 if kw in f:
1230 if kw in f:
1231 val *= 10
1231 val *= 10
1232 return val
1232 return val
1233 tests.sort(key=sortkey)
1233 tests.sort(key=sortkey)
1234
1234
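The `sortkey` heuristic above front-loads large (and known-slow) tests: with parallel jobs, starting the longest-running tests first tends to minimize total wall time. A testable sketch of the same ordering; the injectable `getsize` parameter is an addition for illustration only:

```python
import os

SLOW_KEYWORDS = ['svn', 'gendoc', 'check-code-hg']

def sortkey(f, getsize=os.path.getsize):
    """Sort key: larger files -> more negative -> scheduled earlier.
    Known-slow tests are pushed further ahead; missing files sort first
    so their failure is reported immediately."""
    try:
        val = -getsize(f)
    except OSError:
        return -1e9
    for kw in SLOW_KEYWORDS:
        if kw in f:
            val *= 10
    return val
```

Multiplying a negative size by 10 makes a slow-keyword test behave as if it were ten times its actual size, without maintaining a separate priority list.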
1235 if 'PYTHONHASHSEED' not in os.environ:
1235 if 'PYTHONHASHSEED' not in os.environ:
1236 # use a random python hash seed all the time
1236 # use a random python hash seed all the time
1237 # we do the randomness ourselves to know what seed is used
1237 # we do the randomness ourselves to know what seed is used
1238 os.environ['PYTHONHASHSEED'] = str(random.getrandbits(32))
1238 os.environ['PYTHONHASHSEED'] = str(random.getrandbits(32))
1239
1239
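The block above pins `PYTHONHASHSEED` before any child Python process starts, so that a failure caused by hash-ordering can be reproduced by exporting the seed the runner prints on failure. A sketch of the same idea; `ensure_hash_seed` is a hypothetical name, with the environment mapping injected to keep the sketch testable:

```python
import os
import random

def ensure_hash_seed(environ=None):
    """Pin PYTHONHASHSEED to a known random value if unset, so child
    interpreters inherit a reproducible hash ordering. Returns the seed."""
    env = environ if environ is not None else os.environ
    if 'PYTHONHASHSEED' not in env:
        # Choose the seed ourselves instead of letting Python randomize,
        # so the exact value can be logged and replayed later.
        env['PYTHONHASHSEED'] = str(random.getrandbits(32))
    return env['PYTHONHASHSEED']
```

An already-exported seed is left untouched, which is what lets a developer rerun a failing test under the seed from a previous run.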
1240 global TESTDIR, HGTMP, INST, BINDIR, TMPBINDIR, PYTHONDIR, COVERAGE_FILE
1240 global TESTDIR, HGTMP, INST, BINDIR, TMPBINDIR, PYTHONDIR, COVERAGE_FILE
1241 TESTDIR = os.environ["TESTDIR"] = os.getcwd()
1241 TESTDIR = os.environ["TESTDIR"] = os.getcwd()
1242 if options.tmpdir:
1242 if options.tmpdir:
1243 options.keep_tmpdir = True
1243 options.keep_tmpdir = True
1244 tmpdir = options.tmpdir
1244 tmpdir = options.tmpdir
1245 if os.path.exists(tmpdir):
1245 if os.path.exists(tmpdir):
1246 # Meaning of tmpdir has changed since 1.3: we used to create
1246 # Meaning of tmpdir has changed since 1.3: we used to create
1247 # HGTMP inside tmpdir; now HGTMP is tmpdir. So fail if
1247 # HGTMP inside tmpdir; now HGTMP is tmpdir. So fail if
1248 # tmpdir already exists.
1248 # tmpdir already exists.
1249 print "error: temp dir %r already exists" % tmpdir
1249 print "error: temp dir %r already exists" % tmpdir
1250 return 1
1250 return 1
1251
1251
1252 # Automatically removing tmpdir sounds convenient, but could
1252 # Automatically removing tmpdir sounds convenient, but could
1253 # really annoy anyone in the habit of using "--tmpdir=/tmp"
1253 # really annoy anyone in the habit of using "--tmpdir=/tmp"
1254 # or "--tmpdir=$HOME".
1254 # or "--tmpdir=$HOME".
1255 #vlog("# Removing temp dir", tmpdir)
1255 #vlog("# Removing temp dir", tmpdir)
1256 #shutil.rmtree(tmpdir)
1256 #shutil.rmtree(tmpdir)
1257 os.makedirs(tmpdir)
1257 os.makedirs(tmpdir)
1258 else:
1258 else:
1259 d = None
1259 d = None
1260 if os.name == 'nt':
1260 if os.name == 'nt':
1261 # without this, we get the default temp dir location, but
1261 # without this, we get the default temp dir location, but
1262 # in all lowercase, which causes trouble with paths (issue3490)
1262 # in all lowercase, which causes trouble with paths (issue3490)
1263 d = os.getenv('TMP')
1263 d = os.getenv('TMP')
1264 tmpdir = tempfile.mkdtemp('', 'hgtests.', d)
1264 tmpdir = tempfile.mkdtemp('', 'hgtests.', d)
1265 HGTMP = os.environ['HGTMP'] = os.path.realpath(tmpdir)
1265 HGTMP = os.environ['HGTMP'] = os.path.realpath(tmpdir)
1266
1266
1267 if options.with_hg:
1267 if options.with_hg:
1268 INST = None
1268 INST = None
1269 BINDIR = os.path.dirname(os.path.realpath(options.with_hg))
1269 BINDIR = os.path.dirname(os.path.realpath(options.with_hg))
1270 TMPBINDIR = os.path.join(HGTMP, 'install', 'bin')
1270 TMPBINDIR = os.path.join(HGTMP, 'install', 'bin')
1271 os.makedirs(TMPBINDIR)
1271 os.makedirs(TMPBINDIR)
1272
1272
1273 # This looks redundant with how Python initializes sys.path from
1273 # This looks redundant with how Python initializes sys.path from
1274 # the location of the script being executed. Needed because the
1274 # the location of the script being executed. Needed because the
1275 # "hg" specified by --with-hg is not the only Python script
1275 # "hg" specified by --with-hg is not the only Python script
1276 # executed in the test suite that needs to import 'mercurial'
1276 # executed in the test suite that needs to import 'mercurial'
1277 # ... which means it's not really redundant at all.
1277 # ... which means it's not really redundant at all.
1278 PYTHONDIR = BINDIR
1278 PYTHONDIR = BINDIR
1279 else:
1279 else:
1280 INST = os.path.join(HGTMP, "install")
1280 INST = os.path.join(HGTMP, "install")
1281 BINDIR = os.environ["BINDIR"] = os.path.join(INST, "bin")
1281 BINDIR = os.environ["BINDIR"] = os.path.join(INST, "bin")
1282 TMPBINDIR = BINDIR
1282 TMPBINDIR = BINDIR
1283 PYTHONDIR = os.path.join(INST, "lib", "python")
1283 PYTHONDIR = os.path.join(INST, "lib", "python")
1284
1284
1285 os.environ["BINDIR"] = BINDIR
1285 os.environ["BINDIR"] = BINDIR
1286 os.environ["PYTHON"] = PYTHON
1286 os.environ["PYTHON"] = PYTHON
1287
1287
1288 path = [BINDIR] + os.environ["PATH"].split(os.pathsep)
1288 path = [BINDIR] + os.environ["PATH"].split(os.pathsep)
1289 if TMPBINDIR != BINDIR:
1289 if TMPBINDIR != BINDIR:
1290 path = [TMPBINDIR] + path
1290 path = [TMPBINDIR] + path
1291 os.environ["PATH"] = os.pathsep.join(path)
1291 os.environ["PATH"] = os.pathsep.join(path)
1292
1292
1293 # Include TESTDIR in PYTHONPATH so that out-of-tree extensions
1293 # Include TESTDIR in PYTHONPATH so that out-of-tree extensions
1294 # can run .../tests/run-tests.py test-foo where test-foo
1294 # can run .../tests/run-tests.py test-foo where test-foo
1295 # adds an extension to HGRC. Also include run-test.py directory to import
1295 # adds an extension to HGRC. Also include run-test.py directory to import
1296 # modules like heredoctest.
1296 # modules like heredoctest.
1297 pypath = [PYTHONDIR, TESTDIR, os.path.abspath(os.path.dirname(__file__))]
1297 pypath = [PYTHONDIR, TESTDIR, os.path.abspath(os.path.dirname(__file__))]
1298 # We have to augment PYTHONPATH, rather than simply replacing
1298 # We have to augment PYTHONPATH, rather than simply replacing
1299 # it, in case external libraries are only available via current
1299 # it, in case external libraries are only available via current
1300 # PYTHONPATH. (In particular, the Subversion bindings on OS X
1300 # PYTHONPATH. (In particular, the Subversion bindings on OS X
1301 # are in /opt/subversion.)
1301 # are in /opt/subversion.)
1302 oldpypath = os.environ.get(IMPL_PATH)
1302 oldpypath = os.environ.get(IMPL_PATH)
1303 if oldpypath:
1303 if oldpypath:
1304 pypath.append(oldpypath)
1304 pypath.append(oldpypath)
1305 os.environ[IMPL_PATH] = os.pathsep.join(pypath)
1305 os.environ[IMPL_PATH] = os.pathsep.join(pypath)
1306
1306
1307 COVERAGE_FILE = os.path.join(TESTDIR, ".coverage")
1307 COVERAGE_FILE = os.path.join(TESTDIR, ".coverage")
1308
1308
1309 vlog("# Using TESTDIR", TESTDIR)
1309 vlog("# Using TESTDIR", TESTDIR)
1310 vlog("# Using HGTMP", HGTMP)
1310 vlog("# Using HGTMP", HGTMP)
1311 vlog("# Using PATH", os.environ["PATH"])
1311 vlog("# Using PATH", os.environ["PATH"])
1312 vlog("# Using", IMPL_PATH, os.environ[IMPL_PATH])
1312 vlog("# Using", IMPL_PATH, os.environ[IMPL_PATH])
1313
1313
1314 try:
1314 try:
1315 return runtests(options, tests) or 0
1315 return runtests(options, tests) or 0
1316 finally:
1316 finally:
1317 time.sleep(.1)
1317 time.sleep(.1)
1318 cleanup(options)
1318 cleanup(options)
1319
1319
1320 if __name__ == '__main__':
1320 if __name__ == '__main__':
1321 sys.exit(main(sys.argv[1:]))
1321 sys.exit(main(sys.argv[1:]))
@@ -1,136 +1,136 b''
1 from mercurial import ancestor, commands, hg, ui, util
1 from mercurial import ancestor, commands, hg, ui, util
2
2
3 # graph is a dict of child->parent adjacency lists for this graph:
3 # graph is a dict of child->parent adjacency lists for this graph:
4 # o 13
4 # o 13
5 # |
5 # |
6 # | o 12
6 # | o 12
7 # | |
7 # | |
8 # | | o 11
8 # | | o 11
9 # | | |\
9 # | | |\
10 # | | | | o 10
10 # | | | | o 10
11 # | | | | |
11 # | | | | |
12 # | o---+ | 9
12 # | o---+ | 9
13 # | | | | |
13 # | | | | |
14 # o | | | | 8
14 # o | | | | 8
15 # / / / /
15 # / / / /
16 # | | o | 7
16 # | | o | 7
17 # | | | |
17 # | | | |
18 # o---+ | 6
18 # o---+ | 6
19 # / / /
19 # / / /
20 # | | o 5
20 # | | o 5
21 # | |/
21 # | |/
22 # | o 4
22 # | o 4
23 # | |
23 # | |
24 # o | 3
24 # o | 3
25 # | |
25 # | |
26 # | o 2
26 # | o 2
27 # |/
27 # |/
28 # o 1
28 # o 1
29 # |
29 # |
30 # o 0
30 # o 0
31
31
32 graph = {0: [-1], 1: [0], 2: [1], 3: [1], 4: [2], 5: [4], 6: [4],
32 graph = {0: [-1], 1: [0], 2: [1], 3: [1], 4: [2], 5: [4], 6: [4],
33 7: [4], 8: [-1], 9: [6, 7], 10: [5], 11: [3, 7], 12: [9],
33 7: [4], 8: [-1], 9: [6, 7], 10: [5], 11: [3, 7], 12: [9],
34 13: [8]}
34 13: [8]}
35 pfunc = graph.get
35 pfunc = graph.get
36
36
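The `graph` dict above encodes the DAG drawn in the comment as a child -> parents map, with `-1` as the null revision. A minimal, non-incremental sketch of what `ancestor.missingancestors` computes over such a map (the real implementation in `mercurial/ancestor.py` is more elaborate and avoids materializing full ancestor sets):

```python
def missingancestors(revs, bases, pfunc):
    """Ancestors of `revs` (including revs themselves) that are not
    ancestors of `bases`, where pfunc maps a rev to its parent list
    and -1 denotes the null revision."""
    def closure(heads):
        # Depth-first walk collecting every reachable ancestor.
        seen = set()
        stack = [r for r in heads if r != -1]
        while stack:
            r = stack.pop()
            if r in seen:
                continue
            seen.add(r)
            stack.extend(p for p in pfunc(r) if p != -1)
        return seen
    return sorted(closure(revs) - closure(bases))
```

With the graph above, this reproduces the relationship described in the comments of `test_missingancestors`: 9 and all its ancestors are ancestors of 12, so only 12 itself is missing.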
37 class mockchangelog(object):
37 class mockchangelog(object):
38 parentrevs = graph.get
38 parentrevs = graph.get
39
39
40 def runmissingancestors(revs, bases):
40 def runmissingancestors(revs, bases):
41 print "%% ancestors of %s and not of %s" % (revs, bases)
41 print "%% ancestors of %s and not of %s" % (revs, bases)
42 print ancestor.missingancestors(revs, bases, pfunc)
42 print ancestor.missingancestors(revs, bases, pfunc)
43
43
44 def test_missingancestors():
44 def test_missingancestors():
45 # Empty revs
45 # Empty revs
46 runmissingancestors([], [1])
46 runmissingancestors([], [1])
47 runmissingancestors([], [])
47 runmissingancestors([], [])
48
48
49 # If bases is empty, it's the same as if it were [nullrev]
49 # If bases is empty, it's the same as if it were [nullrev]
50 runmissingancestors([12], [])
50 runmissingancestors([12], [])
51
51
52 # Trivial case: revs == bases
52 # Trivial case: revs == bases
53 runmissingancestors([0], [0])
53 runmissingancestors([0], [0])
54 runmissingancestors([4, 5, 6], [6, 5, 4])
54 runmissingancestors([4, 5, 6], [6, 5, 4])
55
55
56 # With nullrev
56 # With nullrev
57 runmissingancestors([-1], [12])
57 runmissingancestors([-1], [12])
58 runmissingancestors([12], [-1])
58 runmissingancestors([12], [-1])
59
59
60 # 9 is a parent of 12. 7 is a parent of 9, so an ancestor of 12. 6 is an
60 # 9 is a parent of 12. 7 is a parent of 9, so an ancestor of 12. 6 is an
61 # ancestor of 12 but not of 7.
61 # ancestor of 12 but not of 7.
62 runmissingancestors([12], [9])
62 runmissingancestors([12], [9])
63 runmissingancestors([9], [12])
63 runmissingancestors([9], [12])
64 runmissingancestors([12, 9], [7])
64 runmissingancestors([12, 9], [7])
65 runmissingancestors([7, 6], [12])
65 runmissingancestors([7, 6], [12])
66
66
67 # More complex cases
67 # More complex cases
68 runmissingancestors([10], [11, 12])
68 runmissingancestors([10], [11, 12])
69 runmissingancestors([11], [10])
69 runmissingancestors([11], [10])
70 runmissingancestors([11], [10, 12])
70 runmissingancestors([11], [10, 12])
71 runmissingancestors([12], [10])
71 runmissingancestors([12], [10])
72 runmissingancestors([12], [11])
72 runmissingancestors([12], [11])
73 runmissingancestors([10, 11, 12], [13])
73 runmissingancestors([10, 11, 12], [13])
74 runmissingancestors([13], [10, 11, 12])
74 runmissingancestors([13], [10, 11, 12])
75
75
76 def genlazyancestors(revs, stoprev=0, inclusive=False):
76 def genlazyancestors(revs, stoprev=0, inclusive=False):
77 print ("%% lazy ancestor set for %s, stoprev = %s, inclusive = %s" %
77 print ("%% lazy ancestor set for %s, stoprev = %s, inclusive = %s" %
78 (revs, stoprev, inclusive))
78 (revs, stoprev, inclusive))
79 return ancestor.lazyancestors(mockchangelog, revs, stoprev=stoprev,
79 return ancestor.lazyancestors(mockchangelog, revs, stoprev=stoprev,
80 inclusive=inclusive)
80 inclusive=inclusive)
81
81
82 def printlazyancestors(s, l):
82 def printlazyancestors(s, l):
83 print [n for n in l if n in s]
83 print [n for n in l if n in s]
84
84
85 def test_lazyancestors():
85 def test_lazyancestors():
86 # Empty revs
86 # Empty revs
87 s = genlazyancestors([])
87 s = genlazyancestors([])
88 printlazyancestors(s, [3, 0, -1])
88 printlazyancestors(s, [3, 0, -1])
89
89
90 # Standard example
90 # Standard example
91 s = genlazyancestors([11, 13])
91 s = genlazyancestors([11, 13])
92 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
92 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
93
93
94 # Including revs
94 # Including revs
95 s = genlazyancestors([11, 13], inclusive=True)
95 s = genlazyancestors([11, 13], inclusive=True)
96 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
96 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
97
97
98 # Test with stoprev
98 # Test with stoprev
99 s = genlazyancestors([11, 13], stoprev=6)
99 s = genlazyancestors([11, 13], stoprev=6)
100 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
100 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
101 s = genlazyancestors([11, 13], stoprev=6, inclusive=True)
101 s = genlazyancestors([11, 13], stoprev=6, inclusive=True)
102 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
102 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
103
103
104
104
105 # The C gca algorithm requires a real repo. These are textual descriptions of
105 # The C gca algorithm requires a real repo. These are textual descriptions of
106 # dags that have been known to be problematic.
106 # DAGs that have been known to be problematic.
107 dagtests = [
107 dagtests = [
108 '+2*2*2/*3/2',
108 '+2*2*2/*3/2',
109 '+3*3/*2*2/*4*4/*4/2*4/2*2',
109 '+3*3/*2*2/*4*4/*4/2*4/2*2',
110 ]
110 ]
111 def test_gca():
111 def test_gca():
112 u = ui.ui()
112 u = ui.ui()
113 for i, dag in enumerate(dagtests):
113 for i, dag in enumerate(dagtests):
114 repo = hg.repository(u, 'gca%d' % i, create=1)
114 repo = hg.repository(u, 'gca%d' % i, create=1)
115 cl = repo.changelog
115 cl = repo.changelog
116 if not util.safehasattr(cl.index, 'ancestors'):
116 if not util.safehasattr(cl.index, 'ancestors'):
117 # C version not available
117 # C version not available
118 return
118 return
119
119
120 commands.debugbuilddag(u, repo, dag)
120 commands.debugbuilddag(u, repo, dag)
121 # Compare the results of the Python and C versions. This does not
121 # Compare the results of the Python and C versions. This does not
122 # include choosing a winner when more than one gca exists -- we make
122 # include choosing a winner when more than one gca exists -- we make
123 # sure both return exactly the same set of gcas.
123 # sure both return exactly the same set of gcas.
124 for a in cl:
124 for a in cl:
125 for b in cl:
125 for b in cl:
126 cgcas = sorted(cl.index.ancestors(a, b))
126 cgcas = sorted(cl.index.ancestors(a, b))
127 pygcas = sorted(ancestor.ancestors(cl.parentrevs, a, b))
127 pygcas = sorted(ancestor.ancestors(cl.parentrevs, a, b))
128 if cgcas != pygcas:
128 if cgcas != pygcas:
129 print "test_gca: for dag %s, gcas for %d, %d:" % (dag, a, b)
129 print "test_gca: for dag %s, gcas for %d, %d:" % (dag, a, b)
130 print " C returned: %s" % cgcas
130 print " C returned: %s" % cgcas
131 print " Python returned: %s" % pygcas
131 print " Python returned: %s" % pygcas
132
132
133 if __name__ == '__main__':
133 if __name__ == '__main__':
134 test_missingancestors()
134 test_missingancestors()
135 test_lazyancestors()
135 test_lazyancestors()
136 test_gca()
136 test_gca()
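The lazyancestors tests above exercise traversal over a revision DAG with `stoprev` and `inclusive` options. As a standalone illustration of the technique being tested (not Mercurial's actual `ancestor.lazyancestors` implementation), here is a minimal sketch of lazy ancestor generation, assuming a `parentrevs(rev) -> (p1, p2)` callback with `-1` meaning "no parent", as in Mercurial's changelog API:

```python
# Minimal sketch of lazy ancestor traversal over a revision DAG.
# Assumes a parentrevs(rev) -> (p1, p2) callback where -1 means "no
# parent". This is a hypothetical stand-in for illustration, not
# mercurial.ancestor.lazyancestors itself.
import heapq

def lazyancestors(parentrevs, revs, stoprev=0, inclusive=False):
    """Lazily yield ancestors of revs, highest revision first.

    If inclusive is True, revs themselves are yielded first.
    Revisions below stoprev are never yielded.
    """
    seen = set()
    heap = []  # max-heap via negated revision numbers
    if inclusive:
        start = list(revs)
    else:
        start = [p for r in revs for p in parentrevs(r) if p != -1]
    for r in start:
        if r not in seen and r >= stoprev:
            seen.add(r)
            heapq.heappush(heap, -r)
    while heap:
        r = -heapq.heappop(heap)
        yield r
        for p in parentrevs(r):
            if p != -1 and p >= stoprev and p not in seen:
                seen.add(p)
                heapq.heappush(heap, -p)
```

For a small linear-plus-merge DAG `0 <- 1 <- 2, {1,2} <- 3`, `lazyancestors(pr, [3])` yields `2, 1, 0`, while `stoprev=1` cuts the traversal off before revision 0, matching the semantics the test transcript probes with its `stoprev` and `inclusive` variants.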
@@ -1,191 +1,191 b''
1 $ hg init
1 $ hg init
2
2
3 no bookmarks
3 no bookmarks
4
4
5 $ hg bookmarks
5 $ hg bookmarks
6 no bookmarks set
6 no bookmarks set
7
7
8 set bookmark X
8 set bookmark X
9
9
10 $ hg bookmark X
10 $ hg bookmark X
11
11
12 list bookmarks
12 list bookmarks
13
13
14 $ hg bookmark
14 $ hg bookmark
15 * X -1:000000000000
15 * X -1:000000000000
16
16
17 list bookmarks with color
17 list bookmarks with color
18
18
19 $ hg --config extensions.color= --config color.mode=ansi \
19 $ hg --config extensions.color= --config color.mode=ansi \
20 > bookmark --color=always
20 > bookmark --color=always
21 \x1b[0;32m * X -1:000000000000\x1b[0m (esc)
21 \x1b[0;32m * X -1:000000000000\x1b[0m (esc)
22
22
23 update to bookmark X
23 update to bookmark X
24
24
25 $ hg update X
25 $ hg update X
26 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
26 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
27
27
28 list bookmarks
28 list bookmarks
29
29
30 $ hg bookmarks
30 $ hg bookmarks
31 * X -1:000000000000
31 * X -1:000000000000
32
32
33 rename
33 rename
34
34
35 $ hg bookmark -m X Z
35 $ hg bookmark -m X Z
36
36
37 list bookmarks
37 list bookmarks
38
38
39 $ cat .hg/bookmarks.current
39 $ cat .hg/bookmarks.current
40 Z (no-eol)
40 Z (no-eol)
41 $ cat .hg/bookmarks
41 $ cat .hg/bookmarks
42 0000000000000000000000000000000000000000 Z
42 0000000000000000000000000000000000000000 Z
43 $ hg bookmarks
43 $ hg bookmarks
44 * Z -1:000000000000
44 * Z -1:000000000000
45
45
46 new bookmarks X and Y, first one made active
46 new bookmarks X and Y, first one made active
47
47
48 $ hg bookmark Y X
48 $ hg bookmark Y X
49
49
50 list bookmarks
50 list bookmarks
51
51
52 $ hg bookmark
52 $ hg bookmark
53 X -1:000000000000
53 X -1:000000000000
54 * Y -1:000000000000
54 * Y -1:000000000000
55 Z -1:000000000000
55 Z -1:000000000000
56
56
57 $ hg bookmark -d X
57 $ hg bookmark -d X
58
58
59 commit
59 commit
60
60
61 $ echo 'b' > b
61 $ echo 'b' > b
62 $ hg add b
62 $ hg add b
63 $ hg commit -m'test'
63 $ hg commit -m'test'
64
64
65 list bookmarks
65 list bookmarks
66
66
67 $ hg bookmark
67 $ hg bookmark
68 * Y 0:719295282060
68 * Y 0:719295282060
69 Z -1:000000000000
69 Z -1:000000000000
70
70
71 Verify that switching to Z updates the current bookmark:
71 Verify that switching to Z updates the current bookmark:
72 $ hg update Z
72 $ hg update Z
73 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
73 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
74 $ hg bookmark
74 $ hg bookmark
75 Y 0:719295282060
75 Y 0:719295282060
76 * Z -1:000000000000
76 * Z -1:000000000000
77
77
78 Switch back to Y for the remaining tests in this file:
78 Switch back to Y for the remaining tests in this file:
79 $ hg update Y
79 $ hg update Y
80 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
80 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
81
81
82 delete bookmarks
82 delete bookmarks
83
83
84 $ hg bookmark -d Y
84 $ hg bookmark -d Y
85 $ hg bookmark -d Z
85 $ hg bookmark -d Z
86
86
87 list bookmarks
87 list bookmarks
88
88
89 $ hg bookmark
89 $ hg bookmark
90 no bookmarks set
90 no bookmarks set
91
91
92 update to tip
92 update to tip
93
93
94 $ hg update tip
94 $ hg update tip
95 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
95 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
96
96
97 set bookmark Y using -r . but make sure that the active
97 set bookmark Y using -r . but make sure that the active
98 bookmark is not activated
98 bookmark is not activated
99
99
100 $ hg bookmark -r . Y
100 $ hg bookmark -r . Y
101
101
102 list bookmarks, Y should not be active
102 list bookmarks, Y should not be active
103
103
104 $ hg bookmark
104 $ hg bookmark
105 Y 0:719295282060
105 Y 0:719295282060
106
106
107 now, activate Y
107 now, activate Y
108
108
109 $ hg up -q Y
109 $ hg up -q Y
110
110
111 set bookmark Z using -i
111 set bookmark Z using -i
112
112
113 $ hg bookmark -r . -i Z
113 $ hg bookmark -r . -i Z
114 $ hg bookmarks
114 $ hg bookmarks
115 * Y 0:719295282060
115 * Y 0:719295282060
116 Z 0:719295282060
116 Z 0:719295282060
117
117
118 deactivate current bookmark using -i
118 deactivate current bookmark using -i
119
119
120 $ hg bookmark -i Y
120 $ hg bookmark -i Y
121 $ hg bookmarks
121 $ hg bookmarks
122 Y 0:719295282060
122 Y 0:719295282060
123 Z 0:719295282060
123 Z 0:719295282060
124
124
125 $ hg up -q Y
125 $ hg up -q Y
126 $ hg bookmark -i
126 $ hg bookmark -i
127 $ hg bookmarks
127 $ hg bookmarks
128 Y 0:719295282060
128 Y 0:719295282060
129 Z 0:719295282060
129 Z 0:719295282060
130 $ hg bookmark -i
130 $ hg bookmark -i
131 no active bookmark
131 no active bookmark
132 $ hg up -q Y
132 $ hg up -q Y
133 $ hg bookmarks
133 $ hg bookmarks
134 * Y 0:719295282060
134 * Y 0:719295282060
135 Z 0:719295282060
135 Z 0:719295282060
136
136
137 deactivate current bookmark while renaming
137 deactivate current bookmark while renaming
138
138
139 $ hg bookmark -i -m Y X
139 $ hg bookmark -i -m Y X
140 $ hg bookmarks
140 $ hg bookmarks
141 X 0:719295282060
141 X 0:719295282060
142 Z 0:719295282060
142 Z 0:719295282060
143
143
144 bare update moves the active bookmark forward and clear the divergent bookmarks
144 bare update moves the active bookmark forward and clear the divergent bookmarks
145
145
146 $ echo a > a
146 $ echo a > a
147 $ hg ci -Am1
147 $ hg ci -Am1
148 adding a
148 adding a
149 $ echo b >> a
149 $ echo b >> a
150 $ hg ci -Am2
150 $ hg ci -Am2
151 $ hg bookmark X@1 -r 1
151 $ hg bookmark X@1 -r 1
152 $ hg bookmark X@2 -r 2
152 $ hg bookmark X@2 -r 2
153 $ hg update X
153 $ hg update X
154 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
154 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
155 $ hg bookmarks
155 $ hg bookmarks
156 * X 0:719295282060
156 * X 0:719295282060
157 X@1 1:cc586d725fbe
157 X@1 1:cc586d725fbe
158 X@2 2:49e1c4e84c58
158 X@2 2:49e1c4e84c58
159 Z 0:719295282060
159 Z 0:719295282060
160 $ hg update
160 $ hg update
161 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
161 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
162 updating bookmark X
162 updating bookmark X
163 $ hg bookmarks
163 $ hg bookmarks
164 * X 2:49e1c4e84c58
164 * X 2:49e1c4e84c58
165 Z 0:719295282060
165 Z 0:719295282060
166
166
167 test deleting .hg/bookmarks.current when explicitly updating
167 test deleting .hg/bookmarks.current when explicitly updating
168 to a revision
168 to a revision
169
169
170 $ echo a >> b
170 $ echo a >> b
171 $ hg ci -m.
171 $ hg ci -m.
172 $ hg up -q X
172 $ hg up -q X
173 $ test -f .hg/bookmarks.current
173 $ test -f .hg/bookmarks.current
174
174
175 try to update to it again to make sure we don't
175 try to update to it again to make sure we don't
176 set and then unset it
176 set and then unset it
177
177
178 $ hg up -q X
178 $ hg up -q X
179 $ test -f .hg/bookmarks.current
179 $ test -f .hg/bookmarks.current
180
180
181 $ hg up -q 1
181 $ hg up -q 1
182 $ test -f .hg/bookmarks.current
182 $ test -f .hg/bookmarks.current
183 [1]
183 [1]
184
184
185 when a bookmark is active, hg up -r . is
185 when a bookmark is active, hg up -r . is
186 analogus to hg book -i <active bookmark>
186 analogous to hg book -i <active bookmark>
187
187
188 $ hg up -q X
188 $ hg up -q X
189 $ hg up -q .
189 $ hg up -q .
190 $ test -f .hg/bookmarks.current
190 $ test -f .hg/bookmarks.current
191 [1]
191 [1]
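The bookmarks test above inspects `.hg/bookmarks` and `.hg/bookmarks.current` directly. As the `cat .hg/bookmarks` output shows, the on-disk format is one `<40-hex-node> <name>` entry per line. A minimal parser sketch for that format, assuming only what the transcript above demonstrates:

```python
# Minimal sketch of parsing the .hg/bookmarks format shown in the
# transcript above: one "<40-hex-node> <name>" entry per line.
# Bookmark names may contain spaces, so split only on the first space.
def parsebookmarks(data):
    """Return a dict mapping bookmark name -> 40-hex node string."""
    marks = {}
    for line in data.splitlines():
        if not line.strip():
            continue
        node, name = line.split(' ', 1)
        marks[name] = node
    return marks
```

Applied to the file contents shown earlier (`0000000000000000000000000000000000000000 Z`), this yields `{'Z': '0000...'}`, i.e. bookmark `Z` pointing at the null revision, consistent with the `* Z -1:000000000000` listing.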
@@ -1,650 +1,650 b''
1 Setting up test
1 Setting up test
2
2
3 $ hg init test
3 $ hg init test
4 $ cd test
4 $ cd test
5 $ echo 0 > afile
5 $ echo 0 > afile
6 $ hg add afile
6 $ hg add afile
7 $ hg commit -m "0.0"
7 $ hg commit -m "0.0"
8 $ echo 1 >> afile
8 $ echo 1 >> afile
9 $ hg commit -m "0.1"
9 $ hg commit -m "0.1"
10 $ echo 2 >> afile
10 $ echo 2 >> afile
11 $ hg commit -m "0.2"
11 $ hg commit -m "0.2"
12 $ echo 3 >> afile
12 $ echo 3 >> afile
13 $ hg commit -m "0.3"
13 $ hg commit -m "0.3"
14 $ hg update -C 0
14 $ hg update -C 0
15 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
15 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
16 $ echo 1 >> afile
16 $ echo 1 >> afile
17 $ hg commit -m "1.1"
17 $ hg commit -m "1.1"
18 created new head
18 created new head
19 $ echo 2 >> afile
19 $ echo 2 >> afile
20 $ hg commit -m "1.2"
20 $ hg commit -m "1.2"
21 $ echo "a line" > fred
21 $ echo "a line" > fred
22 $ echo 3 >> afile
22 $ echo 3 >> afile
23 $ hg add fred
23 $ hg add fred
24 $ hg commit -m "1.3"
24 $ hg commit -m "1.3"
25 $ hg mv afile adifferentfile
25 $ hg mv afile adifferentfile
26 $ hg commit -m "1.3m"
26 $ hg commit -m "1.3m"
27 $ hg update -C 3
27 $ hg update -C 3
28 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
28 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
29 $ hg mv afile anotherfile
29 $ hg mv afile anotherfile
30 $ hg commit -m "0.3m"
30 $ hg commit -m "0.3m"
31 $ hg verify
31 $ hg verify
32 checking changesets
32 checking changesets
33 checking manifests
33 checking manifests
34 crosschecking files in changesets and manifests
34 crosschecking files in changesets and manifests
35 checking files
35 checking files
36 4 files, 9 changesets, 7 total revisions
36 4 files, 9 changesets, 7 total revisions
37 $ cd ..
37 $ cd ..
38 $ hg init empty
38 $ hg init empty
39
39
40 Bundle and phase
40 Bundle and phase
41
41
42 $ hg -R test phase --force --secret 0
42 $ hg -R test phase --force --secret 0
43 $ hg -R test bundle phase.hg empty
43 $ hg -R test bundle phase.hg empty
44 searching for changes
44 searching for changes
45 no changes found (ignored 9 secret changesets)
45 no changes found (ignored 9 secret changesets)
46 [1]
46 [1]
47 $ hg -R test phase --draft -r 'head()'
47 $ hg -R test phase --draft -r 'head()'
48
48
49 Bundle --all
49 Bundle --all
50
50
51 $ hg -R test bundle --all all.hg
51 $ hg -R test bundle --all all.hg
52 9 changesets found
52 9 changesets found
53
53
54 Bundle test to full.hg
54 Bundle test to full.hg
55
55
56 $ hg -R test bundle full.hg empty
56 $ hg -R test bundle full.hg empty
57 searching for changes
57 searching for changes
58 9 changesets found
58 9 changesets found
59
59
60 Unbundle full.hg in test
60 Unbundle full.hg in test
61
61
62 $ hg -R test unbundle full.hg
62 $ hg -R test unbundle full.hg
63 adding changesets
63 adding changesets
64 adding manifests
64 adding manifests
65 adding file changes
65 adding file changes
66 added 0 changesets with 0 changes to 4 files
66 added 0 changesets with 0 changes to 4 files
67 (run 'hg update' to get a working copy)
67 (run 'hg update' to get a working copy)
68
68
69 Verify empty
69 Verify empty
70
70
71 $ hg -R empty heads
71 $ hg -R empty heads
72 [1]
72 [1]
73 $ hg -R empty verify
73 $ hg -R empty verify
74 checking changesets
74 checking changesets
75 checking manifests
75 checking manifests
76 crosschecking files in changesets and manifests
76 crosschecking files in changesets and manifests
77 checking files
77 checking files
78 0 files, 0 changesets, 0 total revisions
78 0 files, 0 changesets, 0 total revisions
79
79
80 Pull full.hg into test (using --cwd)
80 Pull full.hg into test (using --cwd)
81
81
82 $ hg --cwd test pull ../full.hg
82 $ hg --cwd test pull ../full.hg
83 pulling from ../full.hg
83 pulling from ../full.hg
84 searching for changes
84 searching for changes
85 no changes found
85 no changes found
86
86
87 Verify that there are no leaked temporary files after pull (issue2797)
87 Verify that there are no leaked temporary files after pull (issue2797)
88
88
89 $ ls test/.hg | grep .hg10un
89 $ ls test/.hg | grep .hg10un
90 [1]
90 [1]
91
91
92 Pull full.hg into empty (using --cwd)
92 Pull full.hg into empty (using --cwd)
93
93
94 $ hg --cwd empty pull ../full.hg
94 $ hg --cwd empty pull ../full.hg
95 pulling from ../full.hg
95 pulling from ../full.hg
96 requesting all changes
96 requesting all changes
97 adding changesets
97 adding changesets
98 adding manifests
98 adding manifests
99 adding file changes
99 adding file changes
100 added 9 changesets with 7 changes to 4 files (+1 heads)
100 added 9 changesets with 7 changes to 4 files (+1 heads)
101 (run 'hg heads' to see heads, 'hg merge' to merge)
101 (run 'hg heads' to see heads, 'hg merge' to merge)
102
102
103 Rollback empty
103 Rollback empty
104
104
105 $ hg -R empty rollback
105 $ hg -R empty rollback
106 repository tip rolled back to revision -1 (undo pull)
106 repository tip rolled back to revision -1 (undo pull)
107
107
108 Pull full.hg into empty again (using --cwd)
108 Pull full.hg into empty again (using --cwd)
109
109
110 $ hg --cwd empty pull ../full.hg
110 $ hg --cwd empty pull ../full.hg
111 pulling from ../full.hg
111 pulling from ../full.hg
112 requesting all changes
112 requesting all changes
113 adding changesets
113 adding changesets
114 adding manifests
114 adding manifests
115 adding file changes
115 adding file changes
116 added 9 changesets with 7 changes to 4 files (+1 heads)
116 added 9 changesets with 7 changes to 4 files (+1 heads)
117 (run 'hg heads' to see heads, 'hg merge' to merge)
117 (run 'hg heads' to see heads, 'hg merge' to merge)
118
118
119 Pull full.hg into test (using -R)
119 Pull full.hg into test (using -R)
120
120
121 $ hg -R test pull full.hg
121 $ hg -R test pull full.hg
122 pulling from full.hg
122 pulling from full.hg
123 searching for changes
123 searching for changes
124 no changes found
124 no changes found
125
125
126 Pull full.hg into empty (using -R)
126 Pull full.hg into empty (using -R)
127
127
128 $ hg -R empty pull full.hg
128 $ hg -R empty pull full.hg
129 pulling from full.hg
129 pulling from full.hg
130 searching for changes
130 searching for changes
131 no changes found
131 no changes found
132
132
133 Rollback empty
133 Rollback empty
134
134
135 $ hg -R empty rollback
135 $ hg -R empty rollback
136 repository tip rolled back to revision -1 (undo pull)
136 repository tip rolled back to revision -1 (undo pull)
137
137
138 Pull full.hg into empty again (using -R)
138 Pull full.hg into empty again (using -R)
139
139
140 $ hg -R empty pull full.hg
140 $ hg -R empty pull full.hg
141 pulling from full.hg
141 pulling from full.hg
142 requesting all changes
142 requesting all changes
143 adding changesets
143 adding changesets
144 adding manifests
144 adding manifests
145 adding file changes
145 adding file changes
146 added 9 changesets with 7 changes to 4 files (+1 heads)
146 added 9 changesets with 7 changes to 4 files (+1 heads)
147 (run 'hg heads' to see heads, 'hg merge' to merge)
147 (run 'hg heads' to see heads, 'hg merge' to merge)
148
148
149 Log -R full.hg in fresh empty
149 Log -R full.hg in fresh empty
150
150
151 $ rm -r empty
151 $ rm -r empty
152 $ hg init empty
152 $ hg init empty
153 $ cd empty
153 $ cd empty
154 $ hg -R bundle://../full.hg log
154 $ hg -R bundle://../full.hg log
155 changeset: 8:aa35859c02ea
155 changeset: 8:aa35859c02ea
156 tag: tip
156 tag: tip
157 parent: 3:eebf5a27f8ca
157 parent: 3:eebf5a27f8ca
158 user: test
158 user: test
159 date: Thu Jan 01 00:00:00 1970 +0000
159 date: Thu Jan 01 00:00:00 1970 +0000
160 summary: 0.3m
160 summary: 0.3m
161
161
162 changeset: 7:a6a34bfa0076
162 changeset: 7:a6a34bfa0076
163 user: test
163 user: test
164 date: Thu Jan 01 00:00:00 1970 +0000
164 date: Thu Jan 01 00:00:00 1970 +0000
165 summary: 1.3m
165 summary: 1.3m
166
166
167 changeset: 6:7373c1169842
167 changeset: 6:7373c1169842
168 user: test
168 user: test
169 date: Thu Jan 01 00:00:00 1970 +0000
169 date: Thu Jan 01 00:00:00 1970 +0000
170 summary: 1.3
170 summary: 1.3
171
171
172 changeset: 5:1bb50a9436a7
172 changeset: 5:1bb50a9436a7
173 user: test
173 user: test
174 date: Thu Jan 01 00:00:00 1970 +0000
174 date: Thu Jan 01 00:00:00 1970 +0000
175 summary: 1.2
175 summary: 1.2
176
176
177 changeset: 4:095197eb4973
177 changeset: 4:095197eb4973
178 parent: 0:f9ee2f85a263
178 parent: 0:f9ee2f85a263
179 user: test
179 user: test
180 date: Thu Jan 01 00:00:00 1970 +0000
180 date: Thu Jan 01 00:00:00 1970 +0000
181 summary: 1.1
181 summary: 1.1
182
182
183 changeset: 3:eebf5a27f8ca
183 changeset: 3:eebf5a27f8ca
184 user: test
184 user: test
185 date: Thu Jan 01 00:00:00 1970 +0000
185 date: Thu Jan 01 00:00:00 1970 +0000
186 summary: 0.3
186 summary: 0.3
187
187
188 changeset: 2:e38ba6f5b7e0
188 changeset: 2:e38ba6f5b7e0
189 user: test
189 user: test
190 date: Thu Jan 01 00:00:00 1970 +0000
190 date: Thu Jan 01 00:00:00 1970 +0000
191 summary: 0.2
191 summary: 0.2
192
192
193 changeset: 1:34c2bf6b0626
193 changeset: 1:34c2bf6b0626
194 user: test
194 user: test
195 date: Thu Jan 01 00:00:00 1970 +0000
195 date: Thu Jan 01 00:00:00 1970 +0000
196 summary: 0.1
196 summary: 0.1
197
197
198 changeset: 0:f9ee2f85a263
198 changeset: 0:f9ee2f85a263
199 user: test
199 user: test
200 date: Thu Jan 01 00:00:00 1970 +0000
200 date: Thu Jan 01 00:00:00 1970 +0000
201 summary: 0.0
201 summary: 0.0
202
202
203 Make sure bundlerepo doesn't leak tempfiles (issue2491)
203 Make sure bundlerepo doesn't leak tempfiles (issue2491)
204
204
205 $ ls .hg
205 $ ls .hg
206 00changelog.i
206 00changelog.i
207 cache
207 cache
208 requires
208 requires
209 store
209 store
210
210
211 Pull ../full.hg into empty (with hook)
211 Pull ../full.hg into empty (with hook)
212
212
213 $ echo "[hooks]" >> .hg/hgrc
213 $ echo "[hooks]" >> .hg/hgrc
214 $ echo "changegroup = python \"$TESTDIR/printenv.py\" changegroup" >> .hg/hgrc
214 $ echo "changegroup = python \"$TESTDIR/printenv.py\" changegroup" >> .hg/hgrc
215
215
216 doesn't work (yet ?)
216 doesn't work (yet ?)
217
217
218 hg -R bundle://../full.hg verify
218 hg -R bundle://../full.hg verify
219
219
220 $ hg pull bundle://../full.hg
220 $ hg pull bundle://../full.hg
221 pulling from bundle:../full.hg
221 pulling from bundle:../full.hg
222 requesting all changes
222 requesting all changes
223 adding changesets
223 adding changesets
224 adding manifests
224 adding manifests
225 adding file changes
225 adding file changes
226 added 9 changesets with 7 changes to 4 files (+1 heads)
226 added 9 changesets with 7 changes to 4 files (+1 heads)
227 changegroup hook: HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735 HG_SOURCE=pull HG_URL=bundle:../full.hg
227 changegroup hook: HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735 HG_SOURCE=pull HG_URL=bundle:../full.hg
228 (run 'hg heads' to see heads, 'hg merge' to merge)
228 (run 'hg heads' to see heads, 'hg merge' to merge)
229
229
230 Rollback empty
230 Rollback empty
231
231
232 $ hg rollback
232 $ hg rollback
233 repository tip rolled back to revision -1 (undo pull)
233 repository tip rolled back to revision -1 (undo pull)
234 $ cd ..
234 $ cd ..
235
235
236 Log -R bundle:empty+full.hg
236 Log -R bundle:empty+full.hg
237
237
238 $ hg -R bundle:empty+full.hg log --template="{rev} "; echo ""
238 $ hg -R bundle:empty+full.hg log --template="{rev} "; echo ""
239 8 7 6 5 4 3 2 1 0
239 8 7 6 5 4 3 2 1 0
240
240
241 Pull full.hg into empty again (using -R; with hook)
241 Pull full.hg into empty again (using -R; with hook)
242
242
243 $ hg -R empty pull full.hg
243 $ hg -R empty pull full.hg
244 pulling from full.hg
244 pulling from full.hg
245 requesting all changes
245 requesting all changes
246 adding changesets
246 adding changesets
247 adding manifests
247 adding manifests
248 adding file changes
248 adding file changes
249 added 9 changesets with 7 changes to 4 files (+1 heads)
249 added 9 changesets with 7 changes to 4 files (+1 heads)
250 changegroup hook: HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735 HG_SOURCE=pull HG_URL=bundle:empty+full.hg
250 changegroup hook: HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735 HG_SOURCE=pull HG_URL=bundle:empty+full.hg
251 (run 'hg heads' to see heads, 'hg merge' to merge)
251 (run 'hg heads' to see heads, 'hg merge' to merge)
252
252
253 Create partial clones
253 Create partial clones
254
254
255 $ rm -r empty
255 $ rm -r empty
256 $ hg init empty
256 $ hg init empty
257 $ hg clone -r 3 test partial
257 $ hg clone -r 3 test partial
258 adding changesets
258 adding changesets
259 adding manifests
259 adding manifests
260 adding file changes
260 adding file changes
261 added 4 changesets with 4 changes to 1 files
261 added 4 changesets with 4 changes to 1 files
262 updating to branch default
262 updating to branch default
263 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
263 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
264 $ hg clone partial partial2
264 $ hg clone partial partial2
265 updating to branch default
265 updating to branch default
266 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
266 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
267 $ cd partial
267 $ cd partial
268
268
269 Log -R full.hg in partial
269 Log -R full.hg in partial
270
270
271 $ hg -R bundle://../full.hg log
271 $ hg -R bundle://../full.hg log
272 changeset: 8:aa35859c02ea
272 changeset: 8:aa35859c02ea
273 tag: tip
273 tag: tip
274 parent: 3:eebf5a27f8ca
274 parent: 3:eebf5a27f8ca
275 user: test
275 user: test
276 date: Thu Jan 01 00:00:00 1970 +0000
276 date: Thu Jan 01 00:00:00 1970 +0000
277 summary: 0.3m
277 summary: 0.3m
278
278
279 changeset: 7:a6a34bfa0076
279 changeset: 7:a6a34bfa0076
280 user: test
280 user: test
281 date: Thu Jan 01 00:00:00 1970 +0000
281 date: Thu Jan 01 00:00:00 1970 +0000
282 summary: 1.3m
282 summary: 1.3m
283
283
284 changeset: 6:7373c1169842
284 changeset: 6:7373c1169842
285 user: test
285 user: test
286 date: Thu Jan 01 00:00:00 1970 +0000
286 date: Thu Jan 01 00:00:00 1970 +0000
287 summary: 1.3
287 summary: 1.3
288
288
289 changeset: 5:1bb50a9436a7
289 changeset: 5:1bb50a9436a7
290 user: test
290 user: test
291 date: Thu Jan 01 00:00:00 1970 +0000
291 date: Thu Jan 01 00:00:00 1970 +0000
292 summary: 1.2
292 summary: 1.2
293
293
294 changeset: 4:095197eb4973
294 changeset: 4:095197eb4973
295 parent: 0:f9ee2f85a263
295 parent: 0:f9ee2f85a263
296 user: test
296 user: test
297 date: Thu Jan 01 00:00:00 1970 +0000
297 date: Thu Jan 01 00:00:00 1970 +0000
298 summary: 1.1
298 summary: 1.1
299
299
300 changeset: 3:eebf5a27f8ca
300 changeset: 3:eebf5a27f8ca
301 user: test
301 user: test
302 date: Thu Jan 01 00:00:00 1970 +0000
302 date: Thu Jan 01 00:00:00 1970 +0000
303 summary: 0.3
303 summary: 0.3
304
304
305 changeset: 2:e38ba6f5b7e0
305 changeset: 2:e38ba6f5b7e0
306 user: test
306 user: test
307 date: Thu Jan 01 00:00:00 1970 +0000
307 date: Thu Jan 01 00:00:00 1970 +0000
308 summary: 0.2
308 summary: 0.2
309
309
310 changeset: 1:34c2bf6b0626
310 changeset: 1:34c2bf6b0626
311 user: test
311 user: test
312 date: Thu Jan 01 00:00:00 1970 +0000
312 date: Thu Jan 01 00:00:00 1970 +0000
313 summary: 0.1
313 summary: 0.1
314
314
315 changeset: 0:f9ee2f85a263
315 changeset: 0:f9ee2f85a263
316 user: test
316 user: test
317 date: Thu Jan 01 00:00:00 1970 +0000
317 date: Thu Jan 01 00:00:00 1970 +0000
318 summary: 0.0
318 summary: 0.0
319
319
320
320
321 Incoming full.hg in partial
321 Incoming full.hg in partial
322
322
323 $ hg incoming bundle://../full.hg
323 $ hg incoming bundle://../full.hg
324 comparing with bundle:../full.hg
324 comparing with bundle:../full.hg
325 searching for changes
325 searching for changes
326 changeset: 4:095197eb4973
326 changeset: 4:095197eb4973
327 parent: 0:f9ee2f85a263
327 parent: 0:f9ee2f85a263
328 user: test
328 user: test
329 date: Thu Jan 01 00:00:00 1970 +0000
329 date: Thu Jan 01 00:00:00 1970 +0000
330 summary: 1.1
330 summary: 1.1
331
331
332 changeset: 5:1bb50a9436a7
332 changeset: 5:1bb50a9436a7
333 user: test
333 user: test
334 date: Thu Jan 01 00:00:00 1970 +0000
334 date: Thu Jan 01 00:00:00 1970 +0000
335 summary: 1.2
335 summary: 1.2
336
336
337 changeset: 6:7373c1169842
337 changeset: 6:7373c1169842
338 user: test
338 user: test
339 date: Thu Jan 01 00:00:00 1970 +0000
339 date: Thu Jan 01 00:00:00 1970 +0000
340 summary: 1.3
340 summary: 1.3
341
341
342 changeset: 7:a6a34bfa0076
342 changeset: 7:a6a34bfa0076
343 user: test
343 user: test
344 date: Thu Jan 01 00:00:00 1970 +0000
344 date: Thu Jan 01 00:00:00 1970 +0000
345 summary: 1.3m
345 summary: 1.3m
346
346
347 changeset: 8:aa35859c02ea
347 changeset: 8:aa35859c02ea
348 tag: tip
348 tag: tip
349 parent: 3:eebf5a27f8ca
349 parent: 3:eebf5a27f8ca
350 user: test
350 user: test
351 date: Thu Jan 01 00:00:00 1970 +0000
351 date: Thu Jan 01 00:00:00 1970 +0000
352 summary: 0.3m
352 summary: 0.3m
353
353
354
354
355 Outgoing -R full.hg vs partial2 in partial
355 Outgoing -R full.hg vs partial2 in partial
356
356
357 $ hg -R bundle://../full.hg outgoing ../partial2
357 $ hg -R bundle://../full.hg outgoing ../partial2
358 comparing with ../partial2
358 comparing with ../partial2
359 searching for changes
359 searching for changes
360 changeset: 4:095197eb4973
360 changeset: 4:095197eb4973
361 parent: 0:f9ee2f85a263
361 parent: 0:f9ee2f85a263
362 user: test
362 user: test
363 date: Thu Jan 01 00:00:00 1970 +0000
363 date: Thu Jan 01 00:00:00 1970 +0000
364 summary: 1.1
364 summary: 1.1
365
365
366 changeset: 5:1bb50a9436a7
366 changeset: 5:1bb50a9436a7
367 user: test
367 user: test
368 date: Thu Jan 01 00:00:00 1970 +0000
368 date: Thu Jan 01 00:00:00 1970 +0000
369 summary: 1.2
369 summary: 1.2
370
370
371 changeset: 6:7373c1169842
371 changeset: 6:7373c1169842
372 user: test
372 user: test
373 date: Thu Jan 01 00:00:00 1970 +0000
373 date: Thu Jan 01 00:00:00 1970 +0000
374 summary: 1.3
374 summary: 1.3
375
375
376 changeset: 7:a6a34bfa0076
376 changeset: 7:a6a34bfa0076
377 user: test
377 user: test
378 date: Thu Jan 01 00:00:00 1970 +0000
378 date: Thu Jan 01 00:00:00 1970 +0000
379 summary: 1.3m
379 summary: 1.3m
380
380
381 changeset: 8:aa35859c02ea
381 changeset: 8:aa35859c02ea
382 tag: tip
382 tag: tip
383 parent: 3:eebf5a27f8ca
383 parent: 3:eebf5a27f8ca
384 user: test
384 user: test
385 date: Thu Jan 01 00:00:00 1970 +0000
385 date: Thu Jan 01 00:00:00 1970 +0000
386 summary: 0.3m
386 summary: 0.3m
387
387
388
388
389 Outgoing -R does-not-exist.hg vs partial2 in partial
389 Outgoing -R does-not-exist.hg vs partial2 in partial
390
390
391 $ hg -R bundle://../does-not-exist.hg outgoing ../partial2
391 $ hg -R bundle://../does-not-exist.hg outgoing ../partial2
392 abort: *../does-not-exist.hg* (glob)
392 abort: *../does-not-exist.hg* (glob)
393 [255]
393 [255]
394 $ cd ..
394 $ cd ..
395
395
396 hide outer repo
396 hide outer repo
397 $ hg init
397 $ hg init
398
398
399 Direct clone from bundle (all-history)
399 Direct clone from bundle (all-history)
400
400
401 $ hg clone full.hg full-clone
401 $ hg clone full.hg full-clone
402 requesting all changes
402 requesting all changes
403 adding changesets
403 adding changesets
404 adding manifests
404 adding manifests
405 adding file changes
405 adding file changes
406 added 9 changesets with 7 changes to 4 files (+1 heads)
406 added 9 changesets with 7 changes to 4 files (+1 heads)
407 updating to branch default
407 updating to branch default
408 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
408 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
409 $ hg -R full-clone heads
409 $ hg -R full-clone heads
410 changeset: 8:aa35859c02ea
410 changeset: 8:aa35859c02ea
411 tag: tip
411 tag: tip
412 parent: 3:eebf5a27f8ca
412 parent: 3:eebf5a27f8ca
413 user: test
413 user: test
414 date: Thu Jan 01 00:00:00 1970 +0000
414 date: Thu Jan 01 00:00:00 1970 +0000
415 summary: 0.3m
415 summary: 0.3m
416
416
417 changeset: 7:a6a34bfa0076
417 changeset: 7:a6a34bfa0076
418 user: test
418 user: test
419 date: Thu Jan 01 00:00:00 1970 +0000
419 date: Thu Jan 01 00:00:00 1970 +0000
420 summary: 1.3m
420 summary: 1.3m
421
421
422 $ rm -r full-clone
422 $ rm -r full-clone
423
423
424 When cloning from a non-copiable repository into '', do not
When cloning from a non-copiable repository into '', do not
recurse infinitely (issue 2528)

$ hg clone full.hg ''
abort: empty destination path is not valid
[255]

test for http://mercurial.selenic.com/bts/issue216

Unbundle incremental bundles into fresh empty in one go

$ rm -r empty
$ hg init empty
$ hg -R test bundle --base null -r 0 ../0.hg
1 changesets found
$ hg -R test bundle --base 0 -r 1 ../1.hg
1 changesets found
$ hg -R empty unbundle -u ../0.hg ../1.hg
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files
1 files updated, 0 files merged, 0 files removed, 0 files unresolved

View full contents of the bundle
$ hg -R test bundle --base null -r 3 ../partial.hg
4 changesets found
$ cd test
$ hg -R ../../partial.hg log -r "bundle()"
changeset: 0:f9ee2f85a263
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: 0.0

changeset: 1:34c2bf6b0626
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: 0.1

changeset: 2:e38ba6f5b7e0
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: 0.2

changeset: 3:eebf5a27f8ca
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: 0.3

$ cd ..

test for 540d1059c802

$ hg init orig
$ cd orig
$ echo foo > foo
$ hg add foo
$ hg ci -m 'add foo'

$ hg clone . ../copy
updating to branch default
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg tag foo

$ cd ../copy
$ echo >> foo
$ hg ci -m 'change foo'
$ hg bundle ../bundle.hg ../orig
searching for changes
1 changesets found

$ cd ../orig
$ hg incoming ../bundle.hg
comparing with ../bundle.hg
searching for changes
changeset: 2:ed1b79f46b9a
tag: tip
parent: 0:bbd179dfa0a7
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: change foo

$ cd ..

test bundle with # in the filename (issue2154):

$ cp bundle.hg 'test#bundle.hg'
$ cd orig
$ hg incoming '../test#bundle.hg'
comparing with ../test
abort: unknown revision 'bundle.hg'!
[255]

note that percent encoding is not handled:

$ hg incoming ../test%23bundle.hg
abort: repository ../test%23bundle.hg not found!
[255]
$ cd ..

test to bundle revisions on the newly created branch (issue3828):

$ hg -q clone -U test test-clone
$ cd test

$ hg -q branch foo
$ hg commit -m "create foo branch"
$ hg -q outgoing ../test-clone
9:b4f5acb1ee27
$ hg -q bundle --branch foo foo.hg ../test-clone
$ hg -R foo.hg -q log -r "bundle()"
9:b4f5acb1ee27

$ cd ..

test for http://mercurial.selenic.com/bts/issue1144

test that verify bundle does not traceback

partial history bundle, fails w/ unknown parent

$ hg -R bundle.hg verify
abort: 00changelog.i@bbd179dfa0a7: unknown parent!
[255]

full history bundle, refuses to verify non-local repo

$ hg -R all.hg verify
abort: cannot verify bundle or remote repos
[255]

but, regular verify must continue to work

$ hg -R orig verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
2 files, 2 changesets, 2 total revisions

diff against bundle

$ hg init b
$ cd b
$ hg -R ../all.hg diff -r tip
diff -r aa35859c02ea anotherfile
--- a/anotherfile Thu Jan 01 00:00:00 1970 +0000
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,4 +0,0 @@
-0
-1
-2
-3
$ cd ..

bundle single branch

$ hg init branchy
$ cd branchy
$ echo a >a
$ echo x >x
$ hg ci -Ama
adding a
adding x
$ echo c >c
$ echo xx >x
$ hg ci -Amc
adding c
$ echo c1 >c1
$ hg ci -Amc1
adding c1
$ hg up 0
1 files updated, 0 files merged, 2 files removed, 0 files unresolved
$ echo b >b
$ hg ci -Amb
adding b
created new head
$ echo b1 >b1
$ echo xx >x
$ hg ci -Amb1
adding b1
$ hg clone -q -r2 . part

== bundling via incoming

$ hg in -R part --bundle incoming.hg --template "{node}\n" .
comparing with .
searching for changes
1a38c1b849e8b70c756d2d80b0b9a3ac0b7ea11a
057f4db07f61970e1c11e83be79e9d08adc4dc31

== bundling

$ hg bundle bundle.hg part --debug
query 1; heads
searching for changes
all remote heads known locally
2 changesets found
list of changesets:
1a38c1b849e8b70c756d2d80b0b9a3ac0b7ea11a
057f4db07f61970e1c11e83be79e9d08adc4dc31
bundling: 1/2 changesets (50.00%)
bundling: 2/2 changesets (100.00%)
bundling: 1/2 manifests (50.00%)
bundling: 2/2 manifests (100.00%)
bundling: b 1/3 files (33.33%)
bundling: b1 2/3 files (66.67%)
bundling: x 3/3 files (100.00%)

== Test for issue3441

$ hg clone -q -r0 . part2
$ hg -q -R part2 pull bundle.hg
$ hg -R part2 verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
4 files, 3 changesets, 5 total revisions

$ cd ..
$ hg init a
$ cd a
$ echo a > a
$ hg add a
$ echo line 1 > b
$ echo line 2 >> b
$ hg commit -l b -d '1000000 0' -u 'User Name <user@hostname>'

$ hg add b
$ echo other 1 > c
$ echo other 2 >> c
$ echo >> c
$ echo other 3 >> c
$ hg commit -l c -d '1100000 0' -u 'A. N. Other <other@place>'

$ hg add c
$ hg commit -m 'no person' -d '1200000 0' -u 'other@place'
$ echo c >> c
$ hg commit -m 'no user, no domain' -d '1300000 0' -u 'person'

$ echo foo > .hg/branch
$ hg commit -m 'new branch' -d '1400000 0' -u 'person'

$ hg co -q 3
$ echo other 4 >> d
$ hg add d
$ hg commit -m 'new head' -d '1500000 0' -u 'person'

$ hg merge -q foo
$ hg commit -m 'merge' -d '1500001 0' -u 'person'

Second branch starting at nullrev:

$ hg update null
0 files updated, 0 files merged, 4 files removed, 0 files unresolved
$ echo second > second
$ hg add second
$ hg commit -m second -d '1000000 0' -u 'User Name <user@hostname>'
created new head

$ echo third > third
$ hg add third
$ hg mv second fourth
$ hg commit -m third -d "2020-01-01 10:01"

$ hg log --template '{join(file_copies, ",\n")}\n' -r .
fourth (second)
$ hg log -T '{file_copies % "{source} -> {name}\n"}' -r .
second -> fourth

Quoting for ui.logtemplate

$ hg tip --config "ui.logtemplate={rev}\n"
8
$ hg tip --config "ui.logtemplate='{rev}\n'"
8
$ hg tip --config 'ui.logtemplate="{rev}\n"'
8

Make sure user/global hgrc does not affect tests

$ echo '[ui]' > .hg/hgrc
$ echo 'logtemplate =' >> .hg/hgrc
$ echo 'style =' >> .hg/hgrc

Add some simple styles to settings

$ echo '[templates]' >> .hg/hgrc
$ printf 'simple = "{rev}\\n"\n' >> .hg/hgrc
$ printf 'simple2 = {rev}\\n\n' >> .hg/hgrc

$ hg log -l1 -Tsimple
8
$ hg log -l1 -Tsimple2
8

Test templates and style maps in files:

$ echo "{rev}" > tmpl
$ hg log -l1 -T./tmpl
8
$ hg log -l1 -Tblah/blah
blah/blah (no-eol)

$ printf 'changeset = "{rev}\\n"\n' > map-simple
$ hg log -l1 -T./map-simple
8

Default style is like normal output:

$ hg log > log.out
$ hg log --style default > style.out
$ cmp log.out style.out || diff -u log.out style.out

$ hg log -v > log.out
$ hg log -v --style default > style.out
$ cmp log.out style.out || diff -u log.out style.out

$ hg log --debug > log.out
$ hg log --debug --style default > style.out
$ cmp log.out style.out || diff -u log.out style.out

Revision with no copies (used to print a traceback):

$ hg tip -v --template '\n'


Compact style works:

$ hg log -Tcompact
8[tip] 95c24699272e 2020-01-01 10:01 +0000 test
third

7:-1 29114dbae42b 1970-01-12 13:46 +0000 user
second

6:5,4 d41e714fe50d 1970-01-18 08:40 +0000 person
merge

5:3 13207e5a10d9 1970-01-18 08:40 +0000 person
new head

4 bbe44766e73d 1970-01-17 04:53 +0000 person
new branch

3 10e46f2dcbf4 1970-01-16 01:06 +0000 person
no user, no domain

2 97054abb4ab8 1970-01-14 21:20 +0000 other
no person

1 b608e9d1a3f0 1970-01-13 17:33 +0000 other
other 1

0 1e4e1b8f71e0 1970-01-12 13:46 +0000 user
line 1


$ hg log -v --style compact
8[tip] 95c24699272e 2020-01-01 10:01 +0000 test
third

7:-1 29114dbae42b 1970-01-12 13:46 +0000 User Name <user@hostname>
second

6:5,4 d41e714fe50d 1970-01-18 08:40 +0000 person
merge

5:3 13207e5a10d9 1970-01-18 08:40 +0000 person
new head

4 bbe44766e73d 1970-01-17 04:53 +0000 person
new branch

3 10e46f2dcbf4 1970-01-16 01:06 +0000 person
no user, no domain

2 97054abb4ab8 1970-01-14 21:20 +0000 other@place
no person

1 b608e9d1a3f0 1970-01-13 17:33 +0000 A. N. Other <other@place>
other 1
other 2

other 3

0 1e4e1b8f71e0 1970-01-12 13:46 +0000 User Name <user@hostname>
line 1
line 2


$ hg log --debug --style compact
8[tip]:7,-1 95c24699272e 2020-01-01 10:01 +0000 test
third

7:-1,-1 29114dbae42b 1970-01-12 13:46 +0000 User Name <user@hostname>
second

6:5,4 d41e714fe50d 1970-01-18 08:40 +0000 person
merge

5:3,-1 13207e5a10d9 1970-01-18 08:40 +0000 person
new head

4:3,-1 bbe44766e73d 1970-01-17 04:53 +0000 person
new branch

3:2,-1 10e46f2dcbf4 1970-01-16 01:06 +0000 person
no user, no domain

2:1,-1 97054abb4ab8 1970-01-14 21:20 +0000 other@place
no person

1:0,-1 b608e9d1a3f0 1970-01-13 17:33 +0000 A. N. Other <other@place>
other 1
other 2

other 3

0:-1,-1 1e4e1b8f71e0 1970-01-12 13:46 +0000 User Name <user@hostname>
line 1
line 2


Test xml styles:

$ hg log --style xml
<?xml version="1.0"?>
<log>
<logentry revision="8" node="95c24699272ef57d062b8bccc32c878bf841784a">
<tag>tip</tag>
<author email="test">test</author>
<date>2020-01-01T10:01:00+00:00</date>
<msg xml:space="preserve">third</msg>
</logentry>
<logentry revision="7" node="29114dbae42b9f078cf2714dbe3a86bba8ec7453">
<parent revision="-1" node="0000000000000000000000000000000000000000" />
<author email="user@hostname">User Name</author>
<date>1970-01-12T13:46:40+00:00</date>
<msg xml:space="preserve">second</msg>
</logentry>
<logentry revision="6" node="d41e714fe50d9e4a5f11b4d595d543481b5f980b">
<parent revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f" />
<parent revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74" />
<author email="person">person</author>
<date>1970-01-18T08:40:01+00:00</date>
<msg xml:space="preserve">merge</msg>
</logentry>
<logentry revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f">
<parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
<author email="person">person</author>
<date>1970-01-18T08:40:00+00:00</date>
<msg xml:space="preserve">new head</msg>
</logentry>
<logentry revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74">
<branch>foo</branch>
<author email="person">person</author>
<date>1970-01-17T04:53:20+00:00</date>
<msg xml:space="preserve">new branch</msg>
</logentry>
<logentry revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47">
<author email="person">person</author>
<date>1970-01-16T01:06:40+00:00</date>
<msg xml:space="preserve">no user, no domain</msg>
</logentry>
<logentry revision="2" node="97054abb4ab824450e9164180baf491ae0078465">
<author email="other@place">other</author>
<date>1970-01-14T21:20:00+00:00</date>
<msg xml:space="preserve">no person</msg>
</logentry>
<logentry revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965">
<author email="other@place">A. N. Other</author>
<date>1970-01-13T17:33:20+00:00</date>
<msg xml:space="preserve">other 1
other 2

other 3</msg>
</logentry>
<logentry revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f">
<author email="user@hostname">User Name</author>
<date>1970-01-12T13:46:40+00:00</date>
<msg xml:space="preserve">line 1
line 2</msg>
</logentry>
</log>

$ hg log -v --style xml
<?xml version="1.0"?>
<log>
<logentry revision="8" node="95c24699272ef57d062b8bccc32c878bf841784a">
<tag>tip</tag>
<author email="test">test</author>
<date>2020-01-01T10:01:00+00:00</date>
<msg xml:space="preserve">third</msg>
<paths>
<path action="A">fourth</path>
<path action="A">third</path>
<path action="R">second</path>
</paths>
<copies>
<copy source="second">fourth</copy>
</copies>
</logentry>
<logentry revision="7" node="29114dbae42b9f078cf2714dbe3a86bba8ec7453">
<parent revision="-1" node="0000000000000000000000000000000000000000" />
<author email="user@hostname">User Name</author>
<date>1970-01-12T13:46:40+00:00</date>
<msg xml:space="preserve">second</msg>
<paths>
<path action="A">second</path>
</paths>
</logentry>
<logentry revision="6" node="d41e714fe50d9e4a5f11b4d595d543481b5f980b">
<parent revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f" />
<parent revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74" />
<author email="person">person</author>
<date>1970-01-18T08:40:01+00:00</date>
<msg xml:space="preserve">merge</msg>
<paths>
</paths>
</logentry>
<logentry revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f">
<parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
<author email="person">person</author>
<date>1970-01-18T08:40:00+00:00</date>
<msg xml:space="preserve">new head</msg>
<paths>
<path action="A">d</path>
</paths>
</logentry>
<logentry revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74">
<branch>foo</branch>
<author email="person">person</author>
<date>1970-01-17T04:53:20+00:00</date>
<msg xml:space="preserve">new branch</msg>
<paths>
</paths>
</logentry>
<logentry revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47">
<author email="person">person</author>
<date>1970-01-16T01:06:40+00:00</date>
<msg xml:space="preserve">no user, no domain</msg>
<paths>
<path action="M">c</path>
</paths>
</logentry>
<logentry revision="2" node="97054abb4ab824450e9164180baf491ae0078465">
<author email="other@place">other</author>
<date>1970-01-14T21:20:00+00:00</date>
<msg xml:space="preserve">no person</msg>
<paths>
<path action="A">c</path>
</paths>
</logentry>
<logentry revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965">
<author email="other@place">A. N. Other</author>
<date>1970-01-13T17:33:20+00:00</date>
<msg xml:space="preserve">other 1
other 2

other 3</msg>
<paths>
<path action="A">b</path>
</paths>
</logentry>
<logentry revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f">
<author email="user@hostname">User Name</author>
<date>1970-01-12T13:46:40+00:00</date>
<msg xml:space="preserve">line 1
line 2</msg>
<paths>
<path action="A">a</path>
</paths>
</logentry>
</log>

$ hg log --debug --style xml
<?xml version="1.0"?>
<log>
<logentry revision="8" node="95c24699272ef57d062b8bccc32c878bf841784a">
<tag>tip</tag>
<parent revision="7" node="29114dbae42b9f078cf2714dbe3a86bba8ec7453" />
<parent revision="-1" node="0000000000000000000000000000000000000000" />
<author email="test">test</author>
<date>2020-01-01T10:01:00+00:00</date>
<msg xml:space="preserve">third</msg>
<paths>
<path action="A">fourth</path>
<path action="A">third</path>
<path action="R">second</path>
</paths>
<copies>
<copy source="second">fourth</copy>
</copies>
<extra key="branch">default</extra>
</logentry>
<logentry revision="7" node="29114dbae42b9f078cf2714dbe3a86bba8ec7453">
<parent revision="-1" node="0000000000000000000000000000000000000000" />
<parent revision="-1" node="0000000000000000000000000000000000000000" />
<author email="user@hostname">User Name</author>
<date>1970-01-12T13:46:40+00:00</date>
<msg xml:space="preserve">second</msg>
<paths>
<path action="A">second</path>
</paths>
<extra key="branch">default</extra>
</logentry>
<logentry revision="6" node="d41e714fe50d9e4a5f11b4d595d543481b5f980b">
<parent revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f" />
<parent revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74" />
<author email="person">person</author>
<date>1970-01-18T08:40:01+00:00</date>
<msg xml:space="preserve">merge</msg>
<paths>
</paths>
<extra key="branch">default</extra>
</logentry>
<logentry revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f">
<parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
<parent revision="-1" node="0000000000000000000000000000000000000000" />
<author email="person">person</author>
<date>1970-01-18T08:40:00+00:00</date>
<msg xml:space="preserve">new head</msg>
<paths>
<path action="A">d</path>
</paths>
<extra key="branch">default</extra>
407 <extra key="branch">default</extra>
408 </logentry>
408 </logentry>
409 <logentry revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74">
409 <logentry revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74">
410 <branch>foo</branch>
410 <branch>foo</branch>
411 <parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
411 <parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
412 <parent revision="-1" node="0000000000000000000000000000000000000000" />
412 <parent revision="-1" node="0000000000000000000000000000000000000000" />
413 <author email="person">person</author>
413 <author email="person">person</author>
414 <date>1970-01-17T04:53:20+00:00</date>
414 <date>1970-01-17T04:53:20+00:00</date>
415 <msg xml:space="preserve">new branch</msg>
415 <msg xml:space="preserve">new branch</msg>
416 <paths>
416 <paths>
417 </paths>
417 </paths>
418 <extra key="branch">foo</extra>
418 <extra key="branch">foo</extra>
419 </logentry>
419 </logentry>
420 <logentry revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47">
420 <logentry revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47">
421 <parent revision="2" node="97054abb4ab824450e9164180baf491ae0078465" />
421 <parent revision="2" node="97054abb4ab824450e9164180baf491ae0078465" />
422 <parent revision="-1" node="0000000000000000000000000000000000000000" />
422 <parent revision="-1" node="0000000000000000000000000000000000000000" />
423 <author email="person">person</author>
423 <author email="person">person</author>
424 <date>1970-01-16T01:06:40+00:00</date>
424 <date>1970-01-16T01:06:40+00:00</date>
425 <msg xml:space="preserve">no user, no domain</msg>
425 <msg xml:space="preserve">no user, no domain</msg>
426 <paths>
426 <paths>
427 <path action="M">c</path>
427 <path action="M">c</path>
428 </paths>
428 </paths>
429 <extra key="branch">default</extra>
429 <extra key="branch">default</extra>
430 </logentry>
430 </logentry>
431 <logentry revision="2" node="97054abb4ab824450e9164180baf491ae0078465">
431 <logentry revision="2" node="97054abb4ab824450e9164180baf491ae0078465">
432 <parent revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965" />
432 <parent revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965" />
433 <parent revision="-1" node="0000000000000000000000000000000000000000" />
433 <parent revision="-1" node="0000000000000000000000000000000000000000" />
434 <author email="other@place">other</author>
434 <author email="other@place">other</author>
435 <date>1970-01-14T21:20:00+00:00</date>
435 <date>1970-01-14T21:20:00+00:00</date>
436 <msg xml:space="preserve">no person</msg>
436 <msg xml:space="preserve">no person</msg>
437 <paths>
437 <paths>
438 <path action="A">c</path>
438 <path action="A">c</path>
439 </paths>
439 </paths>
440 <extra key="branch">default</extra>
440 <extra key="branch">default</extra>
441 </logentry>
441 </logentry>
442 <logentry revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965">
442 <logentry revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965">
443 <parent revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f" />
443 <parent revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f" />
444 <parent revision="-1" node="0000000000000000000000000000000000000000" />
444 <parent revision="-1" node="0000000000000000000000000000000000000000" />
445 <author email="other@place">A. N. Other</author>
445 <author email="other@place">A. N. Other</author>
446 <date>1970-01-13T17:33:20+00:00</date>
446 <date>1970-01-13T17:33:20+00:00</date>
447 <msg xml:space="preserve">other 1
447 <msg xml:space="preserve">other 1
448 other 2
448 other 2
449
449
450 other 3</msg>
450 other 3</msg>
451 <paths>
451 <paths>
452 <path action="A">b</path>
452 <path action="A">b</path>
453 </paths>
453 </paths>
454 <extra key="branch">default</extra>
454 <extra key="branch">default</extra>
455 </logentry>
455 </logentry>
456 <logentry revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f">
456 <logentry revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f">
457 <parent revision="-1" node="0000000000000000000000000000000000000000" />
457 <parent revision="-1" node="0000000000000000000000000000000000000000" />
458 <parent revision="-1" node="0000000000000000000000000000000000000000" />
458 <parent revision="-1" node="0000000000000000000000000000000000000000" />
459 <author email="user@hostname">User Name</author>
459 <author email="user@hostname">User Name</author>
460 <date>1970-01-12T13:46:40+00:00</date>
460 <date>1970-01-12T13:46:40+00:00</date>
461 <msg xml:space="preserve">line 1
461 <msg xml:space="preserve">line 1
462 line 2</msg>
462 line 2</msg>
463 <paths>
463 <paths>
464 <path action="A">a</path>
464 <path action="A">a</path>
465 </paths>
465 </paths>
466 <extra key="branch">default</extra>
466 <extra key="branch">default</extra>
467 </logentry>
467 </logentry>
468 </log>
468 </log>
469
469
470
470
Error if style not readable:

#if unix-permissions no-root
  $ touch q
  $ chmod 0 q
  $ hg log --style ./q
  abort: Permission denied: ./q
  [255]
#endif

Error if no style:

  $ hg log --style notexist
  abort: style 'notexist' not found
  (available styles: bisect, changelog, compact, default, phases, xml)
  [255]

Error if style missing key:

  $ echo 'q = q' > t
  $ hg log --style ./t
  abort: "changeset" not in template map
  [255]

Error if style missing value:

  $ echo 'changeset =' > t
  $ hg log --style t
  abort: t:1: missing value
  [255]

Error if include fails:

  $ echo 'changeset = q' >> t
#if unix-permissions no-root
  $ hg log --style ./t
  abort: template file ./q: Permission denied
  [255]
  $ rm q
#endif

Include works:

  $ echo '{rev}' > q
  $ hg log --style ./t
  8
  7
  6
  5
  4
  3
  2
  1
  0

Missing non-standard names give no error (backward compatibility):

  $ echo "changeset = '{c}'" > t
  $ hg log --style ./t

Defining non-standard name works:

  $ cat <<EOF > t
  > changeset = '{c}'
  > c = q
  > EOF
  $ hg log --style ./t
  8
  7
  6
  5
  4
  3
  2
  1
  0

ui.style works:

  $ echo '[ui]' > .hg/hgrc
  $ echo 'style = t' >> .hg/hgrc
  $ hg log
  8
  7
  6
  5
  4
  3
  2
  1
  0


Issue338:

  $ hg log --style=changelog > changelog

  $ cat changelog
  2020-01-01  test  <test>

  	* fourth, second, third:
  	third
  	[95c24699272e] [tip]

  1970-01-12  User Name  <user@hostname>

  	* second:
  	second
  	[29114dbae42b]

  1970-01-18  person  <person>

  	* merge
  	[d41e714fe50d]

  	* d:
  	new head
  	[13207e5a10d9]

  1970-01-17  person  <person>

  	* new branch
  	[bbe44766e73d] <foo>

  1970-01-16  person  <person>

  	* c:
  	no user, no domain
  	[10e46f2dcbf4]

  1970-01-14  other  <other@place>

  	* c:
  	no person
  	[97054abb4ab8]

  1970-01-13  A. N. Other  <other@place>

  	* b:
  	other 1 other 2

  	other 3
  	[b608e9d1a3f0]

  1970-01-12  User Name  <user@hostname>

  	* a:
  	line 1 line 2
  	[1e4e1b8f71e0]


Issue2130: xml output for 'hg heads' is malformed

  $ hg heads --style changelog
  2020-01-01  test  <test>

  	* fourth, second, third:
  	third
  	[95c24699272e] [tip]

  1970-01-18  person  <person>

  	* merge
  	[d41e714fe50d]

  1970-01-17  person  <person>

  	* new branch
  	[bbe44766e73d] <foo>


Keys work:

  $ for key in author branch branches date desc file_adds file_dels file_mods \
  >  file_copies file_copies_switch files \
  >  manifest node parents rev tags diffstat extras \
  >  p1rev p2rev p1node p2node; do
  >   for mode in '' --verbose --debug; do
  >     hg log $mode --template "$key$mode: {$key}\n"
  >   done
  > done
  author: test
  author: User Name <user@hostname>
  author: person
  author: person
  author: person
  author: person
  author: other@place
  author: A. N. Other <other@place>
  author: User Name <user@hostname>
  author--verbose: test
  author--verbose: User Name <user@hostname>
  author--verbose: person
  author--verbose: person
  author--verbose: person
  author--verbose: person
  author--verbose: other@place
  author--verbose: A. N. Other <other@place>
  author--verbose: User Name <user@hostname>
  author--debug: test
  author--debug: User Name <user@hostname>
  author--debug: person
  author--debug: person
  author--debug: person
  author--debug: person
  author--debug: other@place
  author--debug: A. N. Other <other@place>
  author--debug: User Name <user@hostname>
  branch: default
  branch: default
  branch: default
  branch: default
  branch: foo
  branch: default
  branch: default
  branch: default
  branch: default
  branch--verbose: default
  branch--verbose: default
  branch--verbose: default
  branch--verbose: default
  branch--verbose: foo
  branch--verbose: default
  branch--verbose: default
  branch--verbose: default
  branch--verbose: default
  branch--debug: default
  branch--debug: default
  branch--debug: default
  branch--debug: default
  branch--debug: foo
  branch--debug: default
  branch--debug: default
  branch--debug: default
  branch--debug: default
  branches:
  branches:
  branches:
  branches:
  branches: foo
  branches:
  branches:
  branches:
  branches:
  branches--verbose:
  branches--verbose:
  branches--verbose:
  branches--verbose:
  branches--verbose: foo
  branches--verbose:
  branches--verbose:
  branches--verbose:
  branches--verbose:
  branches--debug:
  branches--debug:
  branches--debug:
  branches--debug:
  branches--debug: foo
  branches--debug:
  branches--debug:
  branches--debug:
  branches--debug:
  date: 1577872860.00
  date: 1000000.00
  date: 1500001.00
  date: 1500000.00
  date: 1400000.00
  date: 1300000.00
  date: 1200000.00
  date: 1100000.00
  date: 1000000.00
  date--verbose: 1577872860.00
  date--verbose: 1000000.00
  date--verbose: 1500001.00
  date--verbose: 1500000.00
  date--verbose: 1400000.00
  date--verbose: 1300000.00
  date--verbose: 1200000.00
  date--verbose: 1100000.00
  date--verbose: 1000000.00
  date--debug: 1577872860.00
  date--debug: 1000000.00
  date--debug: 1500001.00
  date--debug: 1500000.00
  date--debug: 1400000.00
  date--debug: 1300000.00
  date--debug: 1200000.00
  date--debug: 1100000.00
  date--debug: 1000000.00
  desc: third
  desc: second
  desc: merge
  desc: new head
  desc: new branch
  desc: no user, no domain
  desc: no person
  desc: other 1
  other 2

  other 3
  desc: line 1
  line 2
  desc--verbose: third
  desc--verbose: second
  desc--verbose: merge
  desc--verbose: new head
  desc--verbose: new branch
  desc--verbose: no user, no domain
  desc--verbose: no person
  desc--verbose: other 1
  other 2

  other 3
  desc--verbose: line 1
  line 2
  desc--debug: third
  desc--debug: second
  desc--debug: merge
  desc--debug: new head
  desc--debug: new branch
  desc--debug: no user, no domain
  desc--debug: no person
  desc--debug: other 1
  other 2

  other 3
  desc--debug: line 1
  line 2
  file_adds: fourth third
  file_adds: second
  file_adds:
  file_adds: d
  file_adds:
  file_adds:
  file_adds: c
  file_adds: b
  file_adds: a
  file_adds--verbose: fourth third
  file_adds--verbose: second
  file_adds--verbose:
  file_adds--verbose: d
  file_adds--verbose:
  file_adds--verbose:
  file_adds--verbose: c
  file_adds--verbose: b
  file_adds--verbose: a
  file_adds--debug: fourth third
  file_adds--debug: second
  file_adds--debug:
  file_adds--debug: d
  file_adds--debug:
  file_adds--debug:
  file_adds--debug: c
  file_adds--debug: b
  file_adds--debug: a
  file_dels: second
  file_dels:
  file_dels:
  file_dels:
  file_dels:
  file_dels:
  file_dels:
  file_dels:
  file_dels:
  file_dels--verbose: second
  file_dels--verbose:
  file_dels--verbose:
  file_dels--verbose:
  file_dels--verbose:
  file_dels--verbose:
  file_dels--verbose:
  file_dels--verbose:
  file_dels--verbose:
  file_dels--debug: second
  file_dels--debug:
  file_dels--debug:
  file_dels--debug:
  file_dels--debug:
  file_dels--debug:
  file_dels--debug:
  file_dels--debug:
  file_dels--debug:
  file_mods:
  file_mods:
  file_mods:
  file_mods:
  file_mods:
  file_mods: c
  file_mods:
  file_mods:
  file_mods:
  file_mods--verbose:
  file_mods--verbose:
  file_mods--verbose:
  file_mods--verbose:
  file_mods--verbose:
  file_mods--verbose: c
  file_mods--verbose:
  file_mods--verbose:
  file_mods--verbose:
  file_mods--debug:
  file_mods--debug:
  file_mods--debug:
  file_mods--debug:
  file_mods--debug:
  file_mods--debug: c
  file_mods--debug:
  file_mods--debug:
  file_mods--debug:
  file_copies: fourth (second)
  file_copies:
  file_copies:
  file_copies:
  file_copies:
  file_copies:
  file_copies:
  file_copies:
  file_copies:
  file_copies--verbose: fourth (second)
  file_copies--verbose:
  file_copies--verbose:
  file_copies--verbose:
  file_copies--verbose:
  file_copies--verbose:
  file_copies--verbose:
  file_copies--verbose:
  file_copies--verbose:
  file_copies--debug: fourth (second)
  file_copies--debug:
  file_copies--debug:
  file_copies--debug:
  file_copies--debug:
  file_copies--debug:
  file_copies--debug:
  file_copies--debug:
  file_copies--debug:
  file_copies_switch:
  file_copies_switch:
  file_copies_switch:
  file_copies_switch:
  file_copies_switch:
  file_copies_switch:
  file_copies_switch:
  file_copies_switch:
  file_copies_switch:
  file_copies_switch--verbose:
  file_copies_switch--verbose:
  file_copies_switch--verbose:
  file_copies_switch--verbose:
  file_copies_switch--verbose:
  file_copies_switch--verbose:
  file_copies_switch--verbose:
  file_copies_switch--verbose:
  file_copies_switch--verbose:
  file_copies_switch--debug:
  file_copies_switch--debug:
  file_copies_switch--debug:
  file_copies_switch--debug:
  file_copies_switch--debug:
  file_copies_switch--debug:
  file_copies_switch--debug:
  file_copies_switch--debug:
  file_copies_switch--debug:
  files: fourth second third
  files: second
  files:
  files: d
  files:
  files: c
  files: c
  files: b
  files: a
  files--verbose: fourth second third
  files--verbose: second
  files--verbose:
  files--verbose: d
  files--verbose:
  files--verbose: c
  files--verbose: c
  files--verbose: b
  files--verbose: a
  files--debug: fourth second third
  files--debug: second
  files--debug:
  files--debug: d
  files--debug:
  files--debug: c
  files--debug: c
  files--debug: b
  files--debug: a
  manifest: 6:94961b75a2da
  manifest: 5:f2dbc354b94e
  manifest: 4:4dc3def4f9b4
  manifest: 4:4dc3def4f9b4
  manifest: 3:cb5a1327723b
  manifest: 3:cb5a1327723b
  manifest: 2:6e0e82995c35
  manifest: 1:4e8d705b1e53
  manifest: 0:a0c8bcbbb45c
  manifest--verbose: 6:94961b75a2da
  manifest--verbose: 5:f2dbc354b94e
  manifest--verbose: 4:4dc3def4f9b4
  manifest--verbose: 4:4dc3def4f9b4
  manifest--verbose: 3:cb5a1327723b
  manifest--verbose: 3:cb5a1327723b
  manifest--verbose: 2:6e0e82995c35
  manifest--verbose: 1:4e8d705b1e53
  manifest--verbose: 0:a0c8bcbbb45c
  manifest--debug: 6:94961b75a2da554b4df6fb599e5bfc7d48de0c64
  manifest--debug: 5:f2dbc354b94e5ec0b4f10680ee0cee816101d0bf
  manifest--debug: 4:4dc3def4f9b4c6e8de820f6ee74737f91e96a216
  manifest--debug: 4:4dc3def4f9b4c6e8de820f6ee74737f91e96a216
  manifest--debug: 3:cb5a1327723bada42f117e4c55a303246eaf9ccc
  manifest--debug: 3:cb5a1327723bada42f117e4c55a303246eaf9ccc
  manifest--debug: 2:6e0e82995c35d0d57a52aca8da4e56139e06b4b1
  manifest--debug: 1:4e8d705b1e53e3f9375e0e60dc7b525d8211fe55
  manifest--debug: 0:a0c8bcbbb45c63b90b70ad007bf38961f64f2af0
  node: 95c24699272ef57d062b8bccc32c878bf841784a
  node: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
  node: d41e714fe50d9e4a5f11b4d595d543481b5f980b
  node: 13207e5a10d9fd28ec424934298e176197f2c67f
  node: bbe44766e73d5f11ed2177f1838de10c53ef3e74
  node: 10e46f2dcbf4823578cf180f33ecf0b957964c47
  node: 97054abb4ab824450e9164180baf491ae0078465
  node: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
  node: 1e4e1b8f71e05681d422154f5421e385fec3454f
  node--verbose: 95c24699272ef57d062b8bccc32c878bf841784a
  node--verbose: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
  node--verbose: d41e714fe50d9e4a5f11b4d595d543481b5f980b
  node--verbose: 13207e5a10d9fd28ec424934298e176197f2c67f
  node--verbose: bbe44766e73d5f11ed2177f1838de10c53ef3e74
  node--verbose: 10e46f2dcbf4823578cf180f33ecf0b957964c47
  node--verbose: 97054abb4ab824450e9164180baf491ae0078465
  node--verbose: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
  node--verbose: 1e4e1b8f71e05681d422154f5421e385fec3454f
  node--debug: 95c24699272ef57d062b8bccc32c878bf841784a
  node--debug: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
  node--debug: d41e714fe50d9e4a5f11b4d595d543481b5f980b
  node--debug: 13207e5a10d9fd28ec424934298e176197f2c67f
  node--debug: bbe44766e73d5f11ed2177f1838de10c53ef3e74
  node--debug: 10e46f2dcbf4823578cf180f33ecf0b957964c47
  node--debug: 97054abb4ab824450e9164180baf491ae0078465
  node--debug: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
  node--debug: 1e4e1b8f71e05681d422154f5421e385fec3454f
  parents:
  parents: -1:000000000000
  parents: 5:13207e5a10d9 4:bbe44766e73d
  parents: 3:10e46f2dcbf4
  parents:
  parents:
  parents:
  parents:
  parents:
  parents--verbose:
  parents--verbose: -1:000000000000
  parents--verbose: 5:13207e5a10d9 4:bbe44766e73d
  parents--verbose: 3:10e46f2dcbf4
  parents--verbose:
  parents--verbose:
  parents--verbose:
  parents--verbose:
  parents--verbose:
  parents--debug: 7:29114dbae42b9f078cf2714dbe3a86bba8ec7453 -1:0000000000000000000000000000000000000000
1033 parents--debug: 7:29114dbae42b9f078cf2714dbe3a86bba8ec7453 -1:0000000000000000000000000000000000000000
1034 parents--debug: -1:0000000000000000000000000000000000000000 -1:0000000000000000000000000000000000000000
1034 parents--debug: -1:0000000000000000000000000000000000000000 -1:0000000000000000000000000000000000000000
1035 parents--debug: 5:13207e5a10d9fd28ec424934298e176197f2c67f 4:bbe44766e73d5f11ed2177f1838de10c53ef3e74
1035 parents--debug: 5:13207e5a10d9fd28ec424934298e176197f2c67f 4:bbe44766e73d5f11ed2177f1838de10c53ef3e74
1036 parents--debug: 3:10e46f2dcbf4823578cf180f33ecf0b957964c47 -1:0000000000000000000000000000000000000000
1036 parents--debug: 3:10e46f2dcbf4823578cf180f33ecf0b957964c47 -1:0000000000000000000000000000000000000000
1037 parents--debug: 3:10e46f2dcbf4823578cf180f33ecf0b957964c47 -1:0000000000000000000000000000000000000000
1037 parents--debug: 3:10e46f2dcbf4823578cf180f33ecf0b957964c47 -1:0000000000000000000000000000000000000000
1038 parents--debug: 2:97054abb4ab824450e9164180baf491ae0078465 -1:0000000000000000000000000000000000000000
1038 parents--debug: 2:97054abb4ab824450e9164180baf491ae0078465 -1:0000000000000000000000000000000000000000
1039 parents--debug: 1:b608e9d1a3f0273ccf70fb85fd6866b3482bf965 -1:0000000000000000000000000000000000000000
1039 parents--debug: 1:b608e9d1a3f0273ccf70fb85fd6866b3482bf965 -1:0000000000000000000000000000000000000000
1040 parents--debug: 0:1e4e1b8f71e05681d422154f5421e385fec3454f -1:0000000000000000000000000000000000000000
1040 parents--debug: 0:1e4e1b8f71e05681d422154f5421e385fec3454f -1:0000000000000000000000000000000000000000
1041 parents--debug: -1:0000000000000000000000000000000000000000 -1:0000000000000000000000000000000000000000
1041 parents--debug: -1:0000000000000000000000000000000000000000 -1:0000000000000000000000000000000000000000
1042 rev: 8
1042 rev: 8
1043 rev: 7
1043 rev: 7
1044 rev: 6
1044 rev: 6
1045 rev: 5
1045 rev: 5
1046 rev: 4
1046 rev: 4
1047 rev: 3
1047 rev: 3
1048 rev: 2
1048 rev: 2
1049 rev: 1
1049 rev: 1
1050 rev: 0
1050 rev: 0
1051 rev--verbose: 8
1051 rev--verbose: 8
1052 rev--verbose: 7
1052 rev--verbose: 7
1053 rev--verbose: 6
1053 rev--verbose: 6
1054 rev--verbose: 5
1054 rev--verbose: 5
1055 rev--verbose: 4
1055 rev--verbose: 4
1056 rev--verbose: 3
1056 rev--verbose: 3
1057 rev--verbose: 2
1057 rev--verbose: 2
1058 rev--verbose: 1
1058 rev--verbose: 1
1059 rev--verbose: 0
1059 rev--verbose: 0
1060 rev--debug: 8
1060 rev--debug: 8
1061 rev--debug: 7
1061 rev--debug: 7
1062 rev--debug: 6
1062 rev--debug: 6
1063 rev--debug: 5
1063 rev--debug: 5
1064 rev--debug: 4
1064 rev--debug: 4
1065 rev--debug: 3
1065 rev--debug: 3
1066 rev--debug: 2
1066 rev--debug: 2
1067 rev--debug: 1
1067 rev--debug: 1
1068 rev--debug: 0
1068 rev--debug: 0
1069 tags: tip
1069 tags: tip
1070 tags:
1070 tags:
1071 tags:
1071 tags:
1072 tags:
1072 tags:
1073 tags:
1073 tags:
1074 tags:
1074 tags:
1075 tags:
1075 tags:
1076 tags:
1076 tags:
1077 tags:
1077 tags:
1078 tags--verbose: tip
1078 tags--verbose: tip
1079 tags--verbose:
1079 tags--verbose:
1080 tags--verbose:
1080 tags--verbose:
1081 tags--verbose:
1081 tags--verbose:
1082 tags--verbose:
1082 tags--verbose:
1083 tags--verbose:
1083 tags--verbose:
1084 tags--verbose:
1084 tags--verbose:
1085 tags--verbose:
1085 tags--verbose:
1086 tags--verbose:
1086 tags--verbose:
1087 tags--debug: tip
1087 tags--debug: tip
1088 tags--debug:
1088 tags--debug:
1089 tags--debug:
1089 tags--debug:
1090 tags--debug:
1090 tags--debug:
1091 tags--debug:
1091 tags--debug:
1092 tags--debug:
1092 tags--debug:
1093 tags--debug:
1093 tags--debug:
1094 tags--debug:
1094 tags--debug:
1095 tags--debug:
1095 tags--debug:
1096 diffstat: 3: +2/-1
1096 diffstat: 3: +2/-1
1097 diffstat: 1: +1/-0
1097 diffstat: 1: +1/-0
1098 diffstat: 0: +0/-0
1098 diffstat: 0: +0/-0
1099 diffstat: 1: +1/-0
1099 diffstat: 1: +1/-0
1100 diffstat: 0: +0/-0
1100 diffstat: 0: +0/-0
1101 diffstat: 1: +1/-0
1101 diffstat: 1: +1/-0
1102 diffstat: 1: +4/-0
1102 diffstat: 1: +4/-0
1103 diffstat: 1: +2/-0
1103 diffstat: 1: +2/-0
1104 diffstat: 1: +1/-0
1104 diffstat: 1: +1/-0
1105 diffstat--verbose: 3: +2/-1
1105 diffstat--verbose: 3: +2/-1
1106 diffstat--verbose: 1: +1/-0
1106 diffstat--verbose: 1: +1/-0
1107 diffstat--verbose: 0: +0/-0
1107 diffstat--verbose: 0: +0/-0
1108 diffstat--verbose: 1: +1/-0
1108 diffstat--verbose: 1: +1/-0
1109 diffstat--verbose: 0: +0/-0
1109 diffstat--verbose: 0: +0/-0
1110 diffstat--verbose: 1: +1/-0
1110 diffstat--verbose: 1: +1/-0
1111 diffstat--verbose: 1: +4/-0
1111 diffstat--verbose: 1: +4/-0
1112 diffstat--verbose: 1: +2/-0
1112 diffstat--verbose: 1: +2/-0
1113 diffstat--verbose: 1: +1/-0
1113 diffstat--verbose: 1: +1/-0
1114 diffstat--debug: 3: +2/-1
1114 diffstat--debug: 3: +2/-1
1115 diffstat--debug: 1: +1/-0
1115 diffstat--debug: 1: +1/-0
1116 diffstat--debug: 0: +0/-0
1116 diffstat--debug: 0: +0/-0
1117 diffstat--debug: 1: +1/-0
1117 diffstat--debug: 1: +1/-0
1118 diffstat--debug: 0: +0/-0
1118 diffstat--debug: 0: +0/-0
1119 diffstat--debug: 1: +1/-0
1119 diffstat--debug: 1: +1/-0
1120 diffstat--debug: 1: +4/-0
1120 diffstat--debug: 1: +4/-0
1121 diffstat--debug: 1: +2/-0
1121 diffstat--debug: 1: +2/-0
1122 diffstat--debug: 1: +1/-0
1122 diffstat--debug: 1: +1/-0
1123 extras: branch=default
1123 extras: branch=default
1124 extras: branch=default
1124 extras: branch=default
1125 extras: branch=default
1125 extras: branch=default
1126 extras: branch=default
1126 extras: branch=default
1127 extras: branch=foo
1127 extras: branch=foo
1128 extras: branch=default
1128 extras: branch=default
1129 extras: branch=default
1129 extras: branch=default
1130 extras: branch=default
1130 extras: branch=default
1131 extras: branch=default
1131 extras: branch=default
1132 extras--verbose: branch=default
1132 extras--verbose: branch=default
1133 extras--verbose: branch=default
1133 extras--verbose: branch=default
1134 extras--verbose: branch=default
1134 extras--verbose: branch=default
1135 extras--verbose: branch=default
1135 extras--verbose: branch=default
1136 extras--verbose: branch=foo
1136 extras--verbose: branch=foo
1137 extras--verbose: branch=default
1137 extras--verbose: branch=default
1138 extras--verbose: branch=default
1138 extras--verbose: branch=default
1139 extras--verbose: branch=default
1139 extras--verbose: branch=default
1140 extras--verbose: branch=default
1140 extras--verbose: branch=default
1141 extras--debug: branch=default
1141 extras--debug: branch=default
1142 extras--debug: branch=default
1142 extras--debug: branch=default
1143 extras--debug: branch=default
1143 extras--debug: branch=default
1144 extras--debug: branch=default
1144 extras--debug: branch=default
1145 extras--debug: branch=foo
1145 extras--debug: branch=foo
1146 extras--debug: branch=default
1146 extras--debug: branch=default
1147 extras--debug: branch=default
1147 extras--debug: branch=default
1148 extras--debug: branch=default
1148 extras--debug: branch=default
1149 extras--debug: branch=default
1149 extras--debug: branch=default
1150 p1rev: 7
1150 p1rev: 7
1151 p1rev: -1
1151 p1rev: -1
1152 p1rev: 5
1152 p1rev: 5
1153 p1rev: 3
1153 p1rev: 3
1154 p1rev: 3
1154 p1rev: 3
1155 p1rev: 2
1155 p1rev: 2
1156 p1rev: 1
1156 p1rev: 1
1157 p1rev: 0
1157 p1rev: 0
1158 p1rev: -1
1158 p1rev: -1
1159 p1rev--verbose: 7
1159 p1rev--verbose: 7
1160 p1rev--verbose: -1
1160 p1rev--verbose: -1
1161 p1rev--verbose: 5
1161 p1rev--verbose: 5
1162 p1rev--verbose: 3
1162 p1rev--verbose: 3
1163 p1rev--verbose: 3
1163 p1rev--verbose: 3
1164 p1rev--verbose: 2
1164 p1rev--verbose: 2
1165 p1rev--verbose: 1
1165 p1rev--verbose: 1
1166 p1rev--verbose: 0
1166 p1rev--verbose: 0
1167 p1rev--verbose: -1
1167 p1rev--verbose: -1
1168 p1rev--debug: 7
1168 p1rev--debug: 7
1169 p1rev--debug: -1
1169 p1rev--debug: -1
1170 p1rev--debug: 5
1170 p1rev--debug: 5
1171 p1rev--debug: 3
1171 p1rev--debug: 3
1172 p1rev--debug: 3
1172 p1rev--debug: 3
1173 p1rev--debug: 2
1173 p1rev--debug: 2
1174 p1rev--debug: 1
1174 p1rev--debug: 1
1175 p1rev--debug: 0
1175 p1rev--debug: 0
1176 p1rev--debug: -1
1176 p1rev--debug: -1
1177 p2rev: -1
1177 p2rev: -1
1178 p2rev: -1
1178 p2rev: -1
1179 p2rev: 4
1179 p2rev: 4
1180 p2rev: -1
1180 p2rev: -1
1181 p2rev: -1
1181 p2rev: -1
1182 p2rev: -1
1182 p2rev: -1
1183 p2rev: -1
1183 p2rev: -1
1184 p2rev: -1
1184 p2rev: -1
1185 p2rev: -1
1185 p2rev: -1
1186 p2rev--verbose: -1
1186 p2rev--verbose: -1
1187 p2rev--verbose: -1
1187 p2rev--verbose: -1
1188 p2rev--verbose: 4
1188 p2rev--verbose: 4
1189 p2rev--verbose: -1
1189 p2rev--verbose: -1
1190 p2rev--verbose: -1
1190 p2rev--verbose: -1
1191 p2rev--verbose: -1
1191 p2rev--verbose: -1
1192 p2rev--verbose: -1
1192 p2rev--verbose: -1
1193 p2rev--verbose: -1
1193 p2rev--verbose: -1
1194 p2rev--verbose: -1
1194 p2rev--verbose: -1
1195 p2rev--debug: -1
1195 p2rev--debug: -1
1196 p2rev--debug: -1
1196 p2rev--debug: -1
1197 p2rev--debug: 4
1197 p2rev--debug: 4
1198 p2rev--debug: -1
1198 p2rev--debug: -1
1199 p2rev--debug: -1
1199 p2rev--debug: -1
1200 p2rev--debug: -1
1200 p2rev--debug: -1
1201 p2rev--debug: -1
1201 p2rev--debug: -1
1202 p2rev--debug: -1
1202 p2rev--debug: -1
1203 p2rev--debug: -1
1203 p2rev--debug: -1
1204 p1node: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1204 p1node: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1205 p1node: 0000000000000000000000000000000000000000
1205 p1node: 0000000000000000000000000000000000000000
1206 p1node: 13207e5a10d9fd28ec424934298e176197f2c67f
1206 p1node: 13207e5a10d9fd28ec424934298e176197f2c67f
1207 p1node: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1207 p1node: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1208 p1node: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1208 p1node: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1209 p1node: 97054abb4ab824450e9164180baf491ae0078465
1209 p1node: 97054abb4ab824450e9164180baf491ae0078465
1210 p1node: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1210 p1node: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1211 p1node: 1e4e1b8f71e05681d422154f5421e385fec3454f
1211 p1node: 1e4e1b8f71e05681d422154f5421e385fec3454f
1212 p1node: 0000000000000000000000000000000000000000
1212 p1node: 0000000000000000000000000000000000000000
1213 p1node--verbose: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1213 p1node--verbose: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1214 p1node--verbose: 0000000000000000000000000000000000000000
1214 p1node--verbose: 0000000000000000000000000000000000000000
1215 p1node--verbose: 13207e5a10d9fd28ec424934298e176197f2c67f
1215 p1node--verbose: 13207e5a10d9fd28ec424934298e176197f2c67f
1216 p1node--verbose: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1216 p1node--verbose: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1217 p1node--verbose: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1217 p1node--verbose: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1218 p1node--verbose: 97054abb4ab824450e9164180baf491ae0078465
1218 p1node--verbose: 97054abb4ab824450e9164180baf491ae0078465
1219 p1node--verbose: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1219 p1node--verbose: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1220 p1node--verbose: 1e4e1b8f71e05681d422154f5421e385fec3454f
1220 p1node--verbose: 1e4e1b8f71e05681d422154f5421e385fec3454f
1221 p1node--verbose: 0000000000000000000000000000000000000000
1221 p1node--verbose: 0000000000000000000000000000000000000000
1222 p1node--debug: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1222 p1node--debug: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1223 p1node--debug: 0000000000000000000000000000000000000000
1223 p1node--debug: 0000000000000000000000000000000000000000
1224 p1node--debug: 13207e5a10d9fd28ec424934298e176197f2c67f
1224 p1node--debug: 13207e5a10d9fd28ec424934298e176197f2c67f
1225 p1node--debug: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1225 p1node--debug: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1226 p1node--debug: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1226 p1node--debug: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1227 p1node--debug: 97054abb4ab824450e9164180baf491ae0078465
1227 p1node--debug: 97054abb4ab824450e9164180baf491ae0078465
1228 p1node--debug: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1228 p1node--debug: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1229 p1node--debug: 1e4e1b8f71e05681d422154f5421e385fec3454f
1229 p1node--debug: 1e4e1b8f71e05681d422154f5421e385fec3454f
1230 p1node--debug: 0000000000000000000000000000000000000000
1230 p1node--debug: 0000000000000000000000000000000000000000
1231 p2node: 0000000000000000000000000000000000000000
1231 p2node: 0000000000000000000000000000000000000000
1232 p2node: 0000000000000000000000000000000000000000
1232 p2node: 0000000000000000000000000000000000000000
1233 p2node: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1233 p2node: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1234 p2node: 0000000000000000000000000000000000000000
1234 p2node: 0000000000000000000000000000000000000000
1235 p2node: 0000000000000000000000000000000000000000
1235 p2node: 0000000000000000000000000000000000000000
1236 p2node: 0000000000000000000000000000000000000000
1236 p2node: 0000000000000000000000000000000000000000
1237 p2node: 0000000000000000000000000000000000000000
1237 p2node: 0000000000000000000000000000000000000000
1238 p2node: 0000000000000000000000000000000000000000
1238 p2node: 0000000000000000000000000000000000000000
1239 p2node: 0000000000000000000000000000000000000000
1239 p2node: 0000000000000000000000000000000000000000
1240 p2node--verbose: 0000000000000000000000000000000000000000
1240 p2node--verbose: 0000000000000000000000000000000000000000
1241 p2node--verbose: 0000000000000000000000000000000000000000
1241 p2node--verbose: 0000000000000000000000000000000000000000
1242 p2node--verbose: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1242 p2node--verbose: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1243 p2node--verbose: 0000000000000000000000000000000000000000
1243 p2node--verbose: 0000000000000000000000000000000000000000
1244 p2node--verbose: 0000000000000000000000000000000000000000
1244 p2node--verbose: 0000000000000000000000000000000000000000
1245 p2node--verbose: 0000000000000000000000000000000000000000
1245 p2node--verbose: 0000000000000000000000000000000000000000
1246 p2node--verbose: 0000000000000000000000000000000000000000
1246 p2node--verbose: 0000000000000000000000000000000000000000
1247 p2node--verbose: 0000000000000000000000000000000000000000
1247 p2node--verbose: 0000000000000000000000000000000000000000
1248 p2node--verbose: 0000000000000000000000000000000000000000
1248 p2node--verbose: 0000000000000000000000000000000000000000
1249 p2node--debug: 0000000000000000000000000000000000000000
1249 p2node--debug: 0000000000000000000000000000000000000000
1250 p2node--debug: 0000000000000000000000000000000000000000
1250 p2node--debug: 0000000000000000000000000000000000000000
1251 p2node--debug: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1251 p2node--debug: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1252 p2node--debug: 0000000000000000000000000000000000000000
1252 p2node--debug: 0000000000000000000000000000000000000000
1253 p2node--debug: 0000000000000000000000000000000000000000
1253 p2node--debug: 0000000000000000000000000000000000000000
1254 p2node--debug: 0000000000000000000000000000000000000000
1254 p2node--debug: 0000000000000000000000000000000000000000
1255 p2node--debug: 0000000000000000000000000000000000000000
1255 p2node--debug: 0000000000000000000000000000000000000000
1256 p2node--debug: 0000000000000000000000000000000000000000
1256 p2node--debug: 0000000000000000000000000000000000000000
1257 p2node--debug: 0000000000000000000000000000000000000000
1257 p2node--debug: 0000000000000000000000000000000000000000
1258
1258
Filters work:

  $ hg log --template '{author|domain}\n'
  
  hostname
  
  
  
  
  place
  place
  hostname

  $ hg log --template '{author|person}\n'
  test
  User Name
  person
  person
  person
  person
  other
  A. N. Other
  User Name

  $ hg log --template '{author|user}\n'
  test
  user
  person
  person
  person
  person
  other
  other
  user

  $ hg log --template '{date|date}\n'
  Wed Jan 01 10:01:00 2020 +0000
  Mon Jan 12 13:46:40 1970 +0000
  Sun Jan 18 08:40:01 1970 +0000
  Sun Jan 18 08:40:00 1970 +0000
  Sat Jan 17 04:53:20 1970 +0000
  Fri Jan 16 01:06:40 1970 +0000
  Wed Jan 14 21:20:00 1970 +0000
  Tue Jan 13 17:33:20 1970 +0000
  Mon Jan 12 13:46:40 1970 +0000

  $ hg log --template '{date|isodate}\n'
  2020-01-01 10:01 +0000
  1970-01-12 13:46 +0000
  1970-01-18 08:40 +0000
  1970-01-18 08:40 +0000
  1970-01-17 04:53 +0000
  1970-01-16 01:06 +0000
  1970-01-14 21:20 +0000
  1970-01-13 17:33 +0000
  1970-01-12 13:46 +0000

  $ hg log --template '{date|isodatesec}\n'
  2020-01-01 10:01:00 +0000
  1970-01-12 13:46:40 +0000
  1970-01-18 08:40:01 +0000
  1970-01-18 08:40:00 +0000
  1970-01-17 04:53:20 +0000
  1970-01-16 01:06:40 +0000
  1970-01-14 21:20:00 +0000
  1970-01-13 17:33:20 +0000
  1970-01-12 13:46:40 +0000

  $ hg log --template '{date|rfc822date}\n'
  Wed, 01 Jan 2020 10:01:00 +0000
  Mon, 12 Jan 1970 13:46:40 +0000
  Sun, 18 Jan 1970 08:40:01 +0000
  Sun, 18 Jan 1970 08:40:00 +0000
  Sat, 17 Jan 1970 04:53:20 +0000
  Fri, 16 Jan 1970 01:06:40 +0000
  Wed, 14 Jan 1970 21:20:00 +0000
  Tue, 13 Jan 1970 17:33:20 +0000
  Mon, 12 Jan 1970 13:46:40 +0000

  $ hg log --template '{desc|firstline}\n'
  third
  second
  merge
  new head
  new branch
  no user, no domain
  no person
  other 1
  line 1

  $ hg log --template '{node|short}\n'
  95c24699272e
  29114dbae42b
  d41e714fe50d
  13207e5a10d9
  bbe44766e73d
  10e46f2dcbf4
  97054abb4ab8
  b608e9d1a3f0
  1e4e1b8f71e0

  $ hg log --template '<changeset author="{author|xmlescape}"/>\n'
  <changeset author="test"/>
  <changeset author="User Name &lt;user@hostname&gt;"/>
  <changeset author="person"/>
  <changeset author="person"/>
  <changeset author="person"/>
  <changeset author="person"/>
  <changeset author="other@place"/>
  <changeset author="A. N. Other &lt;other@place&gt;"/>
  <changeset author="User Name &lt;user@hostname&gt;"/>

  $ hg log --template '{rev}: {children}\n'
  8:
  7: 8:95c24699272e
  6:
  5: 6:d41e714fe50d
  4: 6:d41e714fe50d
  3: 4:bbe44766e73d 5:13207e5a10d9
  2: 3:10e46f2dcbf4
  1: 2:97054abb4ab8
  0: 1:b608e9d1a3f0

Formatnode filter works:

  $ hg -q log -r 0 --template '{node|formatnode}\n'
  1e4e1b8f71e0

  $ hg log -r 0 --template '{node|formatnode}\n'
  1e4e1b8f71e0

  $ hg -v log -r 0 --template '{node|formatnode}\n'
  1e4e1b8f71e0

  $ hg --debug log -r 0 --template '{node|formatnode}\n'
  1e4e1b8f71e05681d422154f5421e385fec3454f

Age filter:

  $ hg log --template '{date|age}\n' > /dev/null || exit 1

  >>> from datetime import datetime, timedelta
  >>> fp = open('a', 'w')
  >>> n = datetime.now() + timedelta(366 * 7)
  >>> fp.write('%d-%d-%d 00:00' % (n.year, n.month, n.day))
  >>> fp.close()
  $ hg add a
  $ hg commit -m future -d "`cat a`"

  $ hg log -l1 --template '{date|age}\n'
  7 years from now

Error on syntax:

  $ echo 'x = "f' >> t
  $ hg log
  abort: t:3: unmatched quotes
  [255]

Behind the scenes, this will throw TypeError

  $ hg log -l 3 --template '{date|obfuscate}\n'
  abort: template filter 'obfuscate' is not compatible with keyword 'date'
  [255]

Behind the scenes, this will throw a ValueError

  $ hg log -l 3 --template 'line: {desc|shortdate}\n'
  abort: template filter 'shortdate' is not compatible with keyword 'desc'
  [255]

Behind the scenes, this will throw AttributeError

  $ hg log -l 3 --template 'line: {date|escape}\n'
  abort: template filter 'escape' is not compatible with keyword 'date'
  [255]

Behind the scenes, this will throw ValueError

  $ hg tip --template '{author|email|date}\n'
  abort: template filter 'datefilter' is not compatible with keyword 'author'
  [255]

An error is thrown if a template function doesn't exist

  $ hg tip --template '{foo()}\n'
  hg: parse error: unknown function 'foo'
  [255]

  $ cd ..
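
The filter tests above rely on `{keyword|filter}` piping a keyword's value
through one or more string filters.  As a rough illustration (a minimal
sketch, not Mercurial's actual implementation -- the function names here are
hypothetical stand-ins), the `person` and `domain` filters and the pipe
composition behave roughly like this:

```python
import re

def person(author):
    """Name before the email, as in {author|person}: 'User Name <u@h>' -> 'User Name'."""
    if '@' not in author:
        return author
    stripped = re.sub(r'\s*<[^>]*>$', '', author)
    if '@' in stripped:  # bare address like 'other@place'
        return stripped.split('@')[0]
    return stripped

def domain(author):
    """Host part of the email, as in {author|domain}; empty if there is none."""
    m = re.search(r'@([^> ]+)>?$', author)
    return m.group(1) if m else ''

def apply_filters(value, filters):
    # A template like {author|person} pipes the keyword through each filter
    # in turn, left to right.
    for f in filters:
        value = f(value)
    return value

print(apply_filters('User Name <user@hostname>', [person]))  # User Name
print(apply_filters('A. N. Other <other@place>', [domain]))  # place
```

These toy filters reproduce the outputs seen above for the sample authors,
but real filters also handle encodings and edge cases this sketch ignores.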


latesttag:

  $ hg init latesttag
  $ cd latesttag

  $ echo a > file
  $ hg ci -Am a -d '0 0'
  adding file

  $ echo b >> file
  $ hg ci -m b -d '1 0'

  $ echo c >> head1
  $ hg ci -Am h1c -d '2 0'
  adding head1

  $ hg update -q 1
  $ echo d >> head2
  $ hg ci -Am h2d -d '3 0'
  adding head2
  created new head

  $ echo e >> head2
  $ hg ci -m h2e -d '4 0'

  $ hg merge -q
  $ hg ci -m merge -d '5 -3600'

No tag set:

  $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
  5: null+5
  4: null+4
  3: null+3
  2: null+3
  1: null+2
  0: null+1

One common tag: longest path wins:

  $ hg tag -r 1 -m t1 -d '6 0' t1
  $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
  6: t1+4
  5: t1+3
  4: t1+2
  3: t1+1
  2: t1+1
  1: t1+0
  0: null+1

One ancestor tag: more recent wins:

  $ hg tag -r 2 -m t2 -d '7 0' t2
  $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
  7: t2+3
  6: t2+2
  5: t2+1
  4: t1+2
  3: t1+1
  2: t2+0
  1: t1+0
  0: null+1

Two branch tags: more recent wins:

  $ hg tag -r 3 -m t3 -d '8 0' t3
  $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
  8: t3+5
  7: t3+4
  6: t3+3
  5: t3+2
  4: t3+1
  3: t3+0
  2: t2+0
  1: t1+0
  0: null+1

Merged tag overrides:

  $ hg tag -r 5 -m t5 -d '9 0' t5
  $ hg tag -r 3 -m at3 -d '10 0' at3
  $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
  10: t5+5
  9: t5+4
  8: t5+3
  7: t5+2
  6: t5+1
  5: t5+0
  4: at3:t3+1
  3: at3:t3+0
  2: t2+0
  1: t1+0
  0: null+1

  $ cd ..
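
The rule these cases exercise can be sketched outside Mercurial.  The
following toy (not Mercurial's implementation; the graph and dates are made
up for illustration) walks back from a revision, stops at each tagged
ancestor, lets the most recently created tag win, and reports the longest
path back to that tag -- the "more recent wins" and "longest path wins"
behavior shown above.  The naive DFS is exponential on large graphs and is
only meant to make the rule concrete:

```python
# Toy revision graph: rev -> list of parent revs.
parents = {0: [], 1: [0], 2: [1], 3: [2], 4: [3], 5: [3], 6: [4, 5]}
tags = {1: ('t1', 6), 2: ('t2', 7)}  # rev -> (tag name, tag creation date)

def latesttag(rev):
    candidates = {}  # tag name -> (tag date, longest distance seen)

    def walk(r, dist):
        if r in tags:
            # Stop at a tagged ancestor; keep the longest path to it.
            name, date = tags[r]
            if name not in candidates or dist > candidates[name][1]:
                candidates[name] = (date, dist)
        elif not parents[r]:
            # Untagged root: fall back to the "null" pseudo-tag.
            if 'null' not in candidates or dist + 1 > candidates['null'][1]:
                candidates['null'] = (-1, dist + 1)
        else:
            for p in parents[r]:
                walk(p, dist + 1)

    walk(rev, 0)
    # Most recent tag date wins; report that tag's longest distance.
    name, (date, dist) = max(candidates.items(), key=lambda kv: kv[1][0])
    return name, dist

print(latesttag(6))  # ('t2', 3): t2 is newer than t1, reached in 3 steps
```

In this toy graph, revision 6 reaches t2 (date 7) before t1 (date 6) can be
considered, matching the "more recent wins" cases in the transcript.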
1546
1546
1547
1547
1548 Style path expansion: issue1948 - ui.style option doesn't work on OSX
1548 Style path expansion: issue1948 - ui.style option doesn't work on OSX
1549 if it is a relative path
1549 if it is a relative path
1550
1550
1551 $ mkdir -p home/styles
1551 $ mkdir -p home/styles
1552
1552
1553 $ cat > home/styles/teststyle <<EOF
1553 $ cat > home/styles/teststyle <<EOF
1554 > changeset = 'test {rev}:{node|short}\n'
1554 > changeset = 'test {rev}:{node|short}\n'
1555 > EOF
1555 > EOF
1556
1556
1557 $ HOME=`pwd`/home; export HOME
1557 $ HOME=`pwd`/home; export HOME
1558
1558
1559 $ cat > latesttag/.hg/hgrc <<EOF
1559 $ cat > latesttag/.hg/hgrc <<EOF
1560 > [ui]
1560 > [ui]
1561 > style = ~/styles/teststyle
1561 > style = ~/styles/teststyle
1562 > EOF
1562 > EOF
1563
1563
1564 $ hg -R latesttag tip
1564 $ hg -R latesttag tip
1565 test 10:9b4a630e5f5f
1565 test 10:9b4a630e5f5f
1566
1566
1567 Test recursive showlist template (issue1989):
1567 Test recursive showlist template (issue1989):
1568
1568
1569 $ cat > style1989 <<EOF
1569 $ cat > style1989 <<EOF
1570 > changeset = '{file_mods}{manifest}{extras}'
1570 > changeset = '{file_mods}{manifest}{extras}'
1571 > file_mod = 'M|{author|person}\n'
1571 > file_mod = 'M|{author|person}\n'
1572 > manifest = '{rev},{author}\n'
1572 > manifest = '{rev},{author}\n'
1573 > extra = '{key}: {author}\n'
1573 > extra = '{key}: {author}\n'
1574 > EOF
1574 > EOF
1575
1575
  $ hg -R latesttag log -r tip --style=style1989
  M|test
  10,test
  branch: test

Test new-style inline templating:

  $ hg log -R latesttag -r tip --template 'modified files: {file_mods % " {file}\n"}\n'
  modified files: .hgtags

Test the sub function of templating for expansion:

  $ hg log -R latesttag -r 10 --template '{sub("[0-9]", "x", "{rev}")}\n'
  xx

Test the strip function with chars specified:

  $ hg log -R latesttag --template '{desc}\n'
  at3
  t5
  t3
  t2
  t1
  merge
  h2e
  h2d
  h1c
  b
  a

  $ hg log -R latesttag --template '{strip(desc, "te")}\n'
  at3
  5
  3
  2
  1
  merg
  h2
  h2d
  h1c
  b
  a

Test date format:

  $ hg log -R latesttag --template 'date: {date(date, "%y %m %d %S %z")}\n'
  date: 70 01 01 10 +0000
  date: 70 01 01 09 +0000
  date: 70 01 01 08 +0000
  date: 70 01 01 07 +0000
  date: 70 01 01 06 +0000
  date: 70 01 01 05 +0100
  date: 70 01 01 04 +0000
  date: 70 01 01 03 +0000
  date: 70 01 01 02 +0000
  date: 70 01 01 01 +0000
  date: 70 01 01 00 +0000

Test string escaping:

  $ hg log -R latesttag -r 0 --template '>\n<>\\n<{if(rev, "[>\n<>\\n<]")}>\n<>\\n<\n'
  >
  <>\n<[>
  <>\n<]>
  <>\n<

"string-escape"-ed "\x5c\x786e" becomes r"\x6e" (once) or r"n" (twice)

  $ hg log -R a -r 0 --template '{if("1", "\x5c\x786e", "NG")}\n'
  \x6e
  $ hg log -R a -r 0 --template '{if("1", r"\x5c\x786e", "NG")}\n'
  \x5c\x786e
  $ hg log -R a -r 0 --template '{if("", "NG", "\x5c\x786e")}\n'
  \x6e
  $ hg log -R a -r 0 --template '{if("", "NG", r"\x5c\x786e")}\n'
  \x5c\x786e

  $ hg log -R a -r 2 --template '{ifeq("no perso\x6e", desc, "\x5c\x786e", "NG")}\n'
  \x6e
  $ hg log -R a -r 2 --template '{ifeq(r"no perso\x6e", desc, "NG", r"\x5c\x786e")}\n'
  \x5c\x786e
  $ hg log -R a -r 2 --template '{ifeq(desc, "no perso\x6e", "\x5c\x786e", "NG")}\n'
  \x6e
  $ hg log -R a -r 2 --template '{ifeq(desc, r"no perso\x6e", "NG", r"\x5c\x786e")}\n'
  \x5c\x786e

  $ hg log -R a -r 8 --template '{join(files, "\n")}\n'
  fourth
  second
  third
  $ hg log -R a -r 8 --template '{join(files, r"\n")}\n'
  fourth\nsecond\nthird

  $ hg log -R a -r 2 --template '{rstdoc("1st\n\n2nd", "htm\x6c")}'
  <p>
  1st
  </p>
  <p>
  2nd
  </p>
  $ hg log -R a -r 2 --template '{rstdoc(r"1st\n\n2nd", "html")}'
  <p>
  1st\n\n2nd
  </p>
  $ hg log -R a -r 2 --template '{rstdoc("1st\n\n2nd", r"htm\x6c")}'
  1st

  2nd

  $ hg log -R a -r 2 --template '{strip(desc, "\x6e")}\n'
  o perso
  $ hg log -R a -r 2 --template '{strip(desc, r"\x6e")}\n'
  no person
  $ hg log -R a -r 2 --template '{strip("no perso\x6e", "\x6e")}\n'
  o perso
  $ hg log -R a -r 2 --template '{strip(r"no perso\x6e", r"\x6e")}\n'
  no perso

  $ hg log -R a -r 2 --template '{sub("\\x6e", "\x2d", desc)}\n'
  -o perso-
  $ hg log -R a -r 2 --template '{sub(r"\\x6e", "-", desc)}\n'
  no person
  $ hg log -R a -r 2 --template '{sub("n", r"\x2d", desc)}\n'
  \x2do perso\x2d
  $ hg log -R a -r 2 --template '{sub("n", "\x2d", "no perso\x6e")}\n'
  -o perso-
  $ hg log -R a -r 2 --template '{sub("n", r"\x2d", r"no perso\x6e")}\n'
  \x2do perso\x6e

  $ hg log -R a -r 8 --template '{files % "{file}\n"}'
  fourth
  second
  third
  $ hg log -R a -r 8 --template '{files % r"{file}\n"}\n'
  fourth\nsecond\nthird\n

Test string escaping in nested expression:

  $ hg log -R a -r 8 --template '{ifeq(r"\x6e", if("1", "\x5c\x786e"), join(files, "\x5c\x786e"))}\n'
  fourth\x6esecond\x6ethird
  $ hg log -R a -r 8 --template '{ifeq(if("1", r"\x6e"), "\x5c\x786e", join(files, "\x5c\x786e"))}\n'
  fourth\x6esecond\x6ethird

  $ hg log -R a -r 8 --template '{join(files, ifeq(branch, "default", "\x5c\x786e"))}\n'
  fourth\x6esecond\x6ethird
  $ hg log -R a -r 8 --template '{join(files, ifeq(branch, "default", r"\x5c\x786e"))}\n'
  fourth\x5c\x786esecond\x5c\x786ethird

  $ hg log -R a -r 3:4 --template '{rev}:{sub(if("1", "\x6e"), ifeq(branch, "foo", r"\x5c\x786e", "\x5c\x786e"), desc)}\n'
  3:\x6eo user, \x6eo domai\x6e
  4:\x5c\x786eew bra\x5c\x786ech

Test recursive evaluation:

  $ hg init r
  $ cd r
  $ echo a > a
  $ hg ci -Am '{rev}'
  adding a
  $ hg log -r 0 --template '{if(rev, desc)}\n'
  {rev}
  $ hg log -r 0 --template '{if(rev, "{author} {rev}")}\n'
  test 0

  $ hg branch -q 'text.{rev}'
  $ echo aa >> aa
  $ hg ci -u '{node|short}' -m 'desc to be wrapped desc to be wrapped'

  $ hg log -l1 --template '{fill(desc, "20", author, branch)}'
  {node|short}desc to
  text.{rev}be wrapped
  text.{rev}desc to be
  text.{rev}wrapped (no-eol)
  $ hg log -l1 --template '{fill(desc, "20", "{node|short}:", "text.{rev}:")}'
  bcc7ff960b8e:desc to
  text.1:be wrapped
  text.1:desc to be
  text.1:wrapped (no-eol)

  $ hg log -l 1 --template '{sub(r"[0-9]", "-", author)}'
  {node|short} (no-eol)
  $ hg log -l 1 --template '{sub(r"[0-9]", "-", "{node|short}")}'
  bcc-ff---b-e (no-eol)

  $ cat >> .hg/hgrc <<EOF
  > [extensions]
  > color=
  > [color]
  > mode=ansi
  > text.{rev} = red
  > text.1 = green
  > EOF
  $ hg log --color=always -l 1 --template '{label(branch, "text\n")}'
  \x1b[0;31mtext\x1b[0m (esc)
  $ hg log --color=always -l 1 --template '{label("text.{rev}", "text\n")}'
  \x1b[0;32mtext\x1b[0m (esc)

Test branches inside if statement:

  $ hg log -r 0 --template '{if(branches, "yes", "no")}\n'
  no

Test shortest(node) function:

  $ echo b > b
  $ hg ci -qAm b
  $ hg log --template '{shortest(node)}\n'
  e777
  bcc7
  f776
  $ hg log --template '{shortest(node, 10)}\n'
  e777603221
  bcc7ff960b
  f7769ec2ab

Test pad function

  $ hg log --template '{pad(rev, 20)} {author|user}\n'
  2 test
  1 {node|short}
  0 test

  $ hg log --template '{pad(rev, 20, " ", True)} {author|user}\n'
  2 test
  1 {node|short}
  0 test

  $ hg log --template '{pad(rev, 20, "-", False)} {author|user}\n'
  2------------------- test
  1------------------- {node|short}
  0------------------- test

Test ifcontains function

  $ hg log --template '{rev} {ifcontains("a", file_adds, "added a", "did not add a")}\n'
  2 did not add a
  1 did not add a
  0 added a

Test revset function

  $ hg log --template '{rev} {ifcontains(rev, revset("."), "current rev", "not current rev")}\n'
  2 current rev
  1 not current rev
  0 not current rev

  $ hg log --template '{rev} Parents: {revset("parents(%s)", rev)}\n'
  2 Parents: 1
  1 Parents: 0
  0 Parents:

  $ hg log --template 'Rev: {rev}\n{revset("::%s", rev) % "Ancestor: {revision}\n"}\n'
  Rev: 2
  Ancestor: 0
  Ancestor: 1
  Ancestor: 2

  Rev: 1
  Ancestor: 0
  Ancestor: 1

  Rev: 0
  Ancestor: 0

Test current bookmark templating

  $ hg book foo
  $ hg book bar
  $ hg log --template "{rev} {bookmarks % '{bookmark}{ifeq(bookmark, current, \"*\")} '}\n"
  2 bar* foo
  1
  0

Test stringify on sub expressions

  $ cd ..
  $ hg log -R a -r 8 --template '{join(files, if("1", if("1", ", ")))}\n'
  fourth, second, third
  $ hg log -R a -r 8 --template '{strip(if("1", if("1", "-abc-")), if("1", if("1", "-")))}\n'
  abc

@@ -1,468 +1,468 b''
  $ cat >> $HGRCPATH <<EOF
  > [extensions]
  > convert=
  > [convert]
  > hg.saverev=False
  > EOF
  $ hg help convert
  hg convert [OPTION]... SOURCE [DEST [REVMAP]]

  convert a foreign SCM repository to a Mercurial one.

  Accepted source formats [identifiers]:

  - Mercurial [hg]
  - CVS [cvs]
  - Darcs [darcs]
  - git [git]
  - Subversion [svn]
  - Monotone [mtn]
  - GNU Arch [gnuarch]
  - Bazaar [bzr]
  - Perforce [p4]

  Accepted destination formats [identifiers]:

  - Mercurial [hg]
  - Subversion [svn] (history on branches is not preserved)

  If no revision is given, all revisions will be converted. Otherwise,
  convert will only import up to the named revision (given in a format
  understood by the source).

  If no destination directory name is specified, it defaults to the basename
  of the source with "-hg" appended. If the destination repository doesn't
  exist, it will be created.

  By default, all sources except Mercurial will use --branchsort. Mercurial
  uses --sourcesort to preserve original revision numbers order. Sort modes
  have the following effects:

  --branchsort  convert from parent to child revision when possible, which
                means branches are usually converted one after the other.
                It generates more compact repositories.
  --datesort    sort revisions by date. Converted repositories have good-
                looking changelogs but are often an order of magnitude
                larger than the same ones generated by --branchsort.
  --sourcesort  try to preserve source revisions order, only supported by
                Mercurial sources.
  --closesort   try to move closed revisions as close as possible to parent
                branches, only supported by Mercurial sources.

  If "REVMAP" isn't given, it will be put in a default location
  ("<dest>/.hg/shamap" by default). The "REVMAP" is a simple text file that
  maps each source commit ID to the destination ID for that revision, like
  so:

  <source ID> <destination ID>

  If the file doesn't exist, it's automatically created. It's updated on
  each commit copied, so "hg convert" can be interrupted and can be run
  repeatedly to copy new commits.

  The authormap is a simple text file that maps each source commit author to
  a destination commit author. It is handy for source SCMs that use unix
  logins to identify authors (e.g.: CVS). One line per author mapping and
  the line format is:

  source author = destination author

  Empty lines and lines starting with a "#" are ignored.
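As an aside, the authormap format described above can be sketched with a throwaway file; the logins, identities, and file name here are hypothetical, not taken from the test:

```shell
# Hypothetical authormap in the "source author = destination author" format;
# comment lines and blank lines are ignored by convert.
cat > authormap.txt <<'EOF'
# CVS-style unix logins mapped to full identities (made-up examples)
jdoe = John Doe <jdoe@example.com>
asmith = Alice Smith <alice@example.com>
EOF
# it would be passed to convert roughly as:
#   hg convert --authormap authormap.txt SOURCE DEST
grep -v '^#' authormap.txt
```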

  The filemap is a file that allows filtering and remapping of files and
  directories. Each line can contain one of the following directives:

  include path/to/file-or-dir

  exclude path/to/file-or-dir

  rename path/to/source path/to/destination

  Comment lines start with "#". A specified path matches if it equals the
  full relative name of a file or one of its parent directories. The
  "include" or "exclude" directive with the longest matching path applies,
  so line order does not matter.

  The "include" directive causes a file, or all files under a directory, to
  be included in the destination repository. The default if there are no
  "include" statements is to include everything. If there are any "include"
  statements, nothing else is included. The "exclude" directive causes files
  or directories to be omitted. The "rename" directive renames a file or
  directory if it is converted. To rename from a subdirectory into the root
  of the repository, use "." as the path to rename to.
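As a sketch, the three filemap directives described above can be combined in one file; the paths and file name are hypothetical:

```shell
# Hypothetical filemap: convert only lib/, drop its generated subtree,
# and hoist src/ into the repository root.
cat > filemap.txt <<'EOF'
include lib
exclude lib/generated
rename src .
EOF
# The longest matching path wins, so lib/generated stays excluded even
# though lib is included. It would be used roughly as:
#   hg convert --filemap filemap.txt SOURCE DEST
cat filemap.txt
```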

  The splicemap is a file that allows insertion of synthetic history,
  letting you specify the parents of a revision. This is useful if you want
  to e.g. give a Subversion merge two parents, or graft two disconnected
  series of history together. Each entry contains a key, followed by a
  space, followed by one or two comma-separated values:

  key parent1, parent2

  The key is the revision ID in the source revision control system whose
  parents should be modified (same format as a key in .hg/shamap). The
  values are the revision IDs (in either the source or destination revision
  control system) that should be used as the new parents for that node. For
  example, if you have merged "release-1.0" into "trunk", then you should
  specify the revision on "trunk" as the first parent and the one on the
  "release-1.0" branch as the second.
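A minimal splicemap entry matching the "key parent1, parent2" format above might look like this; all three 40-character IDs are fabricated placeholders, not real revisions:

```shell
# Hypothetical splicemap: give one source revision two parents.
# The IDs below are placeholders for illustration only.
cat > splicemap.txt <<'EOF'
1111111111111111111111111111111111111111 2222222222222222222222222222222222222222, 3333333333333333333333333333333333333333
EOF
# it would be passed to convert roughly as:
#   hg convert --splicemap splicemap.txt SOURCE DEST
cat splicemap.txt
```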

  The branchmap is a file that allows you to rename a branch when it is
  being brought in from whatever external repository. When used in
  conjunction with a splicemap, it allows for a powerful combination to help
  fix even the most badly mismanaged repositories and turn them into nicely
  structured Mercurial repositories. The branchmap contains lines of the
  form:

  original_branch_name new_branch_name

  where "original_branch_name" is the name of the branch in the source
  repository, and "new_branch_name" is the name of the branch is the
  destination repository. No whitespace is allowed in the branch names. This
  can be used to (for instance) move code in one repository from "default"
  to a named branch.
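The branchmap format above can likewise be sketched with a hypothetical file; the branch names are invented and, per the rule above, contain no whitespace:

```shell
# Hypothetical branchmap in the "original_branch_name new_branch_name" format.
cat > branchmap.txt <<'EOF'
default imported-default
releases stable
EOF
# it would be passed to convert roughly as:
#   hg convert --branchmap branchmap.txt SOURCE DEST
cat branchmap.txt
```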
124
124
125 The closemap is a file that allows closing of a branch. This is useful if
125 The closemap is a file that allows closing of a branch. This is useful if
126 you want to close a branch. Each entry contains a revision or hash
126 you want to close a branch. Each entry contains a revision or hash
127 separated by white space.
127 separated by white space.
128
128
129 The tagpmap is a file that exactly analogous to the branchmap. This will
129 The tagmap is a file that exactly analogous to the branchmap. This will
130 rename tags on the fly and prevent the 'update tags' commit usually found
130 rename tags on the fly and prevent the 'update tags' commit usually found
131 at the end of a convert process.
131 at the end of a convert process.
132
132
133 Mercurial Source
133 Mercurial Source
134 ################
134 ################
135
135
136 The Mercurial source recognizes the following configuration options, which
136 The Mercurial source recognizes the following configuration options, which
137 you can set on the command line with "--config":
137 you can set on the command line with "--config":
138
138
139 convert.hg.ignoreerrors
139 convert.hg.ignoreerrors
140 ignore integrity errors when reading. Use it to fix
140 ignore integrity errors when reading. Use it to fix
141 Mercurial repositories with missing revlogs, by converting
141 Mercurial repositories with missing revlogs, by converting
142 from and to Mercurial. Default is False.
142 from and to Mercurial. Default is False.
143 convert.hg.saverev
143 convert.hg.saverev
144 store original revision ID in changeset (forces target IDs
144 store original revision ID in changeset (forces target IDs
145 to change). It takes a boolean argument and defaults to
145 to change). It takes a boolean argument and defaults to
146 False.
146 False.
147 convert.hg.revs
147 convert.hg.revs
148 revset specifying the source revisions to convert.
148 revset specifying the source revisions to convert.
149
149
150 CVS Source
150 CVS Source
151 ##########
151 ##########
152
152
153 CVS source will use a sandbox (i.e. a checked-out copy) from CVS to
153 CVS source will use a sandbox (i.e. a checked-out copy) from CVS to
154 indicate the starting point of what will be converted. Direct access to
154 indicate the starting point of what will be converted. Direct access to
155 the repository files is not needed, unless of course the repository is
155 the repository files is not needed, unless of course the repository is
156 ":local:". The conversion uses the top level directory in the sandbox to
156 ":local:". The conversion uses the top level directory in the sandbox to
157 find the CVS repository, and then uses CVS rlog commands to find files to
157 find the CVS repository, and then uses CVS rlog commands to find files to
158 convert. This means that unless a filemap is given, all files under the
158 convert. This means that unless a filemap is given, all files under the
159 starting directory will be converted, and that any directory
159 starting directory will be converted, and that any directory
160 reorganization in the CVS sandbox is ignored.
160 reorganization in the CVS sandbox is ignored.
161
161
162 The following options can be used with "--config":
162 The following options can be used with "--config":
163
163
164 convert.cvsps.cache
164 convert.cvsps.cache
165 Set to False to disable remote log caching, for testing and
165 Set to False to disable remote log caching, for testing and
166 debugging purposes. Default is True.
166 debugging purposes. Default is True.
167 convert.cvsps.fuzz
167 convert.cvsps.fuzz
168 Specify the maximum time (in seconds) that is allowed
168 Specify the maximum time (in seconds) that is allowed
169 between commits with identical user and log message in a
169 between commits with identical user and log message in a
170 single changeset. When very large files were checked in as
170 single changeset. When very large files were checked in as
171 part of a changeset then the default may not be long enough.
171 part of a changeset then the default may not be long enough.
172 The default is 60.
172 The default is 60.
173 convert.cvsps.mergeto
173 convert.cvsps.mergeto
174 Specify a regular expression to which commit log messages
174 Specify a regular expression to which commit log messages
175 are matched. If a match occurs, then the conversion process
175 are matched. If a match occurs, then the conversion process
176 will insert a dummy revision merging the branch on which
176 will insert a dummy revision merging the branch on which
177 this log message occurs to the branch indicated in the
177 this log message occurs to the branch indicated in the
178 regex. Default is "{{mergetobranch ([-\w]+)}}"
178 regex. Default is "{{mergetobranch ([-\w]+)}}"
179 convert.cvsps.mergefrom
179 convert.cvsps.mergefrom
180 Specify a regular expression to which commit log messages
180 Specify a regular expression to which commit log messages
181 are matched. If a match occurs, then the conversion process
181 are matched. If a match occurs, then the conversion process
182 will add the most recent revision on the branch indicated in
182 will add the most recent revision on the branch indicated in
183 the regex as the second parent of the changeset. Default is
183 the regex as the second parent of the changeset. Default is
184 "{{mergefrombranch ([-\w]+)}}"
184 "{{mergefrombranch ([-\w]+)}}"
convert.localtimezone
    use local time (as determined by the TZ environment
    variable) for changeset date/times. The default is False
    (use UTC).
hooks.cvslog
    Specify a Python function to be called at the end of
    gathering the CVS log. The function is passed a list with
    the log entries, and can modify the entries in-place, or add
    or delete them.
hooks.cvschangesets
    Specify a Python function to be called after the changesets
    are calculated from the CVS log. The function is passed a
    list with the changeset entries, and can modify the
    changesets in-place, or add or delete them.
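Per the description above, these hooks are ordinary Python functions that receive a mutable list and edit it in place. A minimal sketch follows; the module and function names are hypothetical, and treating each log entry as an object with a `comment` attribute is an assumption made for illustration:

```python
# Hypothetical hook module "cvshooks.py", enabled with something like:
#   [hooks]
#   cvslog = cvshooks.filterlog
def filterlog(entries):
    """Drop CVS log entries whose message is tagged as noise.

    `entries` is the mutable list passed by the hook machinery; the
    slice assignment mutates it in place, as the help text requires.
    """
    entries[:] = [e for e in entries
                  if "IGNORE" not in getattr(e, "comment", "")]
```

A `cvschangesets` hook has the same single-list shape, operating on changeset entries instead of raw log entries.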

An additional "debugcvsps" Mercurial command allows the builtin changeset
merging code to be run without doing a conversion. Its parameters and
output are similar to that of cvsps 2.1. Please see the command help for
more details.

Subversion Source
#################

Subversion source detects classical trunk/branches/tags layouts. By
default, the supplied "svn://repo/path/" source URL is converted as a
single branch. If "svn://repo/path/trunk" exists it replaces the default
branch. If "svn://repo/path/branches" exists, its subdirectories are
listed as possible branches. If "svn://repo/path/tags" exists, it is
looked for tags referencing converted branches. Default "trunk",
"branches" and "tags" values can be overridden with following options. Set
them to paths relative to the source URL, or leave them blank to disable
auto detection.

The following options can be set with "--config":

convert.svn.branches
    specify the directory containing branches. The default is
    "branches".
convert.svn.tags
    specify the directory containing tags. The default is
    "tags".
convert.svn.trunk
    specify the name of the trunk branch. The default is
    "trunk".
convert.localtimezone
    use local time (as determined by the TZ environment
    variable) for changeset date/times. The default is False
    (use UTC).

Source history can be retrieved starting at a specific revision, instead
of being integrally converted. Only single branch conversions are
supported.

convert.svn.startrev
    specify start Subversion revision number. The default is 0.

Perforce Source
###############

The Perforce (P4) importer can be given a p4 depot path or a client
specification as source. It will convert all files in the source to a flat
Mercurial repository, ignoring labels, branches and integrations. Note
that when a depot path is given you then usually should specify a target
directory, because otherwise the target may be named "...-hg".

It is possible to limit the amount of source history to be converted by
specifying an initial Perforce revision:

convert.p4.startrev
    specify initial Perforce revision (a Perforce changelist
    number).

Mercurial Destination
#####################

The following options are supported:

convert.hg.clonebranches
    dispatch source branches in separate clones. The default is
    False.
convert.hg.tagsbranch
    branch name for tag revisions, defaults to "default".
convert.hg.usebranchnames
    preserve branch names. The default is True.

options:

 -s --source-type TYPE source repository type
 -d --dest-type TYPE   destination repository type
 -r --rev REV          import up to source revision REV
 -A --authormap FILE   remap usernames using this file
    --filemap FILE     remap file names using contents of file
    --splicemap FILE   splice synthesized history into place
    --branchmap FILE   change branch names while converting
    --closemap FILE    closes given revs
    --tagmap FILE      change tag names while converting
    --branchsort       try to sort changesets by branches
    --datesort         try to sort changesets by date
    --sourcesort       preserve source changesets order
    --closesort        try to reorder closed revisions

use "hg -v help convert" to show the global options
$ hg init a
$ cd a
$ echo a > a
$ hg ci -d'0 0' -Ama
adding a
$ hg cp a b
$ hg ci -d'1 0' -mb
$ hg rm a
$ hg ci -d'2 0' -mc
$ hg mv b a
$ hg ci -d'3 0' -md
$ echo a >> a
$ hg ci -d'4 0' -me
$ cd ..
$ hg convert a 2>&1 | grep -v 'subversion python bindings could not be loaded'
assuming destination a-hg
initializing destination a-hg repository
scanning source...
sorting...
converting...
4 a
3 b
2 c
1 d
0 e
$ hg --cwd a-hg pull ../a
pulling from ../a
searching for changes
no changes found

conversion to existing file should fail

$ touch bogusfile
$ hg convert a bogusfile
initializing destination bogusfile repository
abort: cannot create new bundle repository
[255]

#if unix-permissions no-root

conversion to dir without permissions should fail

$ mkdir bogusdir
$ chmod 000 bogusdir

$ hg convert a bogusdir
abort: Permission denied: 'bogusdir'
[255]

user permissions should succeed

$ chmod 700 bogusdir
$ hg convert a bogusdir
initializing destination bogusdir repository
scanning source...
sorting...
converting...
4 a
3 b
2 c
1 d
0 e

#endif

test pre and post conversion actions

$ echo 'include b' > filemap
$ hg convert --debug --filemap filemap a partialb | \
> grep 'run hg'
run hg source pre-conversion action
run hg sink pre-conversion action
run hg sink post-conversion action
run hg source post-conversion action

converting empty dir should fail nicely

$ mkdir emptydir

override $PATH to ensure p4 not visible; use $PYTHON in case we're
running from a devel copy, not a temp installation

$ PATH="$BINDIR" $PYTHON "$BINDIR"/hg convert emptydir
assuming destination emptydir-hg
initializing destination emptydir-hg repository
emptydir does not look like a CVS checkout
emptydir does not look like a Git repository
emptydir does not look like a Subversion repository
emptydir is not a local Mercurial repository
emptydir does not look like a darcs repository
emptydir does not look like a monotone repository
emptydir does not look like a GNU Arch repository
emptydir does not look like a Bazaar repository
cannot find required "p4" tool
abort: emptydir: missing or unsupported repository
[255]

convert with imaginary source type

$ hg convert --source-type foo a a-foo
initializing destination a-foo repository
abort: foo: invalid source repository type
[255]

convert with imaginary sink type

$ hg convert --dest-type foo a a-foo
abort: foo: invalid destination repository type
[255]

testing: convert must not produce duplicate entries in fncache

$ hg convert a b
initializing destination b repository
scanning source...
sorting...
converting...
4 a
3 b
2 c
1 d
0 e

contents of fncache file:

$ cat b/.hg/store/fncache | sort
data/a.i
data/b.i

test bogus URL

$ hg convert -q bzr+ssh://foobar@selenic.com/baz baz
abort: bzr+ssh://foobar@selenic.com/baz: missing or unsupported repository
[255]

test revset converted() lookup

$ hg --config convert.hg.saverev=True convert a c
initializing destination c repository
scanning source...
sorting...
converting...
4 a
3 b
2 c
1 d
0 e
$ echo f > c/f
$ hg -R c ci -d'0 0' -Amf
adding f
created new head
$ hg -R c log -r "converted(09d945a62ce6)"
changeset:   1:98c3dd46a874
user:        test
date:        Thu Jan 01 00:00:01 1970 +0000
summary:     b

$ hg -R c log -r "converted()"
changeset:   0:31ed57b2037c
user:        test
date:        Thu Jan 01 00:00:00 1970 +0000
summary:     a

changeset:   1:98c3dd46a874
user:        test
date:        Thu Jan 01 00:00:01 1970 +0000
summary:     b

changeset:   2:3b9ca06ef716
user:        test
date:        Thu Jan 01 00:00:02 1970 +0000
summary:     c

changeset:   3:4e0debd37cf2
user:        test
date:        Thu Jan 01 00:00:03 1970 +0000
summary:     d

changeset:   4:9de3bc9349c5
user:        test
date:        Thu Jan 01 00:00:04 1970 +0000
summary:     e
