spelling: fixes from spell checker
Mads Kiilerich
r21024:7731a228 default


@@ -1,579 +1,579 b''
1 1 # color.py color output for the status and qseries commands
2 2 #
3 3 # Copyright (C) 2007 Kevin Christen <kevin.christen@gmail.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''colorize output from some commands
9 9
10 10 This extension modifies the status and resolve commands to add color
11 11 to their output to reflect file status, the qseries command to add
12 12 color to reflect patch status (applied, unapplied, missing), and to
13 13 diff-related commands to highlight additions, removals, diff headers,
14 14 and trailing whitespace.
15 15
16 16 Other effects in addition to color, like bold and underlined text, are
17 17 also available. By default, the terminfo database is used to find the
18 18 terminal codes used to change color and effect. If terminfo is not
19 19 available, then effects are rendered with the ECMA-48 SGR control
20 20 function (aka ANSI escape codes).
21 21
22 22 Default effects may be overridden from your configuration file::
23 23
24 24 [color]
25 25 status.modified = blue bold underline red_background
26 26 status.added = green bold
27 27 status.removed = red bold blue_background
28 28 status.deleted = cyan bold underline
29 29 status.unknown = magenta bold underline
30 30 status.ignored = black bold
31 31
32 32 # 'none' turns off all effects
33 33 status.clean = none
34 34 status.copied = none
35 35
36 36 qseries.applied = blue bold underline
37 37 qseries.unapplied = black bold
38 38 qseries.missing = red bold
39 39
40 40 diff.diffline = bold
41 41 diff.extended = cyan bold
42 42 diff.file_a = red bold
43 43 diff.file_b = green bold
44 44 diff.hunk = magenta
45 45 diff.deleted = red
46 46 diff.inserted = green
47 47 diff.changed = white
48 48 diff.trailingwhitespace = bold red_background
49 49
50 50 resolve.unresolved = red bold
51 51 resolve.resolved = green bold
52 52
53 53 bookmarks.current = green
54 54
55 55 branches.active = none
56 56 branches.closed = black bold
57 57 branches.current = green
58 58 branches.inactive = none
59 59
60 60 tags.normal = green
61 61 tags.local = black bold
62 62
63 63 rebase.rebased = blue
64 64 rebase.remaining = red bold
65 65
66 66 shelve.age = cyan
67 67 shelve.newest = green bold
68 68 shelve.name = blue bold
69 69
70 70 histedit.remaining = red bold
71 71
72 72 The available effects in terminfo mode are 'blink', 'bold', 'dim',
73 73 'inverse', 'invisible', 'italic', 'standout', and 'underline'; in
74 74 ECMA-48 mode, the options are 'bold', 'inverse', 'italic', and
75 75 'underline'. How each is rendered depends on the terminal emulator.
76 76 Some may not be available for a given terminal type, and will be
77 77 silently ignored.
78 78
79 79 Note that on some systems, terminfo mode may cause problems when using
80 80 color with the pager extension and less -R. less with the -R option
81 81 will only display ECMA-48 color codes, and terminfo mode may sometimes
82 82 emit codes that less doesn't understand. You can work around this by
83 83 either using ansi mode (or auto mode), or by using less -r (which will
84 84 pass through all terminal control codes, not just color control
85 85 codes).
86 86
87 87 Because there are only eight standard colors, this module allows you
88 88 to define color names for other color slots which might be available
89 89 for your terminal type, assuming terminfo mode. For instance::
90 90
91 91 color.brightblue = 12
92 92 color.pink = 207
93 93 color.orange = 202
94 94
95 95 to set 'brightblue' to color slot 12 (useful for 16 color terminals
96 96 that have brighter colors defined in the upper eight) and 'pink' and
97 97 'orange' to colors in 256-color xterm's default color cube. These
98 98 defined colors may then be used as any of the pre-defined eight,
99 99 including appending '_background' to set the background to that color.
100 100
101 101 By default, the color extension will use ANSI mode (or win32 mode on
102 102 Windows) if it detects a terminal. To override auto mode (to enable
103 103 terminfo mode, for example), set the following configuration option::
104 104
105 105 [color]
106 106 mode = terminfo
107 107
108 108 Any value other than 'ansi', 'win32', 'terminfo', or 'auto' will
109 109 disable color.
110 110 '''
111 111
112 112 import os
113 113
114 114 from mercurial import commands, dispatch, extensions, ui as uimod, util
115 115 from mercurial import templater, error
116 116 from mercurial.i18n import _
117 117
118 118 testedwith = 'internal'
119 119
120 120 # start and stop parameters for effects
121 121 _effects = {'none': 0, 'black': 30, 'red': 31, 'green': 32, 'yellow': 33,
122 122 'blue': 34, 'magenta': 35, 'cyan': 36, 'white': 37, 'bold': 1,
123 123 'italic': 3, 'underline': 4, 'inverse': 7,
124 124 'black_background': 40, 'red_background': 41,
125 125 'green_background': 42, 'yellow_background': 43,
126 126 'blue_background': 44, 'purple_background': 45,
127 127 'cyan_background': 46, 'white_background': 47}
128 128
129 129 def _terminfosetup(ui, mode):
130 130 '''Initialize terminfo data and the terminal if we're in terminfo mode.'''
131 131
132 132 global _terminfo_params
133 133 # If we failed to load curses, we go ahead and return.
134 134 if not _terminfo_params:
135 135 return
136 136 # Otherwise, see what the config file says.
137 137 if mode not in ('auto', 'terminfo'):
138 138 return
139 139
140 140 _terminfo_params.update((key[6:], (False, int(val)))
141 141 for key, val in ui.configitems('color')
142 142 if key.startswith('color.'))
143 143
144 144 try:
145 145 curses.setupterm()
146 146 except curses.error, e:
147 147 _terminfo_params = {}
148 148 return
149 149
150 150 for key, (b, e) in _terminfo_params.items():
151 151 if not b:
152 152 continue
153 153 if not curses.tigetstr(e):
154 154 # Most terminals don't support dim, invis, etc., so don't be
155 155 # noisy and use ui.debug().
156 156 ui.debug("no terminfo entry for %s\n" % e)
157 157 del _terminfo_params[key]
158 158 if not curses.tigetstr('setaf') or not curses.tigetstr('setab'):
159 159 # Only warn about missing terminfo entries if we explicitly asked for
160 160 # terminfo mode.
161 161 if mode == "terminfo":
162 162 ui.warn(_("no terminfo entry for setab/setaf: reverting to "
163 163 "ECMA-48 color\n"))
164 164 _terminfo_params = {}
165 165
166 166 def _modesetup(ui, coloropt):
167 167 global _terminfo_params
168 168
169 169 auto = coloropt == 'auto'
170 170 always = not auto and util.parsebool(coloropt)
171 171 if not always and not auto:
172 172 return None
173 173
174 174 formatted = always or (os.environ.get('TERM') != 'dumb' and ui.formatted())
175 175
176 176 mode = ui.config('color', 'mode', 'auto')
177 177 realmode = mode
178 178 if mode == 'auto':
179 179 if os.name == 'nt' and 'TERM' not in os.environ:
180 180 # looks like a cmd.exe console, use win32 API or nothing
181 181 realmode = 'win32'
182 182 else:
183 183 realmode = 'ansi'
184 184
185 185 if realmode == 'win32':
186 186 _terminfo_params = {}
187 187 if not w32effects:
188 188 if mode == 'win32':
189 189 # only warn if color.mode is explicitly set to win32
190 190 ui.warn(_('warning: failed to set color mode to %s\n') % mode)
191 191 return None
192 192 _effects.update(w32effects)
193 193 elif realmode == 'ansi':
194 194 _terminfo_params = {}
195 195 elif realmode == 'terminfo':
196 196 _terminfosetup(ui, mode)
197 197 if not _terminfo_params:
198 198 if mode == 'terminfo':
199 199 ## FIXME Shouldn't we return None in this case too?
200 200 # only warn if color.mode is explicitly set to terminfo
201 201 ui.warn(_('warning: failed to set color mode to %s\n') % mode)
202 202 realmode = 'ansi'
203 203 else:
204 204 return None
205 205
206 206 if always or (auto and formatted):
207 207 return realmode
208 208 return None
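The branching in _modesetup() reduces to a small decision: explicit modes pass through unchanged, while 'auto' picks win32 on a cmd.exe-style console (Windows with no TERM set) and ansi everywhere else. A minimal, dependency-free sketch of just that decision; `pick_realmode`, `osname`, and `have_term` are hypothetical stand-ins for the extension's use of `os.name` and `os.environ`:

```python
# Simplified sketch of the realmode decision in _modesetup() above.
# 'osname' and 'have_term' stand in for os.name and 'TERM' in os.environ.
def pick_realmode(mode, osname='posix', have_term=True):
    if mode != 'auto':
        return mode          # explicit modes pass through unchanged
    if osname == 'nt' and not have_term:
        return 'win32'       # looks like a cmd.exe console
    return 'ansi'            # everything else falls back to ECMA-48

print(pick_realmode('auto', osname='nt', have_term=False))  # win32
```

The real function then degrades further (e.g. terminfo falling back to ansi when setup fails), which this sketch leaves out.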
209 209
210 210 try:
211 211 import curses
212 212 # Mapping from effect name to terminfo attribute name or color number.
213 213 # This will also force-load the curses module.
214 214 _terminfo_params = {'none': (True, 'sgr0'),
215 215 'standout': (True, 'smso'),
216 216 'underline': (True, 'smul'),
217 217 'reverse': (True, 'rev'),
218 218 'inverse': (True, 'rev'),
219 219 'blink': (True, 'blink'),
220 220 'dim': (True, 'dim'),
221 221 'bold': (True, 'bold'),
222 222 'invisible': (True, 'invis'),
223 223 'italic': (True, 'sitm'),
224 224 'black': (False, curses.COLOR_BLACK),
225 225 'red': (False, curses.COLOR_RED),
226 226 'green': (False, curses.COLOR_GREEN),
227 227 'yellow': (False, curses.COLOR_YELLOW),
228 228 'blue': (False, curses.COLOR_BLUE),
229 229 'magenta': (False, curses.COLOR_MAGENTA),
230 230 'cyan': (False, curses.COLOR_CYAN),
231 231 'white': (False, curses.COLOR_WHITE)}
232 232 except ImportError:
233 233 _terminfo_params = False
234 234
235 235 _styles = {'grep.match': 'red bold',
236 236 'grep.linenumber': 'green',
237 237 'grep.rev': 'green',
238 238 'grep.change': 'green',
239 239 'grep.sep': 'cyan',
240 240 'grep.filename': 'magenta',
241 241 'grep.user': 'magenta',
242 242 'grep.date': 'magenta',
243 243 'bookmarks.current': 'green',
244 244 'branches.active': 'none',
245 245 'branches.closed': 'black bold',
246 246 'branches.current': 'green',
247 247 'branches.inactive': 'none',
248 248 'diff.changed': 'white',
249 249 'diff.deleted': 'red',
250 250 'diff.diffline': 'bold',
251 251 'diff.extended': 'cyan bold',
252 252 'diff.file_a': 'red bold',
253 253 'diff.file_b': 'green bold',
254 254 'diff.hunk': 'magenta',
255 255 'diff.inserted': 'green',
256 256 'diff.trailingwhitespace': 'bold red_background',
257 257 'diffstat.deleted': 'red',
258 258 'diffstat.inserted': 'green',
259 259 'histedit.remaining': 'red bold',
260 260 'ui.prompt': 'yellow',
261 261 'log.changeset': 'yellow',
262 262 'rebase.rebased': 'blue',
263 263 'rebase.remaining': 'red bold',
264 264 'resolve.resolved': 'green bold',
265 265 'resolve.unresolved': 'red bold',
266 266 'shelve.age': 'cyan',
267 267 'shelve.newest': 'green bold',
268 268 'shelve.name': 'blue bold',
269 269 'status.added': 'green bold',
270 270 'status.clean': 'none',
271 271 'status.copied': 'none',
272 272 'status.deleted': 'cyan bold underline',
273 273 'status.ignored': 'black bold',
274 274 'status.modified': 'blue bold',
275 275 'status.removed': 'red bold',
276 276 'status.unknown': 'magenta bold underline',
277 277 'tags.normal': 'green',
278 278 'tags.local': 'black bold'}
279 279
280 280
281 281 def _effect_str(effect):
282 282 '''Helper function for render_effects().'''
283 283
284 284 bg = False
285 285 if effect.endswith('_background'):
286 286 bg = True
287 287 effect = effect[:-11]
288 288 attr, val = _terminfo_params[effect]
289 289 if attr:
290 290 return curses.tigetstr(val)
291 291 elif bg:
292 292 return curses.tparm(curses.tigetstr('setab'), val)
293 293 else:
294 294 return curses.tparm(curses.tigetstr('setaf'), val)
295 295
296 296 def render_effects(text, effects):
297 297 'Wrap text in commands to turn on each effect.'
298 298 if not text:
299 299 return text
300 300 if not _terminfo_params:
301 301 start = [str(_effects[e]) for e in ['none'] + effects.split()]
302 302 start = '\033[' + ';'.join(start) + 'm'
303 303 stop = '\033[' + str(_effects['none']) + 'm'
304 304 else:
305 305 start = ''.join(_effect_str(effect)
306 306 for effect in ['none'] + effects.split())
307 307 stop = _effect_str('none')
308 308 return ''.join([start, text, stop])
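In the non-terminfo branch, render_effects() simply joins SGR codes into one escape sequence. A self-contained sketch of that ANSI path, using a trimmed-down copy of the `_effects` table (names here are illustrative; the full table is defined at the top of the module):

```python
# Trimmed copy of the ANSI branch of render_effects(); the real code
# uses the full _effects table defined at the top of the module.
effects_table = {'none': 0, 'red': 31, 'bold': 1, 'red_background': 41}

def ansi_render(text, effects):
    """Wrap text in ECMA-48 SGR start/stop sequences."""
    if not text:
        return text
    codes = [str(effects_table[e]) for e in ['none'] + effects.split()]
    start = '\033[' + ';'.join(codes) + 'm'
    stop = '\033[' + str(effects_table['none']) + 'm'
    return start + text + stop

# 'red bold' renders as ESC[0;31;1m ... ESC[0m
print(repr(ansi_render('error', 'red bold')))
```

Leading with the 'none' code resets any prior effects before the requested ones are applied, matching the real function's behavior.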
309 309
310 310 def extstyles():
311 311 for name, ext in extensions.extensions():
312 312 _styles.update(getattr(ext, 'colortable', {}))
313 313
314 314 def valideffect(effect):
315 315 'Determine if the effect is valid or not.'
316 316 good = False
317 317 if not _terminfo_params and effect in _effects:
318 318 good = True
319 319 elif effect in _terminfo_params or effect[:-11] in _terminfo_params:
320 320 good = True
321 321 return good
322 322
323 323 def configstyles(ui):
324 324 for status, cfgeffects in ui.configitems('color'):
325 325 if '.' not in status or status.startswith('color.'):
326 326 continue
327 327 cfgeffects = ui.configlist('color', status)
328 328 if cfgeffects:
329 329 good = []
330 330 for e in cfgeffects:
331 331 if valideffect(e):
332 332 good.append(e)
333 333 else:
334 334 ui.warn(_("ignoring unknown color/effect %r "
335 335 "(configured in color.%s)\n")
336 336 % (e, status))
337 337 _styles[status] = ' '.join(good)
338 338
339 339 class colorui(uimod.ui):
340 340 def popbuffer(self, labeled=False):
341 341 if self._colormode is None:
342 342 return super(colorui, self).popbuffer(labeled)
343 343
344 344 if labeled:
345 345 return ''.join(self.label(a, label) for a, label
346 346 in self._buffers.pop())
347 347 return ''.join(a for a, label in self._buffers.pop())
348 348
349 349 _colormode = 'ansi'
350 350 def write(self, *args, **opts):
351 351 if self._colormode is None:
352 352 return super(colorui, self).write(*args, **opts)
353 353
354 354 label = opts.get('label', '')
355 355 if self._buffers:
356 356 self._buffers[-1].extend([(str(a), label) for a in args])
357 357 elif self._colormode == 'win32':
358 358 for a in args:
359 359 win32print(a, super(colorui, self).write, **opts)
360 360 else:
361 361 return super(colorui, self).write(
362 362 *[self.label(str(a), label) for a in args], **opts)
363 363
364 364 def write_err(self, *args, **opts):
365 365 if self._colormode is None:
366 366 return super(colorui, self).write_err(*args, **opts)
367 367
368 368 label = opts.get('label', '')
369 369 if self._colormode == 'win32':
370 370 for a in args:
371 371 win32print(a, super(colorui, self).write_err, **opts)
372 372 else:
373 373 return super(colorui, self).write_err(
374 374 *[self.label(str(a), label) for a in args], **opts)
375 375
376 376 def label(self, msg, label):
377 377 if self._colormode is None:
378 378 return super(colorui, self).label(msg, label)
379 379
380 380 effects = []
381 381 for l in label.split():
382 382 s = _styles.get(l, '')
383 383 if s:
384 384 effects.append(s)
385 385 elif valideffect(l):
386 386 effects.append(l)
387 387 effects = ' '.join(effects)
388 388 if effects:
389 389 return '\n'.join([render_effects(s, effects)
390 390 for s in msg.split('\n')])
391 391 return msg
392 392
393 393 def templatelabel(context, mapping, args):
394 394 if len(args) != 2:
395 395 # i18n: "label" is a keyword
396 396 raise error.ParseError(_("label expects two arguments"))
397 397
398 398 thing = templater._evalifliteral(args[1], context, mapping)
399 399
400 400 # apparently, repo could be a string that is the favicon?
401 401 repo = mapping.get('repo', '')
402 402 if isinstance(repo, str):
403 403 return thing
404 404
405 405 label = templater._evalifliteral(args[0], context, mapping)
406 406
407 407 thing = templater.stringify(thing)
408 408 label = templater.stringify(label)
409 409
410 410 return repo.ui.label(thing, label)
411 411
412 412 def uisetup(ui):
413 413 if ui.plain():
414 414 return
415 415 if not isinstance(ui, colorui):
416 416 colorui.__bases__ = (ui.__class__,)
417 417 ui.__class__ = colorui
418 418 def colorcmd(orig, ui_, opts, cmd, cmdfunc):
419 419 mode = _modesetup(ui_, opts['color'])
420 420 colorui._colormode = mode
421 421 if mode:
422 422 extstyles()
423 423 configstyles(ui_)
424 424 return orig(ui_, opts, cmd, cmdfunc)
425 425 extensions.wrapfunction(dispatch, '_runcommand', colorcmd)
426 426 templater.funcs['label'] = templatelabel
427 427
428 428 def extsetup(ui):
429 429 commands.globalopts.append(
430 430 ('', 'color', 'auto',
431 431 # i18n: 'always', 'auto', and 'never' are keywords and should
432 432 # not be translated
433 433 _("when to colorize (boolean, always, auto, or never)"),
434 434 _('TYPE')))
435 435
436 436 def debugcolor(ui, repo, **opts):
437 437 global _styles
438 438 _styles = {}
439 439 for effect in _effects.keys():
440 440 _styles[effect] = effect
441 ui.write(('colormode: %s\n') % ui._colormode)
441 ui.write(('color mode: %s\n') % ui._colormode)
442 442 ui.write(_('available colors:\n'))
443 443 for label, colors in _styles.items():
444 444 ui.write(('%s\n') % colors, label=label)
445 445
446 446 if os.name != 'nt':
447 447 w32effects = None
448 448 else:
449 449 import re, ctypes
450 450
451 451 _kernel32 = ctypes.windll.kernel32
452 452
453 453 _WORD = ctypes.c_ushort
454 454
455 455 _INVALID_HANDLE_VALUE = -1
456 456
457 457 class _COORD(ctypes.Structure):
458 458 _fields_ = [('X', ctypes.c_short),
459 459 ('Y', ctypes.c_short)]
460 460
461 461 class _SMALL_RECT(ctypes.Structure):
462 462 _fields_ = [('Left', ctypes.c_short),
463 463 ('Top', ctypes.c_short),
464 464 ('Right', ctypes.c_short),
465 465 ('Bottom', ctypes.c_short)]
466 466
467 467 class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
468 468 _fields_ = [('dwSize', _COORD),
469 469 ('dwCursorPosition', _COORD),
470 470 ('wAttributes', _WORD),
471 471 ('srWindow', _SMALL_RECT),
472 472 ('dwMaximumWindowSize', _COORD)]
473 473
474 474 _STD_OUTPUT_HANDLE = 0xfffffff5L # (DWORD)-11
475 475 _STD_ERROR_HANDLE = 0xfffffff4L # (DWORD)-12
476 476
477 477 _FOREGROUND_BLUE = 0x0001
478 478 _FOREGROUND_GREEN = 0x0002
479 479 _FOREGROUND_RED = 0x0004
480 480 _FOREGROUND_INTENSITY = 0x0008
481 481
482 482 _BACKGROUND_BLUE = 0x0010
483 483 _BACKGROUND_GREEN = 0x0020
484 484 _BACKGROUND_RED = 0x0040
485 485 _BACKGROUND_INTENSITY = 0x0080
486 486
487 487 _COMMON_LVB_REVERSE_VIDEO = 0x4000
488 488 _COMMON_LVB_UNDERSCORE = 0x8000
489 489
490 490 # http://msdn.microsoft.com/en-us/library/ms682088%28VS.85%29.aspx
491 491 w32effects = {
492 492 'none': -1,
493 493 'black': 0,
494 494 'red': _FOREGROUND_RED,
495 495 'green': _FOREGROUND_GREEN,
496 496 'yellow': _FOREGROUND_RED | _FOREGROUND_GREEN,
497 497 'blue': _FOREGROUND_BLUE,
498 498 'magenta': _FOREGROUND_BLUE | _FOREGROUND_RED,
499 499 'cyan': _FOREGROUND_BLUE | _FOREGROUND_GREEN,
500 500 'white': _FOREGROUND_RED | _FOREGROUND_GREEN | _FOREGROUND_BLUE,
501 501 'bold': _FOREGROUND_INTENSITY,
502 502 'black_background': 0x100, # unused value > 0x0f
503 503 'red_background': _BACKGROUND_RED,
504 504 'green_background': _BACKGROUND_GREEN,
505 505 'yellow_background': _BACKGROUND_RED | _BACKGROUND_GREEN,
506 506 'blue_background': _BACKGROUND_BLUE,
507 507 'purple_background': _BACKGROUND_BLUE | _BACKGROUND_RED,
508 508 'cyan_background': _BACKGROUND_BLUE | _BACKGROUND_GREEN,
509 509 'white_background': (_BACKGROUND_RED | _BACKGROUND_GREEN |
510 510 _BACKGROUND_BLUE),
511 511 'bold_background': _BACKGROUND_INTENSITY,
512 512 'underline': _COMMON_LVB_UNDERSCORE, # double-byte charsets only
513 513 'inverse': _COMMON_LVB_REVERSE_VIDEO, # double-byte charsets only
514 514 }
515 515
516 516 passthrough = set([_FOREGROUND_INTENSITY,
517 517 _BACKGROUND_INTENSITY,
518 518 _COMMON_LVB_UNDERSCORE,
519 519 _COMMON_LVB_REVERSE_VIDEO])
520 520
521 521 stdout = _kernel32.GetStdHandle(
522 522 _STD_OUTPUT_HANDLE) # don't close the handle returned
523 523 if stdout is None or stdout == _INVALID_HANDLE_VALUE:
524 524 w32effects = None
525 525 else:
526 526 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
527 527 if not _kernel32.GetConsoleScreenBufferInfo(
528 528 stdout, ctypes.byref(csbi)):
529 529 # stdout may not support GetConsoleScreenBufferInfo()
530 530 # when called from subprocess or redirected
531 531 w32effects = None
532 532 else:
533 533 origattr = csbi.wAttributes
534 534 ansire = re.compile('\033\[([^m]*)m([^\033]*)(.*)',
535 535 re.MULTILINE | re.DOTALL)
536 536
537 537 def win32print(text, orig, **opts):
538 538 label = opts.get('label', '')
539 539 attr = origattr
540 540
541 541 def mapcolor(val, attr):
542 542 if val == -1:
543 543 return origattr
544 544 elif val in passthrough:
545 545 return attr | val
546 546 elif val > 0x0f:
547 547 return (val & 0x70) | (attr & 0x8f)
548 548 else:
549 549 return (val & 0x07) | (attr & 0xf8)
550 550
551 551 # determine console attributes based on labels
552 552 for l in label.split():
553 553 style = _styles.get(l, '')
554 554 for effect in style.split():
555 555 attr = mapcolor(w32effects[effect], attr)
556 556
557 557 # hack to ensure regexp finds data
558 558 if not text.startswith('\033['):
559 559 text = '\033[m' + text
560 560
561 561 # Look for ANSI-like codes embedded in text
562 562 m = re.match(ansire, text)
563 563
564 564 try:
565 565 while m:
566 566 for sattr in m.group(1).split(';'):
567 567 if sattr:
568 568 attr = mapcolor(int(sattr), attr)
569 569 _kernel32.SetConsoleTextAttribute(stdout, attr)
570 570 orig(m.group(2), **opts)
571 571 m = re.match(ansire, m.group(3))
572 572 finally:
573 573 # Explicitly reset original attributes
574 574 _kernel32.SetConsoleTextAttribute(stdout, origattr)
575 575
576 576 cmdtable = {
577 577 'debugcolor':
578 578 (debugcolor, [], ('hg debugcolor'))
579 579 }
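win32print() above repeatedly matches the ansire pattern to peel `ESC[<codes>m<chunk>` runs off the text before mapping each code onto console attributes. The same splitting step in isolation, with the identical regex (the helper name `split_ansi` is hypothetical):

```python
import re

# Same pattern as the module's 'ansire': SGR codes, then a chunk of
# plain text, then the remainder of the string.
ansire = re.compile('\033\\[([^m]*)m([^\033]*)(.*)',
                    re.MULTILINE | re.DOTALL)

def split_ansi(text):
    """Split text into (sgr_codes, chunk) runs, as win32print() does."""
    if not text.startswith('\033['):
        text = '\033[m' + text  # same hack win32print() uses
    runs = []
    m = ansire.match(text)
    while m:
        runs.append((m.group(1), m.group(2)))
        m = ansire.match(m.group(3))
    return runs

print(split_ansi('\033[0;31mred\033[0m plain'))
```

Each run's code string is then fed through mapcolor() in the real code; here we only demonstrate the tokenization.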
@@ -1,395 +1,395 b''
1 1 # convert.py Foreign SCM converter
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''import revisions from foreign VCS repositories into Mercurial'''
9 9
10 10 import convcmd
11 11 import cvsps
12 12 import subversion
13 13 from mercurial import commands, templatekw
14 14 from mercurial.i18n import _
15 15
16 16 testedwith = 'internal'
17 17
18 18 # Commands definition was moved elsewhere to ease demandload job.
19 19
20 20 def convert(ui, src, dest=None, revmapfile=None, **opts):
21 21 """convert a foreign SCM repository to a Mercurial one.
22 22
23 23 Accepted source formats [identifiers]:
24 24
25 25 - Mercurial [hg]
26 26 - CVS [cvs]
27 27 - Darcs [darcs]
28 28 - git [git]
29 29 - Subversion [svn]
30 30 - Monotone [mtn]
31 31 - GNU Arch [gnuarch]
32 32 - Bazaar [bzr]
33 33 - Perforce [p4]
34 34
35 35 Accepted destination formats [identifiers]:
36 36
37 37 - Mercurial [hg]
38 38 - Subversion [svn] (history on branches is not preserved)
39 39
40 40 If no revision is given, all revisions will be converted.
41 41 Otherwise, convert will only import up to the named revision
42 42 (given in a format understood by the source).
43 43
44 44 If no destination directory name is specified, it defaults to the
45 45 basename of the source with ``-hg`` appended. If the destination
46 46 repository doesn't exist, it will be created.
47 47
48 48 By default, all sources except Mercurial will use --branchsort.
49 49 Mercurial uses --sourcesort to preserve the original revision number
50 50 order. Sort modes have the following effects:
51 51
52 52 --branchsort convert from parent to child revision when possible,
53 53 which means branches are usually converted one after
54 54 the other. It generates more compact repositories.
55 55
56 56 --datesort sort revisions by date. Converted repositories have
57 57 good-looking changelogs but are often an order of
58 58 magnitude larger than the same ones generated by
59 59 --branchsort.
60 60
61 61 --sourcesort try to preserve source revisions order, only
62 62 supported by Mercurial sources.
63 63
64 64 --closesort try to move closed revisions as close as possible
65 65 to parent branches, only supported by Mercurial
66 66 sources.
67 67
68 68 If ``REVMAP`` isn't given, it will be put in a default location
69 69 (``<dest>/.hg/shamap`` by default). The ``REVMAP`` is a simple
70 70 text file that maps each source commit ID to the destination ID
71 71 for that revision, like so::
72 72
73 73 <source ID> <destination ID>
74 74
75 75 If the file doesn't exist, it's automatically created. It's
76 76 updated on each commit copied, so :hg:`convert` can be interrupted
77 77 and can be run repeatedly to copy new commits.
78 78
79 79 The authormap is a simple text file that maps each source commit
80 80 author to a destination commit author. It is handy for source SCMs
81 81 that use unix logins to identify authors (e.g.: CVS). One line per
82 82 author mapping and the line format is::
83 83
84 84 source author = destination author
85 85
86 86 Empty lines and lines starting with a ``#`` are ignored.
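A minimal sketch of parsing the authormap format just described; the helper name is hypothetical, and the extension's real parser additionally reports malformed lines:

```python
# Minimal illustration of the 'source author = destination author'
# format; blank lines and '#' comments are ignored, as documented.
def parse_authormap(lines):
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        src, sep, dst = line.partition('=')
        if sep:  # lines without '=' are skipped in this sketch
            mapping[src.strip()] = dst.strip()
    return mapping

print(parse_authormap(['# map CVS logins', '',
                       'jdoe = John Doe <jdoe@example.com>']))
```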
87 87
88 88 The filemap is a file that allows filtering and remapping of files
89 89 and directories. Each line can contain one of the following
90 90 directives::
91 91
92 92 include path/to/file-or-dir
93 93
94 94 exclude path/to/file-or-dir
95 95
96 96 rename path/to/source path/to/destination
97 97
98 98 Comment lines start with ``#``. A specified path matches if it
99 99 equals the full relative name of a file or one of its parent
100 100 directories. The ``include`` or ``exclude`` directive with the
101 101 longest matching path applies, so line order does not matter.
102 102
103 103 The ``include`` directive causes a file, or all files under a
104 104 directory, to be included in the destination repository. The default
105 105 if there are no ``include`` statements is to include everything.
106 106 If there are any ``include`` statements, nothing else is included.
107 107 The ``exclude`` directive causes files or directories to
108 108 be omitted. The ``rename`` directive renames a file or directory if
109 109 it is converted. To rename from a subdirectory into the root of
110 110 the repository, use ``.`` as the path to rename to.
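The three filemap directives can be recognized with a simple whitespace split. A hypothetical sketch (the real filemap parser also supports quoting for paths containing spaces, which this ignores):

```python
# Classify one filemap line into its directive tuple, or None for
# comments and blanks. Quoted paths are not handled in this sketch.
def parse_filemap_line(line):
    line = line.strip()
    if not line or line.startswith('#'):
        return None
    parts = line.split()
    if parts[0] in ('include', 'exclude') and len(parts) == 2:
        return (parts[0], parts[1])
    if parts[0] == 'rename' and len(parts) == 3:
        return ('rename', parts[1], parts[2])
    raise ValueError('malformed filemap line: %r' % line)

print(parse_filemap_line('rename src/lib .'))
```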
111 111
112 112 The splicemap is a file that allows insertion of synthetic
113 113 history, letting you specify the parents of a revision. This is
114 114 useful if you want to e.g. give a Subversion merge two parents, or
115 115 graft two disconnected series of history together. Each entry
116 116 contains a key, followed by a space, followed by one or two
117 117 comma-separated values::
118 118
119 119 key parent1, parent2
120 120
121 121 The key is the revision ID in the source
122 122 revision control system whose parents should be modified (same
123 123 format as a key in .hg/shamap). The values are the revision IDs
124 124 (in either the source or destination revision control system) that
125 125 should be used as the new parents for that node. For example, if
126 126 you have merged "release-1.0" into "trunk", then you should
127 127 specify the revision on "trunk" as the first parent and the one on
128 128 the "release-1.0" branch as the second.
129 129
130 130 The branchmap is a file that allows you to rename a branch when it is
131 131 being brought in from whatever external repository. When used in
132 132 conjunction with a splicemap, it allows for a powerful combination
133 133 to help fix even the most badly mismanaged repositories and turn them
134 134 into nicely structured Mercurial repositories. The branchmap contains
135 135 lines of the form::
136 136
137 137 original_branch_name new_branch_name
138 138
139 139 where "original_branch_name" is the name of the branch in the
140 140 source repository, and "new_branch_name" is the name of the branch
141 141 in the destination repository. No whitespace is allowed in the
142 142 branch names. This can be used to (for instance) move code in one
143 143 repository from "default" to a named branch.
144 144
145 145 The closemap is a file that allows closing of a branch during
146 146 conversion. Each entry contains a revision or hash separated by
147 147 white space.
148 148
149 The tagpmap is a file that exactly analogous to the branchmap. This will
149 The tagmap is a file that is exactly analogous to the branchmap. This will
150 150 rename tags on the fly and prevent the 'update tags' commit usually found
151 151 at the end of a convert process.
152 152
153 153 Mercurial Source
154 154 ################
155 155
156 156 The Mercurial source recognizes the following configuration
157 157 options, which you can set on the command line with ``--config``:
158 158
159 159 :convert.hg.ignoreerrors: ignore integrity errors when reading.
160 160 Use it to fix Mercurial repositories with missing revlogs, by
161 161 converting from and to Mercurial. Default is False.
162 162
163 163 :convert.hg.saverev: store original revision ID in changeset
164 164 (forces target IDs to change). It takes a boolean argument and
165 165 defaults to False.
166 166
167 167 :convert.hg.revs: revset specifying the source revisions to convert.
168 168
169 169 CVS Source
170 170 ##########
171 171
172 172 CVS source will use a sandbox (i.e. a checked-out copy) from CVS
173 173 to indicate the starting point of what will be converted. Direct
174 174 access to the repository files is not needed, unless of course the
175 175 repository is ``:local:``. The conversion uses the top level
176 176 directory in the sandbox to find the CVS repository, and then uses
177 177 CVS rlog commands to find files to convert. This means that unless
178 178 a filemap is given, all files under the starting directory will be
179 179 converted, and that any directory reorganization in the CVS
180 180 sandbox is ignored.
181 181
182 182 The following options can be used with ``--config``:
183 183
184 184 :convert.cvsps.cache: Set to False to disable remote log caching,
185 185 for testing and debugging purposes. Default is True.
186 186
187 187 :convert.cvsps.fuzz: Specify the maximum time (in seconds) that is
188 188 allowed between commits with identical user and log message in
189 189 a single changeset. When very large files were checked in as
190 190 part of a changeset then the default may not be long enough.
191 191 The default is 60.
192 192
193 193 :convert.cvsps.mergeto: Specify a regular expression to which
194 194 commit log messages are matched. If a match occurs, then the
195 195 conversion process will insert a dummy revision merging the
196 196 branch on which this log message occurs to the branch
197 197 indicated in the regex. Default is ``{{mergetobranch
198 198 ([-\\w]+)}}``
199 199
200 200 :convert.cvsps.mergefrom: Specify a regular expression to which
201 201 commit log messages are matched. If a match occurs, then the
202 202 conversion process will add the most recent revision on the
203 203 branch indicated in the regex as the second parent of the
204 204 changeset. Default is ``{{mergefrombranch ([-\\w]+)}}``
205 205
206 206 :convert.localtimezone: use local time (as determined by the TZ
207 207 environment variable) for changeset date/times. The default
208 208 is False (use UTC).
209 209
210 210 :hooks.cvslog: Specify a Python function to be called at the end of
211 211 gathering the CVS log. The function is passed a list with the
212 212 log entries, and can modify the entries in-place, or add or
213 213 delete them.
214 214
215 215 :hooks.cvschangesets: Specify a Python function to be called after
216 216 the changesets are calculated from the CVS log. The
217 217 function is passed a list with the changeset entries, and can
218 218 modify the changesets in-place, or add or delete them.
219 219
220 220 An additional "debugcvsps" Mercurial command allows the builtin
221 221 changeset merging code to be run without doing a conversion. Its
222 222 parameters and output are similar to that of cvsps 2.1. Please see
223 223 the command help for more details.
224 224
225 225 Subversion Source
226 226 #################
227 227
228 228 Subversion source detects classical trunk/branches/tags layouts.
229 229 By default, the supplied ``svn://repo/path/`` source URL is
230 230 converted as a single branch. If ``svn://repo/path/trunk`` exists
231 231 it replaces the default branch. If ``svn://repo/path/branches``
232 232 exists, its subdirectories are listed as possible branches. If
233 233 ``svn://repo/path/tags`` exists, it is searched for tags referencing
234 234 converted branches. Default ``trunk``, ``branches`` and ``tags``
235 235 values can be overridden with the following options. Set them to paths
236 236 relative to the source URL, or leave them blank to disable auto
237 237 detection.
238 238
239 239 The following options can be set with ``--config``:
240 240
241 241 :convert.svn.branches: specify the directory containing branches.
242 242 The default is ``branches``.
243 243
244 244 :convert.svn.tags: specify the directory containing tags. The
245 245 default is ``tags``.
246 246
247 247 :convert.svn.trunk: specify the name of the trunk branch. The
248 248 default is ``trunk``.
249 249
250 250 :convert.localtimezone: use local time (as determined by the TZ
251 251 environment variable) for changeset date/times. The default
252 252 is False (use UTC).
253 253
254 254 Source history can be retrieved starting at a specific revision,
255 255 instead of being converted in its entirety. Only single branch
256 256 conversions are supported.
257 257
258 258 :convert.svn.startrev: specify start Subversion revision number.
259 259 The default is 0.
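For example, a repository whose layout departs from the usual trunk/branches/tags convention could be converted with explicit overrides; all paths and the revision number below are hypothetical:

```
hg convert --config convert.svn.trunk=mainline \
           --config convert.svn.branches=releases \
           --config convert.svn.startrev=1000 \
           svn://repo/path/ converted-hg
```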
260 260
261 261 Perforce Source
262 262 ###############
263 263
264 264 The Perforce (P4) importer can be given a p4 depot path or a
265 265 client specification as source. It will convert all files in the
266 266 source to a flat Mercurial repository, ignoring labels, branches
267 267 and integrations. Note that when a depot path is given, you should
268 268 usually also specify a target directory, because otherwise the
269 269 target may be named ``...-hg``.
270 270
271 271 It is possible to limit the amount of source history to be
272 272 converted by specifying an initial Perforce revision:
273 273
274 274 :convert.p4.startrev: specify initial Perforce revision (a
275 275 Perforce changelist number).
276 276
277 277 Mercurial Destination
278 278 #####################
279 279
280 280 The following options are supported:
281 281
282 282 :convert.hg.clonebranches: dispatch source branches in separate
283 283 clones. The default is False.
284 284
285 285 :convert.hg.tagsbranch: branch name for tag revisions, defaults to
286 286 ``default``.
287 287
288 288 :convert.hg.usebranchnames: preserve branch names. The default is
289 289 True.
290 290 """
291 291 return convcmd.convert(ui, src, dest, revmapfile, **opts)
292 292
293 293 def debugsvnlog(ui, **opts):
294 294 return subversion.debugsvnlog(ui, **opts)
295 295
296 296 def debugcvsps(ui, *args, **opts):
297 297 '''create changeset information from CVS
298 298
299 299 This command is intended as a debugging tool for the CVS to
300 300 Mercurial converter, and can be used as a direct replacement for
301 301 cvsps.
302 302
303 303 Hg debugcvsps reads the CVS rlog for the current directory (or any
304 304 named directory) in the CVS repository, and converts the log to a
305 305 series of changesets based on matching commit log entries and
306 306 dates.'''
307 307 return cvsps.debugcvsps(ui, *args, **opts)
308 308
309 309 commands.norepo += " convert debugsvnlog debugcvsps"
310 310
311 311 cmdtable = {
312 312 "convert":
313 313 (convert,
314 314 [('', 'authors', '',
315 315 _('username mapping filename (DEPRECATED, use --authormap instead)'),
316 316 _('FILE')),
317 317 ('s', 'source-type', '',
318 318 _('source repository type'), _('TYPE')),
319 319 ('d', 'dest-type', '',
320 320 _('destination repository type'), _('TYPE')),
321 321 ('r', 'rev', '',
322 322 _('import up to source revision REV'), _('REV')),
323 323 ('A', 'authormap', '',
324 324 _('remap usernames using this file'), _('FILE')),
325 325 ('', 'filemap', '',
326 326 _('remap file names using contents of file'), _('FILE')),
327 327 ('', 'splicemap', '',
328 328 _('splice synthesized history into place'), _('FILE')),
329 329 ('', 'branchmap', '',
330 330 _('change branch names while converting'), _('FILE')),
331 331 ('', 'closemap', '',
332 332 _('closes given revs'), _('FILE')),
333 333 ('', 'tagmap', '',
334 334 _('change tag names while converting'), _('FILE')),
335 335 ('', 'branchsort', None, _('try to sort changesets by branches')),
336 336 ('', 'datesort', None, _('try to sort changesets by date')),
337 337 ('', 'sourcesort', None, _('preserve source changesets order')),
338 338 ('', 'closesort', None, _('try to reorder closed revisions'))],
339 339 _('hg convert [OPTION]... SOURCE [DEST [REVMAP]]')),
340 340 "debugsvnlog":
341 341 (debugsvnlog,
342 342 [],
343 343 'hg debugsvnlog'),
344 344 "debugcvsps":
345 345 (debugcvsps,
346 346 [
347 347 # Main options shared with cvsps-2.1
348 348 ('b', 'branches', [], _('only return changes on specified branches')),
349 349 ('p', 'prefix', '', _('prefix to remove from file names')),
350 350 ('r', 'revisions', [],
351 351 _('only return changes after or between specified tags')),
352 352 ('u', 'update-cache', None, _("update cvs log cache")),
353 353 ('x', 'new-cache', None, _("create new cvs log cache")),
354 354 ('z', 'fuzz', 60, _('set commit time fuzz in seconds')),
355 355 ('', 'root', '', _('specify cvsroot')),
356 356 # Options specific to builtin cvsps
357 357 ('', 'parents', '', _('show parent changesets')),
358 358 ('', 'ancestors', '',
359 359 _('show current changeset in ancestor branches')),
360 360 # Options that are ignored for compatibility with cvsps-2.1
361 361 ('A', 'cvs-direct', None, _('ignored for compatibility')),
362 362 ],
363 363 _('hg debugcvsps [OPTION]... [PATH]...')),
364 364 }
365 365
366 366 def kwconverted(ctx, name):
367 367 rev = ctx.extra().get('convert_revision', '')
368 368 if rev.startswith('svn:'):
369 369 if name == 'svnrev':
370 370 return str(subversion.revsplit(rev)[2])
371 371 elif name == 'svnpath':
372 372 return subversion.revsplit(rev)[1]
373 373 elif name == 'svnuuid':
374 374 return subversion.revsplit(rev)[0]
375 375 return rev
376 376
377 377 def kwsvnrev(repo, ctx, **args):
378 378 """:svnrev: String. Converted subversion revision number."""
379 379 return kwconverted(ctx, 'svnrev')
380 380
381 381 def kwsvnpath(repo, ctx, **args):
382 382 """:svnpath: String. Converted subversion revision project path."""
383 383 return kwconverted(ctx, 'svnpath')
384 384
385 385 def kwsvnuuid(repo, ctx, **args):
386 386 """:svnuuid: String. Converted subversion revision repository identifier."""
387 387 return kwconverted(ctx, 'svnuuid')
388 388
389 389 def extsetup(ui):
390 390 templatekw.keywords['svnrev'] = kwsvnrev
391 391 templatekw.keywords['svnpath'] = kwsvnpath
392 392 templatekw.keywords['svnuuid'] = kwsvnuuid
393 393
394 394 # tell hggettext to extract docstrings from these functions:
395 395 i18nfunctions = [kwsvnrev, kwsvnpath, kwsvnuuid]
@@ -1,659 +1,659 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is architectured as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 All numbers are unsigned and big endian.
27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 Binary format is as follows
33 33
34 34 :params size: (16 bits integer)
35 35
36 36 The total number of Bytes used by the parameters
37 37
38 38 :params value: arbitrary number of Bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 The blob contains a space separated list of parameters. parameter with value
43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are obviously forbidden.
47 47
48 48 Name MUST start with a letter. If this first letter is lower case, the
49 parameter is advisory and can be safefly ignored. However when the first
49 parameter is advisory and can be safely ignored. However when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 - Stream level parameters should remains simple and we want to discourage any
55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 - Textual data allow easy human inspection of a the bundle2 header in case of
57 - Textual data allow easy human inspection of a bundle2 header in case of
58 58 troubles.
59 59
60 60 Any Applicative level options MUST go into a bundle2 part instead.
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 Binary format is as follows
66 66
67 67 :header size: (16 bits integer)
68 68
69 69 The total number of Bytes used by the part headers. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 :typename: alphanumerical part name
88 :parttype: alphanumerical part name
89 89
90 90 :partid: A 32bits integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 Part's parameter may have arbitraty content, the binary structure is::
95 Part's parameter may have arbitrary content, the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N couples of bytes, where N is the total number of parameters. Each
106 106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 :payload:
117 117
118 118 payload is a series of `<chunksize><chunkdata>`.
119 119
120 120 `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as many as
121 121 `chunksize` says). The payload part is concluded by a zero size chunk.
122 122
123 123 The current implementation always produces either zero or one chunk.
124 This is an implementation limitation that will ultimatly be lifted.
124 This is an implementation limitation that will ultimately be lifted.
125 125
126 126 Bundle processing
127 127 ============================
128 128
129 129 Each part is processed in order using a "part handler". Handlers are registered
130 130 for a certain part type.
131 131
132 132 The matching of a part to its handler is case insensitive. The case of the
133 133 part type is used to know if a part is mandatory or advisory. If the Part type
134 134 contains any uppercase char it is considered mandatory. When no handler is
135 135 known for a Mandatory part, the process is aborted and an exception is raised.
136 136 If the part is advisory and no handler is known, the part is ignored. When the
137 137 process is aborted, the full bundle is still read from the stream to keep the
138 138 channel usable. But none of the parts read after an abort are processed. In the
139 139 future, dropping the stream may become an option for channels we do not care to
140 140 preserve.
141 141 """
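The part-header layout described above can be sketched with a few lines of `struct` code. This is a hypothetical standalone illustration of the byte layout only, with field order assumed from the docstring, not taken from the real implementation below (which targets Python 2):

```python
import struct

def packpartheader(parttype, partid, manparams, advparams):
    """Frame <typesize><parttype><partid><counts><sizes><data>,
    prefixed with the 16-bit total header size."""
    header = [struct.pack('>B', len(parttype)), parttype.encode('ascii'),
              struct.pack('>I', partid),
              struct.pack('>BB', len(manparams), len(advparams))]
    sizes, data = [], []
    for key, value in list(manparams) + list(advparams):
        sizes.extend((len(key), len(value)))
        data.extend((key.encode('ascii'), value.encode('ascii')))
    header.append(struct.pack('>' + 'BB' * (len(sizes) // 2), *sizes))
    header.extend(data)
    block = b''.join(header)
    return struct.pack('>H', len(block)) + block

def unpackpartheader(blob):
    """Reverse packpartheader, returning (type, id, mandatory, advisory)."""
    offset = 2  # skip the 16-bit header-size prefix
    (typesize,) = struct.unpack_from('>B', blob, offset)
    offset += 1
    parttype = blob[offset:offset + typesize].decode('ascii')
    offset += typesize
    (partid,) = struct.unpack_from('>I', blob, offset)
    offset += 4
    mancount, advcount = struct.unpack_from('>BB', blob, offset)
    offset += 2
    n = mancount + advcount
    sizes = struct.unpack_from('>' + 'BB' * n, blob, offset)
    offset += 2 * n
    pairs = []
    for ksize, vsize in zip(sizes[::2], sizes[1::2]):
        key = blob[offset:offset + ksize].decode('ascii')
        offset += ksize
        value = blob[offset:offset + vsize].decode('ascii')
        offset += vsize
        pairs.append((key, value))
    return parttype, partid, pairs[:mancount], pairs[mancount:]
```

Packing a header and unpacking it again round-trips the part type, id, and both parameter lists.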
142 142
143 143 import util
144 144 import struct
145 145 import urllib
146 146 import string
147 147
148 148 import changegroup
149 149 from i18n import _
150 150
151 151 _pack = struct.pack
152 152 _unpack = struct.unpack
153 153
154 154 _magicstring = 'HG20'
155 155
156 156 _fstreamparamsize = '>H'
157 157 _fpartheadersize = '>H'
158 158 _fparttypesize = '>B'
159 159 _fpartid = '>I'
160 160 _fpayloadsize = '>I'
161 161 _fpartparamcount = '>BB'
162 162
163 163 preferedchunksize = 4096
164 164
165 165 def _makefpartparamsizes(nbparams):
166 166 """return a struct format to read part parameter sizes
167 167
168 168 The number of parameters is variable so we need to build that format
169 169 dynamically.
170 170 """
171 171 return '>'+('BB'*nbparams)
172 172
173 173 parthandlermapping = {}
174 174
175 175 def parthandler(parttype):
176 176 """decorator that register a function as a bundle2 part handler
177 177
178 178 eg::
179 179
180 180 @parthandler('myparttype')
181 181 def myparttypehandler(...):
182 182 '''process a part of type "my part".'''
183 183 ...
184 184 """
185 185 def _decorator(func):
186 186 lparttype = parttype.lower() # enforce lower case matching.
187 187 assert lparttype not in parthandlermapping
188 188 parthandlermapping[lparttype] = func
189 189 return func
190 190 return _decorator
191 191
192 192 class unbundlerecords(object):
193 193 """keep record of what happens during and unbundle
194 194
195 195 New records are added using `records.add('cat', obj)`. Where 'cat' is a
196 category of record and obj is an arbitraty object.
196 category of record and obj is an arbitrary object.
197 197
198 198 `records['cat']` will return all entries of this category 'cat'.
199 199
200 200 Iterating on the object itself will yield `('category', obj)` tuples
201 201 for all entries.
202 202
203 203 All iterations happens in chronological order.
204 204 """
205 205
206 206 def __init__(self):
207 207 self._categories = {}
208 208 self._sequences = []
209 209 self._replies = {}
210 210
211 211 def add(self, category, entry, inreplyto=None):
212 212 """add a new record of a given category.
213 213
214 214 The entry can then be retrieved in the list returned by
215 215 self['category']."""
216 216 self._categories.setdefault(category, []).append(entry)
217 217 self._sequences.append((category, entry))
218 218 if inreplyto is not None:
219 219 self.getreplies(inreplyto).add(category, entry)
220 220
221 221 def getreplies(self, partid):
222 222 """get the subrecords that replies to a specific part"""
223 223 return self._replies.setdefault(partid, unbundlerecords())
224 224
225 225 def __getitem__(self, cat):
226 226 return tuple(self._categories.get(cat, ()))
227 227
228 228 def __iter__(self):
229 229 return iter(self._sequences)
230 230
231 231 def __len__(self):
232 232 return len(self._sequences)
233 233
234 234 def __nonzero__(self):
235 235 return bool(self._sequences)
236 236
237 237 class bundleoperation(object):
238 238 """an object that represents a single bundling process
239 239
240 240 Its purpose is to carry unbundle-related objects and states.
241 241
242 242 A new object should be created at the beginning of each bundle processing.
243 243 The object is to be returned by the processing function.
244 244
245 245 The object has very little content now; it will ultimately contain:
246 246 * an access to the repo the bundle is applied to,
247 247 * a ui object,
248 248 * a way to retrieve a transaction to add changes to the repo,
249 249 * a way to record the result of processing each part,
250 250 * a way to construct a bundle response when applicable.
251 251 """
252 252
253 253 def __init__(self, repo, transactiongetter):
254 254 self.repo = repo
255 255 self.ui = repo.ui
256 256 self.records = unbundlerecords()
257 257 self.gettransaction = transactiongetter
258 258 self.reply = None
259 259
260 260 class TransactionUnavailable(RuntimeError):
261 261 pass
262 262
263 263 def _notransaction():
264 264 """default method to get a transaction while processing a bundle
265 265
266 266 Raise an exception to highlight the fact that no transaction was expected
267 267 to be created"""
268 268 raise TransactionUnavailable()
269 269
270 270 def processbundle(repo, unbundler, transactiongetter=_notransaction):
271 271 """This function process a bundle, apply effect to/from a repo
272 272
273 273 It iterates over each part then searches for and uses the proper handling
274 274 code to process the part. Parts are processed in order.
275 275
276 276 This is a very early version of this function that will be strongly reworked
277 277 before final usage.
278 278
279 279 An unknown Mandatory part will abort the process.
280 280 """
281 281 op = bundleoperation(repo, transactiongetter)
282 282 # todo:
283 283 # - only create reply bundle if requested.
284 284 op.reply = bundle20(op.ui)
285 285 # todo:
286 286 # - replace this with an init function soon.
287 287 # - exception catching
288 288 unbundler.params
289 289 iterparts = iter(unbundler)
290 290 part = None
291 291 try:
292 292 for part in iterparts:
293 293 parttype = part.type
294 294 # part keys are matched lower case
295 295 key = parttype.lower()
296 296 try:
297 297 handler = parthandlermapping[key]
298 298 op.ui.debug('found a handler for part %r\n' % parttype)
299 299 except KeyError:
300 300 if key != parttype: # mandatory parts
301 301 # todo:
302 302 # - use a more precise exception
303 303 raise
304 304 op.ui.debug('ignoring unknown advisory part %r\n' % key)
305 305 # consuming the part
306 306 part.read()
307 307 continue
308 308
309 309 # handler is called outside the above try block so that we don't
310 310 # risk catching KeyErrors from anything other than the
311 311 # parthandlermapping lookup (any KeyError raised by handler()
312 312 # itself represents a defect of a different variety).
313 313 handler(op, part)
314 314 part.read()
315 315 except Exception:
316 316 if part is not None:
317 317 # consume the bundle content
318 318 part.read()
319 319 for part in iterparts:
320 320 # consume the bundle content
321 321 part.read()
322 322 raise
323 323 return op
324 324
325 325 class bundle20(object):
326 326 """represent an outgoing bundle2 container
327 327
328 328 Use the `addparam` method to add a stream level parameter, and `addpart` to
329 329 populate it. Then call `getchunks` to retrieve all the binary chunks of
330 datathat compose the bundle2 container."""
330 data that compose the bundle2 container."""
331 331
332 332 def __init__(self, ui):
333 333 self.ui = ui
334 334 self._params = []
335 335 self._parts = []
336 336
337 337 def addparam(self, name, value=None):
338 338 """add a stream level parameter"""
339 339 if not name:
340 340 raise ValueError('empty parameter name')
341 341 if name[0] not in string.letters:
342 342 raise ValueError('non letter first character: %r' % name)
343 343 self._params.append((name, value))
344 344
345 345 def addpart(self, part):
346 346 """add a new part to the bundle2 container
347 347
348 Parts contains the actuall applicative payload."""
348 Parts contains the actual applicative payload."""
349 349 assert part.id is None
350 350 part.id = len(self._parts) # very cheap counter
351 351 self._parts.append(part)
352 352
353 353 def getchunks(self):
354 354 self.ui.debug('start emission of %s stream\n' % _magicstring)
355 355 yield _magicstring
356 356 param = self._paramchunk()
357 357 self.ui.debug('bundle parameter: %s\n' % param)
358 358 yield _pack(_fstreamparamsize, len(param))
359 359 if param:
360 360 yield param
361 361
362 362 self.ui.debug('start of parts\n')
363 363 for part in self._parts:
364 364 self.ui.debug('bundle part: "%s"\n' % part.type)
365 365 for chunk in part.getchunks():
366 366 yield chunk
367 367 self.ui.debug('end of bundle\n')
368 368 yield '\0\0'
369 369
370 370 def _paramchunk(self):
371 371 """return a encoded version of all stream parameters"""
372 372 blocks = []
373 373 for par, value in self._params:
374 374 par = urllib.quote(par)
375 375 if value is not None:
376 376 value = urllib.quote(value)
377 377 par = '%s=%s' % (par, value)
378 378 blocks.append(par)
379 379 return ' '.join(blocks)
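The space-separated, urlquoted parameter blob built by `_paramchunk` can be round-tripped with a short sketch. This illustration uses Python 3's `urllib.parse` (the module itself targets Python 2's `urllib`), and the function names are hypothetical:

```python
from urllib.parse import quote, unquote

def encodeparams(params):
    """Serialize (name, value) pairs; a None value emits a bare name."""
    blocks = []
    for name, value in params:
        block = quote(name, safe='')
        if value is not None:
            block = '%s=%s' % (block, quote(value, safe=''))
        blocks.append(block)
    return ' '.join(blocks)

def decodeparams(blob):
    """Parse the space-separated blob back into (name, value) pairs."""
    params = []
    for p in blob.split(' '):
        parts = p.split('=', 1)
        name = unquote(parts[0])
        value = unquote(parts[1]) if len(parts) > 1 else None
        params.append((name, value))
    return params
```

Spaces inside values survive the round trip because both name and value are percent-quoted before joining on spaces.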
380 380
381 381 class unpackermixin(object):
382 382 """A mixin to extract bytes and struct data from a stream"""
383 383
384 384 def __init__(self, fp):
385 385 self._fp = fp
386 386
387 387 def _unpack(self, format):
388 388 """unpack this struct format from the stream"""
389 389 data = self._readexact(struct.calcsize(format))
390 390 return _unpack(format, data)
391 391
392 392 def _readexact(self, size):
393 393 """read exactly <size> bytes from the stream"""
394 394 return changegroup.readexactly(self._fp, size)
395 395
396 396
397 397 class unbundle20(unpackermixin):
398 398 """interpret a bundle2 stream
399 399
400 400 (this will eventually yield parts)"""
401 401
402 402 def __init__(self, ui, fp):
403 403 self.ui = ui
404 404 super(unbundle20, self).__init__(fp)
405 405 header = self._readexact(4)
406 406 magic, version = header[0:2], header[2:4]
407 407 if magic != 'HG':
408 408 raise util.Abort(_('not a Mercurial bundle'))
409 409 if version != '20':
410 410 raise util.Abort(_('unknown bundle version %s') % version)
411 411 self.ui.debug('start processing of %s stream\n' % header)
412 412
413 413 @util.propertycache
414 414 def params(self):
415 """dictionnary of stream level parameters"""
415 """dictionary of stream level parameters"""
416 416 self.ui.debug('reading bundle2 stream parameters\n')
417 417 params = {}
418 418 paramssize = self._unpack(_fstreamparamsize)[0]
419 419 if paramssize:
420 420 for p in self._readexact(paramssize).split(' '):
421 421 p = p.split('=', 1)
422 422 p = [urllib.unquote(i) for i in p]
423 423 if len(p) < 2:
424 424 p.append(None)
425 425 self._processparam(*p)
426 426 params[p[0]] = p[1]
427 427 return params
428 428
429 429 def _processparam(self, name, value):
430 430 """process a parameter, applying its effect if needed
431 431
432 432 Parameters starting with a lower case letter are advisory and will be
433 433 ignored when unknown. Those starting with an upper case letter are
434 434 mandatory, and this function will raise a KeyError when they are unknown.
435 435
436 436 Note: no options are currently supported. Any input will be either
437 437 ignored or failing.
438 438 """
439 439 if not name:
440 440 raise ValueError('empty parameter name')
441 441 if name[0] not in string.letters:
442 442 raise ValueError('non letter first character: %r' % name)
443 443 # Some logic will be later added here to try to process the option for
444 444 # a dict of known parameter.
445 445 if name[0].islower():
446 446 self.ui.debug("ignoring unknown parameter %r\n" % name)
447 447 else:
448 448 raise KeyError(name)
449 449
450 450
451 451 def __iter__(self):
452 452 """yield all parts contained in the stream"""
453 453 # make sure params have been loaded
454 454 self.params
455 455 self.ui.debug('start extraction of bundle2 parts\n')
456 456 headerblock = self._readpartheader()
457 457 while headerblock is not None:
458 458 part = unbundlepart(self.ui, headerblock, self._fp)
459 459 yield part
460 460 headerblock = self._readpartheader()
461 461 self.ui.debug('end of bundle2 stream\n')
462 462
463 463 def _readpartheader(self):
464 464 """reads a part header size and return the bytes blob
465 465
466 466 returns None if empty"""
467 467 headersize = self._unpack(_fpartheadersize)[0]
468 468 self.ui.debug('part header size: %i\n' % headersize)
469 469 if headersize:
470 470 return self._readexact(headersize)
471 471 return None
472 472
473 473
474 474 class bundlepart(object):
475 475 """A bundle2 part contains application level payload
476 476
477 477 The part `type` is used to route the part to the application level
478 478 handler.
479 479 """
480 480
481 481 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
482 482 data=''):
483 483 self.id = None
484 484 self.type = parttype
485 485 self.data = data
486 486 self.mandatoryparams = mandatoryparams
487 487 self.advisoryparams = advisoryparams
488 488
489 489 def getchunks(self):
490 490 #### header
491 491 ## parttype
492 492 header = [_pack(_fparttypesize, len(self.type)),
493 493 self.type, _pack(_fpartid, self.id),
494 494 ]
495 495 ## parameters
496 496 # count
497 497 manpar = self.mandatoryparams
498 498 advpar = self.advisoryparams
499 499 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
500 500 # size
501 501 parsizes = []
502 502 for key, value in manpar:
503 503 parsizes.append(len(key))
504 504 parsizes.append(len(value))
505 505 for key, value in advpar:
506 506 parsizes.append(len(key))
507 507 parsizes.append(len(value))
508 508 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
509 509 header.append(paramsizes)
510 510 # key, value
511 511 for key, value in manpar:
512 512 header.append(key)
513 513 header.append(value)
514 514 for key, value in advpar:
515 515 header.append(key)
516 516 header.append(value)
517 517 ## finalize header
518 518 headerchunk = ''.join(header)
519 519 yield _pack(_fpartheadersize, len(headerchunk))
520 520 yield headerchunk
521 521 ## payload
522 522 for chunk in self._payloadchunks():
523 523 yield _pack(_fpayloadsize, len(chunk))
524 524 yield chunk
525 525 # end of payload
526 526 yield _pack(_fpayloadsize, 0)
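The payload framing emitted at the end of `getchunks` (length-prefixed chunks closed by a zero-size chunk) can be exercised on its own. A minimal sketch, assuming plain byte strings and an arbitrary chunk size; the helper names are hypothetical:

```python
import io
import struct

def writepayload(out, data, chunksize=4096):
    """Emit <chunksize><chunkdata> frames, then a zero-size end marker."""
    for start in range(0, len(data), chunksize):
        chunk = data[start:start + chunksize]
        out.write(struct.pack('>I', len(chunk)))
        out.write(chunk)
    out.write(struct.pack('>I', 0))

def readpayload(fp):
    """Consume frames until the zero-size marker, returning the payload."""
    chunks = []
    while True:
        (size,) = struct.unpack('>I', fp.read(4))
        if size == 0:
            return b''.join(chunks)
        chunks.append(fp.read(size))
```

An empty payload produces only the zero-size marker, which the reader interprets as an empty byte string.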
527 527
528 528 def _payloadchunks(self):
529 529 """yield chunks of a the part payload
530 530
531 531 Exists to handle the different methods to provide data to a part."""
532 532 # we only support fixed size data now.
533 533 # This will be improved in the future.
534 534 if util.safehasattr(self.data, 'next'):
535 535 buff = util.chunkbuffer(self.data)
536 536 chunk = buff.read(preferedchunksize)
537 537 while chunk:
538 538 yield chunk
539 539 chunk = buff.read(preferedchunksize)
540 540 elif len(self.data):
541 541 yield self.data
542 542
543 543 class unbundlepart(unpackermixin):
544 544 """a bundle part read from a bundle"""
545 545
546 546 def __init__(self, ui, header, fp):
547 547 super(unbundlepart, self).__init__(fp)
548 548 self.ui = ui
549 549 # unbundle state attr
550 550 self._headerdata = header
551 551 self._headeroffset = 0
552 552 self._initialized = False
553 553 self.consumed = False
554 554 # part data
555 555 self.id = None
556 556 self.type = None
557 557 self.mandatoryparams = None
558 558 self.advisoryparams = None
559 559 self._payloadstream = None
560 560 self._readheader()
561 561
562 562 def _fromheader(self, size):
563 563 """return the next <size> byte from the header"""
564 564 offset = self._headeroffset
565 565 data = self._headerdata[offset:(offset + size)]
566 566 self._headeroffset = offset + size
567 567 return data
568 568
569 569 def _unpackheader(self, format):
570 570 """read given format from header
571 571
572 572 This automatically computes the size of the format to read."""
573 573 data = self._fromheader(struct.calcsize(format))
574 574 return _unpack(format, data)
575 575
576 576 def _readheader(self):
577 577 """read the header and setup the object"""
578 578 typesize = self._unpackheader(_fparttypesize)[0]
579 579 self.type = self._fromheader(typesize)
580 580 self.ui.debug('part type: "%s"\n' % self.type)
581 581 self.id = self._unpackheader(_fpartid)[0]
582 582 self.ui.debug('part id: "%s"\n' % self.id)
583 583 ## reading parameters
584 584 # param count
585 585 mancount, advcount = self._unpackheader(_fpartparamcount)
586 586 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
587 587 # param size
588 588 fparamsizes = _makefpartparamsizes(mancount + advcount)
589 589 paramsizes = self._unpackheader(fparamsizes)
590 590 # make it a list of couple again
591 591 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
592 592 # split mandatory from advisory
593 593 mansizes = paramsizes[:mancount]
594 594 advsizes = paramsizes[mancount:]
595 595 # retrieve param values
596 596 manparams = []
597 597 for key, value in mansizes:
598 598 manparams.append((self._fromheader(key), self._fromheader(value)))
599 599 advparams = []
600 600 for key, value in advsizes:
601 601 advparams.append((self._fromheader(key), self._fromheader(value)))
602 602 self.mandatoryparams = manparams
603 603 self.advisoryparams = advparams
604 604 ## part payload
605 605 def payloadchunks():
606 606 payloadsize = self._unpack(_fpayloadsize)[0]
607 607 self.ui.debug('payload chunk size: %i\n' % payloadsize)
608 608 while payloadsize:
609 609 yield self._readexact(payloadsize)
610 610 payloadsize = self._unpack(_fpayloadsize)[0]
611 611 self.ui.debug('payload chunk size: %i\n' % payloadsize)
612 612 self._payloadstream = util.chunkbuffer(payloadchunks())
613 613 # we read the data, tell it
614 614 self._initialized = True
615 615
616 616 def read(self, size=None):
617 617 """read payload data"""
618 618 if not self._initialized:
619 619 self._readheader()
620 620 if size is None:
621 621 data = self._payloadstream.read()
622 622 else:
623 623 data = self._payloadstream.read(size)
624 624 if size is None or len(data) < size:
625 625 self.consumed = True
626 626 return data
627 627
628 628
629 629 @parthandler('changegroup')
630 630 def handlechangegroup(op, inpart):
631 631 """apply a changegroup part on the repo
632 632
633 633 This is a very early implementation that will be massively reworked before
634 634 being inflicted on any end-user.
635 635 """
636 636 # Make sure we trigger a transaction creation
637 637 #
638 638 # The addchangegroup function will get a transaction object by itself, but
639 639 # we need to make sure we trigger the creation of a transaction object used
640 640 # for the whole processing scope.
641 641 op.gettransaction()
642 642 cg = changegroup.readbundle(inpart, 'bundle2part')
643 643 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
644 644 op.records.add('changegroup', {'return': ret})
645 645 if op.reply is not None:
646 646 # This is definitely not the final form of this
647 647 # return. But one needs to start somewhere.
648 648 part = bundlepart('reply:changegroup', (),
649 649 [('in-reply-to', str(inpart.id)),
650 650 ('return', '%i' % ret)])
651 651 op.reply.addpart(part)
652 652 assert not inpart.read()
653 653
654 654 @parthandler('reply:changegroup')
655 655 def handlechangegroup(op, inpart):
656 656 p = dict(inpart.advisoryparams)
657 657 ret = int(p['return'])
658 658 op.records.add('changegroup', {'return': ret}, int(p['in-reply-to']))
659 659
@@ -1,2361 +1,2361 b''
1 1 # cmdutil.py - help for command processing in mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from node import hex, nullid, nullrev, short
9 9 from i18n import _
10 10 import os, sys, errno, re, tempfile
11 11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
12 12 import match as matchmod
13 13 import context, repair, graphmod, revset, phases, obsolete, pathutil
14 14 import changelog
15 15 import bookmarks
16 16 import lock as lockmod
17 17
18 18 def parsealiases(cmd):
19 19 return cmd.lstrip("^").split("|")
20 20
21 21 def findpossible(cmd, table, strict=False):
22 22 """
23 23 Return cmd -> (aliases, command table entry)
24 24 for each matching command.
25 25 Return debug commands (or their aliases) only if no normal command matches.
26 26 """
27 27 choice = {}
28 28 debugchoice = {}
29 29
30 30 if cmd in table:
31 31 # short-circuit exact matches, "log" alias beats "^log|history"
32 32 keys = [cmd]
33 33 else:
34 34 keys = table.keys()
35 35
36 36 for e in keys:
37 37 aliases = parsealiases(e)
38 38 found = None
39 39 if cmd in aliases:
40 40 found = cmd
41 41 elif not strict:
42 42 for a in aliases:
43 43 if a.startswith(cmd):
44 44 found = a
45 45 break
46 46 if found is not None:
47 47 if aliases[0].startswith("debug") or found.startswith("debug"):
48 48 debugchoice[found] = (aliases, table[e])
49 49 else:
50 50 choice[found] = (aliases, table[e])
51 51
52 52 if not choice and debugchoice:
53 53 choice = debugchoice
54 54
55 55 return choice
56 56
57 57 def findcmd(cmd, table, strict=True):
58 58 """Return (aliases, command table entry) for command string."""
59 59 choice = findpossible(cmd, table, strict)
60 60
61 61 if cmd in choice:
62 62 return choice[cmd]
63 63
64 64 if len(choice) > 1:
65 65 clist = choice.keys()
66 66 clist.sort()
67 67 raise error.AmbiguousCommand(cmd, clist)
68 68
69 69 if choice:
70 70 return choice.values()[0]
71 71
72 72 raise error.UnknownCommand(cmd)
73 73
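The alias/prefix resolution in `findpossible`/`findcmd` above can be sketched standalone. This is a simplified, illustrative version only: the real code also segregates `debug*` commands into a second table and raises `AmbiguousCommand`/`UnknownCommand`, and the table entries and command names here are made up.

```python
# Simplified sketch of Mercurial's command resolution: an exact alias
# match wins, otherwise all unambiguous prefix matches are collected.

def parsealiases(cmd):
    return cmd.lstrip("^").split("|")

def resolve(cmd, table):
    """Return {matched_alias: table_entry} for every candidate."""
    matches = {}
    for entry in table:
        aliases = parsealiases(entry)
        if cmd in aliases:              # exact match short-circuits
            return {cmd: entry}
        for a in aliases:
            if a.startswith(cmd):       # unambiguous prefix match
                matches[a] = entry
                break
    return matches

table = ["^log|history", "^status|st", "summary|sum"]
print(resolve("st", table))          # {'st': '^status|st'}
print(sorted(resolve("s", table)))   # ['status', 'summary'] -- ambiguous
```

A result with more than one key corresponds to the `AmbiguousCommand` case; an empty result to `UnknownCommand`.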
74 74 def findrepo(p):
75 75 while not os.path.isdir(os.path.join(p, ".hg")):
76 76 oldp, p = p, os.path.dirname(p)
77 77 if p == oldp:
78 78 return None
79 79
80 80 return p
81 81
82 82 def bailifchanged(repo):
83 83 if repo.dirstate.p2() != nullid:
84 84 raise util.Abort(_('outstanding uncommitted merge'))
85 85 modified, added, removed, deleted = repo.status()[:4]
86 86 if modified or added or removed or deleted:
87 87 raise util.Abort(_('uncommitted changes'))
88 88 ctx = repo[None]
89 89 for s in sorted(ctx.substate):
90 90 if ctx.sub(s).dirty():
91 91 raise util.Abort(_("uncommitted changes in subrepo %s") % s)
92 92
93 93 def logmessage(ui, opts):
94 94 """get the log message according to the -m and -l options"""
95 95 message = opts.get('message')
96 96 logfile = opts.get('logfile')
97 97
98 98 if message and logfile:
99 99 raise util.Abort(_('options --message and --logfile are mutually '
100 100 'exclusive'))
101 101 if not message and logfile:
102 102 try:
103 103 if logfile == '-':
104 104 message = ui.fin.read()
105 105 else:
106 106 message = '\n'.join(util.readfile(logfile).splitlines())
107 107 except IOError, inst:
108 108 raise util.Abort(_("can't read commit message '%s': %s") %
109 109 (logfile, inst.strerror))
110 110 return message
111 111
112 112 def loglimit(opts):
113 113 """get the log limit according to option -l/--limit"""
114 114 limit = opts.get('limit')
115 115 if limit:
116 116 try:
117 117 limit = int(limit)
118 118 except ValueError:
119 119 raise util.Abort(_('limit must be a positive integer'))
120 120 if limit <= 0:
121 121 raise util.Abort(_('limit must be positive'))
122 122 else:
123 123 limit = None
124 124 return limit
125 125
126 126 def makefilename(repo, pat, node, desc=None,
127 127 total=None, seqno=None, revwidth=None, pathname=None):
128 128 node_expander = {
129 129 'H': lambda: hex(node),
130 130 'R': lambda: str(repo.changelog.rev(node)),
131 131 'h': lambda: short(node),
132 132 'm': lambda: re.sub('[^\w]', '_', str(desc))
133 133 }
134 134 expander = {
135 135 '%': lambda: '%',
136 136 'b': lambda: os.path.basename(repo.root),
137 137 }
138 138
139 139 try:
140 140 if node:
141 141 expander.update(node_expander)
142 142 if node:
143 143 expander['r'] = (lambda:
144 144 str(repo.changelog.rev(node)).zfill(revwidth or 0))
145 145 if total is not None:
146 146 expander['N'] = lambda: str(total)
147 147 if seqno is not None:
148 148 expander['n'] = lambda: str(seqno)
149 149 if total is not None and seqno is not None:
150 150 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
151 151 if pathname is not None:
152 152 expander['s'] = lambda: os.path.basename(pathname)
153 153 expander['d'] = lambda: os.path.dirname(pathname) or '.'
154 154 expander['p'] = lambda: pathname
155 155
156 156 newname = []
157 157 patlen = len(pat)
158 158 i = 0
159 159 while i < patlen:
160 160 c = pat[i]
161 161 if c == '%':
162 162 i += 1
163 163 c = pat[i]
164 164 c = expander[c]()
165 165 newname.append(c)
166 166 i += 1
167 167 return ''.join(newname)
168 168 except KeyError, inst:
169 169 raise util.Abort(_("invalid format spec '%%%s' in output filename") %
170 170 inst.args[0])
171 171
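The `%`-spec expansion loop in `makefilename` can be reproduced in isolation. The expander values below (repo name, hash, sequence number) are purely illustrative stand-ins for the lambdas built above, not real Mercurial data:

```python
# Minimal sketch of makefilename's expansion loop: each '%X' is
# replaced by calling expander['X'](); an unknown spec raises KeyError,
# which the real code turns into util.Abort.

def expandpat(pat, expander):
    newname = []
    i, patlen = 0, len(pat)
    while i < patlen:
        c = pat[i]
        if c == '%':
            i += 1
            c = expander[pat[i]]()   # look up and call the spec handler
        newname.append(c)
        i += 1
    return ''.join(newname)

expander = {
    '%': lambda: '%',                # '%%' escapes a literal percent
    'b': lambda: 'myrepo',           # repo basename (illustrative)
    'h': lambda: 'abcdef012345',     # short node hash (illustrative)
    'n': lambda: '03',               # zero-padded sequence number
}
print(expandpat('hg-%h.patch', expander))   # hg-abcdef012345.patch
print(expandpat('100%%', expander))         # 100%
```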
172 172 def makefileobj(repo, pat, node=None, desc=None, total=None,
173 173 seqno=None, revwidth=None, mode='wb', modemap=None,
174 174 pathname=None):
175 175
176 176 writable = mode not in ('r', 'rb')
177 177
178 178 if not pat or pat == '-':
179 179 fp = writable and repo.ui.fout or repo.ui.fin
180 180 if util.safehasattr(fp, 'fileno'):
181 181 return os.fdopen(os.dup(fp.fileno()), mode)
182 182 else:
183 183 # if this fp can't be duped properly, return
184 184 # a dummy object that can be closed
185 185 class wrappedfileobj(object):
186 186 noop = lambda x: None
187 187 def __init__(self, f):
188 188 self.f = f
189 189 def __getattr__(self, attr):
190 190 if attr == 'close':
191 191 return self.noop
192 192 else:
193 193 return getattr(self.f, attr)
194 194
195 195 return wrappedfileobj(fp)
196 196 if util.safehasattr(pat, 'write') and writable:
197 197 return pat
198 198 if util.safehasattr(pat, 'read') and 'r' in mode:
199 199 return pat
200 200 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
201 201 if modemap is not None:
202 202 mode = modemap.get(fn, mode)
203 203 if mode == 'wb':
204 204 modemap[fn] = 'ab'
205 205 return open(fn, mode)
206 206
207 207 def openrevlog(repo, cmd, file_, opts):
208 208 """opens the changelog, manifest, a filelog or a given revlog"""
209 209 cl = opts['changelog']
210 210 mf = opts['manifest']
211 211 msg = None
212 212 if cl and mf:
213 213 msg = _('cannot specify --changelog and --manifest at the same time')
214 214 elif cl or mf:
215 215 if file_:
216 216 msg = _('cannot specify filename with --changelog or --manifest')
217 217 elif not repo:
218 218 msg = _('cannot specify --changelog or --manifest '
219 219 'without a repository')
220 220 if msg:
221 221 raise util.Abort(msg)
222 222
223 223 r = None
224 224 if repo:
225 225 if cl:
226 226 r = repo.changelog
227 227 elif mf:
228 228 r = repo.manifest
229 229 elif file_:
230 230 filelog = repo.file(file_)
231 231 if len(filelog):
232 232 r = filelog
233 233 if not r:
234 234 if not file_:
235 235 raise error.CommandError(cmd, _('invalid arguments'))
236 236 if not os.path.isfile(file_):
237 237 raise util.Abort(_("revlog '%s' not found") % file_)
238 238 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
239 239 file_[:-2] + ".i")
240 240 return r
241 241
242 242 def copy(ui, repo, pats, opts, rename=False):
243 243 # called with the repo lock held
244 244 #
245 245 # hgsep => pathname that uses "/" to separate directories
246 246 # ossep => pathname that uses os.sep to separate directories
247 247 cwd = repo.getcwd()
248 248 targets = {}
249 249 after = opts.get("after")
250 250 dryrun = opts.get("dry_run")
251 251 wctx = repo[None]
252 252
253 253 def walkpat(pat):
254 254 srcs = []
255 255 badstates = after and '?' or '?r'
256 256 m = scmutil.match(repo[None], [pat], opts, globbed=True)
257 257 for abs in repo.walk(m):
258 258 state = repo.dirstate[abs]
259 259 rel = m.rel(abs)
260 260 exact = m.exact(abs)
261 261 if state in badstates:
262 262 if exact and state == '?':
263 263 ui.warn(_('%s: not copying - file is not managed\n') % rel)
264 264 if exact and state == 'r':
265 265 ui.warn(_('%s: not copying - file has been marked for'
266 266 ' remove\n') % rel)
267 267 continue
268 268 # abs: hgsep
269 269 # rel: ossep
270 270 srcs.append((abs, rel, exact))
271 271 return srcs
272 272
273 273 # abssrc: hgsep
274 274 # relsrc: ossep
275 275 # otarget: ossep
276 276 def copyfile(abssrc, relsrc, otarget, exact):
277 277 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
278 278 if '/' in abstarget:
279 279 # We cannot normalize abstarget itself, this would prevent
280 280 # case only renames, like a => A.
281 281 abspath, absname = abstarget.rsplit('/', 1)
282 282 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
283 283 reltarget = repo.pathto(abstarget, cwd)
284 284 target = repo.wjoin(abstarget)
285 285 src = repo.wjoin(abssrc)
286 286 state = repo.dirstate[abstarget]
287 287
288 288 scmutil.checkportable(ui, abstarget)
289 289
290 290 # check for collisions
291 291 prevsrc = targets.get(abstarget)
292 292 if prevsrc is not None:
293 293 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
294 294 (reltarget, repo.pathto(abssrc, cwd),
295 295 repo.pathto(prevsrc, cwd)))
296 296 return
297 297
298 298 # check for overwrites
299 299 exists = os.path.lexists(target)
300 300 samefile = False
301 301 if exists and abssrc != abstarget:
302 302 if (repo.dirstate.normalize(abssrc) ==
303 303 repo.dirstate.normalize(abstarget)):
304 304 if not rename:
305 305 ui.warn(_("%s: can't copy - same file\n") % reltarget)
306 306 return
307 307 exists = False
308 308 samefile = True
309 309
310 310 if not after and exists or after and state in 'mn':
311 311 if not opts['force']:
312 312 ui.warn(_('%s: not overwriting - file exists\n') %
313 313 reltarget)
314 314 return
315 315
316 316 if after:
317 317 if not exists:
318 318 if rename:
319 319 ui.warn(_('%s: not recording move - %s does not exist\n') %
320 320 (relsrc, reltarget))
321 321 else:
322 322 ui.warn(_('%s: not recording copy - %s does not exist\n') %
323 323 (relsrc, reltarget))
324 324 return
325 325 elif not dryrun:
326 326 try:
327 327 if exists:
328 328 os.unlink(target)
329 329 targetdir = os.path.dirname(target) or '.'
330 330 if not os.path.isdir(targetdir):
331 331 os.makedirs(targetdir)
332 332 if samefile:
333 333 tmp = target + "~hgrename"
334 334 os.rename(src, tmp)
335 335 os.rename(tmp, target)
336 336 else:
337 337 util.copyfile(src, target)
338 338 srcexists = True
339 339 except IOError, inst:
340 340 if inst.errno == errno.ENOENT:
341 341 ui.warn(_('%s: deleted in working copy\n') % relsrc)
342 342 srcexists = False
343 343 else:
344 344 ui.warn(_('%s: cannot copy - %s\n') %
345 345 (relsrc, inst.strerror))
346 346 return True # report a failure
347 347
348 348 if ui.verbose or not exact:
349 349 if rename:
350 350 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
351 351 else:
352 352 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
353 353
354 354 targets[abstarget] = abssrc
355 355
356 356 # fix up dirstate
357 357 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
358 358 dryrun=dryrun, cwd=cwd)
359 359 if rename and not dryrun:
360 360 if not after and srcexists and not samefile:
361 361 util.unlinkpath(repo.wjoin(abssrc))
362 362 wctx.forget([abssrc])
363 363
364 364 # pat: ossep
365 365 # dest: ossep
366 366 # srcs: list of (hgsep, hgsep, ossep, bool)
367 367 # return: function that takes hgsep and returns ossep
368 368 def targetpathfn(pat, dest, srcs):
369 369 if os.path.isdir(pat):
370 370 abspfx = pathutil.canonpath(repo.root, cwd, pat)
371 371 abspfx = util.localpath(abspfx)
372 372 if destdirexists:
373 373 striplen = len(os.path.split(abspfx)[0])
374 374 else:
375 375 striplen = len(abspfx)
376 376 if striplen:
377 377 striplen += len(os.sep)
378 378 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
379 379 elif destdirexists:
380 380 res = lambda p: os.path.join(dest,
381 381 os.path.basename(util.localpath(p)))
382 382 else:
383 383 res = lambda p: dest
384 384 return res
385 385
386 386 # pat: ossep
387 387 # dest: ossep
388 388 # srcs: list of (hgsep, hgsep, ossep, bool)
389 389 # return: function that takes hgsep and returns ossep
390 390 def targetpathafterfn(pat, dest, srcs):
391 391 if matchmod.patkind(pat):
392 392 # a mercurial pattern
393 393 res = lambda p: os.path.join(dest,
394 394 os.path.basename(util.localpath(p)))
395 395 else:
396 396 abspfx = pathutil.canonpath(repo.root, cwd, pat)
397 397 if len(abspfx) < len(srcs[0][0]):
398 398 # A directory. Either the target path contains the last
399 399 # component of the source path or it does not.
400 400 def evalpath(striplen):
401 401 score = 0
402 402 for s in srcs:
403 403 t = os.path.join(dest, util.localpath(s[0])[striplen:])
404 404 if os.path.lexists(t):
405 405 score += 1
406 406 return score
407 407
408 408 abspfx = util.localpath(abspfx)
409 409 striplen = len(abspfx)
410 410 if striplen:
411 411 striplen += len(os.sep)
412 412 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
413 413 score = evalpath(striplen)
414 414 striplen1 = len(os.path.split(abspfx)[0])
415 415 if striplen1:
416 416 striplen1 += len(os.sep)
417 417 if evalpath(striplen1) > score:
418 418 striplen = striplen1
419 419 res = lambda p: os.path.join(dest,
420 420 util.localpath(p)[striplen:])
421 421 else:
422 422 # a file
423 423 if destdirexists:
424 424 res = lambda p: os.path.join(dest,
425 425 os.path.basename(util.localpath(p)))
426 426 else:
427 427 res = lambda p: dest
428 428 return res
429 429
430 430
431 431 pats = scmutil.expandpats(pats)
432 432 if not pats:
433 433 raise util.Abort(_('no source or destination specified'))
434 434 if len(pats) == 1:
435 435 raise util.Abort(_('no destination specified'))
436 436 dest = pats.pop()
437 437 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
438 438 if not destdirexists:
439 439 if len(pats) > 1 or matchmod.patkind(pats[0]):
440 440 raise util.Abort(_('with multiple sources, destination must be an '
441 441 'existing directory'))
442 442 if util.endswithsep(dest):
443 443 raise util.Abort(_('destination %s is not a directory') % dest)
444 444
445 445 tfn = targetpathfn
446 446 if after:
447 447 tfn = targetpathafterfn
448 448 copylist = []
449 449 for pat in pats:
450 450 srcs = walkpat(pat)
451 451 if not srcs:
452 452 continue
453 453 copylist.append((tfn(pat, dest, srcs), srcs))
454 454 if not copylist:
455 455 raise util.Abort(_('no files to copy'))
456 456
457 457 errors = 0
458 458 for targetpath, srcs in copylist:
459 459 for abssrc, relsrc, exact in srcs:
460 460 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
461 461 errors += 1
462 462
463 463 if errors:
464 464 ui.warn(_('(consider using --after)\n'))
465 465
466 466 return errors != 0
467 467
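The `striplen` arithmetic in `targetpathfn` decides how much of a source path is kept under the destination. A minimal POSIX-path sketch of that one rule (the directory names are hypothetical, and the real function also handles patterns and the after-the-fact case):

```python
import os

def targetpath(dest, destdirexists, srcdir, src):
    """Map a path inside srcdir to its path under dest.

    When dest already exists as a directory, the source directory name
    is kept (copy *into* dest); when it does not, the source directory
    component is stripped (dest *becomes* the copied directory).
    """
    if destdirexists:
        striplen = len(os.path.split(srcdir)[0])
    else:
        striplen = len(srcdir)
    if striplen:
        striplen += len(os.sep)      # also strip the trailing separator
    return os.path.join(dest, src[striplen:])

# 'hg copy a newdir' where newdir does not exist yet:
print(targetpath('newdir', False, 'a', 'a/x/y'))    # newdir/x/y
# 'hg copy a existing' where existing is a directory:
print(targetpath('existing', True, 'a', 'a/x/y'))   # existing/a/x/y
```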
468 468 def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
469 469 runargs=None, appendpid=False):
470 470 '''Run a command as a service.'''
471 471
472 472 def writepid(pid):
473 473 if opts['pid_file']:
474 474 mode = appendpid and 'a' or 'w'
475 475 fp = open(opts['pid_file'], mode)
476 476 fp.write(str(pid) + '\n')
477 477 fp.close()
478 478
479 479 if opts['daemon'] and not opts['daemon_pipefds']:
480 480 # Signal child process startup with file removal
481 481 lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
482 482 os.close(lockfd)
483 483 try:
484 484 if not runargs:
485 485 runargs = util.hgcmd() + sys.argv[1:]
486 486 runargs.append('--daemon-pipefds=%s' % lockpath)
487 487 # Don't pass --cwd to the child process, because we've already
488 488 # changed directory.
489 489 for i in xrange(1, len(runargs)):
490 490 if runargs[i].startswith('--cwd='):
491 491 del runargs[i]
492 492 break
493 493 elif runargs[i].startswith('--cwd'):
494 494 del runargs[i:i + 2]
495 495 break
496 496 def condfn():
497 497 return not os.path.exists(lockpath)
498 498 pid = util.rundetached(runargs, condfn)
499 499 if pid < 0:
500 500 raise util.Abort(_('child process failed to start'))
501 501 writepid(pid)
502 502 finally:
503 503 try:
504 504 os.unlink(lockpath)
505 505 except OSError, e:
506 506 if e.errno != errno.ENOENT:
507 507 raise
508 508 if parentfn:
509 509 return parentfn(pid)
510 510 else:
511 511 return
512 512
513 513 if initfn:
514 514 initfn()
515 515
516 516 if not opts['daemon']:
517 517 writepid(os.getpid())
518 518
519 519 if opts['daemon_pipefds']:
520 520 lockpath = opts['daemon_pipefds']
521 521 try:
522 522 os.setsid()
523 523 except AttributeError:
524 524 pass
525 525 os.unlink(lockpath)
526 526 util.hidewindow()
527 527 sys.stdout.flush()
528 528 sys.stderr.flush()
529 529
530 530 nullfd = os.open(os.devnull, os.O_RDWR)
531 531 logfilefd = nullfd
532 532 if logfile:
533 533 logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
534 534 os.dup2(nullfd, 0)
535 535 os.dup2(logfilefd, 1)
536 536 os.dup2(logfilefd, 2)
537 537 if nullfd not in (0, 1, 2):
538 538 os.close(nullfd)
539 539 if logfile and logfilefd not in (0, 1, 2):
540 540 os.close(logfilefd)
541 541
542 542 if runfn:
543 543 return runfn()
544 544
545 545 def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
546 546 """Utility function used by commands.import to import a single patch
547 547
548 548 This function is explicitly defined here to help the evolve extension to
549 549 wrap this part of the import logic.
550 550
551 551 The API is currently a bit ugly because it is a simple code translation from
552 552 the import command. Feel free to make it better.
553 553
554 554 :hunk: a patch (as a binary string)
555 555 :parents: nodes that will be parent of the created commit
556 556 :opts: the full dict of options passed to the import command
557 557 :msgs: list to save commit message to.
558 558 (used in case we need to save it when failing)
559 559 :updatefunc: a function that updates a repo to a given node
560 560 updatefunc(<repo>, <node>)
561 561 """
562 562 tmpname, message, user, date, branch, nodeid, p1, p2 = \
563 563 patch.extract(ui, hunk)
564 564
565 565 editor = commiteditor
566 566 if opts.get('edit'):
567 567 editor = commitforceeditor
568 568 update = not opts.get('bypass')
569 569 strip = opts["strip"]
570 570 sim = float(opts.get('similarity') or 0)
571 571 if not tmpname:
572 572 return (None, None)
573 573 msg = _('applied to working directory')
574 574
575 575 try:
576 576 cmdline_message = logmessage(ui, opts)
577 577 if cmdline_message:
578 578 # pickup the cmdline msg
579 579 message = cmdline_message
580 580 elif message:
581 581 # pickup the patch msg
582 582 message = message.strip()
583 583 else:
584 584 # launch the editor
585 585 message = None
586 586 ui.debug('message:\n%s\n' % message)
587 587
588 588 if len(parents) == 1:
589 589 parents.append(repo[nullid])
590 590 if opts.get('exact'):
591 591 if not nodeid or not p1:
592 592 raise util.Abort(_('not a Mercurial patch'))
593 593 p1 = repo[p1]
594 594 p2 = repo[p2 or nullid]
595 595 elif p2:
596 596 try:
597 597 p1 = repo[p1]
598 598 p2 = repo[p2]
599 599 # Without any options, consider p2 only if the
600 600 # patch is being applied on top of the recorded
601 601 # first parent.
602 602 if p1 != parents[0]:
603 603 p1 = parents[0]
604 604 p2 = repo[nullid]
605 605 except error.RepoError:
606 606 p1, p2 = parents
607 607 else:
608 608 p1, p2 = parents
609 609
610 610 n = None
611 611 if update:
612 612 if p1 != parents[0]:
613 613 updatefunc(repo, p1.node())
614 614 if p2 != parents[1]:
615 615 repo.setparents(p1.node(), p2.node())
616 616
617 617 if opts.get('exact') or opts.get('import_branch'):
618 618 repo.dirstate.setbranch(branch or 'default')
619 619
620 620 files = set()
621 621 patch.patch(ui, repo, tmpname, strip=strip, files=files,
622 622 eolmode=None, similarity=sim / 100.0)
623 623 files = list(files)
624 624 if opts.get('no_commit'):
625 625 if message:
626 626 msgs.append(message)
627 627 else:
628 628 if opts.get('exact') or p2:
629 629 # If you got here, you either used --force and know what
630 630 # you are doing, or used --exact or a merge patch while
631 631 # being updated to its first parent.
632 632 m = None
633 633 else:
634 634 m = scmutil.matchfiles(repo, files or [])
635 635 n = repo.commit(message, opts.get('user') or user,
636 636 opts.get('date') or date, match=m,
637 637 editor=editor)
638 638 else:
639 639 if opts.get('exact') or opts.get('import_branch'):
640 640 branch = branch or 'default'
641 641 else:
642 642 branch = p1.branch()
643 643 store = patch.filestore()
644 644 try:
645 645 files = set()
646 646 try:
647 647 patch.patchrepo(ui, repo, p1, store, tmpname, strip,
648 648 files, eolmode=None)
649 649 except patch.PatchError, e:
650 650 raise util.Abort(str(e))
651 651 memctx = context.makememctx(repo, (p1.node(), p2.node()),
652 652 message,
653 653 opts.get('user') or user,
654 654 opts.get('date') or date,
655 655 branch, files, store,
656 656 editor=commiteditor)
657 657 repo.savecommitmessage(memctx.description())
658 658 n = memctx.commit()
659 659 finally:
660 660 store.close()
661 661 if opts.get('exact') and hex(n) != nodeid:
662 662 raise util.Abort(_('patch is damaged or loses information'))
663 663 if n:
664 664 # i18n: refers to a short changeset id
665 665 msg = _('created %s') % short(n)
666 666 return (msg, n)
667 667 finally:
668 668 os.unlink(tmpname)
669 669
670 670 def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
671 671 opts=None):
672 672 '''export changesets as hg patches.'''
673 673
674 674 total = len(revs)
675 675 revwidth = max([len(str(rev)) for rev in revs])
676 676 filemode = {}
677 677
678 678 def single(rev, seqno, fp):
679 679 ctx = repo[rev]
680 680 node = ctx.node()
681 681 parents = [p.node() for p in ctx.parents() if p]
682 682 branch = ctx.branch()
683 683 if switch_parent:
684 684 parents.reverse()
685 685 prev = (parents and parents[0]) or nullid
686 686
687 687 shouldclose = False
688 688 if not fp and len(template) > 0:
689 689 desc_lines = ctx.description().rstrip().split('\n')
690 690 desc = desc_lines[0] # Commit always has a first line.
691 691 fp = makefileobj(repo, template, node, desc=desc, total=total,
692 692 seqno=seqno, revwidth=revwidth, mode='wb',
693 693 modemap=filemode)
694 694 if fp != template:
695 695 shouldclose = True
696 696 if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
697 697 repo.ui.note("%s\n" % fp.name)
698 698
699 699 if not fp:
700 700 write = repo.ui.write
701 701 else:
702 702 def write(s, **kw):
703 703 fp.write(s)
704 704
705 705
706 706 write("# HG changeset patch\n")
707 707 write("# User %s\n" % ctx.user())
708 708 write("# Date %d %d\n" % ctx.date())
709 709 write("# %s\n" % util.datestr(ctx.date()))
710 710 if branch and branch != 'default':
711 711 write("# Branch %s\n" % branch)
712 712 write("# Node ID %s\n" % hex(node))
713 713 write("# Parent %s\n" % hex(prev))
714 714 if len(parents) > 1:
715 715 write("# Parent %s\n" % hex(parents[1]))
716 716 write(ctx.description().rstrip())
717 717 write("\n\n")
718 718
719 719 for chunk, label in patch.diffui(repo, prev, node, opts=opts):
720 720 write(chunk, label=label)
721 721
722 722 if shouldclose:
723 723 fp.close()
724 724
725 725 for seqno, rev in enumerate(revs):
726 726 single(rev, seqno + 1, fp)
727 727
728 728 def diffordiffstat(ui, repo, diffopts, node1, node2, match,
729 729 changes=None, stat=False, fp=None, prefix='',
730 730 listsubrepos=False):
731 731 '''show diff or diffstat.'''
732 732 if fp is None:
733 733 write = ui.write
734 734 else:
735 735 def write(s, **kw):
736 736 fp.write(s)
737 737
738 738 if stat:
739 739 diffopts = diffopts.copy(context=0)
740 740 width = 80
741 741 if not ui.plain():
742 742 width = ui.termwidth()
743 743 chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
744 744 prefix=prefix)
745 745 for chunk, label in patch.diffstatui(util.iterlines(chunks),
746 746 width=width,
747 747 git=diffopts.git):
748 748 write(chunk, label=label)
749 749 else:
750 750 for chunk, label in patch.diffui(repo, node1, node2, match,
751 751 changes, diffopts, prefix=prefix):
752 752 write(chunk, label=label)
753 753
754 754 if listsubrepos:
755 755 ctx1 = repo[node1]
756 756 ctx2 = repo[node2]
757 757 for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
758 758 tempnode2 = node2
759 759 try:
760 760 if node2 is not None:
761 761 tempnode2 = ctx2.substate[subpath][1]
762 762 except KeyError:
763 763 # A subrepo that existed in node1 was deleted between node1 and
764 764 # node2 (inclusive). Thus, ctx2's substate won't contain that
765 765 # subpath. The best we can do is to ignore it.
766 766 tempnode2 = None
767 767 submatch = matchmod.narrowmatcher(subpath, match)
768 768 sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
769 769 stat=stat, fp=fp, prefix=prefix)
770 770
771 771 class changeset_printer(object):
772 772 '''show changeset information when templating not requested.'''
773 773
774 774 def __init__(self, ui, repo, patch, diffopts, buffered):
775 775 self.ui = ui
776 776 self.repo = repo
777 777 self.buffered = buffered
778 778 self.patch = patch
779 779 self.diffopts = diffopts
780 780 self.header = {}
781 781 self.hunk = {}
782 782 self.lastheader = None
783 783 self.footer = None
784 784
785 785 def flush(self, rev):
786 786 if rev in self.header:
787 787 h = self.header[rev]
788 788 if h != self.lastheader:
789 789 self.lastheader = h
790 790 self.ui.write(h)
791 791 del self.header[rev]
792 792 if rev in self.hunk:
793 793 self.ui.write(self.hunk[rev])
794 794 del self.hunk[rev]
795 795 return 1
796 796 return 0
797 797
798 798 def close(self):
799 799 if self.footer:
800 800 self.ui.write(self.footer)
801 801
802 802 def show(self, ctx, copies=None, matchfn=None, **props):
803 803 if self.buffered:
804 804 self.ui.pushbuffer()
805 805 self._show(ctx, copies, matchfn, props)
806 806 self.hunk[ctx.rev()] = self.ui.popbuffer(labeled=True)
807 807 else:
808 808 self._show(ctx, copies, matchfn, props)
809 809
810 810 def _show(self, ctx, copies, matchfn, props):
811 811 '''show a single changeset or file revision'''
812 812 changenode = ctx.node()
813 813 rev = ctx.rev()
814 814
815 815 if self.ui.quiet:
816 816 self.ui.write("%d:%s\n" % (rev, short(changenode)),
817 817 label='log.node')
818 818 return
819 819
820 820 log = self.repo.changelog
821 821 date = util.datestr(ctx.date())
822 822
823 823 hexfunc = self.ui.debugflag and hex or short
824 824
825 825 parents = [(p, hexfunc(log.node(p)))
826 826 for p in self._meaningful_parentrevs(log, rev)]
827 827
828 828 # i18n: column positioning for "hg log"
829 829 self.ui.write(_("changeset: %d:%s\n") % (rev, hexfunc(changenode)),
830 830 label='log.changeset changeset.%s' % ctx.phasestr())
831 831
832 832 branch = ctx.branch()
833 833 # don't show the default branch name
834 834 if branch != 'default':
835 835 # i18n: column positioning for "hg log"
836 836 self.ui.write(_("branch: %s\n") % branch,
837 837 label='log.branch')
838 838 for bookmark in self.repo.nodebookmarks(changenode):
839 839 # i18n: column positioning for "hg log"
840 840 self.ui.write(_("bookmark: %s\n") % bookmark,
841 841 label='log.bookmark')
842 842 for tag in self.repo.nodetags(changenode):
843 843 # i18n: column positioning for "hg log"
844 844 self.ui.write(_("tag: %s\n") % tag,
845 845 label='log.tag')
846 846 if self.ui.debugflag and ctx.phase():
847 847 # i18n: column positioning for "hg log"
848 848 self.ui.write(_("phase: %s\n") % _(ctx.phasestr()),
849 849 label='log.phase')
850 850 for parent in parents:
851 851 # i18n: column positioning for "hg log"
852 852 self.ui.write(_("parent: %d:%s\n") % parent,
853 853 label='log.parent changeset.%s' % ctx.phasestr())
854 854
855 855 if self.ui.debugflag:
856 856 mnode = ctx.manifestnode()
857 857 # i18n: column positioning for "hg log"
858 858 self.ui.write(_("manifest: %d:%s\n") %
859 859 (self.repo.manifest.rev(mnode), hex(mnode)),
860 860 label='ui.debug log.manifest')
861 861 # i18n: column positioning for "hg log"
862 862 self.ui.write(_("user: %s\n") % ctx.user(),
863 863 label='log.user')
864 864 # i18n: column positioning for "hg log"
865 865 self.ui.write(_("date: %s\n") % date,
866 866 label='log.date')
867 867
868 868 if self.ui.debugflag:
869 869 files = self.repo.status(log.parents(changenode)[0], changenode)[:3]
870 870 for key, value in zip([# i18n: column positioning for "hg log"
871 871 _("files:"),
872 872 # i18n: column positioning for "hg log"
873 873 _("files+:"),
874 874 # i18n: column positioning for "hg log"
875 875 _("files-:")], files):
876 876 if value:
877 877 self.ui.write("%-12s %s\n" % (key, " ".join(value)),
878 878 label='ui.debug log.files')
879 879 elif ctx.files() and self.ui.verbose:
880 880 # i18n: column positioning for "hg log"
881 881 self.ui.write(_("files: %s\n") % " ".join(ctx.files()),
882 882 label='ui.note log.files')
883 883 if copies and self.ui.verbose:
884 884 copies = ['%s (%s)' % c for c in copies]
885 885 # i18n: column positioning for "hg log"
886 886 self.ui.write(_("copies: %s\n") % ' '.join(copies),
887 887 label='ui.note log.copies')
888 888
889 889 extra = ctx.extra()
890 890 if extra and self.ui.debugflag:
891 891 for key, value in sorted(extra.items()):
892 892 # i18n: column positioning for "hg log"
893 893 self.ui.write(_("extra: %s=%s\n")
894 894 % (key, value.encode('string_escape')),
895 895 label='ui.debug log.extra')
896 896
897 897 description = ctx.description().strip()
898 898 if description:
899 899 if self.ui.verbose:
900 900 self.ui.write(_("description:\n"),
901 901 label='ui.note log.description')
902 902 self.ui.write(description,
903 903 label='ui.note log.description')
904 904 self.ui.write("\n\n")
905 905 else:
906 906 # i18n: column positioning for "hg log"
907 907 self.ui.write(_("summary: %s\n") %
908 908 description.splitlines()[0],
909 909 label='log.summary')
910 910 self.ui.write("\n")
911 911
912 912 self.showpatch(changenode, matchfn)
913 913
914 914 def showpatch(self, node, matchfn):
915 915 if not matchfn:
916 916 matchfn = self.patch
917 917 if matchfn:
918 918 stat = self.diffopts.get('stat')
919 919 diff = self.diffopts.get('patch')
920 920 diffopts = patch.diffopts(self.ui, self.diffopts)
921 921 prev = self.repo.changelog.parents(node)[0]
922 922 if stat:
923 923 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
924 924 match=matchfn, stat=True)
925 925 if diff:
926 926 if stat:
927 927 self.ui.write("\n")
928 928 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
929 929 match=matchfn, stat=False)
930 930 self.ui.write("\n")
931 931
932 932 def _meaningful_parentrevs(self, log, rev):
933 933 """Return list of meaningful (or all if debug) parentrevs for rev.
934 934
935 935 For merges (two non-nullrev revisions) both parents are meaningful.
936 936 Otherwise the first parent revision is considered meaningful if it
937 937 is not the preceding revision.
938 938 """
939 939 parents = log.parentrevs(rev)
940 940 if not self.ui.debugflag and parents[1] == nullrev:
941 941 if parents[0] >= rev - 1:
942 942 parents = []
943 943 else:
944 944 parents = [parents[0]]
945 945 return parents
946 946
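The "meaningful parents" rule documented above (hide the first parent when it is just the immediately preceding revision in linear history) is small enough to sketch standalone, outside the changelog object:

```python
# Sketch of changeset_printer._meaningful_parentrevs, assuming parents
# is the (p1, p2) revision-number pair as returned by log.parentrevs().

NULLREV = -1

def meaningful_parentrevs(parents, rev, debug=False):
    """Return the parent revisions worth displaying for rev."""
    if not debug and parents[1] == NULLREV:
        if parents[0] >= rev - 1:
            return []                # linear history: parent is implicit
        return [parents[0]]          # a jump backwards is worth showing
    return list(parents)             # merges (and --debug) show both

print(meaningful_parentrevs((41, -1), 42))   # []
print(meaningful_parentrevs((30, -1), 42))   # [30]
print(meaningful_parentrevs((30, 35), 42))   # [30, 35]
```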
947 947
948 948 class changeset_templater(changeset_printer):
949 949 '''format changeset information.'''
950 950
951 951 def __init__(self, ui, repo, patch, diffopts, tmpl, mapfile, buffered):
952 952 changeset_printer.__init__(self, ui, repo, patch, diffopts, buffered)
953 953 formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
954 954 defaulttempl = {
955 955 'parent': '{rev}:{node|formatnode} ',
956 956 'manifest': '{rev}:{node|formatnode}',
957 957 'file_copy': '{name} ({source})',
958 958 'extra': '{key}={value|stringescape}'
959 959 }
960 960 # filecopy is preserved for compatibility reasons
961 961 defaulttempl['filecopy'] = defaulttempl['file_copy']
962 962 self.t = templater.templater(mapfile, {'formatnode': formatnode},
963 963 cache=defaulttempl)
964 964 if tmpl:
965 965 self.t.cache['changeset'] = tmpl
966 966
967 967 self.cache = {}
968 968
969 969 def _meaningful_parentrevs(self, ctx):
970 970 """Return list of meaningful (or all if debug) parentrevs for rev.
971 971 """
972 972 parents = ctx.parents()
973 973 if len(parents) > 1:
974 974 return parents
975 975 if self.ui.debugflag:
976 976 return [parents[0], self.repo['null']]
977 977 if parents[0].rev() >= ctx.rev() - 1:
978 978 return []
979 979 return parents
980 980
981 981 def _show(self, ctx, copies, matchfn, props):
982 982 '''show a single changeset or file revision'''
983 983
984 984 showlist = templatekw.showlist
985 985
986 986 # showparents() behaviour depends on ui trace level which
987 987 # causes unexpected behaviours at templating level and makes
988 988 # it harder to extract it in a standalone function. Its
989 989 # behaviour cannot be changed so leave it here for now.
990 990 def showparents(**args):
991 991 ctx = args['ctx']
992 992 parents = [[('rev', p.rev()), ('node', p.hex())]
993 993 for p in self._meaningful_parentrevs(ctx)]
994 994 return showlist('parent', parents, **args)
995 995
996 996 props = props.copy()
997 997 props.update(templatekw.keywords)
998 998 props['parents'] = showparents
999 999 props['templ'] = self.t
1000 1000 props['ctx'] = ctx
1001 1001 props['repo'] = self.repo
1002 1002 props['revcache'] = {'copies': copies}
1003 1003 props['cache'] = self.cache
1004 1004
1005 1005 # find correct templates for current mode
1006 1006
1007 1007 tmplmodes = [
1008 1008 (True, None),
1009 1009 (self.ui.verbose, 'verbose'),
1010 1010 (self.ui.quiet, 'quiet'),
1011 1011 (self.ui.debugflag, 'debug'),
1012 1012 ]
1013 1013
1014 1014 types = {'header': '', 'footer':'', 'changeset': 'changeset'}
1015 1015 for mode, postfix in tmplmodes:
1016 1016 for type in types:
1017 1017 cur = postfix and ('%s_%s' % (type, postfix)) or type
1018 1018 if mode and cur in self.t:
1019 1019 types[type] = cur
1020 1020
1021 1021 try:
1022 1022
1023 1023 # write header
1024 1024 if types['header']:
1025 1025 h = templater.stringify(self.t(types['header'], **props))
1026 1026 if self.buffered:
1027 1027 self.header[ctx.rev()] = h
1028 1028 else:
1029 1029 if self.lastheader != h:
1030 1030 self.lastheader = h
1031 1031 self.ui.write(h)
1032 1032
1033 1033 # write changeset metadata, then patch if requested
1034 1034 key = types['changeset']
1035 1035 self.ui.write(templater.stringify(self.t(key, **props)))
1036 1036 self.showpatch(ctx.node(), matchfn)
1037 1037
1038 1038 if types['footer']:
1039 1039 if not self.footer:
1040 1040 self.footer = templater.stringify(self.t(types['footer'],
1041 1041 **props))
1042 1042
1043 1043 except KeyError, inst:
1044 1044 msg = _("%s: no key named '%s'")
1045 1045 raise util.Abort(msg % (self.t.mapfile, inst.args[0]))
1046 1046 except SyntaxError, inst:
1047 1047 raise util.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))
1048 1048
1049 1049 def gettemplate(ui, tmpl, style):
1050 1050 """
1051 1051 Find the template matching the given template spec or style.
1052 1052 """
1053 1053
1054 1054 # ui settings
1055 1055 if not tmpl and not style:
1056 1056 tmpl = ui.config('ui', 'logtemplate')
1057 1057 if tmpl:
1058 1058 try:
1059 1059 tmpl = templater.parsestring(tmpl)
1060 1060 except SyntaxError:
1061 1061 tmpl = templater.parsestring(tmpl, quoted=False)
1062 1062 return tmpl, None
1063 1063 else:
1064 1064 style = util.expandpath(ui.config('ui', 'style', ''))
1065 1065
1066 1066 if style:
1067 1067 mapfile = style
1068 1068 if not os.path.split(mapfile)[0]:
1069 1069 mapname = (templater.templatepath('map-cmdline.' + mapfile)
1070 1070 or templater.templatepath(mapfile))
1071 1071 if mapname:
1072 1072 mapfile = mapname
1073 1073 return None, mapfile
1074 1074
1075 1075 if not tmpl:
1076 1076 return None, None
1077 1077
1078 1078 # looks like a literal template?
1079 1079 if '{' in tmpl:
1080 1080 return tmpl, None
1081 1081
1082 1082 # perhaps a stock style?
1083 1083 if not os.path.split(tmpl)[0]:
1084 1084 mapname = (templater.templatepath('map-cmdline.' + tmpl)
1085 1085 or templater.templatepath(tmpl))
1086 1086 if mapname and os.path.isfile(mapname):
1087 1087 return None, mapname
1088 1088
1089 1089 # perhaps it's a reference to [templates]
1090 1090 t = ui.config('templates', tmpl)
1091 1091 if t:
1092 1092 try:
1093 1093 tmpl = templater.parsestring(t)
1094 1094 except SyntaxError:
1095 1095 tmpl = templater.parsestring(t, quoted=False)
1096 1096 return tmpl, None
1097 1097
1098 1098 # perhaps it's a path to a map or a template
1099 1099 if ('/' in tmpl or '\\' in tmpl) and os.path.isfile(tmpl):
1100 1100 # is it a mapfile for a style?
1101 1101 if os.path.basename(tmpl).startswith("map-"):
1102 1102 return None, os.path.realpath(tmpl)
1103 1103 tmpl = open(tmpl).read()
1104 1104 return tmpl, None
1105 1105
1106 1106 # constant string?
1107 1107 return tmpl, None
1108 1108
1109 1109 def show_changeset(ui, repo, opts, buffered=False):
1110 1110 """show one changeset using template or regular display.
1111 1111
1112 1112 Display format will be the first non-empty hit of:
1113 1113 1. option 'template'
1114 1114 2. option 'style'
1115 1115 3. [ui] setting 'logtemplate'
1116 1116 4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changeset_printer() is done.
    """
    # options
    patch = None
    if opts.get('patch') or opts.get('stat'):
        patch = scmutil.matchall(repo)

    tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))

    if not tmpl and not mapfile:
        return changeset_printer(ui, repo, patch, opts, buffered)

    try:
        t = changeset_templater(ui, repo, patch, opts, tmpl, mapfile, buffered)
    except SyntaxError, inst:
        raise util.Abort(inst.args[0])
    return t

def showmarker(ui, marker):
    """utility function to display obsolescence marker in a readable way

    To be used by debug function."""
    ui.write(hex(marker.precnode()))
    for repl in marker.succnodes():
        ui.write(' ')
        ui.write(hex(repl))
    ui.write(' %X ' % marker._data[2])
    ui.write('{%s}' % (', '.join('%r: %r' % t for t in
                                 sorted(marker.metadata().items()))))
    ui.write('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = util.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, util.datestr(results[rev])))
            return str(rev)

    raise util.Abort(_("revision matching date not found"))

def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

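# The growth pattern of increasingwindows() is easy to see in isolation.
# A minimal standalone sketch (not part of this module; the generator is
# copied out so it runs without any Mercurial imports): window sizes start
# small so the first results appear quickly, then double up to the cap.

```python
from itertools import islice

def increasingwindows(windowsize=8, sizelimit=512):
    # identical to the cmdutil helper: yield the current window size,
    # doubling it until sizelimit is reached, then repeat the cap forever
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

sizes = list(islice(increasingwindows(), 9))
print(sizes)  # [8, 16, 32, 64, 128, 256, 512, 512, 512]
```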
class FileWalkError(Exception):
    pass

def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)
    def filerevgen(filelog, last):
        """
        Only files, no patterns. Check the history of each file.

        Examines filelog entries within minrev, maxrev linkrev range
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order
        """
        cl_count = len(repo)
        revs = []
        for j in xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield rev for which we have the changelog, it can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)
    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise util.Abort(_('cannot follow file not in parent '
                                       'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise util.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = set([filelog.linkrev(last)])

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

def walkchangerevs(repo, match, opts, prepare):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an iterator yielding contexts. Before
    yielding each context, the iterator will first call the prepare
    function on each context in the window in forward order.'''

    follow = opts.get('follow') or opts.get('follow_first')

    if opts.get('rev'):
        revs = scmutil.revrange(repo, opts.get('rev'))
    elif follow:
        revs = repo.revs('reverse(:.)')
    else:
        revs = revset.spanset(repo)
        revs.reverse()
    if not revs:
        return []
    wanted = set()
    slowpath = match.anypats() or (match.files() and opts.get('removed'))
    fncache = {}
    change = repo.changectx

    # First step is to fill wanted, the set of revisions that we want to yield.
    # When it does not induce extra cost, we also fill fncache for revisions in
    # wanted: a cache of filenames that were changed (ctx.files()) and that
    # match the file filtering conditions.

    if not slowpath and not match.files():
        # No files, no patterns. Display all revs.
        wanted = revs

    if not slowpath and match.files():
        # We only have to read through the filelog to find wanted revisions

        try:
            wanted = walkfilerevs(repo, match, follow, revs, fncache)
        except FileWalkError:
            slowpath = True

            # We decided to fall back to the slowpath because at least one
            # of the paths was not a file. Check to see if at least one of them
            # existed in history, otherwise simply return
            for path in match.files():
                if path == '.' or path in repo.store:
                    break
            else:
                return []

    if slowpath:
        # We have to read the changelog to match filenames against
        # changed files

        if follow:
            raise util.Abort(_('can only follow copies/renames for explicit '
                               'filenames'))

        # The slow path checks files modified in every changeset.
        # This is really slow on large repos, so compute the set lazily.
        class lazywantedset(object):
            def __init__(self):
                self.set = set()
                self.revs = set(revs)

            # No need to worry about locality here because it will be accessed
            # in the same order as the increasing window below.
            def __contains__(self, value):
                if value in self.set:
                    return True
                elif not value in self.revs:
                    return False
                else:
                    self.revs.discard(value)
                    ctx = change(value)
                    matches = filter(match, ctx.files())
                    if matches:
                        fncache[value] = matches
                        self.set.add(value)
                        return True
                    return False

            def discard(self, value):
                self.revs.discard(value)
                self.set.discard(value)

        wanted = lazywantedset()

    class followfilter(object):
        def __init__(self, onlyfirst=False):
            self.startrev = nullrev
            self.roots = set()
            self.onlyfirst = onlyfirst

        def match(self, rev):
            def realparents(rev):
                if self.onlyfirst:
                    return repo.changelog.parentrevs(rev)[0:1]
                else:
                    return filter(lambda x: x != nullrev,
                                  repo.changelog.parentrevs(rev))

            if self.startrev == nullrev:
                self.startrev = rev
                return True

            if rev > self.startrev:
                # forward: all descendants
                if not self.roots:
                    self.roots.add(self.startrev)
                for parent in realparents(rev):
                    if parent in self.roots:
                        self.roots.add(rev)
                        return True
            else:
                # backwards: all parents
                if not self.roots:
                    self.roots.update(realparents(self.startrev))
                if rev in self.roots:
                    self.roots.remove(rev)
                    self.roots.update(realparents(rev))
                    return True

            return False

    # it might be worthwhile to do this in the iterator if the rev range
    # is descending and the prune args are all within that range
    for rev in opts.get('prune', ()):
        rev = repo[rev].rev()
        ff = followfilter()
        stop = min(revs[0], revs[-1])
        for x in xrange(rev, stop - 1, -1):
            if ff.match(x):
                wanted = wanted - [x]

    # Now that wanted is correctly initialized, we can iterate over the
    # revision range, yielding only revisions in wanted.
    def iterate():
        if follow and not match.files():
            ff = followfilter(onlyfirst=opts.get('follow_first'))
            def want(rev):
                return ff.match(rev) and rev in wanted
        else:
            def want(rev):
                return rev in wanted

        it = iter(revs)
        stopiteration = False
        for windowsize in increasingwindows():
            nrevs = []
            for i in xrange(windowsize):
                try:
                    rev = it.next()
                    if want(rev):
                        nrevs.append(rev)
                except (StopIteration):
                    stopiteration = True
                    break
            for rev in sorted(nrevs):
                fns = fncache.get(rev)
                ctx = change(rev)
                if not fns:
                    def fns_generator():
                        for f in ctx.files():
                            if match(f):
                                yield f
                    fns = fns_generator()
                prepare(ctx, fns)
            for rev in nrevs:
                yield change(rev)

            if stopiteration:
                break

    return iterate()

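# The windowed iteration that walkchangerevs()'s docstring describes can be
# sketched without any Mercurial objects. The windowed() helper below is
# hypothetical (it is not part of cmdutil.py): plain ints stand in for
# revisions and the want predicate stands in for the wanted set. Each
# window is gathered forwards (where prepare() would run), then yielded in
# the caller's order, usually newest-first.

```python
def windowed(revs, want, windowsize=2, sizelimit=8):
    it = iter(revs)
    while True:
        window = []
        exhausted = False
        for _ in range(windowsize):
            try:
                rev = next(it)
            except StopIteration:
                exhausted = True
                break
            if want(rev):
                window.append(rev)
        for rev in sorted(window):
            pass  # walkchangerevs() calls prepare(ctx, fns) here, forward order
        for rev in window:
            yield rev  # then yields in the caller's (here: descending) order
        if exhausted:
            return
        if windowsize < sizelimit:
            windowsize *= 2

revs = list(range(10, 0, -1))  # newest-first, like 'hg log'
print(list(windowed(revs, lambda r: r % 2 == 0)))  # [10, 8, 6, 4, 2]
```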
def _makegraphfilematcher(repo, pats, followfirst):
    # When displaying a revision with --patch --follow FILE, we have
    # to know which file of the revision must be diffed. With
    # --follow, we want the names of the ancestors of FILE in the
    # revision, stored in "fcache". "fcache" is populated by
    # reproducing the graph traversal already done by --follow revset
    # and relating linkrevs to file names (which is not "correct" but
    # good enough).
    fcache = {}
    fcacheready = [False]
    pctx = repo['.']
    wctx = repo[None]

    def populate():
        for fn in pats:
            for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
                for c in i:
                    fcache.setdefault(c.linkrev(), set()).add(c.path())

    def filematcher(rev):
        if not fcacheready[0]:
            # Lazy initialization
            fcacheready[0] = True
            populate()
        return scmutil.match(wctx, fcache.get(rev, []), default='path')

    return filematcher

def _makegraphlogrevset(repo, pats, opts, revs):
    """Return (expr, filematcher) where expr is a revset string built
    from log options and file patterns or None. If --stat or --patch
    are not passed filematcher is None. Otherwise it is a callable
    taking a revision number and returning a match object filtering
    the files to be detailed when displaying the revision.
    """
    opt2revset = {
        'no_merges': ('not merge()', None),
        'only_merges': ('merge()', None),
        '_ancestors': ('ancestors(%(val)s)', None),
        '_fancestors': ('_firstancestors(%(val)s)', None),
        '_descendants': ('descendants(%(val)s)', None),
        '_fdescendants': ('_firstdescendants(%(val)s)', None),
        '_matchfiles': ('_matchfiles(%(val)s)', None),
        'date': ('date(%(val)r)', None),
        'branch': ('branch(%(val)r)', ' or '),
        '_patslog': ('filelog(%(val)r)', ' or '),
        '_patsfollow': ('follow(%(val)r)', ' or '),
        '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
        'keyword': ('keyword(%(val)r)', ' or '),
        'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
        'user': ('user(%(val)r)', ' or '),
        }

    opts = dict(opts)
    # follow or not follow?
    follow = opts.get('follow') or opts.get('follow_first')
    followfirst = opts.get('follow_first') and 1 or 0
    # --follow with FILE behaviour depends on revs...
    it = iter(revs)
    startrev = it.next()
    try:
        followdescendants = startrev < it.next()
    except (StopIteration):
        followdescendants = False

    # branch and only_branch are really aliases and must be handled at
    # the same time
    opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
    opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
    # pats/include/exclude are passed to match.match() directly in
    # _matchfiles() revset but walkchangerevs() builds its matcher with
    # scmutil.match(). The difference is input pats are globbed on
    # platforms without shell expansion (windows).
    pctx = repo[None]
    match, pats = scmutil.matchandpats(pctx, pats, opts)
    slowpath = match.anypats() or (match.files() and opts.get('removed'))
    if not slowpath:
        for f in match.files():
            if follow and f not in pctx:
                raise util.Abort(_('cannot follow file not in parent '
                                   'revision: "%s"') % f)
            filelog = repo.file(f)
            if not filelog:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise util.Abort(
                        _('cannot follow nonexistent file: "%s"') % f)
                slowpath = True

    # We decided to fall back to the slowpath because at least one
    # of the paths was not a file. Check to see if at least one of them
    # existed in history - in that case, we'll continue down the
    # slowpath; otherwise, we can turn off the slowpath
    if slowpath:
        for path in match.files():
            if path == '.' or path in repo.store:
                break
        else:
            slowpath = False

    if slowpath:
        # See walkchangerevs() slow path.
        #
        if follow:
            raise util.Abort(_('can only follow copies/renames for explicit '
                               'filenames'))
        # pats/include/exclude cannot be represented as separate
        # revset expressions as their filtering logic applies at file
        # level. For instance "-I a -X a" matches a revision touching
        # "a" and "b" while "file(a) and not file(b)" does
        # not. Besides, filesets are evaluated against the working
        # directory.
        matchargs = ['r:', 'd:relpath']
        for p in pats:
            matchargs.append('p:' + p)
        for p in opts.get('include', []):
            matchargs.append('i:' + p)
        for p in opts.get('exclude', []):
            matchargs.append('x:' + p)
        matchargs = ','.join(('%r' % p) for p in matchargs)
        opts['_matchfiles'] = matchargs
    else:
        if follow:
            fpats = ('_patsfollow', '_patsfollowfirst')
            fnopats = (('_ancestors', '_fancestors'),
                       ('_descendants', '_fdescendants'))
            if pats:
                # follow() revset interprets its file argument as a
                # manifest entry, so use match.files(), not pats.
                opts[fpats[followfirst]] = list(match.files())
            else:
                opts[fnopats[followdescendants][followfirst]] = str(startrev)
        else:
            opts['_patslog'] = list(pats)

    filematcher = None
    if opts.get('patch') or opts.get('stat'):
        if follow:
            filematcher = _makegraphfilematcher(repo, pats, followfirst)
        else:
            filematcher = lambda rev: match

    expr = []
    for op, val in opts.iteritems():
        if not val:
            continue
        if op not in opt2revset:
            continue
        revop, andor = opt2revset[op]
        if '%(val)' not in revop:
            expr.append(revop)
        else:
            if not isinstance(val, list):
                e = revop % {'val': val}
            else:
                e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
            expr.append(e)

    if expr:
        expr = '(' + ' and '.join(expr) + ')'
    else:
        expr = None
    return expr, filematcher

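# The expression-building tail of _makegraphlogrevset() can be exercised on
# its own. The sketch below is hypothetical (buildexpr() and the trimmed
# opt2revset table are illustrations, not module API): each option maps to a
# revset template plus a joiner for list-valued options, and the pieces are
# 'and'-ed together.

```python
# toy subset of the real opt2revset table
opt2revset = {
    'no_merges': ('not merge()', None),
    'keyword': ('keyword(%(val)r)', ' or '),
    'user': ('user(%(val)r)', ' or '),
}

def buildexpr(opts):
    expr = []
    for op, val in sorted(opts.items()):  # sorted for a deterministic result
        if not val or op not in opt2revset:
            continue
        revop, andor = opt2revset[op]
        if '%(val)' not in revop:
            expr.append(revop)  # flag option: template used verbatim
        elif not isinstance(val, list):
            expr.append(revop % {'val': val})
        else:
            # list-valued option: join each value with the option's joiner
            expr.append('(' + andor.join(revop % {'val': v} for v in val) + ')')
    return '(' + ' and '.join(expr) + ')' if expr else None

print(buildexpr({'no_merges': True, 'user': ['alice', 'bob']}))
# (not merge() and (user('alice') or user('bob')))
```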
def getgraphlogrevs(repo, pats, opts):
    """Return (revs, expr, filematcher) where revs is an iterable of
    revision numbers, expr is a revset string built from log options
    and file patterns or None, and used to filter 'revs'. If --stat or
    --patch are not passed filematcher is None. Otherwise it is a
    callable taking a revision number and returning a match object
    filtering the files to be detailed when displaying the revision.
    """
    if not len(repo):
        return [], None, None
    limit = loglimit(opts)
    # Default --rev value depends on --follow but --follow behaviour
    # depends on revisions resolved from --rev...
    follow = opts.get('follow') or opts.get('follow_first')
    possiblyunsorted = False # whether revs might need sorting
    if opts.get('rev'):
        revs = scmutil.revrange(repo, opts['rev'])
        # Don't sort here because _makegraphlogrevset might depend on the
        # order of revs
        possiblyunsorted = True
    else:
        if follow and len(repo) > 0:
            revs = repo.revs('reverse(:.)')
        else:
            revs = revset.spanset(repo)
            revs.reverse()
    if not revs:
        return revset.baseset(), None, None
    expr, filematcher = _makegraphlogrevset(repo, pats, opts, revs)
    if possiblyunsorted:
        revs.sort(reverse=True)
    if expr:
        # Revset matchers often operate faster on revisions in changelog
        # order, because most filters deal with the changelog.
        revs.reverse()
        matcher = revset.match(repo.ui, expr)
        # Revset matches can reorder revisions. "A or B" typically returns
        # the revision matching A then the revision matching B. Sort
        # again to fix that.
        revs = matcher(repo, revs)
        revs.sort(reverse=True)
    if limit is not None:
        limitedrevs = revset.baseset()
        for idx, rev in enumerate(revs):
            if idx >= limit:
                break
            limitedrevs.append(rev)
        revs = limitedrevs

    return revs, expr, filematcher

def displaygraph(ui, dag, displayer, showparents, edgefn, getrenamed=None,
                 filematcher=None):
    seen, state = [], graphmod.asciistate()
    for rev, type, ctx, parents in dag:
        char = 'o'
        if ctx.node() in showparents:
            char = '@'
        elif ctx.obsolete():
            char = 'x'
        copies = None
        if getrenamed and ctx.rev():
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, ctx.rev())
                if rename:
                    copies.append((fn, rename[0]))
        revmatchfn = None
        if filematcher is not None:
            revmatchfn = filematcher(ctx.rev())
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        lines = displayer.hunk.pop(rev).split('\n')
        if not lines[-1]:
            del lines[-1]
        displayer.flush(rev)
        edges = edgefn(type, char, lines, seen, rev, parents)
        for type, char, lines, coldata in edges:
            graphmod.ascii(ui, state, type, char, lines, coldata)
    displayer.close()

def graphlog(ui, repo, *pats, **opts):
    # Parameters are identical to log command ones
    revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
    revdag = graphmod.dagwalker(repo, revs)

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
    displayer = show_changeset(ui, repo, opts, buffered=True)
    showparents = [ctx.node() for ctx in repo[None].parents()]
    displaygraph(ui, revdag, displayer, showparents,
                 graphmod.asciiedges, getrenamed, filematcher)

def checkunsupportedgraphflags(pats, opts):
    for op in ["newest_first"]:
        if op in opts and opts[op]:
            raise util.Abort(_("-G/--graph option is incompatible with --%s")
                             % op.replace("_", "-"))

def graphrevs(repo, nodes, opts):
    limit = loglimit(opts)
    nodes.reverse()
    if limit is not None:
        nodes = nodes[:limit]
    return graphmod.nodes(repo, nodes)

def add(ui, repo, match, dryrun, listsubrepos, prefix, explicitonly):
    join = lambda f: os.path.join(prefix, f)
    bad = []
    oldbad = match.bad
    match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
    names = []
    wctx = repo[None]
    cca = None
    abort, warn = scmutil.checkportabilityalert(ui)
    if abort or warn:
        cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
    for f in repo.walk(match):
        exact = match.exact(f)
        if exact or not explicitonly and f not in repo.dirstate:
            if cca:
                cca(f)
            names.append(f)
            if ui.verbose or not exact:
                ui.status(_('adding %s\n') % match.rel(join(f)))

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        try:
            submatch = matchmod.narrowmatcher(subpath, match)
            if listsubrepos:
                bad.extend(sub.add(ui, submatch, dryrun, listsubrepos, prefix,
                                   False))
            else:
                bad.extend(sub.add(ui, submatch, dryrun, listsubrepos, prefix,
                                   True))
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % join(subpath))

    if not dryrun:
        rejected = wctx.add(names, prefix)
        bad.extend(f for f in rejected if f in match.files())
    return bad

def forget(ui, repo, match, prefix, explicitonly):
    join = lambda f: os.path.join(prefix, f)
    bad = []
    oldbad = match.bad
    match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
    wctx = repo[None]
    forgot = []
    s = repo.status(match=match, clean=True)
    forget = sorted(s[0] + s[1] + s[3] + s[6])
    if explicitonly:
        forget = [f for f in forget if match.exact(f)]

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        try:
            submatch = matchmod.narrowmatcher(subpath, match)
            subbad, subforgot = sub.forget(ui, submatch, prefix)
            bad.extend([subpath + '/' + f for f in subbad])
            forgot.extend([subpath + '/' + f for f in subforgot])
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % join(subpath))

    if not explicitonly:
        for f in match.files():
            if f not in repo.dirstate and not os.path.isdir(match.rel(join(f))):
                if f not in forgot:
                    if os.path.exists(match.rel(join(f))):
                        ui.warn(_('not removing %s: '
                                  'file is already untracked\n')
                                % match.rel(join(f)))
                    bad.append(f)

    for f in forget:
        if ui.verbose or not match.exact(f):
            ui.status(_('removing %s\n') % match.rel(join(f)))

    rejected = wctx.forget(forget, prefix)
    bad.extend(f for f in rejected if f in match.files())
    forgot.extend(forget)
    return bad, forgot

def duplicatecopies(repo, rev, fromrev):
    '''reproduce copies from fromrev to rev in the dirstate'''
    for dst, src in copies.pathcopies(repo[fromrev], repo[rev]).iteritems():
        # copies.pathcopies returns backward renames, so dst might not
        # actually be in the dirstate
        if repo.dirstate[dst] in "nma":
            repo.dirstate.copy(src, dst)

def commit(ui, repo, commitfunc, pats, opts):
    '''commit the specified files or all outstanding changes'''
    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)
    message = logmessage(ui, opts)

    # extract addremove carefully -- this function can be called from a command
    # that doesn't support addremove
    if opts.get('addremove'):
        scmutil.addremove(repo, pats, opts)

    return commitfunc(ui, repo, message,
                      scmutil.match(repo[None], pats, opts), opts)

def amend(ui, repo, commitfunc, old, extra, pats, opts):
    ui.note(_('amending changeset %s\n') % old)
    base = old.p1()

    wlock = lock = newid = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        tr = repo.transaction('amend')
        try:
            # See if we got a message from -m or -l, if not, open the editor
            # with the message of the changeset to amend
            message = logmessage(ui, opts)
            # ensure logfile does not conflict with later enforcement of the
            # message. potential logfile content has been processed by
            # `logmessage` anyway.
            opts.pop('logfile')
            # First, do a regular commit to record all changes in the working
            # directory (if there are any)
            ui.callhooks = False
            currentbookmark = repo._bookmarkcurrent
            try:
                repo._bookmarkcurrent = None
                opts['message'] = 'temporary amend commit for %s' % old
                node = commit(ui, repo, commitfunc, pats, opts)
            finally:
                repo._bookmarkcurrent = currentbookmark
                ui.callhooks = True
            ctx = repo[node]

            # Participating changesets:
            #
            # node/ctx o - new (intermediate) commit that contains changes
            # |          from working dir to go into amending commit
            # |          (or a workingctx if there were no changes)
            # |
            # old      o - changeset to amend
            # |
            # base     o - parent of amending changeset

            # Update extra dict from amended commit (e.g. to preserve graft
            # source)
            extra.update(old.extra())

            # Also update it from the intermediate commit or from the wctx
            extra.update(ctx.extra())

            if len(old.parents()) > 1:
                # ctx.files() isn't reliable for merges, so fall back to the
                # slower repo.status() method
                files = set([fn for st in repo.status(base, old)[:3]
                             for fn in st])
            else:
                files = set(old.files())
1892 1892
1893 1893 # Second, we use either the commit we just did, or if there were no
1894 1894 # changes the parent of the working directory as the version of the
1895 1895 # files in the final amend commit
1896 1896 if node:
1897 1897 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
1898 1898
1899 1899 user = ctx.user()
1900 1900 date = ctx.date()
1901 1901 # Recompute copies (avoid recording a -> b -> a)
1902 1902 copied = copies.pathcopies(base, ctx)
1903 1903
1904 1904 # Prune files which were reverted by the updates: if old
1905 1905 # introduced file X and our intermediate commit, node,
1906 1906 # renamed that file, then those two files are the same and
1907 1907 # we can discard X from our list of files. Likewise if X
1908 1908 # was deleted, it's no longer relevant
1909 1909 files.update(ctx.files())
1910 1910
1911 1911 def samefile(f):
1912 1912 if f in ctx.manifest():
1913 1913 a = ctx.filectx(f)
1914 1914 if f in base.manifest():
1915 1915 b = base.filectx(f)
1916 1916 return (not a.cmp(b)
1917 1917 and a.flags() == b.flags())
1918 1918 else:
1919 1919 return False
1920 1920 else:
1921 1921 return f not in base.manifest()
1922 1922 files = [f for f in files if not samefile(f)]
1923 1923
1924 1924 def filectxfn(repo, ctx_, path):
1925 1925 try:
1926 1926 fctx = ctx[path]
1927 1927 flags = fctx.flags()
1928 1928 mctx = context.memfilectx(fctx.path(), fctx.data(),
1929 1929 islink='l' in flags,
1930 1930 isexec='x' in flags,
1931 1931 copied=copied.get(path))
1932 1932 return mctx
1933 1933 except KeyError:
1934 1934 raise IOError
1935 1935 else:
1936 1936 ui.note(_('copying changeset %s to %s\n') % (old, base))
1937 1937
1938 1938 # Use version of files as in the old cset
1939 1939 def filectxfn(repo, ctx_, path):
1940 1940 try:
1941 1941 return old.filectx(path)
1942 1942 except KeyError:
1943 1943 raise IOError
1944 1944
1945 1945 user = opts.get('user') or old.user()
1946 1946 date = opts.get('date') or old.date()
1947 1947 editmsg = False
1948 1948 if not message:
1949 1949 editmsg = True
1950 1950 message = old.description()
1951 1951
1952 1952 pureextra = extra.copy()
1953 1953 extra['amend_source'] = old.hex()
1954 1954
1955 1955 new = context.memctx(repo,
1956 1956 parents=[base.node(), old.p2().node()],
1957 1957 text=message,
1958 1958 files=files,
1959 1959 filectxfn=filectxfn,
1960 1960 user=user,
1961 1961 date=date,
1962 1962 extra=extra)
1963 1963 if editmsg:
1964 1964 new._text = commitforceeditor(repo, new, [])
1965 1965 repo.savecommitmessage(new.description())
1966 1966
1967 1967 newdesc = changelog.stripdesc(new.description())
1968 1968 if ((not node)
1969 1969 and newdesc == old.description()
1970 1970 and user == old.user()
1971 1971 and date == old.date()
1972 1972 and pureextra == old.extra()):
1973 1973 # nothing changed. continuing here would create a new node
1974 1974 # anyway because of the amend_source noise.
1975 1975 #
1976 1976 # This is not what we expect from amend.
1977 1977 return old.node()
1978 1978
1979 1979 ph = repo.ui.config('phases', 'new-commit', phases.draft)
1980 1980 try:
1981 1981 if opts.get('secret'):
1982 1982 commitphase = 'secret'
1983 1983 else:
1984 1984 commitphase = old.phase()
1985 1985 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
1986 1986 newid = repo.commitctx(new)
1987 1987 finally:
1988 1988 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
1989 1989 if newid != old.node():
1990 1990 # Reroute the working copy parent to the new changeset
1991 1991 repo.setparents(newid, nullid)
1992 1992
1993 1993 # Move bookmarks from old parent to amend commit
1994 1994 bms = repo.nodebookmarks(old.node())
1995 1995 if bms:
1996 1996 marks = repo._bookmarks
1997 1997 for bm in bms:
1998 1998 marks[bm] = newid
1999 1999 marks.write()
2000 2000 # commit the whole amend process
2001 2001 if obsolete._enabled and newid != old.node():
2002 2002 # mark the new changeset as successor of the rewritten one
2003 2003 new = repo[newid]
2004 2004 obs = [(old, (new,))]
2005 2005 if node:
2006 2006 obs.append((ctx, ()))
2007 2007
2008 2008 obsolete.createmarkers(repo, obs)
2009 2009 tr.close()
2010 2010 finally:
2011 2011 tr.release()
2012 2012 if (not obsolete._enabled) and newid != old.node():
2013 2013 # Strip the intermediate commit (if there was one) and the amended
2014 2014 # commit
2015 2015 if node:
2016 2016 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2017 2017 ui.note(_('stripping amended changeset %s\n') % old)
2018 2018 repair.strip(ui, repo, old.node(), topic='amend-backup')
2019 2019 finally:
2020 2020 if newid is None:
2021 2021 repo.dirstate.invalidate()
2022 2022 lockmod.release(lock, wlock)
2023 2023 return newid
2024 2024
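The `samefile()` predicate inside `amend()` above prunes files whose content and flags are identical between the intermediate commit and `base`. A simplified, hypothetical sketch of that rule, with manifests modeled as plain dicts mapping path to a `(data, flags)` tuple instead of real filectx objects:

```python
def samefile(path, ctxfiles, basefiles):
    """Return True when `path` needs no entry in the amended commit:
    either it holds identical (data, flags) in both manifests, or it is
    absent from both."""
    if path in ctxfiles:
        if path in basefiles:
            return ctxfiles[path] == basefiles[path]
        return False
    return path not in basefiles

# 'x' changed, 'y' is identical (pruned), 'z' was deleted relative to base
files = ['x', 'y', 'z']
ctxfiles = {'x': ('new', ''), 'y': ('same', '')}
basefiles = {'y': ('same', ''), 'z': ('old', '')}
kept = [f for f in files if not samefile(f, ctxfiles, basefiles)]
```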
2025 2025 def commiteditor(repo, ctx, subs):
2026 2026 if ctx.description():
2027 2027 return ctx.description()
2028 2028 return commitforceeditor(repo, ctx, subs)
2029 2029
2030 2030 def commitforceeditor(repo, ctx, subs):
2031 2031 edittext = []
2032 2032 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2033 2033 if ctx.description():
2034 2034 edittext.append(ctx.description())
2035 2035 edittext.append("")
2036 2036 edittext.append("") # Empty line between message and comments.
2037 2037 edittext.append(_("HG: Enter commit message."
2038 2038 " Lines beginning with 'HG:' are removed."))
2039 2039 edittext.append(_("HG: Leave message empty to abort commit."))
2040 2040 edittext.append("HG: --")
2041 2041 edittext.append(_("HG: user: %s") % ctx.user())
2042 2042 if ctx.p2():
2043 2043 edittext.append(_("HG: branch merge"))
2044 2044 if ctx.branch():
2045 2045 edittext.append(_("HG: branch '%s'") % ctx.branch())
2046 2046 if bookmarks.iscurrent(repo):
2047 2047 edittext.append(_("HG: bookmark '%s'") % repo._bookmarkcurrent)
2048 2048 edittext.extend([_("HG: subrepo %s") % s for s in subs])
2049 2049 edittext.extend([_("HG: added %s") % f for f in added])
2050 2050 edittext.extend([_("HG: changed %s") % f for f in modified])
2051 2051 edittext.extend([_("HG: removed %s") % f for f in removed])
2052 2052 if not added and not modified and not removed:
2053 2053 edittext.append(_("HG: no files changed"))
2054 2054 edittext.append("")
2055 2055 # run editor in the repository root
2056 2056 olddir = os.getcwd()
2057 2057 os.chdir(repo.root)
2058 2058 text = repo.ui.edit("\n".join(edittext), ctx.user(), ctx.extra())
2059 2059 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2060 2060 os.chdir(olddir)
2061 2061
2062 2062 if not text.strip():
2063 2063 raise util.Abort(_("empty commit message"))
2064 2064
2065 2065 return text
2066 2066
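The `HG:` helper lines that `commitforceeditor()` assembles are stripped from the editor result with the `re.sub` call shown above; a standalone sketch of that stripping step, using the same pattern:

```python
import re

# Drop every line beginning with "HG:", including its trailing newline
# (or end of string) -- the same pattern commitforceeditor() applies.
text = "My message\nHG: Enter commit message.\nHG: user: alice\n"
cleaned = re.sub("(?m)^HG:.*(\n|$)", "", text)
```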
2067 2067 def commitstatus(repo, node, branch, bheads=None, opts={}):
2068 2068 ctx = repo[node]
2069 2069 parents = ctx.parents()
2070 2070
2071 2071 if (not opts.get('amend') and bheads and node not in bheads and not
2072 2072 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2073 2073 repo.ui.status(_('created new head\n'))
2074 2074 # The message is not printed for initial roots. For the other
2075 2075 # changesets, it is printed in the following situations:
2076 2076 #
2077 2077 # Par column: for the 2 parents with ...
2078 2078 # N: null or no parent
2079 2079 # B: parent is on another named branch
2080 2080 # C: parent is a regular non-head changeset
2081 2081 # H: parent was a branch head of the current branch
2082 2082 # Msg column: whether we print "created new head" message
2083 2083 # In the following, it is assumed that there already exists some
2084 2084 # initial branch heads of the current branch, otherwise nothing is
2085 2085 # printed anyway.
2086 2086 #
2087 2087 # Par Msg Comment
2088 2088 # N N y additional topo root
2089 2089 #
2090 2090 # B N y additional branch root
2091 2091 # C N y additional topo head
2092 2092 # H N n usual case
2093 2093 #
2094 2094 # B B y weird additional branch root
2095 2095 # C B y branch merge
2096 2096 # H B n merge with named branch
2097 2097 #
2098 2098 # C C y additional head from merge
2099 2099 # C H n merge with a head
2100 2100 #
2101 2101 # H H n head merge: head count decreases
2102 2102
2103 2103 if not opts.get('close_branch'):
2104 2104 for r in parents:
2105 2105 if r.closesbranch() and r.branch() == branch:
2106 2106 repo.ui.status(_('reopening closed branch head %d\n') % r)
2107 2107
2108 2108 if repo.ui.debugflag:
2109 2109 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2110 2110 elif repo.ui.verbose:
2111 2111 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2112 2112
2113 2113 def revert(ui, repo, ctx, parents, *pats, **opts):
2114 2114 parent, p2 = parents
2115 2115 node = ctx.node()
2116 2116
2117 2117 mf = ctx.manifest()
2118 2118 if node == parent:
2119 2119 pmf = mf
2120 2120 else:
2121 2121 pmf = None
2122 2122
2123 2123 # need all matching names in dirstate and manifest of target rev,
2124 2124 # so have to walk both. do not print errors if files exist in one
2125 2125 # but not the other.
2126 2126
2127 2127 names = {}
2128 2128
2129 2129 wlock = repo.wlock()
2130 2130 try:
2131 2131 # walk dirstate.
2132 2132
2133 2133 m = scmutil.match(repo[None], pats, opts)
2134 2134 m.bad = lambda x, y: False
2135 2135 for abs in repo.walk(m):
2136 2136 names[abs] = m.rel(abs), m.exact(abs)
2137 2137
2138 2138 # walk target manifest.
2139 2139
2140 2140 def badfn(path, msg):
2141 2141 if path in names:
2142 2142 return
2143 2143 if path in ctx.substate:
2144 2144 return
2145 2145 path_ = path + '/'
2146 2146 for f in names:
2147 2147 if f.startswith(path_):
2148 2148 return
2149 2149 ui.warn("%s: %s\n" % (m.rel(path), msg))
2150 2150
2151 2151 m = scmutil.match(ctx, pats, opts)
2152 2152 m.bad = badfn
2153 2153 for abs in ctx.walk(m):
2154 2154 if abs not in names:
2155 2155 names[abs] = m.rel(abs), m.exact(abs)
2156 2156
2157 2157 # get the list of subrepos that must be reverted
2158 2158 targetsubs = sorted(s for s in ctx.substate if m(s))
2159 2159 m = scmutil.matchfiles(repo, names)
2160 2160 changes = repo.status(match=m)[:4]
2161 2161 modified, added, removed, deleted = map(set, changes)
2162 2162
2163 2163 # if f is a rename, also revert the source
2164 2164 cwd = repo.getcwd()
2165 2165 for f in added:
2166 2166 src = repo.dirstate.copied(f)
2167 2167 if src and src not in names and repo.dirstate[src] == 'r':
2168 2168 removed.add(src)
2169 2169 names[src] = (repo.pathto(src, cwd), True)
2170 2170
2171 2171 def removeforget(abs):
2172 2172 if repo.dirstate[abs] == 'a':
2173 2173 return _('forgetting %s\n')
2174 2174 return _('removing %s\n')
2175 2175
2176 2176 revert = ([], _('reverting %s\n'))
2177 2177 add = ([], _('adding %s\n'))
2178 2178 remove = ([], removeforget)
2179 2179 undelete = ([], _('undeleting %s\n'))
2180 2180
2181 2181 disptable = (
2182 2182 # dispatch table:
2183 2183 # file state
2184 2184 # action if in target manifest
2185 2185 # action if not in target manifest
2186 2186 # make backup if in target manifest
2187 2187 # make backup if not in target manifest
2188 2188 (modified, revert, remove, True, True),
2189 2189 (added, revert, remove, True, False),
2190 2190 (removed, undelete, None, True, False),
2191 2191 (deleted, revert, remove, False, False),
2192 2192 )
2193 2193
2194 2194 for abs, (rel, exact) in sorted(names.items()):
2195 2195 mfentry = mf.get(abs)
2196 2196 target = repo.wjoin(abs)
2197 2197 def handle(xlist, dobackup):
2198 2198 xlist[0].append(abs)
2199 2199 if (dobackup and not opts.get('no_backup') and
2200 2200 os.path.lexists(target) and
2201 2201 abs in ctx and repo[None][abs].cmp(ctx[abs])):
2202 2202 bakname = "%s.orig" % rel
2203 2203 ui.note(_('saving current version of %s as %s\n') %
2204 2204 (rel, bakname))
2205 2205 if not opts.get('dry_run'):
2206 2206 util.rename(target, bakname)
2207 2207 if ui.verbose or not exact:
2208 2208 msg = xlist[1]
2209 2209 if not isinstance(msg, basestring):
2210 2210 msg = msg(abs)
2211 2211 ui.status(msg % rel)
2212 2212 for table, hitlist, misslist, backuphit, backupmiss in disptable:
2213 2213 if abs not in table:
2214 2214 continue
2215 2215 # file has changed in dirstate
2216 2216 if mfentry:
2217 2217 handle(hitlist, backuphit)
2218 2218 elif misslist is not None:
2219 2219 handle(misslist, backupmiss)
2220 2220 break
2221 2221 else:
2222 2222 if abs not in repo.dirstate:
2223 2223 if mfentry:
2224 2224 handle(add, True)
2225 2225 elif exact:
2226 2226 ui.warn(_('file not managed: %s\n') % rel)
2227 2227 continue
2228 2228 # file has not changed in dirstate
2229 2229 if node == parent:
2230 2230 if exact:
2231 2231 ui.warn(_('no changes needed to %s\n') % rel)
2232 2232 continue
2233 2233 if pmf is None:
2234 2234 # only need parent manifest in this unlikely case,
2235 2235 # so do not read by default
2236 2236 pmf = repo[parent].manifest()
2237 2237 if abs in pmf and mfentry:
2238 2238 # if version of file is same in parent and target
2239 2239 # manifests, do nothing
2240 2240 if (pmf[abs] != mfentry or
2241 2241 pmf.flags(abs) != mf.flags(abs)):
2242 2242 handle(revert, False)
2243 2243 else:
2244 2244 handle(remove, False)
2245 2245 if not opts.get('dry_run'):
2246 2246 _performrevert(repo, parents, ctx, revert, add, remove, undelete)
2247 2247
2248 2248 if targetsubs:
2249 2249 # Revert the subrepos on the revert list
2250 2250 for sub in targetsubs:
2251 2251 ctx.sub(sub).revert(ui, ctx.substate[sub], *pats, **opts)
2252 2252 finally:
2253 2253 wlock.release()
2254 2254
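The dispatch table inside `revert()` above maps each file state to an action (and backup decision) depending on whether the file exists in the target manifest. A hypothetical stand-alone rendering of that lookup, with action names as plain strings instead of the `(list, message)` pairs used in the real code:

```python
# (state, action if in target manifest, action if not, backup-hit, backup-miss)
# -- mirrors the disptable in revert() above.
disptable = (
    ('modified', 'revert',   'remove', True,  True),
    ('added',    'revert',   'remove', True,  False),
    ('removed',  'undelete', None,     True,  False),
    ('deleted',  'revert',   'remove', False, False),
)

def dispatch(state, inmanifest):
    """Return (action, make_backup) for a file in `state`, or (None, False)
    when no action applies."""
    for st, hit, miss, bakhit, bakmiss in disptable:
        if st != state:
            continue
        if inmanifest:
            return hit, bakhit
        if miss is not None:
            return miss, bakmiss
        return None, False
    return None, False
```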
2255 2255 def _performrevert(repo, parents, ctx, revert, add, remove, undelete):
2256 2256 """function that actually perform all the action computed for revert
2257 2257
2258 2258 This is an independent function to let extensions plug in and react to
2259 2259 the imminent revert.
2260 2260
2261 Make sure you have the working directory locked when caling this function.
2261 Make sure you have the working directory locked when calling this function.
2262 2262 """
2263 2263 parent, p2 = parents
2264 2264 node = ctx.node()
2265 2265 def checkout(f):
2266 2266 fc = ctx[f]
2267 2267 repo.wwrite(f, fc.data(), fc.flags())
2268 2268
2269 2269 audit_path = pathutil.pathauditor(repo.root)
2270 2270 for f in remove[0]:
2271 2271 if repo.dirstate[f] == 'a':
2272 2272 repo.dirstate.drop(f)
2273 2273 continue
2274 2274 audit_path(f)
2275 2275 try:
2276 2276 util.unlinkpath(repo.wjoin(f))
2277 2277 except OSError:
2278 2278 pass
2279 2279 repo.dirstate.remove(f)
2280 2280
2281 2281 normal = None
2282 2282 if node == parent:
2283 2283 # We're reverting to our parent. If possible, we'd like status
2284 2284 # to report the file as clean. We have to use normallookup for
2285 2285 # merges to avoid losing information about merged/dirty files.
2286 2286 if p2 != nullid:
2287 2287 normal = repo.dirstate.normallookup
2288 2288 else:
2289 2289 normal = repo.dirstate.normal
2290 2290 for f in revert[0]:
2291 2291 checkout(f)
2292 2292 if normal:
2293 2293 normal(f)
2294 2294
2295 2295 for f in add[0]:
2296 2296 checkout(f)
2297 2297 repo.dirstate.add(f)
2298 2298
2299 2299 normal = repo.dirstate.normallookup
2300 2300 if node == parent and p2 == nullid:
2301 2301 normal = repo.dirstate.normal
2302 2302 for f in undelete[0]:
2303 2303 checkout(f)
2304 2304 normal(f)
2305 2305
2306 2306 copied = copies.pathcopies(repo[parent], ctx)
2307 2307
2308 2308 for f in add[0] + undelete[0] + revert[0]:
2309 2309 if f in copied:
2310 2310 repo.dirstate.copy(copied[f], f)
2311 2311
2312 2312 def command(table):
2313 2313 '''returns a function object bound to table which can be used as
2314 2314 a decorator for populating table as a command table'''
2315 2315
2316 2316 def cmd(name, options=(), synopsis=None):
2317 2317 def decorator(func):
2318 2318 if synopsis:
2319 2319 table[name] = func, list(options), synopsis
2320 2320 else:
2321 2321 table[name] = func, list(options)
2322 2322 return func
2323 2323 return decorator
2324 2324
2325 2325 return cmd
2326 2326
2327 2327 # a list of (ui, repo) functions called by commands.summary
2328 2328 summaryhooks = util.hooks()
2329 2329
2330 2330 # A list of state files kept by multistep operations like graft.
2331 2331 # Since graft cannot be aborted, it is considered 'clearable' by update.
2332 2332 # note: bisect is intentionally excluded
2333 2333 # (state file, clearable, allowcommit, error, hint)
2334 2334 unfinishedstates = [
2335 2335 ('graftstate', True, False, _('graft in progress'),
2336 2336 _("use 'hg graft --continue' or 'hg update' to abort")),
2337 2337 ('updatestate', True, False, _('last update was interrupted'),
2338 2338 _("use 'hg update' to get a consistent checkout"))
2339 2339 ]
2340 2340
2341 2341 def checkunfinished(repo, commit=False):
2342 2342 '''Look for an unfinished multistep operation, like graft, and abort
2343 2343 if found. It's probably good to check this right before
2344 2344 bailifchanged().
2345 2345 '''
2346 2346 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2347 2347 if commit and allowcommit:
2348 2348 continue
2349 2349 if repo.vfs.exists(f):
2350 2350 raise util.Abort(msg, hint=hint)
2351 2351
2352 2352 def clearunfinished(repo):
2353 2353 '''Check for unfinished operations (as above), and clear the ones
2354 2354 that are clearable.
2355 2355 '''
2356 2356 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2357 2357 if not clearable and repo.vfs.exists(f):
2358 2358 raise util.Abort(msg, hint=hint)
2359 2359 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2360 2360 if clearable and repo.vfs.exists(f):
2361 2361 util.unlink(repo.join(f))
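The `unfinishedstates` table above drives both `checkunfinished` and `clearunfinished`; a simplified, self-contained sketch of that dispatch, with state files modeled as a set of names rather than vfs paths (the `Abort` class here stands in for `util.Abort`):

```python
class Abort(Exception):
    """Stand-in for util.Abort."""

# (state file, clearable, allowcommit, error message) -- mirrors the table above
unfinishedstates = [
    ('graftstate', True, False, 'graft in progress'),
    ('updatestate', True, False, 'last update was interrupted'),
]

def checkunfinished(statefiles, commit=False):
    """Abort if an unfinished multistep operation left a state file behind."""
    for f, clearable, allowcommit, msg in unfinishedstates:
        if commit and allowcommit:
            continue
        if f in statefiles:
            raise Abort(msg)

def clearunfinished(statefiles):
    """Abort on unclearable state files, then drop the clearable ones."""
    for f, clearable, allowcommit, msg in unfinishedstates:
        if not clearable and f in statefiles:
            raise Abort(msg)
    removable = {f for f, c, a, m in unfinishedstates if c}
    return statefiles - removable
```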
@@ -1,644 +1,644 b''
1 # exchange.py - utily to exchange data between repo.
1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 from node import hex, nullid
10 10 import errno
11 11 import util, scmutil, changegroup, base85
12 12 import discovery, phases, obsolete, bookmarks, bundle2
13 13
14 14
15 15 class pushoperation(object):
16 16 """A object that represent a single push operation
17 17
18 18 Its purpose is to carry push-related state and very common operations.
19 19
20 A new should be created at the begining of each push and discarded
20 A new should be created at the beginning of each push and discarded
21 21 afterward.
22 22 """
23 23
24 24 def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
25 25 # repo we push from
26 26 self.repo = repo
27 27 self.ui = repo.ui
28 28 # repo we push to
29 29 self.remote = remote
30 30 # force option provided
31 31 self.force = force
32 32 # revs to be pushed (None is "all")
33 33 self.revs = revs
34 34 # allow push of new branch
35 35 self.newbranch = newbranch
36 36 # did a local lock get acquired?
37 37 self.locallocked = None
38 38 # Integer version of the push result
39 39 # - None means nothing to push
40 40 # - 0 means HTTP error
41 41 # - 1 means we pushed and remote head count is unchanged *or*
42 42 # we have outgoing changesets but refused to push
43 43 # - other values as described by addchangegroup()
44 44 self.ret = None
45 # discover.outgoing object (contains common and outgoin data)
45 # discover.outgoing object (contains common and outgoing data)
46 46 self.outgoing = None
47 47 # all remote heads before the push
48 48 self.remoteheads = None
49 49 # testable as a boolean indicating if any nodes are missing locally.
50 50 self.incoming = None
51 51 # set of all heads common after changeset bundle push
52 52 self.commonheads = None
53 53
54 54 def push(repo, remote, force=False, revs=None, newbranch=False):
55 55 '''Push outgoing changesets (limited by revs) from a local
56 56 repository to remote. Return an integer:
57 57 - None means nothing to push
58 58 - 0 means HTTP error
59 59 - 1 means we pushed and remote head count is unchanged *or*
60 60 we have outgoing changesets but refused to push
61 61 - other values as described by addchangegroup()
62 62 '''
63 63 pushop = pushoperation(repo, remote, force, revs, newbranch)
64 64 if pushop.remote.local():
65 65 missing = (set(pushop.repo.requirements)
66 66 - pushop.remote.local().supported)
67 67 if missing:
68 68 msg = _("required features are not"
69 69 " supported in the destination:"
70 70 " %s") % (', '.join(sorted(missing)))
71 71 raise util.Abort(msg)
72 72
73 73 # there are two ways to push to remote repo:
74 74 #
75 75 # addchangegroup assumes local user can lock remote
76 76 # repo (local filesystem, old ssh servers).
77 77 #
78 78 # unbundle assumes local user cannot lock remote repo (new ssh
79 79 # servers, http servers).
80 80
81 81 if not pushop.remote.canpush():
82 82 raise util.Abort(_("destination does not support push"))
83 83 # get local lock as we might write phase data
84 84 locallock = None
85 85 try:
86 86 locallock = pushop.repo.lock()
87 87 pushop.locallocked = True
88 88 except IOError, err:
89 89 pushop.locallocked = False
90 90 if err.errno != errno.EACCES:
91 91 raise
92 92 # source repo cannot be locked.
93 93 # We do not abort the push, but just disable the local phase
94 94 # synchronisation.
95 95 msg = 'cannot lock source repository: %s\n' % err
96 96 pushop.ui.debug(msg)
97 97 try:
98 98 pushop.repo.checkpush(pushop)
99 99 lock = None
100 100 unbundle = pushop.remote.capable('unbundle')
101 101 if not unbundle:
102 102 lock = pushop.remote.lock()
103 103 try:
104 104 _pushdiscovery(pushop)
105 105 if _pushcheckoutgoing(pushop):
106 106 _pushchangeset(pushop)
107 107 _pushcomputecommonheads(pushop)
108 108 _pushsyncphase(pushop)
109 109 _pushobsolete(pushop)
110 110 finally:
111 111 if lock is not None:
112 112 lock.release()
113 113 finally:
114 114 if locallock is not None:
115 115 locallock.release()
116 116
117 117 _pushbookmark(pushop)
118 118 return pushop.ret
119 119
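`push()` returns the integer documented in its docstring above. A tiny, purely illustrative helper (not part of Mercurial's API) that maps those return values to readable descriptions:

```python
def describepushresult(ret):
    """Render push()'s documented return value as a short description."""
    if ret is None:
        return 'nothing to push'
    if ret == 0:
        return 'HTTP error'
    if ret == 1:
        return 'pushed; remote head count unchanged (or push refused)'
    # any other value comes from addchangegroup()
    return 'pushed; addchangegroup() returned %d' % ret
```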
120 120 def _pushdiscovery(pushop):
121 121 # discovery
122 122 unfi = pushop.repo.unfiltered()
123 123 fci = discovery.findcommonincoming
124 124 commoninc = fci(unfi, pushop.remote, force=pushop.force)
125 125 common, inc, remoteheads = commoninc
126 126 fco = discovery.findcommonoutgoing
127 127 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
128 128 commoninc=commoninc, force=pushop.force)
129 129 pushop.outgoing = outgoing
130 130 pushop.remoteheads = remoteheads
131 131 pushop.incoming = inc
132 132
133 133 def _pushcheckoutgoing(pushop):
134 134 outgoing = pushop.outgoing
135 135 unfi = pushop.repo.unfiltered()
136 136 if not outgoing.missing:
137 137 # nothing to push
138 138 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
139 139 return False
140 140 # something to push
141 141 if not pushop.force:
142 142 # if repo.obsstore == False --> no obsolete
143 143 # then, save the iteration
144 144 if unfi.obsstore:
145 145 # these messages are here for 80 char limit reasons
146 146 mso = _("push includes obsolete changeset: %s!")
147 147 mst = "push includes %s changeset: %s!"
148 148 # plain versions for i18n tool to detect them
149 149 _("push includes unstable changeset: %s!")
150 150 _("push includes bumped changeset: %s!")
151 151 _("push includes divergent changeset: %s!")
152 152 # If we are to push, and there is at least one
153 153 # obsolete or unstable changeset in missing, at
154 154 # least one of the missing heads will be obsolete or
155 155 # unstable. So checking heads only is ok
156 156 for node in outgoing.missingheads:
157 157 ctx = unfi[node]
158 158 if ctx.obsolete():
159 159 raise util.Abort(mso % ctx)
160 160 elif ctx.troubled():
161 161 raise util.Abort(_(mst)
162 162 % (ctx.troubles()[0],
163 163 ctx))
164 164 newbm = pushop.ui.configlist('bookmarks', 'pushing')
165 165 discovery.checkheads(unfi, pushop.remote, outgoing,
166 166 pushop.remoteheads,
167 167 pushop.newbranch,
168 168 bool(pushop.incoming),
169 169 newbm)
170 170 return True
171 171
172 172 def _pushchangeset(pushop):
173 173 """Make the actual push of changeset bundle to remote repo"""
174 174 outgoing = pushop.outgoing
175 175 unbundle = pushop.remote.capable('unbundle')
176 176 # TODO: get bundlecaps from remote
177 177 bundlecaps = None
178 178 # create a changegroup from local
179 179 if pushop.revs is None and not (outgoing.excluded
180 180 or pushop.repo.changelog.filteredrevs):
181 181 # push everything,
182 182 # use the fast path, no race possible on push
183 183 bundler = changegroup.bundle10(pushop.repo, bundlecaps)
184 184 cg = changegroup.getsubset(pushop.repo,
185 185 outgoing,
186 186 bundler,
187 187 'push',
188 188 fastpath=True)
189 189 else:
190 190 cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
191 191 bundlecaps)
192 192
193 193 # apply changegroup to remote
194 194 if unbundle:
195 195 # local repo finds heads on server, finds out what
196 196 # revs it must push. once revs transferred, if server
197 197 # finds it has different heads (someone else won
198 198 # commit/push race), server aborts.
199 199 if pushop.force:
200 200 remoteheads = ['force']
201 201 else:
202 202 remoteheads = pushop.remoteheads
203 203 # ssh: return remote's addchangegroup()
204 204 # http: return remote's addchangegroup() or 0 for error
205 205 pushop.ret = pushop.remote.unbundle(cg, remoteheads,
206 206 'push')
207 207 else:
208 208 # we return an integer indicating remote head count
209 209 # change
210 210 pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())
211 211
212 212 def _pushcomputecommonheads(pushop):
213 213 unfi = pushop.repo.unfiltered()
214 214 if pushop.ret:
215 215 # push succeeded, synchronize target of the push
216 216 cheads = pushop.outgoing.missingheads
217 217 elif pushop.revs is None:
218 218 # All out push failed. synchronize all common
219 219 cheads = pushop.outgoing.commonheads
220 220 else:
221 221 # I want cheads = heads(::missingheads and ::commonheads)
222 222 # (missingheads is revs with secret changeset filtered out)
223 223 #
224 224 # This can be expressed as:
225 225 # cheads = ( (missingheads and ::commonheads)
226 226 # + (commonheads and ::missingheads))"
227 227 # )
228 228 #
229 229 # while trying to push we already computed the following:
230 230 # common = (::commonheads)
231 231 # missing = ((commonheads::missingheads) - commonheads)
232 232 #
233 233 # We can pick:
234 234 # * missingheads part of common (::commonheads)
235 235 common = set(pushop.outgoing.common)
236 236 nm = pushop.repo.changelog.nodemap
237 237 cheads = [node for node in pushop.revs if nm[node] in common]
238 238 # and
239 239 # * commonheads parents on missing
240 240 revset = unfi.set('%ln and parents(roots(%ln))',
241 241 pushop.outgoing.commonheads,
242 242 pushop.outgoing.missing)
243 243 cheads.extend(c.node() for c in revset)
244 244 pushop.commonheads = cheads
245 245
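The comment block above describes the common-heads computation as `cheads = heads((::missingheads) and (::commonheads))`. A toy illustration of that identity under simplified assumptions: the DAG is a plain dict mapping node to its parents, and `ancestors()` is inclusive:

```python
def ancestors(dag, roots):
    """All ancestors of `roots` (inclusive) in a node -> parents dict."""
    seen = set()
    stack = list(roots)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(dag[n])
    return seen

def heads(dag, nodes):
    """Nodes of `nodes` that are not the parent of another node in `nodes`."""
    return {n for n in nodes if not any(n in dag[m] for m in nodes)}

# linear history a < b < c, where c is missing on the remote:
dag = {'a': [], 'b': ['a'], 'c': ['b']}
common = ancestors(dag, ['b'])           # everything known remotely
both = ancestors(dag, ['c']) & common    # (::missingheads) and (::commonheads)
cheads = heads(dag, both)
```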
246 246 def _pushsyncphase(pushop):
247 """synchronise phase information locally and remotly"""
247 """synchronise phase information locally and remotely"""
248 248 unfi = pushop.repo.unfiltered()
249 249 cheads = pushop.commonheads
250 250 if pushop.ret:
251 251 # push succeed, synchronize target of the push
252 252 cheads = pushop.outgoing.missingheads
253 253 elif pushop.revs is None:
255 255 # All out push failed. synchronize all common
255 255 cheads = pushop.outgoing.commonheads
256 256 else:
257 257 # I want cheads = heads(::missingheads and ::commonheads)
258 258 # (missingheads is revs with secret changeset filtered out)
259 259 #
260 260 # This can be expressed as:
261 261 # cheads = ( (missingheads and ::commonheads)
262 262 # + (commonheads and ::missingheads))"
263 263 # )
264 264 #
265 265 # while trying to push we already computed the following:
266 266 # common = (::commonheads)
267 267 # missing = ((commonheads::missingheads) - commonheads)
268 268 #
269 269 # We can pick:
270 270 # * missingheads part of common (::commonheads)
271 271 common = set(pushop.outgoing.common)
272 272 nm = pushop.repo.changelog.nodemap
273 273 cheads = [node for node in pushop.revs if nm[node] in common]
274 274 # and
275 275 # * commonheads parents on missing
276 276 revset = unfi.set('%ln and parents(roots(%ln))',
277 277 pushop.outgoing.commonheads,
278 278 pushop.outgoing.missing)
279 279 cheads.extend(c.node() for c in revset)
280 280 pushop.commonheads = cheads
281 281 # even when we don't push, exchanging phase data is useful
282 282 remotephases = pushop.remote.listkeys('phases')
283 283 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
284 284 and remotephases # server supports phases
285 285 and pushop.ret is None # nothing was pushed
286 286 and remotephases.get('publishing', False)):
287 287 # When:
288 288 # - this is a subrepo push
289 289 # - and remote support phase
290 290 # - and no changeset was pushed
291 291 # - and remote is publishing
292 292 # We may be in issue 3871 case!
293 293 # We drop the possible phase synchronisation done by
294 294 # courtesy to publish changesets possibly locally draft
295 295 # on the remote.
296 296 remotephases = {'publishing': 'True'}
297 297 if not remotephases: # old server or public only reply from non-publishing
298 298 _localphasemove(pushop, cheads)
299 299 # don't push any phase data as there is nothing to push
300 300 else:
301 301 ana = phases.analyzeremotephases(pushop.repo, cheads,
302 302 remotephases)
303 303 pheads, droots = ana
304 304 ### Apply remote phase on local
305 305 if remotephases.get('publishing', False):
306 306 _localphasemove(pushop, cheads)
307 307 else: # publish = False
308 308 _localphasemove(pushop, pheads)
309 309 _localphasemove(pushop, cheads, phases.draft)
310 310 ### Apply local phase on remote
311 311
312 312 # Get the list of all revs draft on remote by public here.
313 313 # XXX Beware that the revset breaks if droots is not strictly
314 314 # XXX roots; we may want to ensure it is, but it is costly
315 315 outdated = unfi.set('heads((%ln::%ln) and public())',
316 316 droots, cheads)
317 317 for newremotehead in outdated:
318 318 r = pushop.remote.pushkey('phases',
319 319 newremotehead.hex(),
320 320 str(phases.draft),
321 321 str(phases.public))
322 322 if not r:
323 323 pushop.ui.warn(_('updating %s to public failed!\n')
324 324 % newremotehead)
325 325
326 326 def _localphasemove(pushop, nodes, phase=phases.public):
327 327 """move <nodes> to <phase> in the local source repo"""
328 328 if pushop.locallocked:
329 329 phases.advanceboundary(pushop.repo, phase, nodes)
330 330 else:
331 331 # repo is not locked, do not change any phases!
332 332 # Informs the user that phases should have been moved when
333 333 # applicable.
334 334 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
335 335 phasestr = phases.phasenames[phase]
336 336 if actualmoves:
337 337 pushop.ui.status(_('cannot lock source repo, skipping '
338 338 'local %s phase update\n') % phasestr)
339 339
340 340 def _pushobsolete(pushop):
341 341 """utility function to push obsolete markers to a remote"""
342 342 pushop.ui.debug('try to push obsolete markers to remote\n')
343 343 repo = pushop.repo
344 344 remote = pushop.remote
345 345 if (obsolete._enabled and repo.obsstore and
346 346 'obsolete' in remote.listkeys('namespaces')):
347 347 rslts = []
348 348 remotedata = repo.listkeys('obsolete')
349 349 for key in sorted(remotedata, reverse=True):
350 350 # reverse sort to ensure we end with dump0
351 351 data = remotedata[key]
352 352 rslts.append(remote.pushkey('obsolete', key, '', data))
353 353 if [r for r in rslts if not r]:
354 354 msg = _('failed to push some obsolete markers!\n')
355 355 repo.ui.warn(msg)
356 356
357 357 def _pushbookmark(pushop):
358 358 """Update bookmark position on remote"""
359 359 ui = pushop.ui
360 360 repo = pushop.repo.unfiltered()
361 361 remote = pushop.remote
362 362 ui.debug("checking for updated bookmarks\n")
363 363 revnums = map(repo.changelog.rev, pushop.revs or [])
364 364 ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
365 365 (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
366 366 ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
367 367 srchex=hex)
368 368
369 369 for b, scid, dcid in advsrc:
370 370 if ancestors and repo[scid].rev() not in ancestors:
371 371 continue
372 372 if remote.pushkey('bookmarks', b, dcid, scid):
373 373 ui.status(_("updating bookmark %s\n") % b)
374 374 else:
375 375 ui.warn(_('updating bookmark %s failed!\n') % b)
376 376
377 377 class pulloperation(object):
378 378 """An object that represents a single pull operation
379 379
380 380 Its purpose is to carry pull related state and very common operations.
381 381
382 A new should be created at the begining of each pull and discarded
382 A new should be created at the beginning of each pull and discarded
383 383 afterward.
384 384 """
385 385
386 386 def __init__(self, repo, remote, heads=None, force=False):
387 387 # repo we pull into
388 388 self.repo = repo
389 389 # repo we pull from
390 390 self.remote = remote
391 391 # revision we try to pull (None is "all")
392 392 self.heads = heads
393 393 # do we force pull?
394 394 self.force = force
395 395 # the name of the pull transaction
396 396 self._trname = 'pull\n' + util.hidepassword(remote.url())
397 397 # hold the transaction once created
398 398 self._tr = None
399 399 # set of common changesets between local and remote before pull
400 400 self.common = None
401 401 # set of pulled heads
402 402 self.rheads = None
403 # list of missing changeset to fetch remotly
403 # list of missing changeset to fetch remotely
404 404 self.fetch = None
405 # result of changegroup pulling (used as returng code by pull)
405 # result of changegroup pulling (used as return code by pull)
406 406 self.cgresult = None
407 407 # list of steps remaining to do (related to future bundle2 usage)
408 408 self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])
409 409
410 410 @util.propertycache
411 411 def pulledsubset(self):
412 412 """heads of the set of changesets targeted by the pull"""
413 413 # compute target subset
414 414 if self.heads is None:
415 415 # We pulled everything possible
416 416 # sync on everything common
417 417 c = set(self.common)
418 418 ret = list(self.common)
419 419 for n in self.rheads:
420 420 if n not in c:
421 421 ret.append(n)
422 422 return ret
423 423 else:
424 424 # We pulled a specific subset
425 425 # sync on this subset
426 426 return self.heads
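The `pulledsubset` property above returns everything common plus any pulled remote head not already known, unless explicit heads were requested. A standalone sketch of that logic (names here are illustrative, not Mercurial's API):

```python
def pulledsubset(common, rheads, heads=None):
    # With explicit heads, the pulled subset is exactly those heads.
    if heads is not None:
        return heads
    # Otherwise: all common nodes, plus pulled remote heads not in common.
    seen = set(common)
    return list(common) + [n for n in rheads if n not in seen]
```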
427 427
428 428 def gettransaction(self):
429 429 """get appropriate pull transaction, creating it if needed"""
430 430 if self._tr is None:
431 431 self._tr = self.repo.transaction(self._trname)
432 432 return self._tr
433 433
434 434 def closetransaction(self):
435 435 """close transaction if created"""
436 436 if self._tr is not None:
437 437 self._tr.close()
438 438
439 439 def releasetransaction(self):
440 440 """release transaction if created"""
441 441 if self._tr is not None:
442 442 self._tr.release()
443 443
444 444 def pull(repo, remote, heads=None, force=False):
445 445 pullop = pulloperation(repo, remote, heads, force)
446 446 if pullop.remote.local():
447 447 missing = set(pullop.remote.requirements) - pullop.repo.supported
448 448 if missing:
449 449 msg = _("required features are not"
450 450 " supported in the destination:"
451 451 " %s") % (', '.join(sorted(missing)))
452 452 raise util.Abort(msg)
453 453
454 454 lock = pullop.repo.lock()
455 455 try:
456 456 _pulldiscovery(pullop)
457 457 if pullop.remote.capable('bundle2'):
458 458 _pullbundle2(pullop)
459 459 if 'changegroup' in pullop.todosteps:
460 460 _pullchangeset(pullop)
461 461 if 'phases' in pullop.todosteps:
462 462 _pullphase(pullop)
463 463 if 'obsmarkers' in pullop.todosteps:
464 464 _pullobsolete(pullop)
465 465 pullop.closetransaction()
466 466 finally:
467 467 pullop.releasetransaction()
468 468 lock.release()
469 469
470 470 return pullop.cgresult
471 471
472 472 def _pulldiscovery(pullop):
473 473 """discovery phase for the pull
474 474
475 475 Currently handles changeset discovery only; will change to handle all discovery
476 476 at some point.
477 477 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
478 478 pullop.remote,
479 479 heads=pullop.heads,
480 480 force=pullop.force)
481 481 pullop.common, pullop.fetch, pullop.rheads = tmp
482 482
483 483 def _pullbundle2(pullop):
484 484 """pull data using bundle2
485 485
486 486 For now, the only supported data is the changegroup."""
487 487 kwargs = {'bundlecaps': set(['HG20'])}
488 488 # pulling changegroup
489 489 pullop.todosteps.remove('changegroup')
490 490 if not pullop.fetch:
491 491 pullop.repo.ui.status(_("no changes found\n"))
492 492 pullop.cgresult = 0
493 493 else:
494 494 kwargs['common'] = pullop.common
495 495 kwargs['heads'] = pullop.heads or pullop.rheads
496 496 if pullop.heads is None and list(pullop.common) == [nullid]:
497 497 pullop.repo.ui.status(_("requesting all changes\n"))
498 498 if kwargs.keys() == ['format']:
499 499 return # nothing to pull
500 500 bundle = pullop.remote.getbundle('pull', **kwargs)
501 501 try:
502 502 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
503 503 except KeyError, exc:
504 504 raise util.Abort('missing support for %s' % exc)
505 505 assert len(op.records['changegroup']) == 1
506 506 pullop.cgresult = op.records['changegroup'][0]['return']
507 507
508 508 def _pullchangeset(pullop):
509 509 """pull changeset from unbundle into the local repo"""
510 510 # We delay the open of the transaction as late as possible so we
511 511 # don't open transaction for nothing or you break future useful
512 512 # rollback call
513 513 pullop.todosteps.remove('changegroup')
514 514 if not pullop.fetch:
515 515 pullop.repo.ui.status(_("no changes found\n"))
516 516 pullop.cgresult = 0
517 517 return
518 518 pullop.gettransaction()
519 519 if pullop.heads is None and list(pullop.common) == [nullid]:
520 520 pullop.repo.ui.status(_("requesting all changes\n"))
521 521 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
522 522 # issue1320, avoid a race if remote changed after discovery
523 523 pullop.heads = pullop.rheads
524 524
525 525 if pullop.remote.capable('getbundle'):
526 526 # TODO: get bundlecaps from remote
527 527 cg = pullop.remote.getbundle('pull', common=pullop.common,
528 528 heads=pullop.heads or pullop.rheads)
529 529 elif pullop.heads is None:
530 530 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
531 531 elif not pullop.remote.capable('changegroupsubset'):
532 532 raise util.Abort(_("partial pull cannot be done because "
533 533 "other repository doesn't support "
534 534 "changegroupsubset."))
535 535 else:
536 536 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
537 537 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
538 538 pullop.remote.url())
539 539
540 540 def _pullphase(pullop):
541 541 # Get remote phases data from remote
542 542 pullop.todosteps.remove('phases')
543 543 remotephases = pullop.remote.listkeys('phases')
544 544 publishing = bool(remotephases.get('publishing', False))
545 545 if remotephases and not publishing:
546 546 # remote is new and unpublishing
547 547 pheads, _dr = phases.analyzeremotephases(pullop.repo,
548 548 pullop.pulledsubset,
549 549 remotephases)
550 550 phases.advanceboundary(pullop.repo, phases.public, pheads)
551 551 phases.advanceboundary(pullop.repo, phases.draft,
552 552 pullop.pulledsubset)
553 553 else:
554 554 # Remote is old or publishing all common changesets
555 555 # should be seen as public
556 556 phases.advanceboundary(pullop.repo, phases.public,
557 557 pullop.pulledsubset)
558 558
559 559 def _pullobsolete(pullop):
560 560 """utility function to pull obsolete markers from a remote
561 561
562 562 `gettransaction` is a function that returns the pull transaction, creating
563 563 one if necessary. We return the transaction to inform the calling code that
564 564 a new transaction has been created (when applicable).
565 565
566 566 Exists mostly to allow overriding for experimentation purpose"""
567 567 pullop.todosteps.remove('obsmarkers')
568 568 tr = None
569 569 if obsolete._enabled:
570 570 pullop.repo.ui.debug('fetching remote obsolete markers\n')
571 571 remoteobs = pullop.remote.listkeys('obsolete')
572 572 if 'dump0' in remoteobs:
573 573 tr = pullop.gettransaction()
574 574 for key in sorted(remoteobs, reverse=True):
575 575 if key.startswith('dump'):
576 576 data = base85.b85decode(remoteobs[key])
577 577 pullop.repo.obsstore.mergemarkers(tr, data)
578 578 pullop.repo.invalidatevolatilesets()
579 579 return tr
580 580
581 581 def getbundle(repo, source, heads=None, common=None, bundlecaps=None):
582 582 """return a full bundle (with potentially multiple kinds of parts)
583 583
584 584 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
585 585 passed. For now, the bundle can contain only a changegroup, but this will
586 586 change when more part types become available for bundle2.
587 587
588 588 This is different from changegroup.getbundle that only returns an HG10
589 589 changegroup bundle. They may eventually get reunited in the future when we
590 590 have a clearer idea of the API we want to query different data.
591 591
592 592 The implementation is at a very early stage and will get massive rework
593 593 when the API of bundle is refined.
594 594 """
595 595 # build bundle here.
596 596 cg = changegroup.getbundle(repo, source, heads=heads,
597 597 common=common, bundlecaps=bundlecaps)
598 598 if bundlecaps is None or 'HG20' not in bundlecaps:
599 599 return cg
600 600 # very crude first implementation,
601 601 # the bundle API will change and the generation will be done lazily.
602 602 bundler = bundle2.bundle20(repo.ui)
603 603 def cgchunks(cg=cg):
604 604 yield 'HG10UN'
605 605 for c in cg.getchunks():
606 606 yield c
607 607 part = bundle2.bundlepart('changegroup', data=cgchunks())
608 608 bundler.addpart(part)
609 609 return bundle2.unbundle20(repo.ui, util.chunkbuffer(bundler.getchunks()))
610 610
611 611 class PushRaced(RuntimeError):
612 """An exception raised during unbunding that indicate a push race"""
612 """An exception raised during unbundling that indicate a push race"""
613 613
614 614 def check_heads(repo, their_heads, context):
615 615 """check if the heads of a repo have been modified
616 616
617 617 Used by peer for unbundling.
618 618 """
619 619 heads = repo.heads()
620 620 heads_hash = util.sha1(''.join(sorted(heads))).digest()
621 621 if not (their_heads == ['force'] or their_heads == heads or
622 622 their_heads == ['hashed', heads_hash]):
623 623 # someone else committed/pushed/unbundled while we
624 624 # were transferring data
625 625 raise PushRaced('repository changed while %s - '
626 626 'please try again' % context)
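The race check in `check_heads` compares the local head set against what the remote client saw, either literally or as a sha1 digest of the sorted binary head nodes. A standalone sketch of the same comparison; `hashheads` is a hypothetical stand-in for `util.sha1(''.join(sorted(heads))).digest()`:

```python
import hashlib

def hashheads(heads):
    # sha1 of the concatenation of the sorted binary head nodes, so both
    # sides can cheaply compare head sets without shipping every node
    return hashlib.sha1(b"".join(sorted(heads))).digest()

def raced(local_heads, their_heads):
    # their_heads is either [b"force"], a literal head list, or
    # [b"hashed", <digest>] as produced by hashheads()
    if their_heads == [b"force"]:
        return False
    if their_heads == local_heads:
        return False
    if their_heads == [b"hashed", hashheads(local_heads)]:
        return False
    # someone committed/pushed/unbundled while data was in flight
    return True
```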
627 627
628 628 def unbundle(repo, cg, heads, source, url):
629 629 """Apply a bundle to a repo.
630 630
631 631 this function makes sure the repo is locked during the application and has a
632 mechanism to check that no push race occured between the creation of the
632 mechanism to check that no push race occurred between the creation of the
633 633 bundle and its application.
634 634
635 635 If the push was raced, a PushRaced exception is raised."""
636 636 r = 0
637 637 lock = repo.lock()
638 638 try:
639 639 check_heads(repo, heads, 'uploading changes')
640 640 # push can proceed
641 641 r = changegroup.addchangegroup(repo, cg, source, url)
642 642 finally:
643 643 lock.release()
644 644 return r
@@ -1,1020 +1,1020 b''
1 1 # merge.py - directory-level update/merge handling for Mercurial
2 2 #
3 3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 import struct
9 9
10 10 from node import nullid, nullrev, hex, bin
11 11 from i18n import _
12 12 from mercurial import obsolete
13 13 import error, util, filemerge, copies, subrepo, worker, dicthelpers
14 14 import errno, os, shutil
15 15
16 16 _pack = struct.pack
17 17 _unpack = struct.unpack
18 18
19 19 def _droponode(data):
20 20 # used for compatibility for v1
21 21 bits = data.split("\0")
22 22 bits = bits[:-2] + bits[-1:]
23 23 return "\0".join(bits)
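As a standalone illustration of what `_droponode` above does: a v2 "F" record carries one extra field, the "other" file node, just before the flags, and the v1 compatibility path drops that second-to-last field. A hypothetical sketch with an invented record:

```python
def droponode(data):
    # v2 "F" records have an extra "other file node" field just before
    # the trailing flags field; v1 records do not, so drop it
    bits = data.split("\0")
    return "\0".join(bits[:-2] + bits[-1:])
```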
24 24
25 25 class mergestate(object):
26 26 '''track 3-way merge state of individual files
27 27
28 28 it is stored on disk when needed. Two files are used, one with an old
29 29 format, one with a new format. Both contain similar data, but the new
30 30 format can store new kinds of fields.
31 31
32 32 The current new format is a list of arbitrary records of the form:
33 33
34 34 [type][length][content]
35 35
36 36 Type is a single character, length is a 4-byte integer, content is an
37 37 arbitrary suite of bytes of length `length`.
38 38
39 39 Type should be a letter. Capital letters are mandatory records; Mercurial
40 40 should abort if they are unknown. Lower case records can be safely ignored.
41 41
42 42 Currently known records:
43 43
44 44 L: the node of the "local" part of the merge (hexified version)
45 45 O: the node of the "other" part of the merge (hexified version)
46 46 F: a file to be merged entry
47 47 '''
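The `[type][length][content]` framing described in the docstring above (and implemented later by `_writerecordsv2` with `">sI%is"` and `_readrecordsv2`) can be sketched standalone; this is an illustration of the record layout, not code from the module:

```python
import struct

def pack_record(rtype, data):
    # [type][length][content]: 1-byte type, 4-byte big-endian length, payload
    return struct.pack(">sI%ds" % len(data), rtype, len(data), data)

def unpack_records(blob):
    # walk the byte string record by record, as _readrecordsv2 does
    off, end, records = 0, len(blob), []
    while off < end:
        rtype = blob[off:off + 1]
        off += 1
        (length,) = struct.unpack(">I", blob[off:off + 4])
        off += 4
        records.append((rtype, blob[off:off + length]))
        off += length
    return records
```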
48 48 statepathv1 = "merge/state"
49 49 statepathv2 = "merge/state2"
50 50
51 51 def __init__(self, repo):
52 52 self._repo = repo
53 53 self._dirty = False
54 54 self._read()
55 55
56 56 def reset(self, node=None, other=None):
57 57 self._state = {}
58 58 if node:
59 59 self._local = node
60 60 self._other = other
61 61 shutil.rmtree(self._repo.join("merge"), True)
62 62 self._dirty = False
63 63
64 64 def _read(self):
65 65 """Analyse each record content to restore a serialized state from disk
66 66
67 67 This function processes "record" entries produced by the de-serialization
68 68 of the on-disk file.
69 69 """
70 70 self._state = {}
71 71 records = self._readrecords()
72 72 for rtype, record in records:
73 73 if rtype == 'L':
74 74 self._local = bin(record)
75 75 elif rtype == 'O':
76 76 self._other = bin(record)
77 77 elif rtype == "F":
78 78 bits = record.split("\0")
79 79 self._state[bits[0]] = bits[1:]
80 80 elif not rtype.islower():
81 81 raise util.Abort(_('unsupported merge state record: %s')
82 82 % rtype)
83 83 self._dirty = False
84 84
85 85 def _readrecords(self):
86 86 """Read merge state from disk and return a list of record (TYPE, data)
87 87
88 We read data from both V1 and Ve files decide which on to use.
88 We read data from both v1 and v2 files and decide which one to use.
89 89
90 V1 have been used by version prior to 2.9.1 and contains less data than
91 v2. We read both version and check if no data in v2 contradict one in
90 V1 has been used by version prior to 2.9.1 and contains less data than
91 v2. We read both versions and check if no data in v2 contradicts
92 92 v1. If there is no contradiction we can safely assume that both v1
93 93 and v2 were written at the same time and use the extra data in v2. If
94 94 there is a contradiction we ignore v2 content as we assume an old version
95 of Mercurial have over written the mergstate file and left an old v2
95 of Mercurial has overwritten the mergestate file and left an old v2
96 96 file around.
97 97
98 98 returns list of record [(TYPE, data), ...]"""
99 99 v1records = self._readrecordsv1()
100 100 v2records = self._readrecordsv2()
101 101 oldv2 = set() # old format version of v2 record
102 102 for rec in v2records:
103 103 if rec[0] == 'L':
104 104 oldv2.add(rec)
105 105 elif rec[0] == 'F':
106 106 # drop the onode data (not contained in v1)
107 107 oldv2.add(('F', _droponode(rec[1])))
108 108 for rec in v1records:
109 109 if rec not in oldv2:
110 110 # v1 file is newer than v2 file, use it
111 111 # we have to infer the "other" changeset of the merge
112 112 # we cannot do better than that with v1 of the format
113 113 mctx = self._repo[None].parents()[-1]
114 114 v1records.append(('O', mctx.hex()))
115 115 # add placeholder "other" file node information
116 116 # nobody is using it yet so we do not need to fetch the data
117 117 # if mctx was wrong `mctx[bits[-2]]` may fail.
118 118 for idx, r in enumerate(v1records):
119 119 if r[0] == 'F':
120 120 bits = r[1].split("\0")
121 121 bits.insert(-2, '')
122 122 v1records[idx] = (r[0], "\0".join(bits))
123 123 return v1records
124 124 else:
125 125 return v2records
126 126
127 127 def _readrecordsv1(self):
128 128 """read on disk merge state for version 1 file
129 129
130 130 returns list of record [(TYPE, data), ...]
131 131
132 132 Note: the "F" data from this file are one entry short
133 133 (no "other file node" entry)
134 134 """
135 135 records = []
136 136 try:
137 137 f = self._repo.opener(self.statepathv1)
138 138 for i, l in enumerate(f):
139 139 if i == 0:
140 140 records.append(('L', l[:-1]))
141 141 else:
142 142 records.append(('F', l[:-1]))
143 143 f.close()
144 144 except IOError, err:
145 145 if err.errno != errno.ENOENT:
146 146 raise
147 147 return records
148 148
149 149 def _readrecordsv2(self):
150 150 """read on disk merge state for version 2 file
151 151
152 152 returns list of record [(TYPE, data), ...]
153 153 """
154 154 records = []
155 155 try:
156 156 f = self._repo.opener(self.statepathv2)
157 157 data = f.read()
158 158 off = 0
159 159 end = len(data)
160 160 while off < end:
161 161 rtype = data[off]
162 162 off += 1
163 163 length = _unpack('>I', data[off:(off + 4)])[0]
164 164 off += 4
165 165 record = data[off:(off + length)]
166 166 off += length
167 167 records.append((rtype, record))
168 168 f.close()
169 169 except IOError, err:
170 170 if err.errno != errno.ENOENT:
171 171 raise
172 172 return records
173 173
174 174 def commit(self):
175 175 """Write current state on disk (if necessary)"""
176 176 if self._dirty:
177 177 records = []
178 178 records.append(("L", hex(self._local)))
179 179 records.append(("O", hex(self._other)))
180 180 for d, v in self._state.iteritems():
181 181 records.append(("F", "\0".join([d] + v)))
182 182 self._writerecords(records)
183 183 self._dirty = False
184 184
185 185 def _writerecords(self, records):
186 186 """Write current state on disk (both v1 and v2)"""
187 187 self._writerecordsv1(records)
188 188 self._writerecordsv2(records)
189 189
190 190 def _writerecordsv1(self, records):
191 191 """Write current state on disk in a version 1 file"""
192 192 f = self._repo.opener(self.statepathv1, "w")
193 193 irecords = iter(records)
194 194 lrecords = irecords.next()
195 195 assert lrecords[0] == 'L'
196 196 f.write(hex(self._local) + "\n")
197 197 for rtype, data in irecords:
198 198 if rtype == "F":
199 199 f.write("%s\n" % _droponode(data))
200 200 f.close()
201 201
202 202 def _writerecordsv2(self, records):
203 203 """Write current state on disk in a version 2 file"""
204 204 f = self._repo.opener(self.statepathv2, "w")
205 205 for key, data in records:
206 206 assert len(key) == 1
207 207 format = ">sI%is" % len(data)
208 208 f.write(_pack(format, key, len(data), data))
209 209 f.close()
210 210
211 211 def add(self, fcl, fco, fca, fd):
212 212 """add a new (potentially?) conflicting file to the merge state
213 213 fcl: file context for local,
214 214 fco: file context for remote,
215 215 fca: file context for ancestors,
216 216 fd: file path of the resulting merge.
217 217
218 218 note: also write the local version to the `.hg/merge` directory.
219 219 """
220 220 hash = util.sha1(fcl.path()).hexdigest()
221 221 self._repo.opener.write("merge/" + hash, fcl.data())
222 222 self._state[fd] = ['u', hash, fcl.path(),
223 223 fca.path(), hex(fca.filenode()),
224 224 fco.path(), hex(fco.filenode()),
225 225 fcl.flags()]
226 226 self._dirty = True
227 227
228 228 def __contains__(self, dfile):
229 229 return dfile in self._state
230 230
231 231 def __getitem__(self, dfile):
232 232 return self._state[dfile][0]
233 233
234 234 def __iter__(self):
235 235 l = self._state.keys()
236 236 l.sort()
237 237 for f in l:
238 238 yield f
239 239
240 240 def files(self):
241 241 return self._state.keys()
242 242
243 243 def mark(self, dfile, state):
244 244 self._state[dfile][0] = state
245 245 self._dirty = True
246 246
247 247 def resolve(self, dfile, wctx):
248 248 """rerun merge process for file path `dfile`"""
249 249 if self[dfile] == 'r':
250 250 return 0
251 251 stateentry = self._state[dfile]
252 252 state, hash, lfile, afile, anode, ofile, onode, flags = stateentry
253 253 octx = self._repo[self._other]
254 254 fcd = wctx[dfile]
255 255 fco = octx[ofile]
256 256 fca = self._repo.filectx(afile, fileid=anode)
257 257 # "premerge" x flags
258 258 flo = fco.flags()
259 259 fla = fca.flags()
260 260 if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
261 261 if fca.node() == nullid:
262 262 self._repo.ui.warn(_('warning: cannot merge flags for %s\n') %
263 263 afile)
264 264 elif flags == fla:
265 265 flags = flo
266 266 # restore local
267 267 f = self._repo.opener("merge/" + hash)
268 268 self._repo.wwrite(dfile, f.read(), flags)
269 269 f.close()
270 270 r = filemerge.filemerge(self._repo, self._local, lfile, fcd, fco, fca)
271 271 if r is None:
272 272 # no real conflict
273 273 del self._state[dfile]
274 274 self._dirty = True
275 275 elif not r:
276 276 self.mark(dfile, 'r')
277 277 return r
278 278
279 279 def _checkunknownfile(repo, wctx, mctx, f):
280 280 return (not repo.dirstate._ignore(f)
281 281 and os.path.isfile(repo.wjoin(f))
282 282 and repo.wopener.audit.check(f)
283 283 and repo.dirstate.normalize(f) not in repo.dirstate
284 284 and mctx[f].cmp(wctx[f]))
285 285
286 286 def _checkunknown(repo, wctx, mctx):
287 287 "check for collisions between unknown files and files in mctx"
288 288
289 289 error = False
290 290 for f in mctx:
291 291 if f not in wctx and _checkunknownfile(repo, wctx, mctx, f):
292 292 error = True
293 293 wctx._repo.ui.warn(_("%s: untracked file differs\n") % f)
294 294 if error:
295 295 raise util.Abort(_("untracked files in working directory differ "
296 296 "from files in requested revision"))
297 297
298 298 def _forgetremoved(wctx, mctx, branchmerge):
299 299 """
300 300 Forget removed files
301 301
302 302 If we're jumping between revisions (as opposed to merging), and if
303 303 neither the working directory nor the target rev has the file,
304 304 then we need to remove it from the dirstate, to prevent the
305 305 dirstate from listing the file when it is no longer in the
306 306 manifest.
307 307
308 308 If we're merging, and the other revision has removed a file
309 309 that is not present in the working directory, we need to mark it
310 310 as removed.
311 311 """
312 312
313 313 actions = []
314 314 state = branchmerge and 'r' or 'f'
315 315 for f in wctx.deleted():
316 316 if f not in mctx:
317 317 actions.append((f, state, None, "forget deleted"))
318 318
319 319 if not branchmerge:
320 320 for f in wctx.removed():
321 321 if f not in mctx:
322 322 actions.append((f, "f", None, "forget removed"))
323 323
324 324 return actions
325 325
326 326 def _checkcollision(repo, wmf, actions):
327 327 # build provisional merged manifest up
328 328 pmmf = set(wmf)
329 329
330 330 def addop(f, args):
331 331 pmmf.add(f)
332 332 def removeop(f, args):
333 333 pmmf.discard(f)
334 334 def nop(f, args):
335 335 pass
336 336
337 337 def renamemoveop(f, args):
338 338 f2, flags = args
339 339 pmmf.discard(f2)
340 340 pmmf.add(f)
341 341 def renamegetop(f, args):
342 342 f2, flags = args
343 343 pmmf.add(f)
344 344 def mergeop(f, args):
345 345 f1, f2, fa, move, anc = args
346 346 if move:
347 347 pmmf.discard(f1)
348 348 pmmf.add(f)
349 349
350 350 opmap = {
351 351 "a": addop,
352 352 "dm": renamemoveop,
353 353 "dg": renamegetop,
354 354 "dr": nop,
355 355 "e": nop,
356 356 "f": addop, # untracked file should be kept in working directory
357 357 "g": addop,
358 358 "m": mergeop,
359 359 "r": removeop,
360 360 "rd": nop,
361 361 "cd": addop,
362 362 "dc": addop,
363 363 }
364 364 for f, m, args, msg in actions:
365 365 op = opmap.get(m)
366 366 assert op, m
367 367 op(f, args)
368 368
369 369 # check case-folding collision in provisional merged manifest
370 370 foldmap = {}
371 371 for f in sorted(pmmf):
372 372 fold = util.normcase(f)
373 373 if fold in foldmap:
374 374 raise util.Abort(_("case-folding collision between %s and %s")
375 375 % (f, foldmap[fold]))
376 376 foldmap[fold] = f
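The collision check at the end of `_checkcollision` above can be sketched on its own: two paths colliding means they normalize to the same case-folded form. A minimal sketch, using `str.lower` as a stand-in for `util.normcase`:

```python
def checkcasecollision(paths):
    # two paths that fold to the same form would collide on a
    # case-insensitive filesystem
    foldmap = {}
    for f in sorted(paths):
        fold = f.lower()  # stand-in for util.normcase
        if fold in foldmap:
            raise ValueError("case-folding collision between %s and %s"
                             % (f, foldmap[fold]))
        foldmap[fold] = f
```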
377 377
378 378 def manifestmerge(repo, wctx, p2, pa, branchmerge, force, partial,
379 379 acceptremote=False):
380 380 """
381 381 Merge p1 and p2 with ancestor pa and generate merge action list
382 382
383 383 branchmerge and force are as passed in to update
384 384 partial = function to filter file lists
385 385 acceptremote = accept the incoming changes without prompting
386 386 """
387 387
388 388 overwrite = force and not branchmerge
389 389 actions, copy, movewithdir = [], {}, {}
390 390
391 391 followcopies = False
392 392 if overwrite:
393 393 pa = wctx
394 394 elif pa == p2: # backwards
395 395 pa = wctx.p1()
396 396 elif not branchmerge and not wctx.dirty(missing=True):
397 397 pass
398 398 elif pa and repo.ui.configbool("merge", "followcopies", True):
399 399 followcopies = True
400 400
401 401 # manifests fetched in order are going to be faster, so prime the caches
402 402 [x.manifest() for x in
403 403 sorted(wctx.parents() + [p2, pa], key=lambda x: x.rev())]
404 404
405 405 if followcopies:
406 406 ret = copies.mergecopies(repo, wctx, p2, pa)
407 407 copy, movewithdir, diverge, renamedelete = ret
408 408 for of, fl in diverge.iteritems():
409 409 actions.append((of, "dr", (fl,), "divergent renames"))
410 410 for of, fl in renamedelete.iteritems():
411 411 actions.append((of, "rd", (fl,), "rename and delete"))
412 412
413 413 repo.ui.note(_("resolving manifests\n"))
414 414 repo.ui.debug(" branchmerge: %s, force: %s, partial: %s\n"
415 415 % (bool(branchmerge), bool(force), bool(partial)))
416 416 repo.ui.debug(" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2))
417 417
418 418 m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest()
419 419 copied = set(copy.values())
420 420 copied.update(movewithdir.values())
421 421
422 422 if '.hgsubstate' in m1:
423 423 # check whether sub state is modified
424 424 for s in sorted(wctx.substate):
425 425 if wctx.sub(s).dirty():
426 426 m1['.hgsubstate'] += "+"
427 427 break
428 428
429 429 aborts = []
430 430 # Compare manifests
431 431 fdiff = dicthelpers.diff(m1, m2)
432 432 flagsdiff = m1.flagsdiff(m2)
433 433 diff12 = dicthelpers.join(fdiff, flagsdiff)
434 434
435 435 for f, (n12, fl12) in diff12.iteritems():
436 436 if n12:
437 437 n1, n2 = n12
438 438 else: # file contents didn't change, but flags did
439 439 n1 = n2 = m1.get(f, None)
440 440 if n1 is None:
441 441 # Since n1 == n2, the file isn't present in m2 either. This
442 442 # means that the file was removed or deleted locally and
443 443 # removed remotely, but that residual entries remain in flags.
444 444 # This can happen in manifests generated by workingctx.
445 445 continue
446 446 if fl12:
447 447 fl1, fl2 = fl12
448 448 else: # flags didn't change, file contents did
449 449 fl1 = fl2 = m1.flags(f)
450 450
451 451 if partial and not partial(f):
452 452 continue
453 453 if n1 and n2:
454 454 fa = f
455 455 a = ma.get(f, nullid)
456 456 if a == nullid:
457 457 fa = copy.get(f, f)
458 458 # Note: f as default is wrong - we can't really make a 3-way
459 459 # merge without an ancestor file.
460 460 fla = ma.flags(fa)
461 461 nol = 'l' not in fl1 + fl2 + fla
462 462 if n2 == a and fl2 == fla:
463 463 pass # remote unchanged - keep local
464 464 elif n1 == a and fl1 == fla: # local unchanged - use remote
465 465 if n1 == n2: # optimization: keep local content
466 466 actions.append((f, "e", (fl2,), "update permissions"))
467 467 else:
468 468 actions.append((f, "g", (fl2,), "remote is newer"))
469 469 elif nol and n2 == a: # remote only changed 'x'
470 470 actions.append((f, "e", (fl2,), "update permissions"))
471 471 elif nol and n1 == a: # local only changed 'x'
472 472 actions.append((f, "g", (fl1,), "remote is newer"))
473 473 else: # both changed something
474 474 actions.append((f, "m", (f, f, fa, False, pa.node()),
475 475 "versions differ"))
476 476 elif f in copied: # files we'll deal with on m2 side
477 477 pass
478 478 elif n1 and f in movewithdir: # directory rename, move local
479 479 f2 = movewithdir[f]
480 480 actions.append((f2, "dm", (f, fl1),
481 481 "remote directory rename - move from " + f))
482 482 elif n1 and f in copy:
483 483 f2 = copy[f]
484 484 actions.append((f, "m", (f, f2, f2, False, pa.node()),
485 485 "local copied/moved from " + f2))
486 486 elif n1 and f in ma: # clean, a different, no remote
487 487 if n1 != ma[f]:
488 488 if acceptremote:
489 489 actions.append((f, "r", None, "remote delete"))
490 490 else:
491 491 actions.append((f, "cd", None, "prompt changed/deleted"))
492 492 elif n1[20:] == "a": # added, no remote
493 493 actions.append((f, "f", None, "remote deleted"))
494 494 else:
495 495 actions.append((f, "r", None, "other deleted"))
496 496 elif n2 and f in movewithdir:
497 497 f2 = movewithdir[f]
498 498 actions.append((f2, "dg", (f, fl2),
499 499 "local directory rename - get from " + f))
500 500 elif n2 and f in copy:
501 501 f2 = copy[f]
502 502 if f2 in m2:
503 503 actions.append((f, "m", (f2, f, f2, False, pa.node()),
504 504 "remote copied from " + f2))
505 505 else:
506 506 actions.append((f, "m", (f2, f, f2, True, pa.node()),
507 507 "remote moved from " + f2))
508 508 elif n2 and f not in ma:
509 509 # local unknown, remote created: the logic is described by the
510 510 # following table:
511 511 #
512 512 # force branchmerge different | action
513 513 # n * n | get
514 514 # n * y | abort
515 515 # y n * | get
516 516 # y y n | get
517 517 # y y y | merge
518 518 #
519 519 # Checking whether the files are different is expensive, so we
520 520 # don't do that when we can avoid it.
521 521 if force and not branchmerge:
522 522 actions.append((f, "g", (fl2,), "remote created"))
523 523 else:
524 524 different = _checkunknownfile(repo, wctx, p2, f)
525 525 if force and branchmerge and different:
526 526 # FIXME: This is wrong - f is not in ma ...
527 527 actions.append((f, "m", (f, f, f, False, pa.node()),
528 528 "remote differs from untracked local"))
529 529 elif not force and different:
530 530 aborts.append((f, "ud"))
531 531 else:
532 532 actions.append((f, "g", (fl2,), "remote created"))
533 533 elif n2 and n2 != ma[f]:
534 534 different = _checkunknownfile(repo, wctx, p2, f)
535 535 if not force and different:
536 536 aborts.append((f, "ud"))
537 537 else:
538 538 # if different: old untracked f may be overwritten and lost
539 539 if acceptremote:
540 540 actions.append((f, "g", (m2.flags(f),),
541 541 "remote recreating"))
542 542 else:
543 543 actions.append((f, "dc", (m2.flags(f),),
544 544 "prompt deleted/changed"))
545 545
546 546 for f, m in sorted(aborts):
547 547 if m == "ud":
548 548 repo.ui.warn(_("%s: untracked file differs\n") % f)
549 549 else: assert False, m
550 550 if aborts:
551 551 raise util.Abort(_("untracked files in working directory differ "
552 552 "from files in requested revision"))
553 553
554 554 if not util.checkcase(repo.path):
555 555 # check collision between files only in p2 for clean update
556 556 if (not branchmerge and
557 557 (force or not wctx.dirty(missing=True, branch=False))):
558 558 _checkcollision(repo, m2, [])
559 559 else:
560 560 _checkcollision(repo, m1, actions)
561 561
562 562 return actions
563 563
564 564 def actionkey(a):
565 565 return a[1] in "rf" and -1 or 0, a
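The `actionkey` sort key above orders remove ("r") and forget ("f") actions before everything else, so files are removed before new ones are written. A minimal standalone sketch (the file names are hypothetical):

```python
# actionkey maps remove/forget actions to -1 and all others to 0, so a
# stable sort puts removals first; ties are broken by the action tuple.
def actionkey(a):
    return a[1] in "rf" and -1 or 0, a

actions = [("b.txt", "g", None, "get"), ("a.txt", "r", None, "remove")]
actions.sort(key=actionkey)
# the remove action now comes before the get action
```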
566 566
567 567 def getremove(repo, mctx, overwrite, args):
568 568 """apply usually-non-interactive updates to the working directory
569 569
570 570 mctx is the context to be merged into the working copy
571 571
572 572 yields tuples for progress updates
573 573 """
574 574 verbose = repo.ui.verbose
575 575 unlink = util.unlinkpath
576 576 wjoin = repo.wjoin
577 577 fctx = mctx.filectx
578 578 wwrite = repo.wwrite
579 579 audit = repo.wopener.audit
580 580 i = 0
581 581 for arg in args:
582 582 f = arg[0]
583 583 if arg[1] == 'r':
584 584 if verbose:
585 585 repo.ui.note(_("removing %s\n") % f)
586 586 audit(f)
587 587 try:
588 588 unlink(wjoin(f), ignoremissing=True)
589 589 except OSError, inst:
590 590 repo.ui.warn(_("update failed to remove %s: %s!\n") %
591 591 (f, inst.strerror))
592 592 else:
593 593 if verbose:
594 594 repo.ui.note(_("getting %s\n") % f)
595 595 wwrite(f, fctx(f).data(), arg[2][0])
596 596 if i == 100:
597 597 yield i, f
598 598 i = 0
599 599 i += 1
600 600 if i > 0:
601 601 yield i, f
602 602
603 603 def applyupdates(repo, actions, wctx, mctx, overwrite):
604 604 """apply the merge action list to the working directory
605 605
606 606 wctx is the working copy context
607 607 mctx is the context to be merged into the working copy
608 608
609 609 Return a tuple of counts (updated, merged, removed, unresolved) that
610 610 describes how many files were affected by the update.
611 611 """
612 612
613 613 updated, merged, removed, unresolved = 0, 0, 0, 0
614 614 ms = mergestate(repo)
615 615 ms.reset(wctx.p1().node(), mctx.node())
616 616 moves = []
617 617 actions.sort(key=actionkey)
618 618
619 619 # prescan for merges
620 620 for a in actions:
621 621 f, m, args, msg = a
622 622 repo.ui.debug(" %s: %s -> %s\n" % (f, msg, m))
623 623 if m == "m": # merge
624 624 f1, f2, fa, move, anc = args
625 625 if f == '.hgsubstate': # merged internally
626 626 continue
627 627 repo.ui.debug(" preserving %s for resolve of %s\n" % (f1, f))
628 628 fcl = wctx[f1]
629 629 fco = mctx[f2]
630 630 actx = repo[anc]
631 631 if fa in actx:
632 632 fca = actx[fa]
633 633 else:
634 634 fca = repo.filectx(f1, fileid=nullrev)
635 635 ms.add(fcl, fco, fca, f)
636 636 if f1 != f and move:
637 637 moves.append(f1)
638 638
639 639 audit = repo.wopener.audit
640 640
641 641 # remove renamed files after safely stored
642 642 for f in moves:
643 643 if os.path.lexists(repo.wjoin(f)):
644 644 repo.ui.debug("removing %s\n" % f)
645 645 audit(f)
646 646 util.unlinkpath(repo.wjoin(f))
647 647
648 648 numupdates = len(actions)
649 649 workeractions = [a for a in actions if a[1] in 'gr']
650 650 updateactions = [a for a in workeractions if a[1] == 'g']
651 651 updated = len(updateactions)
652 652 removeactions = [a for a in workeractions if a[1] == 'r']
653 653 removed = len(removeactions)
654 654 actions = [a for a in actions if a[1] not in 'gr']
655 655
656 656 hgsub = [a[1] for a in workeractions if a[0] == '.hgsubstate']
657 657 if hgsub and hgsub[0] == 'r':
658 658 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
659 659
660 660 z = 0
661 661 prog = worker.worker(repo.ui, 0.001, getremove, (repo, mctx, overwrite),
662 662 removeactions)
663 663 for i, item in prog:
664 664 z += i
665 665 repo.ui.progress(_('updating'), z, item=item, total=numupdates,
666 666 unit=_('files'))
667 667 prog = worker.worker(repo.ui, 0.001, getremove, (repo, mctx, overwrite),
668 668 updateactions)
669 669 for i, item in prog:
670 670 z += i
671 671 repo.ui.progress(_('updating'), z, item=item, total=numupdates,
672 672 unit=_('files'))
673 673
674 674 if hgsub and hgsub[0] == 'g':
675 675 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
676 676
677 677 _updating = _('updating')
678 678 _files = _('files')
679 679 progress = repo.ui.progress
680 680
681 681 for i, a in enumerate(actions):
682 682 f, m, args, msg = a
683 683 progress(_updating, z + i + 1, item=f, total=numupdates, unit=_files)
684 684 if m == "m": # merge
685 685 f1, f2, fa, move, anc = args
686 686 if f == '.hgsubstate': # subrepo states need updating
687 687 subrepo.submerge(repo, wctx, mctx, wctx.ancestor(mctx),
688 688 overwrite)
689 689 continue
690 690 audit(f)
691 691 r = ms.resolve(f, wctx)
692 692 if r is not None and r > 0:
693 693 unresolved += 1
694 694 else:
695 695 if r is None:
696 696 updated += 1
697 697 else:
698 698 merged += 1
699 699 elif m == "dm": # directory rename, move local
700 700 f0, flags = args
701 701 repo.ui.note(_("moving %s to %s\n") % (f0, f))
702 702 audit(f)
703 703 repo.wwrite(f, wctx.filectx(f0).data(), flags)
704 704 util.unlinkpath(repo.wjoin(f0))
705 705 updated += 1
706 706 elif m == "dg": # local directory rename, get
707 707 f0, flags = args
708 708 repo.ui.note(_("getting %s to %s\n") % (f0, f))
709 709 repo.wwrite(f, mctx.filectx(f0).data(), flags)
710 710 updated += 1
711 711 elif m == "dr": # divergent renames
712 712 fl, = args
713 713 repo.ui.warn(_("note: possible conflict - %s was renamed "
714 714 "multiple times to:\n") % f)
715 715 for nf in fl:
716 716 repo.ui.warn(" %s\n" % nf)
717 717 elif m == "rd": # rename and delete
718 718 fl, = args
719 719 repo.ui.warn(_("note: possible conflict - %s was deleted "
720 720 "and renamed to:\n") % f)
721 721 for nf in fl:
722 722 repo.ui.warn(" %s\n" % nf)
723 723 elif m == "e": # exec
724 724 flags, = args
725 725 audit(f)
726 726 util.setflags(repo.wjoin(f), 'l' in flags, 'x' in flags)
727 727 updated += 1
728 728 ms.commit()
729 729 progress(_updating, None, total=numupdates, unit=_files)
730 730
731 731 return updated, merged, removed, unresolved
732 732
733 733 def calculateupdates(repo, tctx, mctx, ancestor, branchmerge, force, partial,
734 734 acceptremote=False):
735 735 "Calculate the actions needed to merge mctx into tctx"
736 736 actions = []
737 737 actions += manifestmerge(repo, tctx, mctx,
738 738 ancestor,
739 739 branchmerge, force,
740 740 partial, acceptremote)
741 741
742 742 # Filter out prompts.
743 743 newactions, prompts = [], []
744 744 for a in actions:
745 745 if a[1] in ("cd", "dc"):
746 746 prompts.append(a)
747 747 else:
748 748 newactions.append(a)
749 749 # Prompt and create actions. TODO: Move this towards resolve phase.
750 750 for f, m, args, msg in sorted(prompts):
751 751 if m == "cd":
752 752 if repo.ui.promptchoice(
753 753 _("local changed %s which remote deleted\n"
754 754 "use (c)hanged version or (d)elete?"
755 755 "$$ &Changed $$ &Delete") % f, 0):
756 756 newactions.append((f, "r", None, "prompt delete"))
757 757 else:
758 758 newactions.append((f, "a", None, "prompt keep"))
759 759 elif m == "dc":
760 760 flags, = args
761 761 if repo.ui.promptchoice(
762 762 _("remote changed %s which local deleted\n"
763 763 "use (c)hanged version or leave (d)eleted?"
764 764 "$$ &Changed $$ &Deleted") % f, 0) == 0:
765 765 newactions.append((f, "g", (flags,), "prompt recreating"))
766 766 else: assert False, m
767 767
768 768 if tctx.rev() is None:
769 769 newactions += _forgetremoved(tctx, mctx, branchmerge)
770 770
771 771 return newactions
772 772
773 773 def recordupdates(repo, actions, branchmerge):
774 774 "record merge actions to the dirstate"
775 775
776 776 for a in actions:
777 777 f, m, args, msg = a
778 778 if m == "r": # remove
779 779 if branchmerge:
780 780 repo.dirstate.remove(f)
781 781 else:
782 782 repo.dirstate.drop(f)
783 783 elif m == "a": # re-add
784 784 if not branchmerge:
785 785 repo.dirstate.add(f)
786 786 elif m == "f": # forget
787 787 repo.dirstate.drop(f)
788 788 elif m == "e": # exec change
789 789 repo.dirstate.normallookup(f)
790 790 elif m == "g": # get
791 791 if branchmerge:
792 792 repo.dirstate.otherparent(f)
793 793 else:
794 794 repo.dirstate.normal(f)
795 795 elif m == "m": # merge
796 796 f1, f2, fa, move, anc = args
797 797 if branchmerge:
798 798 # We've done a branch merge, mark this file as merged
799 799 # so that we properly record the merger later
800 800 repo.dirstate.merge(f)
801 801 if f1 != f2: # copy/rename
802 802 if move:
803 803 repo.dirstate.remove(f1)
804 804 if f1 != f:
805 805 repo.dirstate.copy(f1, f)
806 806 else:
807 807 repo.dirstate.copy(f2, f)
808 808 else:
809 809 # We've update-merged a locally modified file, so
810 810 # we set the dirstate to emulate a normal checkout
811 811 # of that file some time in the past. Thus our
812 812 # merge will appear as a normal local file
813 813 # modification.
814 814 if f2 == f: # file not locally copied/moved
815 815 repo.dirstate.normallookup(f)
816 816 if move:
817 817 repo.dirstate.drop(f1)
818 818 elif m == "dm": # directory rename, move local
819 819 f0, flag = args
820 820 if f0 not in repo.dirstate:
821 821 # untracked file moved
822 822 continue
823 823 if branchmerge:
824 824 repo.dirstate.add(f)
825 825 repo.dirstate.remove(f0)
826 826 repo.dirstate.copy(f0, f)
827 827 else:
828 828 repo.dirstate.normal(f)
829 829 repo.dirstate.drop(f0)
830 830 elif m == "dg": # directory rename, get
831 831 f0, flag = args
832 832 if branchmerge:
833 833 repo.dirstate.add(f)
834 834 repo.dirstate.copy(f0, f)
835 835 else:
836 836 repo.dirstate.normal(f)
837 837
838 838 def update(repo, node, branchmerge, force, partial, ancestor=None,
839 839 mergeancestor=False):
840 840 """
841 841 Perform a merge between the working directory and the given node
842 842
843 843 node = the node to update to, or None if unspecified
844 844 branchmerge = whether to merge between branches
845 845 force = whether to force branch merging or file overwriting
846 846 partial = a function to filter file lists (dirstate not updated)
847 847 mergeancestor = whether it is merging with an ancestor. If true,
848 848 we should accept the incoming changes for any prompts that occur.
849 849 If false, merging with an ancestor (fast-forward) is only allowed
850 850 between different named branches. This flag is used by rebase extension
851 851 as a temporary fix and should be avoided in general.
852 852
853 853 The table below shows all the behaviors of the update command
854 854 given the -c and -C or no options, whether the working directory
855 855 is dirty, whether a revision is specified, and the relationship of
856 856 the parent rev to the target rev (linear, on the same named
857 857 branch, or on another named branch).
858 858
859 859 This logic is tested by test-update-branches.t.
860 860
861 861 -c -C dirty rev | linear same cross
862 862 n n n n | ok (1) x
863 863 n n n y | ok ok ok
864 864 n n y n | merge (2) (2)
865 865 n n y y | merge (3) (3)
866 866 n y * * | --- discard ---
867 867 y n y * | --- (4) ---
868 868 y n n * | --- ok ---
869 869 y y * * | --- (5) ---
870 870
871 871 x = can't happen
872 872 * = don't-care
873 873 1 = abort: not a linear update (merge or update --check to force update)
874 874 2 = abort: uncommitted changes (commit and merge, or update --clean to
875 875 discard changes)
876 876 3 = abort: uncommitted changes (commit or update --clean to discard changes)
877 877 4 = abort: uncommitted changes (checked in commands.py)
878 878 5 = incompatible options (checked in commands.py)
879 879
880 880 Return the same tuple as applyupdates().
881 881 """
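The no-flags rows of the behavior table above can be sketched as a small lookup; this is a hypothetical encoding for illustration only (the -c/-C rows are checked in commands.py, and the function name is invented):

```python
# Hypothetical encoding of the update-behavior table for the "no -c/-C"
# rows: relationship is 'linear', 'same' (named branch), or 'cross'.
def update_behavior(dirty, rev, relationship):
    if not dirty:
        if rev:
            return 'ok'            # row: n n n y
        # row: n n n n -- case (1) aborts, crossing can't happen
        return {'linear': 'ok', 'same': 'abort',
                'cross': 'impossible'}[relationship]
    # dirty working directory: linear updates merge, others abort (2)/(3)
    return 'merge' if relationship == 'linear' else 'abort'
```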
882 882
883 883 onode = node
884 884 wlock = repo.wlock()
885 885 try:
886 886 wc = repo[None]
887 887 pl = wc.parents()
888 888 p1 = pl[0]
889 889 pa = None
890 890 if ancestor:
891 891 pa = repo[ancestor]
892 892
893 893 if node is None:
894 894 # Here is where we should consider bookmarks, divergent bookmarks,
895 895 # foreground changesets (successors), and tip of current branch;
896 896 # but currently we are only checking the branch tips.
897 897 try:
898 898 node = repo.branchtip(wc.branch())
899 899 except error.RepoLookupError:
900 900 if wc.branch() == "default": # no default branch!
901 901 node = repo.lookup("tip") # update to tip
902 902 else:
903 903 raise util.Abort(_("branch %s not found") % wc.branch())
904 904
905 905 if p1.obsolete() and not p1.children():
906 906 # allow updating to successors
907 907 successors = obsolete.successorssets(repo, p1.node())
908 908
909 909 # behavior of certain cases is as follows,
910 910 #
911 911 # divergent changesets: update to highest rev, similar to what
912 912 # is currently done when there are more than one head
913 913 # (i.e. 'tip')
914 914 #
915 915 # replaced changesets: same as divergent except we know there
916 916 # is no conflict
917 917 #
918 918 # pruned changeset: no update is done; though, we could
919 919 # consider updating to the first non-obsolete parent,
920 920 # similar to what is currently done for 'hg prune'
921 921
922 922 if successors:
923 923 # flattening the list here handles both the divergent (len > 1)
924 924 # and the usual case (len = 1)
925 925 successors = [n for sub in successors for n in sub]
926 926
927 927 # get the max revision for the given successors set,
928 928 # i.e. the 'tip' of a set
929 929 node = repo.revs("max(%ln)", successors)[0]
930 930 pa = p1
931 931
932 932 overwrite = force and not branchmerge
933 933
934 934 p2 = repo[node]
935 935 if pa is None:
936 936 pa = p1.ancestor(p2)
937 937
938 938 fp1, fp2, xp1, xp2 = p1.node(), p2.node(), str(p1), str(p2)
939 939
940 940 ### check phase
941 941 if not overwrite and len(pl) > 1:
942 942 raise util.Abort(_("outstanding uncommitted merges"))
943 943 if branchmerge:
944 944 if pa == p2:
945 945 raise util.Abort(_("merging with a working directory ancestor"
946 946 " has no effect"))
947 947 elif pa == p1:
948 948 if not mergeancestor and p1.branch() == p2.branch():
949 949 raise util.Abort(_("nothing to merge"),
950 950 hint=_("use 'hg update' "
951 951 "or check 'hg heads'"))
952 952 if not force and (wc.files() or wc.deleted()):
953 953 raise util.Abort(_("uncommitted changes"),
954 954 hint=_("use 'hg status' to list changes"))
955 955 for s in sorted(wc.substate):
956 956 if wc.sub(s).dirty():
957 957 raise util.Abort(_("uncommitted changes in "
958 958 "subrepository '%s'") % s)
959 959
960 960 elif not overwrite:
961 961 if p1 == p2: # no-op update
962 962 # call the hooks and exit early
963 963 repo.hook('preupdate', throw=True, parent1=xp2, parent2='')
964 964 repo.hook('update', parent1=xp2, parent2='', error=0)
965 965 return 0, 0, 0, 0
966 966
967 967 if pa not in (p1, p2): # nonlinear
968 968 dirty = wc.dirty(missing=True)
969 969 if dirty or onode is None:
970 970 # The branching here is a bit convoluted, to ensure we make the
971 971 # minimal number of calls to obsolete.foreground.
972 972 foreground = obsolete.foreground(repo, [p1.node()])
973 973 # note: the <node> variable contains a random identifier
974 974 if repo[node].node() in foreground:
975 975 pa = p1 # allow updating to successors
976 976 elif dirty:
977 977 msg = _("uncommitted changes")
978 978 if onode is None:
979 979 hint = _("commit and merge, or update --clean to"
980 980 " discard changes")
981 981 else:
982 982 hint = _("commit or update --clean to discard"
983 983 " changes")
984 984 raise util.Abort(msg, hint=hint)
985 985 else: # node is none
986 986 msg = _("not a linear update")
987 987 hint = _("merge or update --check to force update")
988 988 raise util.Abort(msg, hint=hint)
989 989 else:
990 990 # Allow jumping branches if clean and specific rev given
991 991 pa = p1
992 992
993 993 ### calculate phase
994 994 actions = calculateupdates(repo, wc, p2, pa,
995 995 branchmerge, force, partial, mergeancestor)
996 996
997 997 ### apply phase
998 998 if not branchmerge: # just jump to the new rev
999 999 fp1, fp2, xp1, xp2 = fp2, nullid, xp2, ''
1000 1000 if not partial:
1001 1001 repo.hook('preupdate', throw=True, parent1=xp1, parent2=xp2)
1002 1002 # note that we're in the middle of an update
1003 1003 repo.vfs.write('updatestate', p2.hex())
1004 1004
1005 1005 stats = applyupdates(repo, actions, wc, p2, overwrite)
1006 1006
1007 1007 if not partial:
1008 1008 repo.setparents(fp1, fp2)
1009 1009 recordupdates(repo, actions, branchmerge)
1010 1010 # update completed, clear state
1011 1011 util.unlink(repo.join('updatestate'))
1012 1012
1013 1013 if not branchmerge:
1014 1014 repo.dirstate.setbranch(p2.branch())
1015 1015 finally:
1016 1016 wlock.release()
1017 1017
1018 1018 if not partial:
1019 1019 repo.hook('update', parent1=xp1, parent2=xp2, error=stats[3])
1020 1020 return stats
@@ -1,867 +1,867 b''
1 1 # obsolete.py - obsolete markers handling
2 2 #
3 3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
4 4 # Logilab SA <contact@logilab.fr>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 """Obsolete markers handling
10 10
11 11 An obsolete marker maps an old changeset to a list of new
12 12 changesets. If the list of new changesets is empty, the old changeset
13 13 is said to be "killed". Otherwise, the old changeset is being
14 14 "replaced" by the new changesets.
15 15
16 16 Obsolete markers can be used to record and distribute changeset graph
17 17 transformations performed by history rewriting operations, and to help
18 18 build new tools to reconcile conflicting rewriting actions. To
19 19 facilitate conflict resolution, markers include various annotations
20 20 besides the old and new changeset identifiers, such as creation date or
21 21 author name.
22 22
23 23 The old obsoleted changeset is called the "precursor" and possible replacements
24 24 are called "successors". Markers that use changeset X as a precursor are called
25 25 "successor markers of X" because they hold information about the successors of
26 26 X. Markers that use changeset Y as a successor are called "precursor markers of
27 27 Y" because they hold information about the precursors of Y.
28 28
29 29 Examples:
30 30
31 31 - When changeset A is replaced by a changeset A', one marker is stored:
32 32
33 33 (A, (A'))
34 34
35 35 - When changesets A and B are folded into a new changeset C two markers are
36 36 stored:
37 37
38 38 (A, (C,)) and (B, (C,))
39 39
40 40 - When changeset A is simply "pruned" from the graph, a marker is created:
41 41
42 42 (A, ())
43 43
44 44 - When changeset A is split into B and C, a single marker is used:
45 45
46 46 (A, (B, C))
47 47
48 48 We use a single marker to distinguish the "split" case from the "divergence"
49 49 case. If two independent operations rewrite the same changeset A into A' and
50 50 A'', we have an error case: divergent rewriting. We can detect it because
51 51 two markers will be created independently:
52 52
53 53 (A, (B,)) and (A, (C,))
54 54
55 55 Format
56 56 ------
57 57
58 58 Markers are stored in an append-only file stored in
59 59 '.hg/store/obsstore'.
60 60
61 61 The file starts with a version header:
62 62
63 63 - 1 unsigned byte: version number, starting at zero.
64 64
65 65
66 66 The header is followed by the markers. Each marker is made of:
67 67
68 68 - 1 unsigned byte: number of new changesets "N", can be zero.
69 69
70 70 - 1 unsigned 32-bits integer: metadata size "M" in bytes.
71 71
72 72 - 1 byte: a bit field. It is reserved for flags used in obsolete
73 73 markers common operations, to avoid repeated decoding of metadata
74 74 entries.
75 75
76 76 - 20 bytes: obsoleted changeset identifier.
77 77
78 78 - N*20 bytes: new changesets identifiers.
79 79
80 80 - M bytes: metadata as a sequence of nul-terminated strings. Each
81 81 string contains a key and a value, separated by a colon ':', without
82 82 additional encoding. Keys cannot contain '\0' or ':' and values
83 83 cannot contain '\0'.
84 84 """
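The fixed-size part of the on-disk layout described above can be exercised directly with the `struct` module; a hypothetical round-trip with made-up node values:

```python
import struct

# Fixed part: 1-byte successor count, 4-byte metadata size, 1-byte flags,
# 20-byte precursor node; then N 20-byte successor nodes and M metadata bytes.
_fmfixed = '>BIB20s'
_fmnode = '20s'

pre = b'\x11' * 20        # hypothetical precursor node id
suc = b'\x22' * 20        # hypothetical successor node id
metadata = b'date:0 0'

raw = struct.pack(_fmfixed + _fmnode, 1, len(metadata), 0, pre, suc)
raw += metadata

fsize = struct.calcsize(_fmfixed)
nbsuc, mdsize, flags, gotpre = struct.unpack(_fmfixed, raw[:fsize])
```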
85 85 import struct
86 86 import util, base85, node
87 87 import phases
88 88 from i18n import _
89 89
90 90 _pack = struct.pack
91 91 _unpack = struct.unpack
92 92
93 93 _SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5
94 94
95 95 # the obsolete feature is not mature enough to be enabled by default.
96 96 # you have to rely on third party extensions to enable this.
97 97 _enabled = False
98 98
99 99 # data used for parsing and writing
100 100 _fmversion = 0
101 101 _fmfixed = '>BIB20s'
102 102 _fmnode = '20s'
103 103 _fmfsize = struct.calcsize(_fmfixed)
104 104 _fnodesize = struct.calcsize(_fmnode)
105 105
106 106 ### obsolescence marker flag
107 107
108 108 ## bumpedfix flag
109 109 #
110 110 # When a changeset A' succeeds a changeset A which became public, we call A'
111 111 # "bumped" because it's a successor of a public changeset
112 112 #
113 113 # o A' (bumped)
114 114 # |`:
115 115 # | o A
116 116 # |/
117 117 # o Z
118 118 #
119 119 # The way to solve this situation is to create a new changeset Ad as a child
120 120 # of A. This changeset has the same content as A'. So the diff from A to A'
121 121 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
122 122 #
123 123 # o Ad
124 124 # |`:
125 125 # | x A'
126 126 # |'|
127 127 # o | A
128 128 # |/
129 129 # o Z
130 130 #
131 131 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
132 132 # as bumped too, we add the `bumpedfix` flag to the marker <A', (Ad,)>.
133 133 # This flag means that the successors express the changes between the public and
134 134 # bumped versions and fix the situation, breaking the transitivity of
135 135 # "bumped" here.
136 136 bumpedfix = 1
137 137
138 138 def _readmarkers(data):
139 139 """Read and enumerate markers from raw data"""
140 140 off = 0
141 141 diskversion = _unpack('>B', data[off:off + 1])[0]
142 142 off += 1
143 143 if diskversion != _fmversion:
144 144 raise util.Abort(_('parsing obsolete marker: unknown version %r')
145 145 % diskversion)
146 146
147 147 # Loop on markers
148 148 l = len(data)
149 149 while off + _fmfsize <= l:
150 150 # read fixed part
151 151 cur = data[off:off + _fmfsize]
152 152 off += _fmfsize
153 153 nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
154 154 # read replacement
155 155 sucs = ()
156 156 if nbsuc:
157 157 s = (_fnodesize * nbsuc)
158 158 cur = data[off:off + s]
159 159 sucs = _unpack(_fmnode * nbsuc, cur)
160 160 off += s
161 161 # read metadata
162 162 # (metadata will be decoded on demand)
163 163 metadata = data[off:off + mdsize]
164 164 if len(metadata) != mdsize:
165 165 raise util.Abort(_('parsing obsolete marker: metadata is too '
166 166 'short, %d bytes expected, got %d')
167 167 % (mdsize, len(metadata)))
168 168 off += mdsize
169 169 yield (pre, sucs, flags, metadata)
170 170
171 171 def encodemeta(meta):
172 172 """Return encoded metadata string to string mapping.
173 173
174 174 Assume no ':' in keys and no '\0' in either keys or values."""
175 175 for key, value in meta.iteritems():
176 176 if ':' in key or '\0' in key:
177 177 raise ValueError("':' and '\0' are forbidden in metadata keys")
178 178 if '\0' in value:
179 raise ValueError("':' are forbidden in metadata value'")
179 raise ValueError("'\0' is forbidden in metadata values")
180 180 return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])
181 181
182 182 def decodemeta(data):
183 183 """Return string to string dictionary from encoded version."""
184 184 d = {}
185 185 for l in data.split('\0'):
186 186 if l:
187 187 key, value = l.split(':')
188 188 d[key] = value
189 189 return d
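The encoding implemented by `encodemeta`/`decodemeta` above is just nul-separated "key:value" pairs with keys emitted in sorted order; a standalone round-trip sketch:

```python
# Standalone versions of the metadata codec: keys may not contain ':' or
# '\0', values may not contain '\0'; pairs are joined with '\0'.
def encodemeta(meta):
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    d = {}
    for item in data.split('\0'):
        if item:
            key, value = item.split(':')
            d[key] = value
    return d

m = {'date': '0 0', 'user': 'test'}
encoded = encodemeta(m)
```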
190 190
191 191 class marker(object):
192 192 """Wrap obsolete marker raw data"""
193 193
194 194 def __init__(self, repo, data):
195 195 # the repo argument will be used to create changectx in later version
196 196 self._repo = repo
197 197 self._data = data
198 198 self._decodedmeta = None
199 199
200 200 def __hash__(self):
201 201 return hash(self._data)
202 202
203 203 def __eq__(self, other):
204 204 if type(other) != type(self):
205 205 return False
206 206 return self._data == other._data
207 207
208 208 def precnode(self):
209 209 """Precursor changeset node identifier"""
210 210 return self._data[0]
211 211
212 212 def succnodes(self):
213 213 """List of successor changesets node identifiers"""
214 214 return self._data[1]
215 215
216 216 def metadata(self):
217 217 """Decoded metadata dictionary"""
218 218 if self._decodedmeta is None:
219 219 self._decodedmeta = decodemeta(self._data[3])
220 220 return self._decodedmeta
221 221
222 222 def date(self):
223 223 """Creation date as (unixtime, offset)"""
224 224 parts = self.metadata()['date'].split(' ')
225 225 return (float(parts[0]), int(parts[1]))
226 226
227 227 class obsstore(object):
228 228 """Store obsolete markers
229 229
230 230 Markers can be accessed with two mappings:
231 231 - precursors[x] -> set(markers on precursors edges of x)
232 232 - successors[x] -> set(markers on successors edges of x)
233 233 """
234 234
235 235 def __init__(self, sopener):
236 236 # caches for various obsolescence related data
237 237 self.caches = {}
238 238 self._all = []
239 239 # new markers to serialize
240 240 self.precursors = {}
241 241 self.successors = {}
242 242 self.sopener = sopener
243 243 data = sopener.tryread('obsstore')
244 244 if data:
245 245 self._load(_readmarkers(data))
246 246
247 247 def __iter__(self):
248 248 return iter(self._all)
249 249
250 250 def __len__(self):
251 251 return len(self._all)
252 252
253 253 def __nonzero__(self):
254 254 return bool(self._all)
255 255
256 256 def create(self, transaction, prec, succs=(), flag=0, metadata=None):
257 257 """obsolete: add a new obsolete marker
258 258
259 259 * ensure it is hashable
260 260 * check mandatory metadata
261 261 * encode metadata
262 262
263 263 If you are a human writing code that creates markers, you want to use
264 264 the `createmarkers` function in this module instead.
265 265
266 266 Return True if a new marker has been added, False if the marker
267 267 already existed (no op).
268 268 """
269 269 if metadata is None:
270 270 metadata = {}
271 271 if 'date' not in metadata:
272 272 metadata['date'] = "%d %d" % util.makedate()
273 273 if len(prec) != 20:
274 274 raise ValueError(prec)
275 275 for succ in succs:
276 276 if len(succ) != 20:
277 277 raise ValueError(succ)
278 278 marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
279 279 return bool(self.add(transaction, [marker]))
280 280
281 281 def add(self, transaction, markers):
282 282 """Add new markers to the store
283 283
284 284 Takes care of filtering out duplicates.
285 285 Returns the number of new markers."""
286 286 if not _enabled:
287 287 raise util.Abort('obsolete feature is not enabled on this repo')
288 288 known = set(self._all)
289 289 new = []
290 290 for m in markers:
291 291 if m not in known:
292 292 known.add(m)
293 293 new.append(m)
294 294 if new:
295 295 f = self.sopener('obsstore', 'ab')
296 296 try:
297 297 # Whether the file's current position is at the begin or at
298 298 # the end after opening a file for appending is implementation
299 299 # defined. So we must seek to the end before calling tell(),
300 300 # or we may get a zero offset for non-zero sized files on
301 301 # some platforms (issue3543).
302 302 f.seek(0, _SEEK_END)
303 303 offset = f.tell()
304 304 transaction.add('obsstore', offset)
305 305 # offset == 0: new file - add the version header
306 306 for bytes in _encodemarkers(new, offset == 0):
307 307 f.write(bytes)
308 308 finally:
309 309 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
310 310 # call 'filecacheentry.refresh()' here
311 311 f.close()
312 312 self._load(new)
313 313 # new markers *may* have changed several sets. invalidate the caches.
314 314 self.caches.clear()
315 315 return len(new)
316 316
317 317 def mergemarkers(self, transaction, data):
318 318 markers = _readmarkers(data)
319 319 self.add(transaction, markers)
320 320
321 321 def _load(self, markers):
322 322 for mark in markers:
323 323 self._all.append(mark)
324 324 pre, sucs = mark[:2]
325 325 self.successors.setdefault(pre, set()).add(mark)
326 326 for suc in sucs:
327 327 self.precursors.setdefault(suc, set()).add(mark)
328 328 if node.nullid in self.precursors:
329 329 raise util.Abort(_('bad obsolescence marker detected: '
330 330 'invalid successors nullid'))
331 331
332 332 def _encodemarkers(markers, addheader=False):
333 333 # Kept separate from flushmarkers(), it will be reused for
334 334 # markers exchange.
335 335 if addheader:
336 336 yield _pack('>B', _fmversion)
337 337 for marker in markers:
338 338 yield _encodeonemarker(marker)
339 339
340 340
341 341 def _encodeonemarker(marker):
342 342 pre, sucs, flags, metadata = marker
343 343 nbsuc = len(sucs)
344 344 format = _fmfixed + (_fmnode * nbsuc)
345 345 data = [nbsuc, len(metadata), flags, pre]
346 346 data.extend(sucs)
347 347 return _pack(format, *data) + metadata
348 348
349 349 # arbitrarily picked to fit into the 8K limit from the HTTP server
350 350 # you have to take into account:
351 351 # - the version header
352 352 # - the base85 encoding
353 353 _maxpayload = 5300
354 354
355 355 def _pushkeyescape(markers):
356 356 """encode markers into a dict suitable for pushkey exchange
357 357
358 - binary data is base86 encoded
359 - splitted in chunks less than 5300 bytes"""
358 - binary data is base85 encoded
359 - split in chunks smaller than 5300 bytes"""
360 360 keys = {}
361 361 parts = []
362 362 currentlen = _maxpayload * 2 # ensure we create a new part
363 363 for marker in markers:
364 364 nextdata = _encodeonemarker(marker)
365 365 if (len(nextdata) + currentlen > _maxpayload):
366 366 currentpart = []
367 367 currentlen = 0
368 368 parts.append(currentpart)
369 369 currentpart.append(nextdata)
370 370 currentlen += len(nextdata)
371 371 for idx, part in enumerate(reversed(parts)):
372 372 data = ''.join([_pack('>B', _fmversion)] + part)
373 373 keys['dump%i' % idx] = base85.b85encode(data)
374 374 return keys
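The chunking strategy in `_pushkeyescape` can be isolated as follows; a hypothetical sketch that greedily packs payloads into parts, starting a new part whenever adding the next payload would exceed the limit:

```python
# Greedy chunking: currentlen starts above the limit so the first payload
# always opens a new part, mirroring the "maxpayload * 2" trick above.
def chunk(payloads, maxlen):
    parts, cur, curlen = [], None, maxlen * 2
    for p in payloads:
        if len(p) + curlen > maxlen:
            cur, curlen = [], 0
            parts.append(cur)
        cur.append(p)
        curlen += len(p)
    return parts

parts = chunk([b'a' * 40, b'b' * 40, b'c' * 40], 100)
# two parts: the first holds 80 bytes, the second the remaining payload
```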
375 375
376 376 def listmarkers(repo):
377 377 """List markers over pushkey"""
378 378 if not repo.obsstore:
379 379 return {}
380 380 return _pushkeyescape(repo.obsstore)
381 381
382 382 def pushmarker(repo, key, old, new):
383 383 """Push markers over pushkey"""
384 384 if not key.startswith('dump'):
385 385 repo.ui.warn(_('unknown key: %r') % key)
386 386 return 0
387 387 if old:
388 388 repo.ui.warn(_('unexpected old value for %r') % key)
389 389 return 0
390 390 data = base85.b85decode(new)
391 391 lock = repo.lock()
392 392 try:
393 393 tr = repo.transaction('pushkey: obsolete markers')
394 394 try:
395 395 repo.obsstore.mergemarkers(tr, data)
396 396 tr.close()
397 397 return 1
398 398 finally:
399 399 tr.release()
400 400 finally:
401 401 lock.release()
402 402
403 403 def allmarkers(repo):
404 404 """all obsolete markers known in a repository"""
405 405 for markerdata in repo.obsstore:
406 406 yield marker(repo, markerdata)
407 407
408 408 def precursormarkers(ctx):
409 409 """obsolete marker marking this changeset as a successor"""
410 410 for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
411 411 yield marker(ctx._repo, data)
412 412
413 413 def successormarkers(ctx):
414 414 """obsolete marker making this changeset obsolete"""
415 415 for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
416 416 yield marker(ctx._repo, data)
417 417
418 418 def allsuccessors(obsstore, nodes, ignoreflags=0):
419 419 """Yield node for every successor of <nodes>.
420 420
421 421 Some successors may be unknown locally.
422 422
423 423 This is a linear yield unsuited to detecting split changesets. It includes
424 424 initial nodes too."""
425 425 remaining = set(nodes)
426 426 seen = set(remaining)
427 427 while remaining:
428 428 current = remaining.pop()
429 429 yield current
430 430 for mark in obsstore.successors.get(current, ()):
431 431 # ignore marker flagged with specified flag
432 432 if mark[2] & ignoreflags:
433 433 continue
434 434 for suc in mark[1]:
435 435 if suc not in seen:
436 436 seen.add(suc)
437 437 remaining.add(suc)
438 438
439 439 def allprecursors(obsstore, nodes, ignoreflags=0):
440 440 """Yield node for every precursor of <nodes>.
441 441
442 442 Some precursors may be unknown locally.
443 443
444 444 This is a linear yield unsuited to detecting folded changesets. It includes
445 445 initial nodes too."""
446 446
447 447 remaining = set(nodes)
448 448 seen = set(remaining)
449 449 while remaining:
450 450 current = remaining.pop()
451 451 yield current
452 452 for mark in obsstore.precursors.get(current, ()):
453 453 # ignore marker flagged with specified flag
454 454 if mark[2] & ignoreflags:
455 455 continue
456 456 suc = mark[0]
457 457 if suc not in seen:
458 458 seen.add(suc)
459 459 remaining.add(suc)
460 460
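The traversal shared by `allsuccessors` and `allprecursors` above is a plain breadth-first walk over the marker graph. A minimal stand-in, with `obsstore.successors` modelled as a dict mapping a node to markers of the shape `(precursor, successors-tuple, flags)` and purely illustrative node names:

```python
# Minimal sketch of the allsuccessors() traversal: walk the successor
# relation transitively, skipping markers that carry an ignored flag.
def all_successors(successors_map, nodes, ignoreflags=0):
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for prec, sucs, flags in successors_map.get(current, ()):
            if flags & ignoreflags:
                continue  # marker flagged with a flag we must ignore
            for suc in sucs:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

# A was rewritten as B, then B was split into C and D:
markers = {'A': [('A', ('B',), 0)], 'B': [('B', ('C', 'D'), 0)]}
print(sorted(all_successors(markers, ['A'])))  # ['A', 'B', 'C', 'D']
```

As the docstring warns, this linear yield cannot tell a split apart from independent rewrites; it only enumerates reachable nodes, initial ones included.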
461 461 def foreground(repo, nodes):
462 462 """return all nodes in the "foreground" of other nodes
463 463
464 464 The foreground of a revision is anything reachable using parent -> children
465 465 or precursor -> successor relation. It is very similar to "descendant" but
466 466 augmented with obsolescence information.
467 467
468 468 Beware that obsolescence cycles may result in complex situations.
469 469 """
470 470 repo = repo.unfiltered()
471 471 foreground = set(repo.set('%ln::', nodes))
472 472 if repo.obsstore:
473 473 # We only need this complicated logic if there is obsolescence
474 474 # XXX will probably deserve an optimised revset.
475 475 nm = repo.changelog.nodemap
476 476 plen = -1
477 477 # compute the whole set of successors or descendants
478 478 while len(foreground) != plen:
479 479 plen = len(foreground)
480 480 succs = set(c.node() for c in foreground)
481 481 mutable = [c.node() for c in foreground if c.mutable()]
482 482 succs.update(allsuccessors(repo.obsstore, mutable))
483 483 known = (n for n in succs if n in nm)
484 484 foreground = set(repo.set('%ln::', known))
485 485 return set(c.node() for c in foreground)
486 486
487 487
488 488 def successorssets(repo, initialnode, cache=None):
489 489 """Return all sets of successors of the initial node
490 490
491 491 The successors set of a changeset A is a group of revisions that succeed
492 492 A. It succeeds A as a consistent whole, each revision being only a partial
493 493 replacement. The successors set contains non-obsolete changesets only.
494 494
495 495 This function returns the full list of successor sets which is why it
496 496 returns a list of tuples and not just a single tuple. Each tuple is a valid
497 497 successors set. Note that (A,) may be a valid successors set for changeset A
498 498 (see below).
499 499
500 500 In most cases, a changeset A will have a single element (e.g. the changeset
501 501 A is replaced by A') in its successors set. Though, it is also common for a
502 502 changeset A to have no elements in its successor set (e.g. the changeset
503 503 has been pruned). Therefore, the returned list of successors sets will be
504 504 [(A',)] or [], respectively.
505 505
506 506 When a changeset A is split into A' and B', however, it will result in a
507 507 successors set containing more than a single element, i.e. [(A',B')].
508 508 Divergent changesets will result in multiple successors sets, i.e. [(A',),
509 509 (A'')].
510 510
511 511 If a changeset A is not obsolete, then it will conceptually have no
512 512 successors set. To distinguish this from a pruned changeset, the successor
513 513 set will only contain itself, i.e. [(A,)].
514 514
515 515 Finally, successors unknown locally are considered to be pruned (obsoleted
516 516 without any successors).
517 517
518 518 The optional `cache` parameter is a dictionary that may contain precomputed
519 519 successors sets. It is meant to reuse the computation of a previous call to
520 520 `successorssets` when multiple calls are made at the same time. The cache
521 521 dictionary is updated in place. The caller is responsible for its life
522 522 span. Code that makes multiple calls to `successorssets` *must* use this
523 523 cache mechanism or suffer terrible performance.
524 524
525 525 """
526 526
527 527 succmarkers = repo.obsstore.successors
528 528
529 529 # Stack of nodes we search successors sets for
530 530 toproceed = [initialnode]
531 531 # set version of above list for fast loop detection
532 532 # element added to "toproceed" must be added here
533 533 stackedset = set(toproceed)
534 534 if cache is None:
535 535 cache = {}
536 536
537 537 # This while loop is the flattened version of a recursive search for
538 538 # successors sets
539 539 #
540 540 # def successorssets(x):
541 541 # successors = directsuccessors(x)
542 542 # ss = [[]]
543 543 # for succ in directsuccessors(x):
544 544 # # product as in itertools cartesian product
545 545 # ss = product(ss, successorssets(succ))
546 546 # return ss
547 547 #
548 548 # But we cannot use plain recursive calls here:
549 549 # - that would blow the python call stack
550 550 # - obsolescence markers may have cycles, we need to handle them.
551 551 #
552 552 # The `toproceed` list acts as our call stack. Every node we search
553 553 # successors sets for is stacked there.
554 554 #
555 555 # The `stackedset` is the set version of this stack, used to check if a node
556 556 # is already stacked. This check is used to detect cycles and prevent
557 557 # infinite loops.
558 558 #
559 559 # The successors sets of all nodes are stored in the `cache` dictionary.
560 560 #
561 561 # After this while loop ends we use the cache to return the successors sets
562 562 # for the node requested by the caller.
563 563 while toproceed:
564 564 # Every iteration tries to compute the successors sets of the topmost
565 565 # node of the stack: CURRENT.
566 566 #
567 567 # There are four possible outcomes:
568 568 #
569 569 # 1) We already know the successors sets of CURRENT:
570 570 # -> mission accomplished, pop it from the stack.
571 571 # 2) Node is not obsolete:
572 572 # -> the node is its own successors sets. Add it to the cache.
573 573 # 3) We do not know successors set of direct successors of CURRENT:
574 574 # -> We add those successors to the stack.
575 575 # 4) We know successors sets of all direct successors of CURRENT:
576 576 # -> We can compute CURRENT successors set and add it to the
577 577 # cache.
578 578 #
579 579 current = toproceed[-1]
580 580 if current in cache:
581 581 # case (1): We already know the successors sets
582 582 stackedset.remove(toproceed.pop())
583 583 elif current not in succmarkers:
584 584 # case (2): The node is not obsolete.
585 585 if current in repo:
586 586 # We have a valid last successor.
587 587 cache[current] = [(current,)]
588 588 else:
589 589 # Final obsolete version is unknown locally.
590 590 # Do not count that as a valid successor
591 591 cache[current] = []
592 592 else:
593 593 # cases (3) and (4)
594 594 #
595 595 # We proceed in two phases. Phase 1 aims to distinguish case (3)
596 596 # from case (4):
597 597 #
598 598 # For each direct successors of CURRENT, we check whether its
599 599 # successors sets are known. If they are not, we stack the
600 600 # unknown node and proceed to the next iteration of the while
601 601 # loop. (case 3)
602 602 #
603 603 # During this step, we may detect obsolescence cycles: a node
604 604 # with unknown successors sets but already in the call stack.
605 605 # In such a situation, we arbitrarily set the successors sets of
606 606 # the node to nothing (node pruned) to break the cycle.
607 607 #
608 608 # If no break was encountered we proceed to phase 2.
609 609 #
610 610 # Phase 2 computes successors sets of CURRENT (case 4); see details
611 611 # in phase 2 itself.
612 612 #
613 613 # Note the two levels of iteration in each phase.
614 614 # - The first one handles obsolescence markers using CURRENT as
615 615 # precursor (successors markers of CURRENT).
616 616 #
617 617 # Having multiple entries here means divergence.
618 618 #
619 619 # - The second one handles successors defined in each marker.
620 620 #
621 621 # Having none means a pruned node, multiple successors mean a split,
622 622 # and a single successor is a standard replacement.
623 623 #
624 624 for mark in sorted(succmarkers[current]):
625 625 for suc in mark[1]:
626 626 if suc not in cache:
627 627 if suc in stackedset:
628 628 # cycle breaking
629 629 cache[suc] = []
630 630 else:
631 631 # case (3) If we have not computed successors sets
632 632 # of one of those successors we add it to the
633 633 # `toproceed` stack and stop all work for this
634 634 # iteration.
635 635 toproceed.append(suc)
636 636 stackedset.add(suc)
637 637 break
638 638 else:
639 639 continue
640 640 break
641 641 else:
642 642 # case (4): we know all successors sets of all direct
643 643 # successors
644 644 #
645 645 # Successors set contributed by each marker depends on the
646 646 # successors sets of all its "successors" nodes.
647 647 #
648 648 # Each different marker is a divergence in the obsolescence
649 649 # history. It contributes successors sets distinct from other
650 650 # markers.
651 651 #
652 652 # Within a marker, a successor may have divergent successors
653 653 # sets. In such a case, the marker will contribute multiple
654 654 # divergent successors sets. If multiple successors have
655 # divergent successors sets, a cartesian product is used.
655 # divergent successors sets, a Cartesian product is used.
656 656 #
657 657 # At the end we post-process successors sets to remove
658 658 # duplicated entries and successors sets that are strict subsets of
659 659 # another one.
660 660 succssets = []
661 661 for mark in sorted(succmarkers[current]):
662 662 # successors sets contributed by this marker
663 663 markss = [[]]
664 664 for suc in mark[1]:
665 665 # Cartesian product with previous successors
666 666 productresult = []
667 667 for prefix in markss:
668 668 for suffix in cache[suc]:
669 669 newss = list(prefix)
670 670 for part in suffix:
671 671 # do not duplicate entries in a successors set;
672 672 # first entry wins.
673 673 if part not in newss:
674 674 newss.append(part)
675 675 productresult.append(newss)
676 676 markss = productresult
677 677 succssets.extend(markss)
678 678 # remove duplicates and subsets
679 679 seen = []
680 680 final = []
681 681 candidate = sorted(((set(s), s) for s in succssets if s),
682 682 key=lambda x: len(x[1]), reverse=True)
683 683 for setversion, listversion in candidate:
684 684 for seenset in seen:
685 685 if setversion.issubset(seenset):
686 686 break
687 687 else:
688 688 final.append(listversion)
689 689 seen.append(setversion)
690 690 final.reverse() # put small successors set first
691 691 cache[current] = final
692 692 return cache[initialnode]
693 693
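Phase 2 of `successorssets` above (case 4) combines, for one marker, the already-cached successors sets of each of its successors via a Cartesian product. A standalone sketch of just that combination step, with `cache` playing the role of the successors-sets cache and made-up node names:

```python
# Sketch of the per-marker Cartesian-product step: every prefix built so far
# is extended with every successors set of the next successor, deduplicating
# entries within a set (first entry wins, as in the original).
def combine_marker(cache, marker_successors):
    markss = [[]]
    for suc in marker_successors:
        productresult = []
        for prefix in markss:
            for suffix in cache[suc]:
                newss = list(prefix)
                for part in suffix:
                    if part not in newss:  # first entry wins
                        newss.append(part)
                productresult.append(newss)
        markss = productresult
    return markss

# B is its own successors set; C has diverged into C1 and C2:
cache = {'B': [('B',)], 'C': [('C1',), ('C2',)]}
print(combine_marker(cache, ['B', 'C']))  # [['B', 'C1'], ['B', 'C2']]
```

The divergence of C propagates: one marker over (B, C) contributes two successors sets, which the real code then deduplicates and filters for strict subsets.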
694 694 def _knownrevs(repo, nodes):
695 695 """yield revision numbers of the known nodes passed as parameters
696 696
697 697 Unknown revisions are silently ignored."""
698 698 torev = repo.changelog.nodemap.get
699 699 for n in nodes:
700 700 rev = torev(n)
701 701 if rev is not None:
702 702 yield rev
703 703
704 704 # mapping of 'set-name' -> <function to compute this set>
705 705 cachefuncs = {}
706 706 def cachefor(name):
707 707 """Decorator to register a function as computing the cache for a set"""
708 708 def decorator(func):
709 709 assert name not in cachefuncs
710 710 cachefuncs[name] = func
711 711 return func
712 712 return decorator
713 713
714 714 def getrevs(repo, name):
715 715 """Return the set of revisions that belong to the <name> set
716 716
717 717 Such access may compute the set and cache it for future use"""
718 718 repo = repo.unfiltered()
719 719 if not repo.obsstore:
720 720 return ()
721 721 if name not in repo.obsstore.caches:
722 722 repo.obsstore.caches[name] = cachefuncs[name](repo)
723 723 return repo.obsstore.caches[name]
724 724
725 725 # To keep things simple we need to invalidate the obsolescence cache when:
726 726 #
727 727 # - a new changeset is added
728 728 # - the public phase is changed
729 729 # - obsolescence markers are added
730 730 # - strip is used on a repo
731 731 def clearobscaches(repo):
732 732 """Remove all obsolescence-related caches from a repo
733 733
734 734 This removes all caches in obsstore if the obsstore already exists on the
735 735 repo.
736 736
737 737 (We could be smarter here given the exact event that triggers the cache
738 738 clearing)"""
739 739 # only clear the cache if there is obsstore data in this repo
740 740 if 'obsstore' in repo._filecache:
741 741 repo.obsstore.caches.clear()
742 742
743 743 @cachefor('obsolete')
744 744 def _computeobsoleteset(repo):
745 745 """the set of obsolete revisions"""
746 746 obs = set()
747 747 getrev = repo.changelog.nodemap.get
748 748 getphase = repo._phasecache.phase
749 749 for node in repo.obsstore.successors:
750 750 rev = getrev(node)
751 751 if rev is not None and getphase(repo, rev):
752 752 obs.add(rev)
753 753 return obs
754 754
755 755 @cachefor('unstable')
756 756 def _computeunstableset(repo):
757 757 """the set of non-obsolete revisions with obsolete parents"""
758 758 # revset is not efficient enough here
759 759 # we do (obsolete()::) - obsolete() by hand
760 760 obs = getrevs(repo, 'obsolete')
761 761 if not obs:
762 762 return set()
763 763 cl = repo.changelog
764 764 return set(r for r in cl.descendants(obs) if r not in obs)
765 765
766 766 @cachefor('suspended')
767 767 def _computesuspendedset(repo):
768 768 """the set of obsolete parents with non-obsolete descendants"""
769 769 suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
770 770 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
771 771
772 772 @cachefor('extinct')
773 773 def _computeextinctset(repo):
774 774 """the set of obsolete parents without non-obsolete descendants"""
775 775 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
776 776
777 777
778 778 @cachefor('bumped')
779 779 def _computebumpedset(repo):
780 780 """the set of revs trying to obsolete public revisions"""
781 781 bumped = set()
782 # utils function (avoid attribute lookup in the loop)
782 # util function (avoid attribute lookup in the loop)
783 783 phase = repo._phasecache.phase # would be faster to grab the full list
784 784 public = phases.public
785 785 cl = repo.changelog
786 786 torev = cl.nodemap.get
787 787 obs = getrevs(repo, 'obsolete')
788 788 for rev in repo:
789 789 # We only evaluate mutable, non-obsolete revisions
790 790 if (public < phase(repo, rev)) and (rev not in obs):
791 791 node = cl.node(rev)
792 792 # (future) A cache of precursors may be worth it if splits are very common
793 793 for pnode in allprecursors(repo.obsstore, [node],
794 794 ignoreflags=bumpedfix):
795 795 prev = torev(pnode) # unfiltered! but so is phasecache
796 796 if (prev is not None) and (phase(repo, prev) <= public):
797 797 # we have a public precursor
798 798 bumped.add(rev)
799 799 break # Next draft!
800 800 return bumped
801 801
802 802 @cachefor('divergent')
803 803 def _computedivergentset(repo):
804 804 """the set of revs that compete to be the final successor of some revision.
805 805 """
806 806 divergent = set()
807 807 obsstore = repo.obsstore
808 808 newermap = {}
809 809 for ctx in repo.set('(not public()) - obsolete()'):
810 810 mark = obsstore.precursors.get(ctx.node(), ())
811 811 toprocess = set(mark)
812 812 while toprocess:
813 813 prec = toprocess.pop()[0]
814 814 if prec not in newermap:
815 815 successorssets(repo, prec, newermap)
816 816 newer = [n for n in newermap[prec] if n]
817 817 if len(newer) > 1:
818 818 divergent.add(ctx.rev())
819 819 break
820 820 toprocess.update(obsstore.precursors.get(prec, ()))
821 821 return divergent
822 822
823 823
824 824 def createmarkers(repo, relations, flag=0, metadata=None):
825 825 """Add obsolete markers between changesets in a repo
826 826
827 827 <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
828 tuple. `old` and `news` are changectx. metadata is an optional dictionnary
828 tuple. `old` and `news` are changectx. metadata is an optional dictionary
829 829 containing metadata for this marker only. It is merged with the global
830 830 metadata specified through the `metadata` argument of this function,
831 831
832 832 Trying to obsolete a public changeset will raise an exception.
833 833
834 834 Current user and date are used except if specified otherwise in the
835 835 metadata attribute.
836 836
837 837 This function operates within a transaction of its own, but does
838 838 not take any lock on the repo.
839 839 """
840 840 # prepare metadata
841 841 if metadata is None:
842 842 metadata = {}
843 843 if 'date' not in metadata:
844 844 metadata['date'] = '%i %i' % util.makedate()
845 845 if 'user' not in metadata:
846 846 metadata['user'] = repo.ui.username()
847 847 tr = repo.transaction('add-obsolescence-marker')
848 848 try:
849 849 for rel in relations:
850 850 prec = rel[0]
851 851 sucs = rel[1]
852 852 localmetadata = metadata.copy()
853 853 if 2 < len(rel):
854 854 localmetadata.update(rel[2])
855 855
856 856 if not prec.mutable():
857 857 raise util.Abort("cannot obsolete immutable changeset: %s"
858 858 % prec)
859 859 nprec = prec.node()
860 860 nsucs = tuple(s.node() for s in sucs)
861 861 if nprec in nsucs:
862 862 raise util.Abort("changeset %s cannot obsolete itself" % prec)
863 863 repo.obsstore.create(tr, nprec, nsucs, flag, localmetadata)
864 864 repo.filteredrevcache.clear()
865 865 tr.close()
866 866 finally:
867 867 tr.release()
@@ -1,2859 +1,2859 b''
1 1 # revset.py - revision set queries for mercurial
2 2 #
3 3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 import re
9 9 import parser, util, error, discovery, hbisect, phases
10 10 import node
11 11 import heapq
12 12 import match as matchmod
13 13 import ancestor as ancestormod
14 14 from i18n import _
15 15 import encoding
16 16 import obsolete as obsmod
17 17 import pathutil
18 18 import repoview
19 19
20 20 def _revancestors(repo, revs, followfirst):
21 21 """Like revlog.ancestors(), but supports followfirst."""
22 22 cut = followfirst and 1 or None
23 23 cl = repo.changelog
24 24
25 25 def iterate():
26 26 revqueue, revsnode = None, None
27 27 h = []
28 28
29 29 revs.descending()
30 30 revqueue = util.deque(revs)
31 31 if revqueue:
32 32 revsnode = revqueue.popleft()
33 33 heapq.heappush(h, -revsnode)
34 34
35 35 seen = set([node.nullrev])
36 36 while h:
37 37 current = -heapq.heappop(h)
38 38 if current not in seen:
39 39 if revsnode and current == revsnode:
40 40 if revqueue:
41 41 revsnode = revqueue.popleft()
42 42 heapq.heappush(h, -revsnode)
43 43 seen.add(current)
44 44 yield current
45 45 for parent in cl.parentrevs(current)[:cut]:
46 46 if parent != node.nullrev:
47 47 heapq.heappush(h, -parent)
48 48
49 49 return _descgeneratorset(iterate())
50 50
51 51 def _revdescendants(repo, revs, followfirst):
52 52 """Like revlog.descendants() but supports followfirst."""
53 53 cut = followfirst and 1 or None
54 54
55 55 def iterate():
56 56 cl = repo.changelog
57 57 first = min(revs)
58 58 nullrev = node.nullrev
59 59 if first == nullrev:
60 60 # Are there nodes with a null first parent and a non-null
61 61 # second one? Maybe. Do we care? Probably not.
62 62 for i in cl:
63 63 yield i
64 64 else:
65 65 seen = set(revs)
66 66 for i in cl.revs(first + 1):
67 67 for x in cl.parentrevs(i)[:cut]:
68 68 if x != nullrev and x in seen:
69 69 seen.add(i)
70 70 yield i
71 71 break
72 72
73 73 return _ascgeneratorset(iterate())
74 74
75 75 def _revsbetween(repo, roots, heads):
76 76 """Return all paths between roots and heads, inclusive of both endpoint
77 77 sets."""
78 78 if not roots:
79 79 return baseset([])
80 80 parentrevs = repo.changelog.parentrevs
81 81 visit = baseset(heads)
82 82 reachable = set()
83 83 seen = {}
84 84 minroot = min(roots)
85 85 roots = set(roots)
86 86 # open-code the post-order traversal due to the tiny size of
87 87 # sys.getrecursionlimit()
88 88 while visit:
89 89 rev = visit.pop()
90 90 if rev in roots:
91 91 reachable.add(rev)
92 92 parents = parentrevs(rev)
93 93 seen[rev] = parents
94 94 for parent in parents:
95 95 if parent >= minroot and parent not in seen:
96 96 visit.append(parent)
97 97 if not reachable:
98 98 return baseset([])
99 99 for rev in sorted(seen):
100 100 for parent in seen[rev]:
101 101 if parent in reachable:
102 102 reachable.add(rev)
103 103 return baseset(sorted(reachable))
104 104
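The two-pass traversal in `_revsbetween` above can be sketched without a repo: first walk from the heads toward the roots recording parents, then sweep forward in revision order marking everything whose parent can reach a root. The `parents` mapping and the revision numbers below are made up for illustration.

```python
# Dict-based sketch of _revsbetween(): backward walk from heads, then a
# forward sweep propagating reachability from the roots.
def revs_between(parents, roots, heads):
    visit = list(heads)
    reachable = set()
    seen = {}
    minroot = min(roots)
    roots = set(roots)
    while visit:  # backward walk, bounded below by the smallest root
        rev = visit.pop()
        if rev in roots:
            reachable.add(rev)
        prevs = parents.get(rev, ())
        seen[rev] = prevs
        for parent in prevs:
            if parent >= minroot and parent not in seen:
                visit.append(parent)
    if not reachable:
        return []
    for rev in sorted(seen):  # forward sweep: child of a reachable rev is reachable
        for parent in seen[rev]:
            if parent in reachable:
                reachable.add(rev)
    return sorted(reachable)

# linear history 0-1-2-3 with a side branch 1-4
parents = {1: (0,), 2: (1,), 3: (2,), 4: (1,)}
print(revs_between(parents, [1], [3]))  # [1, 2, 3]
```

The forward sweep works because revision numbers are topologically ordered (parents are numbered before children), the same property the real implementation relies on.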
105 105 elements = {
106 106 "(": (20, ("group", 1, ")"), ("func", 1, ")")),
107 107 "~": (18, None, ("ancestor", 18)),
108 108 "^": (18, None, ("parent", 18), ("parentpost", 18)),
109 109 "-": (5, ("negate", 19), ("minus", 5)),
110 110 "::": (17, ("dagrangepre", 17), ("dagrange", 17),
111 111 ("dagrangepost", 17)),
112 112 "..": (17, ("dagrangepre", 17), ("dagrange", 17),
113 113 ("dagrangepost", 17)),
114 114 ":": (15, ("rangepre", 15), ("range", 15), ("rangepost", 15)),
115 115 "not": (10, ("not", 10)),
116 116 "!": (10, ("not", 10)),
117 117 "and": (5, None, ("and", 5)),
118 118 "&": (5, None, ("and", 5)),
119 119 "or": (4, None, ("or", 4)),
120 120 "|": (4, None, ("or", 4)),
121 121 "+": (4, None, ("or", 4)),
122 122 ",": (2, None, ("list", 2)),
123 123 ")": (0, None, None),
124 124 "symbol": (0, ("symbol",), None),
125 125 "string": (0, ("string",), None),
126 126 "end": (0, None, None),
127 127 }
128 128
129 129 keywords = set(['and', 'or', 'not'])
130 130
131 131 def tokenize(program, lookup=None):
132 132 '''
133 133 Parse a revset statement into a stream of tokens
134 134
135 135 Check that @ is a valid unquoted token character (issue3686):
136 136 >>> list(tokenize("@::"))
137 137 [('symbol', '@', 0), ('::', None, 1), ('end', None, 3)]
138 138
139 139 '''
140 140
141 141 pos, l = 0, len(program)
142 142 while pos < l:
143 143 c = program[pos]
144 144 if c.isspace(): # skip inter-token whitespace
145 145 pass
146 146 elif c == ':' and program[pos:pos + 2] == '::': # look ahead carefully
147 147 yield ('::', None, pos)
148 148 pos += 1 # skip ahead
149 149 elif c == '.' and program[pos:pos + 2] == '..': # look ahead carefully
150 150 yield ('..', None, pos)
151 151 pos += 1 # skip ahead
152 152 elif c in "():,-|&+!~^": # handle simple operators
153 153 yield (c, None, pos)
154 154 elif (c in '"\'' or c == 'r' and
155 155 program[pos:pos + 2] in ("r'", 'r"')): # handle quoted strings
156 156 if c == 'r':
157 157 pos += 1
158 158 c = program[pos]
159 159 decode = lambda x: x
160 160 else:
161 161 decode = lambda x: x.decode('string-escape')
162 162 pos += 1
163 163 s = pos
164 164 while pos < l: # find closing quote
165 165 d = program[pos]
166 166 if d == '\\': # skip over escaped characters
167 167 pos += 2
168 168 continue
169 169 if d == c:
170 170 yield ('string', decode(program[s:pos]), s)
171 171 break
172 172 pos += 1
173 173 else:
174 174 raise error.ParseError(_("unterminated string"), s)
175 175 # gather up a symbol/keyword
176 176 elif c.isalnum() or c in '._@' or ord(c) > 127:
177 177 s = pos
178 178 pos += 1
179 179 while pos < l: # find end of symbol
180 180 d = program[pos]
181 181 if not (d.isalnum() or d in "-._/@" or ord(d) > 127):
182 182 break
183 183 if d == '.' and program[pos - 1] == '.': # special case for ..
184 184 pos -= 1
185 185 break
186 186 pos += 1
187 187 sym = program[s:pos]
188 188 if sym in keywords: # operator keywords
189 189 yield (sym, None, s)
190 190 elif '-' in sym:
191 191 # some jerk gave us foo-bar-baz, try to check if it's a symbol
192 192 if lookup and lookup(sym):
193 193 # looks like a real symbol
194 194 yield ('symbol', sym, s)
195 195 else:
196 196 # looks like an expression
197 197 parts = sym.split('-')
198 198 for p in parts[:-1]:
199 199 if p: # possible consecutive -
200 200 yield ('symbol', p, s)
201 201 s += len(p)
202 202 yield ('-', None, pos)
203 203 s += 1
204 204 if parts[-1]: # possible trailing -
205 205 yield ('symbol', parts[-1], s)
206 206 else:
207 207 yield ('symbol', sym, s)
208 208 pos -= 1
209 209 else:
210 210 raise error.ParseError(_("syntax error"), pos)
211 211 pos += 1
212 212 yield ('end', None, pos)
213 213
214 214 # helpers
215 215
216 216 def getstring(x, err):
217 217 if x and (x[0] == 'string' or x[0] == 'symbol'):
218 218 return x[1]
219 219 raise error.ParseError(err)
220 220
221 221 def getlist(x):
222 222 if not x:
223 223 return []
224 224 if x[0] == 'list':
225 225 return getlist(x[1]) + [x[2]]
226 226 return [x]
227 227
228 228 def getargs(x, min, max, err):
229 229 l = getlist(x)
230 230 if len(l) < min or (max >= 0 and len(l) > max):
231 231 raise error.ParseError(err)
232 232 return l
233 233
234 234 def getset(repo, subset, x):
235 235 if not x:
236 236 raise error.ParseError(_("missing argument"))
237 237 s = methods[x[0]](repo, subset, *x[1:])
238 238 if util.safehasattr(s, 'set'):
239 239 return s
240 240 return baseset(s)
241 241
242 242 def _getrevsource(repo, r):
243 243 extra = repo[r].extra()
244 244 for label in ('source', 'transplant_source', 'rebase_source'):
245 245 if label in extra:
246 246 try:
247 247 return repo[extra[label]].rev()
248 248 except error.RepoLookupError:
249 249 pass
250 250 return None
251 251
252 252 # operator methods
253 253
254 254 def stringset(repo, subset, x):
255 255 x = repo[x].rev()
256 256 if x == -1 and len(subset) == len(repo):
257 257 return baseset([-1])
258 258 if len(subset) == len(repo) or x in subset:
259 259 return baseset([x])
260 260 return baseset([])
261 261
262 262 def symbolset(repo, subset, x):
263 263 if x in symbols:
264 264 raise error.ParseError(_("can't use %s here") % x)
265 265 return stringset(repo, subset, x)
266 266
267 267 def rangeset(repo, subset, x, y):
268 268 cl = baseset(repo.changelog)
269 269 m = getset(repo, cl, x)
270 270 n = getset(repo, cl, y)
271 271
272 272 if not m or not n:
273 273 return baseset([])
274 274 m, n = m[0], n[-1]
275 275
276 276 if m < n:
277 277 r = spanset(repo, m, n + 1)
278 278 else:
279 279 r = spanset(repo, m, n - 1)
280 280 return r & subset
281 281
282 282 def dagrange(repo, subset, x, y):
283 283 r = spanset(repo)
284 284 xs = _revsbetween(repo, getset(repo, r, x), getset(repo, r, y))
285 285 s = subset.set()
286 286 return xs.filter(lambda r: r in s)
287 287
288 288 def andset(repo, subset, x, y):
289 289 return getset(repo, getset(repo, subset, x), y)
290 290
291 291 def orset(repo, subset, x, y):
292 292 xl = getset(repo, subset, x)
293 293 yl = getset(repo, subset - xl, y)
294 294 return xl + yl
295 295
296 296 def notset(repo, subset, x):
297 297 return subset - getset(repo, subset, x)
298 298
299 299 def listset(repo, subset, a, b):
300 300 raise error.ParseError(_("can't use a list in this context"))
301 301
302 302 def func(repo, subset, a, b):
303 303 if a[0] == 'symbol' and a[1] in symbols:
304 304 return symbols[a[1]](repo, subset, b)
305 305 raise error.ParseError(_("not a function: %s") % a[1])
306 306
307 307 # functions
308 308
309 309 def adds(repo, subset, x):
310 310 """``adds(pattern)``
311 311 Changesets that add a file matching pattern.
312 312
313 313 The pattern without explicit kind like ``glob:`` is expected to be
314 314 relative to the current directory and match against a file or a
315 315 directory.
316 316 """
317 317 # i18n: "adds" is a keyword
318 318 pat = getstring(x, _("adds requires a pattern"))
319 319 return checkstatus(repo, subset, pat, 1)
320 320
321 321 def ancestor(repo, subset, x):
322 322 """``ancestor(*changeset)``
323 323 A greatest common ancestor of the changesets.
324 324
325 325 Accepts 0 or more changesets.
326 326 Will return empty list when passed no args.
327 327 Greatest common ancestor of a single changeset is that changeset.
328 328 """
329 329 # i18n: "ancestor" is a keyword
330 330 l = getlist(x)
331 331 rl = spanset(repo)
332 332 anc = None
333 333
334 334 # (getset(repo, rl, i) for i in l) generates a list of lists
335 335 for revs in (getset(repo, rl, i) for i in l):
336 336 for r in revs:
337 337 if anc is None:
338 338 anc = repo[r]
339 339 else:
340 340 anc = anc.ancestor(repo[r])
341 341
342 342 if anc is not None and anc.rev() in subset:
343 343 return baseset([anc.rev()])
344 344 return baseset([])
345 345
346 346 def _ancestors(repo, subset, x, followfirst=False):
347 347 args = getset(repo, spanset(repo), x)
348 348 if not args:
349 349 return baseset([])
350 350 s = _revancestors(repo, args, followfirst)
351 351 return subset.filter(lambda r: r in s)
352 352
353 353 def ancestors(repo, subset, x):
354 354 """``ancestors(set)``
355 355 Changesets that are ancestors of a changeset in set.
356 356 """
357 357 return _ancestors(repo, subset, x)
358 358
359 359 def _firstancestors(repo, subset, x):
360 360 # ``_firstancestors(set)``
361 361 # Like ``ancestors(set)`` but follows only the first parents.
362 362 return _ancestors(repo, subset, x, followfirst=True)
363 363
364 364 def ancestorspec(repo, subset, x, n):
365 365 """``set~n``
366 366 Changesets that are the Nth ancestor (first parents only) of a changeset
367 367 in set.
368 368 """
369 369 try:
370 370 n = int(n[1])
371 371 except (TypeError, ValueError):
372 372 raise error.ParseError(_("~ expects a number"))
373 373 ps = set()
374 374 cl = repo.changelog
375 375 for r in getset(repo, baseset(cl), x):
376 376 for i in range(n):
377 377 r = cl.parentrevs(r)[0]
378 378 ps.add(r)
379 379 return subset.filter(lambda r: r in ps)
380 380
381 381 def author(repo, subset, x):
382 382 """``author(string)``
383 383 Alias for ``user(string)``.
384 384 """
385 385 # i18n: "author" is a keyword
386 386 n = encoding.lower(getstring(x, _("author requires a string")))
387 387 kind, pattern, matcher = _substringmatcher(n)
388 388 return subset.filter(lambda x: matcher(encoding.lower(repo[x].user())))
389 389
390 390 def only(repo, subset, x):
391 391 """``only(set, [set])``
392 392 Changesets that are ancestors of the first set that are not ancestors
393 393 of any other head in the repo. If a second set is specified, the result
394 394 is ancestors of the first set that are not ancestors of the second set
395 395 (i.e. ::<set1> - ::<set2>).
396 396 """
397 397 cl = repo.changelog
398 398 args = getargs(x, 1, 2, _('only takes one or two arguments'))
399 399 include = getset(repo, spanset(repo), args[0]).set()
400 400 if len(args) == 1:
401 401 descendants = set(_revdescendants(repo, include, False))
402 402 exclude = [rev for rev in cl.headrevs()
403 403 if rev not in descendants and rev not in include]
404 404 else:
405 405 exclude = getset(repo, spanset(repo), args[1])
406 406
407 407 results = set(ancestormod.missingancestors(include, exclude, cl.parentrevs))
408 408 return lazyset(subset, lambda x: x in results)
409 409
410 410 def bisect(repo, subset, x):
411 411 """``bisect(string)``
412 412 Changesets marked in the specified bisect status:
413 413
414 414 - ``good``, ``bad``, ``skip``: csets explicitly marked as good/bad/skip
415 415 - ``goods``, ``bads`` : csets topologically good/bad
416 416 - ``range`` : csets taking part in the bisection
417 417 - ``pruned`` : csets that are goods, bads or skipped
418 418 - ``untested`` : csets whose fate is yet unknown
419 419 - ``ignored`` : csets ignored due to DAG topology
420 420 - ``current`` : the cset currently being bisected
421 421 """
422 422 # i18n: "bisect" is a keyword
423 423 status = getstring(x, _("bisect requires a string")).lower()
424 424 state = set(hbisect.get(repo, status))
425 425 return subset.filter(lambda r: r in state)
426 426
427 427 # Backward-compatibility
428 428 # - no help entry so that we do not advertise it any more
429 429 def bisected(repo, subset, x):
430 430 return bisect(repo, subset, x)
431 431
432 432 def bookmark(repo, subset, x):
433 433 """``bookmark([name])``
434 434 The named bookmark or all bookmarks.
435 435
436 436 If `name` starts with `re:`, the remainder of the name is treated as
437 437 a regular expression. To match a bookmark that actually starts with `re:`,
438 438 use the prefix `literal:`.
439 439 """
440 440 # i18n: "bookmark" is a keyword
441 441 args = getargs(x, 0, 1, _('bookmark takes one or no arguments'))
442 442 if args:
443 443 bm = getstring(args[0],
444 444 # i18n: "bookmark" is a keyword
445 445 _('the argument to bookmark must be a string'))
446 446 kind, pattern, matcher = _stringmatcher(bm)
447 447 if kind == 'literal':
448 448 bmrev = repo._bookmarks.get(bm, None)
449 449 if not bmrev:
450 450 raise util.Abort(_("bookmark '%s' does not exist") % bm)
451 451 bmrev = repo[bmrev].rev()
452 452 return subset.filter(lambda r: r == bmrev)
453 453 else:
454 454 matchrevs = set()
455 455 for name, bmrev in repo._bookmarks.iteritems():
456 456 if matcher(name):
457 457 matchrevs.add(bmrev)
458 458 if not matchrevs:
459 459 raise util.Abort(_("no bookmarks exist that match '%s'")
460 460 % pattern)
461 461 bmrevs = set()
462 462 for bmrev in matchrevs:
463 463 bmrevs.add(repo[bmrev].rev())
464 464 return subset & bmrevs
465 465
466 466 bms = set([repo[r].rev()
467 467 for r in repo._bookmarks.values()])
468 468 return subset.filter(lambda r: r in bms)
469 469
470 470 def branch(repo, subset, x):
471 471 """``branch(string or set)``
472 472 All changesets belonging to the given branch or the branches of the given
473 473 changesets.
474 474
475 475 If `string` starts with `re:`, the remainder of the name is treated as
476 476 a regular expression. To match a branch that actually starts with `re:`,
477 477 use the prefix `literal:`.
478 478 """
479 479 try:
480 480 b = getstring(x, '')
481 481 except error.ParseError:
482 482 # not a string, but another revspec, e.g. tip()
483 483 pass
484 484 else:
485 485 kind, pattern, matcher = _stringmatcher(b)
486 486 if kind == 'literal':
487 487 # note: falls through to the revspec case if no branch with
488 488 # this name exists
489 489 if pattern in repo.branchmap():
490 490 return subset.filter(lambda r: matcher(repo[r].branch()))
491 491 else:
492 492 return subset.filter(lambda r: matcher(repo[r].branch()))
493 493
494 494 s = getset(repo, spanset(repo), x)
495 495 b = set()
496 496 for r in s:
497 497 b.add(repo[r].branch())
498 498 s = s.set()
499 499 return subset.filter(lambda r: r in s or repo[r].branch() in b)
500 500
501 501 def bumped(repo, subset, x):
502 502 """``bumped()``
503 503 Mutable changesets marked as successors of public changesets.
504 504
505 505 Only non-public and non-obsolete changesets can be `bumped`.
506 506 """
507 507 # i18n: "bumped" is a keyword
508 508 getargs(x, 0, 0, _("bumped takes no arguments"))
509 509 bumped = obsmod.getrevs(repo, 'bumped')
510 510 return subset & bumped
511 511
512 512 def bundle(repo, subset, x):
513 513 """``bundle()``
514 514 Changesets in the bundle.
515 515
516 516 Bundle must be specified by the -R option."""
517 517
518 518 try:
519 519 bundlerevs = repo.changelog.bundlerevs
520 520 except AttributeError:
521 521 raise util.Abort(_("no bundle provided - specify with -R"))
522 522 return subset & bundlerevs
523 523
524 524 def checkstatus(repo, subset, pat, field):
525 525 hasset = matchmod.patkind(pat) == 'set'
526 526
527 527 def matches(x):
528 528 m = None
529 529 fname = None
530 530 c = repo[x]
531 531 if not m or hasset:
532 532 m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=c)
533 533 if not m.anypats() and len(m.files()) == 1:
534 534 fname = m.files()[0]
535 535 if fname is not None:
536 536 if fname not in c.files():
537 537 return False
538 538 else:
539 539 for f in c.files():
540 540 if m(f):
541 541 break
542 542 else:
543 543 return False
544 544 files = repo.status(c.p1().node(), c.node())[field]
545 545 if fname is not None:
546 546 if fname in files:
547 547 return True
548 548 else:
549 549 for f in files:
550 550 if m(f):
551 551 return True
552 552
553 553 return subset.filter(matches)
554 554
555 555 def _children(repo, narrow, parentset):
556 556 cs = set()
557 557 if not parentset:
558 558 return baseset(cs)
559 559 pr = repo.changelog.parentrevs
560 560 minrev = min(parentset)
561 561 for r in narrow:
562 562 if r <= minrev:
563 563 continue
564 564 for p in pr(r):
565 565 if p in parentset:
566 566 cs.add(r)
567 567 return baseset(cs)
568 568
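`_children` scans the candidate revisions and keeps those with at least one parent in the parent set, skipping anything at or below the smallest parent revision. A toy version over the same kind of parent table (names are illustrative, not Mercurial's):

```python
# Toy parent table: rev -> (p1, p2); -1 stands for the null revision.
PARENTS = {0: (-1, -1), 1: (0, -1), 2: (0, -1), 3: (1, 2)}

def children_of(narrow, parentset):
    """Collect revs in `narrow` having a parent in `parentset`."""
    minrev = min(parentset)
    cs = set()
    for r in narrow:
        # No child can have a revision number <= its parent's.
        if r <= minrev:
            continue
        if any(p in parentset for p in PARENTS[r]):
            cs.add(r)
    return cs
```

The `minrev` early-out is the same pruning the real loop performs before touching `parentrevs`.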
569 569 def children(repo, subset, x):
570 570 """``children(set)``
571 571 Child changesets of changesets in set.
572 572 """
573 573 s = getset(repo, baseset(repo), x).set()
574 574 cs = _children(repo, subset, s)
575 575 return subset & cs
576 576
577 577 def closed(repo, subset, x):
578 578 """``closed()``
579 579 Changeset is closed.
580 580 """
581 581 # i18n: "closed" is a keyword
582 582 getargs(x, 0, 0, _("closed takes no arguments"))
583 583 return subset.filter(lambda r: repo[r].closesbranch())
584 584
585 585 def contains(repo, subset, x):
586 586 """``contains(pattern)``
587 587 Revision contains a file matching pattern. See :hg:`help patterns`
588 588 for information about file patterns.
589 589
590 590 The pattern without explicit kind like ``glob:`` is expected to be
591 591 relative to the current directory and match against a file exactly
592 592 for efficiency.
593 593 """
594 594 # i18n: "contains" is a keyword
595 595 pat = getstring(x, _("contains requires a pattern"))
596 596
597 597 def matches(x):
598 598 if not matchmod.patkind(pat):
599 599 pats = pathutil.canonpath(repo.root, repo.getcwd(), pat)
600 600 if pats in repo[x]:
601 601 return True
602 602 else:
603 603 c = repo[x]
604 604 m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=c)
605 605 for f in c.manifest():
606 606 if m(f):
607 607 return True
608 608 return False
609 609
610 610 return subset.filter(matches)
611 611
612 612 def converted(repo, subset, x):
613 613 """``converted([id])``
614 614 Changesets converted from the given identifier in the old repository if
615 615 present, or all converted changesets if no identifier is specified.
616 616 """
617 617
618 618 # There is exactly no chance of resolving the revision, so do a simple
619 619 # string compare and hope for the best
620 620
621 621 rev = None
622 622 # i18n: "converted" is a keyword
623 623 l = getargs(x, 0, 1, _('converted takes one or no arguments'))
624 624 if l:
625 625 # i18n: "converted" is a keyword
626 626 rev = getstring(l[0], _('converted requires a revision'))
627 627
628 628 def _matchvalue(r):
629 629 source = repo[r].extra().get('convert_revision', None)
630 630 return source is not None and (rev is None or source.startswith(rev))
631 631
632 632 return subset.filter(lambda r: _matchvalue(r))
633 633
634 634 def date(repo, subset, x):
635 635 """``date(interval)``
636 636 Changesets within the interval, see :hg:`help dates`.
637 637 """
638 638 # i18n: "date" is a keyword
639 639 ds = getstring(x, _("date requires a string"))
640 640 dm = util.matchdate(ds)
641 641 return subset.filter(lambda x: dm(repo[x].date()[0]))
642 642
643 643 def desc(repo, subset, x):
644 644 """``desc(string)``
645 645 Search commit message for string. The match is case-insensitive.
646 646 """
647 647 # i18n: "desc" is a keyword
648 648 ds = encoding.lower(getstring(x, _("desc requires a string")))
649 649
650 650 def matches(x):
651 651 c = repo[x]
652 652 return ds in encoding.lower(c.description())
653 653
654 654 return subset.filter(matches)
655 655
656 656 def _descendants(repo, subset, x, followfirst=False):
657 657 args = getset(repo, spanset(repo), x)
658 658 if not args:
659 659 return baseset([])
660 660 s = _revdescendants(repo, args, followfirst)
661 661
662 662 # Both sets need to be ascending in order to lazily return the union
663 663 # in the correct order.
664 664 args.ascending()
665 665
666 666 subsetset = subset.set()
667 667 result = (orderedlazyset(s, subsetset.__contains__, ascending=True) +
668 668 orderedlazyset(args, subsetset.__contains__, ascending=True))
669 669
670 670 # Wrap result in a lazyset since it's an _addset, which doesn't implement
671 671 # all the necessary functions to be consumed by callers.
672 672 return orderedlazyset(result, lambda r: True, ascending=True)
673 673
674 674 def descendants(repo, subset, x):
675 675 """``descendants(set)``
676 676 Changesets which are descendants of changesets in set.
677 677 """
678 678 return _descendants(repo, subset, x)
679 679
680 680 def _firstdescendants(repo, subset, x):
681 681 # ``_firstdescendants(set)``
682 682 # Like ``descendants(set)`` but follows only the first parents.
683 683 return _descendants(repo, subset, x, followfirst=True)
684 684
685 685 def destination(repo, subset, x):
686 686 """``destination([set])``
687 687 Changesets that were created by a graft, transplant or rebase operation,
688 688 with the given revisions specified as the source. Omitting the optional set
689 689 is the same as passing all().
690 690 """
691 691 if x is not None:
692 692 args = getset(repo, spanset(repo), x).set()
693 693 else:
694 694 args = getall(repo, spanset(repo), x).set()
695 695
696 696 dests = set()
697 697
698 698 # subset contains all of the possible destinations that can be returned, so
699 699 # iterate over them and see if their source(s) were provided in the args.
700 700 # Even if the immediate src of r is not in the args, src's source (or
701 701 # further back) may be. Scanning back further than the immediate src allows
702 702 # transitive transplants and rebases to yield the same results as transitive
703 703 # grafts.
704 704 for r in subset:
705 705 src = _getrevsource(repo, r)
706 706 lineage = None
707 707
708 708 while src is not None:
709 709 if lineage is None:
710 710 lineage = list()
711 711
712 712 lineage.append(r)
713 713
714 714 # The visited lineage is a match if the current source is in the arg
715 715 # set. Since every candidate dest is visited by way of iterating
716 716 # subset, any dests further back in the lineage will be tested by a
717 717 # different iteration over subset. Likewise, if the src was already
718 718 # selected, the current lineage can be selected without going back
719 719 # further.
720 720 if src in args or src in dests:
721 721 dests.update(lineage)
722 722 break
723 723
724 724 r = src
725 725 src = _getrevsource(repo, r)
726 726
727 727 return subset.filter(lambda r: r in dests)
728 728
729 729 def divergent(repo, subset, x):
730 730 """``divergent()``
731 731 Final successors of changesets with an alternative set of final successors.
732 732 """
733 733 # i18n: "divergent" is a keyword
734 734 getargs(x, 0, 0, _("divergent takes no arguments"))
735 735 divergent = obsmod.getrevs(repo, 'divergent')
736 736 return subset.filter(lambda r: r in divergent)
737 737
738 738 def draft(repo, subset, x):
739 739 """``draft()``
740 740 Changeset in draft phase."""
741 741 # i18n: "draft" is a keyword
742 742 getargs(x, 0, 0, _("draft takes no arguments"))
743 743 pc = repo._phasecache
744 744 return subset.filter(lambda r: pc.phase(repo, r) == phases.draft)
745 745
746 746 def extinct(repo, subset, x):
747 747 """``extinct()``
748 748 Obsolete changesets with obsolete descendants only.
749 749 """
750 750 # i18n: "extinct" is a keyword
751 751 getargs(x, 0, 0, _("extinct takes no arguments"))
752 752 extincts = obsmod.getrevs(repo, 'extinct')
753 753 return subset & extincts
754 754
755 755 def extra(repo, subset, x):
756 756 """``extra(label, [value])``
757 757 Changesets with the given label in the extra metadata, with the given
758 758 optional value.
759 759
760 760 If `value` starts with `re:`, the remainder of the value is treated as
761 761 a regular expression. To match a value that actually starts with `re:`,
762 762 use the prefix `literal:`.
763 763 """
764 764
765 765 # i18n: "extra" is a keyword
766 766 l = getargs(x, 1, 2, _('extra takes at least 1 and at most 2 arguments'))
767 767 # i18n: "extra" is a keyword
768 768 label = getstring(l[0], _('first argument to extra must be a string'))
769 769 value = None
770 770
771 771 if len(l) > 1:
772 772 # i18n: "extra" is a keyword
773 773 value = getstring(l[1], _('second argument to extra must be a string'))
774 774 kind, value, matcher = _stringmatcher(value)
775 775
776 776 def _matchvalue(r):
777 777 extra = repo[r].extra()
778 778 return label in extra and (value is None or matcher(extra[label]))
779 779
780 780 return subset.filter(lambda r: _matchvalue(r))
781 781
782 782 def filelog(repo, subset, x):
783 783 """``filelog(pattern)``
784 784 Changesets connected to the specified filelog.
785 785
786 786 For performance reasons, ``filelog()`` does not show every changeset
787 787 that affects the requested file(s). See :hg:`help log` for details. For
788 788 a slower, more accurate result, use ``file()``.
789 789
790 790 The pattern without explicit kind like ``glob:`` is expected to be
791 791 relative to the current directory and match against a file exactly
792 792 for efficiency.
793 793 """
794 794
795 795 # i18n: "filelog" is a keyword
796 796 pat = getstring(x, _("filelog requires a pattern"))
797 797 s = set()
798 798
799 799 if not matchmod.patkind(pat):
800 800 f = pathutil.canonpath(repo.root, repo.getcwd(), pat)
801 801 fl = repo.file(f)
802 802 for fr in fl:
803 803 s.add(fl.linkrev(fr))
804 804 else:
805 805 m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=repo[None])
806 806 for f in repo[None]:
807 807 if m(f):
808 808 fl = repo.file(f)
809 809 for fr in fl:
810 810 s.add(fl.linkrev(fr))
811 811
812 812 return subset.filter(lambda r: r in s)
813 813
814 814 def first(repo, subset, x):
815 815 """``first(set, [n])``
816 816 An alias for limit().
817 817 """
818 818 return limit(repo, subset, x)
819 819
820 820 def _follow(repo, subset, x, name, followfirst=False):
821 821 l = getargs(x, 0, 1, _("%s takes no arguments or a filename") % name)
822 822 c = repo['.']
823 823 if l:
824 824 x = getstring(l[0], _("%s expected a filename") % name)
825 825 if x in c:
826 826 cx = c[x]
827 827 s = set(ctx.rev() for ctx in cx.ancestors(followfirst=followfirst))
828 828 # include the revision responsible for the most recent version
829 829 s.add(cx.linkrev())
830 830 else:
831 831 return baseset([])
832 832 else:
833 833 s = _revancestors(repo, baseset([c.rev()]), followfirst)
834 834
835 835 return subset.filter(lambda r: r in s)
836 836
837 837 def follow(repo, subset, x):
838 838 """``follow([file])``
839 839 An alias for ``::.`` (ancestors of the working copy's first parent).
840 840 If a filename is specified, the history of the given file is followed,
841 841 including copies.
842 842 """
843 843 return _follow(repo, subset, x, 'follow')
844 844
845 845 def _followfirst(repo, subset, x):
846 846 # ``followfirst([file])``
847 847 # Like ``follow([file])`` but follows only the first parent of
848 848 # every revision or file revision.
849 849 return _follow(repo, subset, x, '_followfirst', followfirst=True)
850 850
851 851 def getall(repo, subset, x):
852 852 """``all()``
853 853 All changesets, the same as ``0:tip``.
854 854 """
855 855 # i18n: "all" is a keyword
856 856 getargs(x, 0, 0, _("all takes no arguments"))
857 857 return subset
858 858
859 859 def grep(repo, subset, x):
860 860 """``grep(regex)``
861 861 Like ``keyword(string)`` but accepts a regex. Use ``grep(r'...')``
862 862 to ensure special escape characters are handled correctly. Unlike
863 863 ``keyword(string)``, the match is case-sensitive.
864 864 """
865 865 try:
866 866 # i18n: "grep" is a keyword
867 867 gr = re.compile(getstring(x, _("grep requires a string")))
868 868 except re.error, e:
869 869 raise error.ParseError(_('invalid match pattern: %s') % e)
870 870
871 871 def matches(x):
872 872 c = repo[x]
873 873 for e in c.files() + [c.user(), c.description()]:
874 874 if gr.search(e):
875 875 return True
876 876 return False
877 877
878 878 return subset.filter(matches)
879 879
880 880 def _matchfiles(repo, subset, x):
881 881 # _matchfiles takes a revset list of prefixed arguments:
882 882 #
883 883 # [p:foo, i:bar, x:baz]
884 884 #
885 885 # builds a match object from them and filters subset. Allowed
886 886 # prefixes are 'p:' for regular patterns, 'i:' for include
887 887 # patterns and 'x:' for exclude patterns. Use 'r:' prefix to pass
888 888 # a revision identifier, or the empty string to reference the
889 889 # working directory, from which the match object is
890 890 # initialized. Use 'd:' to set the default matching mode, default
891 891 # to 'glob'. At most one 'r:' and 'd:' argument can be passed.
892 892
893 893 # i18n: "_matchfiles" is a keyword
894 894 l = getargs(x, 1, -1, _("_matchfiles requires at least one argument"))
895 895 pats, inc, exc = [], [], []
896 896 hasset = False
897 897 rev, default = None, None
898 898 for arg in l:
899 899 # i18n: "_matchfiles" is a keyword
900 900 s = getstring(arg, _("_matchfiles requires string arguments"))
901 901 prefix, value = s[:2], s[2:]
902 902 if prefix == 'p:':
903 903 pats.append(value)
904 904 elif prefix == 'i:':
905 905 inc.append(value)
906 906 elif prefix == 'x:':
907 907 exc.append(value)
908 908 elif prefix == 'r:':
909 909 if rev is not None:
910 910 # i18n: "_matchfiles" is a keyword
911 911 raise error.ParseError(_('_matchfiles expected at most one '
912 912 'revision'))
913 913 rev = value
914 914 elif prefix == 'd:':
915 915 if default is not None:
916 916 # i18n: "_matchfiles" is a keyword
917 917 raise error.ParseError(_('_matchfiles expected at most one '
918 918 'default mode'))
919 919 default = value
920 920 else:
921 921 # i18n: "_matchfiles" is a keyword
922 922 raise error.ParseError(_('invalid _matchfiles prefix: %s') % prefix)
923 923 if not hasset and matchmod.patkind(value) == 'set':
924 924 hasset = True
925 925 if not default:
926 926 default = 'glob'
927 927
928 928 def matches(x):
929 929 m = None
930 930 c = repo[x]
931 931 if not m or (hasset and rev is None):
932 932 ctx = c
933 933 if rev is not None:
934 934 ctx = repo[rev or None]
935 935 m = matchmod.match(repo.root, repo.getcwd(), pats, include=inc,
936 936 exclude=exc, ctx=ctx, default=default)
937 937 for f in c.files():
938 938 if m(f):
939 939 return True
940 940 return False
941 941
942 942 return subset.filter(matches)
943 943
944 944 def hasfile(repo, subset, x):
945 945 """``file(pattern)``
946 946 Changesets affecting files matched by pattern.
947 947
948 948 For a faster but less accurate result, consider using ``filelog()``
949 949 instead.
950 950
951 951 This predicate uses ``glob:`` as the default kind of pattern.
952 952 """
953 953 # i18n: "file" is a keyword
954 954 pat = getstring(x, _("file requires a pattern"))
955 955 return _matchfiles(repo, subset, ('string', 'p:' + pat))
956 956
957 957 def head(repo, subset, x):
958 958 """``head()``
959 959 Changeset is a named branch head.
960 960 """
961 961 # i18n: "head" is a keyword
962 962 getargs(x, 0, 0, _("head takes no arguments"))
963 963 hs = set()
964 964 for b, ls in repo.branchmap().iteritems():
965 965 hs.update(repo[h].rev() for h in ls)
966 966 return baseset(hs).filter(subset.__contains__)
967 967
968 968 def heads(repo, subset, x):
969 969 """``heads(set)``
970 970 Members of set with no children in set.
971 971 """
972 972 s = getset(repo, subset, x)
973 973 ps = parents(repo, subset, x)
974 974 return s - ps
975 975
976 976 def hidden(repo, subset, x):
977 977 """``hidden()``
978 978 Hidden changesets.
979 979 """
980 980 # i18n: "hidden" is a keyword
981 981 getargs(x, 0, 0, _("hidden takes no arguments"))
982 982 hiddenrevs = repoview.filterrevs(repo, 'visible')
983 983 return subset & hiddenrevs
984 984
985 985 def keyword(repo, subset, x):
986 986 """``keyword(string)``
987 987 Search commit message, user name, and names of changed files for
988 988 string. The match is case-insensitive.
989 989 """
990 990 # i18n: "keyword" is a keyword
991 991 kw = encoding.lower(getstring(x, _("keyword requires a string")))
992 992
993 993 def matches(r):
994 994 c = repo[r]
995 995 return util.any(kw in encoding.lower(t) for t in c.files() + [c.user(),
996 996 c.description()])
997 997
998 998 return subset.filter(matches)
999 999
1000 1000 def limit(repo, subset, x):
1001 1001 """``limit(set, [n])``
1002 1002 First n members of set, defaulting to 1.
1003 1003 """
1004 1004 # i18n: "limit" is a keyword
1005 1005 l = getargs(x, 1, 2, _("limit requires one or two arguments"))
1006 1006 try:
1007 1007 lim = 1
1008 1008 if len(l) == 2:
1009 1009 # i18n: "limit" is a keyword
1010 1010 lim = int(getstring(l[1], _("limit requires a number")))
1011 1011 except (TypeError, ValueError):
1012 1012 # i18n: "limit" is a keyword
1013 1013 raise error.ParseError(_("limit expects a number"))
1014 1014 ss = subset.set()
1015 1015 os = getset(repo, spanset(repo), l[0])
1016 1016 bs = baseset([])
1017 1017 it = iter(os)
1018 1018 for x in xrange(lim):
1019 1019 try:
1020 1020 y = it.next()
1021 1021 if y in ss:
1022 1022 bs.append(y)
1023 1023 except StopIteration:
1024 1024 break
1025 1025 return bs
1026 1026
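Note that `limit` above inspects only the first `lim` elements of the ordered set, keeping those also present in the subset — it does not keep scanning until it finds `lim` matches. The same behavior in plain Python with `itertools` (a sketch, not the revset classes):

```python
from itertools import islice

def limit_set(ordered, subset, n=1):
    """Mirror the loop above: look at only the first n of `ordered`,
    keeping those that are also in `subset`."""
    members = set(subset)
    return [r for r in islice(ordered, n) if r in members]
```

So `limit_set([5, 4, 3, 2, 1], {3, 4}, 2)` inspects only 5 and 4 and returns `[4]`, even though 3 also matches further along.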
1027 1027 def last(repo, subset, x):
1028 1028 """``last(set, [n])``
1029 1029 Last n members of set, defaulting to 1.
1030 1030 """
1031 1031 # i18n: "last" is a keyword
1032 1032 l = getargs(x, 1, 2, _("last requires one or two arguments"))
1033 1033 try:
1034 1034 lim = 1
1035 1035 if len(l) == 2:
1036 1036 # i18n: "last" is a keyword
1037 1037 lim = int(getstring(l[1], _("last requires a number")))
1038 1038 except (TypeError, ValueError):
1039 1039 # i18n: "last" is a keyword
1040 1040 raise error.ParseError(_("last expects a number"))
1041 1041 ss = subset.set()
1042 1042 os = getset(repo, spanset(repo), l[0])
1043 1043 os.reverse()
1044 1044 bs = baseset([])
1045 1045 it = iter(os)
1046 1046 for x in xrange(lim):
1047 1047 try:
1048 1048 y = it.next()
1049 1049 if y in ss:
1050 1050 bs.append(y)
1051 1051 except StopIteration:
1052 1052 break
1053 1053 return bs
1054 1054
1055 1055 def maxrev(repo, subset, x):
1056 1056 """``max(set)``
1057 1057 Changeset with highest revision number in set.
1058 1058 """
1059 1059 os = getset(repo, spanset(repo), x)
1060 1060 if os:
1061 1061 m = os.max()
1062 1062 if m in subset:
1063 1063 return baseset([m])
1064 1064 return baseset([])
1065 1065
1066 1066 def merge(repo, subset, x):
1067 1067 """``merge()``
1068 1068 Changeset is a merge changeset.
1069 1069 """
1070 1070 # i18n: "merge" is a keyword
1071 1071 getargs(x, 0, 0, _("merge takes no arguments"))
1072 1072 cl = repo.changelog
1073 1073 return subset.filter(lambda r: cl.parentrevs(r)[1] != -1)
1074 1074
1075 1075 def branchpoint(repo, subset, x):
1076 1076 """``branchpoint()``
1077 1077 Changesets with more than one child.
1078 1078 """
1079 1079 # i18n: "branchpoint" is a keyword
1080 1080 getargs(x, 0, 0, _("branchpoint takes no arguments"))
1081 1081 cl = repo.changelog
1082 1082 if not subset:
1083 1083 return baseset([])
1084 1084 baserev = min(subset)
1085 1085 parentscount = [0]*(len(repo) - baserev)
1086 1086 for r in cl.revs(start=baserev + 1):
1087 1087 for p in cl.parentrevs(r):
1088 1088 if p >= baserev:
1089 1089 parentscount[p - baserev] += 1
1090 1090 return subset.filter(lambda r: parentscount[r - baserev] > 1)
1091 1091
1092 1092 def minrev(repo, subset, x):
1093 1093 """``min(set)``
1094 1094 Changeset with lowest revision number in set.
1095 1095 """
1096 1096 os = getset(repo, spanset(repo), x)
1097 1097 if os:
1098 1098 m = os.min()
1099 1099 if m in subset:
1100 1100 return baseset([m])
1101 1101 return baseset([])
1102 1102
1103 1103 def _missingancestors(repo, subset, x):
1104 1104 # i18n: "_missingancestors" is a keyword
1105 1105 revs, bases = getargs(x, 2, 2,
1106 1106 _("_missingancestors requires two arguments"))
1107 1107 rs = baseset(repo)
1108 1108 revs = getset(repo, rs, revs)
1109 1109 bases = getset(repo, rs, bases)
1110 1110 missing = set(repo.changelog.findmissingrevs(bases, revs))
1111 1111 return baseset([r for r in subset if r in missing])
1112 1112
1113 1113 def modifies(repo, subset, x):
1114 1114 """``modifies(pattern)``
1115 1115 Changesets modifying files matched by pattern.
1116 1116
1117 1117 The pattern without explicit kind like ``glob:`` is expected to be
1118 1118 relative to the current directory and match against a file or a
1119 1119 directory.
1120 1120 """
1121 1121 # i18n: "modifies" is a keyword
1122 1122 pat = getstring(x, _("modifies requires a pattern"))
1123 1123 return checkstatus(repo, subset, pat, 0)
1124 1124
1125 1125 def node_(repo, subset, x):
1126 1126 """``id(string)``
1127 1127 Revision non-ambiguously specified by the given hex string prefix.
1128 1128 """
1129 1129 # i18n: "id" is a keyword
1130 1130 l = getargs(x, 1, 1, _("id requires one argument"))
1131 1131 # i18n: "id" is a keyword
1132 1132 n = getstring(l[0], _("id requires a string"))
1133 1133 if len(n) == 40:
1134 1134 rn = repo[n].rev()
1135 1135 else:
1136 1136 rn = None
1137 1137 pm = repo.changelog._partialmatch(n)
1138 1138 if pm is not None:
1139 1139 rn = repo.changelog.rev(pm)
1140 1140
1141 1141 return subset.filter(lambda r: r == rn)
1142 1142
1143 1143 def obsolete(repo, subset, x):
1144 1144 """``obsolete()``
1145 1145 Mutable changeset with a newer version."""
1146 1146 # i18n: "obsolete" is a keyword
1147 1147 getargs(x, 0, 0, _("obsolete takes no arguments"))
1148 1148 obsoletes = obsmod.getrevs(repo, 'obsolete')
1149 1149 return subset & obsoletes
1150 1150
1151 1151 def origin(repo, subset, x):
1152 1152 """``origin([set])``
1153 1153 Changesets that were specified as a source for the grafts, transplants or
1154 1154 rebases that created the given revisions. Omitting the optional set is the
1155 1155 same as passing all(). If a changeset created by these operations is itself
1156 1156 specified as a source for one of these operations, only the source changeset
1157 1157 for the first operation is selected.
1158 1158 """
1159 1159 if x is not None:
1160 1160 args = getset(repo, spanset(repo), x).set()
1161 1161 else:
1162 1162 args = getall(repo, spanset(repo), x).set()
1163 1163
1164 1164 def _firstsrc(rev):
1165 1165 src = _getrevsource(repo, rev)
1166 1166 if src is None:
1167 1167 return None
1168 1168
1169 1169 while True:
1170 1170 prev = _getrevsource(repo, src)
1171 1171
1172 1172 if prev is None:
1173 1173 return src
1174 1174 src = prev
1175 1175
1176 1176 o = set([_firstsrc(r) for r in args])
1177 1177 return subset.filter(lambda r: r in o)
1178 1178
1179 1179 def outgoing(repo, subset, x):
1180 1180 """``outgoing([path])``
1181 1181 Changesets not found in the specified destination repository, or the
1182 1182 default push location.
1183 1183 """
1184 1184 import hg # avoid start-up nasties
1185 1185 # i18n: "outgoing" is a keyword
1186 1186 l = getargs(x, 0, 1, _("outgoing takes one or no arguments"))
1187 1187 # i18n: "outgoing" is a keyword
1188 1188 dest = l and getstring(l[0], _("outgoing requires a repository path")) or ''
1189 1189 dest = repo.ui.expandpath(dest or 'default-push', dest or 'default')
1190 1190 dest, branches = hg.parseurl(dest)
1191 1191 revs, checkout = hg.addbranchrevs(repo, repo, branches, [])
1192 1192 if revs:
1193 1193 revs = [repo.lookup(rev) for rev in revs]
1194 1194 other = hg.peer(repo, {}, dest)
1195 1195 repo.ui.pushbuffer()
1196 1196 outgoing = discovery.findcommonoutgoing(repo, other, onlyheads=revs)
1197 1197 repo.ui.popbuffer()
1198 1198 cl = repo.changelog
1199 1199 o = set([cl.rev(r) for r in outgoing.missing])
1200 1200 return subset.filter(lambda r: r in o)
1201 1201
1202 1202 def p1(repo, subset, x):
1203 1203 """``p1([set])``
1204 1204 First parent of changesets in set, or the working directory.
1205 1205 """
1206 1206 if x is None:
1207 1207 p = repo[x].p1().rev()
1208 1208 return subset.filter(lambda r: r == p)
1209 1209
1210 1210 ps = set()
1211 1211 cl = repo.changelog
1212 1212 for r in getset(repo, spanset(repo), x):
1213 1213 ps.add(cl.parentrevs(r)[0])
1214 1214 return subset & ps
1215 1215
1216 1216 def p2(repo, subset, x):
1217 1217 """``p2([set])``
1218 1218 Second parent of changesets in set, or the working directory.
1219 1219 """
1220 1220 if x is None:
1221 1221 ps = repo[x].parents()
1222 1222 try:
1223 1223 p = ps[1].rev()
1224 1224 return subset.filter(lambda r: r == p)
1225 1225 except IndexError:
1226 1226 return baseset([])
1227 1227
1228 1228 ps = set()
1229 1229 cl = repo.changelog
1230 1230 for r in getset(repo, spanset(repo), x):
1231 1231 ps.add(cl.parentrevs(r)[1])
1232 1232 return subset & ps
1233 1233
1234 1234 def parents(repo, subset, x):
1235 1235 """``parents([set])``
1236 1236 The set of all parents for all changesets in set, or the working directory.
1237 1237 """
1238 1238 if x is None:
1239 1239 ps = tuple(p.rev() for p in repo[x].parents())
1240 1240 return subset & ps
1241 1241
1242 1242 ps = set()
1243 1243 cl = repo.changelog
1244 1244 for r in getset(repo, spanset(repo), x):
1245 1245 ps.update(cl.parentrevs(r))
1246 1246 return subset & ps
1247 1247
1248 1248 def parentspec(repo, subset, x, n):
1249 1249 """``set^0``
1250 1250 The set.
1251 1251 ``set^1`` (or ``set^``), ``set^2``
1252 1252 First or second parent, respectively, of all changesets in set.
1253 1253 """
1254 1254 try:
1255 1255 n = int(n[1])
1256 1256 if n not in (0, 1, 2):
1257 1257 raise ValueError
1258 1258 except (TypeError, ValueError):
1259 1259 raise error.ParseError(_("^ expects a number 0, 1, or 2"))
1260 1260 ps = set()
1261 1261 cl = repo.changelog
1262 1262 for r in getset(repo, baseset(cl), x):
1263 1263 if n == 0:
1264 1264 ps.add(r)
1265 1265 elif n == 1:
1266 1266 ps.add(cl.parentrevs(r)[0])
1267 1267 elif n == 2:
1268 1268 parents = cl.parentrevs(r)
1269 1269 if len(parents) > 1:
1270 1270 ps.add(parents[1])
1271 1271 return subset & ps
1272 1272
1273 1273 def present(repo, subset, x):
1274 1274 """``present(set)``
1275 1275 An empty set, if any revision in set isn't found; otherwise,
1276 1276 all revisions in set.
1277 1277
1278 1278 If any of specified revisions is not present in the local repository,
1279 1279 the query is normally aborted. But this predicate allows the query
1280 1280 to continue even in such cases.
1281 1281 """
1282 1282 try:
1283 1283 return getset(repo, subset, x)
1284 1284 except error.RepoLookupError:
1285 1285 return baseset([])
1286 1286
1287 1287 def public(repo, subset, x):
1288 1288 """``public()``
1289 1289 Changeset in public phase."""
1290 1290 # i18n: "public" is a keyword
1291 1291 getargs(x, 0, 0, _("public takes no arguments"))
1292 1292 pc = repo._phasecache
1293 1293 return subset.filter(lambda r: pc.phase(repo, r) == phases.public)
1294 1294
1295 1295 def remote(repo, subset, x):
1296 1296 """``remote([id [,path]])``
1297 1297 Local revision that corresponds to the given identifier in a
1298 1298 remote repository, if present. Here, the '.' identifier is a
1299 1299 synonym for the current local branch.
1300 1300 """
1301 1301
1302 1302 import hg # avoid start-up nasties
1303 1303 # i18n: "remote" is a keyword
1304 1304 l = getargs(x, 0, 2, _("remote takes one, two or no arguments"))
1305 1305
1306 1306 q = '.'
1307 1307 if len(l) > 0:
1308 1308 # i18n: "remote" is a keyword
1309 1309 q = getstring(l[0], _("remote requires a string id"))
1310 1310 if q == '.':
1311 1311 q = repo['.'].branch()
1312 1312
1313 1313 dest = ''
1314 1314 if len(l) > 1:
1315 1315 # i18n: "remote" is a keyword
1316 1316 dest = getstring(l[1], _("remote requires a repository path"))
1317 1317 dest = repo.ui.expandpath(dest or 'default')
1318 1318 dest, branches = hg.parseurl(dest)
1319 1319 revs, checkout = hg.addbranchrevs(repo, repo, branches, [])
1320 1320 if revs:
1321 1321 revs = [repo.lookup(rev) for rev in revs]
1322 1322 other = hg.peer(repo, {}, dest)
1323 1323 n = other.lookup(q)
1324 1324 if n in repo:
1325 1325 r = repo[n].rev()
1326 1326 if r in subset:
1327 1327 return baseset([r])
1328 1328 return baseset([])
1329 1329
1330 1330 def removes(repo, subset, x):
1331 1331 """``removes(pattern)``
1332 1332 Changesets which remove files matching pattern.
1333 1333
1334 1334 The pattern without explicit kind like ``glob:`` is expected to be
1335 1335 relative to the current directory and match against a file or a
1336 1336 directory.
1337 1337 """
1338 1338 # i18n: "removes" is a keyword
1339 1339 pat = getstring(x, _("removes requires a pattern"))
1340 1340 return checkstatus(repo, subset, pat, 2)
1341 1341
1342 1342 def rev(repo, subset, x):
1343 1343 """``rev(number)``
1344 1344 Revision with the given numeric identifier.
1345 1345 """
1346 1346 # i18n: "rev" is a keyword
1347 1347 l = getargs(x, 1, 1, _("rev requires one argument"))
1348 1348 try:
1349 1349 # i18n: "rev" is a keyword
1350 1350 l = int(getstring(l[0], _("rev requires a number")))
1351 1351 except (TypeError, ValueError):
1352 1352 # i18n: "rev" is a keyword
1353 1353 raise error.ParseError(_("rev expects a number"))
1354 1354 return subset.filter(lambda r: r == l)
1355 1355
1356 1356 def matching(repo, subset, x):
1357 1357 """``matching(revision [, field])``
1358 1358 Changesets in which a given set of fields match the set of fields in the
1359 1359 selected revision or set.
1360 1360
1361 1361 To match more than one field pass the list of fields to match separated
1362 1362 by spaces (e.g. ``author description``).
1363 1363
1364 1364 Valid fields are most regular revision fields and some special fields.
1365 1365
1366 1366 Regular revision fields are ``description``, ``author``, ``branch``,
1367 1367 ``date``, ``files``, ``phase``, ``parents``, ``substate``, ``user``
1368 1368 and ``diff``.
1369 1369 Note that ``author`` and ``user`` are synonyms. ``diff`` refers to the
1370 1370 contents of the revision. Two revisions matching their ``diff`` will
1371 1371 also match their ``files``.
1372 1372
1373 1373 Special fields are ``summary`` and ``metadata``:
1374 1374 ``summary`` matches the first line of the description.
1375 1375 ``metadata`` is equivalent to matching ``description user date``
1376 1376 (i.e. it matches the main metadata fields).
1377 1377
1378 1378 ``metadata`` is the default field which is used when no fields are
1379 1379 specified. You can match more than one field at a time.
1380 1380 """
1381 1381 # i18n: "matching" is a keyword
1382 1382 l = getargs(x, 1, 2, _("matching takes 1 or 2 arguments"))
1383 1383
1384 1384 revs = getset(repo, baseset(repo.changelog), l[0])
1385 1385
1386 1386 fieldlist = ['metadata']
1387 1387 if len(l) > 1:
1388 1388 fieldlist = getstring(l[1],
1389 1389 # i18n: "matching" is a keyword
1390 1390 _("matching requires a string "
1391 1391 "as its second argument")).split()
1392 1392
1393 1393 # Make sure that there are no repeated fields,
1394 1394 # expand the 'special' 'metadata' field type
1395 1395 # and check the 'files' whenever we check the 'diff'
1396 1396 fields = []
1397 1397 for field in fieldlist:
1398 1398 if field == 'metadata':
1399 1399 fields += ['user', 'description', 'date']
1400 1400 elif field == 'diff':
1401 1401 # a revision matching the diff must also match the files
1402 1402 # since matching the diff is very costly, make sure to
1403 1403 # also match the files first
1404 1404 fields += ['files', 'diff']
1405 1405 else:
1406 1406 if field == 'author':
1407 1407 field = 'user'
1408 1408 fields.append(field)
1409 1409 fields = set(fields)
1410 1410 if 'summary' in fields and 'description' in fields:
1411 1411 # If a revision matches its description it also matches its summary
1412 1412 fields.discard('summary')
1413 1413
1414 1414 # We may want to match more than one field
1415 1415 # Not all fields take the same amount of time to be matched
1416 1416 # Sort the selected fields in order of increasing matching cost
1417 1417 fieldorder = ['phase', 'parents', 'user', 'date', 'branch', 'summary',
1418 1418 'files', 'description', 'substate', 'diff']
1419 1419 def fieldkeyfunc(f):
1420 1420 try:
1421 1421 return fieldorder.index(f)
1422 1422 except ValueError:
1423 1423 # assume an unknown field is very costly
1424 1424 return len(fieldorder)
1425 1425 fields = list(fields)
1426 1426 fields.sort(key=fieldkeyfunc)
1427 1427
1428 1428 # Each field will be matched with its own "getfield" function
1429 1429 # which will be added to the getfieldfuncs array of functions
1430 1430 getfieldfuncs = []
1431 1431 _funcs = {
1432 1432 'user': lambda r: repo[r].user(),
1433 1433 'branch': lambda r: repo[r].branch(),
1434 1434 'date': lambda r: repo[r].date(),
1435 1435 'description': lambda r: repo[r].description(),
1436 1436 'files': lambda r: repo[r].files(),
1437 1437 'parents': lambda r: repo[r].parents(),
1438 1438 'phase': lambda r: repo[r].phase(),
1439 1439 'substate': lambda r: repo[r].substate,
1440 1440 'summary': lambda r: repo[r].description().splitlines()[0],
1441 1441 'diff': lambda r: list(repo[r].diff(git=True))
1442 1442 }
1443 1443 for info in fields:
1444 1444 getfield = _funcs.get(info, None)
1445 1445 if getfield is None:
1446 1446 raise error.ParseError(
1447 1447 # i18n: "matching" is a keyword
1448 1448 _("unexpected field name passed to matching: %s") % info)
1449 1449 getfieldfuncs.append(getfield)
1450 1450 # convert the getfield array of functions into a "getinfo" function
1451 1451 # which returns an array of field values (or a single value if there
1452 1452 # is only one field to match)
1453 1453 getinfo = lambda r: [f(r) for f in getfieldfuncs]
1454 1454
1455 1455 def matches(x):
1456 1456 for rev in revs:
1457 1457 target = getinfo(rev)
1458 1458 match = True
1459 1459 for n, f in enumerate(getfieldfuncs):
1460 1460 if target[n] != f(x):
1461 1461 match = False
1462 1462 if match:
1463 1463 return True
1464 1464 return False
1465 1465
1466 1466 return subset.filter(matches)
1467 1467
1468 1468 def reverse(repo, subset, x):
1469 1469 """``reverse(set)``
1470 1470 Reverse order of set.
1471 1471 """
1472 1472 l = getset(repo, subset, x)
1473 1473 l.reverse()
1474 1474 return l
1475 1475
1476 1476 def roots(repo, subset, x):
1477 1477 """``roots(set)``
1478 1478 Changesets in set with no parent changeset in set.
1479 1479 """
1480 1480 s = getset(repo, spanset(repo), x).set()
1481 1481 subset = baseset([r for r in s if r in subset.set()])
1482 1482 cs = _children(repo, subset, s)
1483 1483 return subset - cs
1484 1484
1485 1485 def secret(repo, subset, x):
1486 1486 """``secret()``
1487 1487 Changeset in secret phase."""
1488 1488 # i18n: "secret" is a keyword
1489 1489 getargs(x, 0, 0, _("secret takes no arguments"))
1490 1490 pc = repo._phasecache
1491 1491 return subset.filter(lambda x: pc.phase(repo, x) == phases.secret)
1492 1492
1493 1493 def sort(repo, subset, x):
1494 1494 """``sort(set[, [-]key...])``
1495 1495 Sort set by keys. The default sort order is ascending, specify a key
1496 1496 as ``-key`` to sort in descending order.
1497 1497
1498 1498 The keys can be:
1499 1499
1500 1500 - ``rev`` for the revision number,
1501 1501 - ``branch`` for the branch name,
1502 1502 - ``desc`` for the commit message (description),
1503 1503 - ``user`` for user name (``author`` can be used as an alias),
1504 1504 - ``date`` for the commit date
1505 1505 """
1506 1506 # i18n: "sort" is a keyword
1507 1507 l = getargs(x, 1, 2, _("sort requires one or two arguments"))
1508 1508 keys = "rev"
1509 1509 if len(l) == 2:
1510 1510 # i18n: "sort" is a keyword
1511 1511 keys = getstring(l[1], _("sort spec must be a string"))
1512 1512
1513 1513 s = l[0]
1514 1514 keys = keys.split()
1515 1515 l = []
1516 1516 def invert(s):
1517 1517 return "".join(chr(255 - ord(c)) for c in s)
1518 1518 revs = getset(repo, subset, s)
1519 1519 if keys == ["rev"]:
1520 1520 revs.sort()
1521 1521 return revs
1522 1522 elif keys == ["-rev"]:
1523 1523 revs.sort(reverse=True)
1524 1524 return revs
1525 1525 for r in revs:
1526 1526 c = repo[r]
1527 1527 e = []
1528 1528 for k in keys:
1529 1529 if k == 'rev':
1530 1530 e.append(r)
1531 1531 elif k == '-rev':
1532 1532 e.append(-r)
1533 1533 elif k == 'branch':
1534 1534 e.append(c.branch())
1535 1535 elif k == '-branch':
1536 1536 e.append(invert(c.branch()))
1537 1537 elif k == 'desc':
1538 1538 e.append(c.description())
1539 1539 elif k == '-desc':
1540 1540 e.append(invert(c.description()))
1541 1541 elif k in 'user author':
1542 1542 e.append(c.user())
1543 1543 elif k in '-user -author':
1544 1544 e.append(invert(c.user()))
1545 1545 elif k == 'date':
1546 1546 e.append(c.date()[0])
1547 1547 elif k == '-date':
1548 1548 e.append(-c.date()[0])
1549 1549 else:
1550 1550 raise error.ParseError(_("unknown sort key %r") % k)
1551 1551 e.append(r)
1552 1552 l.append(e)
1553 1553 l.sort()
1554 1554 return baseset([e[-1] for e in l])
1555 1555
1556 1556 def _stringmatcher(pattern):
1557 1557 """
1558 1558 accepts a string, possibly starting with 're:' or 'literal:' prefix.
1559 1559 returns the matcher name, pattern, and matcher function.
1560 1560 missing or unknown prefixes are treated as literal matches.
1561 1561
1562 1562 helper for tests:
1563 1563 >>> def test(pattern, *tests):
1564 1564 ... kind, pattern, matcher = _stringmatcher(pattern)
1565 1565 ... return (kind, pattern, [bool(matcher(t)) for t in tests])
1566 1566
1567 1567 exact matching (no prefix):
1568 1568 >>> test('abcdefg', 'abc', 'def', 'abcdefg')
1569 1569 ('literal', 'abcdefg', [False, False, True])
1570 1570
1571 1571 regex matching ('re:' prefix)
1572 1572 >>> test('re:a.+b', 'nomatch', 'fooadef', 'fooadefbar')
1573 1573 ('re', 'a.+b', [False, False, True])
1574 1574
1575 1575 force exact matches ('literal:' prefix)
1576 1576 >>> test('literal:re:foobar', 'foobar', 're:foobar')
1577 1577 ('literal', 're:foobar', [False, True])
1578 1578
1579 1579 unknown prefixes are ignored and treated as literals
1580 1580 >>> test('foo:bar', 'foo', 'bar', 'foo:bar')
1581 1581 ('literal', 'foo:bar', [False, False, True])
1582 1582 """
1583 1583 if pattern.startswith('re:'):
1584 1584 pattern = pattern[3:]
1585 1585 try:
1586 1586 regex = re.compile(pattern)
1587 1587 except re.error, e:
1588 1588 raise error.ParseError(_('invalid regular expression: %s')
1589 1589 % e)
1590 1590 return 're', pattern, regex.search
1591 1591 elif pattern.startswith('literal:'):
1592 1592 pattern = pattern[8:]
1593 1593 return 'literal', pattern, pattern.__eq__
1594 1594
1595 1595 def _substringmatcher(pattern):
1596 1596 kind, pattern, matcher = _stringmatcher(pattern)
1597 1597 if kind == 'literal':
1598 1598 matcher = lambda s: pattern in s
1599 1599 return kind, pattern, matcher
1600 1600
1601 1601 def tag(repo, subset, x):
1602 1602 """``tag([name])``
1603 1603 The specified tag by name, or all tagged revisions if no name is given.
1604 1604
1605 1605 If `name` starts with `re:`, the remainder of the name is treated as
1606 1606 a regular expression. To match a tag that actually starts with `re:`,
1607 1607 use the prefix `literal:`.
1608 1608 """
1609 1609 # i18n: "tag" is a keyword
1610 1610 args = getargs(x, 0, 1, _("tag takes one or no arguments"))
1611 1611 cl = repo.changelog
1612 1612 if args:
1613 1613 pattern = getstring(args[0],
1614 1614 # i18n: "tag" is a keyword
1615 1615 _('the argument to tag must be a string'))
1616 1616 kind, pattern, matcher = _stringmatcher(pattern)
1617 1617 if kind == 'literal':
1618 1618 # avoid resolving all tags
1619 1619 tn = repo._tagscache.tags.get(pattern, None)
1620 1620 if tn is None:
1621 1621 raise util.Abort(_("tag '%s' does not exist") % pattern)
1622 1622 s = set([repo[tn].rev()])
1623 1623 else:
1624 1624 s = set([cl.rev(n) for t, n in repo.tagslist() if matcher(t)])
1625 1625 else:
1626 1626 s = set([cl.rev(n) for t, n in repo.tagslist() if t != 'tip'])
1627 1627 return subset & s
1628 1628
1629 1629 def tagged(repo, subset, x):
1630 1630 return tag(repo, subset, x)
1631 1631
1632 1632 def unstable(repo, subset, x):
1633 1633 """``unstable()``
1634 1634 Non-obsolete changesets with obsolete ancestors.
1635 1635 """
1636 1636 # i18n: "unstable" is a keyword
1637 1637 getargs(x, 0, 0, _("unstable takes no arguments"))
1638 1638 unstables = obsmod.getrevs(repo, 'unstable')
1639 1639 return subset & unstables
1640 1640
1641 1641
1642 1642 def user(repo, subset, x):
1643 1643 """``user(string)``
1644 1644 User name contains string. The match is case-insensitive.
1645 1645
1646 1646 If `string` starts with `re:`, the remainder of the string is treated as
1647 1647 a regular expression. To match a user that actually contains `re:`, use
1648 1648 the prefix `literal:`.
1649 1649 """
1650 1650 return author(repo, subset, x)
1651 1651
1652 1652 # for internal use
1653 1653 def _list(repo, subset, x):
1654 1654 s = getstring(x, "internal error")
1655 1655 if not s:
1656 1656 return baseset([])
1657 1657 ls = [repo[r].rev() for r in s.split('\0')]
1658 1658 s = subset.set()
1659 1659 return baseset([r for r in ls if r in s])
1660 1660
1661 1661 # for internal use
1662 1662 def _intlist(repo, subset, x):
1663 1663 s = getstring(x, "internal error")
1664 1664 if not s:
1665 1665 return baseset([])
1666 1666 ls = [int(r) for r in s.split('\0')]
1667 1667 s = subset.set()
1668 1668 return baseset([r for r in ls if r in s])
1669 1669
1670 1670 # for internal use
1671 1671 def _hexlist(repo, subset, x):
1672 1672 s = getstring(x, "internal error")
1673 1673 if not s:
1674 1674 return baseset([])
1675 1675 cl = repo.changelog
1676 1676 ls = [cl.rev(node.bin(r)) for r in s.split('\0')]
1677 1677 s = subset.set()
1678 1678 return baseset([r for r in ls if r in s])
1679 1679
1680 1680 symbols = {
1681 1681 "adds": adds,
1682 1682 "all": getall,
1683 1683 "ancestor": ancestor,
1684 1684 "ancestors": ancestors,
1685 1685 "_firstancestors": _firstancestors,
1686 1686 "author": author,
1687 1687 "only": only,
1688 1688 "bisect": bisect,
1689 1689 "bisected": bisected,
1690 1690 "bookmark": bookmark,
1691 1691 "branch": branch,
1692 1692 "branchpoint": branchpoint,
1693 1693 "bumped": bumped,
1694 1694 "bundle": bundle,
1695 1695 "children": children,
1696 1696 "closed": closed,
1697 1697 "contains": contains,
1698 1698 "converted": converted,
1699 1699 "date": date,
1700 1700 "desc": desc,
1701 1701 "descendants": descendants,
1702 1702 "_firstdescendants": _firstdescendants,
1703 1703 "destination": destination,
1704 1704 "divergent": divergent,
1705 1705 "draft": draft,
1706 1706 "extinct": extinct,
1707 1707 "extra": extra,
1708 1708 "file": hasfile,
1709 1709 "filelog": filelog,
1710 1710 "first": first,
1711 1711 "follow": follow,
1712 1712 "_followfirst": _followfirst,
1713 1713 "grep": grep,
1714 1714 "head": head,
1715 1715 "heads": heads,
1716 1716 "hidden": hidden,
1717 1717 "id": node_,
1718 1718 "keyword": keyword,
1719 1719 "last": last,
1720 1720 "limit": limit,
1721 1721 "_matchfiles": _matchfiles,
1722 1722 "max": maxrev,
1723 1723 "merge": merge,
1724 1724 "min": minrev,
1725 1725 "_missingancestors": _missingancestors,
1726 1726 "modifies": modifies,
1727 1727 "obsolete": obsolete,
1728 1728 "origin": origin,
1729 1729 "outgoing": outgoing,
1730 1730 "p1": p1,
1731 1731 "p2": p2,
1732 1732 "parents": parents,
1733 1733 "present": present,
1734 1734 "public": public,
1735 1735 "remote": remote,
1736 1736 "removes": removes,
1737 1737 "rev": rev,
1738 1738 "reverse": reverse,
1739 1739 "roots": roots,
1740 1740 "sort": sort,
1741 1741 "secret": secret,
1742 1742 "matching": matching,
1743 1743 "tag": tag,
1744 1744 "tagged": tagged,
1745 1745 "user": user,
1746 1746 "unstable": unstable,
1747 1747 "_list": _list,
1748 1748 "_intlist": _intlist,
1749 1749 "_hexlist": _hexlist,
1750 1750 }
1751 1751
1752 1752 # symbols which can't be used for a DoS attack for any given input
1753 1753 # (e.g. those which accept regexes as plain strings shouldn't be included)
1754 1754 # functions that just return a lot of changesets (like all) don't count here
1755 1755 safesymbols = set([
1756 1756 "adds",
1757 1757 "all",
1758 1758 "ancestor",
1759 1759 "ancestors",
1760 1760 "_firstancestors",
1761 1761 "author",
1762 1762 "bisect",
1763 1763 "bisected",
1764 1764 "bookmark",
1765 1765 "branch",
1766 1766 "branchpoint",
1767 1767 "bumped",
1768 1768 "bundle",
1769 1769 "children",
1770 1770 "closed",
1771 1771 "converted",
1772 1772 "date",
1773 1773 "desc",
1774 1774 "descendants",
1775 1775 "_firstdescendants",
1776 1776 "destination",
1777 1777 "divergent",
1778 1778 "draft",
1779 1779 "extinct",
1780 1780 "extra",
1781 1781 "file",
1782 1782 "filelog",
1783 1783 "first",
1784 1784 "follow",
1785 1785 "_followfirst",
1786 1786 "head",
1787 1787 "heads",
1788 1788 "hidden",
1789 1789 "id",
1790 1790 "keyword",
1791 1791 "last",
1792 1792 "limit",
1793 1793 "_matchfiles",
1794 1794 "max",
1795 1795 "merge",
1796 1796 "min",
1797 1797 "_missingancestors",
1798 1798 "modifies",
1799 1799 "obsolete",
1800 1800 "origin",
1801 1801 "outgoing",
1802 1802 "p1",
1803 1803 "p2",
1804 1804 "parents",
1805 1805 "present",
1806 1806 "public",
1807 1807 "remote",
1808 1808 "removes",
1809 1809 "rev",
1810 1810 "reverse",
1811 1811 "roots",
1812 1812 "sort",
1813 1813 "secret",
1814 1814 "matching",
1815 1815 "tag",
1816 1816 "tagged",
1817 1817 "user",
1818 1818 "unstable",
1819 1819 "_list",
1820 1820 "_intlist",
1821 1821 "_hexlist",
1822 1822 ])
1823 1823
1824 1824 methods = {
1825 1825 "range": rangeset,
1826 1826 "dagrange": dagrange,
1827 1827 "string": stringset,
1828 1828 "symbol": symbolset,
1829 1829 "and": andset,
1830 1830 "or": orset,
1831 1831 "not": notset,
1832 1832 "list": listset,
1833 1833 "func": func,
1834 1834 "ancestor": ancestorspec,
1835 1835 "parent": parentspec,
1836 1836 "parentpost": p1,
1837 1837 }
1838 1838
1839 1839 def optimize(x, small):
1840 1840 if x is None:
1841 1841 return 0, x
1842 1842
1843 1843 smallbonus = 1
1844 1844 if small:
1845 1845 smallbonus = .5
1846 1846
1847 1847 op = x[0]
1848 1848 if op == 'minus':
1849 1849 return optimize(('and', x[1], ('not', x[2])), small)
1850 1850 elif op == 'dagrangepre':
1851 1851 return optimize(('func', ('symbol', 'ancestors'), x[1]), small)
1852 1852 elif op == 'dagrangepost':
1853 1853 return optimize(('func', ('symbol', 'descendants'), x[1]), small)
1854 1854 elif op == 'rangepre':
1855 1855 return optimize(('range', ('string', '0'), x[1]), small)
1856 1856 elif op == 'rangepost':
1857 1857 return optimize(('range', x[1], ('string', 'tip')), small)
1858 1858 elif op == 'negate':
1859 1859 return optimize(('string',
1860 1860 '-' + getstring(x[1], _("can't negate that"))), small)
1861 1861 elif op in 'string symbol negate':
1862 1862 return smallbonus, x # single revisions are small
1863 1863 elif op == 'and':
1864 1864 wa, ta = optimize(x[1], True)
1865 1865 wb, tb = optimize(x[2], True)
1866 1866
1867 1867 # (::x and not ::y)/(not ::y and ::x) have a fast path
1868 1868 def ismissingancestors(revs, bases):
1869 1869 return (
1870 1870 revs[0] == 'func'
1871 1871 and getstring(revs[1], _('not a symbol')) == 'ancestors'
1872 1872 and bases[0] == 'not'
1873 1873 and bases[1][0] == 'func'
1874 1874 and getstring(bases[1][1], _('not a symbol')) == 'ancestors')
1875 1875
1876 1876 w = min(wa, wb)
1877 1877 if ismissingancestors(ta, tb):
1878 1878 return w, ('func', ('symbol', '_missingancestors'),
1879 1879 ('list', ta[2], tb[1][2]))
1880 1880 if ismissingancestors(tb, ta):
1881 1881 return w, ('func', ('symbol', '_missingancestors'),
1882 1882 ('list', tb[2], ta[1][2]))
1883 1883
1884 1884 if wa > wb:
1885 1885 return w, (op, tb, ta)
1886 1886 return w, (op, ta, tb)
1887 1887 elif op == 'or':
1888 1888 wa, ta = optimize(x[1], False)
1889 1889 wb, tb = optimize(x[2], False)
1890 1890 if wb < wa:
1891 1891 wb, wa = wa, wb
1892 1892 return max(wa, wb), (op, ta, tb)
1893 1893 elif op == 'not':
1894 1894 o = optimize(x[1], not small)
1895 1895 return o[0], (op, o[1])
1896 1896 elif op == 'parentpost':
1897 1897 o = optimize(x[1], small)
1898 1898 return o[0], (op, o[1])
1899 1899 elif op == 'group':
1900 1900 return optimize(x[1], small)
1901 1901 elif op in 'dagrange range list parent ancestorspec':
1902 1902 if op == 'parent':
1903 1903 # x^:y means (x^) : y, not x ^ (:y)
1904 1904 post = ('parentpost', x[1])
1905 1905 if x[2][0] == 'dagrangepre':
1906 1906 return optimize(('dagrange', post, x[2][1]), small)
1907 1907 elif x[2][0] == 'rangepre':
1908 1908 return optimize(('range', post, x[2][1]), small)
1909 1909
1910 1910 wa, ta = optimize(x[1], small)
1911 1911 wb, tb = optimize(x[2], small)
1912 1912 return wa + wb, (op, ta, tb)
1913 1913 elif op == 'func':
1914 1914 f = getstring(x[1], _("not a symbol"))
1915 1915 wa, ta = optimize(x[2], small)
1916 1916 if f in ("author branch closed date desc file grep keyword "
1917 1917 "outgoing user"):
1918 1918 w = 10 # slow
1919 1919 elif f in "modifies adds removes":
1920 1920 w = 30 # slower
1921 1921 elif f == "contains":
1922 1922 w = 100 # very slow
1923 1923 elif f == "ancestor":
1924 1924 w = 1 * smallbonus
1925 1925 elif f in "reverse limit first":
1926 1926 w = 0
1927 1927 elif f in "sort":
1928 1928 w = 10 # assume most sorts look at changelog
1929 1929 else:
1930 1930 w = 1
1931 1931 return w + wa, (op, x[1], ta)
1932 1932 return 1, x
1933 1933
1934 1934 _aliasarg = ('func', ('symbol', '_aliasarg'))
1935 1935 def _getaliasarg(tree):
1936 1936 """If tree matches ('func', ('symbol', '_aliasarg'), ('string', X))
1937 1937 return X, None otherwise.
1938 1938 """
1939 1939 if (len(tree) == 3 and tree[:2] == _aliasarg
1940 1940 and tree[2][0] == 'string'):
1941 1941 return tree[2][1]
1942 1942 return None
1943 1943
1944 1944 def _checkaliasarg(tree, known=None):
1945 1945 """Check that tree contains no _aliasarg construct or only ones whose
1946 1946 value is in known. Used to avoid alias placeholder injection.
1947 1947 """
1948 1948 if isinstance(tree, tuple):
1949 1949 arg = _getaliasarg(tree)
1950 1950 if arg is not None and (not known or arg not in known):
1951 1951 raise error.ParseError(_("not a function: %s") % '_aliasarg')
1952 1952 for t in tree:
1953 1953 _checkaliasarg(t, known)
1954 1954
1955 1955 class revsetalias(object):
1956 1956 funcre = re.compile('^([^(]+)\(([^)]+)\)$')
1957 1957 args = None
1958 1958
1959 1959 def __init__(self, name, value):
1960 1960 '''Aliases like:
1961 1961
1962 1962 h = heads(default)
1963 1963 b($1) = ancestors($1) - ancestors(default)
1964 1964 '''
1965 1965 m = self.funcre.search(name)
1966 1966 if m:
1967 1967 self.name = m.group(1)
1968 1968 self.tree = ('func', ('symbol', m.group(1)))
1969 1969 self.args = [x.strip() for x in m.group(2).split(',')]
1970 1970 for arg in self.args:
1971 1971 # _aliasarg() is an unknown symbol used only to separate
1972 1972 # alias argument placeholders from regular strings.
1973 1973 value = value.replace(arg, '_aliasarg(%r)' % (arg,))
1974 1974 else:
1975 1975 self.name = name
1976 1976 self.tree = ('symbol', name)
1977 1977
1978 1978 self.replacement, pos = parse(value)
1979 1979 if pos != len(value):
1980 1980 raise error.ParseError(_('invalid token'), pos)
1981 1981 # Check for placeholder injection
1982 1982 _checkaliasarg(self.replacement, self.args)
1983 1983
1984 1984 def _getalias(aliases, tree):
1985 1985 """If tree looks like an unexpanded alias, return it. Return None
1986 1986 otherwise.
1987 1987 """
1988 1988 if isinstance(tree, tuple) and tree:
1989 1989 if tree[0] == 'symbol' and len(tree) == 2:
1990 1990 name = tree[1]
1991 1991 alias = aliases.get(name)
1992 1992 if alias and alias.args is None and alias.tree == tree:
1993 1993 return alias
1994 1994 if tree[0] == 'func' and len(tree) > 1:
1995 1995 if tree[1][0] == 'symbol' and len(tree[1]) == 2:
1996 1996 name = tree[1][1]
1997 1997 alias = aliases.get(name)
1998 1998 if alias and alias.args is not None and alias.tree == tree[:2]:
1999 1999 return alias
2000 2000 return None
2001 2001
2002 2002 def _expandargs(tree, args):
2003 2003 """Replace _aliasarg instances with the substitution value of the
2004 2004 same name in args, recursively.
2005 2005 """
2006 2006 if not tree or not isinstance(tree, tuple):
2007 2007 return tree
2008 2008 arg = _getaliasarg(tree)
2009 2009 if arg is not None:
2010 2010 return args[arg]
2011 2011 return tuple(_expandargs(t, args) for t in tree)
2012 2012
2013 2013 def _expandaliases(aliases, tree, expanding, cache):
2014 2014 """Expand aliases in tree, recursively.
2015 2015
2016 2016 'aliases' is a dictionary mapping user defined aliases to
2017 2017 revsetalias objects.
2018 2018 """
2019 2019 if not isinstance(tree, tuple):
2020 2020 # Do not expand raw strings
2021 2021 return tree
2022 2022 alias = _getalias(aliases, tree)
2023 2023 if alias is not None:
2024 2024 if alias in expanding:
2025 2025 raise error.ParseError(_('infinite expansion of revset alias "%s" '
2026 2026 'detected') % alias.name)
2027 2027 expanding.append(alias)
2028 2028 if alias.name not in cache:
2029 2029 cache[alias.name] = _expandaliases(aliases, alias.replacement,
2030 2030 expanding, cache)
2031 2031 result = cache[alias.name]
2032 2032 expanding.pop()
2033 2033 if alias.args is not None:
2034 2034 l = getlist(tree[2])
2035 2035 if len(l) != len(alias.args):
2036 2036 raise error.ParseError(
2037 2037 _('invalid number of arguments: %s') % len(l))
2038 2038 l = [_expandaliases(aliases, a, [], cache) for a in l]
2039 2039 result = _expandargs(result, dict(zip(alias.args, l)))
2040 2040 else:
2041 2041 result = tuple(_expandaliases(aliases, t, expanding, cache)
2042 2042 for t in tree)
2043 2043 return result
2044 2044
2045 2045 def findaliases(ui, tree):
2046 2046 _checkaliasarg(tree)
2047 2047 aliases = {}
2048 2048 for k, v in ui.configitems('revsetalias'):
2049 2049 alias = revsetalias(k, v)
2050 2050 aliases[alias.name] = alias
2051 2051 return _expandaliases(aliases, tree, [], {})
2052 2052
2053 2053 def parse(spec, lookup=None):
2054 2054 p = parser.parser(tokenize, elements)
2055 2055 return p.parse(spec, lookup=lookup)
2056 2056
2057 2057 def match(ui, spec, repo=None):
2058 2058 if not spec:
2059 2059 raise error.ParseError(_("empty query"))
2060 2060 lookup = None
2061 2061 if repo:
2062 2062 lookup = repo.__contains__
2063 2063 tree, pos = parse(spec, lookup)
2064 2064 if (pos != len(spec)):
2065 2065 raise error.ParseError(_("invalid token"), pos)
2066 2066 if ui:
2067 2067 tree = findaliases(ui, tree)
2068 2068 weight, tree = optimize(tree, True)
2069 2069 def mfunc(repo, subset):
2070 2070 if util.safehasattr(subset, 'set'):
2071 2071 return getset(repo, subset, tree)
2072 2072 return getset(repo, baseset(subset), tree)
2073 2073 return mfunc
2074 2074
2075 2075 def formatspec(expr, *args):
2076 2076 '''
2077 2077 This is a convenience function for using revsets internally, and
2078 2078 escapes arguments appropriately. Aliases are intentionally ignored
2079 2079 so that intended expression behavior isn't accidentally subverted.
2080 2080
2081 2081 Supported arguments:
2082 2082
2083 2083 %r = revset expression, parenthesized
2084 2084 %d = int(arg), no quoting
2085 2085 %s = string(arg), escaped and single-quoted
2086 2086 %b = arg.branch(), escaped and single-quoted
2087 2087 %n = hex(arg), single-quoted
2088 2088 %% = a literal '%'
2089 2089
2090 2090 Prefixing the type with 'l' specifies a parenthesized list of that type.
2091 2091
2092 2092 >>> formatspec('%r:: and %lr', '10 or 11', ("this()", "that()"))
2093 2093 '(10 or 11):: and ((this()) or (that()))'
2094 2094 >>> formatspec('%d:: and not %d::', 10, 20)
2095 2095 '10:: and not 20::'
2096 2096 >>> formatspec('%ld or %ld', [], [1])
2097 2097 "_list('') or 1"
2098 2098 >>> formatspec('keyword(%s)', 'foo\\xe9')
2099 2099 "keyword('foo\\\\xe9')"
2100 2100 >>> b = lambda: 'default'
2101 2101 >>> b.branch = b
2102 2102 >>> formatspec('branch(%b)', b)
2103 2103 "branch('default')"
2104 2104 >>> formatspec('root(%ls)', ['a', 'b', 'c', 'd'])
2105 2105 "root(_list('a\\x00b\\x00c\\x00d'))"
2106 2106 '''
2107 2107
2108 2108 def quote(s):
2109 2109 return repr(str(s))
2110 2110
2111 2111 def argtype(c, arg):
2112 2112 if c == 'd':
2113 2113 return str(int(arg))
2114 2114 elif c == 's':
2115 2115 return quote(arg)
2116 2116 elif c == 'r':
2117 2117 parse(arg) # make sure syntax errors are confined
2118 2118 return '(%s)' % arg
2119 2119 elif c == 'n':
2120 2120 return quote(node.hex(arg))
2121 2121 elif c == 'b':
2122 2122 return quote(arg.branch())
2123 2123
2124 2124 def listexp(s, t):
2125 2125 l = len(s)
2126 2126 if l == 0:
2127 2127 return "_list('')"
2128 2128 elif l == 1:
2129 2129 return argtype(t, s[0])
2130 2130 elif t == 'd':
2131 2131 return "_intlist('%s')" % "\0".join(str(int(a)) for a in s)
2132 2132 elif t == 's':
2133 2133 return "_list('%s')" % "\0".join(s)
2134 2134 elif t == 'n':
2135 2135 return "_hexlist('%s')" % "\0".join(node.hex(a) for a in s)
2136 2136 elif t == 'b':
2137 2137 return "_list('%s')" % "\0".join(a.branch() for a in s)
2138 2138
2139 2139 m = l // 2
2140 2140 return '(%s or %s)' % (listexp(s[:m], t), listexp(s[m:], t))
2141 2141
2142 2142 ret = ''
2143 2143 pos = 0
2144 2144 arg = 0
2145 2145 while pos < len(expr):
2146 2146 c = expr[pos]
2147 2147 if c == '%':
2148 2148 pos += 1
2149 2149 d = expr[pos]
2150 2150 if d == '%':
2151 2151 ret += d
2152 2152 elif d in 'dsnbr':
2153 2153 ret += argtype(d, args[arg])
2154 2154 arg += 1
2155 2155 elif d == 'l':
2156 2156 # a list of some type
2157 2157 pos += 1
2158 2158 d = expr[pos]
2159 2159 ret += listexp(list(args[arg]), d)
2160 2160 arg += 1
2161 2161 else:
2162 2162 raise util.Abort('unexpected revspec format character %s' % d)
2163 2163 else:
2164 2164 ret += c
2165 2165 pos += 1
2166 2166
2167 2167 return ret
2168 2168
2169 2169 def prettyformat(tree):
2170 2170 def _prettyformat(tree, level, lines):
2171 2171 if not isinstance(tree, tuple) or tree[0] in ('string', 'symbol'):
2172 2172 lines.append((level, str(tree)))
2173 2173 else:
2174 2174 lines.append((level, '(%s' % tree[0]))
2175 2175 for s in tree[1:]:
2176 2176 _prettyformat(s, level + 1, lines)
2177 2177 lines[-1:] = [(lines[-1][0], lines[-1][1] + ')')]
2178 2178
2179 2179 lines = []
2180 2180 _prettyformat(tree, 0, lines)
2181 2181 output = '\n'.join((' '*l + s) for l, s in lines)
2182 2182 return output
2183 2183
2184 2184 def depth(tree):
2185 2185 if isinstance(tree, tuple):
2186 2186 return max(map(depth, tree)) + 1
2187 2187 else:
2188 2188 return 0
2189 2189
2190 2190 def funcsused(tree):
2191 2191 if not isinstance(tree, tuple) or tree[0] in ('string', 'symbol'):
2192 2192 return set()
2193 2193 else:
2194 2194 funcs = set()
2195 2195 for s in tree[1:]:
2196 2196 funcs |= funcsused(s)
2197 2197 if tree[0] == 'func':
2198 2198 funcs.add(tree[1][1])
2199 2199 return funcs
2200 2200
2201 2201 class baseset(list):
2202 2202 """Basic data structure that represents a revset and contains the basic
2203 2203 operations that it should be able to perform.
2204 2204
2205 2205 Every method in this class should be implemented by any smartset class.
2206 2206 """
2207 2207 def __init__(self, data=()):
2208 2208 super(baseset, self).__init__(data)
2209 2209 self._set = None
2210 2210
2211 2211 def ascending(self):
2212 2212 """Sorts the set in ascending order (in place).
2213 2213
2214 2214 This is part of the mandatory API for smartset."""
2215 2215 self.sort()
2216 2216
2217 2217 def descending(self):
2218 2218 """Sorts the set in descending order (in place).
2219 2219
2220 2220 This is part of the mandatory API for smartset."""
2221 2221 self.sort(reverse=True)
2222 2222
2223 2223 def min(self):
2224 2224 return min(self)
2225 2225
2226 2226 def max(self):
2227 2227 return max(self)
2228 2228
2229 2229 def set(self):
2230 2230 """Returns a set or a smartset containing all the elements.
2231 2231
2232 2232 The returned structure should be the fastest option for membership
2233 2233 testing.
2234 2234
2235 2235 This is part of the mandatory API for smartset."""
2236 2236 if not self._set:
2237 2237 self._set = set(self)
2238 2238 return self._set
2239 2239
2240 2240 def __sub__(self, other):
2241 2241 """Returns a new object with the subtraction of the two collections.
2242 2242
2243 2243 This is part of the mandatory API for smartset."""
2244 2244 if isinstance(other, baseset):
2245 2245 s = other.set()
2246 2246 else:
2247 2247 s = set(other)
2248 2248 return baseset(self.set() - s)
2249 2249
2250 2250 def __and__(self, other):
2251 2251 """Returns a new object with the intersection of the two collections.
2252 2252
2253 2253 This is part of the mandatory API for smartset."""
2254 2254 if isinstance(other, baseset):
2255 2255 other = other.set()
2256 2256 return baseset([y for y in self if y in other])
2257 2257
2258 2258 def __add__(self, other):
2259 2259 """Returns a new object with the union of the two collections.
2260 2260
2261 2261 This is part of the mandatory API for smartset."""
2262 2262 s = self.set()
2263 2263 l = [r for r in other if r not in s]
2264 2264 return baseset(list(self) + l)
2265 2265
2266 2266 def isascending(self):
2267 2267 """Returns True if the collection is in ascending order, False if not.
2268 2268
2269 2269 This is part of the mandatory API for smartset."""
2270 2270 return False
2271 2271
2272 2272 def isdescending(self):
2273 2273 """Returns True if the collection is in descending order, False if not.
2274 2274
2275 2275 This is part of the mandatory API for smartset."""
2276 2276 return False
2277 2277
2278 2278 def filter(self, condition):
2279 2279 """Returns this smartset filtered by condition as a new smartset.
2280 2280
2281 2281 `condition` is a callable which takes a revision number and returns a
2282 2282 boolean.
2283 2283
2284 2284 This is part of the mandatory API for smartset."""
2285 2285 return lazyset(self, condition)
2286 2286
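The set-arithmetic docstrings above boil down to ordinary list/set operations over an ordered revision list. A hypothetical plain-Python sketch of the semantics (not Mercurial's code; `a` plays the role of `self`, `b` the other collection):

```python
a = [4, 1, 3]          # a baseset preserves insertion order
b = [3, 5]

# __add__: union, keeping a's order and appending unseen elements of b
sa = set(a)
union = list(a) + [r for r in b if r not in sa]
assert union == [4, 1, 3, 5]

# __and__: intersection, keeping a's order
sb = set(b)
assert [y for y in a if y in sb] == [3]

# __sub__: difference computed on plain sets, as baseset.__sub__ does
assert set(a) - sb == {4, 1}
```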
2287 2287 class _orderedsetmixin(object):
2288 2288 """Mixin class with utility methods for smartsets
2289 2289
2290 2290 This should be extended by smartsets which have the isascending(),
2291 2291 isdescending() and reverse() methods"""
2292 2292
2293 2293 def _first(self):
2294 2294 """return the first revision in the set"""
2295 2295 for r in self:
2296 2296 return r
2297 2297 raise ValueError('arg is an empty sequence')
2298 2298
2299 2299 def _last(self):
2300 2300 """return the last revision in the set"""
2301 2301 self.reverse()
2302 2302 m = self._first()
2303 2303 self.reverse()
2304 2304 return m
2305 2305
2306 2306 def min(self):
2307 2307 """return the smallest element in the set"""
2308 2308 if self.isascending():
2309 2309 return self._first()
2310 2310 return self._last()
2311 2311
2312 2312 def max(self):
2313 2313 """return the largest element in the set"""
2314 2314 if self.isascending():
2315 2315 return self._last()
2316 2316 return self._first()
2317 2317
2318 2318 class lazyset(object):
2319 2319 """Duck type for baseset class which iterates lazily over the revisions in
2320 2320 the subset and contains a function which tests for membership in the
2321 2321 revset
2322 2322 """
2323 2323 def __init__(self, subset, condition=lambda x: True):
2324 2324 """
2325 2325 condition: a function that decides whether a revision in the subset
2326 2326 belongs to the revset or not.
2327 2327 """
2328 2328 self._subset = subset
2329 2329 self._condition = condition
2330 2330 self._cache = {}
2331 2331
2332 2332 def ascending(self):
2333 2333 self._subset.sort()
2334 2334
2335 2335 def descending(self):
2336 2336 self._subset.sort(reverse=True)
2337 2337
2338 2338 def min(self):
2339 2339 return min(self)
2340 2340
2341 2341 def max(self):
2342 2342 return max(self)
2343 2343
2344 2344 def __contains__(self, x):
2345 2345 c = self._cache
2346 2346 if x not in c:
2347 2347 c[x] = x in self._subset and self._condition(x)
2348 2348 return c[x]
2349 2349
2350 2350 def __iter__(self):
2351 2351 cond = self._condition
2352 2352 for x in self._subset:
2353 2353 if cond(x):
2354 2354 yield x
2355 2355
2356 2356 def __and__(self, x):
2357 2357 return lazyset(self, lambda r: r in x)
2358 2358
2359 2359 def __sub__(self, x):
2360 2360 return lazyset(self, lambda r: r not in x)
2361 2361
2362 2362 def __add__(self, x):
2363 2363 return _addset(self, x)
2364 2364
2365 2365 def __nonzero__(self):
2366 2366 for r in self:
2367 2367 return True
2368 2368 return False
2369 2369
2370 2370 def __len__(self):
2371 2371 # Basic implementation to be changed in future patches.
2372 2372 l = baseset([r for r in self])
2373 2373 return len(l)
2374 2374
2375 2375 def __getitem__(self, x):
2376 2376 # Basic implementation to be changed in future patches.
2377 2377 l = baseset([r for r in self])
2378 2378 return l[x]
2379 2379
2380 2380 def sort(self, reverse=False):
2381 2381 if not util.safehasattr(self._subset, 'sort'):
2382 2382 self._subset = baseset(self._subset)
2383 2383 self._subset.sort(reverse=reverse)
2384 2384
2385 2385 def reverse(self):
2386 2386 self._subset.reverse()
2387 2387
2388 2388 def set(self):
2389 2389 return set([r for r in self])
2390 2390
2391 2391 def isascending(self):
2392 2392 return False
2393 2393
2394 2394 def isdescending(self):
2395 2395 return False
2396 2396
2397 2397 def filter(self, l):
2398 2398 return lazyset(self, l)
2399 2399
2400 2400 class orderedlazyset(_orderedsetmixin, lazyset):
2401 2401 """Subclass of lazyset whose subset can be ordered either ascendingly or
2402 2402 descendingly
2403 2403 """
2404 2404 def __init__(self, subset, condition, ascending=True):
2405 2405 super(orderedlazyset, self).__init__(subset, condition)
2406 2406 self._ascending = ascending
2407 2407
2408 2408 def filter(self, l):
2409 2409 return orderedlazyset(self, l, ascending=self._ascending)
2410 2410
2411 2411 def ascending(self):
2412 2412 if not self._ascending:
2413 2413 self.reverse()
2414 2414
2415 2415 def descending(self):
2416 2416 if self._ascending:
2417 2417 self.reverse()
2418 2418
2419 2419 def __and__(self, x):
2420 2420 return orderedlazyset(self, lambda r: r in x,
2421 2421 ascending=self._ascending)
2422 2422
2423 2423 def __sub__(self, x):
2424 2424 return orderedlazyset(self, lambda r: r not in x,
2425 2425 ascending=self._ascending)
2426 2426
2427 2427 def __add__(self, x):
2428 2428 kwargs = {}
2429 2429 if self.isascending() and x.isascending():
2430 2430 kwargs['ascending'] = True
2431 2431 if self.isdescending() and x.isdescending():
2432 2432 kwargs['ascending'] = False
2433 2433 return _addset(self, x, **kwargs)
2434 2434
2435 2435 def sort(self, reverse=False):
2436 2436 if reverse:
2437 2437 if self._ascending:
2438 2438 self._subset.sort(reverse=reverse)
2439 2439 else:
2440 2440 if not self._ascending:
2441 2441 self._subset.sort(reverse=reverse)
2442 2442 self._ascending = not reverse
2443 2443
2444 2444 def isascending(self):
2445 2445 return self._ascending
2446 2446
2447 2447 def isdescending(self):
2448 2448 return not self._ascending
2449 2449
2450 2450 def reverse(self):
2451 2451 self._subset.reverse()
2452 2452 self._ascending = not self._ascending
2453 2453
2454 2454 class _addset(_orderedsetmixin):
2455 2455 """Represent the addition of two sets
2456 2456
2457 2457 Wrapper structure for lazily adding two structures without losing much
2458 2458 performance on the __contains__ method
2459 2459
2460 2460 If the ascending attribute is set, that means the two structures are
2461 2461 ordered in either an ascending or descending way. Therefore, we can add
2462 them mantaining the order by iterating over both at the same time
2462 them maintaining the order by iterating over both at the same time
2463 2463
2464 2464 This class does not duck-type baseset and it's only supposed to be used
2465 2465 internally
2466 2466 """
2467 2467 def __init__(self, revs1, revs2, ascending=None):
2468 2468 self._r1 = revs1
2469 2469 self._r2 = revs2
2470 2470 self._iter = None
2471 2471 self._ascending = ascending
2472 2472 self._genlist = None
2473 2473
2474 2474 def __len__(self):
2475 2475 return len(self._list)
2476 2476
2477 2477 @util.propertycache
2478 2478 def _list(self):
2479 2479 if not self._genlist:
2480 2480 self._genlist = baseset(self._iterator())
2481 2481 return self._genlist
2482 2482
2483 2483 def filter(self, condition):
2484 2484 if self._ascending is not None:
2485 2485 return orderedlazyset(self, condition, ascending=self._ascending)
2486 2486 return lazyset(self, condition)
2487 2487
2488 2488 def ascending(self):
2489 2489 if self._ascending is None:
2490 2490 self.sort()
2491 2491 self._ascending = True
2492 2492 else:
2493 2493 if not self._ascending:
2494 2494 self.reverse()
2495 2495
2496 2496 def descending(self):
2497 2497 if self._ascending is None:
2498 2498 self.sort(reverse=True)
2499 2499 self._ascending = False
2500 2500 else:
2501 2501 if self._ascending:
2502 2502 self.reverse()
2503 2503
2504 2504 def __and__(self, other):
2505 2505 filterfunc = other.__contains__
2506 2506 if self._ascending is not None:
2507 2507 return orderedlazyset(self, filterfunc, ascending=self._ascending)
2508 2508 return lazyset(self, filterfunc)
2509 2509
2510 2510 def __sub__(self, other):
2511 2511 filterfunc = lambda r: r not in other
2512 2512 if self._ascending is not None:
2513 2513 return orderedlazyset(self, filterfunc, ascending=self._ascending)
2514 2514 return lazyset(self, filterfunc)
2515 2515
2516 2516 def __add__(self, other):
2517 2517 """When both collections are ascending or descending, preserve the order
2518 2518 """
2519 2519 kwargs = {}
2520 2520 if self._ascending is not None:
2521 2521 if self.isascending() and other.isascending():
2522 2522 kwargs['ascending'] = True
2523 2523 if self.isdescending() and other.isdescending():
2524 2524 kwargs['ascending'] = False
2525 2525 return _addset(self, other, **kwargs)
2526 2526
2527 2527 def _iterator(self):
2528 2528 """Iterate over both collections without repeating elements
2529 2529
2530 2530 If the ascending attribute is not set, iterate over the first one and
2531 2531 then over the second one checking for membership on the first one so we
2532 2532 don't yield any duplicates.
2533 2533
2534 2534 If the ascending attribute is set, iterate over both collections at the
2535 2535 same time, yielding only one value at a time in the given order.
2536 2536 """
2537 2537 if not self._iter:
2538 2538 def gen():
2539 2539 if self._ascending is None:
2540 2540 for r in self._r1:
2541 2541 yield r
2542 2542 s = self._r1.set()
2543 2543 for r in self._r2:
2544 2544 if r not in s:
2545 2545 yield r
2546 2546 else:
2547 2547 iter1 = iter(self._r1)
2548 2548 iter2 = iter(self._r2)
2549 2549
2550 2550 val1 = None
2551 2551 val2 = None
2552 2552
2553 2553 choice = max
2554 2554 if self._ascending:
2555 2555 choice = min
2556 2556 try:
2557 2557 # Consume both iterators in an ordered way until one is
2558 2558 # empty
2559 2559 while True:
2560 2560 if val1 is None:
2561 2561 val1 = iter1.next()
2562 2562 if val2 is None:
2563 2563 val2 = iter2.next()
2564 2564 next = choice(val1, val2)
2565 2565 yield next
2566 2566 if val1 == next:
2567 2567 val1 = None
2568 2568 if val2 == next:
2569 2569 val2 = None
2570 2570 except StopIteration:
2571 2571 # Flush any remaining values and consume the other one
2572 2572 it = iter2
2573 2573 if val1 is not None:
2574 2574 yield val1
2575 2575 it = iter1
2576 2576 elif val2 is not None:
2577 2577 # might have been equality and both are empty
2578 2578 yield val2
2579 2579 for val in it:
2580 2580 yield val
2581 2581
2582 2582 self._iter = _generatorset(gen())
2583 2583
2584 2584 return self._iter
2585 2585
2586 2586 def __iter__(self):
2587 2587 if self._genlist:
2588 2588 return iter(self._genlist)
2589 2589 return iter(self._iterator())
2590 2590
2591 2591 def __contains__(self, x):
2592 2592 return x in self._r1 or x in self._r2
2593 2593
2594 2594 def set(self):
2595 2595 return self
2596 2596
2597 2597 def sort(self, reverse=False):
2598 2598 """Sort the added set
2599 2599
2600 2600 For this we use the cached list with all the generated values and if we
2601 2601 know they are ascending or descending we can sort them in a smart way.
2602 2602 """
2603 2603 if self._ascending is None:
2604 2604 self._list.sort(reverse=reverse)
2605 2605 self._ascending = not reverse
2606 2606 else:
2607 2607 if bool(self._ascending) == bool(reverse):
2608 2608 self.reverse()
2609 2609
2610 2610 def isascending(self):
2611 2611 return self._ascending is not None and self._ascending
2612 2612
2613 2613 def isdescending(self):
2614 2614 return self._ascending is not None and not self._ascending
2615 2615
2616 2616 def reverse(self):
2617 2617 self._list.reverse()
2618 2618 if self._ascending is not None:
2619 2619 self._ascending = not self._ascending
2620 2620
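The ordered branch of `_addset._iterator` above is a two-pointer merge of two sorted iterators that emits each value once even when both sides contain it. A standalone sketch of that loop for the ascending case (names are mine, not Mercurial's):

```python
def merge_ascending(a, b):
    # Two-pointer merge of two ascending, duplicate-free iterables,
    # yielding each value once even when it appears in both.
    it1, it2 = iter(a), iter(b)
    val1 = val2 = None
    try:
        while True:
            if val1 is None:
                val1 = next(it1)
            if val2 is None:
                val2 = next(it2)
            nxt = min(val1, val2)
            yield nxt
            if val1 == nxt:
                val1 = None
            if val2 == nxt:
                val2 = None
    except StopIteration:
        # One side ran dry: flush the held value, then drain the other side.
        rest = it2
        if val1 is not None:
            yield val1
            rest = it1
        elif val2 is not None:
            yield val2
        for v in rest:
            yield v

assert list(merge_ascending([1, 3, 5], [2, 3, 6])) == [1, 2, 3, 5, 6]
```

Note the shared `3` is yielded once, by clearing both held values when they equal the emitted one.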
2621 2621 class _generatorset(object):
2622 2622 """Wrap a generator for lazy iteration
2623 2623
2624 2624 Wrapper structure for generators that provides lazy membership and can
2625 2625 be iterated more than once.
2626 2626 When asked for membership it generates values until either it finds the
2627 2627 requested one or has gone through all the elements in the generator
2628 2628
2629 2629 This class does not duck-type baseset and it's only supposed to be used
2630 2630 internally
2631 2631 """
2632 2632 def __init__(self, gen):
2633 2633 """
2634 2634 gen: a generator producing the values for the generatorset.
2635 2635 """
2636 2636 self._gen = gen
2637 2637 self._cache = {}
2638 2638 self._genlist = baseset([])
2639 2639 self._finished = False
2640 2640
2641 2641 def __contains__(self, x):
2642 2642 if x in self._cache:
2643 2643 return self._cache[x]
2644 2644
2645 2645 # Use new values only, as existing values would be cached.
2646 2646 for l in self._consumegen():
2647 2647 if l == x:
2648 2648 return True
2649 2649
2650 2650 self._cache[x] = False
2651 2651 return False
2652 2652
2653 2653 def __iter__(self):
2654 2654 if self._finished:
2655 2655 for x in self._genlist:
2656 2656 yield x
2657 2657 return
2658 2658
2659 2659 i = 0
2660 2660 genlist = self._genlist
2661 2661 consume = self._consumegen()
2662 2662 while True:
2663 2663 if i < len(genlist):
2664 2664 yield genlist[i]
2665 2665 else:
2666 2666 yield consume.next()
2667 2667 i += 1
2668 2668
2669 2669 def _consumegen(self):
2670 2670 for item in self._gen:
2671 2671 self._cache[item] = True
2672 2672 self._genlist.append(item)
2673 2673 yield item
2674 2674 self._finished = True
2675 2675
2676 2676 def set(self):
2677 2677 return self
2678 2678
2679 2679 def sort(self, reverse=False):
2680 2680 if not self._finished:
2681 2681 for i in self:
2682 2682 continue
2683 2683 self._genlist.sort(reverse=reverse)
2684 2684
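`_generatorset`'s membership test consumes the wrapped generator only as far as needed and memoizes every answer. A toy reduction of that idea (a sketch, not the real class, which also supports re-iteration via `_genlist`):

```python
class genset(object):
    # Toy version of _generatorset: lazy membership with a result cache.
    def __init__(self, gen):
        self._gen = gen
        self._cache = {}
        self._genlist = []
        self._finished = False

    def _consumegen(self):
        # Pull new values from the generator, caching each as a known member.
        for item in self._gen:
            self._cache[item] = True
            self._genlist.append(item)
            yield item
        self._finished = True

    def __contains__(self, x):
        if x in self._cache:
            return self._cache[x]
        # Generate values until we find x or exhaust the generator.
        for l in self._consumegen():
            if l == x:
                return True
        self._cache[x] = False
        return False

g = genset(iter([2, 4, 6]))
assert 4 in g        # consumes 2 and 4, caches both
assert 2 in g        # answered from the cache, no further generation
assert 5 not in g    # exhausts the generator, caches the miss
```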
2685 2685 class _ascgeneratorset(_generatorset):
2686 2686 """Wrap a generator of ascending elements for lazy iteration
2687 2687
2688 2688 Same structure as _generatorset but stops iterating after it goes past
2689 2689 the value when asked for membership and the element is not contained
2690 2690
2691 2691 This class does not duck-type baseset and it's only supposed to be used
2692 2692 internally
2693 2693 """
2694 2694 def __contains__(self, x):
2695 2695 if x in self._cache:
2696 2696 return self._cache[x]
2697 2697
2698 2698 # Use new values only, as existing values would be cached.
2699 2699 for l in self._consumegen():
2700 2700 if l == x:
2701 2701 return True
2702 2702 if l > x:
2703 2703 break
2704 2704
2705 2705 self._cache[x] = False
2706 2706 return False
2707 2707
2708 2708 class _descgeneratorset(_generatorset):
2709 2709 """Wrap a generator of descending elements for lazy iteration
2710 2710
2711 2711 Same structure as _generatorset but stops iterating after it goes past
2712 2712 the value when asked for membership and the element is not contained
2713 2713
2714 2714 This class does not duck-type baseset and it's only supposed to be used
2715 2715 internally
2716 2716 """
2717 2717 def __contains__(self, x):
2718 2718 if x in self._cache:
2719 2719 return self._cache[x]
2720 2720
2721 2721 # Use new values only, as existing values would be cached.
2722 2722 for l in self._consumegen():
2723 2723 if l == x:
2724 2724 return True
2725 2725 if l < x:
2726 2726 break
2727 2727
2728 2728 self._cache[x] = False
2729 2729 return False
2730 2730
2731 2731 class spanset(_orderedsetmixin):
2732 2732 """Duck type for baseset class which represents a range of revisions and
2733 2733 can work lazily and without having all the range in memory
2734 2734
2735 2735 Note that spanset(x, y) behaves almost like xrange(x, y) except for two
2736 2736 notable points:
2737 2737 - when x > y it will be automatically descending,
2738 2738 - revisions filtered with this repoview will be skipped.
2739 2739
2740 2740 """
2741 2741 def __init__(self, repo, start=0, end=None):
2742 2742 """
2743 2743 start: first revision included in the set
2744 2744 (defaults to 0)
2745 2745 end: first revision excluded (last+1)
2746 2746 (defaults to len(repo))
2747 2747
2748 2748 Spanset will be descending if `end` < `start`.
2749 2749 """
2750 2750 self._start = start
2751 2751 if end is not None:
2752 2752 self._end = end
2753 2753 else:
2754 2754 self._end = len(repo)
2755 2755 self._hiddenrevs = repo.changelog.filteredrevs
2756 2756
2757 2757 def ascending(self):
2758 2758 if self._start > self._end:
2759 2759 self.reverse()
2760 2760
2761 2761 def descending(self):
2762 2762 if self._start < self._end:
2763 2763 self.reverse()
2764 2764
2765 2765 def _contained(self, rev):
2766 2766 return (rev <= self._start and rev > self._end) or (rev >= self._start
2767 2767 and rev < self._end)
2768 2768
2769 2769 def __iter__(self):
2770 2770 if self._start <= self._end:
2771 2771 iterrange = xrange(self._start, self._end)
2772 2772 else:
2773 2773 iterrange = xrange(self._start, self._end, -1)
2774 2774
2775 2775 if self._hiddenrevs:
2776 2776 s = self._hiddenrevs
2777 2777 for r in iterrange:
2778 2778 if r not in s:
2779 2779 yield r
2780 2780 else:
2781 2781 for r in iterrange:
2782 2782 yield r
2783 2783
2784 2784 def __contains__(self, x):
2785 2785 return self._contained(x) and not (self._hiddenrevs and x in
2786 2786 self._hiddenrevs)
2787 2787
2788 2788 def __nonzero__(self):
2789 2789 for r in self:
2790 2790 return True
2791 2791 return False
2792 2792
2793 2793 def __and__(self, x):
2794 2794 if isinstance(x, baseset):
2795 2795 x = x.set()
2796 2796 if self._start <= self._end:
2797 2797 return orderedlazyset(self, lambda r: r in x)
2798 2798 else:
2799 2799 return orderedlazyset(self, lambda r: r in x, ascending=False)
2800 2800
2801 2801 def __sub__(self, x):
2802 2802 if isinstance(x, baseset):
2803 2803 x = x.set()
2804 2804 if self._start <= self._end:
2805 2805 return orderedlazyset(self, lambda r: r not in x)
2806 2806 else:
2807 2807 return orderedlazyset(self, lambda r: r not in x, ascending=False)
2808 2808
2809 2809 def __add__(self, x):
2810 2810 kwargs = {}
2811 2811 if self.isascending() and x.isascending():
2812 2812 kwargs['ascending'] = True
2813 2813 if self.isdescending() and x.isdescending():
2814 2814 kwargs['ascending'] = False
2815 2815 return _addset(self, x, **kwargs)
2816 2816
2817 2817 def __len__(self):
2818 2818 if not self._hiddenrevs:
2819 2819 return abs(self._end - self._start)
2820 2820 else:
2821 2821 count = 0
2822 2822 for rev in self._hiddenrevs:
2823 2823 if self._contained(rev):
2824 2824 count += 1
2825 2825 return abs(self._end - self._start) - count
2826 2826
2827 2827 def __getitem__(self, x):
2828 2828 # Basic implementation to be changed in future patches.
2829 2829 l = baseset([r for r in self])
2830 2830 return l[x]
2831 2831
2832 2832 def sort(self, reverse=False):
2833 2833 if bool(reverse) != (self._start > self._end):
2834 2834 self.reverse()
2835 2835
2836 2836 def reverse(self):
2837 2837 # Just switch the _start and _end parameters
2838 2838 if self._start <= self._end:
2839 2839 self._start, self._end = self._end - 1, self._start - 1
2840 2840 else:
2841 2841 self._start, self._end = self._end + 1, self._start + 1
2842 2842
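The index arithmetic in `reverse` above flips an ascending half-open range `[start, end)` into the equivalent descending range with step -1, and back. A plain-Python illustration of why the bounds shift by one:

```python
start, end = 0, 5                  # ascending span: 0, 1, 2, 3, 4
asc = list(range(start, end))

# reverse(): start, end = end - 1, start - 1, iterated with step -1
rstart, rend = end - 1, start - 1
desc = list(range(rstart, rend, -1))

assert asc == [0, 1, 2, 3, 4]
assert desc == [4, 3, 2, 1, 0]
assert desc == asc[::-1]           # same revisions, opposite order
```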
2843 2843 def set(self):
2844 2844 return self
2845 2845
2846 2846 def isascending(self):
2847 2847 return self._start < self._end
2848 2848
2849 2849 def isdescending(self):
2850 2850 return self._start > self._end
2851 2851
2852 2852 def filter(self, l):
2853 2853 if self._start <= self._end:
2854 2854 return orderedlazyset(self, l)
2855 2855 else:
2856 2856 return orderedlazyset(self, l, ascending=False)
2857 2857
2858 2858 # tell hggettext to extract docstrings from these functions:
2859 2859 i18nfunctions = symbols.values()
@@ -1,785 +1,785 b''
1 1 # wireproto.py - generic wire protocol support functions
2 2 #
3 3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 import urllib, tempfile, os, sys
9 9 from i18n import _
10 10 from node import bin, hex
11 11 import changegroup as changegroupmod
12 12 import peer, error, encoding, util, store, exchange
13 13
14 14
15 15 class abstractserverproto(object):
16 16 """abstract class that summarizes the protocol API
17 17
18 18 Used as reference and documentation.
19 19 """
20 20
21 21 def getargs(self, args):
22 22 """return the value for arguments in <args>
23 23
24 24 returns a list of values (same order as <args>)"""
25 25 raise NotImplementedError()
26 26
27 27 def getfile(self, fp):
28 28 """write the whole content of a file into a file-like object
29 29
30 30 The file is in the form::
31 31
32 32 (<chunk-size>\n<chunk>)+0\n
33 33
34 34 The chunk size is the ASCII version of the int.
35 35 """
36 36 raise NotImplementedError()
37 37
38 38 def redirect(self):
39 39 """may set up interception for stdout and stderr
40 40
41 41 See also the `restore` method."""
42 42 raise NotImplementedError()
43 43
44 44 # If the `redirect` function does install interception, the `restore`
45 45 # function MUST be defined. If interception is not used, this function
46 46 # MUST NOT be defined.
47 47 #
48 48 # left commented here on purpose
49 49 #
50 50 #def restore(self):
51 51 # """reinstall previous stdout and stderr and return intercepted stdout
52 52 # """
53 53 # raise NotImplementedError()
54 54
55 55 def groupchunks(self, cg):
56 56 """return 4096-byte chunks from a changegroup object
57 57
58 58 Some protocols may have compressed the contents."""
59 59 raise NotImplementedError()
60 60
61 61 # abstract batching support
62 62
63 63 class future(object):
64 64 '''placeholder for a value to be set later'''
65 65 def set(self, value):
66 66 if util.safehasattr(self, 'value'):
67 67 raise error.RepoError("future is already set")
68 68 self.value = value
69 69
70 70 class batcher(object):
71 71 '''base class for batches of commands submittable in a single request
72 72
73 73 All methods invoked on instances of this class are simply queued and
74 74 return a future for the result. Once you call submit(), all the queued
75 75 calls are performed and the results set in their respective futures.
76 76 '''
77 77 def __init__(self):
78 78 self.calls = []
79 79 def __getattr__(self, name):
80 80 def call(*args, **opts):
81 81 resref = future()
82 82 self.calls.append((name, args, opts, resref,))
83 83 return resref
84 84 return call
85 85 def submit(self):
86 86 pass
87 87
88 88 class localbatch(batcher):
89 89 '''performs the queued calls directly'''
90 90 def __init__(self, local):
91 91 batcher.__init__(self)
92 92 self.local = local
93 93 def submit(self):
94 94 for name, args, opts, resref in self.calls:
95 95 resref.set(getattr(self.local, name)(*args, **opts))
96 96
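The `future`/`batcher` pair above queues attribute calls and fills in results later. A self-contained demo of that pattern with a toy peer (`fakepeer` is a stand-in of mine, not a Mercurial class, and the sketch raises `RuntimeError` where the original uses `error.RepoError`):

```python
class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if hasattr(self, 'value'):
            raise RuntimeError('future is already set')
        self.value = value

class batcher(object):
    '''queue attribute calls; each returns a future immediately'''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref))
            return resref
        return call
    def submit(self):
        pass

class localbatch(batcher):
    '''performs the queued calls directly on submit()'''
    def __init__(self, local):
        batcher.__init__(self)
        self.local = local
    def submit(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))

class fakepeer(object):
    def heads(self):
        return ['tip']

b = localbatch(fakepeer())
f = b.heads()          # queued; returns an unfilled future
b.submit()             # runs the queued call and fills the future
assert f.value == ['tip']
```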
97 97 class remotebatch(batcher):
98 98 '''batches the queued calls; uses as few roundtrips as possible'''
99 99 def __init__(self, remote):
100 100 '''remote must support _submitbatch(encbatch) and
101 101 _submitone(op, encargs)'''
102 102 batcher.__init__(self)
103 103 self.remote = remote
104 104 def submit(self):
105 105 req, rsp = [], []
106 106 for name, args, opts, resref in self.calls:
107 107 mtd = getattr(self.remote, name)
108 108 batchablefn = getattr(mtd, 'batchable', None)
109 109 if batchablefn is not None:
110 110 batchable = batchablefn(mtd.im_self, *args, **opts)
111 111 encargsorres, encresref = batchable.next()
112 112 if encresref:
113 113 req.append((name, encargsorres,))
114 114 rsp.append((batchable, encresref, resref,))
115 115 else:
116 116 resref.set(encargsorres)
117 117 else:
118 118 if req:
119 119 self._submitreq(req, rsp)
120 120 req, rsp = [], []
121 121 resref.set(mtd(*args, **opts))
122 122 if req:
123 123 self._submitreq(req, rsp)
124 124 def _submitreq(self, req, rsp):
125 125 encresults = self.remote._submitbatch(req)
126 126 for encres, r in zip(encresults, rsp):
127 127 batchable, encresref, resref = r
128 128 encresref.set(encres)
129 129 resref.set(batchable.next())
130 130
131 131 def batchable(f):
132 132 '''annotation for batchable methods
133 133
134 134 Such methods must implement a coroutine as follows:
135 135
136 136 @batchable
137 137 def sample(self, one, two=None):
138 138 # Handle locally computable results first:
139 139 if not one:
140 140 yield "a local result", None
141 141 # Build list of encoded arguments suitable for your wire protocol:
142 142 encargs = [('one', encode(one),), ('two', encode(two),)]
143 143 # Create future for injection of encoded result:
144 144 encresref = future()
145 145 # Return encoded arguments and future:
146 146 yield encargs, encresref
147 147 # Assuming the future to be filled with the result from the batched
148 148 # request now. Decode it:
149 149 yield decode(encresref.value)
150 150
151 151 The decorator returns a function which wraps this coroutine as a plain
152 152 method, but adds the original method as an attribute called "batchable",
153 153 which is used by remotebatch to split the call into separate encoding and
154 154 decoding phases.
155 155 '''
156 156 def plain(*args, **opts):
157 157 batchable = f(*args, **opts)
158 158 encargsorres, encresref = batchable.next()
159 159 if not encresref:
160 160 return encargsorres # a local result in this case
161 161 self = args[0]
162 162 encresref.set(self._submitone(f.func_name, encargsorres))
163 163 return batchable.next()
164 164 setattr(plain, 'batchable', f)
165 165 return plain
166 166
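The coroutine protocol the `batchable` docstring describes can be exercised end to end with toy stand-ins. In this sketch `toypeer` and its `_submitone` are hypothetical (the real `_submitone` performs a wire call); only the decorator mirrors the code above:

```python
class future(object):
    def set(self, value):
        self.value = value

def batchable(f):
    # Wrap the coroutine as a plain method: first yield gives either a
    # local result (no future) or encoded args plus a future to fill.
    def plain(*args, **opts):
        coroutine = f(*args, **opts)
        encargsorres, encresref = next(coroutine)
        if not encresref:
            return encargsorres        # locally computable result
        self = args[0]
        encresref.set(self._submitone(f.__name__, encargsorres))
        return next(coroutine)         # coroutine decodes the response
    plain.batchable = f
    return plain

class toypeer(object):
    def _submitone(self, op, args):
        # Toy "wire" round-trip: upper-case the key argument.
        return dict(args)['key'].upper()

    @batchable
    def lookup(self, key):
        if not key:
            yield 'local-empty', None  # handled locally, no round-trip
        encresref = future()
        yield [('key', key)], encresref
        yield 'decoded:' + encresref.value

assert toypeer().lookup('abc') == 'decoded:ABC'
assert toypeer().lookup('') == 'local-empty'
```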
167 167 # list of nodes encoding / decoding
168 168
169 169 def decodelist(l, sep=' '):
170 170 if l:
171 171 return map(bin, l.split(sep))
172 172 return []
173 173
174 174 def encodelist(l, sep=' '):
175 175 return sep.join(map(hex, l))
176 176
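`encodelist`/`decodelist` above serialize 20-byte node hashes as separator-joined hex strings; `bin` and `hex` from `mercurial.node` are essentially thin wrappers over `binascii`. A standalone sketch using `hexlify`/`unhexlify` directly:

```python
from binascii import hexlify, unhexlify

def encodelist(l, sep=' '):
    # Each 20-byte node becomes a 40-char hex string.
    return sep.join(hexlify(n).decode('ascii') for n in l)

def decodelist(l, sep=' '):
    if l:
        return [unhexlify(s) for s in l.split(sep)]
    return []

nodes = [b'\x00' * 20, b'\xff' * 20]
wire = encodelist(nodes)
assert wire == '00' * 20 + ' ' + 'ff' * 20
assert decodelist(wire) == nodes
assert decodelist('') == []      # empty string maps to the empty list
```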
177 177 # batched call argument encoding
178 178
179 179 def escapearg(plain):
180 180 return (plain
181 181 .replace(':', '::')
182 182 .replace(',', ':,')
183 183 .replace(';', ':;')
184 184 .replace('=', ':='))
185 185
186 186 def unescapearg(escaped):
187 187 return (escaped
188 188 .replace(':=', '=')
189 189 .replace(':;', ';')
190 190 .replace(':,', ',')
191 191 .replace('::', ':'))
192 192
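The escaping above works because `:` is rewritten first, so it can then safely serve as the escape character for the batch delimiters `,`, `;`, and `=`; unescaping applies the substitutions in reverse order. A runnable round-trip check (same two functions as above, outside Mercurial):

```python
def escapearg(plain):
    # ':' must be escaped first so it can act as the escape character.
    return (plain
            .replace(':', '::')
            .replace(',', ':,')
            .replace(';', ':;')
            .replace('=', ':='))

def unescapearg(escaped):
    # Reverse order of the substitutions above.
    return (escaped
            .replace(':=', '=')
            .replace(':;', ';')
            .replace(':,', ',')
            .replace('::', ':'))

s = 'key=a,b;c:d'
assert escapearg(s) == 'key:=a:,b:;c::d'
assert unescapearg(escapearg(s)) == s
```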
193 193 # client side
194 194
195 195 class wirepeer(peer.peerrepository):
196 196
197 197 def batch(self):
198 198 return remotebatch(self)
199 199 def _submitbatch(self, req):
200 200 cmds = []
201 201 for op, argsdict in req:
202 202 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
203 203 cmds.append('%s %s' % (op, args))
204 204 rsp = self._call("batch", cmds=';'.join(cmds))
205 205 return rsp.split(';')
206 206 def _submitone(self, op, args):
207 207 return self._call(op, **args)
208 208
209 209 @batchable
210 210 def lookup(self, key):
211 211 self.requirecap('lookup', _('look up remote revision'))
212 212 f = future()
213 213 yield {'key': encoding.fromlocal(key)}, f
214 214 d = f.value
215 215 success, data = d[:-1].split(" ", 1)
216 216 if int(success):
217 217 yield bin(data)
218 218 self._abort(error.RepoError(data))
219 219
220 220 @batchable
221 221 def heads(self):
222 222 f = future()
223 223 yield {}, f
224 224 d = f.value
225 225 try:
226 226 yield decodelist(d[:-1])
227 227 except ValueError:
228 228 self._abort(error.ResponseError(_("unexpected response:"), d))
229 229
230 230 @batchable
231 231 def known(self, nodes):
232 232 f = future()
233 233 yield {'nodes': encodelist(nodes)}, f
234 234 d = f.value
235 235 try:
236 236 yield [bool(int(f)) for f in d]
237 237 except ValueError:
238 238 self._abort(error.ResponseError(_("unexpected response:"), d))
239 239
240 240 @batchable
241 241 def branchmap(self):
242 242 f = future()
243 243 yield {}, f
244 244 d = f.value
245 245 try:
246 246 branchmap = {}
247 247 for branchpart in d.splitlines():
248 248 branchname, branchheads = branchpart.split(' ', 1)
249 249 branchname = encoding.tolocal(urllib.unquote(branchname))
250 250 branchheads = decodelist(branchheads)
251 251 branchmap[branchname] = branchheads
252 252 yield branchmap
253 253 except TypeError:
254 254 self._abort(error.ResponseError(_("unexpected response:"), d))
255 255
256 256 def branches(self, nodes):
257 257 n = encodelist(nodes)
258 258 d = self._call("branches", nodes=n)
259 259 try:
260 260 br = [tuple(decodelist(b)) for b in d.splitlines()]
261 261 return br
262 262 except ValueError:
263 263 self._abort(error.ResponseError(_("unexpected response:"), d))
264 264
265 265 def between(self, pairs):
266 266 batch = 8 # avoid giant requests
267 267 r = []
268 268 for i in xrange(0, len(pairs), batch):
269 269 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
270 270 d = self._call("between", pairs=n)
271 271 try:
272 272 r.extend(l and decodelist(l) or [] for l in d.splitlines())
273 273 except ValueError:
274 274 self._abort(error.ResponseError(_("unexpected response:"), d))
275 275 return r
276 276
277 277 @batchable
278 278 def pushkey(self, namespace, key, old, new):
279 279 if not self.capable('pushkey'):
280 280 yield False, None
281 281 f = future()
282 282 self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
283 283 yield {'namespace': encoding.fromlocal(namespace),
284 284 'key': encoding.fromlocal(key),
285 285 'old': encoding.fromlocal(old),
286 286 'new': encoding.fromlocal(new)}, f
287 287 d = f.value
288 288 d, output = d.split('\n', 1)
289 289 try:
290 290 d = bool(int(d))
291 291 except ValueError:
292 292 raise error.ResponseError(
293 293 _('push failed (unexpected response):'), d)
294 294 for l in output.splitlines(True):
295 295 self.ui.status(_('remote: '), l)
296 296 yield d
297 297
298 298 @batchable
299 299 def listkeys(self, namespace):
300 300 if not self.capable('pushkey'):
301 301 yield {}, None
302 302 f = future()
303 303 self.ui.debug('preparing listkeys for "%s"\n' % namespace)
304 304 yield {'namespace': encoding.fromlocal(namespace)}, f
305 305 d = f.value
306 306 r = {}
307 307 for l in d.splitlines():
308 308 k, v = l.split('\t')
309 309 r[encoding.tolocal(k)] = encoding.tolocal(v)
310 310 yield r
311 311
312 312 def stream_out(self):
313 313 return self._callstream('stream_out')
314 314
315 315 def changegroup(self, nodes, kind):
316 316 n = encodelist(nodes)
317 317 f = self._callcompressable("changegroup", roots=n)
318 318 return changegroupmod.unbundle10(f, 'UN')
319 319
320 320 def changegroupsubset(self, bases, heads, kind):
321 321 self.requirecap('changegroupsubset', _('look up remote changes'))
322 322 bases = encodelist(bases)
323 323 heads = encodelist(heads)
324 324 f = self._callcompressable("changegroupsubset",
325 325 bases=bases, heads=heads)
326 326 return changegroupmod.unbundle10(f, 'UN')
327 327
328 328 def getbundle(self, source, heads=None, common=None, bundlecaps=None):
329 329 self.requirecap('getbundle', _('look up remote changes'))
330 330 opts = {}
331 331 if heads is not None:
332 332 opts['heads'] = encodelist(heads)
333 333 if common is not None:
334 334 opts['common'] = encodelist(common)
335 335 if bundlecaps is not None:
336 336 opts['bundlecaps'] = ','.join(bundlecaps)
337 337 f = self._callcompressable("getbundle", **opts)
338 338 return changegroupmod.unbundle10(f, 'UN')
339 339
340 340 def unbundle(self, cg, heads, source):
341 341 '''Send cg (a readable file-like object representing the
342 342 changegroup to push, typically a chunkbuffer object) to the
343 343 remote server as a bundle. Return an integer indicating the
344 344 result of the push (see localrepository.addchangegroup()).'''
345 345
346 346 if heads != ['force'] and self.capable('unbundlehash'):
347 347 heads = encodelist(['hashed',
348 348 util.sha1(''.join(sorted(heads))).digest()])
349 349 else:
350 350 heads = encodelist(heads)
351 351
352 352 ret, output = self._callpush("unbundle", cg, heads=heads)
353 353 if ret == "":
354 354 raise error.ResponseError(
355 355 _('push failed:'), output)
356 356 try:
357 357 ret = int(ret)
358 358 except ValueError:
359 359 raise error.ResponseError(
360 360 _('push failed (unexpected response):'), ret)
361 361
362 362 for l in output.splitlines(True):
363 363 self.ui.status(_('remote: '), l)
364 364 return ret
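The `unbundlehash` branch above replaces the full head list with a single SHA-1 digest of the sorted heads. A minimal standalone sketch of that digest computation (using `hashlib` directly rather than Mercurial's `util.sha1` wrapper; the node values are illustrative):

```python
import hashlib

def hashed_heads(heads):
    # Digest of the sorted, concatenated head nodes, as in the
    # 'unbundlehash' encoding above; nodes are assumed to be the
    # usual 20-byte binary hashes.
    return hashlib.sha1(b''.join(sorted(heads))).digest()

# Peers that agree on the set of heads get the same digest,
# whatever order the heads were listed in.
a = hashed_heads([b'\x01' * 20, b'\x02' * 20])
b = hashed_heads([b'\x02' * 20, b'\x01' * 20])
```

The server can recompute the same digest over its own heads; a mismatch indicates the remote repository changed between discovery and push.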
365 365
366 366 def debugwireargs(self, one, two, three=None, four=None, five=None):
367 367 # don't pass optional arguments left at their default value
368 368 opts = {}
369 369 if three is not None:
370 370 opts['three'] = three
371 371 if four is not None:
372 372 opts['four'] = four
373 373 return self._call('debugwireargs', one=one, two=two, **opts)
374 374
375 375 def _call(self, cmd, **args):
376 376 """execute <cmd> on the server
377 377
378 378 The command is expected to return a simple string.
379 379
380 380 returns the server reply as a string."""
381 381 raise NotImplementedError()
382 382
383 383 def _callstream(self, cmd, **args):
384 384 """execute <cmd> on the server
385 385
386 386 The command is expected to return a stream.
387 387
388 388 returns the server reply as a file like object."""
389 389 raise NotImplementedError()
390 390
391 391 def _callcompressable(self, cmd, **args):
392 392 """execute <cmd> on the server
393 393
394 394 The command is expected to return a stream.
395 395
396 The stream may have been compressed in some implementaitons. This
396 The stream may have been compressed in some implementations. This
397 397 function takes care of the decompression. This is the only difference
398 398 with _callstream.
399 399
400 400 returns the server reply as a file like object.
401 401 """
402 402 raise NotImplementedError()
403 403
404 404 def _callpush(self, cmd, fp, **args):
405 405 """execute a <cmd> on the server
406 406
407 407 The command is expected to be related to a push. Push has a special
408 408 return method.
409 409
410 410 returns the server reply as a (ret, output) tuple. ret is either
411 411 empty (error) or a stringified int.
412 412 """
413 413 raise NotImplementedError()
414 414
415 415 def _abort(self, exception):
416 416 """cleanly abort the wire protocol connection and raise the exception
417 417 """
418 418 raise NotImplementedError()
419 419
420 420 # server side
421 421
422 422 # a wire protocol command can either return a string or one of these classes.
423 423 class streamres(object):
424 424 """wireproto reply: binary stream
425 425
426 426 The call was successful and the result is a stream.
427 427 Iterate on the `self.gen` attribute to retrieve chunks.
428 428 """
429 429 def __init__(self, gen):
430 430 self.gen = gen
431 431
432 432 class pushres(object):
433 433 """wireproto reply: success with simple integer return
434 434
435 435 The call was successful and returned an integer contained in `self.res`.
436 436 """
437 437 def __init__(self, res):
438 438 self.res = res
439 439
440 440 class pusherr(object):
441 441 """wireproto reply: failure
442 442
443 443 The call failed. The `self.res` attribute contains the error message.
444 444 """
445 445 def __init__(self, res):
446 446 self.res = res
447 447
448 448 class ooberror(object):
449 449 """wireproto reply: failure of a batch of operations
450 450
451 451 Something failed during a batch call. The error message is stored in
452 452 `self.message`.
453 453 """
454 454 def __init__(self, message):
455 455 self.message = message
456 456
457 457 def dispatch(repo, proto, command):
458 458 repo = repo.filtered("served")
459 459 func, spec = commands[command]
460 460 args = proto.getargs(spec)
461 461 return func(repo, proto, *args)
462 462
463 463 def options(cmd, keys, others):
464 464 opts = {}
465 465 for k in keys:
466 466 if k in others:
467 467 opts[k] = others[k]
468 468 del others[k]
469 469 if others:
470 470 sys.stderr.write("abort: %s got unexpected arguments %s\n"
471 471 % (cmd, ",".join(others)))
472 472 return opts
473 473
474 474 # list of commands
475 475 commands = {}
476 476
477 477 def wireprotocommand(name, args=''):
478 """decorator for wireprotocol command"""
478 """decorator for wire protocol command"""
479 479 def register(func):
480 480 commands[name] = (func, args)
481 481 return func
482 482 return register
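The registration machinery above is small enough to sketch end to end. The names `toy_commands`, `toy_command`, and `toy_dispatch` below are illustrative stand-ins, not part of the real module:

```python
# Standalone sketch of the pattern used by wireprotocommand above:
# a decorator maps a command name to a (function, argument-spec)
# pair in a module-level table, and dispatch looks the pair up.
toy_commands = {}

def toy_command(name, args=''):
    def register(func):
        toy_commands[name] = (func, args)
        return func
    return register

@toy_command('echo', 'text')
def echo(text):
    return text

def toy_dispatch(name, *args):
    func, spec = toy_commands[name]
    return func(*args)
```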
483 483
484 484 @wireprotocommand('batch', 'cmds *')
485 485 def batch(repo, proto, cmds, others):
486 486 repo = repo.filtered("served")
487 487 res = []
488 488 for pair in cmds.split(';'):
489 489 op, args = pair.split(' ', 1)
490 490 vals = {}
491 491 for a in args.split(','):
492 492 if a:
493 493 n, v = a.split('=')
494 494 vals[n] = unescapearg(v)
495 495 func, spec = commands[op]
496 496 if spec:
497 497 keys = spec.split()
498 498 data = {}
499 499 for k in keys:
500 500 if k == '*':
501 501 star = {}
502 502 for key in vals.keys():
503 503 if key not in keys:
504 504 star[key] = vals[key]
505 505 data['*'] = star
506 506 else:
507 507 data[k] = vals[k]
508 508 result = func(repo, proto, *[data[k] for k in keys])
509 509 else:
510 510 result = func(repo, proto)
511 511 if isinstance(result, ooberror):
512 512 return result
513 513 res.append(escapearg(result))
514 514 return ';'.join(res)
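A sketch of the request format that batch() decodes: commands separated by ';', each of the form 'op arg=val,arg=val'. An identity function stands in for unescapearg here (the real one reverses an escaping of ';', ',' and '=' so argument values may contain those characters):

```python
def parse_batch(cmds, unescapearg=lambda s: s):
    # Mirrors the decoding loop in batch() above: split on ';',
    # split each command once on ' ', then split the argument
    # string on ',' and each argument on '='.
    out = []
    for pair in cmds.split(';'):
        op, args = pair.split(' ', 1)
        vals = {}
        for a in args.split(','):
            if a:
                n, v = a.split('=')
                vals[n] = unescapearg(v)
        out.append((op, vals))
    return out

# An illustrative two-command batch request.
parsed = parse_batch('heads ;known nodes=abc')
```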
515 515
516 516 @wireprotocommand('between', 'pairs')
517 517 def between(repo, proto, pairs):
518 518 pairs = [decodelist(p, '-') for p in pairs.split(" ")]
519 519 r = []
520 520 for b in repo.between(pairs):
521 521 r.append(encodelist(b) + "\n")
522 522 return "".join(r)
523 523
524 524 @wireprotocommand('branchmap')
525 525 def branchmap(repo, proto):
526 526 branchmap = repo.branchmap()
527 527 heads = []
528 528 for branch, nodes in branchmap.iteritems():
529 529 branchname = urllib.quote(encoding.fromlocal(branch))
530 530 branchnodes = encodelist(nodes)
531 531 heads.append('%s %s' % (branchname, branchnodes))
532 532 return '\n'.join(heads)
533 533
534 534 @wireprotocommand('branches', 'nodes')
535 535 def branches(repo, proto, nodes):
536 536 nodes = decodelist(nodes)
537 537 r = []
538 538 for b in repo.branches(nodes):
539 539 r.append(encodelist(b) + "\n")
540 540 return "".join(r)
541 541
542 542
543 543 wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
544 544 'known', 'getbundle', 'unbundlehash', 'batch']
545 545
546 546 def _capabilities(repo, proto):
547 547 """return a list of capabilities for a repo
548 548
549 549 This function exists to allow extensions to easily wrap capabilities
550 550 computation
551 551
552 552 - returns a list: easy to alter
553 553 - change done here will be propagated to both `capabilities` and `hello`
554 command without any other effort. without any other action needed.
554 command without any other action needed.
555 555 """
556 556 # copy to prevent modification of the global list
557 557 caps = list(wireprotocaps)
558 558 if _allowstream(repo.ui):
559 559 if repo.ui.configbool('server', 'preferuncompressed', False):
560 560 caps.append('stream-preferred')
561 561 requiredformats = repo.requirements & repo.supportedformats
562 562 # if our local revlogs are just revlogv1, add 'stream' cap
563 563 if not requiredformats - set(('revlogv1',)):
564 564 caps.append('stream')
565 565 # otherwise, add 'streamreqs' detailing our local revlog format
566 566 else:
567 567 caps.append('streamreqs=%s' % ','.join(requiredformats))
568 568 caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
569 569 caps.append('httpheader=1024')
570 570 return caps
571 571
572 # If you are writting and extension and consider wrapping this function. Wrap
572 # If you are writing an extension and consider wrapping this function, wrap
573 573 # `_capabilities` instead.
574 574 @wireprotocommand('capabilities')
575 575 def capabilities(repo, proto):
576 576 return ' '.join(_capabilities(repo, proto))
577 577
578 578 @wireprotocommand('changegroup', 'roots')
579 579 def changegroup(repo, proto, roots):
580 580 nodes = decodelist(roots)
581 581 cg = changegroupmod.changegroup(repo, nodes, 'serve')
582 582 return streamres(proto.groupchunks(cg))
583 583
584 584 @wireprotocommand('changegroupsubset', 'bases heads')
585 585 def changegroupsubset(repo, proto, bases, heads):
586 586 bases = decodelist(bases)
587 587 heads = decodelist(heads)
588 588 cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
589 589 return streamres(proto.groupchunks(cg))
590 590
591 591 @wireprotocommand('debugwireargs', 'one two *')
592 592 def debugwireargs(repo, proto, one, two, others):
593 593 # only accept optional args from the known set
594 594 opts = options('debugwireargs', ['three', 'four'], others)
595 595 return repo.debugwireargs(one, two, **opts)
596 596
597 597 @wireprotocommand('getbundle', '*')
598 598 def getbundle(repo, proto, others):
599 599 opts = options('getbundle', ['heads', 'common', 'bundlecaps'], others)
600 600 for k, v in opts.iteritems():
601 601 if k in ('heads', 'common'):
602 602 opts[k] = decodelist(v)
603 603 elif k == 'bundlecaps':
604 604 opts[k] = set(v.split(','))
605 605 cg = changegroupmod.getbundle(repo, 'serve', **opts)
606 606 return streamres(proto.groupchunks(cg))
607 607
608 608 @wireprotocommand('heads')
609 609 def heads(repo, proto):
610 610 h = repo.heads()
611 611 return encodelist(h) + "\n"
612 612
613 613 @wireprotocommand('hello')
614 614 def hello(repo, proto):
615 615 '''the hello command returns a set of lines describing various
616 616 interesting things about the server, in an RFC822-like format.
617 617 Currently the only one defined is "capabilities", which
618 618 consists of a line in the form:
619 619
620 620 capabilities: space separated list of tokens
621 621 '''
622 622 return "capabilities: %s\n" % (capabilities(repo, proto))
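The RFC822-like hello reply described above is straightforward to parse; a sketch with an illustrative (not real) capability list:

```python
# One "key: value" field per line, currently only "capabilities".
reply = "capabilities: lookup branchmap pushkey batch\n"

def parse_hello(data):
    # Split each line on the first ': ' into a field name and value.
    fields = {}
    for line in data.splitlines():
        key, value = line.split(': ', 1)
        fields[key] = value
    return fields

caps = parse_hello(reply)['capabilities'].split()
```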
623 623
624 624 @wireprotocommand('listkeys', 'namespace')
625 625 def listkeys(repo, proto, namespace):
626 626 d = repo.listkeys(encoding.tolocal(namespace)).items()
627 627 t = '\n'.join(['%s\t%s' % (encoding.fromlocal(k), encoding.fromlocal(v))
628 628 for k, v in d])
629 629 return t
630 630
631 631 @wireprotocommand('lookup', 'key')
632 632 def lookup(repo, proto, key):
633 633 try:
634 634 k = encoding.tolocal(key)
635 635 c = repo[k]
636 636 r = c.hex()
637 637 success = 1
638 638 except Exception, inst:
639 639 r = str(inst)
640 640 success = 0
641 641 return "%s %s\n" % (success, r)
642 642
643 643 @wireprotocommand('known', 'nodes *')
644 644 def known(repo, proto, nodes, others):
645 645 return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))
646 646
647 647 @wireprotocommand('pushkey', 'namespace key old new')
648 648 def pushkey(repo, proto, namespace, key, old, new):
649 649 # compatibility with pre-1.8 clients which were accidentally
650 650 # sending raw binary nodes rather than utf-8-encoded hex
651 651 if len(new) == 20 and new.encode('string-escape') != new:
652 652 # looks like it could be a binary node
653 653 try:
654 654 new.decode('utf-8')
655 655 new = encoding.tolocal(new) # but cleanly decodes as UTF-8
656 656 except UnicodeDecodeError:
657 657 pass # binary, leave unmodified
658 658 else:
659 659 new = encoding.tolocal(new) # normal path
660 660
661 661 if util.safehasattr(proto, 'restore'):
662 662
663 663 proto.redirect()
664 664
665 665 try:
666 666 r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
667 667 encoding.tolocal(old), new) or False
668 668 except util.Abort:
669 669 r = False
670 670
671 671 output = proto.restore()
672 672
673 673 return '%s\n%s' % (int(r), output)
674 674
675 675 r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
676 676 encoding.tolocal(old), new)
677 677 return '%s\n' % int(r)
678 678
679 679 def _allowstream(ui):
680 680 return ui.configbool('server', 'uncompressed', True, untrusted=True)
681 681
682 682 def _walkstreamfiles(repo):
683 683 # this is its own function so extensions can override it
684 684 return repo.store.walk()
685 685
686 686 @wireprotocommand('stream_out')
687 687 def stream(repo, proto):
688 688 '''If the server supports streaming clone, it advertises the "stream"
689 689 capability with a value representing the version and flags of the repo
690 690 it is serving. Client checks to see if it understands the format.
691 691
692 692 The format is simple: the server writes out a line with the amount
693 693 of files, then the total amount of bytes to be transferred (separated
694 694 by a space). Then, for each file, the server first writes the filename
695 and filesize (separated by the null character), then the file contents.
695 and file size (separated by the null character), then the file contents.
696 696 '''
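A client-side sketch of reading the format this docstring describes: a status line ('0' for success; the handler returns '1' when streaming is disabled and '2' on a lock error), a "filecount totalbytes" line, then for each file a name and size separated by a null byte, followed by the file contents. The sample stream is hand-built for illustration:

```python
import io

def parse_stream(fp):
    # Status line: b'0' means success, anything else is an error code.
    status = fp.readline().rstrip(b'\n')
    if status != b'0':
        raise ValueError('streaming not available: %r' % status)
    # "<filecount> <totalbytes>" on one line.
    count, total = map(int, fp.readline().split())
    files = []
    for _ in range(count):
        # "<name>\0<size>\n" header, then exactly <size> bytes of data.
        header = fp.readline().rstrip(b'\n')
        name, size = header.split(b'\0')
        files.append((name, fp.read(int(size))))
    return files

# A two-file stream matching the format above (5 + 6 = 11 bytes).
raw = b'0\n2 11\nfoo\x005\nhellodata/bar\x006\nworld!'
files = parse_stream(io.BytesIO(raw))
```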
697 697
698 698 if not _allowstream(repo.ui):
699 699 return '1\n'
700 700
701 701 entries = []
702 702 total_bytes = 0
703 703 try:
704 704 # get consistent snapshot of repo, lock during scan
705 705 lock = repo.lock()
706 706 try:
707 707 repo.ui.debug('scanning\n')
708 708 for name, ename, size in _walkstreamfiles(repo):
709 709 if size:
710 710 entries.append((name, size))
711 711 total_bytes += size
712 712 finally:
713 713 lock.release()
714 714 except error.LockError:
715 715 return '2\n' # error: 2
716 716
717 717 def streamer(repo, entries, total):
718 718 '''stream out all metadata files in repository.'''
719 719 yield '0\n' # success
720 720 repo.ui.debug('%d files, %d bytes to transfer\n' %
721 721 (len(entries), total_bytes))
722 722 yield '%d %d\n' % (len(entries), total_bytes)
723 723
724 724 sopener = repo.sopener
725 725 oldaudit = sopener.mustaudit
726 726 debugflag = repo.ui.debugflag
727 727 sopener.mustaudit = False
728 728
729 729 try:
730 730 for name, size in entries:
731 731 if debugflag:
732 732 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
733 733 # partially encode name over the wire for backwards compat
734 734 yield '%s\0%d\n' % (store.encodedir(name), size)
735 735 if size <= 65536:
736 736 fp = sopener(name)
737 737 try:
738 738 data = fp.read(size)
739 739 finally:
740 740 fp.close()
741 741 yield data
742 742 else:
743 743 for chunk in util.filechunkiter(sopener(name), limit=size):
744 744 yield chunk
745 745 # replace with "finally:" when support for python 2.4 has been dropped
746 746 except Exception:
747 747 sopener.mustaudit = oldaudit
748 748 raise
749 749 sopener.mustaudit = oldaudit
750 750
751 751 return streamres(streamer(repo, entries, total_bytes))
752 752
753 753 @wireprotocommand('unbundle', 'heads')
754 754 def unbundle(repo, proto, heads):
755 755 their_heads = decodelist(heads)
756 756
757 757 try:
758 758 proto.redirect()
759 759
760 760 exchange.check_heads(repo, their_heads, 'preparing changes')
761 761
762 762 # write bundle data to temporary file because it can be big
763 763 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
764 764 fp = os.fdopen(fd, 'wb+')
765 765 r = 0
766 766 try:
767 767 proto.getfile(fp)
768 768 fp.seek(0)
769 769 gen = changegroupmod.readbundle(fp, None)
770 770 r = exchange.unbundle(repo, gen, their_heads, 'serve',
771 771 proto._client())
772 772 return pushres(r)
773 773
774 774 finally:
775 775 fp.close()
776 776 os.unlink(tempname)
777 777 except util.Abort, inst:
778 778 # The old code we moved used sys.stderr directly.
779 # We did not changed it to minise code change.
779 # We did not change it to minimise code change.
780 780 # This needs to be moved to something proper.
781 781 # Feel free to do it.
782 782 sys.stderr.write("abort: %s\n" % inst)
783 783 return pushres(0)
784 784 except exchange.PushRaced, exc:
785 785 return pusherr(str(exc))
@@ -1,1321 +1,1321 b''
1 1 #!/usr/bin/env python
2 2 #
3 3 # run-tests.py - Run a set of tests on Mercurial
4 4 #
5 5 # Copyright 2006 Matt Mackall <mpm@selenic.com>
6 6 #
7 7 # This software may be used and distributed according to the terms of the
8 8 # GNU General Public License version 2 or any later version.
9 9
10 10 # Modifying this script is tricky because it has many modes:
11 11 # - serial (default) vs parallel (-jN, N > 1)
12 12 # - no coverage (default) vs coverage (-c, -C, -s)
13 13 # - temp install (default) vs specific hg script (--with-hg, --local)
14 14 # - tests are a mix of shell scripts and Python scripts
15 15 #
16 16 # If you change this script, it is recommended that you ensure you
17 17 # haven't broken it by running it in various modes with a representative
18 18 # sample of test scripts. For example:
19 19 #
20 20 # 1) serial, no coverage, temp install:
21 21 # ./run-tests.py test-s*
22 22 # 2) serial, no coverage, local hg:
23 23 # ./run-tests.py --local test-s*
24 24 # 3) serial, coverage, temp install:
25 25 # ./run-tests.py -c test-s*
26 26 # 4) serial, coverage, local hg:
27 27 # ./run-tests.py -c --local test-s* # unsupported
28 28 # 5) parallel, no coverage, temp install:
29 29 # ./run-tests.py -j2 test-s*
30 30 # 6) parallel, no coverage, local hg:
31 31 # ./run-tests.py -j2 --local test-s*
32 32 # 7) parallel, coverage, temp install:
33 33 # ./run-tests.py -j2 -c test-s* # currently broken
34 34 # 8) parallel, coverage, local install:
35 35 # ./run-tests.py -j2 -c --local test-s* # unsupported (and broken)
36 36 # 9) parallel, custom tmp dir:
37 37 # ./run-tests.py -j2 --tmpdir /tmp/myhgtests
38 38 #
39 39 # (You could use any subset of the tests: test-s* happens to match
40 40 # enough that it's worth doing parallel runs, few enough that it
41 41 # completes fairly quickly, includes both shell and Python scripts, and
42 42 # includes some scripts that run daemon processes.)
43 43
44 44 from distutils import version
45 45 import difflib
46 46 import errno
47 47 import optparse
48 48 import os
49 49 import shutil
50 50 import subprocess
51 51 import signal
52 52 import sys
53 53 import tempfile
54 54 import time
55 55 import random
56 56 import re
57 57 import threading
58 58 import killdaemons as killmod
59 59 import Queue as queue
60 60
61 61 processlock = threading.Lock()
62 62
63 63 # subprocess._cleanup can race with any Popen.wait or Popen.poll on py24
64 64 # http://bugs.python.org/issue1731717 for details. We shouldn't be producing
65 65 # zombies but it's pretty harmless even if we do.
66 66 if sys.version_info < (2, 5):
67 67 subprocess._cleanup = lambda: None
68 68
69 69 closefds = os.name == 'posix'
70 70 def Popen4(cmd, wd, timeout, env=None):
71 71 processlock.acquire()
72 72 p = subprocess.Popen(cmd, shell=True, bufsize=-1, cwd=wd, env=env,
73 73 close_fds=closefds,
74 74 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
75 75 stderr=subprocess.STDOUT)
76 76 processlock.release()
77 77
78 78 p.fromchild = p.stdout
79 79 p.tochild = p.stdin
80 80 p.childerr = p.stderr
81 81
82 82 p.timeout = False
83 83 if timeout:
84 84 def t():
85 85 start = time.time()
86 86 while time.time() - start < timeout and p.returncode is None:
87 87 time.sleep(.1)
88 88 p.timeout = True
89 89 if p.returncode is None:
90 90 terminate(p)
91 91 threading.Thread(target=t).start()
92 92
93 93 return p
94 94
95 95 # reserved exit code to skip test (used by hghave)
96 96 SKIPPED_STATUS = 80
97 97 SKIPPED_PREFIX = 'skipped: '
98 98 FAILED_PREFIX = 'hghave check failed: '
99 99 PYTHON = sys.executable.replace('\\', '/')
100 100 IMPL_PATH = 'PYTHONPATH'
101 101 if 'java' in sys.platform:
102 102 IMPL_PATH = 'JYTHONPATH'
103 103
104 104 requiredtools = [os.path.basename(sys.executable), "diff", "grep", "unzip",
105 105 "gunzip", "bunzip2", "sed"]
106 106 createdfiles = []
107 107
108 108 defaults = {
109 109 'jobs': ('HGTEST_JOBS', 1),
110 110 'timeout': ('HGTEST_TIMEOUT', 180),
111 111 'port': ('HGTEST_PORT', 20059),
112 112 'shell': ('HGTEST_SHELL', 'sh'),
113 113 }
114 114
115 115 def parselistfiles(files, listtype, warn=True):
116 116 entries = dict()
117 117 for filename in files:
118 118 try:
119 119 path = os.path.expanduser(os.path.expandvars(filename))
120 120 f = open(path, "r")
121 121 except IOError, err:
122 122 if err.errno != errno.ENOENT:
123 123 raise
124 124 if warn:
125 125 print "warning: no such %s file: %s" % (listtype, filename)
126 126 continue
127 127
128 128 for line in f.readlines():
129 129 line = line.split('#', 1)[0].strip()
130 130 if line:
131 131 entries[line] = filename
132 132
133 133 f.close()
134 134 return entries
135 135
136 136 def getparser():
137 137 parser = optparse.OptionParser("%prog [options] [tests]")
138 138
139 139 # keep these sorted
140 140 parser.add_option("--blacklist", action="append",
141 141 help="skip tests listed in the specified blacklist file")
142 142 parser.add_option("--whitelist", action="append",
143 143 help="always run tests listed in the specified whitelist file")
144 144 parser.add_option("--changed", type="string",
145 145 help="run tests that are changed in parent rev or working directory")
146 146 parser.add_option("-C", "--annotate", action="store_true",
147 147 help="output files annotated with coverage")
148 148 parser.add_option("-c", "--cover", action="store_true",
149 149 help="print a test coverage report")
150 150 parser.add_option("-d", "--debug", action="store_true",
151 151 help="debug mode: write output of test scripts to console"
152 " rather than capturing and diff'ing it (disables timeout)")
152 " rather than capturing and diffing it (disables timeout)")
153 153 parser.add_option("-f", "--first", action="store_true",
154 154 help="exit on the first test failure")
155 155 parser.add_option("-H", "--htmlcov", action="store_true",
156 156 help="create an HTML report of the coverage of the files")
157 157 parser.add_option("-i", "--interactive", action="store_true",
158 158 help="prompt to accept changed output")
159 159 parser.add_option("-j", "--jobs", type="int",
160 160 help="number of jobs to run in parallel"
161 161 " (default: $%s or %d)" % defaults['jobs'])
162 162 parser.add_option("--keep-tmpdir", action="store_true",
163 163 help="keep temporary directory after running tests")
164 164 parser.add_option("-k", "--keywords",
165 165 help="run tests matching keywords")
166 166 parser.add_option("-l", "--local", action="store_true",
167 167 help="shortcut for --with-hg=<testdir>/../hg")
168 168 parser.add_option("--loop", action="store_true",
169 169 help="loop tests repeatedly")
170 170 parser.add_option("-n", "--nodiff", action="store_true",
171 171 help="skip showing test changes")
172 172 parser.add_option("-p", "--port", type="int",
173 173 help="port on which servers should listen"
174 174 " (default: $%s or %d)" % defaults['port'])
175 175 parser.add_option("--compiler", type="string",
176 176 help="compiler to build with")
177 177 parser.add_option("--pure", action="store_true",
178 178 help="use pure Python code instead of C extensions")
179 179 parser.add_option("-R", "--restart", action="store_true",
180 180 help="restart at last error")
181 181 parser.add_option("-r", "--retest", action="store_true",
182 182 help="retest failed tests")
183 183 parser.add_option("-S", "--noskips", action="store_true",
184 184 help="don't report skip tests verbosely")
185 185 parser.add_option("--shell", type="string",
186 186 help="shell to use (default: $%s or %s)" % defaults['shell'])
187 187 parser.add_option("-t", "--timeout", type="int",
188 188 help="kill errant tests after TIMEOUT seconds"
189 189 " (default: $%s or %d)" % defaults['timeout'])
190 190 parser.add_option("--time", action="store_true",
191 191 help="time how long each test takes")
192 192 parser.add_option("--tmpdir", type="string",
193 193 help="run tests in the given temporary directory"
194 194 " (implies --keep-tmpdir)")
195 195 parser.add_option("-v", "--verbose", action="store_true",
196 196 help="output verbose messages")
197 197 parser.add_option("--view", type="string",
198 198 help="external diff viewer")
199 199 parser.add_option("--with-hg", type="string",
200 200 metavar="HG",
201 201 help="test using specified hg script rather than a "
202 202 "temporary installation")
203 203 parser.add_option("-3", "--py3k-warnings", action="store_true",
204 204 help="enable Py3k warnings on Python 2.6+")
205 205 parser.add_option('--extra-config-opt', action="append",
206 206 help='set the given config opt in the test hgrc')
207 207 parser.add_option('--random', action="store_true",
208 208 help='run tests in random order')
209 209
210 210 for option, (envvar, default) in defaults.items():
211 211 defaults[option] = type(default)(os.environ.get(envvar, default))
212 212 parser.set_defaults(**defaults)
213 213
214 214 return parser
215 215
216 216 def parseargs(args, parser):
217 217 (options, args) = parser.parse_args(args)
218 218
219 219 # jython is always pure
220 220 if 'java' in sys.platform or '__pypy__' in sys.modules:
221 221 options.pure = True
222 222
223 223 if options.with_hg:
224 224 options.with_hg = os.path.expanduser(options.with_hg)
225 225 if not (os.path.isfile(options.with_hg) and
226 226 os.access(options.with_hg, os.X_OK)):
227 227 parser.error('--with-hg must specify an executable hg script')
228 228 if not os.path.basename(options.with_hg) == 'hg':
229 229 sys.stderr.write('warning: --with-hg should specify an hg script\n')
230 230 if options.local:
231 231 testdir = os.path.dirname(os.path.realpath(sys.argv[0]))
232 232 hgbin = os.path.join(os.path.dirname(testdir), 'hg')
233 233 if os.name != 'nt' and not os.access(hgbin, os.X_OK):
234 234 parser.error('--local specified, but %r not found or not executable'
235 235 % hgbin)
236 236 options.with_hg = hgbin
237 237
238 238 options.anycoverage = options.cover or options.annotate or options.htmlcov
239 239 if options.anycoverage:
240 240 try:
241 241 import coverage
242 242 covver = version.StrictVersion(coverage.__version__).version
243 243 if covver < (3, 3):
244 244 parser.error('coverage options require coverage 3.3 or later')
245 245 except ImportError:
246 246 parser.error('coverage options now require the coverage package')
247 247
248 248 if options.anycoverage and options.local:
249 249 # this needs some path mangling somewhere, I guess
250 250 parser.error("sorry, coverage options do not work when --local "
251 251 "is specified")
252 252
253 253 global verbose
254 254 if options.verbose:
255 255 verbose = ''
256 256
257 257 if options.tmpdir:
258 258 options.tmpdir = os.path.expanduser(options.tmpdir)
259 259
260 260 if options.jobs < 1:
261 261 parser.error('--jobs must be positive')
262 262 if options.interactive and options.debug:
263 263 parser.error("-i/--interactive and -d/--debug are incompatible")
264 264 if options.debug:
265 265 if options.timeout != defaults['timeout']:
266 266 sys.stderr.write(
267 267 'warning: --timeout option ignored with --debug\n')
268 268 options.timeout = 0
269 269 if options.py3k_warnings:
270 270 if sys.version_info[:2] < (2, 6) or sys.version_info[:2] >= (3, 0):
271 271 parser.error('--py3k-warnings can only be used on Python 2.6+')
272 272 if options.blacklist:
273 273 options.blacklist = parselistfiles(options.blacklist, 'blacklist')
274 274 if options.whitelist:
275 275 options.whitelisted = parselistfiles(options.whitelist, 'whitelist')
276 276 else:
277 277 options.whitelisted = {}
278 278
279 279 return (options, args)
280 280
281 281 def rename(src, dst):
282 282 """Like os.rename(), but trades atomicity and open-file friendliness
283 283 for existing destination support.
284 284 """
285 285 shutil.copy(src, dst)
286 286 os.remove(src)
287 287
288 288 def parsehghaveoutput(lines):
289 289 '''Parse hghave log lines.
290 290 Return tuple of lists (missing, failed):
291 291 * the missing/unknown features
292 292 * the features for which existence check failed'''
293 293 missing = []
294 294 failed = []
295 295 for line in lines:
296 296 if line.startswith(SKIPPED_PREFIX):
297 297 line = line.splitlines()[0]
298 298 missing.append(line[len(SKIPPED_PREFIX):])
299 299 elif line.startswith(FAILED_PREFIX):
300 300 line = line.splitlines()[0]
301 301 failed.append(line[len(FAILED_PREFIX):])
302 302
303 303 return missing, failed
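A standalone sketch of the hghave-log parsing above; `parse_hghave` is an illustrative stand-in for parsehghaveoutput:

```python
SKIPPED_PREFIX = 'skipped: '
FAILED_PREFIX = 'hghave check failed: '

def parse_hghave(lines):
    # Collect lines carrying either prefix into two lists,
    # stripping the prefix and any trailing newline.
    missing, failed = [], []
    for line in lines:
        if line.startswith(SKIPPED_PREFIX):
            missing.append(line.splitlines()[0][len(SKIPPED_PREFIX):])
        elif line.startswith(FAILED_PREFIX):
            failed.append(line.splitlines()[0][len(FAILED_PREFIX):])
    return missing, failed

missing, failed = parse_hghave(['skipped: missing feature: git\n',
                                'hghave check failed: svn\n'])
```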
304 304
305 305 def showdiff(expected, output, ref, err):
306 306 print
307 307 servefail = False
308 308 for line in difflib.unified_diff(expected, output, ref, err):
309 309 sys.stdout.write(line)
310 310 if not servefail and line.startswith(
311 311 '+ abort: child process failed to start'):
312 312 servefail = True
313 313 return {'servefail': servefail}
314 314
315 315
316 316 verbose = False
317 317 def vlog(*msg):
318 318 if verbose is not False:
319 319 iolock.acquire()
320 320 if verbose:
321 321 print verbose,
322 322 for m in msg:
323 323 print m,
324 324 print
325 325 sys.stdout.flush()
326 326 iolock.release()
327 327
328 328 def log(*msg):
329 329 iolock.acquire()
330 330 if verbose:
331 331 print verbose,
332 332 for m in msg:
333 333 print m,
334 334 print
335 335 sys.stdout.flush()
336 336 iolock.release()
337 337
338 338 def findprogram(program):
339 339 """Search PATH for an executable program"""
340 340 for p in os.environ.get('PATH', os.defpath).split(os.pathsep):
341 341 name = os.path.join(p, program)
342 342 if os.name == 'nt' or os.access(name, os.X_OK):
343 343 return name
344 344 return None
345 345
346 346 def createhgrc(path, options):
347 347 # create a fresh hgrc
348 348 hgrc = open(path, 'w')
349 349 hgrc.write('[ui]\n')
350 350 hgrc.write('slash = True\n')
351 351 hgrc.write('interactive = False\n')
352 352 hgrc.write('[defaults]\n')
353 353 hgrc.write('backout = -d "0 0"\n')
354 354 hgrc.write('commit = -d "0 0"\n')
355 355 hgrc.write('shelve = --date "0 0"\n')
356 356 hgrc.write('tag = -d "0 0"\n')
357 357 if options.extra_config_opt:
358 358 for opt in options.extra_config_opt:
359 359 section, key = opt.split('.', 1)
360 360 assert '=' in key, ('extra config opt %s must '
361 361 'have an = for assignment' % opt)
362 362 hgrc.write('[%s]\n%s\n' % (section, key))
363 363 hgrc.close()
364 364
365 365 def createenv(options, testtmp, threadtmp, port):
366 366 env = os.environ.copy()
367 367 env['TESTTMP'] = testtmp
368 368 env['HOME'] = testtmp
369 369 env["HGPORT"] = str(port)
370 370 env["HGPORT1"] = str(port + 1)
371 371 env["HGPORT2"] = str(port + 2)
372 372 env["HGRCPATH"] = os.path.join(threadtmp, '.hgrc')
373 373 env["DAEMON_PIDS"] = os.path.join(threadtmp, 'daemon.pids')
374 374 env["HGEDITOR"] = sys.executable + ' -c "import sys; sys.exit(0)"'
375 375 env["HGMERGE"] = "internal:merge"
376 376 env["HGUSER"] = "test"
377 377 env["HGENCODING"] = "ascii"
378 378 env["HGENCODINGMODE"] = "strict"
379 379
380 380 # Reset some environment variables to well-known values so that
381 381 # the tests produce repeatable output.
382 382 env['LANG'] = env['LC_ALL'] = env['LANGUAGE'] = 'C'
383 383 env['TZ'] = 'GMT'
384 384 env["EMAIL"] = "Foo Bar <foo.bar@example.com>"
385 385 env['COLUMNS'] = '80'
386 386 env['TERM'] = 'xterm'
387 387
388 388 for k in ('HG HGPROF CDPATH GREP_OPTIONS http_proxy no_proxy ' +
389 389 'NO_PROXY').split():
390 390 if k in env:
391 391 del env[k]
392 392
393 393 # unset env related to hooks
394 394 for k in env.keys():
395 395 if k.startswith('HG_'):
396 396 del env[k]
397 397
398 398 return env
399 399
400 400 def checktools():
401 401 # Before we go any further, check for prerequisite tools;
402 402 # stuff from coreutils (cat, rm, etc) is not tested
403 403 for p in requiredtools:
404 404 if os.name == 'nt' and not p.endswith('.exe'):
405 405 p += '.exe'
406 406 found = findprogram(p)
407 407 if found:
408 408 vlog("# Found prerequisite", p, "at", found)
409 409 else:
410 410 print "WARNING: Did not find prerequisite tool: "+p
411 411
412 412 def terminate(proc):
413 413 """Terminate subprocess (with fallback for Python versions < 2.6)"""
414 414 vlog('# Terminating process %d' % proc.pid)
415 415 try:
416 416 getattr(proc, 'terminate', lambda : os.kill(proc.pid, signal.SIGTERM))()
417 417 except OSError:
418 418 pass
419 419
420 420 def killdaemons(pidfile):
421 421 return killmod.killdaemons(pidfile, tryhard=False, remove=True,
422 422 logfn=vlog)
423 423
424 424 def cleanup(options):
425 425 if not options.keep_tmpdir:
426 426 vlog("# Cleaning up HGTMP", HGTMP)
427 427 shutil.rmtree(HGTMP, True)
428 428 for f in createdfiles:
429 429 try:
430 430 os.remove(f)
431 431 except OSError:
432 432 pass
433 433
434 434 def usecorrectpython():
435 435 # some tests run the python interpreter. they must use the same
436 436 # interpreter we use or bad things will happen.
437 437 pyexename = sys.platform == 'win32' and 'python.exe' or 'python'
438 438 if getattr(os, 'symlink', None):
439 439 vlog("# Making python executable in test path a symlink to '%s'" %
440 440 sys.executable)
441 441 mypython = os.path.join(TMPBINDIR, pyexename)
442 442 try:
443 443 if os.readlink(mypython) == sys.executable:
444 444 return
445 445 os.unlink(mypython)
446 446 except OSError, err:
447 447 if err.errno != errno.ENOENT:
448 448 raise
449 449 if findprogram(pyexename) != sys.executable:
450 450 try:
451 451 os.symlink(sys.executable, mypython)
452 452 createdfiles.append(mypython)
453 453 except OSError, err:
454 454 # child processes may race, which is harmless
455 455 if err.errno != errno.EEXIST:
456 456 raise
457 457 else:
458 458 exedir, exename = os.path.split(sys.executable)
459 459 vlog("# Modifying search path to find %s as %s in '%s'" %
460 460 (exename, pyexename, exedir))
461 461 path = os.environ['PATH'].split(os.pathsep)
462 462 while exedir in path:
463 463 path.remove(exedir)
464 464 os.environ['PATH'] = os.pathsep.join([exedir] + path)
465 465 if not findprogram(pyexename):
466 466 print "WARNING: Cannot find %s in search path" % pyexename
467 467
468 468 def installhg(options):
469 469 vlog("# Performing temporary installation of HG")
470 470 installerrs = os.path.join("tests", "install.err")
471 471 compiler = ''
472 472 if options.compiler:
473 473 compiler = '--compiler ' + options.compiler
474 474 pure = options.pure and "--pure" or ""
475 475 py3 = ''
476 476 if sys.version_info[0] == 3:
477 477 py3 = '--c2to3'
478 478
479 479 # Run installer in hg root
480 480 script = os.path.realpath(sys.argv[0])
481 481 hgroot = os.path.dirname(os.path.dirname(script))
482 482 os.chdir(hgroot)
483 483 nohome = '--home=""'
484 484 if os.name == 'nt':
485 485 # The --home="" trick works only on OSes where os.sep == '/'
486 486 # because of a distutils convert_path() fast-path. Avoid it at
487 487 # least on Windows for now, deal with .pydistutils.cfg bugs
488 488 # when they happen.
489 489 nohome = ''
490 490 cmd = ('%(exe)s setup.py %(py3)s %(pure)s clean --all'
491 491 ' build %(compiler)s --build-base="%(base)s"'
492 492 ' install --force --prefix="%(prefix)s" --install-lib="%(libdir)s"'
493 493 ' --install-scripts="%(bindir)s" %(nohome)s >%(logfile)s 2>&1'
494 494 % {'exe': sys.executable, 'py3': py3, 'pure': pure,
495 495 'compiler': compiler, 'base': os.path.join(HGTMP, "build"),
496 496 'prefix': INST, 'libdir': PYTHONDIR, 'bindir': BINDIR,
497 497 'nohome': nohome, 'logfile': installerrs})
498 498 vlog("# Running", cmd)
499 499 if os.system(cmd) == 0:
500 500 if not options.verbose:
501 501 os.remove(installerrs)
502 502 else:
503 503 f = open(installerrs)
504 504 for line in f:
505 505 print line,
506 506 f.close()
507 507 sys.exit(1)
508 508 os.chdir(TESTDIR)
509 509
510 510 usecorrectpython()
511 511
512 512 if options.py3k_warnings and not options.anycoverage:
513 513 vlog("# Updating hg command to enable Py3k Warnings switch")
514 514 f = open(os.path.join(BINDIR, 'hg'), 'r')
515 515 lines = [line.rstrip() for line in f]
516 516 lines[0] += ' -3'
517 517 f.close()
518 518 f = open(os.path.join(BINDIR, 'hg'), 'w')
519 519 for line in lines:
520 520 f.write(line + '\n')
521 521 f.close()
522 522
523 523 hgbat = os.path.join(BINDIR, 'hg.bat')
524 524 if os.path.isfile(hgbat):
525 525 # hg.bat expects to be put in bin/scripts while the run-tests.py
526 526 # installation layout puts it in bin/ directly. Fix it
527 527 f = open(hgbat, 'rb')
528 528 data = f.read()
529 529 f.close()
530 530 if '"%~dp0..\python" "%~dp0hg" %*' in data:
531 531 data = data.replace('"%~dp0..\python" "%~dp0hg" %*',
532 532 '"%~dp0python" "%~dp0hg" %*')
533 533 f = open(hgbat, 'wb')
534 534 f.write(data)
535 535 f.close()
536 536 else:
537 537 print 'WARNING: cannot fix hg.bat reference to python.exe'
538 538
539 539 if options.anycoverage:
540 540 custom = os.path.join(TESTDIR, 'sitecustomize.py')
541 541 target = os.path.join(PYTHONDIR, 'sitecustomize.py')
542 542 vlog('# Installing coverage trigger to %s' % target)
543 543 shutil.copyfile(custom, target)
544 544 rc = os.path.join(TESTDIR, '.coveragerc')
545 545 vlog('# Installing coverage rc to %s' % rc)
546 546 os.environ['COVERAGE_PROCESS_START'] = rc
547 547 fn = os.path.join(INST, '..', '.coverage')
548 548 os.environ['COVERAGE_FILE'] = fn
549 549
550 550 def outputtimes(options):
551 551 vlog('# Producing time report')
552 552 times.sort(key=lambda t: (t[1], t[0]), reverse=True)
553 553 cols = '%7.3f %s'
554 554 print '\n%-7s %s' % ('Time', 'Test')
555 555 for test, timetaken in times:
556 556 print cols % (timetaken, test)
557 557
558 558 def outputcoverage(options):
559 559
560 560 vlog('# Producing coverage report')
561 561 os.chdir(PYTHONDIR)
562 562
563 563 def covrun(*args):
564 564 cmd = 'coverage %s' % ' '.join(args)
565 565 vlog('# Running: %s' % cmd)
566 566 os.system(cmd)
567 567
568 568 covrun('-c')
569 569 omit = ','.join(os.path.join(x, '*') for x in [BINDIR, TESTDIR])
570 570 covrun('-i', '-r', '"--omit=%s"' % omit) # report
571 571 if options.htmlcov:
572 572 htmldir = os.path.join(TESTDIR, 'htmlcov')
573 573 covrun('-i', '-b', '"--directory=%s"' % htmldir, '"--omit=%s"' % omit)
574 574 if options.annotate:
575 575 adir = os.path.join(TESTDIR, 'annotated')
576 576 if not os.path.isdir(adir):
577 577 os.mkdir(adir)
578 578 covrun('-i', '-a', '"--directory=%s"' % adir, '"--omit=%s"' % omit)
579 579
580 580 def pytest(test, wd, options, replacements, env):
581 581 py3kswitch = options.py3k_warnings and ' -3' or ''
582 582 cmd = '%s%s "%s"' % (PYTHON, py3kswitch, test)
583 583 vlog("# Running", cmd)
584 584 if os.name == 'nt':
585 585 replacements.append((r'\r\n', '\n'))
586 586 return run(cmd, wd, options, replacements, env)
587 587
588 588 needescape = re.compile(r'[\x00-\x08\x0b-\x1f\x7f-\xff]').search
589 589 escapesub = re.compile(r'[\x00-\x08\x0b-\x1f\\\x7f-\xff]').sub
590 590 escapemap = dict((chr(i), r'\x%02x' % i) for i in range(256))
591 591 escapemap.update({'\\': '\\\\', '\r': r'\r'})
592 592 def escapef(m):
593 593 return escapemap[m.group(0)]
594 594 def stringescape(s):
595 595 return escapesub(escapef, s)
596 596
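The escape table above can be exercised as a standalone sketch (ported to Python 3 for illustration; the surrounding runner is Python 2, so this is an approximation, not the shipped code):

```python
import re

# Map non-printable bytes to \xNN escapes, as the "(esc)" output marker does.
escapesub = re.compile(r'[\x00-\x08\x0b-\x1f\\\x7f-\xff]').sub
escapemap = dict((chr(i), r'\x%02x' % i) for i in range(256))
escapemap.update({'\\': '\\\\', '\r': r'\r'})

def stringescape(s):
    # Replace every byte in the escape class with its printable form.
    return escapesub(lambda m: escapemap[m.group(0)], s)

# A control character becomes a visible \xNN sequence; backslashes double up.
assert stringescape('bell \x07') == 'bell \\x07'
assert stringescape('a\\b') == 'a\\\\b'
```

Note that tab (`\x09`) and newline (`\x0a`) are deliberately outside the character class, so ordinary test output passes through unescaped.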
597 597 def rematch(el, l):
598 598 try:
599 599 # use \Z to ensure that the regex matches to the end of the string
600 600 if os.name == 'nt':
601 601 return re.match(el + r'\r?\n\Z', l)
602 602 return re.match(el + r'\n\Z', l)
603 603 except re.error:
604 604 # el is an invalid regex
605 605 return False
606 606
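As a side note, the `\Z` anchoring used by rematch above matters because `$` alone tolerates a trailing newline; a quick standalone illustration:

```python
import re

# '$' also matches just before a final newline, so it would accept input
# with a trailing newline the pattern never mentioned.
assert re.match(r'abc$', 'abc\n') is not None
# '\Z' only matches at the true end of the string.
assert re.match(r'abc\Z', 'abc\n') is None
# Hence the runner appends the expected newline itself: el + r'\n\Z'.
assert re.match(r'abc\n\Z', 'abc\n') is not None
```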
607 607 def globmatch(el, l):
608 608 # The only supported special characters are * and ? plus / which also
609 # matches \ on windows. Escaping of these caracters is supported.
609 # matches \ on windows. Escaping of these characters is supported.
610 610 if el + '\n' == l:
611 611 if os.altsep:
612 612 # matching on "/" is not needed for this line
613 613 return '-glob'
614 614 return True
615 615 i, n = 0, len(el)
616 616 res = ''
617 617 while i < n:
618 618 c = el[i]
619 619 i += 1
620 620 if c == '\\' and el[i] in '*?\\/':
621 621 res += el[i - 1:i + 1]
622 622 i += 1
623 623 elif c == '*':
624 624 res += '.*'
625 625 elif c == '?':
626 626 res += '.'
627 627 elif c == '/' and os.altsep:
628 628 res += '[/\\\\]'
629 629 else:
630 630 res += re.escape(c)
631 631 return rematch(res, l)
632 632
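The glob-to-regex translation performed by globmatch can be sketched on its own (a simplified, hypothetical rewrite: only `*`, `?`, `\` and `/` are special, as the comment above states, and the Windows `os.altsep` handling of `/` is omitted here):

```python
import re

def glob_to_regex(el):
    # Translate the test suite's restricted glob syntax into a regex.
    i, n = 0, len(el)
    res = ''
    while i < n:
        c = el[i]
        i += 1
        if c == '\\' and i < n and el[i] in '*?\\/':
            res += el[i - 1:i + 1]   # keep the escape sequence verbatim
            i += 1
        elif c == '*':
            res += '.*'              # '*' matches any run of characters
        elif c == '?':
            res += '.'               # '?' matches exactly one character
        else:
            res += re.escape(c)      # everything else is literal
    return res

# '*' spans arbitrary text; '?' is exactly one character:
assert re.match(glob_to_regex('file-*.t') + r'\Z', 'file-abc.t')
assert not re.match(glob_to_regex('file-?.t') + r'\Z', 'file-abc.t')
```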
633 633 def linematch(el, l):
634 634 if el == l: # perfect match (fast)
635 635 return True
636 636 if el:
637 637 if el.endswith(" (esc)\n"):
638 638 el = el[:-7].decode('string-escape') + '\n'
639 639 if el == l or os.name == 'nt' and el[:-1] + '\r\n' == l:
640 640 return True
641 641 if el.endswith(" (re)\n"):
642 642 return rematch(el[:-6], l)
643 643 if el.endswith(" (glob)\n"):
644 644 return globmatch(el[:-8], l)
645 645 if os.altsep and l.replace('\\', '/') == el:
646 646 return '+glob'
647 647 return False
648 648
649 649 def tsttest(test, wd, options, replacements, env):
650 650 # We generate a shell script which outputs unique markers to line
651 651 # up script results with our source. These markers include input
652 652 # line number and the last return code
653 653 salt = "SALT" + str(time.time())
654 654 def addsalt(line, inpython):
655 655 if inpython:
656 656 script.append('%s %d 0\n' % (salt, line))
657 657 else:
658 658 script.append('echo %s %s $?\n' % (salt, line))
659 659
660 660 # After we run the shell script, we re-unify the script output
661 661 # with non-active parts of the source, with synchronization by our
662 662 # SALT line number markers. The after table contains the
663 663 # non-active components, ordered by line number
664 664 after = {}
665 665 pos = prepos = -1
666 666
667 # Expected shellscript output
667 # Expected shell script output
668 668 expected = {}
669 669
670 670 # We keep track of whether or not we're in a Python block so we
671 671 # can generate the surrounding doctest magic
672 672 inpython = False
673 673
674 674 # True or False when in a true or false conditional section
675 675 skipping = None
676 676
677 677 def hghave(reqs):
678 678 # TODO: do something smarter when all other uses of hghave are gone
679 679 tdir = TESTDIR.replace('\\', '/')
680 680 proc = Popen4('%s -c "%s/hghave %s"' %
681 681 (options.shell, tdir, ' '.join(reqs)), wd, 0)
682 682 stdout, stderr = proc.communicate()
683 683 ret = proc.wait()
684 684 if wifexited(ret):
685 685 ret = os.WEXITSTATUS(ret)
686 686 if ret == 2:
687 687 print stdout
688 688 sys.exit(1)
689 689 return ret == 0
690 690
691 691 f = open(test)
692 692 t = f.readlines()
693 693 f.close()
694 694
695 695 script = []
696 696 if options.debug:
697 697 script.append('set -x\n')
698 698 if os.getenv('MSYSTEM'):
699 699 script.append('alias pwd="pwd -W"\n')
700 700 n = 0
701 701 for n, l in enumerate(t):
702 702 if not l.endswith('\n'):
703 703 l += '\n'
704 704 if l.startswith('#if'):
705 705 lsplit = l.split()
706 706 if len(lsplit) < 2 or lsplit[0] != '#if':
707 707 after.setdefault(pos, []).append(' !!! invalid #if\n')
708 708 if skipping is not None:
709 709 after.setdefault(pos, []).append(' !!! nested #if\n')
710 710 skipping = not hghave(lsplit[1:])
711 711 after.setdefault(pos, []).append(l)
712 712 elif l.startswith('#else'):
713 713 if skipping is None:
714 714 after.setdefault(pos, []).append(' !!! missing #if\n')
715 715 skipping = not skipping
716 716 after.setdefault(pos, []).append(l)
717 717 elif l.startswith('#endif'):
718 718 if skipping is None:
719 719 after.setdefault(pos, []).append(' !!! missing #if\n')
720 720 skipping = None
721 721 after.setdefault(pos, []).append(l)
722 722 elif skipping:
723 723 after.setdefault(pos, []).append(l)
724 724 elif l.startswith(' >>> '): # python inlines
725 725 after.setdefault(pos, []).append(l)
726 726 prepos = pos
727 727 pos = n
728 728 if not inpython:
729 729 # we've just entered a Python block, add the header
730 730 inpython = True
731 731 addsalt(prepos, False) # make sure we report the exit code
732 732 script.append('%s -m heredoctest <<EOF\n' % PYTHON)
733 733 addsalt(n, True)
734 734 script.append(l[2:])
735 735 elif l.startswith(' ... '): # python inlines
736 736 after.setdefault(prepos, []).append(l)
737 737 script.append(l[2:])
738 738 elif l.startswith(' $ '): # commands
739 739 if inpython:
740 740 script.append("EOF\n")
741 741 inpython = False
742 742 after.setdefault(pos, []).append(l)
743 743 prepos = pos
744 744 pos = n
745 745 addsalt(n, False)
746 746 cmd = l[4:].split()
747 747 if len(cmd) == 2 and cmd[0] == 'cd':
748 748 l = ' $ cd %s || exit 1\n' % cmd[1]
749 749 script.append(l[4:])
750 750 elif l.startswith(' > '): # continuations
751 751 after.setdefault(prepos, []).append(l)
752 752 script.append(l[4:])
753 753 elif l.startswith(' '): # results
754 754 # queue up a list of expected results
755 755 expected.setdefault(pos, []).append(l[2:])
756 756 else:
757 757 if inpython:
758 758 script.append("EOF\n")
759 759 inpython = False
760 760 # non-command/result - queue up for merged output
761 761 after.setdefault(pos, []).append(l)
762 762
763 763 if inpython:
764 764 script.append("EOF\n")
765 765 if skipping is not None:
766 766 after.setdefault(pos, []).append(' !!! missing #endif\n')
767 767 addsalt(n + 1, False)
768 768
769 769 # Write out the script and execute it
770 770 name = wd + '.sh'
771 771 f = open(name, 'w')
772 772 for l in script:
773 773 f.write(l)
774 774 f.close()
775 775
776 776 cmd = '%s "%s"' % (options.shell, name)
777 777 vlog("# Running", cmd)
778 778 exitcode, output = run(cmd, wd, options, replacements, env)
779 779 # do not merge output if skipped, return hghave message instead
780 780 # similarly, with --debug, output is None
781 781 if exitcode == SKIPPED_STATUS or output is None:
782 782 return exitcode, output
783 783
784 784 # Merge the script output back into a unified test
785 785
786 786 warnonly = 1 # 1: not yet, 2: yes, 3: for sure not
787 787 if exitcode != 0: # failure has been reported
788 788 warnonly = 3 # set to "for sure not"
789 789 pos = -1
790 790 postout = []
791 791 for l in output:
792 792 lout, lcmd = l, None
793 793 if salt in l:
794 794 lout, lcmd = l.split(salt, 1)
795 795
796 796 if lout:
797 797 if not lout.endswith('\n'):
798 798 lout += ' (no-eol)\n'
799 799
800 800 # find the expected output at the current position
801 801 el = None
802 802 if pos in expected and expected[pos]:
803 803 el = expected[pos].pop(0)
804 804
805 805 r = linematch(el, lout)
806 806 if isinstance(r, str):
807 807 if r == '+glob':
808 808 lout = el[:-1] + ' (glob)\n'
809 809 r = '' # warn only this line
810 810 elif r == '-glob':
811 811 lout = ''.join(el.rsplit(' (glob)', 1))
812 812 r = '' # warn only this line
813 813 else:
814 814 log('\ninfo, unknown linematch result: %r\n' % r)
815 815 r = False
816 816 if r:
817 817 postout.append(" " + el)
818 818 else:
819 819 if needescape(lout):
820 820 lout = stringescape(lout.rstrip('\n')) + " (esc)\n"
821 821 postout.append(" " + lout) # let diff deal with it
822 822 if r != '': # if line failed
823 823 warnonly = 3 # set to "for sure not"
824 824 elif warnonly == 1: # is "not yet" (and line is warn only)
825 825 warnonly = 2 # set to "yes" do warn
826 826
827 827 if lcmd:
828 828 # add on last return code
829 829 ret = int(lcmd.split()[1])
830 830 if ret != 0:
831 831 postout.append(" [%s]\n" % ret)
832 832 if pos in after:
833 833 # merge in non-active test bits
834 834 postout += after.pop(pos)
835 835 pos = int(lcmd.split()[0])
836 836
837 837 if pos in after:
838 838 postout += after.pop(pos)
839 839
840 840 if warnonly == 2:
841 841 exitcode = False # set exitcode to warned
842 842 return exitcode, postout
843 843
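The SALT-marker bookkeeping used by tsttest above can be illustrated with a self-contained sketch (the output string here is hypothetical stand-in data; the real runner gets it by executing the generated shell script):

```python
# Each command in the generated script is followed by `echo SALT <lineno> $?`,
# so the combined output can be split back into per-command chunks carrying
# the source line number and the exit code of the preceding command.
salt = 'SALT1234567890.0'  # in the runner: "SALT" + str(time.time())

output = 'hello\n%s 1 0\n%s 2 1\n' % (salt, salt)  # pretend script output

chunks = {}
pending = []
for line in output.splitlines():
    if salt in line:
        # Marker line: everything after the salt is "<lineno> <exitcode>".
        lineno, ret = line.split(salt, 1)[1].split()
        chunks[int(lineno)] = (pending, int(ret))
        pending = []
    else:
        pending.append(line)  # ordinary output belongs to the next marker

assert chunks == {1: (['hello'], 0), 2: ([], 1)}
```

A salt derived from the current time makes collisions with genuine test output vanishingly unlikely, which is why the runner can safely use plain substring search (`salt in l`) when re-unifying.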
844 844 wifexited = getattr(os, "WIFEXITED", lambda x: False)
845 845 def run(cmd, wd, options, replacements, env):
846 846 """Run command in a sub-process, capturing the output (stdout and stderr).
847 847 Return a tuple (exitcode, output). output is None in debug mode."""
848 848 # TODO: Use subprocess.Popen if we're running on Python 2.4
849 849 if options.debug:
850 850 proc = subprocess.Popen(cmd, shell=True, cwd=wd, env=env)
851 851 ret = proc.wait()
852 852 return (ret, None)
853 853
854 854 proc = Popen4(cmd, wd, options.timeout, env)
855 855 def cleanup():
856 856 terminate(proc)
857 857 ret = proc.wait()
858 858 if ret == 0:
859 859 ret = signal.SIGTERM << 8
860 860 killdaemons(env['DAEMON_PIDS'])
861 861 return ret
862 862
863 863 output = ''
864 864 proc.tochild.close()
865 865
866 866 try:
867 867 output = proc.fromchild.read()
868 868 except KeyboardInterrupt:
869 869 vlog('# Handling keyboard interrupt')
870 870 cleanup()
871 871 raise
872 872
873 873 ret = proc.wait()
874 874 if wifexited(ret):
875 875 ret = os.WEXITSTATUS(ret)
876 876
877 877 if proc.timeout:
878 878 ret = 'timeout'
879 879
880 880 if ret:
881 881 killdaemons(env['DAEMON_PIDS'])
882 882
883 883 if abort:
884 884 raise KeyboardInterrupt()
885 885
886 886 for s, r in replacements:
887 887 output = re.sub(s, r, output)
888 888 return ret, output.splitlines(True)
889 889
890 890 def runone(options, test, count):
891 891 '''returns a result element: (code, test, msg)'''
892 892
893 893 def skip(msg):
894 894 if options.verbose:
895 895 log("\nSkipping %s: %s" % (testpath, msg))
896 896 return 's', test, msg
897 897
898 898 def fail(msg, ret):
899 899 warned = ret is False
900 900 if not options.nodiff:
901 901 log("\n%s: %s %s" % (warned and 'Warning' or 'ERROR', test, msg))
902 902 if (not ret and options.interactive
903 903 and os.path.exists(testpath + ".err")):
904 904 iolock.acquire()
905 905 print "Accept this change? [n] ",
906 906 answer = sys.stdin.readline().strip()
907 907 iolock.release()
908 908 if answer.lower() in "y yes".split():
909 909 if test.endswith(".t"):
910 910 rename(testpath + ".err", testpath)
911 911 else:
912 912 rename(testpath + ".err", testpath + ".out")
913 913 return '.', test, ''
914 914 return warned and '~' or '!', test, msg
915 915
916 916 def success():
917 917 return '.', test, ''
918 918
919 919 def ignore(msg):
920 920 return 'i', test, msg
921 921
922 922 def describe(ret):
923 923 if ret < 0:
924 924 return 'killed by signal %d' % -ret
925 925 return 'returned error code %d' % ret
926 926
927 927 testpath = os.path.join(TESTDIR, test)
928 928 err = os.path.join(TESTDIR, test + ".err")
929 929 lctest = test.lower()
930 930
931 931 if not os.path.exists(testpath):
932 932 return skip("doesn't exist")
933 933
934 934 if not (options.whitelisted and test in options.whitelisted):
935 935 if options.blacklist and test in options.blacklist:
936 936 return skip("blacklisted")
937 937
938 938 if options.retest and not os.path.exists(test + ".err"):
939 939 return ignore("not retesting")
940 940
941 941 if options.keywords:
942 942 fp = open(test)
943 943 t = fp.read().lower() + test.lower()
944 944 fp.close()
945 945 for k in options.keywords.lower().split():
946 946 if k in t:
947 947 break
948 948 else:
949 949 return ignore("doesn't match keyword")
950 950
951 951 if not os.path.basename(lctest).startswith("test-"):
952 952 return skip("not a test file")
953 953 for ext, func, out in testtypes:
954 954 if lctest.endswith(ext):
955 955 runner = func
956 956 ref = os.path.join(TESTDIR, test + out)
957 957 break
958 958 else:
959 959 return skip("unknown test type")
960 960
961 961 vlog("# Test", test)
962 962
963 963 if os.path.exists(err):
964 964 os.remove(err) # Remove any previous output files
965 965
966 966 # Make a tmp subdirectory to work in
967 967 threadtmp = os.path.join(HGTMP, "child%d" % count)
968 968 testtmp = os.path.join(threadtmp, os.path.basename(test))
969 969 os.mkdir(threadtmp)
970 970 os.mkdir(testtmp)
971 971
972 972 port = options.port + count * 3
973 973 replacements = [
974 974 (r':%s\b' % port, ':$HGPORT'),
975 975 (r':%s\b' % (port + 1), ':$HGPORT1'),
976 976 (r':%s\b' % (port + 2), ':$HGPORT2'),
977 977 ]
978 978 if os.name == 'nt':
979 979 replacements.append(
980 980 (''.join(c.isalpha() and '[%s%s]' % (c.lower(), c.upper()) or
981 981 c in '/\\' and r'[/\\]' or
982 982 c.isdigit() and c or
983 983 '\\' + c
984 984 for c in testtmp), '$TESTTMP'))
985 985 else:
986 986 replacements.append((re.escape(testtmp), '$TESTTMP'))
987 987
988 988 env = createenv(options, testtmp, threadtmp, port)
989 989 createhgrc(env['HGRCPATH'], options)
990 990
991 991 starttime = time.time()
992 992 try:
993 993 ret, out = runner(testpath, testtmp, options, replacements, env)
994 994 except KeyboardInterrupt:
995 995 endtime = time.time()
996 996 log('INTERRUPTED: %s (after %d seconds)' % (test, endtime - starttime))
997 997 raise
998 998 endtime = time.time()
999 999 times.append((test, endtime - starttime))
1000 1000 vlog("# Ret was:", ret)
1001 1001
1002 1002 killdaemons(env['DAEMON_PIDS'])
1003 1003
1004 1004 skipped = (ret == SKIPPED_STATUS)
1005 1005
1006 1006 # If we're not in --debug mode and a reference output file exists,
1007 1007 # check test output against it.
1008 1008 if options.debug:
1009 1009 refout = None # to match "out is None"
1010 1010 elif os.path.exists(ref):
1011 1011 f = open(ref, "r")
1012 1012 refout = f.read().splitlines(True)
1013 1013 f.close()
1014 1014 else:
1015 1015 refout = []
1016 1016
1017 1017 if (ret != 0 or out != refout) and not skipped and not options.debug:
1018 1018 # Save errors to a file for diagnosis
1019 1019 f = open(err, "wb")
1020 1020 for line in out:
1021 1021 f.write(line)
1022 1022 f.close()
1023 1023
1024 1024 if skipped:
1025 1025 if out is None: # debug mode: nothing to parse
1026 1026 missing = ['unknown']
1027 1027 failed = None
1028 1028 else:
1029 1029 missing, failed = parsehghaveoutput(out)
1030 1030 if not missing:
1031 1031 missing = ['irrelevant']
1032 1032 if failed:
1033 1033 result = fail("hghave failed checking for %s" % failed[-1], ret)
1034 1034 skipped = False
1035 1035 else:
1036 1036 result = skip(missing[-1])
1037 1037 elif ret == 'timeout':
1038 1038 result = fail("timed out", ret)
1039 1039 elif out != refout:
1040 1040 info = {}
1041 1041 if not options.nodiff:
1042 1042 iolock.acquire()
1043 1043 if options.view:
1044 1044 os.system("%s %s %s" % (options.view, ref, err))
1045 1045 else:
1046 1046 info = showdiff(refout, out, ref, err)
1047 1047 iolock.release()
1048 1048 msg = ""
1049 1049 if info.get('servefail'): msg += "serve failed and "
1050 1050 if ret:
1051 1051 msg += "output changed and " + describe(ret)
1052 1052 else:
1053 1053 msg += "output changed"
1054 1054 result = fail(msg, ret)
1055 1055 elif ret:
1056 1056 result = fail(describe(ret), ret)
1057 1057 else:
1058 1058 result = success()
1059 1059
1060 1060 if not options.verbose:
1061 1061 iolock.acquire()
1062 1062 sys.stdout.write(result[0])
1063 1063 sys.stdout.flush()
1064 1064 iolock.release()
1065 1065
1066 1066 if not options.keep_tmpdir:
1067 1067 shutil.rmtree(threadtmp, True)
1068 1068 return result
1069 1069
1070 1070 _hgpath = None
1071 1071
1072 1072 def _gethgpath():
1073 1073 """Return the path to the mercurial package that is actually found by
1074 1074 the current Python interpreter."""
1075 1075 global _hgpath
1076 1076 if _hgpath is not None:
1077 1077 return _hgpath
1078 1078
1079 1079 cmd = '%s -c "import mercurial; print (mercurial.__path__[0])"'
1080 1080 pipe = os.popen(cmd % PYTHON)
1081 1081 try:
1082 1082 _hgpath = pipe.read().strip()
1083 1083 finally:
1084 1084 pipe.close()
1085 1085 return _hgpath
1086 1086
1087 1087 def _checkhglib(verb):
1088 1088 """Ensure that the 'mercurial' package imported by python is
1089 1089 the one we expect it to be. If not, print a warning to stderr."""
1090 1090 expecthg = os.path.join(PYTHONDIR, 'mercurial')
1091 1091 actualhg = _gethgpath()
1092 1092 if os.path.abspath(actualhg) != os.path.abspath(expecthg):
1093 1093 sys.stderr.write('warning: %s with unexpected mercurial lib: %s\n'
1094 1094 ' (expected %s)\n'
1095 1095 % (verb, actualhg, expecthg))
1096 1096
1097 1097 results = {'.':[], '!':[], '~': [], 's':[], 'i':[]}
1098 1098 times = []
1099 1099 iolock = threading.Lock()
1100 1100 abort = False
1101 1101
1102 1102 def scheduletests(options, tests):
1103 1103 jobs = options.jobs
1104 1104 done = queue.Queue()
1105 1105 running = 0
1106 1106 count = 0
1107 1107 global abort
1108 1108
1109 1109 def job(test, count):
1110 1110 try:
1111 1111 done.put(runone(options, test, count))
1112 1112 except KeyboardInterrupt:
1113 1113 pass
1114 1114 except: # re-raises
1115 1115 done.put(('!', test, 'run-test raised an error, see traceback'))
1116 1116 raise
1117 1117
1118 1118 try:
1119 1119 while tests or running:
1120 1120 if not done.empty() or running == jobs or not tests:
1121 1121 try:
1122 1122 code, test, msg = done.get(True, 1)
1123 1123 results[code].append((test, msg))
1124 1124 if options.first and code not in '.si':
1125 1125 break
1126 1126 except queue.Empty:
1127 1127 continue
1128 1128 running -= 1
1129 1129 if tests and not running == jobs:
1130 1130 test = tests.pop(0)
1131 1131 if options.loop:
1132 1132 tests.append(test)
1133 1133 t = threading.Thread(target=job, name=test, args=(test, count))
1134 1134 t.start()
1135 1135 running += 1
1136 1136 count += 1
1137 1137 except KeyboardInterrupt:
1138 1138 abort = True
1139 1139
1140 1140 def runtests(options, tests):
1141 1141 try:
1142 1142 if INST:
1143 1143 installhg(options)
1144 1144 _checkhglib("Testing")
1145 1145 else:
1146 1146 usecorrectpython()
1147 1147
1148 1148 if options.restart:
1149 1149 orig = list(tests)
1150 1150 while tests:
1151 1151 if os.path.exists(tests[0] + ".err"):
1152 1152 break
1153 1153 tests.pop(0)
1154 1154 if not tests:
1155 1155 print "running all tests"
1156 1156 tests = orig
1157 1157
1158 1158 scheduletests(options, tests)
1159 1159
1160 1160 failed = len(results['!'])
1161 1161 warned = len(results['~'])
1162 1162 tested = len(results['.']) + failed + warned
1163 1163 skipped = len(results['s'])
1164 1164 ignored = len(results['i'])
1165 1165
1166 1166 print
1167 1167 if not options.noskips:
1168 1168 for s in results['s']:
1169 1169 print "Skipped %s: %s" % s
1170 1170 for s in results['~']:
1171 1171 print "Warned %s: %s" % s
1172 1172 for s in results['!']:
1173 1173 print "Failed %s: %s" % s
1174 1174 _checkhglib("Tested")
1175 1175 print "# Ran %d tests, %d skipped, %d warned, %d failed." % (
1176 1176 tested, skipped + ignored, warned, failed)
1177 1177 if results['!']:
1178 1178 print 'python hash seed:', os.environ['PYTHONHASHSEED']
1179 1179 if options.time:
1180 1180 outputtimes(options)
1181 1181
1182 1182 if options.anycoverage:
1183 1183 outputcoverage(options)
1184 1184 except KeyboardInterrupt:
1185 1185 failed = True
1186 1186 print "\ninterrupted!"
1187 1187
1188 1188 if failed:
1189 1189 return 1
1190 1190 if warned:
1191 1191 return 80
1192 1192
1193 1193 testtypes = [('.py', pytest, '.out'),
1194 1194 ('.t', tsttest, '')]
1195 1195
1196 1196 def main(args, parser=None):
1197 1197 parser = parser or getparser()
1198 1198 (options, args) = parseargs(args, parser)
1199 1199 os.umask(022)
1200 1200
1201 1201 checktools()
1202 1202
1203 1203 if not args:
1204 1204 if options.changed:
1205 1205 proc = Popen4('hg st --rev "%s" -man0 .' % options.changed,
1206 1206 None, 0)
1207 1207 stdout, stderr = proc.communicate()
1208 1208 args = stdout.strip('\0').split('\0')
1209 1209 else:
1210 1210 args = os.listdir(".")
1211 1211
1212 1212 tests = [t for t in args
1213 1213 if os.path.basename(t).startswith("test-")
1214 1214 and (t.endswith(".py") or t.endswith(".t"))]
1215 1215
1216 1216 if options.random:
1217 1217 random.shuffle(tests)
1218 1218 else:
1219 1219 # keywords for slow tests
1220 1220 slow = 'svn gendoc check-code-hg'.split()
1221 1221 def sortkey(f):
1222 1222 # run largest tests first, as they tend to take the longest
1223 1223 try:
1224 1224 val = -os.stat(f).st_size
1225 1225 except OSError, e:
1226 1226 if e.errno != errno.ENOENT:
1227 1227 raise
1228 1228 return -1e9 # file does not exist, tell early
1229 1229 for kw in slow:
1230 1230 if kw in f:
1231 1231 val *= 10
1232 1232 return val
1233 1233 tests.sort(key=sortkey)
1234 1234
1235 1235 if 'PYTHONHASHSEED' not in os.environ:
1236 1236 # use a random python hash seed all the time
1237 1237 # we do the randomness ourselves to know what seed is used
1238 1238 os.environ['PYTHONHASHSEED'] = str(random.getrandbits(32))
1239 1239
1240 1240 global TESTDIR, HGTMP, INST, BINDIR, TMPBINDIR, PYTHONDIR, COVERAGE_FILE
1241 1241 TESTDIR = os.environ["TESTDIR"] = os.getcwd()
1242 1242 if options.tmpdir:
1243 1243 options.keep_tmpdir = True
1244 1244 tmpdir = options.tmpdir
1245 1245 if os.path.exists(tmpdir):
1246 1246 # Meaning of tmpdir has changed since 1.3: we used to create
1247 1247 # HGTMP inside tmpdir; now HGTMP is tmpdir. So fail if
1248 1248 # tmpdir already exists.
1249 1249 print "error: temp dir %r already exists" % tmpdir
1250 1250 return 1
1251 1251
1252 1252 # Automatically removing tmpdir sounds convenient, but could
1253 1253 # really annoy anyone in the habit of using "--tmpdir=/tmp"
1254 1254 # or "--tmpdir=$HOME".
1255 1255 #vlog("# Removing temp dir", tmpdir)
1256 1256 #shutil.rmtree(tmpdir)
1257 1257 os.makedirs(tmpdir)
1258 1258 else:
1259 1259 d = None
1260 1260 if os.name == 'nt':
1261 1261 # without this, we get the default temp dir location, but
1262 1262 # in all lowercase, which causes trouble with paths (issue3490)
1263 1263 d = os.getenv('TMP')
1264 1264 tmpdir = tempfile.mkdtemp('', 'hgtests.', d)
1265 1265 HGTMP = os.environ['HGTMP'] = os.path.realpath(tmpdir)
1266 1266
1267 1267 if options.with_hg:
1268 1268 INST = None
1269 1269 BINDIR = os.path.dirname(os.path.realpath(options.with_hg))
1270 1270 TMPBINDIR = os.path.join(HGTMP, 'install', 'bin')
1271 1271 os.makedirs(TMPBINDIR)
1272 1272
1273 1273 # This looks redundant with how Python initializes sys.path from
1274 1274 # the location of the script being executed. Needed because the
1275 1275 # "hg" specified by --with-hg is not the only Python script
1276 1276 # executed in the test suite that needs to import 'mercurial'
1277 1277 # ... which means it's not really redundant at all.
1278 1278 PYTHONDIR = BINDIR
1279 1279 else:
1280 1280 INST = os.path.join(HGTMP, "install")
1281 1281 BINDIR = os.environ["BINDIR"] = os.path.join(INST, "bin")
1282 1282 TMPBINDIR = BINDIR
1283 1283 PYTHONDIR = os.path.join(INST, "lib", "python")
1284 1284
1285 1285 os.environ["BINDIR"] = BINDIR
1286 1286 os.environ["PYTHON"] = PYTHON
1287 1287
1288 1288 path = [BINDIR] + os.environ["PATH"].split(os.pathsep)
1289 1289 if TMPBINDIR != BINDIR:
1290 1290 path = [TMPBINDIR] + path
1291 1291 os.environ["PATH"] = os.pathsep.join(path)
1292 1292
1293 1293 # Include TESTDIR in PYTHONPATH so that out-of-tree extensions
1294 1294 # can run .../tests/run-tests.py test-foo where test-foo
1295 1295 # adds an extension to HGRC. Also include the run-tests.py directory to import
1296 1296 # modules like heredoctest.
1297 1297 pypath = [PYTHONDIR, TESTDIR, os.path.abspath(os.path.dirname(__file__))]
1298 1298 # We have to augment PYTHONPATH, rather than simply replacing
1299 1299 # it, in case external libraries are only available via current
1300 1300 # PYTHONPATH. (In particular, the Subversion bindings on OS X
1301 1301 # are in /opt/subversion.)
1302 1302 oldpypath = os.environ.get(IMPL_PATH)
1303 1303 if oldpypath:
1304 1304 pypath.append(oldpypath)
1305 1305 os.environ[IMPL_PATH] = os.pathsep.join(pypath)
1306 1306
1307 1307 COVERAGE_FILE = os.path.join(TESTDIR, ".coverage")
1308 1308
1309 1309 vlog("# Using TESTDIR", TESTDIR)
1310 1310 vlog("# Using HGTMP", HGTMP)
1311 1311 vlog("# Using PATH", os.environ["PATH"])
1312 1312 vlog("# Using", IMPL_PATH, os.environ[IMPL_PATH])
1313 1313
1314 1314 try:
1315 1315 return runtests(options, tests) or 0
1316 1316 finally:
1317 1317 time.sleep(.1)
1318 1318 cleanup(options)
1319 1319
1320 1320 if __name__ == '__main__':
1321 1321 sys.exit(main(sys.argv[1:]))
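The PYTHONPATH handling in the hunk above augments the existing value rather than replacing it. A minimal standalone sketch of that logic (the function name `build_pypath` is hypothetical, not part of run-tests.py):

```python
import os

# Hypothetical sketch of the PYTHONPATH logic above: prepend our
# directories but append any pre-existing value, so libraries only
# reachable via the current PYTHONPATH (e.g. the Subversion bindings
# on OS X) remain importable.
def build_pypath(pythondir, testdir, scriptdir, oldpypath=None):
    pypath = [pythondir, testdir, scriptdir]
    if oldpypath:
        pypath.append(oldpypath)
    return os.pathsep.join(pypath)
```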
@@ -1,136 +1,136 b''
1 1 from mercurial import ancestor, commands, hg, ui, util
2 2
3 3 # graph is a dict of child->parent adjacency lists for this graph:
4 4 # o 13
5 5 # |
6 6 # | o 12
7 7 # | |
8 8 # | | o 11
9 9 # | | |\
10 10 # | | | | o 10
11 11 # | | | | |
12 12 # | o---+ | 9
13 13 # | | | | |
14 14 # o | | | | 8
15 15 # / / / /
16 16 # | | o | 7
17 17 # | | | |
18 18 # o---+ | 6
19 19 # / / /
20 20 # | | o 5
21 21 # | |/
22 22 # | o 4
23 23 # | |
24 24 # o | 3
25 25 # | |
26 26 # | o 2
27 27 # |/
28 28 # o 1
29 29 # |
30 30 # o 0
31 31
32 32 graph = {0: [-1], 1: [0], 2: [1], 3: [1], 4: [2], 5: [4], 6: [4],
33 33 7: [4], 8: [-1], 9: [6, 7], 10: [5], 11: [3, 7], 12: [9],
34 34 13: [8]}
35 35 pfunc = graph.get
36 36
37 37 class mockchangelog(object):
38 38 parentrevs = graph.get
39 39
40 40 def runmissingancestors(revs, bases):
41 41 print "%% ancestors of %s and not of %s" % (revs, bases)
42 42 print ancestor.missingancestors(revs, bases, pfunc)
43 43
44 44 def test_missingancestors():
45 45 # Empty revs
46 46 runmissingancestors([], [1])
47 47 runmissingancestors([], [])
48 48
49 49 # If bases is empty, it's the same as if it were [nullrev]
50 50 runmissingancestors([12], [])
51 51
52 52 # Trivial case: revs == bases
53 53 runmissingancestors([0], [0])
54 54 runmissingancestors([4, 5, 6], [6, 5, 4])
55 55
56 56 # With nullrev
57 57 runmissingancestors([-1], [12])
58 58 runmissingancestors([12], [-1])
59 59
60 60 # 9 is a parent of 12. 7 is a parent of 9, so an ancestor of 12. 6 is an
61 61 # ancestor of 12 but not of 7.
62 62 runmissingancestors([12], [9])
63 63 runmissingancestors([9], [12])
64 64 runmissingancestors([12, 9], [7])
65 65 runmissingancestors([7, 6], [12])
66 66
67 67 # More complex cases
68 68 runmissingancestors([10], [11, 12])
69 69 runmissingancestors([11], [10])
70 70 runmissingancestors([11], [10, 12])
71 71 runmissingancestors([12], [10])
72 72 runmissingancestors([12], [11])
73 73 runmissingancestors([10, 11, 12], [13])
74 74 runmissingancestors([13], [10, 11, 12])
75 75
76 76 def genlazyancestors(revs, stoprev=0, inclusive=False):
77 77 print ("%% lazy ancestor set for %s, stoprev = %s, inclusive = %s" %
78 78 (revs, stoprev, inclusive))
79 79 return ancestor.lazyancestors(mockchangelog, revs, stoprev=stoprev,
80 80 inclusive=inclusive)
81 81
82 82 def printlazyancestors(s, l):
83 83 print [n for n in l if n in s]
84 84
85 85 def test_lazyancestors():
86 86 # Empty revs
87 87 s = genlazyancestors([])
88 88 printlazyancestors(s, [3, 0, -1])
89 89
90 90 # Standard example
91 91 s = genlazyancestors([11, 13])
92 92 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
93 93
94 94 # Including revs
95 95 s = genlazyancestors([11, 13], inclusive=True)
96 96 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
97 97
98 98 # Test with stoprev
99 99 s = genlazyancestors([11, 13], stoprev=6)
100 100 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
101 101 s = genlazyancestors([11, 13], stoprev=6, inclusive=True)
102 102 printlazyancestors(s, [11, 13, 7, 9, 8, 3, 6, 4, 1, -1, 0])
103 103
104 104
105 105 # The C gca algorithm requires a real repo. These are textual descriptions of
106 # dags that have been known to be problematic.
106 # DAGs that have been known to be problematic.
107 107 dagtests = [
108 108 '+2*2*2/*3/2',
109 109 '+3*3/*2*2/*4*4/*4/2*4/2*2',
110 110 ]
111 111 def test_gca():
112 112 u = ui.ui()
113 113 for i, dag in enumerate(dagtests):
114 114 repo = hg.repository(u, 'gca%d' % i, create=1)
115 115 cl = repo.changelog
116 116 if not util.safehasattr(cl.index, 'ancestors'):
117 117 # C version not available
118 118 return
119 119
120 120 commands.debugbuilddag(u, repo, dag)
121 121 # Compare the results of the Python and C versions. This does not
122 122 # include choosing a winner when more than one gca exists -- we make
123 123 # sure both return exactly the same set of gcas.
124 124 for a in cl:
125 125 for b in cl:
126 126 cgcas = sorted(cl.index.ancestors(a, b))
127 127 pygcas = sorted(ancestor.ancestors(cl.parentrevs, a, b))
128 128 if cgcas != pygcas:
129 129 print "test_gca: for dag %s, gcas for %d, %d:" % (dag, a, b)
130 130 print " C returned: %s" % cgcas
131 131 print " Python returned: %s" % pygcas
132 132
133 133 if __name__ == '__main__':
134 134 test_missingancestors()
135 135 test_lazyancestors()
136 136 test_gca()
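The contract exercised by runmissingancestors above ("ancestors of revs that are not ancestors of bases", with -1 standing in for nullrev) can be restated as a naive reference sketch; `missing_ancestors` here is an illustrative reimplementation over the test's adjacency dict, not Mercurial's actual algorithm:

```python
def missing_ancestors(revs, bases, pfunc):
    """Naive reference: ancestors of revs (inclusive) that are not
    ancestors of bases (inclusive), ignoring the null revision -1."""
    def closure(heads):
        # Walk parent links to collect the full ancestor set.
        seen = set()
        stack = [r for r in heads if r != -1]
        while stack:
            r = stack.pop()
            if r not in seen:
                seen.add(r)
                stack.extend(p for p in pfunc(r) if p != -1)
        return seen
    return sorted(closure(revs) - closure(bases))

# The same child->parent adjacency dict as at the top of the test:
graph = {0: [-1], 1: [0], 2: [1], 3: [1], 4: [2], 5: [4], 6: [4],
         7: [4], 8: [-1], 9: [6, 7], 10: [5], 11: [3, 7], 12: [9],
         13: [8]}
```

For instance, `missing_ancestors([12], [9], graph.get)` yields `[12]`, matching the comment above that 9 (and hence 7) is already an ancestor of 12.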
@@ -1,191 +1,191 b''
1 1 $ hg init
2 2
3 3 no bookmarks
4 4
5 5 $ hg bookmarks
6 6 no bookmarks set
7 7
8 8 set bookmark X
9 9
10 10 $ hg bookmark X
11 11
12 12 list bookmarks
13 13
14 14 $ hg bookmark
15 15 * X -1:000000000000
16 16
17 17 list bookmarks with color
18 18
19 19 $ hg --config extensions.color= --config color.mode=ansi \
20 20 > bookmark --color=always
21 21 \x1b[0;32m * X -1:000000000000\x1b[0m (esc)
22 22
23 23 update to bookmark X
24 24
25 25 $ hg update X
26 26 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
27 27
28 28 list bookmarks
29 29
30 30 $ hg bookmarks
31 31 * X -1:000000000000
32 32
33 33 rename
34 34
35 35 $ hg bookmark -m X Z
36 36
37 37 list bookmarks
38 38
39 39 $ cat .hg/bookmarks.current
40 40 Z (no-eol)
41 41 $ cat .hg/bookmarks
42 42 0000000000000000000000000000000000000000 Z
43 43 $ hg bookmarks
44 44 * Z -1:000000000000
45 45
46 46 new bookmarks X and Y, first one made active
47 47
48 48 $ hg bookmark Y X
49 49
50 50 list bookmarks
51 51
52 52 $ hg bookmark
53 53 X -1:000000000000
54 54 * Y -1:000000000000
55 55 Z -1:000000000000
56 56
57 57 $ hg bookmark -d X
58 58
59 59 commit
60 60
61 61 $ echo 'b' > b
62 62 $ hg add b
63 63 $ hg commit -m'test'
64 64
65 65 list bookmarks
66 66
67 67 $ hg bookmark
68 68 * Y 0:719295282060
69 69 Z -1:000000000000
70 70
71 71 Verify that switching to Z updates the current bookmark:
72 72 $ hg update Z
73 73 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
74 74 $ hg bookmark
75 75 Y 0:719295282060
76 76 * Z -1:000000000000
77 77
78 78 Switch back to Y for the remaining tests in this file:
79 79 $ hg update Y
80 80 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
81 81
82 82 delete bookmarks
83 83
84 84 $ hg bookmark -d Y
85 85 $ hg bookmark -d Z
86 86
87 87 list bookmarks
88 88
89 89 $ hg bookmark
90 90 no bookmarks set
91 91
92 92 update to tip
93 93
94 94 $ hg update tip
95 95 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
96 96
97 97 set bookmark Y using -r ., making sure that the new
98 98 bookmark is not activated
99 99
100 100 $ hg bookmark -r . Y
101 101
102 102 list bookmarks, Y should not be active
103 103
104 104 $ hg bookmark
105 105 Y 0:719295282060
106 106
107 107 now, activate Y
108 108
109 109 $ hg up -q Y
110 110
111 111 set bookmark Z using -i
112 112
113 113 $ hg bookmark -r . -i Z
114 114 $ hg bookmarks
115 115 * Y 0:719295282060
116 116 Z 0:719295282060
117 117
118 118 deactivate current bookmark using -i
119 119
120 120 $ hg bookmark -i Y
121 121 $ hg bookmarks
122 122 Y 0:719295282060
123 123 Z 0:719295282060
124 124
125 125 $ hg up -q Y
126 126 $ hg bookmark -i
127 127 $ hg bookmarks
128 128 Y 0:719295282060
129 129 Z 0:719295282060
130 130 $ hg bookmark -i
131 131 no active bookmark
132 132 $ hg up -q Y
133 133 $ hg bookmarks
134 134 * Y 0:719295282060
135 135 Z 0:719295282060
136 136
137 137 deactivate current bookmark while renaming
138 138
139 139 $ hg bookmark -i -m Y X
140 140 $ hg bookmarks
141 141 X 0:719295282060
142 142 Z 0:719295282060
143 143
144 144 bare update moves the active bookmark forward and clears the divergent bookmarks
145 145
146 146 $ echo a > a
147 147 $ hg ci -Am1
148 148 adding a
149 149 $ echo b >> a
150 150 $ hg ci -Am2
151 151 $ hg bookmark X@1 -r 1
152 152 $ hg bookmark X@2 -r 2
153 153 $ hg update X
154 154 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
155 155 $ hg bookmarks
156 156 * X 0:719295282060
157 157 X@1 1:cc586d725fbe
158 158 X@2 2:49e1c4e84c58
159 159 Z 0:719295282060
160 160 $ hg update
161 161 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
162 162 updating bookmark X
163 163 $ hg bookmarks
164 164 * X 2:49e1c4e84c58
165 165 Z 0:719295282060
166 166
167 167 test deleting .hg/bookmarks.current when explicitly updating
168 168 to a revision
169 169
170 170 $ echo a >> b
171 171 $ hg ci -m.
172 172 $ hg up -q X
173 173 $ test -f .hg/bookmarks.current
174 174
175 175 try to update to it again to make sure we don't
176 176 set and then unset it
177 177
178 178 $ hg up -q X
179 179 $ test -f .hg/bookmarks.current
180 180
181 181 $ hg up -q 1
182 182 $ test -f .hg/bookmarks.current
183 183 [1]
184 184
185 185 when a bookmark is active, hg up -r . is
186 analogus to hg book -i <active bookmark>
186 analogous to hg book -i <active bookmark>
187 187
188 188 $ hg up -q X
189 189 $ hg up -q .
190 190 $ test -f .hg/bookmarks.current
191 191 [1]
@@ -1,650 +1,650 b''
1 1 Setting up test
2 2
3 3 $ hg init test
4 4 $ cd test
5 5 $ echo 0 > afile
6 6 $ hg add afile
7 7 $ hg commit -m "0.0"
8 8 $ echo 1 >> afile
9 9 $ hg commit -m "0.1"
10 10 $ echo 2 >> afile
11 11 $ hg commit -m "0.2"
12 12 $ echo 3 >> afile
13 13 $ hg commit -m "0.3"
14 14 $ hg update -C 0
15 15 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
16 16 $ echo 1 >> afile
17 17 $ hg commit -m "1.1"
18 18 created new head
19 19 $ echo 2 >> afile
20 20 $ hg commit -m "1.2"
21 21 $ echo "a line" > fred
22 22 $ echo 3 >> afile
23 23 $ hg add fred
24 24 $ hg commit -m "1.3"
25 25 $ hg mv afile adifferentfile
26 26 $ hg commit -m "1.3m"
27 27 $ hg update -C 3
28 28 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
29 29 $ hg mv afile anotherfile
30 30 $ hg commit -m "0.3m"
31 31 $ hg verify
32 32 checking changesets
33 33 checking manifests
34 34 crosschecking files in changesets and manifests
35 35 checking files
36 36 4 files, 9 changesets, 7 total revisions
37 37 $ cd ..
38 38 $ hg init empty
39 39
40 40 Bundle and phase
41 41
42 42 $ hg -R test phase --force --secret 0
43 43 $ hg -R test bundle phase.hg empty
44 44 searching for changes
45 45 no changes found (ignored 9 secret changesets)
46 46 [1]
47 47 $ hg -R test phase --draft -r 'head()'
48 48
49 49 Bundle --all
50 50
51 51 $ hg -R test bundle --all all.hg
52 52 9 changesets found
53 53
54 54 Bundle test to full.hg
55 55
56 56 $ hg -R test bundle full.hg empty
57 57 searching for changes
58 58 9 changesets found
59 59
60 60 Unbundle full.hg in test
61 61
62 62 $ hg -R test unbundle full.hg
63 63 adding changesets
64 64 adding manifests
65 65 adding file changes
66 66 added 0 changesets with 0 changes to 4 files
67 67 (run 'hg update' to get a working copy)
68 68
69 69 Verify empty
70 70
71 71 $ hg -R empty heads
72 72 [1]
73 73 $ hg -R empty verify
74 74 checking changesets
75 75 checking manifests
76 76 crosschecking files in changesets and manifests
77 77 checking files
78 78 0 files, 0 changesets, 0 total revisions
79 79
80 80 Pull full.hg into test (using --cwd)
81 81
82 82 $ hg --cwd test pull ../full.hg
83 83 pulling from ../full.hg
84 84 searching for changes
85 85 no changes found
86 86
87 87 Verify that there are no leaked temporary files after pull (issue2797)
88 88
89 89 $ ls test/.hg | grep .hg10un
90 90 [1]
91 91
92 92 Pull full.hg into empty (using --cwd)
93 93
94 94 $ hg --cwd empty pull ../full.hg
95 95 pulling from ../full.hg
96 96 requesting all changes
97 97 adding changesets
98 98 adding manifests
99 99 adding file changes
100 100 added 9 changesets with 7 changes to 4 files (+1 heads)
101 101 (run 'hg heads' to see heads, 'hg merge' to merge)
102 102
103 103 Rollback empty
104 104
105 105 $ hg -R empty rollback
106 106 repository tip rolled back to revision -1 (undo pull)
107 107
108 108 Pull full.hg into empty again (using --cwd)
109 109
110 110 $ hg --cwd empty pull ../full.hg
111 111 pulling from ../full.hg
112 112 requesting all changes
113 113 adding changesets
114 114 adding manifests
115 115 adding file changes
116 116 added 9 changesets with 7 changes to 4 files (+1 heads)
117 117 (run 'hg heads' to see heads, 'hg merge' to merge)
118 118
119 119 Pull full.hg into test (using -R)
120 120
121 121 $ hg -R test pull full.hg
122 122 pulling from full.hg
123 123 searching for changes
124 124 no changes found
125 125
126 126 Pull full.hg into empty (using -R)
127 127
128 128 $ hg -R empty pull full.hg
129 129 pulling from full.hg
130 130 searching for changes
131 131 no changes found
132 132
133 133 Rollback empty
134 134
135 135 $ hg -R empty rollback
136 136 repository tip rolled back to revision -1 (undo pull)
137 137
138 138 Pull full.hg into empty again (using -R)
139 139
140 140 $ hg -R empty pull full.hg
141 141 pulling from full.hg
142 142 requesting all changes
143 143 adding changesets
144 144 adding manifests
145 145 adding file changes
146 146 added 9 changesets with 7 changes to 4 files (+1 heads)
147 147 (run 'hg heads' to see heads, 'hg merge' to merge)
148 148
149 149 Log -R full.hg in fresh empty
150 150
151 151 $ rm -r empty
152 152 $ hg init empty
153 153 $ cd empty
154 154 $ hg -R bundle://../full.hg log
155 155 changeset: 8:aa35859c02ea
156 156 tag: tip
157 157 parent: 3:eebf5a27f8ca
158 158 user: test
159 159 date: Thu Jan 01 00:00:00 1970 +0000
160 160 summary: 0.3m
161 161
162 162 changeset: 7:a6a34bfa0076
163 163 user: test
164 164 date: Thu Jan 01 00:00:00 1970 +0000
165 165 summary: 1.3m
166 166
167 167 changeset: 6:7373c1169842
168 168 user: test
169 169 date: Thu Jan 01 00:00:00 1970 +0000
170 170 summary: 1.3
171 171
172 172 changeset: 5:1bb50a9436a7
173 173 user: test
174 174 date: Thu Jan 01 00:00:00 1970 +0000
175 175 summary: 1.2
176 176
177 177 changeset: 4:095197eb4973
178 178 parent: 0:f9ee2f85a263
179 179 user: test
180 180 date: Thu Jan 01 00:00:00 1970 +0000
181 181 summary: 1.1
182 182
183 183 changeset: 3:eebf5a27f8ca
184 184 user: test
185 185 date: Thu Jan 01 00:00:00 1970 +0000
186 186 summary: 0.3
187 187
188 188 changeset: 2:e38ba6f5b7e0
189 189 user: test
190 190 date: Thu Jan 01 00:00:00 1970 +0000
191 191 summary: 0.2
192 192
193 193 changeset: 1:34c2bf6b0626
194 194 user: test
195 195 date: Thu Jan 01 00:00:00 1970 +0000
196 196 summary: 0.1
197 197
198 198 changeset: 0:f9ee2f85a263
199 199 user: test
200 200 date: Thu Jan 01 00:00:00 1970 +0000
201 201 summary: 0.0
202 202
203 203 Make sure bundlerepo doesn't leak tempfiles (issue2491)
204 204
205 205 $ ls .hg
206 206 00changelog.i
207 207 cache
208 208 requires
209 209 store
210 210
211 211 Pull ../full.hg into empty (with hook)
212 212
213 213 $ echo "[hooks]" >> .hg/hgrc
214 214 $ echo "changegroup = python \"$TESTDIR/printenv.py\" changegroup" >> .hg/hgrc
215 215
216 216 doesn't work (yet ?)
217 217
218 218 hg -R bundle://../full.hg verify
219 219
220 220 $ hg pull bundle://../full.hg
221 221 pulling from bundle:../full.hg
222 222 requesting all changes
223 223 adding changesets
224 224 adding manifests
225 225 adding file changes
226 226 added 9 changesets with 7 changes to 4 files (+1 heads)
227 227 changegroup hook: HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735 HG_SOURCE=pull HG_URL=bundle:../full.hg
228 228 (run 'hg heads' to see heads, 'hg merge' to merge)
229 229
230 230 Rollback empty
231 231
232 232 $ hg rollback
233 233 repository tip rolled back to revision -1 (undo pull)
234 234 $ cd ..
235 235
236 236 Log -R bundle:empty+full.hg
237 237
238 238 $ hg -R bundle:empty+full.hg log --template="{rev} "; echo ""
239 239 8 7 6 5 4 3 2 1 0
240 240
241 241 Pull full.hg into empty again (using -R; with hook)
242 242
243 243 $ hg -R empty pull full.hg
244 244 pulling from full.hg
245 245 requesting all changes
246 246 adding changesets
247 247 adding manifests
248 248 adding file changes
249 249 added 9 changesets with 7 changes to 4 files (+1 heads)
250 250 changegroup hook: HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735 HG_SOURCE=pull HG_URL=bundle:empty+full.hg
251 251 (run 'hg heads' to see heads, 'hg merge' to merge)
252 252
253 253 Create partial clones
254 254
255 255 $ rm -r empty
256 256 $ hg init empty
257 257 $ hg clone -r 3 test partial
258 258 adding changesets
259 259 adding manifests
260 260 adding file changes
261 261 added 4 changesets with 4 changes to 1 files
262 262 updating to branch default
263 263 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
264 264 $ hg clone partial partial2
265 265 updating to branch default
266 266 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
267 267 $ cd partial
268 268
269 269 Log -R full.hg in partial
270 270
271 271 $ hg -R bundle://../full.hg log
272 272 changeset: 8:aa35859c02ea
273 273 tag: tip
274 274 parent: 3:eebf5a27f8ca
275 275 user: test
276 276 date: Thu Jan 01 00:00:00 1970 +0000
277 277 summary: 0.3m
278 278
279 279 changeset: 7:a6a34bfa0076
280 280 user: test
281 281 date: Thu Jan 01 00:00:00 1970 +0000
282 282 summary: 1.3m
283 283
284 284 changeset: 6:7373c1169842
285 285 user: test
286 286 date: Thu Jan 01 00:00:00 1970 +0000
287 287 summary: 1.3
288 288
289 289 changeset: 5:1bb50a9436a7
290 290 user: test
291 291 date: Thu Jan 01 00:00:00 1970 +0000
292 292 summary: 1.2
293 293
294 294 changeset: 4:095197eb4973
295 295 parent: 0:f9ee2f85a263
296 296 user: test
297 297 date: Thu Jan 01 00:00:00 1970 +0000
298 298 summary: 1.1
299 299
300 300 changeset: 3:eebf5a27f8ca
301 301 user: test
302 302 date: Thu Jan 01 00:00:00 1970 +0000
303 303 summary: 0.3
304 304
305 305 changeset: 2:e38ba6f5b7e0
306 306 user: test
307 307 date: Thu Jan 01 00:00:00 1970 +0000
308 308 summary: 0.2
309 309
310 310 changeset: 1:34c2bf6b0626
311 311 user: test
312 312 date: Thu Jan 01 00:00:00 1970 +0000
313 313 summary: 0.1
314 314
315 315 changeset: 0:f9ee2f85a263
316 316 user: test
317 317 date: Thu Jan 01 00:00:00 1970 +0000
318 318 summary: 0.0
319 319
320 320
321 321 Incoming full.hg in partial
322 322
323 323 $ hg incoming bundle://../full.hg
324 324 comparing with bundle:../full.hg
325 325 searching for changes
326 326 changeset: 4:095197eb4973
327 327 parent: 0:f9ee2f85a263
328 328 user: test
329 329 date: Thu Jan 01 00:00:00 1970 +0000
330 330 summary: 1.1
331 331
332 332 changeset: 5:1bb50a9436a7
333 333 user: test
334 334 date: Thu Jan 01 00:00:00 1970 +0000
335 335 summary: 1.2
336 336
337 337 changeset: 6:7373c1169842
338 338 user: test
339 339 date: Thu Jan 01 00:00:00 1970 +0000
340 340 summary: 1.3
341 341
342 342 changeset: 7:a6a34bfa0076
343 343 user: test
344 344 date: Thu Jan 01 00:00:00 1970 +0000
345 345 summary: 1.3m
346 346
347 347 changeset: 8:aa35859c02ea
348 348 tag: tip
349 349 parent: 3:eebf5a27f8ca
350 350 user: test
351 351 date: Thu Jan 01 00:00:00 1970 +0000
352 352 summary: 0.3m
353 353
354 354
355 355 Outgoing -R full.hg vs partial2 in partial
356 356
357 357 $ hg -R bundle://../full.hg outgoing ../partial2
358 358 comparing with ../partial2
359 359 searching for changes
360 360 changeset: 4:095197eb4973
361 361 parent: 0:f9ee2f85a263
362 362 user: test
363 363 date: Thu Jan 01 00:00:00 1970 +0000
364 364 summary: 1.1
365 365
366 366 changeset: 5:1bb50a9436a7
367 367 user: test
368 368 date: Thu Jan 01 00:00:00 1970 +0000
369 369 summary: 1.2
370 370
371 371 changeset: 6:7373c1169842
372 372 user: test
373 373 date: Thu Jan 01 00:00:00 1970 +0000
374 374 summary: 1.3
375 375
376 376 changeset: 7:a6a34bfa0076
377 377 user: test
378 378 date: Thu Jan 01 00:00:00 1970 +0000
379 379 summary: 1.3m
380 380
381 381 changeset: 8:aa35859c02ea
382 382 tag: tip
383 383 parent: 3:eebf5a27f8ca
384 384 user: test
385 385 date: Thu Jan 01 00:00:00 1970 +0000
386 386 summary: 0.3m
387 387
388 388
389 389 Outgoing -R does-not-exist.hg vs partial2 in partial
390 390
391 391 $ hg -R bundle://../does-not-exist.hg outgoing ../partial2
392 392 abort: *../does-not-exist.hg* (glob)
393 393 [255]
394 394 $ cd ..
395 395
396 396 hide outer repo
397 397 $ hg init
398 398
399 399 Direct clone from bundle (all-history)
400 400
401 401 $ hg clone full.hg full-clone
402 402 requesting all changes
403 403 adding changesets
404 404 adding manifests
405 405 adding file changes
406 406 added 9 changesets with 7 changes to 4 files (+1 heads)
407 407 updating to branch default
408 408 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
409 409 $ hg -R full-clone heads
410 410 changeset: 8:aa35859c02ea
411 411 tag: tip
412 412 parent: 3:eebf5a27f8ca
413 413 user: test
414 414 date: Thu Jan 01 00:00:00 1970 +0000
415 415 summary: 0.3m
416 416
417 417 changeset: 7:a6a34bfa0076
418 418 user: test
419 419 date: Thu Jan 01 00:00:00 1970 +0000
420 420 summary: 1.3m
421 421
422 422 $ rm -r full-clone
423 423
424 424 When cloning from a non-copiable repository into '', do not
425 425 recurse infinitely (issue 2528)
426 426
427 427 $ hg clone full.hg ''
428 428 abort: empty destination path is not valid
429 429 [255]
430 430
431 431 test for http://mercurial.selenic.com/bts/issue216
432 432
433 433 Unbundle incremental bundles into fresh empty in one go
434 434
435 435 $ rm -r empty
436 436 $ hg init empty
437 437 $ hg -R test bundle --base null -r 0 ../0.hg
438 438 1 changesets found
439 439 $ hg -R test bundle --base 0 -r 1 ../1.hg
440 440 1 changesets found
441 441 $ hg -R empty unbundle -u ../0.hg ../1.hg
442 442 adding changesets
443 443 adding manifests
444 444 adding file changes
445 445 added 1 changesets with 1 changes to 1 files
446 446 adding changesets
447 447 adding manifests
448 448 adding file changes
449 449 added 1 changesets with 1 changes to 1 files
450 450 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
451 451
452 452 View full contents of the bundle
453 453 $ hg -R test bundle --base null -r 3 ../partial.hg
454 454 4 changesets found
455 455 $ cd test
456 456 $ hg -R ../../partial.hg log -r "bundle()"
457 457 changeset: 0:f9ee2f85a263
458 458 user: test
459 459 date: Thu Jan 01 00:00:00 1970 +0000
460 460 summary: 0.0
461 461
462 462 changeset: 1:34c2bf6b0626
463 463 user: test
464 464 date: Thu Jan 01 00:00:00 1970 +0000
465 465 summary: 0.1
466 466
467 467 changeset: 2:e38ba6f5b7e0
468 468 user: test
469 469 date: Thu Jan 01 00:00:00 1970 +0000
470 470 summary: 0.2
471 471
472 472 changeset: 3:eebf5a27f8ca
473 473 user: test
474 474 date: Thu Jan 01 00:00:00 1970 +0000
475 475 summary: 0.3
476 476
477 477 $ cd ..
478 478
479 479 test for 540d1059c802
480 480
481 481 test for 540d1059c802
482 482
483 483 $ hg init orig
484 484 $ cd orig
485 485 $ echo foo > foo
486 486 $ hg add foo
487 487 $ hg ci -m 'add foo'
488 488
489 489 $ hg clone . ../copy
490 490 updating to branch default
491 491 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
492 492 $ hg tag foo
493 493
494 494 $ cd ../copy
495 495 $ echo >> foo
496 496 $ hg ci -m 'change foo'
497 497 $ hg bundle ../bundle.hg ../orig
498 498 searching for changes
499 499 1 changesets found
500 500
501 501 $ cd ../orig
502 502 $ hg incoming ../bundle.hg
503 503 comparing with ../bundle.hg
504 504 searching for changes
505 505 changeset: 2:ed1b79f46b9a
506 506 tag: tip
507 507 parent: 0:bbd179dfa0a7
508 508 user: test
509 509 date: Thu Jan 01 00:00:00 1970 +0000
510 510 summary: change foo
511 511
512 512 $ cd ..
513 513
514 514 test bundle with # in the filename (issue2154):
515 515
516 516 $ cp bundle.hg 'test#bundle.hg'
517 517 $ cd orig
518 518 $ hg incoming '../test#bundle.hg'
519 519 comparing with ../test
520 520 abort: unknown revision 'bundle.hg'!
521 521 [255]
522 522
523 523 note that percent encoding is not handled:
524 524
525 525 $ hg incoming ../test%23bundle.hg
526 526 abort: repository ../test%23bundle.hg not found!
527 527 [255]
528 528 $ cd ..
529 529
530 530 test bundling revisions on the newly created branch (issue3828):
531 531
532 532 $ hg -q clone -U test test-clone
533 533 $ cd test
534 534
535 535 $ hg -q branch foo
536 536 $ hg commit -m "create foo branch"
537 537 $ hg -q outgoing ../test-clone
538 538 9:b4f5acb1ee27
539 539 $ hg -q bundle --branch foo foo.hg ../test-clone
540 540 $ hg -R foo.hg -q log -r "bundle()"
541 541 9:b4f5acb1ee27
542 542
543 543 $ cd ..
544 544
545 545 test for http://mercurial.selenic.com/bts/issue1144
546 546
547 547 test that verifying a bundle does not traceback
548 548
549 partial history bundle, fails w/ unkown parent
549 partial history bundle, fails w/ unknown parent
550 550
551 551 $ hg -R bundle.hg verify
552 552 abort: 00changelog.i@bbd179dfa0a7: unknown parent!
553 553 [255]
554 554
555 555 full history bundle, refuses to verify non-local repo
556 556
557 557 $ hg -R all.hg verify
558 558 abort: cannot verify bundle or remote repos
559 559 [255]
560 560
561 561 but, regular verify must continue to work
562 562
563 563 $ hg -R orig verify
564 564 checking changesets
565 565 checking manifests
566 566 crosschecking files in changesets and manifests
567 567 checking files
568 568 2 files, 2 changesets, 2 total revisions
569 569
570 570 diff against bundle
571 571
572 572 $ hg init b
573 573 $ cd b
574 574 $ hg -R ../all.hg diff -r tip
575 575 diff -r aa35859c02ea anotherfile
576 576 --- a/anotherfile Thu Jan 01 00:00:00 1970 +0000
577 577 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000
578 578 @@ -1,4 +0,0 @@
579 579 -0
580 580 -1
581 581 -2
582 582 -3
583 583 $ cd ..
584 584
585 585 bundle single branch
586 586
587 587 $ hg init branchy
588 588 $ cd branchy
589 589 $ echo a >a
590 590 $ echo x >x
591 591 $ hg ci -Ama
592 592 adding a
593 593 adding x
594 594 $ echo c >c
595 595 $ echo xx >x
596 596 $ hg ci -Amc
597 597 adding c
598 598 $ echo c1 >c1
599 599 $ hg ci -Amc1
600 600 adding c1
601 601 $ hg up 0
602 602 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
603 603 $ echo b >b
604 604 $ hg ci -Amb
605 605 adding b
606 606 created new head
607 607 $ echo b1 >b1
608 608 $ echo xx >x
609 609 $ hg ci -Amb1
610 610 adding b1
611 611 $ hg clone -q -r2 . part
612 612
613 613 == bundling via incoming
614 614
615 615 $ hg in -R part --bundle incoming.hg --template "{node}\n" .
616 616 comparing with .
617 617 searching for changes
618 618 1a38c1b849e8b70c756d2d80b0b9a3ac0b7ea11a
619 619 057f4db07f61970e1c11e83be79e9d08adc4dc31
620 620
621 621 == bundling
622 622
623 623 $ hg bundle bundle.hg part --debug
624 624 query 1; heads
625 625 searching for changes
626 626 all remote heads known locally
627 627 2 changesets found
628 628 list of changesets:
629 629 1a38c1b849e8b70c756d2d80b0b9a3ac0b7ea11a
630 630 057f4db07f61970e1c11e83be79e9d08adc4dc31
631 631 bundling: 1/2 changesets (50.00%)
632 632 bundling: 2/2 changesets (100.00%)
633 633 bundling: 1/2 manifests (50.00%)
634 634 bundling: 2/2 manifests (100.00%)
635 635 bundling: b 1/3 files (33.33%)
636 636 bundling: b1 2/3 files (66.67%)
637 637 bundling: x 3/3 files (100.00%)
638 638
639 639 == Test for issue3441
640 640
641 641 $ hg clone -q -r0 . part2
642 642 $ hg -q -R part2 pull bundle.hg
643 643 $ hg -R part2 verify
644 644 checking changesets
645 645 checking manifests
646 646 crosschecking files in changesets and manifests
647 647 checking files
648 648 4 files, 3 changesets, 5 total revisions
649 649
650 650 $ cd ..
@@ -1,1856 +1,1856 b''
1 1 $ hg init a
2 2 $ cd a
3 3 $ echo a > a
4 4 $ hg add a
5 5 $ echo line 1 > b
6 6 $ echo line 2 >> b
7 7 $ hg commit -l b -d '1000000 0' -u 'User Name <user@hostname>'
8 8
9 9 $ hg add b
10 10 $ echo other 1 > c
11 11 $ echo other 2 >> c
12 12 $ echo >> c
13 13 $ echo other 3 >> c
14 14 $ hg commit -l c -d '1100000 0' -u 'A. N. Other <other@place>'
15 15
16 16 $ hg add c
17 17 $ hg commit -m 'no person' -d '1200000 0' -u 'other@place'
18 18 $ echo c >> c
19 19 $ hg commit -m 'no user, no domain' -d '1300000 0' -u 'person'
20 20
21 21 $ echo foo > .hg/branch
22 22 $ hg commit -m 'new branch' -d '1400000 0' -u 'person'
23 23
24 24 $ hg co -q 3
25 25 $ echo other 4 >> d
26 26 $ hg add d
27 27 $ hg commit -m 'new head' -d '1500000 0' -u 'person'
28 28
29 29 $ hg merge -q foo
30 30 $ hg commit -m 'merge' -d '1500001 0' -u 'person'
31 31
32 32 Second branch starting at nullrev:
33 33
34 34 $ hg update null
35 35 0 files updated, 0 files merged, 4 files removed, 0 files unresolved
36 36 $ echo second > second
37 37 $ hg add second
38 38 $ hg commit -m second -d '1000000 0' -u 'User Name <user@hostname>'
39 39 created new head
40 40
41 41 $ echo third > third
42 42 $ hg add third
43 43 $ hg mv second fourth
44 44 $ hg commit -m third -d "2020-01-01 10:01"
45 45
46 46 $ hg log --template '{join(file_copies, ",\n")}\n' -r .
47 47 fourth (second)
48 48 $ hg log -T '{file_copies % "{source} -> {name}\n"}' -r .
49 49 second -> fourth
50 50
51 51 Quoting for ui.logtemplate
52 52
53 53 $ hg tip --config "ui.logtemplate={rev}\n"
54 54 8
55 55 $ hg tip --config "ui.logtemplate='{rev}\n'"
56 56 8
57 57 $ hg tip --config 'ui.logtemplate="{rev}\n"'
58 58 8
59 59
60 60 Make sure user/global hgrc does not affect tests
61 61
62 62 $ echo '[ui]' > .hg/hgrc
63 63 $ echo 'logtemplate =' >> .hg/hgrc
64 64 $ echo 'style =' >> .hg/hgrc
65 65
66 66 Add some simple styles to settings
67 67
68 68 $ echo '[templates]' >> .hg/hgrc
69 69 $ printf 'simple = "{rev}\\n"\n' >> .hg/hgrc
70 70 $ printf 'simple2 = {rev}\\n\n' >> .hg/hgrc
71 71
72 72 $ hg log -l1 -Tsimple
73 73 8
74 74 $ hg log -l1 -Tsimple2
75 75 8
76 76
77 77 Test templates and style maps in files:
78 78
79 79 $ echo "{rev}" > tmpl
80 80 $ hg log -l1 -T./tmpl
81 81 8
82 82 $ hg log -l1 -Tblah/blah
83 83 blah/blah (no-eol)
84 84
85 85 $ printf 'changeset = "{rev}\\n"\n' > map-simple
86 86 $ hg log -l1 -T./map-simple
87 87 8
88 88
89 89 Default style is like normal output:
90 90
91 91 $ hg log > log.out
92 92 $ hg log --style default > style.out
93 93 $ cmp log.out style.out || diff -u log.out style.out
94 94
95 95 $ hg log -v > log.out
96 96 $ hg log -v --style default > style.out
97 97 $ cmp log.out style.out || diff -u log.out style.out
98 98
99 99 $ hg log --debug > log.out
100 100 $ hg log --debug --style default > style.out
101 101 $ cmp log.out style.out || diff -u log.out style.out
102 102
103 103 Revision with no copies (used to print a traceback):
104 104
105 105 $ hg tip -v --template '\n'
106 106
107 107
108 108 Compact style works:
109 109
110 110 $ hg log -Tcompact
111 111 8[tip] 95c24699272e 2020-01-01 10:01 +0000 test
112 112 third
113 113
114 114 7:-1 29114dbae42b 1970-01-12 13:46 +0000 user
115 115 second
116 116
117 117 6:5,4 d41e714fe50d 1970-01-18 08:40 +0000 person
118 118 merge
119 119
120 120 5:3 13207e5a10d9 1970-01-18 08:40 +0000 person
121 121 new head
122 122
123 123 4 bbe44766e73d 1970-01-17 04:53 +0000 person
124 124 new branch
125 125
126 126 3 10e46f2dcbf4 1970-01-16 01:06 +0000 person
127 127 no user, no domain
128 128
129 129 2 97054abb4ab8 1970-01-14 21:20 +0000 other
130 130 no person
131 131
132 132 1 b608e9d1a3f0 1970-01-13 17:33 +0000 other
133 133 other 1
134 134
135 135 0 1e4e1b8f71e0 1970-01-12 13:46 +0000 user
136 136 line 1
137 137
138 138
139 139 $ hg log -v --style compact
140 140 8[tip] 95c24699272e 2020-01-01 10:01 +0000 test
141 141 third
142 142
143 143 7:-1 29114dbae42b 1970-01-12 13:46 +0000 User Name <user@hostname>
144 144 second
145 145
146 146 6:5,4 d41e714fe50d 1970-01-18 08:40 +0000 person
147 147 merge
148 148
149 149 5:3 13207e5a10d9 1970-01-18 08:40 +0000 person
150 150 new head
151 151
152 152 4 bbe44766e73d 1970-01-17 04:53 +0000 person
153 153 new branch
154 154
155 155 3 10e46f2dcbf4 1970-01-16 01:06 +0000 person
156 156 no user, no domain
157 157
158 158 2 97054abb4ab8 1970-01-14 21:20 +0000 other@place
159 159 no person
160 160
161 161 1 b608e9d1a3f0 1970-01-13 17:33 +0000 A. N. Other <other@place>
162 162 other 1
163 163 other 2
164 164
165 165 other 3
166 166
167 167 0 1e4e1b8f71e0 1970-01-12 13:46 +0000 User Name <user@hostname>
168 168 line 1
169 169 line 2
170 170
171 171
172 172 $ hg log --debug --style compact
173 173 8[tip]:7,-1 95c24699272e 2020-01-01 10:01 +0000 test
174 174 third
175 175
176 176 7:-1,-1 29114dbae42b 1970-01-12 13:46 +0000 User Name <user@hostname>
177 177 second
178 178
179 179 6:5,4 d41e714fe50d 1970-01-18 08:40 +0000 person
180 180 merge
181 181
182 182 5:3,-1 13207e5a10d9 1970-01-18 08:40 +0000 person
183 183 new head
184 184
185 185 4:3,-1 bbe44766e73d 1970-01-17 04:53 +0000 person
186 186 new branch
187 187
188 188 3:2,-1 10e46f2dcbf4 1970-01-16 01:06 +0000 person
189 189 no user, no domain
190 190
191 191 2:1,-1 97054abb4ab8 1970-01-14 21:20 +0000 other@place
192 192 no person
193 193
194 194 1:0,-1 b608e9d1a3f0 1970-01-13 17:33 +0000 A. N. Other <other@place>
195 195 other 1
196 196 other 2
197 197
198 198 other 3
199 199
200 200 0:-1,-1 1e4e1b8f71e0 1970-01-12 13:46 +0000 User Name <user@hostname>
201 201 line 1
202 202 line 2
203 203
204 204
205 205 Test xml styles:
206 206
207 207 $ hg log --style xml
208 208 <?xml version="1.0"?>
209 209 <log>
210 210 <logentry revision="8" node="95c24699272ef57d062b8bccc32c878bf841784a">
211 211 <tag>tip</tag>
212 212 <author email="test">test</author>
213 213 <date>2020-01-01T10:01:00+00:00</date>
214 214 <msg xml:space="preserve">third</msg>
215 215 </logentry>
216 216 <logentry revision="7" node="29114dbae42b9f078cf2714dbe3a86bba8ec7453">
217 217 <parent revision="-1" node="0000000000000000000000000000000000000000" />
218 218 <author email="user@hostname">User Name</author>
219 219 <date>1970-01-12T13:46:40+00:00</date>
220 220 <msg xml:space="preserve">second</msg>
221 221 </logentry>
222 222 <logentry revision="6" node="d41e714fe50d9e4a5f11b4d595d543481b5f980b">
223 223 <parent revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f" />
224 224 <parent revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74" />
225 225 <author email="person">person</author>
226 226 <date>1970-01-18T08:40:01+00:00</date>
227 227 <msg xml:space="preserve">merge</msg>
228 228 </logentry>
229 229 <logentry revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f">
230 230 <parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
231 231 <author email="person">person</author>
232 232 <date>1970-01-18T08:40:00+00:00</date>
233 233 <msg xml:space="preserve">new head</msg>
234 234 </logentry>
235 235 <logentry revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74">
236 236 <branch>foo</branch>
237 237 <author email="person">person</author>
238 238 <date>1970-01-17T04:53:20+00:00</date>
239 239 <msg xml:space="preserve">new branch</msg>
240 240 </logentry>
241 241 <logentry revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47">
242 242 <author email="person">person</author>
243 243 <date>1970-01-16T01:06:40+00:00</date>
244 244 <msg xml:space="preserve">no user, no domain</msg>
245 245 </logentry>
246 246 <logentry revision="2" node="97054abb4ab824450e9164180baf491ae0078465">
247 247 <author email="other@place">other</author>
248 248 <date>1970-01-14T21:20:00+00:00</date>
249 249 <msg xml:space="preserve">no person</msg>
250 250 </logentry>
251 251 <logentry revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965">
252 252 <author email="other@place">A. N. Other</author>
253 253 <date>1970-01-13T17:33:20+00:00</date>
254 254 <msg xml:space="preserve">other 1
255 255 other 2
256 256
257 257 other 3</msg>
258 258 </logentry>
259 259 <logentry revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f">
260 260 <author email="user@hostname">User Name</author>
261 261 <date>1970-01-12T13:46:40+00:00</date>
262 262 <msg xml:space="preserve">line 1
263 263 line 2</msg>
264 264 </logentry>
265 265 </log>
266 266
267 267 $ hg log -v --style xml
268 268 <?xml version="1.0"?>
269 269 <log>
270 270 <logentry revision="8" node="95c24699272ef57d062b8bccc32c878bf841784a">
271 271 <tag>tip</tag>
272 272 <author email="test">test</author>
273 273 <date>2020-01-01T10:01:00+00:00</date>
274 274 <msg xml:space="preserve">third</msg>
275 275 <paths>
276 276 <path action="A">fourth</path>
277 277 <path action="A">third</path>
278 278 <path action="R">second</path>
279 279 </paths>
280 280 <copies>
281 281 <copy source="second">fourth</copy>
282 282 </copies>
283 283 </logentry>
284 284 <logentry revision="7" node="29114dbae42b9f078cf2714dbe3a86bba8ec7453">
285 285 <parent revision="-1" node="0000000000000000000000000000000000000000" />
286 286 <author email="user@hostname">User Name</author>
287 287 <date>1970-01-12T13:46:40+00:00</date>
288 288 <msg xml:space="preserve">second</msg>
289 289 <paths>
290 290 <path action="A">second</path>
291 291 </paths>
292 292 </logentry>
293 293 <logentry revision="6" node="d41e714fe50d9e4a5f11b4d595d543481b5f980b">
294 294 <parent revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f" />
295 295 <parent revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74" />
296 296 <author email="person">person</author>
297 297 <date>1970-01-18T08:40:01+00:00</date>
298 298 <msg xml:space="preserve">merge</msg>
299 299 <paths>
300 300 </paths>
301 301 </logentry>
302 302 <logentry revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f">
303 303 <parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
304 304 <author email="person">person</author>
305 305 <date>1970-01-18T08:40:00+00:00</date>
306 306 <msg xml:space="preserve">new head</msg>
307 307 <paths>
308 308 <path action="A">d</path>
309 309 </paths>
310 310 </logentry>
311 311 <logentry revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74">
312 312 <branch>foo</branch>
313 313 <author email="person">person</author>
314 314 <date>1970-01-17T04:53:20+00:00</date>
315 315 <msg xml:space="preserve">new branch</msg>
316 316 <paths>
317 317 </paths>
318 318 </logentry>
319 319 <logentry revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47">
320 320 <author email="person">person</author>
321 321 <date>1970-01-16T01:06:40+00:00</date>
322 322 <msg xml:space="preserve">no user, no domain</msg>
323 323 <paths>
324 324 <path action="M">c</path>
325 325 </paths>
326 326 </logentry>
327 327 <logentry revision="2" node="97054abb4ab824450e9164180baf491ae0078465">
328 328 <author email="other@place">other</author>
329 329 <date>1970-01-14T21:20:00+00:00</date>
330 330 <msg xml:space="preserve">no person</msg>
331 331 <paths>
332 332 <path action="A">c</path>
333 333 </paths>
334 334 </logentry>
335 335 <logentry revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965">
336 336 <author email="other@place">A. N. Other</author>
337 337 <date>1970-01-13T17:33:20+00:00</date>
338 338 <msg xml:space="preserve">other 1
339 339 other 2
340 340
341 341 other 3</msg>
342 342 <paths>
343 343 <path action="A">b</path>
344 344 </paths>
345 345 </logentry>
346 346 <logentry revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f">
347 347 <author email="user@hostname">User Name</author>
348 348 <date>1970-01-12T13:46:40+00:00</date>
349 349 <msg xml:space="preserve">line 1
350 350 line 2</msg>
351 351 <paths>
352 352 <path action="A">a</path>
353 353 </paths>
354 354 </logentry>
355 355 </log>
356 356
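Multi-line commit messages survive in the xml style because the description is XML-escaped and emitted inside an ``xml:space="preserve"`` element, matching the ``<msg>`` lines above. A sketch of producing one such element (assumed shape, not Mercurial's template code):

```python
from xml.sax.saxutils import escape

def msg_element(desc):
    """Render a commit description the way the xml style does: text is
    XML-escaped, and xml:space="preserve" keeps embedded newlines."""
    return '<msg xml:space="preserve">%s</msg>' % escape(desc)

# a multi-line description keeps its blank line inside the element
print(msg_element('other 1\nother 2\n\nother 3'))
```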
357 357 $ hg log --debug --style xml
358 358 <?xml version="1.0"?>
359 359 <log>
360 360 <logentry revision="8" node="95c24699272ef57d062b8bccc32c878bf841784a">
361 361 <tag>tip</tag>
362 362 <parent revision="7" node="29114dbae42b9f078cf2714dbe3a86bba8ec7453" />
363 363 <parent revision="-1" node="0000000000000000000000000000000000000000" />
364 364 <author email="test">test</author>
365 365 <date>2020-01-01T10:01:00+00:00</date>
366 366 <msg xml:space="preserve">third</msg>
367 367 <paths>
368 368 <path action="A">fourth</path>
369 369 <path action="A">third</path>
370 370 <path action="R">second</path>
371 371 </paths>
372 372 <copies>
373 373 <copy source="second">fourth</copy>
374 374 </copies>
375 375 <extra key="branch">default</extra>
376 376 </logentry>
377 377 <logentry revision="7" node="29114dbae42b9f078cf2714dbe3a86bba8ec7453">
378 378 <parent revision="-1" node="0000000000000000000000000000000000000000" />
379 379 <parent revision="-1" node="0000000000000000000000000000000000000000" />
380 380 <author email="user@hostname">User Name</author>
381 381 <date>1970-01-12T13:46:40+00:00</date>
382 382 <msg xml:space="preserve">second</msg>
383 383 <paths>
384 384 <path action="A">second</path>
385 385 </paths>
386 386 <extra key="branch">default</extra>
387 387 </logentry>
388 388 <logentry revision="6" node="d41e714fe50d9e4a5f11b4d595d543481b5f980b">
389 389 <parent revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f" />
390 390 <parent revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74" />
391 391 <author email="person">person</author>
392 392 <date>1970-01-18T08:40:01+00:00</date>
393 393 <msg xml:space="preserve">merge</msg>
394 394 <paths>
395 395 </paths>
396 396 <extra key="branch">default</extra>
397 397 </logentry>
398 398 <logentry revision="5" node="13207e5a10d9fd28ec424934298e176197f2c67f">
399 399 <parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
400 400 <parent revision="-1" node="0000000000000000000000000000000000000000" />
401 401 <author email="person">person</author>
402 402 <date>1970-01-18T08:40:00+00:00</date>
403 403 <msg xml:space="preserve">new head</msg>
404 404 <paths>
405 405 <path action="A">d</path>
406 406 </paths>
407 407 <extra key="branch">default</extra>
408 408 </logentry>
409 409 <logentry revision="4" node="bbe44766e73d5f11ed2177f1838de10c53ef3e74">
410 410 <branch>foo</branch>
411 411 <parent revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47" />
412 412 <parent revision="-1" node="0000000000000000000000000000000000000000" />
413 413 <author email="person">person</author>
414 414 <date>1970-01-17T04:53:20+00:00</date>
415 415 <msg xml:space="preserve">new branch</msg>
416 416 <paths>
417 417 </paths>
418 418 <extra key="branch">foo</extra>
419 419 </logentry>
420 420 <logentry revision="3" node="10e46f2dcbf4823578cf180f33ecf0b957964c47">
421 421 <parent revision="2" node="97054abb4ab824450e9164180baf491ae0078465" />
422 422 <parent revision="-1" node="0000000000000000000000000000000000000000" />
423 423 <author email="person">person</author>
424 424 <date>1970-01-16T01:06:40+00:00</date>
425 425 <msg xml:space="preserve">no user, no domain</msg>
426 426 <paths>
427 427 <path action="M">c</path>
428 428 </paths>
429 429 <extra key="branch">default</extra>
430 430 </logentry>
431 431 <logentry revision="2" node="97054abb4ab824450e9164180baf491ae0078465">
432 432 <parent revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965" />
433 433 <parent revision="-1" node="0000000000000000000000000000000000000000" />
434 434 <author email="other@place">other</author>
435 435 <date>1970-01-14T21:20:00+00:00</date>
436 436 <msg xml:space="preserve">no person</msg>
437 437 <paths>
438 438 <path action="A">c</path>
439 439 </paths>
440 440 <extra key="branch">default</extra>
441 441 </logentry>
442 442 <logentry revision="1" node="b608e9d1a3f0273ccf70fb85fd6866b3482bf965">
443 443 <parent revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f" />
444 444 <parent revision="-1" node="0000000000000000000000000000000000000000" />
445 445 <author email="other@place">A. N. Other</author>
446 446 <date>1970-01-13T17:33:20+00:00</date>
447 447 <msg xml:space="preserve">other 1
448 448 other 2
449 449
450 450 other 3</msg>
451 451 <paths>
452 452 <path action="A">b</path>
453 453 </paths>
454 454 <extra key="branch">default</extra>
455 455 </logentry>
456 456 <logentry revision="0" node="1e4e1b8f71e05681d422154f5421e385fec3454f">
457 457 <parent revision="-1" node="0000000000000000000000000000000000000000" />
458 458 <parent revision="-1" node="0000000000000000000000000000000000000000" />
459 459 <author email="user@hostname">User Name</author>
460 460 <date>1970-01-12T13:46:40+00:00</date>
461 461 <msg xml:space="preserve">line 1
462 462 line 2</msg>
463 463 <paths>
464 464 <path action="A">a</path>
465 465 </paths>
466 466 <extra key="branch">default</extra>
467 467 </logentry>
468 468 </log>
469 469
470 470
471 471 Error if style not readable:
472 472
473 473 #if unix-permissions no-root
474 474 $ touch q
475 475 $ chmod 0 q
476 476 $ hg log --style ./q
477 477 abort: Permission denied: ./q
478 478 [255]
479 479 #endif
480 480
481 481 Error if no style:
482 482
483 483 $ hg log --style notexist
484 484 abort: style 'notexist' not found
485 485 (available styles: bisect, changelog, compact, default, phases, xml)
486 486 [255]
487 487
488 488 Error if style missing key:
489 489
490 490 $ echo 'q = q' > t
491 491 $ hg log --style ./t
492 492 abort: "changeset" not in template map
493 493 [255]
494 494
495 495 Error if style missing value:
496 496
497 497 $ echo 'changeset =' > t
498 498 $ hg log --style t
499 499 abort: t:1: missing value
500 500 [255]
501 501
502 502 Error if include fails:
503 503
504 504 $ echo 'changeset = q' >> t
505 505 #if unix-permissions no-root
506 506 $ hg log --style ./t
507 507 abort: template file ./q: Permission denied
508 508 [255]
509 509 $ rm q
510 510 #endif
511 511
512 512 Include works:
513 513
514 514 $ echo '{rev}' > q
515 515 $ hg log --style ./t
516 516 8
517 517 7
518 518 6
519 519 5
520 520 4
521 521 3
522 522 2
523 523 1
524 524 0
525 525
526 526 Missing non-standard names give no error (backward compatibility):
527 527
528 528 $ echo "changeset = '{c}'" > t
529 529 $ hg log --style ./t
530 530
531 531 Defining non-standard name works:
532 532
533 533 $ cat <<EOF > t
534 534 > changeset = '{c}'
535 535 > c = q
536 536 > EOF
537 537 $ hg log --style ./t
538 538 8
539 539 7
540 540 6
541 541 5
542 542 4
543 543 3
544 544 2
545 545 1
546 546 0
547 547
548 548 ui.style works:
549 549
550 550 $ echo '[ui]' > .hg/hgrc
551 551 $ echo 'style = t' >> .hg/hgrc
552 552 $ hg log
553 553 8
554 554 7
555 555 6
556 556 5
557 557 4
558 558 3
559 559 2
560 560 1
561 561 0
562 562
563 563
 564 564 Issue338 (changelog style):
565 565
566 566 $ hg log --style=changelog > changelog
567 567
568 568 $ cat changelog
569 569 2020-01-01 test <test>
570 570
571 571 * fourth, second, third:
572 572 third
573 573 [95c24699272e] [tip]
574 574
575 575 1970-01-12 User Name <user@hostname>
576 576
577 577 * second:
578 578 second
579 579 [29114dbae42b]
580 580
581 581 1970-01-18 person <person>
582 582
583 583 * merge
584 584 [d41e714fe50d]
585 585
586 586 * d:
587 587 new head
588 588 [13207e5a10d9]
589 589
590 590 1970-01-17 person <person>
591 591
592 592 * new branch
593 593 [bbe44766e73d] <foo>
594 594
595 595 1970-01-16 person <person>
596 596
597 597 * c:
598 598 no user, no domain
599 599 [10e46f2dcbf4]
600 600
601 601 1970-01-14 other <other@place>
602 602
603 603 * c:
604 604 no person
605 605 [97054abb4ab8]
606 606
607 607 1970-01-13 A. N. Other <other@place>
608 608
609 609 * b:
610 610 other 1 other 2
611 611
612 612 other 3
613 613 [b608e9d1a3f0]
614 614
615 615 1970-01-12 User Name <user@hostname>
616 616
617 617 * a:
618 618 line 1 line 2
619 619 [1e4e1b8f71e0]
620 620
621 621
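The changelog output above collapses consecutive changesets under a single ``date  author`` header when both match, as with the two 1970-01-18 entries for person. That grouping can be sketched with consecutive-run grouping (hypothetical sample data, not Mercurial code):

```python
from itertools import groupby

def group_entries(entries):
    """Group consecutive (date, author, summary) rows under one
    'date  author' header, as the GNU-changelog style does."""
    return [((date, author), [s for _, _, s in rows])
            for (date, author), rows in
            groupby(entries, key=lambda e: (e[0], e[1]))]

# sample rows in log order, mirroring part of the output above
sample = [
    ('1970-01-18', 'person <person>', 'merge'),
    ('1970-01-18', 'person <person>', 'new head'),
    ('1970-01-17', 'person <person>', 'new branch'),
]

for (date, author), summaries in group_entries(sample):
    print('%s  %s' % (date, author))
    for summary in summaries:
        print('\t* %s' % summary)
```

Note that `groupby` only merges adjacent runs, which is exactly the behavior wanted here: a later changeset by the same author on the same date but separated by other entries would start a new header.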
622 622 Issue2130: xml output for 'hg heads' is malformed
623 623
624 624 $ hg heads --style changelog
625 625 2020-01-01 test <test>
626 626
627 627 * fourth, second, third:
628 628 third
629 629 [95c24699272e] [tip]
630 630
631 631 1970-01-18 person <person>
632 632
633 633 * merge
634 634 [d41e714fe50d]
635 635
636 636 1970-01-17 person <person>
637 637
638 638 * new branch
639 639 [bbe44766e73d] <foo>
640 640
641 641
642 642 Keys work:
643 643
644 644 $ for key in author branch branches date desc file_adds file_dels file_mods \
645 645 > file_copies file_copies_switch files \
646 646 > manifest node parents rev tags diffstat extras \
647 647 > p1rev p2rev p1node p2node; do
648 648 > for mode in '' --verbose --debug; do
649 649 > hg log $mode --template "$key$mode: {$key}\n"
650 650 > done
651 651 > done
652 652 author: test
653 653 author: User Name <user@hostname>
654 654 author: person
655 655 author: person
656 656 author: person
657 657 author: person
658 658 author: other@place
659 659 author: A. N. Other <other@place>
660 660 author: User Name <user@hostname>
661 661 author--verbose: test
662 662 author--verbose: User Name <user@hostname>
663 663 author--verbose: person
664 664 author--verbose: person
665 665 author--verbose: person
666 666 author--verbose: person
667 667 author--verbose: other@place
668 668 author--verbose: A. N. Other <other@place>
669 669 author--verbose: User Name <user@hostname>
670 670 author--debug: test
671 671 author--debug: User Name <user@hostname>
672 672 author--debug: person
673 673 author--debug: person
674 674 author--debug: person
675 675 author--debug: person
676 676 author--debug: other@place
677 677 author--debug: A. N. Other <other@place>
678 678 author--debug: User Name <user@hostname>
679 679 branch: default
680 680 branch: default
681 681 branch: default
682 682 branch: default
683 683 branch: foo
684 684 branch: default
685 685 branch: default
686 686 branch: default
687 687 branch: default
688 688 branch--verbose: default
689 689 branch--verbose: default
690 690 branch--verbose: default
691 691 branch--verbose: default
692 692 branch--verbose: foo
693 693 branch--verbose: default
694 694 branch--verbose: default
695 695 branch--verbose: default
696 696 branch--verbose: default
697 697 branch--debug: default
698 698 branch--debug: default
699 699 branch--debug: default
700 700 branch--debug: default
701 701 branch--debug: foo
702 702 branch--debug: default
703 703 branch--debug: default
704 704 branch--debug: default
705 705 branch--debug: default
706 706 branches:
707 707 branches:
708 708 branches:
709 709 branches:
710 710 branches: foo
711 711 branches:
712 712 branches:
713 713 branches:
714 714 branches:
715 715 branches--verbose:
716 716 branches--verbose:
717 717 branches--verbose:
718 718 branches--verbose:
719 719 branches--verbose: foo
720 720 branches--verbose:
721 721 branches--verbose:
722 722 branches--verbose:
723 723 branches--verbose:
724 724 branches--debug:
725 725 branches--debug:
726 726 branches--debug:
727 727 branches--debug:
728 728 branches--debug: foo
729 729 branches--debug:
730 730 branches--debug:
731 731 branches--debug:
732 732 branches--debug:
733 733 date: 1577872860.00
734 734 date: 1000000.00
735 735 date: 1500001.00
736 736 date: 1500000.00
737 737 date: 1400000.00
738 738 date: 1300000.00
739 739 date: 1200000.00
740 740 date: 1100000.00
741 741 date: 1000000.00
742 742 date--verbose: 1577872860.00
743 743 date--verbose: 1000000.00
744 744 date--verbose: 1500001.00
745 745 date--verbose: 1500000.00
746 746 date--verbose: 1400000.00
747 747 date--verbose: 1300000.00
748 748 date--verbose: 1200000.00
749 749 date--verbose: 1100000.00
750 750 date--verbose: 1000000.00
751 751 date--debug: 1577872860.00
752 752 date--debug: 1000000.00
753 753 date--debug: 1500001.00
754 754 date--debug: 1500000.00
755 755 date--debug: 1400000.00
756 756 date--debug: 1300000.00
757 757 date--debug: 1200000.00
758 758 date--debug: 1100000.00
759 759 date--debug: 1000000.00
760 760 desc: third
761 761 desc: second
762 762 desc: merge
763 763 desc: new head
764 764 desc: new branch
765 765 desc: no user, no domain
766 766 desc: no person
767 767 desc: other 1
768 768 other 2
769 769
770 770 other 3
771 771 desc: line 1
772 772 line 2
773 773 desc--verbose: third
774 774 desc--verbose: second
775 775 desc--verbose: merge
776 776 desc--verbose: new head
777 777 desc--verbose: new branch
778 778 desc--verbose: no user, no domain
779 779 desc--verbose: no person
780 780 desc--verbose: other 1
781 781 other 2
782 782
783 783 other 3
784 784 desc--verbose: line 1
785 785 line 2
786 786 desc--debug: third
787 787 desc--debug: second
788 788 desc--debug: merge
789 789 desc--debug: new head
790 790 desc--debug: new branch
791 791 desc--debug: no user, no domain
792 792 desc--debug: no person
793 793 desc--debug: other 1
794 794 other 2
795 795
796 796 other 3
797 797 desc--debug: line 1
798 798 line 2
799 799 file_adds: fourth third
800 800 file_adds: second
801 801 file_adds:
802 802 file_adds: d
803 803 file_adds:
804 804 file_adds:
805 805 file_adds: c
806 806 file_adds: b
807 807 file_adds: a
808 808 file_adds--verbose: fourth third
809 809 file_adds--verbose: second
810 810 file_adds--verbose:
811 811 file_adds--verbose: d
812 812 file_adds--verbose:
813 813 file_adds--verbose:
814 814 file_adds--verbose: c
815 815 file_adds--verbose: b
816 816 file_adds--verbose: a
817 817 file_adds--debug: fourth third
818 818 file_adds--debug: second
819 819 file_adds--debug:
820 820 file_adds--debug: d
821 821 file_adds--debug:
822 822 file_adds--debug:
823 823 file_adds--debug: c
824 824 file_adds--debug: b
825 825 file_adds--debug: a
826 826 file_dels: second
827 827 file_dels:
828 828 file_dels:
829 829 file_dels:
830 830 file_dels:
831 831 file_dels:
832 832 file_dels:
833 833 file_dels:
834 834 file_dels:
835 835 file_dels--verbose: second
836 836 file_dels--verbose:
837 837 file_dels--verbose:
838 838 file_dels--verbose:
839 839 file_dels--verbose:
840 840 file_dels--verbose:
841 841 file_dels--verbose:
842 842 file_dels--verbose:
843 843 file_dels--verbose:
844 844 file_dels--debug: second
845 845 file_dels--debug:
846 846 file_dels--debug:
847 847 file_dels--debug:
848 848 file_dels--debug:
849 849 file_dels--debug:
850 850 file_dels--debug:
851 851 file_dels--debug:
852 852 file_dels--debug:
853 853 file_mods:
854 854 file_mods:
855 855 file_mods:
856 856 file_mods:
857 857 file_mods:
858 858 file_mods: c
859 859 file_mods:
860 860 file_mods:
861 861 file_mods:
862 862 file_mods--verbose:
863 863 file_mods--verbose:
864 864 file_mods--verbose:
865 865 file_mods--verbose:
866 866 file_mods--verbose:
867 867 file_mods--verbose: c
868 868 file_mods--verbose:
869 869 file_mods--verbose:
870 870 file_mods--verbose:
871 871 file_mods--debug:
872 872 file_mods--debug:
873 873 file_mods--debug:
874 874 file_mods--debug:
875 875 file_mods--debug:
876 876 file_mods--debug: c
877 877 file_mods--debug:
878 878 file_mods--debug:
879 879 file_mods--debug:
880 880 file_copies: fourth (second)
881 881 file_copies:
882 882 file_copies:
883 883 file_copies:
884 884 file_copies:
885 885 file_copies:
886 886 file_copies:
887 887 file_copies:
888 888 file_copies:
889 889 file_copies--verbose: fourth (second)
890 890 file_copies--verbose:
891 891 file_copies--verbose:
892 892 file_copies--verbose:
893 893 file_copies--verbose:
894 894 file_copies--verbose:
895 895 file_copies--verbose:
896 896 file_copies--verbose:
897 897 file_copies--verbose:
898 898 file_copies--debug: fourth (second)
899 899 file_copies--debug:
900 900 file_copies--debug:
901 901 file_copies--debug:
902 902 file_copies--debug:
903 903 file_copies--debug:
904 904 file_copies--debug:
905 905 file_copies--debug:
906 906 file_copies--debug:
907 907 file_copies_switch:
908 908 file_copies_switch:
909 909 file_copies_switch:
910 910 file_copies_switch:
911 911 file_copies_switch:
912 912 file_copies_switch:
913 913 file_copies_switch:
914 914 file_copies_switch:
915 915 file_copies_switch:
916 916 file_copies_switch--verbose:
917 917 file_copies_switch--verbose:
918 918 file_copies_switch--verbose:
919 919 file_copies_switch--verbose:
920 920 file_copies_switch--verbose:
921 921 file_copies_switch--verbose:
922 922 file_copies_switch--verbose:
923 923 file_copies_switch--verbose:
924 924 file_copies_switch--verbose:
925 925 file_copies_switch--debug:
926 926 file_copies_switch--debug:
927 927 file_copies_switch--debug:
928 928 file_copies_switch--debug:
929 929 file_copies_switch--debug:
930 930 file_copies_switch--debug:
931 931 file_copies_switch--debug:
932 932 file_copies_switch--debug:
933 933 file_copies_switch--debug:
934 934 files: fourth second third
935 935 files: second
936 936 files:
937 937 files: d
938 938 files:
939 939 files: c
940 940 files: c
941 941 files: b
942 942 files: a
943 943 files--verbose: fourth second third
944 944 files--verbose: second
945 945 files--verbose:
946 946 files--verbose: d
947 947 files--verbose:
948 948 files--verbose: c
949 949 files--verbose: c
950 950 files--verbose: b
951 951 files--verbose: a
952 952 files--debug: fourth second third
953 953 files--debug: second
954 954 files--debug:
955 955 files--debug: d
956 956 files--debug:
957 957 files--debug: c
958 958 files--debug: c
959 959 files--debug: b
960 960 files--debug: a
961 961 manifest: 6:94961b75a2da
962 962 manifest: 5:f2dbc354b94e
963 963 manifest: 4:4dc3def4f9b4
964 964 manifest: 4:4dc3def4f9b4
965 965 manifest: 3:cb5a1327723b
966 966 manifest: 3:cb5a1327723b
967 967 manifest: 2:6e0e82995c35
968 968 manifest: 1:4e8d705b1e53
969 969 manifest: 0:a0c8bcbbb45c
970 970 manifest--verbose: 6:94961b75a2da
971 971 manifest--verbose: 5:f2dbc354b94e
972 972 manifest--verbose: 4:4dc3def4f9b4
973 973 manifest--verbose: 4:4dc3def4f9b4
974 974 manifest--verbose: 3:cb5a1327723b
975 975 manifest--verbose: 3:cb5a1327723b
976 976 manifest--verbose: 2:6e0e82995c35
977 977 manifest--verbose: 1:4e8d705b1e53
978 978 manifest--verbose: 0:a0c8bcbbb45c
979 979 manifest--debug: 6:94961b75a2da554b4df6fb599e5bfc7d48de0c64
980 980 manifest--debug: 5:f2dbc354b94e5ec0b4f10680ee0cee816101d0bf
981 981 manifest--debug: 4:4dc3def4f9b4c6e8de820f6ee74737f91e96a216
982 982 manifest--debug: 4:4dc3def4f9b4c6e8de820f6ee74737f91e96a216
983 983 manifest--debug: 3:cb5a1327723bada42f117e4c55a303246eaf9ccc
984 984 manifest--debug: 3:cb5a1327723bada42f117e4c55a303246eaf9ccc
985 985 manifest--debug: 2:6e0e82995c35d0d57a52aca8da4e56139e06b4b1
986 986 manifest--debug: 1:4e8d705b1e53e3f9375e0e60dc7b525d8211fe55
987 987 manifest--debug: 0:a0c8bcbbb45c63b90b70ad007bf38961f64f2af0
988 988 node: 95c24699272ef57d062b8bccc32c878bf841784a
989 989 node: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
990 990 node: d41e714fe50d9e4a5f11b4d595d543481b5f980b
991 991 node: 13207e5a10d9fd28ec424934298e176197f2c67f
992 992 node: bbe44766e73d5f11ed2177f1838de10c53ef3e74
993 993 node: 10e46f2dcbf4823578cf180f33ecf0b957964c47
994 994 node: 97054abb4ab824450e9164180baf491ae0078465
995 995 node: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
996 996 node: 1e4e1b8f71e05681d422154f5421e385fec3454f
997 997 node--verbose: 95c24699272ef57d062b8bccc32c878bf841784a
998 998 node--verbose: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
999 999 node--verbose: d41e714fe50d9e4a5f11b4d595d543481b5f980b
1000 1000 node--verbose: 13207e5a10d9fd28ec424934298e176197f2c67f
1001 1001 node--verbose: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1002 1002 node--verbose: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1003 1003 node--verbose: 97054abb4ab824450e9164180baf491ae0078465
1004 1004 node--verbose: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1005 1005 node--verbose: 1e4e1b8f71e05681d422154f5421e385fec3454f
1006 1006 node--debug: 95c24699272ef57d062b8bccc32c878bf841784a
1007 1007 node--debug: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1008 1008 node--debug: d41e714fe50d9e4a5f11b4d595d543481b5f980b
1009 1009 node--debug: 13207e5a10d9fd28ec424934298e176197f2c67f
1010 1010 node--debug: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1011 1011 node--debug: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1012 1012 node--debug: 97054abb4ab824450e9164180baf491ae0078465
1013 1013 node--debug: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1014 1014 node--debug: 1e4e1b8f71e05681d422154f5421e385fec3454f
1015 1015 parents:
1016 1016 parents: -1:000000000000
1017 1017 parents: 5:13207e5a10d9 4:bbe44766e73d
1018 1018 parents: 3:10e46f2dcbf4
1019 1019 parents:
1020 1020 parents:
1021 1021 parents:
1022 1022 parents:
1023 1023 parents:
1024 1024 parents--verbose:
1025 1025 parents--verbose: -1:000000000000
1026 1026 parents--verbose: 5:13207e5a10d9 4:bbe44766e73d
1027 1027 parents--verbose: 3:10e46f2dcbf4
1028 1028 parents--verbose:
1029 1029 parents--verbose:
1030 1030 parents--verbose:
1031 1031 parents--verbose:
1032 1032 parents--verbose:
1033 1033 parents--debug: 7:29114dbae42b9f078cf2714dbe3a86bba8ec7453 -1:0000000000000000000000000000000000000000
1034 1034 parents--debug: -1:0000000000000000000000000000000000000000 -1:0000000000000000000000000000000000000000
1035 1035 parents--debug: 5:13207e5a10d9fd28ec424934298e176197f2c67f 4:bbe44766e73d5f11ed2177f1838de10c53ef3e74
1036 1036 parents--debug: 3:10e46f2dcbf4823578cf180f33ecf0b957964c47 -1:0000000000000000000000000000000000000000
1037 1037 parents--debug: 3:10e46f2dcbf4823578cf180f33ecf0b957964c47 -1:0000000000000000000000000000000000000000
1038 1038 parents--debug: 2:97054abb4ab824450e9164180baf491ae0078465 -1:0000000000000000000000000000000000000000
1039 1039 parents--debug: 1:b608e9d1a3f0273ccf70fb85fd6866b3482bf965 -1:0000000000000000000000000000000000000000
1040 1040 parents--debug: 0:1e4e1b8f71e05681d422154f5421e385fec3454f -1:0000000000000000000000000000000000000000
1041 1041 parents--debug: -1:0000000000000000000000000000000000000000 -1:0000000000000000000000000000000000000000
1042 1042 rev: 8
1043 1043 rev: 7
1044 1044 rev: 6
1045 1045 rev: 5
1046 1046 rev: 4
1047 1047 rev: 3
1048 1048 rev: 2
1049 1049 rev: 1
1050 1050 rev: 0
1051 1051 rev--verbose: 8
1052 1052 rev--verbose: 7
1053 1053 rev--verbose: 6
1054 1054 rev--verbose: 5
1055 1055 rev--verbose: 4
1056 1056 rev--verbose: 3
1057 1057 rev--verbose: 2
1058 1058 rev--verbose: 1
1059 1059 rev--verbose: 0
1060 1060 rev--debug: 8
1061 1061 rev--debug: 7
1062 1062 rev--debug: 6
1063 1063 rev--debug: 5
1064 1064 rev--debug: 4
1065 1065 rev--debug: 3
1066 1066 rev--debug: 2
1067 1067 rev--debug: 1
1068 1068 rev--debug: 0
1069 1069 tags: tip
1070 1070 tags:
1071 1071 tags:
1072 1072 tags:
1073 1073 tags:
1074 1074 tags:
1075 1075 tags:
1076 1076 tags:
1077 1077 tags:
1078 1078 tags--verbose: tip
1079 1079 tags--verbose:
1080 1080 tags--verbose:
1081 1081 tags--verbose:
1082 1082 tags--verbose:
1083 1083 tags--verbose:
1084 1084 tags--verbose:
1085 1085 tags--verbose:
1086 1086 tags--verbose:
1087 1087 tags--debug: tip
1088 1088 tags--debug:
1089 1089 tags--debug:
1090 1090 tags--debug:
1091 1091 tags--debug:
1092 1092 tags--debug:
1093 1093 tags--debug:
1094 1094 tags--debug:
1095 1095 tags--debug:
1096 1096 diffstat: 3: +2/-1
1097 1097 diffstat: 1: +1/-0
1098 1098 diffstat: 0: +0/-0
1099 1099 diffstat: 1: +1/-0
1100 1100 diffstat: 0: +0/-0
1101 1101 diffstat: 1: +1/-0
1102 1102 diffstat: 1: +4/-0
1103 1103 diffstat: 1: +2/-0
1104 1104 diffstat: 1: +1/-0
1105 1105 diffstat--verbose: 3: +2/-1
1106 1106 diffstat--verbose: 1: +1/-0
1107 1107 diffstat--verbose: 0: +0/-0
1108 1108 diffstat--verbose: 1: +1/-0
1109 1109 diffstat--verbose: 0: +0/-0
1110 1110 diffstat--verbose: 1: +1/-0
1111 1111 diffstat--verbose: 1: +4/-0
1112 1112 diffstat--verbose: 1: +2/-0
1113 1113 diffstat--verbose: 1: +1/-0
1114 1114 diffstat--debug: 3: +2/-1
1115 1115 diffstat--debug: 1: +1/-0
1116 1116 diffstat--debug: 0: +0/-0
1117 1117 diffstat--debug: 1: +1/-0
1118 1118 diffstat--debug: 0: +0/-0
1119 1119 diffstat--debug: 1: +1/-0
1120 1120 diffstat--debug: 1: +4/-0
1121 1121 diffstat--debug: 1: +2/-0
1122 1122 diffstat--debug: 1: +1/-0
1123 1123 extras: branch=default
1124 1124 extras: branch=default
1125 1125 extras: branch=default
1126 1126 extras: branch=default
1127 1127 extras: branch=foo
1128 1128 extras: branch=default
1129 1129 extras: branch=default
1130 1130 extras: branch=default
1131 1131 extras: branch=default
1132 1132 extras--verbose: branch=default
1133 1133 extras--verbose: branch=default
1134 1134 extras--verbose: branch=default
1135 1135 extras--verbose: branch=default
1136 1136 extras--verbose: branch=foo
1137 1137 extras--verbose: branch=default
1138 1138 extras--verbose: branch=default
1139 1139 extras--verbose: branch=default
1140 1140 extras--verbose: branch=default
1141 1141 extras--debug: branch=default
1142 1142 extras--debug: branch=default
1143 1143 extras--debug: branch=default
1144 1144 extras--debug: branch=default
1145 1145 extras--debug: branch=foo
1146 1146 extras--debug: branch=default
1147 1147 extras--debug: branch=default
1148 1148 extras--debug: branch=default
1149 1149 extras--debug: branch=default
1150 1150 p1rev: 7
1151 1151 p1rev: -1
1152 1152 p1rev: 5
1153 1153 p1rev: 3
1154 1154 p1rev: 3
1155 1155 p1rev: 2
1156 1156 p1rev: 1
1157 1157 p1rev: 0
1158 1158 p1rev: -1
1159 1159 p1rev--verbose: 7
1160 1160 p1rev--verbose: -1
1161 1161 p1rev--verbose: 5
1162 1162 p1rev--verbose: 3
1163 1163 p1rev--verbose: 3
1164 1164 p1rev--verbose: 2
1165 1165 p1rev--verbose: 1
1166 1166 p1rev--verbose: 0
1167 1167 p1rev--verbose: -1
1168 1168 p1rev--debug: 7
1169 1169 p1rev--debug: -1
1170 1170 p1rev--debug: 5
1171 1171 p1rev--debug: 3
1172 1172 p1rev--debug: 3
1173 1173 p1rev--debug: 2
1174 1174 p1rev--debug: 1
1175 1175 p1rev--debug: 0
1176 1176 p1rev--debug: -1
1177 1177 p2rev: -1
1178 1178 p2rev: -1
1179 1179 p2rev: 4
1180 1180 p2rev: -1
1181 1181 p2rev: -1
1182 1182 p2rev: -1
1183 1183 p2rev: -1
1184 1184 p2rev: -1
1185 1185 p2rev: -1
1186 1186 p2rev--verbose: -1
1187 1187 p2rev--verbose: -1
1188 1188 p2rev--verbose: 4
1189 1189 p2rev--verbose: -1
1190 1190 p2rev--verbose: -1
1191 1191 p2rev--verbose: -1
1192 1192 p2rev--verbose: -1
1193 1193 p2rev--verbose: -1
1194 1194 p2rev--verbose: -1
1195 1195 p2rev--debug: -1
1196 1196 p2rev--debug: -1
1197 1197 p2rev--debug: 4
1198 1198 p2rev--debug: -1
1199 1199 p2rev--debug: -1
1200 1200 p2rev--debug: -1
1201 1201 p2rev--debug: -1
1202 1202 p2rev--debug: -1
1203 1203 p2rev--debug: -1
1204 1204 p1node: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1205 1205 p1node: 0000000000000000000000000000000000000000
1206 1206 p1node: 13207e5a10d9fd28ec424934298e176197f2c67f
1207 1207 p1node: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1208 1208 p1node: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1209 1209 p1node: 97054abb4ab824450e9164180baf491ae0078465
1210 1210 p1node: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1211 1211 p1node: 1e4e1b8f71e05681d422154f5421e385fec3454f
1212 1212 p1node: 0000000000000000000000000000000000000000
1213 1213 p1node--verbose: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1214 1214 p1node--verbose: 0000000000000000000000000000000000000000
1215 1215 p1node--verbose: 13207e5a10d9fd28ec424934298e176197f2c67f
1216 1216 p1node--verbose: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1217 1217 p1node--verbose: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1218 1218 p1node--verbose: 97054abb4ab824450e9164180baf491ae0078465
1219 1219 p1node--verbose: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1220 1220 p1node--verbose: 1e4e1b8f71e05681d422154f5421e385fec3454f
1221 1221 p1node--verbose: 0000000000000000000000000000000000000000
1222 1222 p1node--debug: 29114dbae42b9f078cf2714dbe3a86bba8ec7453
1223 1223 p1node--debug: 0000000000000000000000000000000000000000
1224 1224 p1node--debug: 13207e5a10d9fd28ec424934298e176197f2c67f
1225 1225 p1node--debug: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1226 1226 p1node--debug: 10e46f2dcbf4823578cf180f33ecf0b957964c47
1227 1227 p1node--debug: 97054abb4ab824450e9164180baf491ae0078465
1228 1228 p1node--debug: b608e9d1a3f0273ccf70fb85fd6866b3482bf965
1229 1229 p1node--debug: 1e4e1b8f71e05681d422154f5421e385fec3454f
1230 1230 p1node--debug: 0000000000000000000000000000000000000000
1231 1231 p2node: 0000000000000000000000000000000000000000
1232 1232 p2node: 0000000000000000000000000000000000000000
1233 1233 p2node: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1234 1234 p2node: 0000000000000000000000000000000000000000
1235 1235 p2node: 0000000000000000000000000000000000000000
1236 1236 p2node: 0000000000000000000000000000000000000000
1237 1237 p2node: 0000000000000000000000000000000000000000
1238 1238 p2node: 0000000000000000000000000000000000000000
1239 1239 p2node: 0000000000000000000000000000000000000000
1240 1240 p2node--verbose: 0000000000000000000000000000000000000000
1241 1241 p2node--verbose: 0000000000000000000000000000000000000000
1242 1242 p2node--verbose: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1243 1243 p2node--verbose: 0000000000000000000000000000000000000000
1244 1244 p2node--verbose: 0000000000000000000000000000000000000000
1245 1245 p2node--verbose: 0000000000000000000000000000000000000000
1246 1246 p2node--verbose: 0000000000000000000000000000000000000000
1247 1247 p2node--verbose: 0000000000000000000000000000000000000000
1248 1248 p2node--verbose: 0000000000000000000000000000000000000000
1249 1249 p2node--debug: 0000000000000000000000000000000000000000
1250 1250 p2node--debug: 0000000000000000000000000000000000000000
1251 1251 p2node--debug: bbe44766e73d5f11ed2177f1838de10c53ef3e74
1252 1252 p2node--debug: 0000000000000000000000000000000000000000
1253 1253 p2node--debug: 0000000000000000000000000000000000000000
1254 1254 p2node--debug: 0000000000000000000000000000000000000000
1255 1255 p2node--debug: 0000000000000000000000000000000000000000
1256 1256 p2node--debug: 0000000000000000000000000000000000000000
1257 1257 p2node--debug: 0000000000000000000000000000000000000000
1258 1258
1259 1259 Filters work:
1260 1260
1261 1261 $ hg log --template '{author|domain}\n'
1262 1262
1263 1263 hostname
1264 1264
1265 1265
1266 1266
1267 1267
1268 1268 place
1269 1269 place
1270 1270 hostname
1271 1271
1272 1272 $ hg log --template '{author|person}\n'
1273 1273 test
1274 1274 User Name
1275 1275 person
1276 1276 person
1277 1277 person
1278 1278 person
1279 1279 other
1280 1280 A. N. Other
1281 1281 User Name
1282 1282
1283 1283 $ hg log --template '{author|user}\n'
1284 1284 test
1285 1285 user
1286 1286 person
1287 1287 person
1288 1288 person
1289 1289 person
1290 1290 other
1291 1291 other
1292 1292 user
1293 1293
1294 1294 $ hg log --template '{date|date}\n'
1295 1295 Wed Jan 01 10:01:00 2020 +0000
1296 1296 Mon Jan 12 13:46:40 1970 +0000
1297 1297 Sun Jan 18 08:40:01 1970 +0000
1298 1298 Sun Jan 18 08:40:00 1970 +0000
1299 1299 Sat Jan 17 04:53:20 1970 +0000
1300 1300 Fri Jan 16 01:06:40 1970 +0000
1301 1301 Wed Jan 14 21:20:00 1970 +0000
1302 1302 Tue Jan 13 17:33:20 1970 +0000
1303 1303 Mon Jan 12 13:46:40 1970 +0000
1304 1304
1305 1305 $ hg log --template '{date|isodate}\n'
1306 1306 2020-01-01 10:01 +0000
1307 1307 1970-01-12 13:46 +0000
1308 1308 1970-01-18 08:40 +0000
1309 1309 1970-01-18 08:40 +0000
1310 1310 1970-01-17 04:53 +0000
1311 1311 1970-01-16 01:06 +0000
1312 1312 1970-01-14 21:20 +0000
1313 1313 1970-01-13 17:33 +0000
1314 1314 1970-01-12 13:46 +0000
1315 1315
1316 1316 $ hg log --template '{date|isodatesec}\n'
1317 1317 2020-01-01 10:01:00 +0000
1318 1318 1970-01-12 13:46:40 +0000
1319 1319 1970-01-18 08:40:01 +0000
1320 1320 1970-01-18 08:40:00 +0000
1321 1321 1970-01-17 04:53:20 +0000
1322 1322 1970-01-16 01:06:40 +0000
1323 1323 1970-01-14 21:20:00 +0000
1324 1324 1970-01-13 17:33:20 +0000
1325 1325 1970-01-12 13:46:40 +0000
1326 1326
1327 1327 $ hg log --template '{date|rfc822date}\n'
1328 1328 Wed, 01 Jan 2020 10:01:00 +0000
1329 1329 Mon, 12 Jan 1970 13:46:40 +0000
1330 1330 Sun, 18 Jan 1970 08:40:01 +0000
1331 1331 Sun, 18 Jan 1970 08:40:00 +0000
1332 1332 Sat, 17 Jan 1970 04:53:20 +0000
1333 1333 Fri, 16 Jan 1970 01:06:40 +0000
1334 1334 Wed, 14 Jan 1970 21:20:00 +0000
1335 1335 Tue, 13 Jan 1970 17:33:20 +0000
1336 1336 Mon, 12 Jan 1970 13:46:40 +0000
1337 1337
1338 1338 $ hg log --template '{desc|firstline}\n'
1339 1339 third
1340 1340 second
1341 1341 merge
1342 1342 new head
1343 1343 new branch
1344 1344 no user, no domain
1345 1345 no person
1346 1346 other 1
1347 1347 line 1
1348 1348
1349 1349 $ hg log --template '{node|short}\n'
1350 1350 95c24699272e
1351 1351 29114dbae42b
1352 1352 d41e714fe50d
1353 1353 13207e5a10d9
1354 1354 bbe44766e73d
1355 1355 10e46f2dcbf4
1356 1356 97054abb4ab8
1357 1357 b608e9d1a3f0
1358 1358 1e4e1b8f71e0
1359 1359
1360 1360 $ hg log --template '<changeset author="{author|xmlescape}"/>\n'
1361 1361 <changeset author="test"/>
1362 1362 <changeset author="User Name &lt;user@hostname&gt;"/>
1363 1363 <changeset author="person"/>
1364 1364 <changeset author="person"/>
1365 1365 <changeset author="person"/>
1366 1366 <changeset author="person"/>
1367 1367 <changeset author="other@place"/>
1368 1368 <changeset author="A. N. Other &lt;other@place&gt;"/>
1369 1369 <changeset author="User Name &lt;user@hostname&gt;"/>
1370 1370
1371 1371 $ hg log --template '{rev}: {children}\n'
1372 1372 8:
1373 1373 7: 8:95c24699272e
1374 1374 6:
1375 1375 5: 6:d41e714fe50d
1376 1376 4: 6:d41e714fe50d
1377 1377 3: 4:bbe44766e73d 5:13207e5a10d9
1378 1378 2: 3:10e46f2dcbf4
1379 1379 1: 2:97054abb4ab8
1380 1380 0: 1:b608e9d1a3f0
1381 1381
1382 1382 Formatnode filter works:
1383 1383
1384 1384 $ hg -q log -r 0 --template '{node|formatnode}\n'
1385 1385 1e4e1b8f71e0
1386 1386
1387 1387 $ hg log -r 0 --template '{node|formatnode}\n'
1388 1388 1e4e1b8f71e0
1389 1389
1390 1390 $ hg -v log -r 0 --template '{node|formatnode}\n'
1391 1391 1e4e1b8f71e0
1392 1392
1393 1393 $ hg --debug log -r 0 --template '{node|formatnode}\n'
1394 1394 1e4e1b8f71e05681d422154f5421e385fec3454f
1395 1395
1396 1396 Age filter:
1397 1397
1398 1398 $ hg log --template '{date|age}\n' > /dev/null || exit 1
1399 1399
1400 1400 >>> from datetime import datetime, timedelta
1401 1401 >>> fp = open('a', 'w')
1402 1402 >>> n = datetime.now() + timedelta(366 * 7)
1403 1403 >>> fp.write('%d-%d-%d 00:00' % (n.year, n.month, n.day))
1404 1404 >>> fp.close()
1405 1405 $ hg add a
1406 1406 $ hg commit -m future -d "`cat a`"
1407 1407
1408 1408 $ hg log -l1 --template '{date|age}\n'
1409 1409 7 years from now
1410 1410
1411 1411 Error on syntax:
1412 1412
1413 1413 $ echo 'x = "f' >> t
1414 1414 $ hg log
1415 1415 abort: t:3: unmatched quotes
1416 1416 [255]
1417 1417
1418 1418 Behind the scenes, this will throw TypeError
1419 1419
1420 1420 $ hg log -l 3 --template '{date|obfuscate}\n'
1421 1421 abort: template filter 'obfuscate' is not compatible with keyword 'date'
1422 1422 [255]
1423 1423
1424 1424 Behind the scenes, this will throw a ValueError
1425 1425
1426 1426 $ hg log -l 3 --template 'line: {desc|shortdate}\n'
1427 1427 abort: template filter 'shortdate' is not compatible with keyword 'desc'
1428 1428 [255]
1429 1429
1430 1430 Behind the scenes, this will throw AttributeError
1431 1431
1432 1432 $ hg log -l 3 --template 'line: {date|escape}\n'
1433 1433 abort: template filter 'escape' is not compatible with keyword 'date'
1434 1434 [255]
1435 1435
1436 1436 Behind the scenes, this will throw ValueError
1437 1437
1438 1438 $ hg tip --template '{author|email|date}\n'
1439 1439 abort: template filter 'datefilter' is not compatible with keyword 'author'
1440 1440 [255]
1441 1441
1442 1442 Throw an error if a template function doesn't exist
1443 1443
1444 1444 $ hg tip --template '{foo()}\n'
1445 1445 hg: parse error: unknown function 'foo'
1446 1446 [255]
1447 1447
1448 1448 $ cd ..
1449 1449
1450 1450
1451 1451 latesttag:
1452 1452
1453 1453 $ hg init latesttag
1454 1454 $ cd latesttag
1455 1455
1456 1456 $ echo a > file
1457 1457 $ hg ci -Am a -d '0 0'
1458 1458 adding file
1459 1459
1460 1460 $ echo b >> file
1461 1461 $ hg ci -m b -d '1 0'
1462 1462
1463 1463 $ echo c >> head1
1464 1464 $ hg ci -Am h1c -d '2 0'
1465 1465 adding head1
1466 1466
1467 1467 $ hg update -q 1
1468 1468 $ echo d >> head2
1469 1469 $ hg ci -Am h2d -d '3 0'
1470 1470 adding head2
1471 1471 created new head
1472 1472
1473 1473 $ echo e >> head2
1474 1474 $ hg ci -m h2e -d '4 0'
1475 1475
1476 1476 $ hg merge -q
1477 1477 $ hg ci -m merge -d '5 -3600'
1478 1478
1479 1479 No tag set:
1480 1480
1481 1481 $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
1482 1482 5: null+5
1483 1483 4: null+4
1484 1484 3: null+3
1485 1485 2: null+3
1486 1486 1: null+2
1487 1487 0: null+1
1488 1488
1489 One common tag: longuest path wins:
1489 One common tag: longest path wins:
1490 1490
1491 1491 $ hg tag -r 1 -m t1 -d '6 0' t1
1492 1492 $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
1493 1493 6: t1+4
1494 1494 5: t1+3
1495 1495 4: t1+2
1496 1496 3: t1+1
1497 1497 2: t1+1
1498 1498 1: t1+0
1499 1499 0: null+1
1500 1500
1501 1501 One ancestor tag: more recent wins:
1502 1502
1503 1503 $ hg tag -r 2 -m t2 -d '7 0' t2
1504 1504 $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
1505 1505 7: t2+3
1506 1506 6: t2+2
1507 1507 5: t2+1
1508 1508 4: t1+2
1509 1509 3: t1+1
1510 1510 2: t2+0
1511 1511 1: t1+0
1512 1512 0: null+1
1513 1513
1514 1514 Two branch tags: more recent wins:
1515 1515
1516 1516 $ hg tag -r 3 -m t3 -d '8 0' t3
1517 1517 $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
1518 1518 8: t3+5
1519 1519 7: t3+4
1520 1520 6: t3+3
1521 1521 5: t3+2
1522 1522 4: t3+1
1523 1523 3: t3+0
1524 1524 2: t2+0
1525 1525 1: t1+0
1526 1526 0: null+1
1527 1527
1528 1528 Merged tag overrides:
1529 1529
1530 1530 $ hg tag -r 5 -m t5 -d '9 0' t5
1531 1531 $ hg tag -r 3 -m at3 -d '10 0' at3
1532 1532 $ hg log --template '{rev}: {latesttag}+{latesttagdistance}\n'
1533 1533 10: t5+5
1534 1534 9: t5+4
1535 1535 8: t5+3
1536 1536 7: t5+2
1537 1537 6: t5+1
1538 1538 5: t5+0
1539 1539 4: at3:t3+1
1540 1540 3: at3:t3+0
1541 1541 2: t2+0
1542 1542 1: t1+0
1543 1543 0: null+1
1544 1544
1545 1545 $ cd ..
1546 1546
1547 1547
1548 1548 Style path expansion: issue1948 - ui.style option doesn't work on OSX
1549 1549 if it is a relative path
1550 1550
1551 1551 $ mkdir -p home/styles
1552 1552
1553 1553 $ cat > home/styles/teststyle <<EOF
1554 1554 > changeset = 'test {rev}:{node|short}\n'
1555 1555 > EOF
1556 1556
1557 1557 $ HOME=`pwd`/home; export HOME
1558 1558
1559 1559 $ cat > latesttag/.hg/hgrc <<EOF
1560 1560 > [ui]
1561 1561 > style = ~/styles/teststyle
1562 1562 > EOF
1563 1563
1564 1564 $ hg -R latesttag tip
1565 1565 test 10:9b4a630e5f5f
1566 1566
1567 1567 Test recursive showlist template (issue1989):
1568 1568
1569 1569 $ cat > style1989 <<EOF
1570 1570 > changeset = '{file_mods}{manifest}{extras}'
1571 1571 > file_mod = 'M|{author|person}\n'
1572 1572 > manifest = '{rev},{author}\n'
1573 1573 > extra = '{key}: {author}\n'
1574 1574 > EOF
1575 1575
1576 1576 $ hg -R latesttag log -r tip --style=style1989
1577 1577 M|test
1578 1578 10,test
1579 1579 branch: test
1580 1580
1581 1581 Test new-style inline templating:
1582 1582
1583 1583 $ hg log -R latesttag -r tip --template 'modified files: {file_mods % " {file}\n"}\n'
1584 1584 modified files: .hgtags
1585 1585
1586 1586 Test the sub function of templating for expansion:
1587 1587
1588 1588 $ hg log -R latesttag -r 10 --template '{sub("[0-9]", "x", "{rev}")}\n'
1589 1589 xx
1590 1590
1591 1591 Test the strip function with chars specified:
1592 1592
1593 1593 $ hg log -R latesttag --template '{desc}\n'
1594 1594 at3
1595 1595 t5
1596 1596 t3
1597 1597 t2
1598 1598 t1
1599 1599 merge
1600 1600 h2e
1601 1601 h2d
1602 1602 h1c
1603 1603 b
1604 1604 a
1605 1605
1606 1606 $ hg log -R latesttag --template '{strip(desc, "te")}\n'
1607 1607 at3
1608 1608 5
1609 1609 3
1610 1610 2
1611 1611 1
1612 1612 merg
1613 1613 h2
1614 1614 h2d
1615 1615 h1c
1616 1616 b
1617 1617 a
1618 1618
1619 1619 Test date format:
1620 1620
1621 1621 $ hg log -R latesttag --template 'date: {date(date, "%y %m %d %S %z")}\n'
1622 1622 date: 70 01 01 10 +0000
1623 1623 date: 70 01 01 09 +0000
1624 1624 date: 70 01 01 08 +0000
1625 1625 date: 70 01 01 07 +0000
1626 1626 date: 70 01 01 06 +0000
1627 1627 date: 70 01 01 05 +0100
1628 1628 date: 70 01 01 04 +0000
1629 1629 date: 70 01 01 03 +0000
1630 1630 date: 70 01 01 02 +0000
1631 1631 date: 70 01 01 01 +0000
1632 1632 date: 70 01 01 00 +0000
1633 1633
1634 1634 Test string escaping:
1635 1635
1636 1636 $ hg log -R latesttag -r 0 --template '>\n<>\\n<{if(rev, "[>\n<>\\n<]")}>\n<>\\n<\n'
1637 1637 >
1638 1638 <>\n<[>
1639 1639 <>\n<]>
1640 1640 <>\n<
1641 1641
1642 1642 "string-escape"-ed "\x5c\x786e" becomes r"\x6e" (once) or r"n" (twice)
1643 1643
1644 1644 $ hg log -R a -r 0 --template '{if("1", "\x5c\x786e", "NG")}\n'
1645 1645 \x6e
1646 1646 $ hg log -R a -r 0 --template '{if("1", r"\x5c\x786e", "NG")}\n'
1647 1647 \x5c\x786e
1648 1648 $ hg log -R a -r 0 --template '{if("", "NG", "\x5c\x786e")}\n'
1649 1649 \x6e
1650 1650 $ hg log -R a -r 0 --template '{if("", "NG", r"\x5c\x786e")}\n'
1651 1651 \x5c\x786e
1652 1652
1653 1653 $ hg log -R a -r 2 --template '{ifeq("no perso\x6e", desc, "\x5c\x786e", "NG")}\n'
1654 1654 \x6e
1655 1655 $ hg log -R a -r 2 --template '{ifeq(r"no perso\x6e", desc, "NG", r"\x5c\x786e")}\n'
1656 1656 \x5c\x786e
1657 1657 $ hg log -R a -r 2 --template '{ifeq(desc, "no perso\x6e", "\x5c\x786e", "NG")}\n'
1658 1658 \x6e
1659 1659 $ hg log -R a -r 2 --template '{ifeq(desc, r"no perso\x6e", "NG", r"\x5c\x786e")}\n'
1660 1660 \x5c\x786e
1661 1661
1662 1662 $ hg log -R a -r 8 --template '{join(files, "\n")}\n'
1663 1663 fourth
1664 1664 second
1665 1665 third
1666 1666 $ hg log -R a -r 8 --template '{join(files, r"\n")}\n'
1667 1667 fourth\nsecond\nthird
1668 1668
1669 1669 $ hg log -R a -r 2 --template '{rstdoc("1st\n\n2nd", "htm\x6c")}'
1670 1670 <p>
1671 1671 1st
1672 1672 </p>
1673 1673 <p>
1674 1674 2nd
1675 1675 </p>
1676 1676 $ hg log -R a -r 2 --template '{rstdoc(r"1st\n\n2nd", "html")}'
1677 1677 <p>
1678 1678 1st\n\n2nd
1679 1679 </p>
1680 1680 $ hg log -R a -r 2 --template '{rstdoc("1st\n\n2nd", r"htm\x6c")}'
1681 1681 1st
1682 1682
1683 1683 2nd
1684 1684
1685 1685 $ hg log -R a -r 2 --template '{strip(desc, "\x6e")}\n'
1686 1686 o perso
1687 1687 $ hg log -R a -r 2 --template '{strip(desc, r"\x6e")}\n'
1688 1688 no person
1689 1689 $ hg log -R a -r 2 --template '{strip("no perso\x6e", "\x6e")}\n'
1690 1690 o perso
1691 1691 $ hg log -R a -r 2 --template '{strip(r"no perso\x6e", r"\x6e")}\n'
1692 1692 no perso
1693 1693
1694 1694 $ hg log -R a -r 2 --template '{sub("\\x6e", "\x2d", desc)}\n'
1695 1695 -o perso-
1696 1696 $ hg log -R a -r 2 --template '{sub(r"\\x6e", "-", desc)}\n'
1697 1697 no person
1698 1698 $ hg log -R a -r 2 --template '{sub("n", r"\x2d", desc)}\n'
1699 1699 \x2do perso\x2d
1700 1700 $ hg log -R a -r 2 --template '{sub("n", "\x2d", "no perso\x6e")}\n'
1701 1701 -o perso-
1702 1702 $ hg log -R a -r 2 --template '{sub("n", r"\x2d", r"no perso\x6e")}\n'
1703 1703 \x2do perso\x6e
1704 1704
1705 1705 $ hg log -R a -r 8 --template '{files % "{file}\n"}'
1706 1706 fourth
1707 1707 second
1708 1708 third
1709 1709 $ hg log -R a -r 8 --template '{files % r"{file}\n"}\n'
1710 1710 fourth\nsecond\nthird\n
1711 1711
1712 Test string escapeing in nested expression:
1712 Test string escaping in nested expression:
1713 1713
1714 1714 $ hg log -R a -r 8 --template '{ifeq(r"\x6e", if("1", "\x5c\x786e"), join(files, "\x5c\x786e"))}\n'
1715 1715 fourth\x6esecond\x6ethird
1716 1716 $ hg log -R a -r 8 --template '{ifeq(if("1", r"\x6e"), "\x5c\x786e", join(files, "\x5c\x786e"))}\n'
1717 1717 fourth\x6esecond\x6ethird
1718 1718
1719 1719 $ hg log -R a -r 8 --template '{join(files, ifeq(branch, "default", "\x5c\x786e"))}\n'
1720 1720 fourth\x6esecond\x6ethird
1721 1721 $ hg log -R a -r 8 --template '{join(files, ifeq(branch, "default", r"\x5c\x786e"))}\n'
1722 1722 fourth\x5c\x786esecond\x5c\x786ethird
1723 1723
1724 1724 $ hg log -R a -r 3:4 --template '{rev}:{sub(if("1", "\x6e"), ifeq(branch, "foo", r"\x5c\x786e", "\x5c\x786e"), desc)}\n'
1725 1725 3:\x6eo user, \x6eo domai\x6e
1726 1726 4:\x5c\x786eew bra\x5c\x786ech
1727 1727
1728 1728 Test recursive evaluation:
1729 1729
1730 1730 $ hg init r
1731 1731 $ cd r
1732 1732 $ echo a > a
1733 1733 $ hg ci -Am '{rev}'
1734 1734 adding a
1735 1735 $ hg log -r 0 --template '{if(rev, desc)}\n'
1736 1736 {rev}
1737 1737 $ hg log -r 0 --template '{if(rev, "{author} {rev}")}\n'
1738 1738 test 0
1739 1739
1740 1740 $ hg branch -q 'text.{rev}'
1741 1741 $ echo aa >> aa
1742 1742 $ hg ci -u '{node|short}' -m 'desc to be wrapped desc to be wrapped'
1743 1743
1744 1744 $ hg log -l1 --template '{fill(desc, "20", author, branch)}'
1745 1745 {node|short}desc to
1746 1746 text.{rev}be wrapped
1747 1747 text.{rev}desc to be
1748 1748 text.{rev}wrapped (no-eol)
1749 1749 $ hg log -l1 --template '{fill(desc, "20", "{node|short}:", "text.{rev}:")}'
1750 1750 bcc7ff960b8e:desc to
1751 1751 text.1:be wrapped
1752 1752 text.1:desc to be
1753 1753 text.1:wrapped (no-eol)
1754 1754
1755 1755 $ hg log -l 1 --template '{sub(r"[0-9]", "-", author)}'
1756 1756 {node|short} (no-eol)
1757 1757 $ hg log -l 1 --template '{sub(r"[0-9]", "-", "{node|short}")}'
1758 1758 bcc-ff---b-e (no-eol)
1759 1759
1760 1760 $ cat >> .hg/hgrc <<EOF
1761 1761 > [extensions]
1762 1762 > color=
1763 1763 > [color]
1764 1764 > mode=ansi
1765 1765 > text.{rev} = red
1766 1766 > text.1 = green
1767 1767 > EOF
1768 1768 $ hg log --color=always -l 1 --template '{label(branch, "text\n")}'
1769 1769 \x1b[0;31mtext\x1b[0m (esc)
1770 1770 $ hg log --color=always -l 1 --template '{label("text.{rev}", "text\n")}'
1771 1771 \x1b[0;32mtext\x1b[0m (esc)
1772 1772
1773 1773 Test branches inside if statement:
1774 1774
1775 1775 $ hg log -r 0 --template '{if(branches, "yes", "no")}\n'
1776 1776 no
1777 1777
1778 1778 Test shortest(node) function:
1779 1779
1780 1780 $ echo b > b
1781 1781 $ hg ci -qAm b
1782 1782 $ hg log --template '{shortest(node)}\n'
1783 1783 e777
1784 1784 bcc7
1785 1785 f776
1786 1786 $ hg log --template '{shortest(node, 10)}\n'
1787 1787 e777603221
1788 1788 bcc7ff960b
1789 1789 f7769ec2ab
1790 1790
1791 1791 Test pad function
1792 1792
1793 1793 $ hg log --template '{pad(rev, 20)} {author|user}\n'
1794 1794 2 test
1795 1795 1 {node|short}
1796 1796 0 test
1797 1797
1798 1798 $ hg log --template '{pad(rev, 20, " ", True)} {author|user}\n'
1799 1799 2 test
1800 1800 1 {node|short}
1801 1801 0 test
1802 1802
1803 1803 $ hg log --template '{pad(rev, 20, "-", False)} {author|user}\n'
1804 1804 2------------------- test
1805 1805 1------------------- {node|short}
1806 1806 0------------------- test
1807 1807
1808 1808 Test ifcontains function
1809 1809
1810 1810 $ hg log --template '{rev} {ifcontains("a", file_adds, "added a", "did not add a")}\n'
1811 1811 2 did not add a
1812 1812 1 did not add a
1813 1813 0 added a
1814 1814
1815 1815 Test revset function
1816 1816
1817 1817 $ hg log --template '{rev} {ifcontains(rev, revset("."), "current rev", "not current rev")}\n'
1818 1818 2 current rev
1819 1819 1 not current rev
1820 1820 0 not current rev
1821 1821
1822 1822 $ hg log --template '{rev} Parents: {revset("parents(%s)", rev)}\n'
1823 1823 2 Parents: 1
1824 1824 1 Parents: 0
1825 1825 0 Parents:
1826 1826
1827 1827 $ hg log --template 'Rev: {rev}\n{revset("::%s", rev) % "Ancestor: {revision}\n"}\n'
1828 1828 Rev: 2
1829 1829 Ancestor: 0
1830 1830 Ancestor: 1
1831 1831 Ancestor: 2
1832 1832
1833 1833 Rev: 1
1834 1834 Ancestor: 0
1835 1835 Ancestor: 1
1836 1836
1837 1837 Rev: 0
1838 1838 Ancestor: 0
1839 1839
1840 1840 Test current bookmark templating
1841 1841
1842 1842 $ hg book foo
1843 1843 $ hg book bar
1844 1844 $ hg log --template "{rev} {bookmarks % '{bookmark}{ifeq(bookmark, current, \"*\")} '}\n"
1845 1845 2 bar* foo
1846 1846 1
1847 1847 0
1848 1848
1849 1849 Test stringify on sub expressions
1850 1850
1851 1851 $ cd ..
1852 1852 $ hg log -R a -r 8 --template '{join(files, if("1", if("1", ", ")))}\n'
1853 1853 fourth, second, third
1854 1854 $ hg log -R a -r 8 --template '{strip(if("1", if("1", "-abc-")), if("1", if("1", "-")))}\n'
1855 1855 abc
1856 1856
@@ -1,468 +1,468 b''
1 1 $ cat >> $HGRCPATH <<EOF
2 2 > [extensions]
3 3 > convert=
4 4 > [convert]
5 5 > hg.saverev=False
6 6 > EOF
7 7 $ hg help convert
8 8 hg convert [OPTION]... SOURCE [DEST [REVMAP]]
9 9
10 10 convert a foreign SCM repository to a Mercurial one.
11 11
12 12 Accepted source formats [identifiers]:
13 13
14 14 - Mercurial [hg]
15 15 - CVS [cvs]
16 16 - Darcs [darcs]
17 17 - git [git]
18 18 - Subversion [svn]
19 19 - Monotone [mtn]
20 20 - GNU Arch [gnuarch]
21 21 - Bazaar [bzr]
22 22 - Perforce [p4]
23 23
24 24 Accepted destination formats [identifiers]:
25 25
26 26 - Mercurial [hg]
27 27 - Subversion [svn] (history on branches is not preserved)
28 28
29 29 If no revision is given, all revisions will be converted. Otherwise,
30 30 convert will only import up to the named revision (given in a format
31 31 understood by the source).
32 32
33 33 If no destination directory name is specified, it defaults to the basename
34 34 of the source with "-hg" appended. If the destination repository doesn't
35 35 exist, it will be created.
36 36
37 37 By default, all sources except Mercurial will use --branchsort. Mercurial
38 38 uses --sourcesort to preserve original revision numbers order. Sort modes
39 39 have the following effects:
40 40
41 41 --branchsort convert from parent to child revision when possible, which
42 42 means branches are usually converted one after the other.
43 43 It generates more compact repositories.
44 44 --datesort sort revisions by date. Converted repositories have good-
45 45 looking changelogs but are often an order of magnitude
46 46 larger than the same ones generated by --branchsort.
47 47 --sourcesort try to preserve source revisions order, only supported by
48 48 Mercurial sources.
49 49 --closesort try to move closed revisions as close as possible to parent
50 50 branches, only supported by Mercurial sources.
51 51
52 52 If "REVMAP" isn't given, it will be put in a default location
53 53 ("<dest>/.hg/shamap" by default). The "REVMAP" is a simple text file that
54 54 maps each source commit ID to the destination ID for that revision, like
55 55 so:
56 56
57 57 <source ID> <destination ID>
58 58
59 59 If the file doesn't exist, it's automatically created. It's updated on
60 60 each commit copied, so "hg convert" can be interrupted and can be run
61 61 repeatedly to copy new commits.
62 62
63 63 The authormap is a simple text file that maps each source commit author to
64 64 a destination commit author. It is handy for source SCMs that use unix
65 65 logins to identify authors (e.g.: CVS). One line per author mapping and
66 66 the line format is:
67 67
68 68 source author = destination author
69 69
70 70 Empty lines and lines starting with a "#" are ignored.
71 71
72 72 The filemap is a file that allows filtering and remapping of files and
73 73 directories. Each line can contain one of the following directives:
74 74
75 75 include path/to/file-or-dir
76 76
77 77 exclude path/to/file-or-dir
78 78
79 79 rename path/to/source path/to/destination
80 80
81 81 Comment lines start with "#". A specified path matches if it equals the
82 82 full relative name of a file or one of its parent directories. The
83 83 "include" or "exclude" directive with the longest matching path applies,
84 84 so line order does not matter.
85 85
86 86 The "include" directive causes a file, or all files under a directory, to
87 87 be included in the destination repository. The default if there are no
88 88 "include" statements is to include everything. If there are any "include"
89 89 statements, nothing else is included. The "exclude" directive causes files
90 90 or directories to be omitted. The "rename" directive renames a file or
91 91 directory if it is converted. To rename from a subdirectory into the root
92 92 of the repository, use "." as the path to rename to.
93 93
94 94 The splicemap is a file that allows insertion of synthetic history,
95 95 letting you specify the parents of a revision. This is useful if you want
96 96 to e.g. give a Subversion merge two parents, or graft two disconnected
97 97 series of history together. Each entry contains a key, followed by a
98 98 space, followed by one or two comma-separated values:
99 99
100 100 key parent1, parent2
101 101
102 102 The key is the revision ID in the source revision control system whose
103 103 parents should be modified (same format as a key in .hg/shamap). The
104 104 values are the revision IDs (in either the source or destination revision
105 105 control system) that should be used as the new parents for that node. For
106 106 example, if you have merged "release-1.0" into "trunk", then you should
107 107 specify the revision on "trunk" as the first parent and the one on the
108 108 "release-1.0" branch as the second.
109 109
110 110 The branchmap is a file that allows you to rename a branch when it is
111 111 being brought in from whatever external repository. When used in
112 112 conjunction with a splicemap, it allows for a powerful combination to help
113 113 fix even the most badly mismanaged repositories and turn them into nicely
114 114 structured Mercurial repositories. The branchmap contains lines of the
115 115 form:
116 116
117 117 original_branch_name new_branch_name
118 118
119 119 where "original_branch_name" is the name of the branch in the source
120 120 repository, and "new_branch_name" is the name of the branch in the
121 121 destination repository. No whitespace is allowed in the branch names. This
122 122 can be used to (for instance) move code in one repository from "default"
123 123 to a named branch.
124 124
125 125 The closemap is a file that allows closing of a branch. This is useful if
126 126 you want to close a branch. Each entry contains a revision or hash
127 127 separated by white space.
128 128
129 The tagpmap is a file that exactly analogous to the branchmap. This will
129 The tagmap is a file that is exactly analogous to the branchmap. This will
130 130 rename tags on the fly and prevent the 'update tags' commit usually found
131 131 at the end of a convert process.
132 132
133 133 Mercurial Source
134 134 ################
135 135
136 136 The Mercurial source recognizes the following configuration options, which
137 137 you can set on the command line with "--config":
138 138
139 139 convert.hg.ignoreerrors
140 140 ignore integrity errors when reading. Use it to fix
141 141 Mercurial repositories with missing revlogs, by converting
142 142 from and to Mercurial. Default is False.
143 143 convert.hg.saverev
144 144 store original revision ID in changeset (forces target IDs
145 145 to change). It takes a boolean argument and defaults to
146 146 False.
147 147 convert.hg.revs
148 148 revset specifying the source revisions to convert.
149 149
150 150 CVS Source
151 151 ##########
152 152
153 153 CVS source will use a sandbox (i.e. a checked-out copy) from CVS to
154 154 indicate the starting point of what will be converted. Direct access to
155 155 the repository files is not needed, unless of course the repository is
156 156 ":local:". The conversion uses the top level directory in the sandbox to
157 157 find the CVS repository, and then uses CVS rlog commands to find files to
158 158 convert. This means that unless a filemap is given, all files under the
159 159 starting directory will be converted, and that any directory
160 160 reorganization in the CVS sandbox is ignored.
161 161
162 162 The following options can be used with "--config":
163 163
164 164 convert.cvsps.cache
165 165 Set to False to disable remote log caching, for testing and
166 166 debugging purposes. Default is True.
167 167 convert.cvsps.fuzz
168 168 Specify the maximum time (in seconds) that is allowed
169 169 between commits with identical user and log message in a
170 170 single changeset. When very large files were checked in as
171 171 part of a changeset then the default may not be long enough.
172 172 The default is 60.
173 173 convert.cvsps.mergeto
174 174 Specify a regular expression to which commit log messages
175 175 are matched. If a match occurs, then the conversion process
176 176 will insert a dummy revision merging the branch on which
177 177 this log message occurs to the branch indicated in the
178 178 regex. Default is "{{mergetobranch ([-\w]+)}}"
179 179 convert.cvsps.mergefrom
180 180 Specify a regular expression to which commit log messages
181 181 are matched. If a match occurs, then the conversion process
182 182 will add the most recent revision on the branch indicated in
183 183 the regex as the second parent of the changeset. Default is
184 184 "{{mergefrombranch ([-\w]+)}}"
185 185 convert.localtimezone
186 186 use local time (as determined by the TZ environment
187 187 variable) for changeset date/times. The default is False
188 188 (use UTC).
189 189 hooks.cvslog Specify a Python function to be called at the end of
190 190 gathering the CVS log. The function is passed a list with
191 191 the log entries, and can modify the entries in-place, or add
192 192 or delete them.
193 193 hooks.cvschangesets
194 194 Specify a Python function to be called after the changesets
195 195 are calculated from the CVS log. The function is passed a
196 196 list with the changeset entries, and can modify the
197 197 changesets in-place, or add or delete them.
198 198
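The fuzz-based grouping described for "convert.cvsps.fuzz" above can be sketched as follows. This is a minimal illustration, not Mercurial's actual cvsps code; the entry layout and names are hypothetical:

```python
# Sketch of fuzz grouping: CVS log entries with the same author and
# message are merged into one changeset as long as consecutive entries
# are no more than `fuzz` seconds apart.
def group_entries(entries, fuzz=60):
    """entries: iterable of (timestamp, author, message) tuples."""
    changesets = []
    for entry in sorted(entries):
        ts, author, msg = entry
        last = changesets[-1][-1] if changesets else None
        if last and last[1] == author and last[2] == msg and ts - last[0] <= fuzz:
            changesets[-1].append(entry)   # close enough: same changeset
        else:
            changesets.append([entry])     # start a new changeset
    return changesets

log = [
    (100, "alice", "fix bug"),
    (130, "alice", "fix bug"),  # within 60s of previous: same changeset
    (400, "alice", "fix bug"),  # too far apart: new changeset
    (410, "bob", "fix bug"),    # different author: new changeset
]
print([len(c) for c in group_entries(log)])  # [2, 1, 1]
```

Raising the fuzz value would merge check-ins of very large files that took longer than 60 seconds to commit.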
199 199 An additional "debugcvsps" Mercurial command allows the builtin changeset
200 200 merging code to be run without doing a conversion. Its parameters and
201 201 output are similar to those of cvsps 2.1. Please see the command help for
202 202 more details.
203 203
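The default "mergeto"/"mergefrom" values above are ordinary Python regular expressions whose single capture group is the branch name used for the synthesized merge. A minimal sketch of a match; the commit message here is made up:

```python
import re

# Default patterns quoted in the help text above.
MERGETO = r"{{mergetobranch ([-\w]+)}}"
MERGEFROM = r"{{mergefrombranch ([-\w]+)}}"

msg = "fix build\n{{mergetobranch stable}}"
match = re.search(MERGETO, msg)
branch = match.group(1) if match else None
print(branch)  # stable
```

Note that the character class is `[-\w]`, so branch names containing dots or other punctuation would not match the default pattern without overriding it.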
204 204 Subversion Source
205 205 #################
206 206
207 207 Subversion source detects classical trunk/branches/tags layouts. By
208 208 default, the supplied "svn://repo/path/" source URL is converted as a
209 209 single branch. If "svn://repo/path/trunk" exists it replaces the default
210 210 branch. If "svn://repo/path/branches" exists, its subdirectories are
211 211 listed as possible branches. If "svn://repo/path/tags" exists, it is
212 212 looked for tags referencing converted branches. Default "trunk",
213 213 "branches" and "tags" values can be overridden with following options. Set
214 214 them to paths relative to the source URL, or leave them blank to disable
215 215 auto detection.
216 216
217 217 The following options can be set with "--config":
218 218
219 219 convert.svn.branches
220 220 specify the directory containing branches. The default is
221 221 "branches".
222 222 convert.svn.tags
223 223 specify the directory containing tags. The default is
224 224 "tags".
225 225 convert.svn.trunk
226 226 specify the name of the trunk branch. The default is
227 227 "trunk".
228 228 convert.localtimezone
229 229 use local time (as determined by the TZ environment
230 230 variable) for changeset date/times. The default is False
231 231 (use UTC).
232 232
233 233 Source history can be retrieved starting at a specific revision, instead
234 234 of being converted in its entirety. Only single-branch conversions are
235 235 supported.
236 236
237 237 convert.svn.startrev
238 238 specify start Subversion revision number. The default is 0.
239 239
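As a sketch, the layout overrides above could be collected in a configuration file instead of being passed with "--config" each time; the directory names and revision here are hypothetical:

```ini
[convert]
; non-standard Subversion layout
svn.trunk = stable
svn.branches = releases
svn.tags = release-tags
; start converting at revision 1000 instead of 0
svn.startrev = 1000
```

The same settings can equally be supplied on the command line, e.g. "--config convert.svn.trunk=stable".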
240 240 Perforce Source
241 241 ###############
242 242
243 243 The Perforce (P4) importer can be given a p4 depot path or a client
244 244 specification as source. It will convert all files in the source to a flat
245 245 Mercurial repository, ignoring labels, branches and integrations. Note
246 246 that when a depot path is given, you should usually also specify a target
247 247 directory, because otherwise the target may be named "...-hg".
248 248
249 249 It is possible to limit the amount of source history to be converted by
250 250 specifying an initial Perforce revision:
251 251
252 252 convert.p4.startrev
253 253 specify initial Perforce revision (a Perforce changelist
254 254 number).
255 255
256 256 Mercurial Destination
257 257 #####################
258 258
259 259 The following options are supported:
260 260
261 261 convert.hg.clonebranches
262 262 dispatch source branches in separate clones. The default is
263 263 False.
264 264 convert.hg.tagsbranch
265 265 branch name for tag revisions, defaults to "default".
266 266 convert.hg.usebranchnames
267 267 preserve branch names. The default is True.
268 268
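A sketch of a configuration fragment exercising these destination options with non-default values; the tag branch name is made up:

```ini
[convert]
; one clone per source branch instead of a single repository
hg.clonebranches = True
; record tag revisions on a dedicated branch
hg.tagsbranch = conversion-tags
; keep the original branch names
hg.usebranchnames = True
```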
269 269 options:
270 270
271 271 -s --source-type TYPE source repository type
272 272 -d --dest-type TYPE destination repository type
273 273 -r --rev REV import up to source revision REV
274 274 -A --authormap FILE remap usernames using this file
275 275 --filemap FILE remap file names using contents of file
276 276 --splicemap FILE splice synthesized history into place
277 277 --branchmap FILE change branch names while converting
278 278 --closemap FILE closes given revs
279 279 --tagmap FILE change tag names while converting
280 280 --branchsort try to sort changesets by branches
281 281 --datesort try to sort changesets by date
282 282 --sourcesort preserve source changesets order
283 283 --closesort try to reorder closed revisions
284 284
285 285 use "hg -v help convert" to show the global options
286 286 $ hg init a
287 287 $ cd a
288 288 $ echo a > a
289 289 $ hg ci -d'0 0' -Ama
290 290 adding a
291 291 $ hg cp a b
292 292 $ hg ci -d'1 0' -mb
293 293 $ hg rm a
294 294 $ hg ci -d'2 0' -mc
295 295 $ hg mv b a
296 296 $ hg ci -d'3 0' -md
297 297 $ echo a >> a
298 298 $ hg ci -d'4 0' -me
299 299 $ cd ..
300 300 $ hg convert a 2>&1 | grep -v 'subversion python bindings could not be loaded'
301 301 assuming destination a-hg
302 302 initializing destination a-hg repository
303 303 scanning source...
304 304 sorting...
305 305 converting...
306 306 4 a
307 307 3 b
308 308 2 c
309 309 1 d
310 310 0 e
311 311 $ hg --cwd a-hg pull ../a
312 312 pulling from ../a
313 313 searching for changes
314 314 no changes found
315 315
316 316 conversion to existing file should fail
317 317
318 318 $ touch bogusfile
319 319 $ hg convert a bogusfile
320 320 initializing destination bogusfile repository
321 321 abort: cannot create new bundle repository
322 322 [255]
323 323
324 324 #if unix-permissions no-root
325 325
326 326 conversion to dir without permissions should fail
327 327
328 328 $ mkdir bogusdir
329 329 $ chmod 000 bogusdir
330 330
331 331 $ hg convert a bogusdir
332 332 abort: Permission denied: 'bogusdir'
333 333 [255]
334 334
335 335 user permissions should succeed
336 336
337 337 $ chmod 700 bogusdir
338 338 $ hg convert a bogusdir
339 339 initializing destination bogusdir repository
340 340 scanning source...
341 341 sorting...
342 342 converting...
343 343 4 a
344 344 3 b
345 345 2 c
346 346 1 d
347 347 0 e
348 348
349 349 #endif
350 350
351 351 test pre and post conversion actions
352 352
353 353 $ echo 'include b' > filemap
354 354 $ hg convert --debug --filemap filemap a partialb | \
355 355 > grep 'run hg'
356 356 run hg source pre-conversion action
357 357 run hg sink pre-conversion action
358 358 run hg sink post-conversion action
359 359 run hg source post-conversion action
360 360
361 361 converting empty dir should fail nicely
362 362
363 363 $ mkdir emptydir
364 364
365 365 override $PATH to ensure p4 not visible; use $PYTHON in case we're
366 366 running from a devel copy, not a temp installation
367 367
368 368 $ PATH="$BINDIR" $PYTHON "$BINDIR"/hg convert emptydir
369 369 assuming destination emptydir-hg
370 370 initializing destination emptydir-hg repository
371 371 emptydir does not look like a CVS checkout
372 372 emptydir does not look like a Git repository
373 373 emptydir does not look like a Subversion repository
374 374 emptydir is not a local Mercurial repository
375 375 emptydir does not look like a darcs repository
376 376 emptydir does not look like a monotone repository
377 377 emptydir does not look like a GNU Arch repository
378 378 emptydir does not look like a Bazaar repository
379 379 cannot find required "p4" tool
380 380 abort: emptydir: missing or unsupported repository
381 381 [255]
382 382
383 383 convert with imaginary source type
384 384
385 385 $ hg convert --source-type foo a a-foo
386 386 initializing destination a-foo repository
387 387 abort: foo: invalid source repository type
388 388 [255]
389 389
390 390 convert with imaginary sink type
391 391
392 392 $ hg convert --dest-type foo a a-foo
393 393 abort: foo: invalid destination repository type
394 394 [255]
395 395
396 396 testing: convert must not produce duplicate entries in fncache
397 397
398 398 $ hg convert a b
399 399 initializing destination b repository
400 400 scanning source...
401 401 sorting...
402 402 converting...
403 403 4 a
404 404 3 b
405 405 2 c
406 406 1 d
407 407 0 e
408 408
409 409 contents of fncache file:
410 410
411 411 $ cat b/.hg/store/fncache | sort
412 412 data/a.i
413 413 data/b.i
414 414
415 415 test bogus URL
416 416
417 417 $ hg convert -q bzr+ssh://foobar@selenic.com/baz baz
418 418 abort: bzr+ssh://foobar@selenic.com/baz: missing or unsupported repository
419 419 [255]
420 420
421 421 test revset converted() lookup
422 422
423 423 $ hg --config convert.hg.saverev=True convert a c
424 424 initializing destination c repository
425 425 scanning source...
426 426 sorting...
427 427 converting...
428 428 4 a
429 429 3 b
430 430 2 c
431 431 1 d
432 432 0 e
433 433 $ echo f > c/f
434 434 $ hg -R c ci -d'0 0' -Amf
435 435 adding f
436 436 created new head
437 437 $ hg -R c log -r "converted(09d945a62ce6)"
438 438 changeset: 1:98c3dd46a874
439 439 user: test
440 440 date: Thu Jan 01 00:00:01 1970 +0000
441 441 summary: b
442 442
443 443 $ hg -R c log -r "converted()"
444 444 changeset: 0:31ed57b2037c
445 445 user: test
446 446 date: Thu Jan 01 00:00:00 1970 +0000
447 447 summary: a
448 448
449 449 changeset: 1:98c3dd46a874
450 450 user: test
451 451 date: Thu Jan 01 00:00:01 1970 +0000
452 452 summary: b
453 453
454 454 changeset: 2:3b9ca06ef716
455 455 user: test
456 456 date: Thu Jan 01 00:00:02 1970 +0000
457 457 summary: c
458 458
459 459 changeset: 3:4e0debd37cf2
460 460 user: test
461 461 date: Thu Jan 01 00:00:03 1970 +0000
462 462 summary: d
463 463
464 464 changeset: 4:9de3bc9349c5
465 465 user: test
466 466 date: Thu Jan 01 00:00:04 1970 +0000
467 467 summary: e
468 468
1 NO CONTENT: modified file
The requested commit or file is too big and content was truncated.