obsolete: add operation metadata to rebase/amend/histedit obsmarkers... (Durham Goode, r32327:3546a771 default)
@@ -1,1675 +1,1675 @@
1 1 # histedit.py - interactive history editing for mercurial
2 2 #
3 3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """interactive history editing
8 8
9 9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 10 is as follows, assuming the following history::
11 11
12 12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 13 | Add delta
14 14 |
15 15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 16 | Add gamma
17 17 |
18 18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 19 | Add beta
20 20 |
21 21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 22 Add alpha
23 23
24 24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 25 file open in your editor::
26 26
27 27 pick c561b4e977df Add beta
28 28 pick 030b686bedc4 Add gamma
29 29 pick 7c2fd3b9020c Add delta
30 30
31 31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 32 #
33 33 # Commits are listed from least to most recent
34 34 #
35 35 # Commands:
36 36 # p, pick = use commit
37 37 # e, edit = use commit, but stop for amending
38 38 # f, fold = use commit, but combine it with the one above
39 39 # r, roll = like fold, but discard this commit's description and date
40 40 # d, drop = remove commit from history
41 41 # m, mess = edit commit message without changing commit content
42 42 #
43 43
44 44 In this file, lines beginning with ``#`` are ignored. You must specify a rule
45 45 for each revision in your history. For example, if you had meant to add gamma
46 46 before beta, and then wanted to add delta in the same revision as beta, you
47 47 would reorganize the file to look like this::
48 48
49 49 pick 030b686bedc4 Add gamma
50 50 pick c561b4e977df Add beta
51 51 fold 7c2fd3b9020c Add delta
52 52
53 53 # Edit history between c561b4e977df and 7c2fd3b9020c
54 54 #
55 55 # Commits are listed from least to most recent
56 56 #
57 57 # Commands:
58 58 # p, pick = use commit
59 59 # e, edit = use commit, but stop for amending
60 60 # f, fold = use commit, but combine it with the one above
61 61 # r, roll = like fold, but discard this commit's description and date
62 62 # d, drop = remove commit from history
63 63 # m, mess = edit commit message without changing commit content
64 64 #
65 65
66 66 At which point you close the editor and ``histedit`` starts working. When you
67 67 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
68 68 those revisions together, offering you a chance to clean up the commit message::
69 69
70 70 Add beta
71 71 ***
72 72 Add delta
73 73
74 74 Edit the commit message to your liking, then close the editor. The date used
75 75 for the commit will be the later of the two commits' dates. For this example,
76 76 let's assume that the commit message was changed to ``Add beta and delta.``
77 77 After histedit has run and had a chance to remove any old or temporary
78 78 revisions it needed, the history looks like this::
79 79
80 80 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
81 81 | Add beta and delta.
82 82 |
83 83 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
84 84 | Add gamma
85 85 |
86 86 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
87 87 Add alpha
88 88
89 89 Note that ``histedit`` does *not* remove any revisions (even its own temporary
90 90 ones) until after it has completed all the editing operations, so it will
91 91 probably perform several strip operations when it's done. For the above example,
92 92 it had to run strip twice. Strip can be slow depending on a variety of factors,
93 93 so you might need to be a little patient. You can choose to keep the original
94 94 revisions by passing the ``--keep`` flag.
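For example, to run the same edit while keeping the original revisions
around (reusing the changeset from the history above), you could start
with::

  hg histedit --keep c561b4e977df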
95 95
96 96 The ``edit`` operation will drop you back to a command prompt,
97 97 allowing you to edit files freely, or even use ``hg record`` to commit
98 98 some changes as a separate commit. When you're done, any remaining
99 99 uncommitted changes will be committed as well. Then run ``hg
100 100 histedit --continue`` to finish this step. If there are uncommitted
101 101 changes, you'll be prompted for a new commit message, but the default
102 102 commit message will be the original message for the ``edit`` ed
103 103 revision, and the date of the original commit will be preserved.
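
A typical ``edit`` round trip could therefore look like this (the changeset
hash is the one from the example history above)::

  hg histedit c561b4e977df    # then mark the revision as 'edit' in the plan
  # ... modify files, optionally 'hg record' part of the work ...
  hg histedit --continue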
104 104
105 105 The ``message`` operation will give you a chance to revise a commit
106 106 message without changing the contents. It's a shortcut for doing
107 107 ``edit`` immediately followed by `hg histedit --continue``.
108 108
109 109 If ``histedit`` encounters a conflict when moving a revision (while
110 110 handling ``pick`` or ``fold``), it'll stop in a similar manner to
111 111 ``edit`` with the difference that it won't prompt you for a commit
112 112 message when done. If you decide at this point that you don't like how
113 113 much work it will be to rearrange history, or that you made a mistake,
114 114 you can use ``hg histedit --abort`` to abandon the new changes you
115 115 have made and return to the state before you attempted to edit your
116 116 history.
117 117
118 118 If we clone the histedit-ed example repository above and add four more
119 119 changes, such that we have the following history::
120 120
121 121 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
122 122 | Add theta
123 123 |
124 124 o 5 140988835471 2009-04-27 18:04 -0500 stefan
125 125 | Add eta
126 126 |
127 127 o 4 122930637314 2009-04-27 18:04 -0500 stefan
128 128 | Add zeta
129 129 |
130 130 o 3 836302820282 2009-04-27 18:04 -0500 stefan
131 131 | Add epsilon
132 132 |
133 133 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
134 134 | Add beta and delta.
135 135 |
136 136 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
137 137 | Add gamma
138 138 |
139 139 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
140 140 Add alpha
141 141
142 142 If you run ``hg histedit --outgoing`` on the clone then it is the same
143 143 as running ``hg histedit 836302820282``. If you plan to push to a
144 144 repository that Mercurial does not detect to be related to the source
145 145 repo, you can add a ``--force`` option.
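For instance, to edit everything missing from an unrelated destination (the
path below is only an illustration)::

  hg histedit --outgoing --force ../unrelated-repo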
146 146
147 147 Config
148 148 ------
149 149
150 150 Histedit rule lines are truncated to 80 characters by default. You
151 151 can customize this behavior by setting a different length in your
152 152 configuration file::
153 153
154 154 [histedit]
155 155 linelen = 120 # truncate rule lines at 120 characters
156 156
157 157 ``hg histedit`` attempts to automatically choose an appropriate base
158 158 revision to use. To change which base revision is used, define a
159 159 revset in your configuration file::
160 160
161 161 [histedit]
162 162 defaultrev = only(.) & draft()
163 163
164 164 By default each edited revision needs to be present in histedit commands.
165 165 To remove a revision you need to use the ``drop`` operation. You can configure
166 166 the drop to be implicit for missing commits by adding::
167 167
168 168 [histedit]
169 169 dropmissing = True
170 170
171 171 By default, histedit will close the transaction after each action. For
172 172 performance purposes, you can configure histedit to use a single transaction
173 173 across the entire histedit. WARNING: This setting introduces a significant risk
174 174 of losing the work you've done in a histedit if the histedit aborts
175 175 unexpectedly::
176 176
177 177 [histedit]
178 178 singletransaction = True
179 179
180 180 """
181 181
182 182 from __future__ import absolute_import
183 183
184 184 import errno
185 185 import os
186 186
187 187 from mercurial.i18n import _
188 188 from mercurial import (
189 189 bundle2,
190 190 cmdutil,
191 191 context,
192 192 copies,
193 193 destutil,
194 194 discovery,
195 195 error,
196 196 exchange,
197 197 extensions,
198 198 hg,
199 199 lock,
200 200 merge as mergemod,
201 201 mergeutil,
202 202 node,
203 203 obsolete,
204 204 repair,
205 205 scmutil,
206 206 util,
207 207 )
208 208
209 209 pickle = util.pickle
210 210 release = lock.release
211 211 cmdtable = {}
212 212 command = cmdutil.command(cmdtable)
213 213
214 214 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
215 215 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
216 216 # be specifying the version(s) of Mercurial they are tested with, or
217 217 # leave the attribute unspecified.
218 218 testedwith = 'ships-with-hg-core'
219 219
220 220 actiontable = {}
221 221 primaryactions = set()
222 222 secondaryactions = set()
223 223 tertiaryactions = set()
224 224 internalactions = set()
225 225
226 226 def geteditcomment(ui, first, last):
227 227 """ construct the editor comment
228 228 The comment includes::
229 229 - an intro
230 230 - sorted primary commands
231 231 - sorted short commands
232 232 - sorted long commands
233 233 - additional hints
234 234
235 235 Commands are only included once.
236 236 """
237 237 intro = _("""Edit history between %s and %s
238 238
239 239 Commits are listed from least to most recent
240 240
241 241 You can reorder changesets by reordering the lines
242 242
243 243 Commands:
244 244 """)
245 245 actions = []
246 246 def addverb(v):
247 247 a = actiontable[v]
248 248 lines = a.message.split("\n")
249 249 if len(a.verbs):
250 250 v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
251 251 actions.append(" %s = %s" % (v, lines[0]))
252 252 actions.extend([' %s' % l for l in lines[1:]])
253 253
254 254 for v in (
255 255 sorted(primaryactions) +
256 256 sorted(secondaryactions) +
257 257 sorted(tertiaryactions)
258 258 ):
259 259 addverb(v)
260 260 actions.append('')
261 261
262 262 hints = []
263 263 if ui.configbool('histedit', 'dropmissing'):
264 264 hints.append("Deleting a changeset from the list "
265 265 "will DISCARD it from the edited history!")
266 266
267 267 lines = (intro % (first, last)).split('\n') + actions + hints
268 268
269 269 return ''.join(['# %s\n' % l if l else '#\n' for l in lines])
270 270
271 271 class histeditstate(object):
272 272 def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
273 273 topmost=None, replacements=None, lock=None, wlock=None):
274 274 self.repo = repo
275 275 self.actions = actions
276 276 self.keep = keep
277 277 self.topmost = topmost
278 278 self.parentctxnode = parentctxnode
279 279 self.lock = lock
280 280 self.wlock = wlock
281 281 self.backupfile = None
282 282 self.tr = None
283 283 if replacements is None:
284 284 self.replacements = []
285 285 else:
286 286 self.replacements = replacements
287 287
288 288 def read(self):
289 289 """Load histedit state from disk and set fields appropriately."""
290 290 try:
291 291 state = self.repo.vfs.read('histedit-state')
292 292 except IOError as err:
293 293 if err.errno != errno.ENOENT:
294 294 raise
295 295 cmdutil.wrongtooltocontinue(self.repo, _('histedit'))
296 296
297 297 if state.startswith('v1\n'):
298 298 data = self._load()
299 299 parentctxnode, rules, keep, topmost, replacements, backupfile = data
300 300 else:
301 301 data = pickle.loads(state)
302 302 parentctxnode, rules, keep, topmost, replacements = data
303 303 backupfile = None
304 304
305 305 self.parentctxnode = parentctxnode
306 306 rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
307 307 actions = parserules(rules, self)
308 308 self.actions = actions
309 309 self.keep = keep
310 310 self.topmost = topmost
311 311 self.replacements = replacements
312 312 self.backupfile = backupfile
313 313
314 314 def write(self, tr=None):
315 315 if tr:
316 316 tr.addfilegenerator('histedit-state', ('histedit-state',),
317 317 self._write, location='plain')
318 318 else:
319 319 with self.repo.vfs("histedit-state", "w") as f:
320 320 self._write(f)
321 321
322 322 def _write(self, fp):
323 323 fp.write('v1\n')
324 324 fp.write('%s\n' % node.hex(self.parentctxnode))
325 325 fp.write('%s\n' % node.hex(self.topmost))
326 326 fp.write('%s\n' % self.keep)
327 327 fp.write('%d\n' % len(self.actions))
328 328 for action in self.actions:
329 329 fp.write('%s\n' % action.tostate())
330 330 fp.write('%d\n' % len(self.replacements))
331 331 for replacement in self.replacements:
332 332 fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
333 333 for r in replacement[1])))
334 334 backupfile = self.backupfile
335 335 if not backupfile:
336 336 backupfile = ''
337 337 fp.write('%s\n' % backupfile)
338 338
339 339 def _load(self):
340 340 fp = self.repo.vfs('histedit-state', 'r')
341 341 lines = [l[:-1] for l in fp.readlines()]
342 342
343 343 index = 0
344 344 lines[index] # version number
345 345 index += 1
346 346
347 347 parentctxnode = node.bin(lines[index])
348 348 index += 1
349 349
350 350 topmost = node.bin(lines[index])
351 351 index += 1
352 352
353 353 keep = lines[index] == 'True'
354 354 index += 1
355 355
356 356 # Rules
357 357 rules = []
358 358 rulelen = int(lines[index])
359 359 index += 1
360 360 for i in xrange(rulelen):
361 361 ruleaction = lines[index]
362 362 index += 1
363 363 rule = lines[index]
364 364 index += 1
365 365 rules.append((ruleaction, rule))
366 366
367 367 # Replacements
368 368 replacements = []
369 369 replacementlen = int(lines[index])
370 370 index += 1
371 371 for i in xrange(replacementlen):
372 372 replacement = lines[index]
373 373 original = node.bin(replacement[:40])
374 374 succ = [node.bin(replacement[i:i + 40]) for i in
375 375 range(40, len(replacement), 40)]
376 376 replacements.append((original, succ))
377 377 index += 1
378 378
379 379 backupfile = lines[index]
380 380 index += 1
381 381
382 382 fp.close()
383 383
384 384 return parentctxnode, rules, keep, topmost, replacements, backupfile
385 385
386 386 def clear(self):
387 387 if self.inprogress():
388 388 self.repo.vfs.unlink('histedit-state')
389 389
390 390 def inprogress(self):
391 391 return self.repo.vfs.exists('histedit-state')
392 392
393 393
394 394 class histeditaction(object):
395 395 def __init__(self, state, node):
396 396 self.state = state
397 397 self.repo = state.repo
398 398 self.node = node
399 399
400 400 @classmethod
401 401 def fromrule(cls, state, rule):
402 402 """Parses the given rule, returning an instance of the histeditaction.
403 403 """
404 404 rulehash = rule.strip().split(' ', 1)[0]
405 405 try:
406 406 rev = node.bin(rulehash)
407 407 except TypeError:
408 408 raise error.ParseError("invalid changeset %s" % rulehash)
409 409 return cls(state, rev)
410 410
411 411 def verify(self, prev, expected, seen):
412 412 """ Verifies semantic correctness of the rule"""
413 413 repo = self.repo
414 414 ha = node.hex(self.node)
415 415 try:
416 416 self.node = repo[ha].node()
417 417 except error.RepoError:
418 418 raise error.ParseError(_('unknown changeset %s listed')
419 419 % ha[:12])
420 420 if self.node is not None:
421 421 self._verifynodeconstraints(prev, expected, seen)
422 422
423 423 def _verifynodeconstraints(self, prev, expected, seen):
424 424 # by default commands need a node in the edited list
425 425 if self.node not in expected:
426 426 raise error.ParseError(_('%s "%s" changeset was not a candidate')
427 427 % (self.verb, node.short(self.node)),
428 428 hint=_('only use listed changesets'))
429 429 # and only one command per node
430 430 if self.node in seen:
431 431 raise error.ParseError(_('duplicated command for changeset %s') %
432 432 node.short(self.node))
433 433
434 434 def torule(self):
435 435 """build a histedit rule line for an action
436 436
437 437 by default lines are in the form:
438 438 <hash> <rev> <summary>
439 439 """
440 440 ctx = self.repo[self.node]
441 441 summary = _getsummary(ctx)
442 442 line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
443 443 # trim to 75 columns by default so it's not stupidly wide in my editor
444 444 # (the 5 more are left for verb)
445 445 maxlen = self.repo.ui.configint('histedit', 'linelen', default=80)
446 446 maxlen = max(maxlen, 22) # avoid truncating hash
447 447 return util.ellipsis(line, maxlen)
448 448
449 449 def tostate(self):
450 450 """Print an action in format used by histedit state files
451 451 (the first line is a verb, the remainder is the second)
452 452 """
453 453 return "%s\n%s" % (self.verb, node.hex(self.node))
454 454
455 455 def run(self):
456 456 """Runs the action. The default behavior is simply apply the action's
457 457 rulectx onto the current parentctx."""
458 458 self.applychange()
459 459 self.continuedirty()
460 460 return self.continueclean()
461 461
462 462 def applychange(self):
463 463 """Applies the changes from this action's rulectx onto the current
464 464 parentctx, but does not commit them."""
465 465 repo = self.repo
466 466 rulectx = repo[self.node]
467 467 repo.ui.pushbuffer(error=True, labeled=True)
468 468 hg.update(repo, self.state.parentctxnode, quietempty=True)
469 469 stats = applychanges(repo.ui, repo, rulectx, {})
470 470 if stats and stats[3] > 0:
471 471 buf = repo.ui.popbuffer()
472 472 repo.ui.write(*buf)
473 473 raise error.InterventionRequired(
474 474 _('Fix up the change (%s %s)') %
475 475 (self.verb, node.short(self.node)),
476 476 hint=_('hg histedit --continue to resume'))
477 477 else:
478 478 repo.ui.popbuffer()
479 479
480 480 def continuedirty(self):
481 481 """Continues the action when changes have been applied to the working
482 482 copy. The default behavior is to commit the dirty changes."""
483 483 repo = self.repo
484 484 rulectx = repo[self.node]
485 485
486 486 editor = self.commiteditor()
487 487 commit = commitfuncfor(repo, rulectx)
488 488
489 489 commit(text=rulectx.description(), user=rulectx.user(),
490 490 date=rulectx.date(), extra=rulectx.extra(), editor=editor)
491 491
492 492 def commiteditor(self):
493 493 """The editor to be used to edit the commit message."""
494 494 return False
495 495
496 496 def continueclean(self):
497 497 """Continues the action when the working copy is clean. The default
498 498 behavior is to accept the current commit as the new version of the
499 499 rulectx."""
500 500 ctx = self.repo['.']
501 501 if ctx.node() == self.state.parentctxnode:
502 502 self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
503 503 node.short(self.node))
504 504 return ctx, [(self.node, tuple())]
505 505 if ctx.node() == self.node:
506 506 # Nothing changed
507 507 return ctx, []
508 508 return ctx, [(self.node, (ctx.node(),))]
509 509
510 510 def commitfuncfor(repo, src):
511 511 """Build a commit function for the replacement of <src>
512 512
513 513 This function ensures we apply the same treatment to all changesets.
514 514
515 515 - Add a 'histedit_source' entry in extra.
516 516
517 517 Note that fold has its own separate logic because its handling is a bit
518 518 different and not easily factored out of the fold method.
519 519 """
520 520 phasemin = src.phase()
521 521 def commitfunc(**kwargs):
522 522 overrides = {('phases', 'new-commit'): phasemin}
523 523 with repo.ui.configoverride(overrides, 'histedit'):
524 524 extra = kwargs.get('extra', {}).copy()
525 525 extra['histedit_source'] = src.hex()
526 526 kwargs['extra'] = extra
527 527 return repo.commit(**kwargs)
528 528 return commitfunc
529 529
530 530 def applychanges(ui, repo, ctx, opts):
531 531 """Merge changeset from ctx (only) in the current working directory"""
532 532 wcpar = repo.dirstate.parents()[0]
533 533 if ctx.p1().node() == wcpar:
534 534 # edits are "in place"; we do not need to make any merge,
535 535 # just apply changes on the parent for editing
536 536 cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
537 537 stats = None
538 538 else:
539 539 try:
540 540 # ui.forcemerge is an internal variable, do not document
541 541 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
542 542 'histedit')
543 543 stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
544 544 finally:
545 545 repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
546 546 return stats
547 547
548 548 def collapse(repo, first, last, commitopts, skipprompt=False):
549 549 """collapse the set of revisions from first to last as new one.
550 550
551 551 Expected commit options are:
552 552 - message
553 553 - date
554 554 - username
555 555 Commit message is edited in all cases.
556 556
557 557 This function works in memory."""
558 558 ctxs = list(repo.set('%d::%d', first, last))
559 559 if not ctxs:
560 560 return None
561 561 for c in ctxs:
562 562 if not c.mutable():
563 563 raise error.ParseError(
564 564 _("cannot fold into public change %s") % node.short(c.node()))
565 565 base = first.parents()[0]
566 566
567 567 # commit a new version of the old changeset, including the update
568 568 # collect all files which might be affected
569 569 files = set()
570 570 for ctx in ctxs:
571 571 files.update(ctx.files())
572 572
573 573 # Recompute copies (avoid recording a -> b -> a)
574 574 copied = copies.pathcopies(base, last)
575 575
576 576 # prune files which were reverted by the updates
577 577 files = [f for f in files if not cmdutil.samefile(f, last, base)]
578 578 # commit version of these files as defined by head
579 579 headmf = last.manifest()
580 580 def filectxfn(repo, ctx, path):
581 581 if path in headmf:
582 582 fctx = last[path]
583 583 flags = fctx.flags()
584 584 mctx = context.memfilectx(repo,
585 585 fctx.path(), fctx.data(),
586 586 islink='l' in flags,
587 587 isexec='x' in flags,
588 588 copied=copied.get(path))
589 589 return mctx
590 590 return None
591 591
592 592 if commitopts.get('message'):
593 593 message = commitopts['message']
594 594 else:
595 595 message = first.description()
596 596 user = commitopts.get('user')
597 597 date = commitopts.get('date')
598 598 extra = commitopts.get('extra')
599 599
600 600 parents = (first.p1().node(), first.p2().node())
601 601 editor = None
602 602 if not skipprompt:
603 603 editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
604 604 new = context.memctx(repo,
605 605 parents=parents,
606 606 text=message,
607 607 files=files,
608 608 filectxfn=filectxfn,
609 609 user=user,
610 610 date=date,
611 611 extra=extra,
612 612 editor=editor)
613 613 return repo.commitctx(new)
614 614
615 615 def _isdirtywc(repo):
616 616 return repo[None].dirty(missing=True)
617 617
618 618 def abortdirty():
619 619 raise error.Abort(_('working copy has pending changes'),
620 620 hint=_('amend, commit, or revert them and run histedit '
621 621 '--continue, or abort with histedit --abort'))
622 622
623 623 def action(verbs, message, priority=False, internal=False):
624 624 def wrap(cls):
625 625 assert not priority or not internal
626 626 verb = verbs[0]
627 627 if priority:
628 628 primaryactions.add(verb)
629 629 elif internal:
630 630 internalactions.add(verb)
631 631 elif len(verbs) > 1:
632 632 secondaryactions.add(verb)
633 633 else:
634 634 tertiaryactions.add(verb)
635 635
636 636 cls.verb = verb
637 637 cls.verbs = verbs
638 638 cls.message = message
639 639 for verb in verbs:
640 640 actiontable[verb] = cls
641 641 return cls
642 642 return wrap
643 643
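# Illustrative sketch only (not part of upstream histedit): a new rule verb
# would be registered with the same decorator the built-in actions below use,
# for example:
#
#     @action(['noop'], _('use commit unchanged'))
#     class noop(histeditaction):
#         pass
#
# The first entry in the verb list becomes the canonical name stored on the
# class; additional entries (such as 'p' for 'pick') act as abbreviations
# that are accepted in the rule file.
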
644 644 @action(['pick', 'p'],
645 645 _('use commit'),
646 646 priority=True)
647 647 class pick(histeditaction):
648 648 def run(self):
649 649 rulectx = self.repo[self.node]
650 650 if rulectx.parents()[0].node() == self.state.parentctxnode:
651 651 self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
652 652 return rulectx, []
653 653
654 654 return super(pick, self).run()
655 655
656 656 @action(['edit', 'e'],
657 657 _('use commit, but stop for amending'),
658 658 priority=True)
659 659 class edit(histeditaction):
660 660 def run(self):
661 661 repo = self.repo
662 662 rulectx = repo[self.node]
663 663 hg.update(repo, self.state.parentctxnode, quietempty=True)
664 664 applychanges(repo.ui, repo, rulectx, {})
665 665 raise error.InterventionRequired(
666 666 _('Editing (%s), you may commit or record as needed now.')
667 667 % node.short(self.node),
668 668 hint=_('hg histedit --continue to resume'))
669 669
670 670 def commiteditor(self):
671 671 return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')
672 672
673 673 @action(['fold', 'f'],
674 674 _('use commit, but combine it with the one above'))
675 675 class fold(histeditaction):
676 676 def verify(self, prev, expected, seen):
677 677 """ Verifies semantic correctness of the fold rule"""
678 678 super(fold, self).verify(prev, expected, seen)
679 679 repo = self.repo
680 680 if not prev:
681 681 c = repo[self.node].parents()[0]
682 682 elif not prev.verb in ('pick', 'base'):
683 683 return
684 684 else:
685 685 c = repo[prev.node]
686 686 if not c.mutable():
687 687 raise error.ParseError(
688 688 _("cannot fold into public change %s") % node.short(c.node()))
689 689
690 690
691 691 def continuedirty(self):
692 692 repo = self.repo
693 693 rulectx = repo[self.node]
694 694
695 695 commit = commitfuncfor(repo, rulectx)
696 696 commit(text='fold-temp-revision %s' % node.short(self.node),
697 697 user=rulectx.user(), date=rulectx.date(),
698 698 extra=rulectx.extra())
699 699
700 700 def continueclean(self):
701 701 repo = self.repo
702 702 ctx = repo['.']
703 703 rulectx = repo[self.node]
704 704 parentctxnode = self.state.parentctxnode
705 705 if ctx.node() == parentctxnode:
706 706 repo.ui.warn(_('%s: empty changeset\n') %
707 707 node.short(self.node))
708 708 return ctx, [(self.node, (parentctxnode,))]
709 709
710 710 parentctx = repo[parentctxnode]
711 711 newcommits = set(c.node() for c in repo.set('(%d::. - %d)', parentctx,
712 712 parentctx))
713 713 if not newcommits:
714 714 repo.ui.warn(_('%s: cannot fold - working copy is not a '
715 715 'descendant of previous commit %s\n') %
716 716 (node.short(self.node), node.short(parentctxnode)))
717 717 return ctx, [(self.node, (ctx.node(),))]
718 718
719 719 middlecommits = newcommits.copy()
720 720 middlecommits.discard(ctx.node())
721 721
722 722 return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
723 723 middlecommits)
724 724
725 725 def skipprompt(self):
726 726 """Returns true if the rule should skip the message editor.
727 727
728 728 For example, 'fold' wants to show an editor, but 'rollup'
729 729 doesn't want to.
730 730 """
731 731 return False
732 732
733 733 def mergedescs(self):
734 734 """Returns true if the rule should merge messages of multiple changes.
735 735
736 736 This exists mainly so that 'rollup' rules can be a subclass of
737 737 'fold'.
738 738 """
739 739 return True
740 740
741 741 def firstdate(self):
742 742 """Returns true if the rule should preserve the date of the first
743 743 change.
744 744
745 745 This exists mainly so that 'rollup' rules can be a subclass of
746 746 'fold'.
747 747 """
748 748 return False
749 749
750 750 def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
751 751 parent = ctx.parents()[0].node()
752 752 repo.ui.pushbuffer()
753 753 hg.update(repo, parent)
754 754 repo.ui.popbuffer()
755 755 ### prepare new commit data
756 756 commitopts = {}
757 757 commitopts['user'] = ctx.user()
758 758 # commit message
759 759 if not self.mergedescs():
760 760 newmessage = ctx.description()
761 761 else:
762 762 newmessage = '\n***\n'.join(
763 763 [ctx.description()] +
764 764 [repo[r].description() for r in internalchanges] +
765 765 [oldctx.description()]) + '\n'
766 766 commitopts['message'] = newmessage
767 767 # date
768 768 if self.firstdate():
769 769 commitopts['date'] = ctx.date()
770 770 else:
771 771 commitopts['date'] = max(ctx.date(), oldctx.date())
772 772 extra = ctx.extra().copy()
773 773 # histedit_source
774 774 # note: ctx is likely a temporary commit but that's the best we can do
775 775 # here. This is sufficient to solve issue3681 anyway.
776 776 extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
777 777 commitopts['extra'] = extra
778 778 phasemin = max(ctx.phase(), oldctx.phase())
779 779 overrides = {('phases', 'new-commit'): phasemin}
780 780 with repo.ui.configoverride(overrides, 'histedit'):
781 781 n = collapse(repo, ctx, repo[newnode], commitopts,
782 782 skipprompt=self.skipprompt())
783 783 if n is None:
784 784 return ctx, []
785 785 repo.ui.pushbuffer()
786 786 hg.update(repo, n)
787 787 repo.ui.popbuffer()
788 788 replacements = [(oldctx.node(), (newnode,)),
789 789 (ctx.node(), (n,)),
790 790 (newnode, (n,)),
791 791 ]
792 792 for ich in internalchanges:
793 793 replacements.append((ich, (n,)))
794 794 return repo[n], replacements
795 795
796 796 class base(histeditaction):
797 797
798 798 def run(self):
799 799 if self.repo['.'].node() != self.node:
800 800 mergemod.update(self.repo, self.node, False, True)
801 801 # branchmerge, force)
802 802 return self.continueclean()
803 803
804 804 def continuedirty(self):
805 805 abortdirty()
806 806
807 807 def continueclean(self):
808 808 basectx = self.repo['.']
809 809 return basectx, []
810 810
811 811 def _verifynodeconstraints(self, prev, expected, seen):
812 812 # base can only be used with a node not in the edited set
813 813 if self.node in expected:
814 814 msg = _('%s "%s" changeset was an edited list candidate')
815 815 raise error.ParseError(
816 816 msg % (self.verb, node.short(self.node)),
817 817 hint=_('base must only use unlisted changesets'))
818 818
819 819 @action(['_multifold'],
820 820 _(
821 821 """fold subclass used for when multiple folds happen in a row
822 822
823 823 We only want to fire the editor for the folded message once when
824 824 (say) four changes are folded down into a single change. This is
825 825 similar to rollup, but we should preserve both messages so that
826 826 when the last fold operation runs we can show the user all the
827 827 commit messages in their editor.
828 828 """),
829 829 internal=True)
830 830 class _multifold(fold):
831 831 def skipprompt(self):
832 832 return True
833 833
834 834 @action(["roll", "r"],
835 835 _("like fold, but discard this commit's description and date"))
836 836 class rollup(fold):
837 837 def mergedescs(self):
838 838 return False
839 839
840 840 def skipprompt(self):
841 841 return True
842 842
843 843 def firstdate(self):
844 844 return True
845 845
846 846 @action(["drop", "d"],
847 847 _('remove commit from history'))
848 848 class drop(histeditaction):
849 849 def run(self):
850 850 parentctx = self.repo[self.state.parentctxnode]
851 851 return parentctx, [(self.node, tuple())]
852 852
853 853 @action(["mess", "m"],
854 854 _('edit commit message without changing commit content'),
855 855 priority=True)
856 856 class message(histeditaction):
857 857 def commiteditor(self):
858 858 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
859 859
860 860 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
861 861 """utility function to find the first outgoing changeset
862 862
863 863 Used by initialization code"""
864 864 if opts is None:
865 865 opts = {}
866 866 dest = ui.expandpath(remote or 'default-push', remote or 'default')
867 867 dest, revs = hg.parseurl(dest, None)[:2]
868 868 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
869 869
870 870 revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
871 871 other = hg.peer(repo, opts, dest)
872 872
873 873 if revs:
874 874 revs = [repo.lookup(rev) for rev in revs]
875 875
876 876 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
877 877 if not outgoing.missing:
878 878 raise error.Abort(_('no outgoing ancestors'))
879 879 roots = list(repo.revs("roots(%ln)", outgoing.missing))
880 880 if 1 < len(roots):
881 881 msg = _('there are ambiguous outgoing revisions')
882 882 hint = _("see 'hg help histedit' for more detail")
883 883 raise error.Abort(msg, hint=hint)
884 884 return repo.lookup(roots[0])
885 885
886 886
887 887 @command('histedit',
888 888 [('', 'commands', '',
889 889 _('read history edits from the specified file'), _('FILE')),
890 890 ('c', 'continue', False, _('continue an edit already in progress')),
891 891 ('', 'edit-plan', False, _('edit remaining actions list')),
892 892 ('k', 'keep', False,
893 893 _("don't strip old nodes after edit is complete")),
894 894 ('', 'abort', False, _('abort an edit in progress')),
895 895 ('o', 'outgoing', False, _('changesets not found in destination')),
896 896 ('f', 'force', False,
897 897 _('force outgoing even for unrelated repositories')),
898 898 ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
899 899 _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"))
900 900 def histedit(ui, repo, *freeargs, **opts):
901 901 """interactively edit changeset history
902 902
903 903 This command lets you edit a linear series of changesets (up to
904 904 and including the working directory, which should be clean).
905 905 You can:
906 906
907 907 - `pick` to [re]order a changeset
908 908
909 909 - `drop` to omit changeset
910 910
911 911 - `mess` to reword the changeset commit message
912 912
913 913 - `fold` to combine it with the preceding changeset (using the later date)
914 914
915 915 - `roll` like fold, but discarding this commit's description and date
916 916
917 917 - `edit` to edit this changeset (preserving date)
918 918
919 919 There are a number of ways to select the root changeset:
920 920
921 921 - Specify ANCESTOR directly
922 922
923 923 - Use --outgoing -- it will be the first linear changeset not
924 924 included in destination. (See :hg:`help config.paths.default-push`)
925 925
926 926 - Otherwise, the value from the "histedit.defaultrev" config option
927 927 is used as a revset to select the base revision when ANCESTOR is not
928 928 specified. The first revision returned by the revset is used. By
929 929 default, this selects the editable history that is unique to the
930 930 ancestry of the working directory.
931 931
932 932 .. container:: verbose
933 933
934 934 If you use --outgoing, this command will abort if there are ambiguous
935 935 outgoing revisions. For example, if there are multiple branches
936 936 containing outgoing revisions.
937 937
938 938 Use "min(outgoing() and ::.)" or similar revset specification
939 939 instead of --outgoing to specify edit target revision exactly in
940 940 such ambiguous situation. See :hg:`help revsets` for detail about
941 941 selecting revisions.
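
For example (quoting the revset so the shell leaves it alone)::

  hg histedit "min(outgoing() and ::.)"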
942 942
943 943 .. container:: verbose
944 944
945 945 Examples:
946 946
947 947 - A number of changes have been made.
948 948 Revision 3 is no longer needed.
949 949
950 950 Start history editing from revision 3::
951 951
952 952 hg histedit -r 3
953 953
954 954 An editor opens, containing the list of revisions,
955 955 with specific actions specified::
956 956
957 957 pick 5339bf82f0ca 3 Zworgle the foobar
958 958 pick 8ef592ce7cc4 4 Bedazzle the zerlog
959 959 pick 0a9639fcda9d 5 Morgify the cromulancy
960 960
961 961 Additional information about the possible actions
962 962 to take appears below the list of revisions.
963 963
964 964 To remove revision 3 from the history,
965 965 its action (at the beginning of the relevant line)
966 966 is changed to 'drop'::
967 967
968 968 drop 5339bf82f0ca 3 Zworgle the foobar
969 969 pick 8ef592ce7cc4 4 Bedazzle the zerlog
970 970 pick 0a9639fcda9d 5 Morgify the cromulancy
971 971
972 972 - A number of changes have been made.
973 973 Revisions 2 and 4 need to be swapped.
974 974
975 975 Start history editing from revision 2::
976 976
977 977 hg histedit -r 2
978 978
979 979 An editor opens, containing the list of revisions,
980 980 with specific actions specified::
981 981
982 982 pick 252a1af424ad 2 Blorb a morgwazzle
983 983 pick 5339bf82f0ca 3 Zworgle the foobar
984 984 pick 8ef592ce7cc4 4 Bedazzle the zerlog
985 985
986 986 To swap revisions 2 and 4, their lines are swapped
987 987 in the editor::
988 988
989 989 pick 8ef592ce7cc4 4 Bedazzle the zerlog
990 990 pick 5339bf82f0ca 3 Zworgle the foobar
991 991 pick 252a1af424ad 2 Blorb a morgwazzle
992 992
993 993 Returns 0 on success, 1 if user intervention is required (not only
994 994 for intentional "edit" command, but also for resolving unexpected
995 995 conflicts).
996 996 """
997 997 state = histeditstate(repo)
998 998 try:
999 999 state.wlock = repo.wlock()
1000 1000 state.lock = repo.lock()
1001 1001 _histedit(ui, repo, state, *freeargs, **opts)
1002 1002 finally:
1003 1003 release(state.lock, state.wlock)
1004 1004
1005 1005 goalcontinue = 'continue'
1006 1006 goalabort = 'abort'
1007 1007 goaleditplan = 'edit-plan'
1008 1008 goalnew = 'new'
1009 1009
1010 1010 def _getgoal(opts):
1011 1011 if opts.get('continue'):
1012 1012 return goalcontinue
1013 1013 if opts.get('abort'):
1014 1014 return goalabort
1015 1015 if opts.get('edit_plan'):
1016 1016 return goaleditplan
1017 1017 return goalnew
1018 1018
1019 1019 def _readfile(ui, path):
1020 1020 if path == '-':
1021 1021 with ui.timeblockedsection('histedit'):
1022 1022 return ui.fin.read()
1023 1023 else:
1024 1024 with open(path, 'rb') as f:
1025 1025 return f.read()
1026 1026
1027 1027 def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
1028 1028 # TODO only abort if we try to histedit mq patches, not just
1029 1029 # blanket if mq patches are applied somewhere
1030 1030 mq = getattr(repo, 'mq', None)
1031 1031 if mq and mq.applied:
1032 1032 raise error.Abort(_('source has mq patches applied'))
1033 1033
1034 1034 # basic argument incompatibility processing
1035 1035 outg = opts.get('outgoing')
1036 1036 editplan = opts.get('edit_plan')
1037 1037 abort = opts.get('abort')
1038 1038 force = opts.get('force')
1039 1039 if force and not outg:
1040 1040 raise error.Abort(_('--force only allowed with --outgoing'))
1041 1041 if goal == 'continue':
1042 1042 if any((outg, abort, revs, freeargs, rules, editplan)):
1043 1043 raise error.Abort(_('no arguments allowed with --continue'))
1044 1044 elif goal == 'abort':
1045 1045 if any((outg, revs, freeargs, rules, editplan)):
1046 1046 raise error.Abort(_('no arguments allowed with --abort'))
1047 1047 elif goal == 'edit-plan':
1048 1048 if any((outg, revs, freeargs)):
1049 1049 raise error.Abort(_('only --commands argument allowed with '
1050 1050 '--edit-plan'))
1051 1051 else:
1052 1052 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1053 1053 raise error.Abort(_('history edit already in progress, try '
1054 1054 '--continue or --abort'))
1055 1055 if outg:
1056 1056 if revs:
1057 1057 raise error.Abort(_('no revisions allowed with --outgoing'))
1058 1058 if len(freeargs) > 1:
1059 1059 raise error.Abort(
1060 1060 _('only one repo argument allowed with --outgoing'))
1061 1061 else:
1062 1062 revs.extend(freeargs)
1063 1063 if len(revs) == 0:
1064 1064 defaultrev = destutil.desthistedit(ui, repo)
1065 1065 if defaultrev is not None:
1066 1066 revs.append(defaultrev)
1067 1067
1068 1068 if len(revs) != 1:
1069 1069 raise error.Abort(
1070 1070 _('histedit requires exactly one ancestor revision'))
1071 1071
1072 1072 def _histedit(ui, repo, state, *freeargs, **opts):
1073 1073 goal = _getgoal(opts)
1074 1074 revs = opts.get('rev', [])
1075 1075 rules = opts.get('commands', '')
1076 1076 state.keep = opts.get('keep', False)
1077 1077
1078 1078 _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)
1079 1079
1080 1080 # rebuild state
1081 1081 if goal == goalcontinue:
1082 1082 state.read()
1083 1083 state = bootstrapcontinue(ui, state, opts)
1084 1084 elif goal == goaleditplan:
1085 1085 _edithisteditplan(ui, repo, state, rules)
1086 1086 return
1087 1087 elif goal == goalabort:
1088 1088 _aborthistedit(ui, repo, state)
1089 1089 return
1090 1090 else:
1091 1091 # goal == goalnew
1092 1092 _newhistedit(ui, repo, state, revs, freeargs, opts)
1093 1093
1094 1094 _continuehistedit(ui, repo, state)
1095 1095 _finishhistedit(ui, repo, state)
1096 1096
1097 1097 def _continuehistedit(ui, repo, state):
1098 1098 """This function runs after either:
1099 1099 - bootstrapcontinue (if the goal is 'continue')
1100 1100 - _newhistedit (if the goal is 'new')
1101 1101 """
1102 1102 # preprocess rules so that we can hide inner folds from the user
1103 1103 # and only show one editor
1104 1104 actions = state.actions[:]
1105 1105 for idx, (action, nextact) in enumerate(
1106 1106 zip(actions, actions[1:] + [None])):
1107 1107 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1108 1108 state.actions[idx].__class__ = _multifold
1109 1109
1110 1110 total = len(state.actions)
1111 1111 pos = 0
1112 1112 state.tr = None
1113 1113
1114 1114 # Force an initial state file write, so the user can run --abort/continue
1115 1115 # even if there's an exception before the first transaction serialize.
1116 1116 state.write()
1117 1117 try:
1118 1118 # Don't use singletransaction by default since it rolls the entire
1119 1119 # transaction back if an unexpected exception happens (like a
1120 1120 # pretxncommit hook throws, or the user aborts the commit msg editor).
1121 1121 if ui.configbool("histedit", "singletransaction", False):
1122 1122 # Don't use a 'with' for the transaction, since actions may close
1123 1123 # and reopen a transaction. For example, if the action executes an
1124 1124 # external process it may choose to commit the transaction first.
1125 1125 state.tr = repo.transaction('histedit')
1126 1126
1127 1127 while state.actions:
1128 1128 state.write(tr=state.tr)
1129 1129 actobj = state.actions[0]
1130 1130 pos += 1
1131 1131 ui.progress(_("editing"), pos, actobj.torule(),
1132 1132 _('changes'), total)
1133 1133 ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
1134 1134 actobj.torule()))
1135 1135 parentctx, replacement_ = actobj.run()
1136 1136 state.parentctxnode = parentctx.node()
1137 1137 state.replacements.extend(replacement_)
1138 1138 state.actions.pop(0)
1139 1139
1140 1140 if state.tr is not None:
1141 1141 state.tr.close()
1142 1142 except error.InterventionRequired:
1143 1143 if state.tr is not None:
1144 1144 state.tr.close()
1145 1145 raise
1146 1146 except Exception:
1147 1147 if state.tr is not None:
1148 1148 state.tr.abort()
1149 1149 raise
1150 1150
1151 1151 state.write()
1152 1152 ui.progress(_("editing"), None)
1153 1153
1154 1154 def _finishhistedit(ui, repo, state):
1155 1155 """This action runs when histedit is finishing its session"""
1156 1156 repo.ui.pushbuffer()
1157 1157 hg.update(repo, state.parentctxnode, quietempty=True)
1158 1158 repo.ui.popbuffer()
1159 1159
1160 1160 mapping, tmpnodes, created, ntm = processreplacement(state)
1161 1161 if mapping:
1162 1162 for prec, succs in mapping.iteritems():
1163 1163 if not succs:
1164 1164 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1165 1165 else:
1166 1166 ui.debug('histedit: %s is replaced by %s\n' % (
1167 1167 node.short(prec), node.short(succs[0])))
1168 1168 if len(succs) > 1:
1169 1169 m = 'histedit: %s'
1170 1170 for n in succs[1:]:
1171 1171 ui.debug(m % node.short(n))
1172 1172
1173 1173 safecleanupnode(ui, repo, 'temp', tmpnodes)
1174 1174
1175 1175 if not state.keep:
1176 1176 if mapping:
1177 1177 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1178 1178 # TODO update mq state
1179 1179 safecleanupnode(ui, repo, 'replaced', mapping)
1180 1180
1181 1181 state.clear()
1182 1182 if os.path.exists(repo.sjoin('undo')):
1183 1183 os.unlink(repo.sjoin('undo'))
1184 1184 if repo.vfs.exists('histedit-last-edit.txt'):
1185 1185 repo.vfs.unlink('histedit-last-edit.txt')
1186 1186
1187 1187 def _aborthistedit(ui, repo, state):
1188 1188 try:
1189 1189 state.read()
1190 1190 __, leafs, tmpnodes, __ = processreplacement(state)
1191 1191 ui.debug('restore wc to old parent %s\n'
1192 1192 % node.short(state.topmost))
1193 1193
1194 1194 # Recover our old commits if necessary
1195 1195 if not state.topmost in repo and state.backupfile:
1196 1196 backupfile = repo.vfs.join(state.backupfile)
1197 1197 f = hg.openpath(ui, backupfile)
1198 1198 gen = exchange.readbundle(ui, f, backupfile)
1199 1199 with repo.transaction('histedit.abort') as tr:
1200 1200 if not isinstance(gen, bundle2.unbundle20):
1201 1201 gen.apply(repo, 'histedit', 'bundle:' + backupfile)
1202 1202 if isinstance(gen, bundle2.unbundle20):
1203 1203 bundle2.applybundle(repo, gen, tr,
1204 1204 source='histedit',
1205 1205 url='bundle:' + backupfile)
1206 1206
1207 1207 os.remove(backupfile)
1208 1208
1209 1209 # check whether we should update away
1210 1210 if repo.unfiltered().revs('parents() and (%n or %ln::)',
1211 1211 state.parentctxnode, leafs | tmpnodes):
1212 1212 hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
1213 1213 cleanupnode(ui, repo, 'created', tmpnodes)
1214 1214 cleanupnode(ui, repo, 'temp', leafs)
1215 1215 except Exception:
1216 1216 if state.inprogress():
1217 1217 ui.warn(_('warning: encountered an exception during histedit '
1218 1218 '--abort; the repository may not have been completely '
1219 1219 'cleaned up\n'))
1220 1220 raise
1221 1221 finally:
1222 1222 state.clear()
1223 1223
1224 1224 def _edithisteditplan(ui, repo, state, rules):
1225 1225 state.read()
1226 1226 if not rules:
1227 1227 comment = geteditcomment(ui,
1228 1228 node.short(state.parentctxnode),
1229 1229 node.short(state.topmost))
1230 1230 rules = ruleeditor(repo, ui, state.actions, comment)
1231 1231 else:
1232 1232 rules = _readfile(ui, rules)
1233 1233 actions = parserules(rules, state)
1234 1234 ctxs = [repo[act.node] \
1235 1235 for act in state.actions if act.node]
1236 1236 warnverifyactions(ui, repo, actions, state, ctxs)
1237 1237 state.actions = actions
1238 1238 state.write()
1239 1239
1240 1240 def _newhistedit(ui, repo, state, revs, freeargs, opts):
1241 1241 outg = opts.get('outgoing')
1242 1242 rules = opts.get('commands', '')
1243 1243 force = opts.get('force')
1244 1244
1245 1245 cmdutil.checkunfinished(repo)
1246 1246 cmdutil.bailifchanged(repo)
1247 1247
1248 1248 topmost, empty = repo.dirstate.parents()
1249 1249 if outg:
1250 1250 if freeargs:
1251 1251 remote = freeargs[0]
1252 1252 else:
1253 1253 remote = None
1254 1254 root = findoutgoing(ui, repo, remote, force, opts)
1255 1255 else:
1256 1256 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1257 1257 if len(rr) != 1:
1258 1258 raise error.Abort(_('The specified revisions must have '
1259 1259 'exactly one common root'))
1260 1260 root = rr[0].node()
1261 1261
1262 1262 revs = between(repo, root, topmost, state.keep)
1263 1263 if not revs:
1264 1264 raise error.Abort(_('%s is not an ancestor of working directory') %
1265 1265 node.short(root))
1266 1266
1267 1267 ctxs = [repo[r] for r in revs]
1268 1268 if not rules:
1269 1269 comment = geteditcomment(ui, node.short(root), node.short(topmost))
1270 1270 actions = [pick(state, r) for r in revs]
1271 1271 rules = ruleeditor(repo, ui, actions, comment)
1272 1272 else:
1273 1273 rules = _readfile(ui, rules)
1274 1274 actions = parserules(rules, state)
1275 1275 warnverifyactions(ui, repo, actions, state, ctxs)
1276 1276
1277 1277 parentctxnode = repo[root].parents()[0].node()
1278 1278
1279 1279 state.parentctxnode = parentctxnode
1280 1280 state.actions = actions
1281 1281 state.topmost = topmost
1282 1282 state.replacements = []
1283 1283
1284 1284 # Create a backup so we can always abort completely.
1285 1285 backupfile = None
1286 1286 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1287 1287 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1288 1288 'histedit')
1289 1289 state.backupfile = backupfile
1290 1290
1291 1291 def _getsummary(ctx):
1292 1292 # a common pattern is to extract the summary but default to the empty
1293 1293 # string
1294 1294 summary = ctx.description() or ''
1295 1295 if summary:
1296 1296 summary = summary.splitlines()[0]
1297 1297 return summary
1298 1298
1299 1299 def bootstrapcontinue(ui, state, opts):
1300 1300 repo = state.repo
1301 1301
1302 1302 ms = mergemod.mergestate.read(repo)
1303 1303 mergeutil.checkunresolved(ms)
1304 1304
1305 1305 if state.actions:
1306 1306 actobj = state.actions.pop(0)
1307 1307
1308 1308 if _isdirtywc(repo):
1309 1309 actobj.continuedirty()
1310 1310 if _isdirtywc(repo):
1311 1311 abortdirty()
1312 1312
1313 1313 parentctx, replacements = actobj.continueclean()
1314 1314
1315 1315 state.parentctxnode = parentctx.node()
1316 1316 state.replacements.extend(replacements)
1317 1317
1318 1318 return state
1319 1319
1320 1320 def between(repo, old, new, keep):
1321 1321 """select and validate the set of revision to edit
1322 1322
1323 1323 When keep is false, the specified set can't have children."""
1324 1324 ctxs = list(repo.set('%n::%n', old, new))
1325 1325 if ctxs and not keep:
1326 1326 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1327 1327 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1328 1328 raise error.Abort(_('can only histedit a changeset together '
1329 1329 'with all its descendants'))
1330 1330 if repo.revs('(%ld) and merge()', ctxs):
1331 1331 raise error.Abort(_('cannot edit history that contains merges'))
1332 1332 root = ctxs[0] # list is already sorted by repo.set
1333 1333 if not root.mutable():
1334 1334 raise error.Abort(_('cannot edit public changeset: %s') % root,
1335 1335 hint=_("see 'hg help phases' for details"))
1336 1336 return [c.node() for c in ctxs]
1337 1337
1338 1338 def ruleeditor(repo, ui, actions, editcomment=""):
1339 1339 """open an editor to edit rules
1340 1340
1341 1341 rules are in the format [ [act, ctx], ...] like in state.rules
1342 1342 """
1343 1343 if repo.ui.configbool("experimental", "histedit.autoverb"):
1344 1344 newact = util.sortdict()
1345 1345 for act in actions:
1346 1346 ctx = repo[act.node]
1347 1347 summary = _getsummary(ctx)
1348 1348 fword = summary.split(' ', 1)[0].lower()
1349 1349 added = False
1350 1350
1351 1351 # if it doesn't end with the special character '!' just skip this
1352 1352 if fword.endswith('!'):
1353 1353 fword = fword[:-1]
1354 1354 if fword in primaryactions | secondaryactions | tertiaryactions:
1355 1355 act.verb = fword
1356 1356 # get the target summary
1357 1357 tsum = summary[len(fword) + 1:].lstrip()
1358 1358 # safe but slow: reverse iterate over the actions so we
1359 1359 # don't clash on two commits having the same summary
1360 1360 for na, l in reversed(list(newact.iteritems())):
1361 1361 actx = repo[na.node]
1362 1362 asum = _getsummary(actx)
1363 1363 if asum == tsum:
1364 1364 added = True
1365 1365 l.append(act)
1366 1366 break
1367 1367
1368 1368 if not added:
1369 1369 newact[act] = []
1370 1370
1371 1371 # copy over and flatten the new list
1372 1372 actions = []
1373 1373 for na, l in newact.iteritems():
1374 1374 actions.append(na)
1375 1375 actions += l
1376 1376
1377 1377 rules = '\n'.join([act.torule() for act in actions])
1378 1378 rules += '\n\n'
1379 1379 rules += editcomment
1380 1380 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
1381 1381 repopath=repo.path)
1382 1382
1383 1383 # Save edit rules in .hg/histedit-last-edit.txt in case
1384 1384 # the user needs to ask for help after something
1385 1385 # surprising happens.
1386 1386 f = open(repo.vfs.join('histedit-last-edit.txt'), 'w')
1387 1387 f.write(rules)
1388 1388 f.close()
1389 1389
1390 1390 return rules
1391 1391
1392 1392 def parserules(rules, state):
1393 1393 """Read the histedit rules string and return list of action objects """
1394 1394 rules = [l for l in (r.strip() for r in rules.splitlines())
1395 1395 if l and not l.startswith('#')]
1396 1396 actions = []
1397 1397 for r in rules:
1398 1398 if ' ' not in r:
1399 1399 raise error.ParseError(_('malformed line "%s"') % r)
1400 1400 verb, rest = r.split(' ', 1)
1401 1401
1402 1402 if verb not in actiontable:
1403 1403 raise error.ParseError(_('unknown action "%s"') % verb)
1404 1404
1405 1405 action = actiontable[verb].fromrule(state, rest)
1406 1406 actions.append(action)
1407 1407 return actions
1408 1408
1409 1409 def warnverifyactions(ui, repo, actions, state, ctxs):
1410 1410 try:
1411 1411 verifyactions(actions, state, ctxs)
1412 1412 except error.ParseError:
1413 1413 if repo.vfs.exists('histedit-last-edit.txt'):
1414 1414 ui.warn(_('warning: histedit rules saved '
1415 1415 'to: .hg/histedit-last-edit.txt\n'))
1416 1416 raise
1417 1417
1418 1418 def verifyactions(actions, state, ctxs):
1419 1419 """Verify that there exists exactly one action per given changeset and
1420 1420 other constraints.
1421 1421
1422 1422 Will abort if there are too many or too few rules, a malformed rule,
1423 1423 or a rule on a changeset outside of the user-given range.
1424 1424 """
1425 1425 expected = set(c.node() for c in ctxs)
1426 1426 seen = set()
1427 1427 prev = None
1428 1428 for action in actions:
1429 1429 action.verify(prev, expected, seen)
1430 1430 prev = action
1431 1431 if action.node is not None:
1432 1432 seen.add(action.node)
1433 1433 missing = sorted(expected - seen) # sort to stabilize output
1434 1434
1435 1435 if state.repo.ui.configbool('histedit', 'dropmissing'):
1436 1436 if len(actions) == 0:
1437 1437 raise error.ParseError(_('no rules provided'),
1438 1438 hint=_('use strip extension to remove commits'))
1439 1439
1440 1440 drops = [drop(state, n) for n in missing]
1441 1441 # put them at the beginning so they execute immediately and
1442 1442 # don't show in the edit-plan in the future
1443 1443 actions[:0] = drops
1444 1444 elif missing:
1445 1445 raise error.ParseError(_('missing rules for changeset %s') %
1446 1446 node.short(missing[0]),
1447 1447 hint=_('use "drop %s" to discard, see also: '
1448 1448 "'hg help -e histedit.config'")
1449 1449 % node.short(missing[0]))
1450 1450
1451 1451 def adjustreplacementsfrommarkers(repo, oldreplacements):
1452 1452 """Adjust replacements from obsolescence markers
1453 1453
1454 1454 Replacements structure is originally generated based on
1455 1455 histedit's state and does not account for changes that are
1456 1456 not recorded there. This function fixes that by adding
1457 1457 data read from obsolescence markers"""
1458 1458 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1459 1459 return oldreplacements
1460 1460
1461 1461 unfi = repo.unfiltered()
1462 1462 nm = unfi.changelog.nodemap
1463 1463 obsstore = repo.obsstore
1464 1464 newreplacements = list(oldreplacements)
1465 1465 oldsuccs = [r[1] for r in oldreplacements]
1466 1466 # successors that have already been added to succstocheck once
1467 1467 seensuccs = set().union(*oldsuccs) # create a set from an iterable of tuples
1468 1468 succstocheck = list(seensuccs)
1469 1469 while succstocheck:
1470 1470 n = succstocheck.pop()
1471 1471 missing = nm.get(n) is None
1472 1472 markers = obsstore.successors.get(n, ())
1473 1473 if missing and not markers:
1474 1474 # dead end, mark it as such
1475 1475 newreplacements.append((n, ()))
1476 1476 for marker in markers:
1477 1477 nsuccs = marker[1]
1478 1478 newreplacements.append((n, nsuccs))
1479 1479 for nsucc in nsuccs:
1480 1480 if nsucc not in seensuccs:
1481 1481 seensuccs.add(nsucc)
1482 1482 succstocheck.append(nsucc)
1483 1483
1484 1484 return newreplacements
1485 1485
1486 1486 def processreplacement(state):
1487 1487 """process the list of replacements to return
1488 1488
1489 1489 1) the final mapping between original and created nodes
1490 1490 2) the list of temporary nodes created by histedit
1491 1491 3) the list of new commits created by histedit"""
1492 1492 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
1493 1493 allsuccs = set()
1494 1494 replaced = set()
1495 1495 fullmapping = {}
1496 1496 # initialize basic set
1497 1497 # fullmapping records all operations recorded in replacement
1498 1498 for rep in replacements:
1499 1499 allsuccs.update(rep[1])
1500 1500 replaced.add(rep[0])
1501 1501 fullmapping.setdefault(rep[0], set()).update(rep[1])
1502 1502 new = allsuccs - replaced
1503 1503 tmpnodes = allsuccs & replaced
1504 1504 # Reduce fullmapping into a direct relation between original nodes
1505 1505 # and the final nodes created during history editing
1506 1506 # Dropped changesets are replaced by an empty list
1507 1507 toproceed = set(fullmapping)
1508 1508 final = {}
1509 1509 while toproceed:
1510 1510 for x in list(toproceed):
1511 1511 succs = fullmapping[x]
1512 1512 for s in list(succs):
1513 1513 if s in toproceed:
1514 1514 # non final node with unknown closure
1515 1515 # We can't process this now
1516 1516 break
1517 1517 elif s in final:
1518 1518 # non final node, replace with closure
1519 1519 succs.remove(s)
1520 1520 succs.update(final[s])
1521 1521 else:
1522 1522 final[x] = succs
1523 1523 toproceed.remove(x)
1524 1524 # remove tmpnodes from final mapping
1525 1525 for n in tmpnodes:
1526 1526 del final[n]
1527 1527 # we expect all changes involved in final to exist in the repo
1528 1528 # turn `final` into a list (topologically sorted)
1529 1529 nm = state.repo.changelog.nodemap
1530 1530 for prec, succs in final.items():
1531 1531 final[prec] = sorted(succs, key=nm.get)
1532 1532
1533 1533 # compute the topmost element (necessary for bookmarks)
1534 1534 if new:
1535 1535 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1536 1536 elif not final:
1537 1537 # Nothing rewritten at all. We won't need `newtopmost`;
1538 1538 # it is the same as `oldtopmost` and `processreplacement` knows it
1539 1539 newtopmost = None
1540 1540 else:
1541 1541 # everybody died. The newtopmost is the parent of the root.
1542 1542 r = state.repo.changelog.rev
1543 1543 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1544 1544
1545 1545 return final, tmpnodes, new, newtopmost
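A worked toy example of the reduction above (hypothetical nodes, assuming no extra obsolescence markers apply): a is the original changeset, t a temporary node created mid-edit, and n the surviving commit.

    state.replacements = [(a, (t,)), (t, (n,))]
    final, tmpnodes, new, newtopmost = processreplacement(state)
    # final      == {a: [n]}   -- the temporary node t is collapsed away
    # tmpnodes   == {t}
    # new        == {n}
    # newtopmost == n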
1546 1546
1547 1547 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1548 1548 """Move bookmark from old to newly created node"""
1549 1549 if not mapping:
1550 1550 # if nothing got rewritten there is no purpose for this function
1551 1551 return
1552 1552 moves = []
1553 1553 for bk, old in sorted(repo._bookmarks.iteritems()):
1554 1554 if old == oldtopmost:
1555 1555 # special case: ensure the bookmark stays on tip.
1556 1556 #
1557 1557 # This is arguably a feature and we may only want that for the
1558 1558 # active bookmark. But the behavior is kept compatible with the old
1559 1559 # version for now.
1560 1560 moves.append((bk, newtopmost))
1561 1561 continue
1562 1562 base = old
1563 1563 new = mapping.get(base, None)
1564 1564 if new is None:
1565 1565 continue
1566 1566 while not new:
1567 1567 # base is killed, trying with parent
1568 1568 base = repo[base].p1().node()
1569 1569 new = mapping.get(base, (base,))
1570 1570 # nothing to move
1571 1571 moves.append((bk, new[-1]))
1572 1572 if moves:
1573 1573 lock = tr = None
1574 1574 try:
1575 1575 lock = repo.lock()
1576 1576 tr = repo.transaction('histedit')
1577 1577 marks = repo._bookmarks
1578 1578 for mark, new in moves:
1579 1579 old = marks[mark]
1580 1580 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1581 1581 % (mark, node.short(old), node.short(new)))
1582 1582 marks[mark] = new
1583 1583 marks.recordchange(tr)
1584 1584 tr.close()
1585 1585 finally:
1586 1586 release(tr, lock)
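For illustration only (hypothetical nodes): if a bookmark sits on a node x that the edit dropped, the loop above walks up to x's parent until it finds a surviving or rewritten ancestor.

    # hypothetical: x was dropped, its parent p was rewritten to p2
    # mapping == {x: (), p: (p2,)}  ->  moves gets ('feature', p2)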
1587 1587
1588 1588 def cleanupnode(ui, repo, name, nodes):
1589 1589 """strip a group of nodes from the repository
1590 1590
1591 1591 The set of nodes to strip may contain unknown nodes."""
1592 1592 ui.debug('should strip %s nodes %s\n' %
1593 1593 (name, ', '.join([node.short(n) for n in nodes])))
1594 1594 with repo.lock():
1595 1595 # do not let filtering get in the way of the cleanse
1596 1596 # we should probably get rid of obsolescence markers created during the
1597 1597 # histedit, but we currently do not have such information.
1598 1598 repo = repo.unfiltered()
1599 1599 # Find all nodes that need to be stripped
1600 1600 # (we use %lr instead of %ln to silently ignore unknown items)
1601 1601 nm = repo.changelog.nodemap
1602 1602 nodes = sorted(n for n in nodes if n in nm)
1603 1603 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1604 1604 for c in roots:
1605 1605 # We should process nodes in reverse order to strip tip-most first,
1606 1606 # but this triggers a bug in the changegroup hook.
1607 1607 # This would reduce bundle overhead.
1608 1608 repair.strip(ui, repo, c)
1609 1609
1610 1610 def safecleanupnode(ui, repo, name, nodes):
1611 1611 """strip or obsolete nodes
1612 1612
1613 1613 nodes can be either a set or a dict mapping nodes to replacements.
1614 1614 nodes may be unknown (outside the repo).
1615 1615 """
1616 1616 supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
1617 1617 if supportsmarkers:
1618 1618 if util.safehasattr(nodes, 'get'):
1619 1619 # nodes is a dict-like mapping
1620 1620 # use unfiltered repo for successors in case they are hidden
1621 1621 urepo = repo.unfiltered()
1622 1622 def getmarker(prec):
1623 1623 succs = tuple(urepo[n] for n in nodes.get(prec, ()))
1624 1624 return (repo[prec], succs)
1625 1625 else:
1626 1626 # nodes is a set-like
1627 1627 def getmarker(prec):
1628 1628 return (repo[prec], ())
1629 1629 # sort by revision number because it sounds "right"
1630 1630 sortednodes = sorted([n for n in nodes if n in repo],
1631 1631 key=repo.changelog.rev)
1632 1632 markers = [getmarker(t) for t in sortednodes]
1633 1633 if markers:
1634 obsolete.createmarkers(repo, markers)
1634 obsolete.createmarkers(repo, markers, operation='histedit')
1635 1635 else:
1636 1636 return cleanupnode(ui, repo, name, nodes)
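Hypothetical call sites (the names and node ids are illustrative, not from this change), showing the two accepted shapes of `nodes`: with createmarkers enabled the first call records an obsmarker old -> new (now tagged with operation='histedit' by this change) and the second prunes tmpnode with no successor; otherwise both fall back to stripping.

    safecleanupnode(ui, repo, 'replaced', {old: (new,)})
    safecleanupnode(ui, repo, 'temp', {tmpnode})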
1637 1637
1638 1638 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1639 1639 if isinstance(nodelist, str):
1640 1640 nodelist = [nodelist]
1641 1641 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1642 1642 state = histeditstate(repo)
1643 1643 state.read()
1644 1644 histedit_nodes = {action.node for action
1645 1645 in state.actions if action.node}
1646 1646 common_nodes = histedit_nodes & set(nodelist)
1647 1647 if common_nodes:
1648 1648 raise error.Abort(_("histedit in progress, can't strip %s")
1649 1649 % ', '.join(node.short(x) for x in common_nodes))
1650 1650 return orig(ui, repo, nodelist, *args, **kwargs)
1651 1651
1652 1652 extensions.wrapfunction(repair, 'strip', stripwrapper)
1653 1653
1654 1654 def summaryhook(ui, repo):
1655 1655 if not os.path.exists(repo.vfs.join('histedit-state')):
1656 1656 return
1657 1657 state = histeditstate(repo)
1658 1658 state.read()
1659 1659 if state.actions:
1660 1660 # i18n: column positioning for "hg summary"
1661 1661 ui.write(_('hist: %s (histedit --continue)\n') %
1662 1662 (ui.label(_('%d remaining'), 'histedit.remaining') %
1663 1663 len(state.actions)))
1664 1664
1665 1665 def extsetup(ui):
1666 1666 cmdutil.summaryhooks.add('histedit', summaryhook)
1667 1667 cmdutil.unfinishedstates.append(
1668 1668 ['histedit-state', False, True, _('histedit in progress'),
1669 1669 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1670 1670 cmdutil.afterresolvedstates.append(
1671 1671 ['histedit-state', _('hg histedit --continue')])
1672 1672 if ui.configbool("experimental", "histeditng"):
1673 1673 globals()['base'] = action(['base', 'b'],
1674 1674 _('checkout changeset and apply further changesets from there')
1675 1675 )(base)
@@ -1,1541 +1,1541 b''
1 1 # rebase.py - rebasing feature for mercurial
2 2 #
3 3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to move sets of revisions to a different ancestor
9 9
10 10 This extension lets you rebase changesets in an existing Mercurial
11 11 repository.
12 12
13 13 For more information:
14 14 https://mercurial-scm.org/wiki/RebaseExtension
15 15 '''
16 16
17 17 from __future__ import absolute_import
18 18
19 19 import errno
20 20 import os
21 21
22 22 from mercurial.i18n import _
23 23 from mercurial.node import (
24 24 hex,
25 25 nullid,
26 26 nullrev,
27 27 short,
28 28 )
29 29 from mercurial import (
30 30 bookmarks,
31 31 cmdutil,
32 32 commands,
33 33 copies,
34 34 destutil,
35 35 dirstateguard,
36 36 error,
37 37 extensions,
38 38 hg,
39 39 lock,
40 40 merge as mergemod,
41 41 mergeutil,
42 42 obsolete,
43 43 patch,
44 44 phases,
45 45 registrar,
46 46 repair,
47 47 repoview,
48 48 revset,
49 49 scmutil,
50 50 smartset,
51 51 util,
52 52 )
53 53
54 54 release = lock.release
55 55 templateopts = commands.templateopts
56 56
57 57 # The following constants are used throughout the rebase module. The ordering of
58 58 # their values must be maintained.
59 59
60 60 # Indicates that a revision needs to be rebased
61 61 revtodo = -1
62 62 nullmerge = -2
63 63 revignored = -3
64 64 # successor in rebase destination
65 65 revprecursor = -4
66 66 # plain prune (no successor)
67 67 revpruned = -5
68 68 revskipped = (revignored, revprecursor, revpruned)
69 69
70 70 cmdtable = {}
71 71 command = cmdutil.command(cmdtable)
72 72 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
73 73 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
74 74 # be specifying the version(s) of Mercurial they are tested with, or
75 75 # leave the attribute unspecified.
76 76 testedwith = 'ships-with-hg-core'
77 77
78 78 def _nothingtorebase():
79 79 return 1
80 80
81 81 def _savegraft(ctx, extra):
82 82 s = ctx.extra().get('source', None)
83 83 if s is not None:
84 84 extra['source'] = s
85 85 s = ctx.extra().get('intermediate-source', None)
86 86 if s is not None:
87 87 extra['intermediate-source'] = s
88 88
89 89 def _savebranch(ctx, extra):
90 90 extra['branch'] = ctx.branch()
91 91
92 92 def _makeextrafn(copiers):
93 93 """make an extrafn out of the given copy-functions.
94 94
95 95 A copy function takes a context and an extra dict, and mutates the
96 96 extra dict as needed based on the given context.
97 97 """
98 98 def extrafn(ctx, extra):
99 99 for c in copiers:
100 100 c(ctx, extra)
101 101 return extrafn
102 102
103 103 def _destrebase(repo, sourceset, destspace=None):
104 104 """small wrapper around destmerge to pass the right extra args
105 105
106 106 Please wrap destutil.destmerge instead."""
107 107 return destutil.destmerge(repo, action='rebase', sourceset=sourceset,
108 108 onheadcheck=False, destspace=destspace)
109 109
110 110 revsetpredicate = registrar.revsetpredicate()
111 111
112 112 @revsetpredicate('_destrebase')
113 113 def _revsetdestrebase(repo, subset, x):
114 114 # ``_rebasedefaultdest()``
115 115
116 116 # default destination for rebase.
117 117 # # XXX: Currently private because I expect the signature to change.
118 118 # # XXX: - bailing out in case of ambiguity vs returning all data.
119 119 # i18n: "_rebasedefaultdest" is a keyword
120 120 sourceset = None
121 121 if x is not None:
122 122 sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
123 123 return subset & smartset.baseset([_destrebase(repo, sourceset)])
124 124
125 125 class rebaseruntime(object):
126 126 """This class is a container for rebase runtime state"""
127 127 def __init__(self, repo, ui, opts=None):
128 128 if opts is None:
129 129 opts = {}
130 130
131 131 self.repo = repo
132 132 self.ui = ui
133 133 self.opts = opts
134 134 self.originalwd = None
135 135 self.external = nullrev
136 136 # Mapping between the old revision id and either the new rebased
137 137 # revision or what needs to be done with the old revision. The state
138 138 # dict contains most of the rebase progress state.
139 139 self.state = {}
140 140 self.activebookmark = None
141 141 self.currentbookmarks = None
142 142 self.dest = None
143 143 self.skipped = set()
144 144 self.destancestors = set()
145 145
146 146 self.collapsef = opts.get('collapse', False)
147 147 self.collapsemsg = cmdutil.logmessage(ui, opts)
148 148 self.date = opts.get('date', None)
149 149
150 150 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
151 151 self.extrafns = [_savegraft]
152 152 if e:
153 153 self.extrafns = [e]
154 154
155 155 self.keepf = opts.get('keep', False)
156 156 self.keepbranchesf = opts.get('keepbranches', False)
157 157 # keepopen is not meant for use on the command line, but by
158 158 # other extensions
159 159 self.keepopen = opts.get('keepopen', False)
160 160 self.obsoletenotrebased = {}
161 161
162 162 def storestatus(self, tr=None):
163 163 """Store the current status to allow recovery"""
164 164 if tr:
165 165 tr.addfilegenerator('rebasestate', ('rebasestate',),
166 166 self._writestatus, location='plain')
167 167 else:
168 168 with self.repo.vfs("rebasestate", "w") as f:
169 169 self._writestatus(f)
170 170
171 171 def _writestatus(self, f):
172 172 repo = self.repo.unfiltered()
173 173 f.write(repo[self.originalwd].hex() + '\n')
174 174 f.write(repo[self.dest].hex() + '\n')
175 175 f.write(repo[self.external].hex() + '\n')
176 176 f.write('%d\n' % int(self.collapsef))
177 177 f.write('%d\n' % int(self.keepf))
178 178 f.write('%d\n' % int(self.keepbranchesf))
179 179 f.write('%s\n' % (self.activebookmark or ''))
180 180 for d, v in self.state.iteritems():
181 181 oldrev = repo[d].hex()
182 182 if v >= 0:
183 183 newrev = repo[v].hex()
184 184 elif v == revtodo:
185 185 # To maintain format compatibility, we have to use nullid.
186 186 # Please do remove this special case when upgrading the format.
187 187 newrev = hex(nullid)
188 188 else:
189 189 newrev = v
190 190 f.write("%s:%s\n" % (oldrev, newrev))
191 191 repo.ui.debug('rebase status stored\n')
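For reference, the .hg/rebasestate layout produced by the writer above (and parsed by restorestatus() below), read directly from the code:

    # line 0: originalwd hex hash
    # line 1: dest hex hash
    # line 2: external hex hash
    # line 3: collapse flag (0/1)
    # line 4: keep flag (0/1)
    # line 5: keepbranches flag (0/1)
    # line 6: active bookmark name (possibly empty)
    # rest  : oldrev:newrev pairs, with nullid meaning "still to rebase"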
192 192
193 193 def restorestatus(self):
194 194 """Restore a previously stored status"""
195 195 repo = self.repo
196 196 keepbranches = None
197 197 dest = None
198 198 collapse = False
199 199 external = nullrev
200 200 activebookmark = None
201 201 state = {}
202 202
203 203 try:
204 204 f = repo.vfs("rebasestate")
205 205 for i, l in enumerate(f.read().splitlines()):
206 206 if i == 0:
207 207 originalwd = repo[l].rev()
208 208 elif i == 1:
209 209 dest = repo[l].rev()
210 210 elif i == 2:
211 211 external = repo[l].rev()
212 212 elif i == 3:
213 213 collapse = bool(int(l))
214 214 elif i == 4:
215 215 keep = bool(int(l))
216 216 elif i == 5:
217 217 keepbranches = bool(int(l))
218 218 elif i == 6 and not (len(l) == 81 and ':' in l):
219 219 # line 6 is a recent addition, so for backwards
220 220 # compatibility check that the line doesn't look like the
221 221 # oldrev:newrev lines
222 222 activebookmark = l
223 223 else:
224 224 oldrev, newrev = l.split(':')
225 225 if newrev in (str(nullmerge), str(revignored),
226 226 str(revprecursor), str(revpruned)):
227 227 state[repo[oldrev].rev()] = int(newrev)
228 228 elif newrev == nullid:
229 229 state[repo[oldrev].rev()] = revtodo
230 230 # Legacy compat special case
231 231 else:
232 232 state[repo[oldrev].rev()] = repo[newrev].rev()
233 233
234 234 except IOError as err:
235 235 if err.errno != errno.ENOENT:
236 236 raise
237 237 cmdutil.wrongtooltocontinue(repo, _('rebase'))
238 238
239 239 if keepbranches is None:
240 240 raise error.Abort(_('.hg/rebasestate is incomplete'))
241 241
242 242 skipped = set()
243 243 # recompute the set of skipped revs
244 244 if not collapse:
245 245 seen = {dest}
246 246 for old, new in sorted(state.items()):
247 247 if new != revtodo and new in seen:
248 248 skipped.add(old)
249 249 seen.add(new)
250 250 repo.ui.debug('computed skipped revs: %s\n' %
251 251 (' '.join(str(r) for r in sorted(skipped)) or None))
252 252 repo.ui.debug('rebase status resumed\n')
253 253 _setrebasesetvisibility(repo, set(state.keys()) | {originalwd})
254 254
255 255 self.originalwd = originalwd
256 256 self.dest = dest
257 257 self.state = state
258 258 self.skipped = skipped
259 259 self.collapsef = collapse
260 260 self.keepf = keep
261 261 self.keepbranchesf = keepbranches
262 262 self.external = external
263 263 self.activebookmark = activebookmark
264 264
265 265 def _handleskippingobsolete(self, rebaserevs, obsoleterevs, dest):
266 266 """Compute structures necessary for skipping obsolete revisions
267 267
268 268 rebaserevs: iterable of all revisions that are to be rebased
269 269 obsoleterevs: iterable of all obsolete revisions in rebaseset
270 270 dest: a destination revision for the rebase operation
271 271 """
272 272 self.obsoletenotrebased = {}
273 273 if not self.ui.configbool('experimental', 'rebaseskipobsolete',
274 274 default=True):
275 275 return
276 276 rebaseset = set(rebaserevs)
277 277 obsoleteset = set(obsoleterevs)
278 278 self.obsoletenotrebased = _computeobsoletenotrebased(self.repo,
279 279 obsoleteset, dest)
280 280 skippedset = set(self.obsoletenotrebased)
281 281 _checkobsrebase(self.repo, self.ui, obsoleteset, rebaseset, skippedset)
282 282
283 283 def _prepareabortorcontinue(self, isabort):
284 284 try:
285 285 self.restorestatus()
286 286 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
287 287 except error.RepoLookupError:
288 288 if isabort:
289 289 clearstatus(self.repo)
290 290 clearcollapsemsg(self.repo)
291 291 self.repo.ui.warn(_('rebase aborted (no revision is removed,'
292 292 ' only broken state is cleared)\n'))
293 293 return 0
294 294 else:
295 295 msg = _('cannot continue inconsistent rebase')
296 296 hint = _('use "hg rebase --abort" to clear broken state')
297 297 raise error.Abort(msg, hint=hint)
298 298 if isabort:
299 299 return abort(self.repo, self.originalwd, self.dest,
300 300 self.state, activebookmark=self.activebookmark)
301 301
302 302 obsrevs = (r for r, st in self.state.items() if st == revprecursor)
303 303 self._handleskippingobsolete(self.state.keys(), obsrevs, self.dest)
304 304
305 305 def _preparenewrebase(self, dest, rebaseset):
306 306 if dest is None:
307 307 return _nothingtorebase()
308 308
309 309 allowunstable = obsolete.isenabled(self.repo, obsolete.allowunstableopt)
310 310 if (not (self.keepf or allowunstable)
311 311 and self.repo.revs('first(children(%ld) - %ld)',
312 312 rebaseset, rebaseset)):
313 313 raise error.Abort(
314 314 _("can't remove original changesets with"
315 315 " unrebased descendants"),
316 316 hint=_('use --keep to keep original changesets'))
317 317
318 318 obsrevs = _filterobsoleterevs(self.repo, set(rebaseset))
319 319 self._handleskippingobsolete(rebaseset, obsrevs, dest)
320 320
321 321 result = buildstate(self.repo, dest, rebaseset, self.collapsef,
322 322 self.obsoletenotrebased)
323 323
324 324 if not result:
325 325 # Empty state built, nothing to rebase
326 326 self.ui.status(_('nothing to rebase\n'))
327 327 return _nothingtorebase()
328 328
329 329 for root in self.repo.set('roots(%ld)', rebaseset):
330 330 if not self.keepf and not root.mutable():
331 331 raise error.Abort(_("can't rebase public changeset %s")
332 332 % root,
333 333 hint=_("see 'hg help phases' for details"))
334 334
335 335 (self.originalwd, self.dest, self.state) = result
336 336 if self.collapsef:
337 337 self.destancestors = self.repo.changelog.ancestors(
338 338 [self.dest],
339 339 inclusive=True)
340 340 self.external = externalparent(self.repo, self.state,
341 341 self.destancestors)
342 342
343 343 if dest.closesbranch() and not self.keepbranchesf:
344 344 self.ui.status(_('reopening closed branch head %s\n') % dest)
345 345
346 346 def _performrebase(self, tr):
347 347 repo, ui, opts = self.repo, self.ui, self.opts
348 348 if self.keepbranchesf:
349 349 # insert _savebranch at the start of extrafns so if
350 350 # there's a user-provided extrafn it can clobber branch if
351 351 # desired
352 352 self.extrafns.insert(0, _savebranch)
353 353 if self.collapsef:
354 354 branches = set()
355 355 for rev in self.state:
356 356 branches.add(repo[rev].branch())
357 357 if len(branches) > 1:
358 358 raise error.Abort(_('cannot collapse multiple named '
359 359 'branches'))
360 360
361 361 # Rebase
362 362 if not self.destancestors:
363 363 self.destancestors = repo.changelog.ancestors([self.dest],
364 364 inclusive=True)
365 365
366 366 # Keep track of the current bookmarks in order to reset them later
367 367 self.currentbookmarks = repo._bookmarks.copy()
368 368 self.activebookmark = self.activebookmark or repo._activebookmark
369 369 if self.activebookmark:
370 370 bookmarks.deactivate(repo)
371 371
372 372 # Store the state before we begin so users can run 'hg rebase --abort'
373 373 # if we fail before the transaction closes.
374 374 self.storestatus()
375 375
376 376 sortedrevs = repo.revs('sort(%ld, -topo)', self.state)
377 377 cands = [k for k, v in self.state.iteritems() if v == revtodo]
378 378 total = len(cands)
379 379 pos = 0
380 380 for rev in sortedrevs:
381 381 ctx = repo[rev]
382 382 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
383 383 ctx.description().split('\n', 1)[0])
384 384 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
385 385 if names:
386 386 desc += ' (%s)' % ' '.join(names)
387 387 if self.state[rev] == rev:
388 388 ui.status(_('already rebased %s\n') % desc)
389 389 elif self.state[rev] == revtodo:
390 390 pos += 1
391 391 ui.status(_('rebasing %s\n') % desc)
392 392 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
393 393 _('changesets'), total)
394 394 p1, p2, base = defineparents(repo, rev, self.dest,
395 395 self.state,
396 396 self.destancestors,
397 397 self.obsoletenotrebased)
398 398 self.storestatus(tr=tr)
399 399 storecollapsemsg(repo, self.collapsemsg)
400 400 if len(repo[None].parents()) == 2:
401 401 repo.ui.debug('resuming interrupted rebase\n')
402 402 else:
403 403 try:
404 404 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
405 405 'rebase')
406 406 stats = rebasenode(repo, rev, p1, base, self.state,
407 407 self.collapsef, self.dest)
408 408 if stats and stats[3] > 0:
409 409 raise error.InterventionRequired(
410 410 _('unresolved conflicts (see hg '
411 411 'resolve, then hg rebase --continue)'))
412 412 finally:
413 413 ui.setconfig('ui', 'forcemerge', '', 'rebase')
414 414 if not self.collapsef:
415 415 merging = p2 != nullrev
416 416 editform = cmdutil.mergeeditform(merging, 'rebase')
417 417 editor = cmdutil.getcommiteditor(editform=editform, **opts)
418 418 newnode = concludenode(repo, rev, p1, p2,
419 419 extrafn=_makeextrafn(self.extrafns),
420 420 editor=editor,
421 421 keepbranches=self.keepbranchesf,
422 422 date=self.date)
423 423 if newnode is None:
424 424 # If it ended up being a no-op commit, then the normal
425 425 # merge state clean-up path doesn't happen, so do it
426 426 # here. Fix issue5494
427 427 mergemod.mergestate.clean(repo)
428 428 else:
429 429 # Skip commit if we are collapsing
430 430 repo.dirstate.beginparentchange()
431 431 repo.setparents(repo[p1].node())
432 432 repo.dirstate.endparentchange()
433 433 newnode = None
434 434 # Update the state
435 435 if newnode is not None:
436 436 self.state[rev] = repo[newnode].rev()
437 437 ui.debug('rebased as %s\n' % short(newnode))
438 438 else:
439 439 if not self.collapsef:
440 440 ui.warn(_('note: rebase of %d:%s created no changes '
441 441 'to commit\n') % (rev, ctx))
442 442 self.skipped.add(rev)
443 443 self.state[rev] = p1
444 444 ui.debug('next revision set to %s\n' % p1)
445 445 elif self.state[rev] == nullmerge:
446 446 ui.debug('ignoring null merge rebase of %s\n' % rev)
447 447 elif self.state[rev] == revignored:
448 448 ui.status(_('not rebasing ignored %s\n') % desc)
449 449 elif self.state[rev] == revprecursor:
450 450 destctx = repo[self.obsoletenotrebased[rev]]
451 451 descdest = '%d:%s "%s"' % (destctx.rev(), destctx,
452 452 destctx.description().split('\n', 1)[0])
453 453 msg = _('note: not rebasing %s, already in destination as %s\n')
454 454 ui.status(msg % (desc, descdest))
455 455 elif self.state[rev] == revpruned:
456 456 msg = _('note: not rebasing %s, it has no successor\n')
457 457 ui.status(msg % desc)
458 458 else:
459 459 ui.status(_('already rebased %s as %s\n') %
460 460 (desc, repo[self.state[rev]]))
461 461
462 462 ui.progress(_('rebasing'), None)
463 463 ui.note(_('rebase merging completed\n'))
464 464
465 465 def _finishrebase(self):
466 466 repo, ui, opts = self.repo, self.ui, self.opts
467 467 if self.collapsef and not self.keepopen:
468 468 p1, p2, _base = defineparents(repo, min(self.state),
469 469 self.dest, self.state,
470 470 self.destancestors,
471 471 self.obsoletenotrebased)
472 472 editopt = opts.get('edit')
473 473 editform = 'rebase.collapse'
474 474 if self.collapsemsg:
475 475 commitmsg = self.collapsemsg
476 476 else:
477 477 commitmsg = 'Collapsed revision'
478 478 for rebased in self.state:
479 479 if rebased not in self.skipped and\
480 480 self.state[rebased] > nullmerge:
481 481 commitmsg += '\n* %s' % repo[rebased].description()
482 482 editopt = True
483 483 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
484 484 revtoreuse = max(self.state)
485 485 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
486 486 try:
487 487 newnode = concludenode(repo, revtoreuse, p1, self.external,
488 488 commitmsg=commitmsg,
489 489 extrafn=_makeextrafn(self.extrafns),
490 490 editor=editor,
491 491 keepbranches=self.keepbranchesf,
492 492 date=self.date)
493 493 dsguard.close()
494 494 release(dsguard)
495 495 except error.InterventionRequired:
496 496 dsguard.close()
497 497 release(dsguard)
498 498 raise
499 499 except Exception:
500 500 release(dsguard)
501 501 raise
502 502
503 503 if newnode is None:
504 504 newrev = self.dest
505 505 else:
506 506 newrev = repo[newnode].rev()
507 507 for oldrev in self.state.iterkeys():
508 508 if self.state[oldrev] > nullmerge:
509 509 self.state[oldrev] = newrev
510 510
511 511 if 'qtip' in repo.tags():
512 512 updatemq(repo, self.state, self.skipped, **opts)
513 513
514 514 if self.currentbookmarks:
515 515 # Nodeids are needed to reset bookmarks
516 516 nstate = {}
517 517 for k, v in self.state.iteritems():
518 518 if v > nullmerge and v != k:
519 519 nstate[repo[k].node()] = repo[v].node()
520 520 elif v == revprecursor:
521 521 succ = self.obsoletenotrebased[k]
522 522 nstate[repo[k].node()] = repo[succ].node()
523 523 # XXX this is the same as dest.node() for the non-continue path --
524 524 # this should probably be cleaned up
525 525 destnode = repo[self.dest].node()
526 526
527 527 # restore original working directory
528 528 # (we do this before stripping)
529 529 newwd = self.state.get(self.originalwd, self.originalwd)
530 530 if newwd == revprecursor:
531 531 newwd = self.obsoletenotrebased[self.originalwd]
532 532 elif newwd < 0:
533 533 # original directory is a parent of rebase set root or ignored
534 534 newwd = self.originalwd
535 535 if newwd not in [c.rev() for c in repo[None].parents()]:
536 536 ui.note(_("update back to initial working directory parent\n"))
537 537 hg.updaterepo(repo, newwd, False)
538 538
539 539 if self.currentbookmarks:
540 540 with repo.transaction('bookmark') as tr:
541 541 updatebookmarks(repo, destnode, nstate,
542 542 self.currentbookmarks, tr)
543 543 if self.activebookmark not in repo._bookmarks:
544 544 # active bookmark was a divergent one and has been deleted
545 545 self.activebookmark = None
546 546
547 547 if not self.keepf:
548 548 collapsedas = None
549 549 if self.collapsef:
550 550 collapsedas = newnode
551 551 clearrebased(ui, repo, self.state, self.skipped, collapsedas)
552 552
553 553 clearstatus(repo)
554 554 clearcollapsemsg(repo)
555 555
556 556 ui.note(_("rebase completed\n"))
557 557 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
558 558 if self.skipped:
559 559 skippedlen = len(self.skipped)
560 560 ui.note(_("%d revisions have been skipped\n") % skippedlen)
561 561
562 562 if (self.activebookmark and
563 563 repo['.'].node() == repo._bookmarks[self.activebookmark]):
564 564 bookmarks.activate(repo, self.activebookmark)
565 565
566 566 @command('rebase',
567 567 [('s', 'source', '',
568 568 _('rebase the specified changeset and descendants'), _('REV')),
569 569 ('b', 'base', '',
570 570 _('rebase everything from branching point of specified changeset'),
571 571 _('REV')),
572 572 ('r', 'rev', [],
573 573 _('rebase these revisions'),
574 574 _('REV')),
575 575 ('d', 'dest', '',
576 576 _('rebase onto the specified changeset'), _('REV')),
577 577 ('', 'collapse', False, _('collapse the rebased changesets')),
578 578 ('m', 'message', '',
579 579 _('use text as collapse commit message'), _('TEXT')),
580 580 ('e', 'edit', False, _('invoke editor on commit messages')),
581 581 ('l', 'logfile', '',
582 582 _('read collapse commit message from file'), _('FILE')),
583 583 ('k', 'keep', False, _('keep original changesets')),
584 584 ('', 'keepbranches', False, _('keep original branch names')),
585 585 ('D', 'detach', False, _('(DEPRECATED)')),
586 586 ('i', 'interactive', False, _('(DEPRECATED)')),
587 587 ('t', 'tool', '', _('specify merge tool')),
588 588 ('c', 'continue', False, _('continue an interrupted rebase')),
589 589 ('a', 'abort', False, _('abort an interrupted rebase'))] +
590 590 templateopts,
591 591 _('[-s REV | -b REV] [-d REV] [OPTION]'))
592 592 def rebase(ui, repo, **opts):
593 593 """move changeset (and descendants) to a different branch
594 594
595 595 Rebase uses repeated merging to graft changesets from one part of
596 596 history (the source) onto another (the destination). This can be
597 597 useful for linearizing *local* changes relative to a master
598 598 development tree.
599 599
600 600 Published commits cannot be rebased (see :hg:`help phases`).
601 601 To copy commits, see :hg:`help graft`.
602 602
603 603 If you don't specify a destination changeset (``-d/--dest``), rebase
604 604 will use the same logic as :hg:`merge` to pick a destination. If
605 605 the current branch contains exactly one other head, the other head
606 606 is merged with by default. Otherwise, an explicit revision with
607 607 which to merge must be provided. (The destination changeset is not
608 608 modified by rebasing, but new changesets are added as its
609 609 descendants.)
610 610
611 611 Here are the ways to select changesets:
612 612
613 613 1. Explicitly select them using ``--rev``.
614 614
615 615 2. Use ``--source`` to select a root changeset and include all of its
616 616 descendants.
617 617
618 618 3. Use ``--base`` to select a changeset; rebase will find ancestors
619 619 and their descendants which are not also ancestors of the destination.
620 620
621 621 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
622 622 rebase will use ``--base .`` as above.
623 623
624 624 Rebase will destroy original changesets unless you use ``--keep``.
625 625 It will also move your bookmarks (even if you do).
626 626
627 627 Some changesets may be dropped if they do not contribute changes
628 628 (e.g. merges from the destination branch).
629 629
630 630 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
631 631 a named branch with two heads. You will need to explicitly specify source
632 632 and/or destination.
633 633
634 634 If you need to use a tool to automate merge/conflict decisions, you
635 635 can specify one with ``--tool``, see :hg:`help merge-tools`.
636 636 As a caveat: the tool will not be used to mediate when a file was
637 637 deleted; there is no hook presently available for this.
638 638
639 639 If a rebase is interrupted to manually resolve a conflict, it can be
640 640 continued with --continue/-c or aborted with --abort/-a.
641 641
642 642 .. container:: verbose
643 643
644 644 Examples:
645 645
646 646 - move "local changes" (current commit back to branching point)
647 647 to the current branch tip after a pull::
648 648
649 649 hg rebase
650 650
651 651 - move a single changeset to the stable branch::
652 652
653 653 hg rebase -r 5f493448 -d stable
654 654
655 655 - splice a commit and all its descendants onto another part of history::
656 656
657 657 hg rebase --source c0c3 --dest 4cf9
658 658
659 659 - rebase everything on a branch marked by a bookmark onto the
660 660 default branch::
661 661
662 662 hg rebase --base myfeature --dest default
663 663
664 664 - collapse a sequence of changes into a single commit::
665 665
666 666 hg rebase --collapse -r 1520:1525 -d .
667 667
668 668 - move a named branch while preserving its name::
669 669
670 670 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
671 671
672 672 Configuration Options:
673 673
674 674 You can make rebase require a destination if you set the following config
675 675 option::
676 676
677 677 [commands]
678 678 rebase.requiredest = True
679 679
680 680 Return Values:
681 681
682 682 Returns 0 on success, 1 if nothing to rebase or there are
683 683 unresolved conflicts.
684 684
685 685 """
686 686 rbsrt = rebaseruntime(repo, ui, opts)
687 687
688 688 lock = wlock = None
689 689 try:
690 690 wlock = repo.wlock()
691 691 lock = repo.lock()
692 692
693 693 # Validate input and define rebasing points
694 694 destf = opts.get('dest', None)
695 695 srcf = opts.get('source', None)
696 696 basef = opts.get('base', None)
697 697 revf = opts.get('rev', [])
698 698 # search default destination in this space
699 699 # used in the 'hg pull --rebase' case, see issue 5214.
700 700 destspace = opts.get('_destspace')
701 701 contf = opts.get('continue')
702 702 abortf = opts.get('abort')
703 703 if opts.get('interactive'):
704 704 try:
705 705 if extensions.find('histedit'):
706 706 enablehistedit = ''
707 707 except KeyError:
708 708 enablehistedit = " --config extensions.histedit="
709 709 help = "hg%s help -e histedit" % enablehistedit
710 710 msg = _("interactive history editing is supported by the "
711 711 "'histedit' extension (see \"%s\")") % help
712 712 raise error.Abort(msg)
713 713
714 714 if rbsrt.collapsemsg and not rbsrt.collapsef:
715 715 raise error.Abort(
716 716 _('message can only be specified with collapse'))
717 717
718 718 if contf or abortf:
719 719 if contf and abortf:
720 720 raise error.Abort(_('cannot use both abort and continue'))
721 721 if rbsrt.collapsef:
722 722 raise error.Abort(
723 723 _('cannot use collapse with continue or abort'))
724 724 if srcf or basef or destf:
725 725 raise error.Abort(
726 726 _('abort and continue do not allow specifying revisions'))
727 727 if abortf and opts.get('tool', False):
728 728 ui.warn(_('tool option will be ignored\n'))
729 729 if contf:
730 730 ms = mergemod.mergestate.read(repo)
731 731 mergeutil.checkunresolved(ms)
732 732
733 733 retcode = rbsrt._prepareabortorcontinue(abortf)
734 734 if retcode is not None:
735 735 return retcode
736 736 else:
737 737 dest, rebaseset = _definesets(ui, repo, destf, srcf, basef, revf,
738 738 destspace=destspace)
739 739 retcode = rbsrt._preparenewrebase(dest, rebaseset)
740 740 if retcode is not None:
741 741 return retcode
742 742
743 743 with repo.transaction('rebase') as tr:
744 744 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
745 745 try:
746 746 rbsrt._performrebase(tr)
747 747 dsguard.close()
748 748 release(dsguard)
749 749 except error.InterventionRequired:
750 750 dsguard.close()
751 751 release(dsguard)
752 752 tr.close()
753 753 raise
754 754 except Exception:
755 755 release(dsguard)
756 756 raise
757 757 rbsrt._finishrebase()
758 758 finally:
759 759 release(lock, wlock)
760 760
761 761 def _definesets(ui, repo, destf=None, srcf=None, basef=None, revf=None,
762 762 destspace=None):
763 763 """use revisions argument to define destination and rebase set
764 764 """
765 765 if revf is None:
766 766 revf = []
767 767
768 768 # destspace is here to work around issues with `hg pull --rebase`, see
769 769 # issue5214 for details
770 770 if srcf and basef:
771 771 raise error.Abort(_('cannot specify both a source and a base'))
772 772 if revf and basef:
773 773 raise error.Abort(_('cannot specify both a revision and a base'))
774 774 if revf and srcf:
775 775 raise error.Abort(_('cannot specify both a revision and a source'))
776 776
777 777 cmdutil.checkunfinished(repo)
778 778 cmdutil.bailifchanged(repo)
779 779
780 780 if ui.configbool('commands', 'rebase.requiredest') and not destf:
781 781 raise error.Abort(_('you must specify a destination'),
782 782 hint=_('use: hg rebase -d REV'))
783 783
784 784 if destf:
785 785 dest = scmutil.revsingle(repo, destf)
786 786
787 787 if revf:
788 788 rebaseset = scmutil.revrange(repo, revf)
789 789 if not rebaseset:
790 790 ui.status(_('empty "rev" revision set - nothing to rebase\n'))
791 791 return None, None
792 792 elif srcf:
793 793 src = scmutil.revrange(repo, [srcf])
794 794 if not src:
795 795 ui.status(_('empty "source" revision set - nothing to rebase\n'))
796 796 return None, None
797 797 rebaseset = repo.revs('(%ld)::', src)
798 798 assert rebaseset
799 799 else:
800 800 base = scmutil.revrange(repo, [basef or '.'])
801 801 if not base:
802 802 ui.status(_('empty "base" revision set - '
803 803 "can't compute rebase set\n"))
804 804 return None, None
805 805 if not destf:
806 806 dest = repo[_destrebase(repo, base, destspace=destspace)]
807 807 destf = str(dest)
808 808
809 809 roots = [] # selected children of branching points
810 810 bpbase = {} # {branchingpoint: [origbase]}
811 811 for b in base: # group bases by branching points
812 812 bp = repo.revs('ancestor(%d, %d)', b, dest).first()
813 813 bpbase[bp] = bpbase.get(bp, []) + [b]
814 814 if None in bpbase:
815 815 # emulate the old behavior, showing "nothing to rebase" (a better
816 816 # behavior may be abort with "cannot find branching point" error)
817 817 bpbase.clear()
818 818 for bp, bs in bpbase.iteritems(): # calculate roots
819 819 roots += list(repo.revs('children(%d) & ancestors(%ld)', bp, bs))
820 820
821 821 rebaseset = repo.revs('%ld::', roots)
822 822
823 823 if not rebaseset:
824 824 # transform to list because smartsets are not comparable to
825 825 # lists. This should be improved to honor laziness of
826 826 # smartset.
827 827 if list(base) == [dest.rev()]:
828 828 if basef:
829 829 ui.status(_('nothing to rebase - %s is both "base"'
830 830 ' and destination\n') % dest)
831 831 else:
832 832 ui.status(_('nothing to rebase - working directory '
833 833 'parent is also destination\n'))
834 834 elif not repo.revs('%ld - ::%d', base, dest):
835 835 if basef:
836 836 ui.status(_('nothing to rebase - "base" %s is '
837 837 'already an ancestor of destination '
838 838 '%s\n') %
839 839 ('+'.join(str(repo[r]) for r in base),
840 840 dest))
841 841 else:
842 842 ui.status(_('nothing to rebase - working '
843 843 'directory parent is already an '
844 844 'ancestor of destination %s\n') % dest)
845 845 else: # can it happen?
846 846 ui.status(_('nothing to rebase from %s to %s\n') %
847 847 ('+'.join(str(repo[r]) for r in base), dest))
848 848 return None, None
849 849
850 850 if not destf:
851 851 dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
852 852 destf = str(dest)
853 853
854 854 return dest, rebaseset
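A hypothetical mapping from command-line invocations (taken from the help examples above, revisions made up) to the arguments this helper receives; only one of revf, srcf and basef may be set, as enforced earlier in the function.

    # hg rebase -r 5f493448 -d stable     -> destf='stable',  revf=['5f493448']
    # hg rebase -s c0c3 -d 4cf9           -> destf='4cf9',    srcf='c0c3'
    # hg rebase -b myfeature -d default   -> destf='default', basef='myfeature'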
855 855
856 856 def externalparent(repo, state, destancestors):
857 857 """Return the revision that should be used as the second parent
858 858 when the revisions in state are collapsed on top of destancestors.
859 859 Abort if there is more than one parent.
860 860 """
861 861 parents = set()
862 862 source = min(state)
863 863 for rev in state:
864 864 if rev == source:
865 865 continue
866 866 for p in repo[rev].parents():
867 867 if (p.rev() not in state
868 868 and p.rev() not in destancestors):
869 869 parents.add(p.rev())
870 870 if not parents:
871 871 return nullrev
872 872 if len(parents) == 1:
873 873 return parents.pop()
874 874 raise error.Abort(_('unable to collapse on top of %s, there is more '
875 875 'than one external parent: %s') %
876 876 (max(destancestors),
877 877 ', '.join(str(p) for p in sorted(parents))))
878 878
879 879 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
880 880 keepbranches=False, date=None):
881 881 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
882 882 but also store useful information in extra.
883 883 Return node of committed revision.'''
884 884 repo.setparents(repo[p1].node(), repo[p2].node())
885 885 ctx = repo[rev]
886 886 if commitmsg is None:
887 887 commitmsg = ctx.description()
888 888 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
889 889 extra = {'rebase_source': ctx.hex()}
890 890 if extrafn:
891 891 extrafn(ctx, extra)
892 892
893 893 destphase = max(ctx.phase(), phases.draft)
894 894 overrides = {('phases', 'new-commit'): destphase}
895 895 with repo.ui.configoverride(overrides, 'rebase'):
896 896 if keepbranch:
897 897 repo.ui.setconfig('ui', 'allowemptycommit', True)
898 898 # Commit might fail if unresolved files exist
899 899 if date is None:
900 900 date = ctx.date()
901 901 newnode = repo.commit(text=commitmsg, user=ctx.user(),
902 902 date=date, extra=extra, editor=editor)
903 903
904 904 repo.dirstate.setbranch(repo[newnode].branch())
905 905 return newnode
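Illustrative only (hypothetical revision numbers): committing the rebased working directory for rev 11 on top of p1=20 reuses rev 11's description, user and date, and records the origin in extra.

    # newnode = concludenode(repo, 11, 20, nullrev)
    # -> extra['rebase_source'] == repo[11].hex()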
906 906
907 907 def rebasenode(repo, rev, p1, base, state, collapse, dest):
908 908 'Rebase a single revision rev on top of p1 using base as merge ancestor'
909 909 # Merge phase
910 910 # Update to destination and merge it with local
911 911 if repo['.'].rev() != p1:
912 912 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
913 913 mergemod.update(repo, p1, False, True)
914 914 else:
915 915 repo.ui.debug(" already in destination\n")
916 916 repo.dirstate.write(repo.currenttransaction())
917 917 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
918 918 if base is not None:
919 919 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
920 920 # When collapsing in-place, the parent is the common ancestor; we
921 921 # have to allow merging with it.
922 922 stats = mergemod.update(repo, rev, True, True, base, collapse,
923 923 labels=['dest', 'source'])
924 924 if collapse:
925 925 copies.duplicatecopies(repo, rev, dest)
926 926 else:
927 927 # If we're not using --collapse, we need to
928 928 # duplicate copies between the revision we're
929 929 # rebasing and its first parent, but *not*
930 930 # duplicate any copies that have already been
931 931 # performed in the destination.
932 932 p1rev = repo[rev].p1().rev()
933 933 copies.duplicatecopies(repo, rev, p1rev, skiprev=dest)
934 934 return stats
935 935
936 936 def nearestrebased(repo, rev, state):
937 937 """return the nearest ancestors of rev in the rebase result"""
938 938 rebased = [r for r in state if state[r] > nullmerge]
939 939 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
940 940 if candidates:
941 941 return state[candidates.first()]
942 942 else:
943 943 return None
944 944
945 945 def _checkobsrebase(repo, ui, rebaseobsrevs, rebasesetrevs, rebaseobsskipped):
946 946 """
947 947 Abort if rebase will create divergence or rebase is noop because of markers
948 948
949 949 `rebaseobsrevs`: set of obsolete revision in source
950 950 `rebasesetrevs`: set of revisions to be rebased from source
951 951 `rebaseobsskipped`: set of revisions from source skipped because they have
952 952 successors in destination
953 953 """
954 954 # Obsolete node with successors not in dest leads to divergence
955 955 divergenceok = ui.configbool('experimental',
956 956 'allowdivergence')
957 957 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
958 958
959 959 if divergencebasecandidates and not divergenceok:
960 960 divhashes = (str(repo[r])
961 961 for r in divergencebasecandidates)
962 962 msg = _("this rebase will cause "
963 963 "divergences from: %s")
964 964 h = _("to force the rebase please set "
965 965 "experimental.allowdivergence=True")
966 966 raise error.Abort(msg % (",".join(divhashes),), hint=h)
967 967
968 968 def defineparents(repo, rev, dest, state, destancestors,
969 969 obsoletenotrebased):
970 970 'Return the new parent relationship of the revision that will be rebased'
971 971 parents = repo[rev].parents()
972 972 p1 = p2 = nullrev
973 973 rp1 = None
974 974
975 975 p1n = parents[0].rev()
976 976 if p1n in destancestors:
977 977 p1 = dest
978 978 elif p1n in state:
979 979 if state[p1n] == nullmerge:
980 980 p1 = dest
981 981 elif state[p1n] in revskipped:
982 982 p1 = nearestrebased(repo, p1n, state)
983 983 if p1 is None:
984 984 p1 = dest
985 985 else:
986 986 p1 = state[p1n]
987 987 else: # p1n external
988 988 p1 = dest
989 989 p2 = p1n
990 990
991 991 if len(parents) == 2 and parents[1].rev() not in destancestors:
992 992 p2n = parents[1].rev()
993 993 # interesting second parent
994 994 if p2n in state:
995 995 if p1 == dest: # p1n in destancestors or external
996 996 p1 = state[p2n]
997 997 if p1 == revprecursor:
998 998 rp1 = obsoletenotrebased[p2n]
999 999 elif state[p2n] in revskipped:
1000 1000 p2 = nearestrebased(repo, p2n, state)
1001 1001 if p2 is None:
1002 1002 # no ancestors rebased yet, detach
1003 1003 p2 = dest
1004 1004 else:
1005 1005 p2 = state[p2n]
1006 1006 else: # p2n external
1007 1007 if p2 != nullrev: # p1n external too => rev is a merged revision
1008 1008 raise error.Abort(_('cannot use revision %d as base, result '
1009 1009 'would have 3 parents') % rev)
1010 1010 p2 = p2n
1011 1011 repo.ui.debug(" future parents are %d and %d\n" %
1012 1012 (repo[rp1 or p1].rev(), repo[p2].rev()))
1013 1013
1014 1014 if not any(p.rev() in state for p in parents):
1015 1015 # Case (1) root changeset of a non-detaching rebase set.
1016 1016 # Let the merge mechanism find the base itself.
1017 1017 base = None
1018 1018 elif not repo[rev].p2():
1019 1019 # Case (2) detaching the node with a single parent, use this parent
1020 1020 base = repo[rev].p1().rev()
1021 1021 else:
1022 1022 # Assuming there is a p1, this is the case where there also is a p2.
1023 1023 # We are thus rebasing a merge and need to pick the right merge base.
1024 1024 #
1025 1025 # Imagine we have:
1026 1026 # - M: current rebase revision in this step
1027 1027 # - A: one parent of M
1028 1028 # - B: other parent of M
1029 1029 # - D: destination of this merge step (p1 var)
1030 1030 #
1031 1031 # Consider the case where D is a descendant of A or B and the other is
1032 1032 # 'outside'. In this case, the right merge base is the D ancestor.
1033 1033 #
1034 1034 # An informal proof, assuming A is 'outside' and B is the D ancestor:
1035 1035 #
1036 1036 # If we pick B as the base, the merge involves:
1037 1037 # - changes from B to M (actual changeset payload)
1038 1038 # - changes from B to D (induced by rebase, as D is a rebased
1039 1039 # version of B)
1040 1040 # Which exactly represents the rebase operation.
1041 1041 #
1042 1042 # If we pick A as the base, the merge involves:
1043 1043 # - changes from A to M (actual changeset payload)
1044 1044 # - changes from A to D (which include changes between unrelated A and B
1045 1045 # plus changes induced by rebase)
1046 1046 # Which does not represent anything sensible and creates a lot of
1047 1047 # conflicts. A is thus not the right choice - B is.
1048 1048 #
1049 1049 # Note: The base found in this 'proof' is only correct in the specified
1050 1050 # case. This base does not make sense if D is not a descendant of A or B
1051 1051 # or if the other parent is not 'outside' (especially not if the other
1052 1052 # parent has been rebased). The current implementation does not
1053 1053 # make it feasible to consider different cases separately. In these
1054 1054 # other cases we currently just leave it to the user to correctly
1055 1055 # resolve an impossible merge using a wrong ancestor.
1056 1056 #
1057 1057 # xx, p1 could be -4, and both parents could probably be -4...
1058 1058 for p in repo[rev].parents():
1059 1059 if state.get(p.rev()) == p1:
1060 1060 base = p.rev()
1061 1061 break
1062 1062 else: # fallback when base not found
1063 1063 base = None
1064 1064
1065 1065 # Raise because this function is called wrong (see issue 4106)
1066 1066 raise AssertionError('no base found to rebase on '
1067 1067 '(defineparents called wrong)')
1068 1068 return rp1 or p1, p2, base
1069 1069
1070 1070 def isagitpatch(repo, patchname):
1071 1071 'Return true if the given patch is in git format'
1072 1072 mqpatch = os.path.join(repo.mq.path, patchname)
1073 1073 for line in patch.linereader(file(mqpatch, 'rb')):
1074 1074 if line.startswith('diff --git'):
1075 1075 return True
1076 1076 return False
1077 1077
1078 1078 def updatemq(repo, state, skipped, **opts):
1079 1079 'Update rebased mq patches - finalize and then import them'
1080 1080 mqrebase = {}
1081 1081 mq = repo.mq
1082 1082 original_series = mq.fullseries[:]
1083 1083 skippedpatches = set()
1084 1084
1085 1085 for p in mq.applied:
1086 1086 rev = repo[p.node].rev()
1087 1087 if rev in state:
1088 1088 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
1089 1089 (rev, p.name))
1090 1090 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
1091 1091 else:
1092 1092 # Applied but not rebased, not sure this should happen
1093 1093 skippedpatches.add(p.name)
1094 1094
1095 1095 if mqrebase:
1096 1096 mq.finish(repo, mqrebase.keys())
1097 1097
1098 1098 # We must start import from the newest revision
1099 1099 for rev in sorted(mqrebase, reverse=True):
1100 1100 if rev not in skipped:
1101 1101 name, isgit = mqrebase[rev]
1102 1102 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
1103 1103 (name, state[rev], repo[state[rev]]))
1104 1104 mq.qimport(repo, (), patchname=name, git=isgit,
1105 1105 rev=[str(state[rev])])
1106 1106 else:
1107 1107 # Rebased and skipped
1108 1108 skippedpatches.add(mqrebase[rev][0])
1109 1109
1110 1110 # Patches were either applied and rebased and imported in
1111 1111 # order, applied and removed or unapplied. Discard the removed
1112 1112 # ones while preserving the original series order and guards.
1113 1113 newseries = [s for s in original_series
1114 1114 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
1115 1115 mq.fullseries[:] = newseries
1116 1116 mq.seriesdirty = True
1117 1117 mq.savedirty()
1118 1118
1119 1119 def updatebookmarks(repo, destnode, nstate, originalbookmarks, tr):
1120 1120 'Move bookmarks to their correct changesets, and delete divergent ones'
1121 1121 marks = repo._bookmarks
1122 1122 for k, v in originalbookmarks.iteritems():
1123 1123 if v in nstate:
1124 1124 # update the bookmarks for revs that have moved
1125 1125 marks[k] = nstate[v]
1126 1126 bookmarks.deletedivergent(repo, [destnode], k)
1127 1127 marks.recordchange(tr)
1128 1128
1129 1129 def storecollapsemsg(repo, collapsemsg):
1130 1130 'Store the collapse message to allow recovery'
1131 1131 collapsemsg = collapsemsg or ''
1132 1132 f = repo.vfs("last-message.txt", "w")
1133 1133 f.write("%s\n" % collapsemsg)
1134 1134 f.close()
1135 1135
1136 1136 def clearcollapsemsg(repo):
1137 1137 'Remove collapse message file'
1138 1138 repo.vfs.unlinkpath("last-message.txt", ignoremissing=True)
1139 1139
1140 1140 def restorecollapsemsg(repo, isabort):
1141 1141 'Restore previously stored collapse message'
1142 1142 try:
1143 1143 f = repo.vfs("last-message.txt")
1144 1144 collapsemsg = f.readline().strip()
1145 1145 f.close()
1146 1146 except IOError as err:
1147 1147 if err.errno != errno.ENOENT:
1148 1148 raise
1149 1149 if isabort:
1150 1150 # Oh well, just abort like normal
1151 1151 collapsemsg = ''
1152 1152 else:
1153 1153 raise error.Abort(_('missing .hg/last-message.txt for rebase'))
1154 1154 return collapsemsg
1155 1155
1156 1156 def clearstatus(repo):
1157 1157 'Remove the status files'
1158 1158 _clearrebasesetvisibiliy(repo)
1159 1159 repo.vfs.unlinkpath("rebasestate", ignoremissing=True)
1160 1160
1161 1161 def needupdate(repo, state):
1162 1162 '''check whether we should `update --clean` away from a merge, or if
1163 1163 somehow the working dir got forcibly updated, e.g. by older hg'''
1164 1164 parents = [p.rev() for p in repo[None].parents()]
1165 1165
1166 1166 # Are we in a merge state at all?
1167 1167 if len(parents) < 2:
1168 1168 return False
1169 1169
1170 1170 # We should be standing on the first as-of-yet unrebased commit.
1171 1171 firstunrebased = min([old for old, new in state.iteritems()
1172 1172 if new == nullrev])
1173 1173 if firstunrebased in parents:
1174 1174 return True
1175 1175
1176 1176 return False
1177 1177
1178 1178 def abort(repo, originalwd, dest, state, activebookmark=None):
1179 1179 '''Restore the repository to its original state. Additional args:
1180 1180
1181 1181 activebookmark: the name of the bookmark that should be active after the
1182 1182 restore'''
1183 1183
1184 1184 try:
1185 1185 # If the first commits in the rebased set get skipped during the rebase,
1186 1186 # their values within the state mapping will be the dest rev id. The
1187 1187 # dstates list must not contain the dest rev (issue4896)
1188 1188 dstates = [s for s in state.values() if s >= 0 and s != dest]
1189 1189 immutable = [d for d in dstates if not repo[d].mutable()]
1190 1190 cleanup = True
1191 1191 if immutable:
1192 1192 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1193 1193 % ', '.join(str(repo[r]) for r in immutable),
1194 1194 hint=_("see 'hg help phases' for details"))
1195 1195 cleanup = False
1196 1196
1197 1197 descendants = set()
1198 1198 if dstates:
1199 1199 descendants = set(repo.changelog.descendants(dstates))
1200 1200 if descendants - set(dstates):
1201 1201 repo.ui.warn(_("warning: new changesets detected on destination "
1202 1202 "branch, can't strip\n"))
1203 1203 cleanup = False
1204 1204
1205 1205 if cleanup:
1206 1206 shouldupdate = False
1207 1207 rebased = filter(lambda x: x >= 0 and x != dest, state.values())
1208 1208 if rebased:
1209 1209 strippoints = [
1210 1210 c.node() for c in repo.set('roots(%ld)', rebased)]
1211 1211
1212 1212 updateifonnodes = set(rebased)
1213 1213 updateifonnodes.add(dest)
1214 1214 updateifonnodes.add(originalwd)
1215 1215 shouldupdate = repo['.'].rev() in updateifonnodes
1216 1216
1217 1217 # Update away from the rebase if necessary
1218 1218 if shouldupdate or needupdate(repo, state):
1219 1219 mergemod.update(repo, originalwd, False, True)
1220 1220
1221 1221 # Strip from the first rebased revision
1222 1222 if rebased:
1223 1223 # no backup of rebased cset versions needed
1224 1224 repair.strip(repo.ui, repo, strippoints)
1225 1225
1226 1226 if activebookmark and activebookmark in repo._bookmarks:
1227 1227 bookmarks.activate(repo, activebookmark)
1228 1228
1229 1229 finally:
1230 1230 clearstatus(repo)
1231 1231 clearcollapsemsg(repo)
1232 1232 repo.ui.warn(_('rebase aborted\n'))
1233 1233 return 0
1234 1234
1235 1235 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
1236 1236 '''Define which revisions are going to be rebased and where
1237 1237
1238 1238 repo: repo
1239 1239 dest: context
1240 1240 rebaseset: set of rev
1241 1241 '''
1242 1242 originalwd = repo['.'].rev()
1243 1243 _setrebasesetvisibility(repo, set(rebaseset) | {originalwd})
1244 1244
1245 1245 # This check isn't strictly necessary, since mq detects commits over an
1246 1246 # applied patch. But it prevents messing up the working directory when
1247 1247 # a partially completed rebase is blocked by mq.
1248 1248 if 'qtip' in repo.tags() and (dest.node() in
1249 1249 [s.node for s in repo.mq.applied]):
1250 1250 raise error.Abort(_('cannot rebase onto an applied mq patch'))
1251 1251
1252 1252 roots = list(repo.set('roots(%ld)', rebaseset))
1253 1253 if not roots:
1254 1254 raise error.Abort(_('no matching revisions'))
1255 1255 roots.sort()
1256 1256 state = dict.fromkeys(rebaseset, revtodo)
1257 1257 detachset = set()
1258 1258 emptyrebase = True
1259 1259 for root in roots:
1260 1260 commonbase = root.ancestor(dest)
1261 1261 if commonbase == root:
1262 1262 raise error.Abort(_('source is ancestor of destination'))
1263 1263 if commonbase == dest:
1264 1264 wctx = repo[None]
1265 1265 if dest == wctx.p1():
1266 1266 # when rebasing to '.', it will use the current wd branch name
1267 1267 samebranch = root.branch() == wctx.branch()
1268 1268 else:
1269 1269 samebranch = root.branch() == dest.branch()
1270 1270 if not collapse and samebranch and root in dest.children():
1271 1271 # mark the revision as done by setting its new revision
1272 1272 # equal to its old (current) revision
1273 1273 state[root.rev()] = root.rev()
1274 1274 repo.ui.debug('source is a child of destination\n')
1275 1275 continue
1276 1276
1277 1277 emptyrebase = False
1278 1278 repo.ui.debug('rebase onto %s starting from %s\n' % (dest, root))
1279 1279 # Rebase tries to turn <dest> into a parent of <root> while
1280 1280 # preserving the number of parents of rebased changesets:
1281 1281 #
1282 1282 # - A changeset with a single parent will always be rebased as a
1283 1283 # changeset with a single parent.
1284 1284 #
1285 1285 # - A merge will be rebased as merge unless its parents are both
1286 1286 # ancestors of <dest> or are themselves in the rebased set and
1287 1287 # pruned while rebased.
1288 1288 #
1289 1289 # If one parent of <root> is an ancestor of <dest>, the rebased
1290 1290 # version of this parent will be <dest>. This is always true with
1291 1291 # --base option.
1292 1292 #
1293 1293 # Otherwise, we need to *replace* the original parents with
1294 1294 # <dest>. This "detaches" the rebased set from its former location
1295 1295 # and rebases it onto <dest>. Changes introduced by ancestors of
1296 1296 # <root> not common with <dest> (the detachset, marked as
1297 1297 # nullmerge) are "removed" from the rebased changesets.
1298 1298 #
1299 1299 # - If <root> has a single parent, set it to <dest>.
1300 1300 #
1301 1301 # - If <root> is a merge, we cannot decide which parent to
1302 1302 # replace, the rebase operation is not clearly defined.
1303 1303 #
1304 1304 # The table below sums up this behavior:
1305 1305 #
1306 1306 # +------------------+----------------------+-------------------------+
1307 1307 # | | one parent | merge |
1308 1308 # +------------------+----------------------+-------------------------+
1309 1309 # | parent in | new parent is <dest> | parents in ::<dest> are |
1310 1310 # | ::<dest> | | remapped to <dest> |
1311 1311 # +------------------+----------------------+-------------------------+
1312 1312 # | unrelated source | new parent is <dest> | ambiguous, abort |
1313 1313 # +------------------+----------------------+-------------------------+
1314 1314 #
1315 1315 # The actual abort is handled by `defineparents`
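        # Editorial illustration (not part of the original source): for a
        # single-parent <root> whose parent is not an ancestor of <dest>
        # (the "unrelated source" row above), the rebased copy gets <dest> as
        # its only parent, and the changes of <root>'s exclusive ancestors are
        # collected into the detachset (marked nullmerge) just below.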
1316 1316 if len(root.parents()) <= 1:
1317 1317 # ancestors of <root> not ancestors of <dest>
1318 1318 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1319 1319 [root.rev()]))
1320 1320 if emptyrebase:
1321 1321 return None
1322 1322 for rev in sorted(state):
1323 1323 parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
1324 1324 # if all parents of this revision are done, then so is this revision
1325 1325 if parents and all((state.get(p) == p for p in parents)):
1326 1326 state[rev] = rev
1327 1327 for r in detachset:
1328 1328 if r not in state:
1329 1329 state[r] = nullmerge
1330 1330 if len(roots) > 1:
1331 1331         # If we have multiple roots, we may have "holes" in the rebase set.
1332 1332         # Rebase roots that descend from those "holes" should not be detached as
1333 1333         # other roots are. We use the special `revignored` to inform rebase that
1334 1334         # the revision should be ignored but that `defineparents` should search
1335 1335         # for a rebase destination that makes sense regarding the rebased topology.
1336 1336 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1337 1337 for ignored in set(rebasedomain) - set(rebaseset):
1338 1338 state[ignored] = revignored
1339 1339 for r in obsoletenotrebased:
1340 1340 if obsoletenotrebased[r] is None:
1341 1341 state[r] = revpruned
1342 1342 else:
1343 1343 state[r] = revprecursor
1344 1344 return originalwd, dest.rev(), state
1345 1345
1346 1346 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1347 1347 """dispose of rebased revision at the end of the rebase
1348 1348
1349 1349     If `collapsedas` is not None, the rebase was a collapse whose result is the
1350 1350 `collapsedas` node."""
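    # Editorial sketch of the loop below (not in the original source): given
    # state = {3: 10, 4: 4, 5: nullmerge}, only rev 3 was actually rewritten,
    # so a single marker (repo[3], (repo[10],)) is recorded when obsmarkers
    # are enabled; without them the rebased roots are stripped instead.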
1351 1351 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1352 1352 markers = []
1353 1353 for rev, newrev in sorted(state.items()):
1354 1354 if newrev >= 0 and newrev != rev:
1355 1355 if rev in skipped:
1356 1356 succs = ()
1357 1357 elif collapsedas is not None:
1358 1358 succs = (repo[collapsedas],)
1359 1359 else:
1360 1360 succs = (repo[newrev],)
1361 1361 markers.append((repo[rev], succs))
1362 1362 if markers:
1363 obsolete.createmarkers(repo, markers)
1363 obsolete.createmarkers(repo, markers, operation='rebase')
1364 1364 else:
1365 1365 rebased = [rev for rev in state
1366 1366 if state[rev] > nullmerge and state[rev] != rev]
1367 1367 if rebased:
1368 1368 stripped = []
1369 1369 for root in repo.set('roots(%ld)', rebased):
1370 1370 if set(repo.changelog.descendants([root.rev()])) - set(state):
1371 1371 ui.warn(_("warning: new changesets detected "
1372 1372 "on source branch, not stripping\n"))
1373 1373 else:
1374 1374 stripped.append(root.node())
1375 1375 if stripped:
1376 1376 # backup the old csets by default
1377 1377 repair.strip(ui, repo, stripped, "all")
1378 1378
1379 1379
1380 1380 def pullrebase(orig, ui, repo, *args, **opts):
1381 1381 'Call rebase after pull if the latter has been invoked with --rebase'
1382 1382 ret = None
1383 1383 if opts.get('rebase'):
1384 1384 if ui.configbool('commands', 'rebase.requiredest'):
1385 1385 msg = _('rebase destination required by configuration')
1386 1386 hint = _('use hg pull followed by hg rebase -d DEST')
1387 1387 raise error.Abort(msg, hint=hint)
1388 1388
1389 1389 wlock = lock = None
1390 1390 try:
1391 1391 wlock = repo.wlock()
1392 1392 lock = repo.lock()
1393 1393 if opts.get('update'):
1394 1394 del opts['update']
1395 1395 ui.debug('--update and --rebase are not compatible, ignoring '
1396 1396 'the update flag\n')
1397 1397
1398 1398 cmdutil.checkunfinished(repo)
1399 1399 cmdutil.bailifchanged(repo, hint=_('cannot pull with rebase: '
1400 1400 'please commit or shelve your changes first'))
1401 1401
1402 1402 revsprepull = len(repo)
1403 1403 origpostincoming = commands.postincoming
1404 1404 def _dummy(*args, **kwargs):
1405 1405 pass
1406 1406 commands.postincoming = _dummy
1407 1407 try:
1408 1408 ret = orig(ui, repo, *args, **opts)
1409 1409 finally:
1410 1410 commands.postincoming = origpostincoming
1411 1411 revspostpull = len(repo)
1412 1412 if revspostpull > revsprepull:
1413 1413                 # the --rev option from pull conflicts with rebase's own --rev,
1414 1414                 # so drop it
1415 1415 if 'rev' in opts:
1416 1416 del opts['rev']
1417 1417 # positional argument from pull conflicts with rebase's own
1418 1418 # --source.
1419 1419 if 'source' in opts:
1420 1420 del opts['source']
1421 1421 # revsprepull is the len of the repo, not revnum of tip.
1422 1422 destspace = list(repo.changelog.revs(start=revsprepull))
1423 1423 opts['_destspace'] = destspace
1424 1424 try:
1425 1425 rebase(ui, repo, **opts)
1426 1426 except error.NoMergeDestAbort:
1427 1427 # we can maybe update instead
1428 1428 rev, _a, _b = destutil.destupdate(repo)
1429 1429 if rev == repo['.'].rev():
1430 1430 ui.status(_('nothing to rebase\n'))
1431 1431 else:
1432 1432 ui.status(_('nothing to rebase - updating instead\n'))
1433 1433 # not passing argument to get the bare update behavior
1434 1434 # with warning and trumpets
1435 1435 commands.update(ui, repo)
1436 1436 finally:
1437 1437 release(lock, wlock)
1438 1438 else:
1439 1439 if opts.get('tool'):
1440 1440 raise error.Abort(_('--tool can only be used with --rebase'))
1441 1441 ret = orig(ui, repo, *args, **opts)
1442 1442
1443 1443 return ret
1444 1444
1445 1445 def _setrebasesetvisibility(repo, revs):
1446 1446 """store the currently rebased set on the repo object
1447 1447
1448 1448     This is used by another function to prevent rebased revisions from
1449 1449     becoming hidden (see issue4504)"""
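    # Editorial note (not in the original source): the set stored here is read
    # back by _rebasedvisible(), which uisetup() wraps around
    # repoview._getdynamicblockers, so the in-flight rebase set cannot be
    # filtered out as hidden while the rebase is running.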
1450 1450 repo = repo.unfiltered()
1451 1451 repo._rebaseset = revs
1452 1452 # invalidate cache if visibility changes
1453 1453 hiddens = repo.filteredrevcache.get('visible', set())
1454 1454 if revs & hiddens:
1455 1455 repo.invalidatevolatilesets()
1456 1456
1457 1457 def _clearrebasesetvisibiliy(repo):
1458 1458 """remove rebaseset data from the repo"""
1459 1459 repo = repo.unfiltered()
1460 1460 if '_rebaseset' in vars(repo):
1461 1461 del repo._rebaseset
1462 1462
1463 1463 def _rebasedvisible(orig, repo):
1464 1464 """ensure rebased revs stay visible (see issue4504)"""
1465 1465 blockers = orig(repo)
1466 1466 blockers.update(getattr(repo, '_rebaseset', ()))
1467 1467 return blockers
1468 1468
1469 1469 def _filterobsoleterevs(repo, revs):
1470 1470 """returns a set of the obsolete revisions in revs"""
1471 1471 return set(r for r in revs if repo[r].obsolete())
1472 1472
1473 1473 def _computeobsoletenotrebased(repo, rebaseobsrevs, dest):
1474 1474 """return a mapping obsolete => successor for all obsolete nodes to be
1475 1475     rebased that have a successor in the destination
1476 1476
1477 1477 obsolete => None entries in the mapping indicate nodes with no successor"""
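    # Editorial example (not in the original source): if obsolete rev 7 in the
    # rebase set has a successor that is already an ancestor of dest, the
    # result contains {7: <successor rev>} and rev 7 is later marked
    # revprecursor; a plainly pruned rev maps to None and becomes revpruned.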
1478 1478 obsoletenotrebased = {}
1479 1479
1480 1480 # Build a mapping successor => obsolete nodes for the obsolete
1481 1481 # nodes to be rebased
1482 1482 allsuccessors = {}
1483 1483 cl = repo.changelog
1484 1484 for r in rebaseobsrevs:
1485 1485 node = cl.node(r)
1486 1486 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1487 1487 try:
1488 1488 allsuccessors[cl.rev(s)] = cl.rev(node)
1489 1489 except LookupError:
1490 1490 pass
1491 1491
1492 1492 if allsuccessors:
1493 1493 # Look for successors of obsolete nodes to be rebased among
1494 1494 # the ancestors of dest
1495 1495 ancs = cl.ancestors([repo[dest].rev()],
1496 1496 stoprev=min(allsuccessors),
1497 1497 inclusive=True)
1498 1498 for s in allsuccessors:
1499 1499 if s in ancs:
1500 1500 obsoletenotrebased[allsuccessors[s]] = s
1501 1501 elif (s == allsuccessors[s] and
1502 1502 allsuccessors.values().count(s) == 1):
1503 1503 # plain prune
1504 1504 obsoletenotrebased[s] = None
1505 1505
1506 1506 return obsoletenotrebased
1507 1507
1508 1508 def summaryhook(ui, repo):
1509 1509 if not repo.vfs.exists('rebasestate'):
1510 1510 return
1511 1511 try:
1512 1512 rbsrt = rebaseruntime(repo, ui, {})
1513 1513 rbsrt.restorestatus()
1514 1514 state = rbsrt.state
1515 1515 except error.RepoLookupError:
1516 1516 # i18n: column positioning for "hg summary"
1517 1517 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1518 1518 ui.write(msg)
1519 1519 return
1520 1520 numrebased = len([i for i in state.itervalues() if i >= 0])
1521 1521 # i18n: column positioning for "hg summary"
1522 1522 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1523 1523 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1524 1524 ui.label(_('%d remaining'), 'rebase.remaining') %
1525 1525 (len(state) - numrebased)))
1526 1526
1527 1527 def uisetup(ui):
1528 1528     # Replace pull with a decorator to provide --rebase option
1529 1529 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1530 1530 entry[1].append(('', 'rebase', None,
1531 1531 _("rebase working directory to branch head")))
1532 1532 entry[1].append(('t', 'tool', '',
1533 1533 _("specify merge tool for rebase")))
1534 1534 cmdutil.summaryhooks.add('rebase', summaryhook)
1535 1535 cmdutil.unfinishedstates.append(
1536 1536 ['rebasestate', False, False, _('rebase in progress'),
1537 1537 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1538 1538 cmdutil.afterresolvedstates.append(
1539 1539 ['rebasestate', _('hg rebase --continue')])
1540 1540     # ensure rebased revs are not hidden
1541 1541 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
@@ -1,3487 +1,3487 b''
1 1 # cmdutil.py - help for command processing in mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import itertools
12 12 import os
13 13 import re
14 14 import tempfile
15 15
16 16 from .i18n import _
17 17 from .node import (
18 18 bin,
19 19 hex,
20 20 nullid,
21 21 nullrev,
22 22 short,
23 23 )
24 24
25 25 from . import (
26 26 bookmarks,
27 27 changelog,
28 28 copies,
29 29 crecord as crecordmod,
30 30 encoding,
31 31 error,
32 32 formatter,
33 33 graphmod,
34 34 lock as lockmod,
35 35 match as matchmod,
36 36 obsolete,
37 37 patch,
38 38 pathutil,
39 39 phases,
40 40 pycompat,
41 41 repair,
42 42 revlog,
43 43 revset,
44 44 scmutil,
45 45 smartset,
46 46 templatekw,
47 47 templater,
48 48 util,
49 49 vfs as vfsmod,
50 50 )
51 51 stringio = util.stringio
52 52
53 53 # special string such that everything below this line will be ignored in the
54 54 # editor text
55 55 _linebelow = "^HG: ------------------------ >8 ------------------------$"
56 56
57 57 def ishunk(x):
58 58 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
59 59 return isinstance(x, hunkclasses)
60 60
61 61 def newandmodified(chunks, originalchunks):
62 62 newlyaddedandmodifiedfiles = set()
63 63 for chunk in chunks:
64 64 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
65 65 originalchunks:
66 66 newlyaddedandmodifiedfiles.add(chunk.header.filename())
67 67 return newlyaddedandmodifiedfiles
68 68
69 69 def parsealiases(cmd):
70 70 return cmd.lstrip("^").split("|")
71 71
72 72 def setupwrapcolorwrite(ui):
73 73 # wrap ui.write so diff output can be labeled/colorized
74 74 def wrapwrite(orig, *args, **kw):
75 75 label = kw.pop('label', '')
76 76 for chunk, l in patch.difflabel(lambda: args):
77 77 orig(chunk, label=label + l)
78 78
79 79 oldwrite = ui.write
80 80 def wrap(*args, **kwargs):
81 81 return wrapwrite(oldwrite, *args, **kwargs)
82 82 setattr(ui, 'write', wrap)
83 83 return oldwrite
84 84
85 85 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
86 86 if usecurses:
87 87 if testfile:
88 88 recordfn = crecordmod.testdecorator(testfile,
89 89 crecordmod.testchunkselector)
90 90 else:
91 91 recordfn = crecordmod.chunkselector
92 92
93 93 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
94 94
95 95 else:
96 96 return patch.filterpatch(ui, originalhunks, operation)
97 97
98 98 def recordfilter(ui, originalhunks, operation=None):
99 99 """ Prompts the user to filter the originalhunks and return a list of
100 100 selected hunks.
101 101     *operation* is used to build ui messages to indicate to the user what
102 102 kind of filtering they are doing: reverting, committing, shelving, etc.
103 103 (see patch.filterpatch).
104 104 """
105 105 usecurses = crecordmod.checkcurses(ui)
106 106 testfile = ui.config('experimental', 'crecordtest', None)
107 107 oldwrite = setupwrapcolorwrite(ui)
108 108 try:
109 109 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
110 110 testfile, operation)
111 111 finally:
112 112 ui.write = oldwrite
113 113 return newchunks, newopts
114 114
115 115 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
116 116 filterfn, *pats, **opts):
117 117 from . import merge as mergemod
118 118 opts = pycompat.byteskwargs(opts)
119 119 if not ui.interactive():
120 120 if cmdsuggest:
121 121 msg = _('running non-interactively, use %s instead') % cmdsuggest
122 122 else:
123 123 msg = _('running non-interactively')
124 124 raise error.Abort(msg)
125 125
126 126 # make sure username is set before going interactive
127 127 if not opts.get('user'):
128 128 ui.username() # raise exception, username not provided
129 129
130 130 def recordfunc(ui, repo, message, match, opts):
131 131 """This is generic record driver.
132 132
133 133 Its job is to interactively filter local changes, and
134 134 accordingly prepare working directory into a state in which the
135 135 job can be delegated to a non-interactive commit command such as
136 136 'commit' or 'qrefresh'.
137 137
138 138 After the actual job is done by non-interactive command, the
139 139 working directory is restored to its original state.
140 140
141 141 In the end we'll record interesting changes, and everything else
142 142 will be left in place, so the user can continue working.
143 143 """
144 144
145 145 checkunfinished(repo, commit=True)
146 146 wctx = repo[None]
147 147 merge = len(wctx.parents()) > 1
148 148 if merge:
149 149 raise error.Abort(_('cannot partially commit a merge '
150 150 '(use "hg commit" instead)'))
151 151
152 152 def fail(f, msg):
153 153 raise error.Abort('%s: %s' % (f, msg))
154 154
155 155 force = opts.get('force')
156 156 if not force:
157 157 vdirs = []
158 158 match.explicitdir = vdirs.append
159 159 match.bad = fail
160 160
161 161 status = repo.status(match=match)
162 162 if not force:
163 163 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
164 164 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
165 165 diffopts.nodates = True
166 166 diffopts.git = True
167 167 diffopts.showfunc = True
168 168 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
169 169 originalchunks = patch.parsepatch(originaldiff)
170 170
171 171 # 1. filter patch, since we are intending to apply subset of it
172 172 try:
173 173 chunks, newopts = filterfn(ui, originalchunks)
174 174 except patch.PatchError as err:
175 175 raise error.Abort(_('error parsing patch: %s') % err)
176 176 opts.update(newopts)
177 177
178 178 # We need to keep a backup of files that have been newly added and
179 179 # modified during the recording process because there is a previous
180 180 # version without the edit in the workdir
181 181 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
182 182 contenders = set()
183 183 for h in chunks:
184 184 try:
185 185 contenders.update(set(h.files()))
186 186 except AttributeError:
187 187 pass
188 188
189 189 changed = status.modified + status.added + status.removed
190 190 newfiles = [f for f in changed if f in contenders]
191 191 if not newfiles:
192 192 ui.status(_('no changes to record\n'))
193 193 return 0
194 194
195 195 modified = set(status.modified)
196 196
197 197 # 2. backup changed files, so we can restore them in the end
198 198
199 199 if backupall:
200 200 tobackup = changed
201 201 else:
202 202 tobackup = [f for f in newfiles if f in modified or f in \
203 203 newlyaddedandmodifiedfiles]
204 204 backups = {}
205 205 if tobackup:
206 206 backupdir = repo.vfs.join('record-backups')
207 207 try:
208 208 os.mkdir(backupdir)
209 209 except OSError as err:
210 210 if err.errno != errno.EEXIST:
211 211 raise
212 212 try:
213 213 # backup continues
214 214 for f in tobackup:
215 215 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
216 216 dir=backupdir)
217 217 os.close(fd)
218 218 ui.debug('backup %r as %r\n' % (f, tmpname))
219 219 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
220 220 backups[f] = tmpname
221 221
222 222 fp = stringio()
223 223 for c in chunks:
224 224 fname = c.filename()
225 225 if fname in backups:
226 226 c.write(fp)
227 227 dopatch = fp.tell()
228 228 fp.seek(0)
229 229
230 230 # 2.5 optionally review / modify patch in text editor
231 231 if opts.get('review', False):
232 232 patchtext = (crecordmod.diffhelptext
233 233 + crecordmod.patchhelptext
234 234 + fp.read())
235 235 reviewedpatch = ui.edit(patchtext, "",
236 236 extra={"suffix": ".diff"},
237 237 repopath=repo.path)
238 238 fp.truncate(0)
239 239 fp.write(reviewedpatch)
240 240 fp.seek(0)
241 241
242 242 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
243 243 # 3a. apply filtered patch to clean repo (clean)
244 244 if backups:
245 245 # Equivalent to hg.revert
246 246 m = scmutil.matchfiles(repo, backups.keys())
247 247 mergemod.update(repo, repo.dirstate.p1(),
248 248 False, True, matcher=m)
249 249
250 250 # 3b. (apply)
251 251 if dopatch:
252 252 try:
253 253 ui.debug('applying patch\n')
254 254 ui.debug(fp.getvalue())
255 255 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
256 256 except patch.PatchError as err:
257 257 raise error.Abort(str(err))
258 258 del fp
259 259
260 260 # 4. We prepared working directory according to filtered
261 261 # patch. Now is the time to delegate the job to
262 262 # commit/qrefresh or the like!
263 263
264 264 # Make all of the pathnames absolute.
265 265 newfiles = [repo.wjoin(nf) for nf in newfiles]
266 266 return commitfunc(ui, repo, *newfiles, **opts)
267 267 finally:
268 268 # 5. finally restore backed-up files
269 269 try:
270 270 dirstate = repo.dirstate
271 271 for realname, tmpname in backups.iteritems():
272 272 ui.debug('restoring %r to %r\n' % (tmpname, realname))
273 273
274 274 if dirstate[realname] == 'n':
275 275 # without normallookup, restoring timestamp
276 276 # may cause partially committed files
277 277 # to be treated as unmodified
278 278 dirstate.normallookup(realname)
279 279
280 280 # copystat=True here and above are a hack to trick any
281 281 # editors that have f open that we haven't modified them.
282 282 #
283 283                 # Also note that this is racy as an editor could notice the
284 284 # file's mtime before we've finished writing it.
285 285 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
286 286 os.unlink(tmpname)
287 287 if tobackup:
288 288 os.rmdir(backupdir)
289 289 except OSError:
290 290 pass
291 291
292 292 def recordinwlock(ui, repo, message, match, opts):
293 293 with repo.wlock():
294 294 return recordfunc(ui, repo, message, match, opts)
295 295
296 296 return commit(ui, repo, recordinwlock, pats, opts)
297 297
298 298 def findpossible(cmd, table, strict=False):
299 299 """
300 300 Return cmd -> (aliases, command table entry)
301 301 for each matching command.
302 302 Return debug commands (or their aliases) only if no normal command matches.
303 303 """
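    # Editorial example (not in the original source): with the stock command
    # table, findpossible('stat', table) matches the 'status' alias by prefix
    # and returns ({'status': (aliases, entry)}, allcmds).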
304 304 choice = {}
305 305 debugchoice = {}
306 306
307 307 if cmd in table:
308 308 # short-circuit exact matches, "log" alias beats "^log|history"
309 309 keys = [cmd]
310 310 else:
311 311 keys = table.keys()
312 312
313 313 allcmds = []
314 314 for e in keys:
315 315 aliases = parsealiases(e)
316 316 allcmds.extend(aliases)
317 317 found = None
318 318 if cmd in aliases:
319 319 found = cmd
320 320 elif not strict:
321 321 for a in aliases:
322 322 if a.startswith(cmd):
323 323 found = a
324 324 break
325 325 if found is not None:
326 326 if aliases[0].startswith("debug") or found.startswith("debug"):
327 327 debugchoice[found] = (aliases, table[e])
328 328 else:
329 329 choice[found] = (aliases, table[e])
330 330
331 331 if not choice and debugchoice:
332 332 choice = debugchoice
333 333
334 334 return choice, allcmds
335 335
336 336 def findcmd(cmd, table, strict=True):
337 337 """Return (aliases, command table entry) for command string."""
338 338 choice, allcmds = findpossible(cmd, table, strict)
339 339
340 340 if cmd in choice:
341 341 return choice[cmd]
342 342
343 343 if len(choice) > 1:
344 344 clist = choice.keys()
345 345 clist.sort()
346 346 raise error.AmbiguousCommand(cmd, clist)
347 347
348 348 if choice:
349 349 return choice.values()[0]
350 350
351 351 raise error.UnknownCommand(cmd, allcmds)
352 352
353 353 def findrepo(p):
354 354 while not os.path.isdir(os.path.join(p, ".hg")):
355 355 oldp, p = p, os.path.dirname(p)
356 356 if p == oldp:
357 357 return None
358 358
359 359 return p
360 360
361 361 def bailifchanged(repo, merge=True, hint=None):
362 362 """ enforce the precondition that working directory must be clean.
363 363
364 364 'merge' can be set to false if a pending uncommitted merge should be
365 365 ignored (such as when 'update --check' runs).
366 366
367 367 'hint' is the usual hint given to Abort exception.
368 368 """
369 369
370 370 if merge and repo.dirstate.p2() != nullid:
371 371 raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
372 372 modified, added, removed, deleted = repo.status()[:4]
373 373 if modified or added or removed or deleted:
374 374 raise error.Abort(_('uncommitted changes'), hint=hint)
375 375 ctx = repo[None]
376 376 for s in sorted(ctx.substate):
377 377 ctx.sub(s).bailifchanged(hint=hint)
378 378
379 379 def logmessage(ui, opts):
380 380 """ get the log message according to -m and -l option """
381 381 message = opts.get('message')
382 382 logfile = opts.get('logfile')
383 383
384 384 if message and logfile:
385 385 raise error.Abort(_('options --message and --logfile are mutually '
386 386 'exclusive'))
387 387 if not message and logfile:
388 388 try:
389 389 if logfile == '-':
390 390 message = ui.fin.read()
391 391 else:
392 392 message = '\n'.join(util.readfile(logfile).splitlines())
393 393 except IOError as inst:
394 394 raise error.Abort(_("can't read commit message '%s': %s") %
395 395 (logfile, inst.strerror))
396 396 return message
397 397
398 398 def mergeeditform(ctxorbool, baseformname):
399 399 """return appropriate editform name (referencing a committemplate)
400 400
401 401 'ctxorbool' is either a ctx to be committed, or a bool indicating whether
402 402     a merge is being committed.
403 403
404 404 This returns baseformname with '.merge' appended if it is a merge,
405 405 otherwise '.normal' is appended.
406 406 """
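    # Editorial examples (not in the original source):
    #   mergeeditform(True, 'histedit')       -> 'histedit.merge'
    #   mergeeditform(False, 'import.normal') -> 'import.normal.normal'
    #   mergeeditform(repo[None], 'commit.normal') ends in '.merge' only when
    #   the working context has two parents.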
407 407 if isinstance(ctxorbool, bool):
408 408 if ctxorbool:
409 409 return baseformname + ".merge"
410 410 elif 1 < len(ctxorbool.parents()):
411 411 return baseformname + ".merge"
412 412
413 413 return baseformname + ".normal"
414 414
415 415 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
416 416 editform='', **opts):
417 417 """get appropriate commit message editor according to '--edit' option
418 418
419 419 'finishdesc' is a function to be called with edited commit message
420 420 (= 'description' of the new changeset) just after editing, but
421 421 before checking empty-ness. It should return actual text to be
422 422 stored into history. This allows to change description before
423 423     stored into history. This allows changing the description before
424 424
425 425     'extramsg' is an extra message to be shown in the editor instead of
426 426 'Leave message empty to abort commit' line. 'HG: ' prefix and EOL
427 427 is automatically added.
428 428
429 429 'editform' is a dot-separated list of names, to distinguish
430 430 the purpose of commit text editing.
431 431
432 432 'getcommiteditor' returns 'commitforceeditor' regardless of
433 433 'edit', if one of 'finishdesc' or 'extramsg' is specified, because
434 434 they are specific for usage in MQ.
435 435 """
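    # Editorial summary of the selection below (not in the original source):
    # passing edit=True, finishdesc or extramsg yields a commitforceeditor
    # wrapper; an editform alone yields commiteditor bound to that editform;
    # otherwise the plain commiteditor function is returned.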
436 436 if edit or finishdesc or extramsg:
437 437 return lambda r, c, s: commitforceeditor(r, c, s,
438 438 finishdesc=finishdesc,
439 439 extramsg=extramsg,
440 440 editform=editform)
441 441 elif editform:
442 442 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
443 443 else:
444 444 return commiteditor
445 445
446 446 def loglimit(opts):
447 447 """get the log limit according to option -l/--limit"""
448 448 limit = opts.get('limit')
449 449 if limit:
450 450 try:
451 451 limit = int(limit)
452 452 except ValueError:
453 453 raise error.Abort(_('limit must be a positive integer'))
454 454 if limit <= 0:
455 455 raise error.Abort(_('limit must be positive'))
456 456 else:
457 457 limit = None
458 458 return limit
459 459
460 460 def makefilename(repo, pat, node, desc=None,
461 461 total=None, seqno=None, revwidth=None, pathname=None):
462 462 node_expander = {
463 463 'H': lambda: hex(node),
464 464 'R': lambda: str(repo.changelog.rev(node)),
465 465 'h': lambda: short(node),
466 466 'm': lambda: re.sub('[^\w]', '_', str(desc))
467 467 }
468 468 expander = {
469 469 '%': lambda: '%',
470 470 'b': lambda: os.path.basename(repo.root),
471 471 }
472 472
473 473 try:
474 474 if node:
475 475 expander.update(node_expander)
476 476 if node:
477 477 expander['r'] = (lambda:
478 478 str(repo.changelog.rev(node)).zfill(revwidth or 0))
479 479 if total is not None:
480 480 expander['N'] = lambda: str(total)
481 481 if seqno is not None:
482 482 expander['n'] = lambda: str(seqno)
483 483 if total is not None and seqno is not None:
484 484 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
485 485 if pathname is not None:
486 486 expander['s'] = lambda: os.path.basename(pathname)
487 487 expander['d'] = lambda: os.path.dirname(pathname) or '.'
488 488 expander['p'] = lambda: pathname
489 489
490 490 newname = []
491 491 patlen = len(pat)
492 492 i = 0
493 493 while i < patlen:
494 494 c = pat[i:i + 1]
495 495 if c == '%':
496 496 i += 1
497 497 c = pat[i:i + 1]
498 498 c = expander[c]()
499 499 newname.append(c)
500 500 i += 1
501 501 return ''.join(newname)
502 502 except KeyError as inst:
503 503 raise error.Abort(_("invalid format spec '%%%s' in output filename") %
504 504 inst.args[0])
505 505
506 506 class _unclosablefile(object):
507 507 def __init__(self, fp):
508 508 self._fp = fp
509 509
510 510 def close(self):
511 511 pass
512 512
513 513 def __iter__(self):
514 514 return iter(self._fp)
515 515
516 516 def __getattr__(self, attr):
517 517 return getattr(self._fp, attr)
518 518
519 519 def __enter__(self):
520 520 return self
521 521
522 522 def __exit__(self, exc_type, exc_value, exc_tb):
523 523 pass
524 524
525 525 def makefileobj(repo, pat, node=None, desc=None, total=None,
526 526 seqno=None, revwidth=None, mode='wb', modemap=None,
527 527 pathname=None):
528 528
529 529 writable = mode not in ('r', 'rb')
530 530
531 531 if not pat or pat == '-':
532 532 if writable:
533 533 fp = repo.ui.fout
534 534 else:
535 535 fp = repo.ui.fin
536 536 return _unclosablefile(fp)
537 537 if util.safehasattr(pat, 'write') and writable:
538 538 return pat
539 539 if util.safehasattr(pat, 'read') and 'r' in mode:
540 540 return pat
541 541 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
542 542 if modemap is not None:
543 543 mode = modemap.get(fn, mode)
544 544 if mode == 'wb':
545 545 modemap[fn] = 'ab'
546 546 return open(fn, mode)
547 547
548 548 def openrevlog(repo, cmd, file_, opts):
549 549 """opens the changelog, manifest, a filelog or a given revlog"""
550 550 cl = opts['changelog']
551 551 mf = opts['manifest']
552 552 dir = opts['dir']
553 553 msg = None
554 554 if cl and mf:
555 555 msg = _('cannot specify --changelog and --manifest at the same time')
556 556 elif cl and dir:
557 557 msg = _('cannot specify --changelog and --dir at the same time')
558 558 elif cl or mf or dir:
559 559 if file_:
560 560 msg = _('cannot specify filename with --changelog or --manifest')
561 561 elif not repo:
562 562 msg = _('cannot specify --changelog or --manifest or --dir '
563 563 'without a repository')
564 564 if msg:
565 565 raise error.Abort(msg)
566 566
567 567 r = None
568 568 if repo:
569 569 if cl:
570 570 r = repo.unfiltered().changelog
571 571 elif dir:
572 572 if 'treemanifest' not in repo.requirements:
573 573 raise error.Abort(_("--dir can only be used on repos with "
574 574 "treemanifest enabled"))
575 575 dirlog = repo.manifestlog._revlog.dirlog(dir)
576 576 if len(dirlog):
577 577 r = dirlog
578 578 elif mf:
579 579 r = repo.manifestlog._revlog
580 580 elif file_:
581 581 filelog = repo.file(file_)
582 582 if len(filelog):
583 583 r = filelog
584 584 if not r:
585 585 if not file_:
586 586 raise error.CommandError(cmd, _('invalid arguments'))
587 587 if not os.path.isfile(file_):
588 588 raise error.Abort(_("revlog '%s' not found") % file_)
589 589 r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False),
590 590 file_[:-2] + ".i")
591 591 return r
592 592
593 593 def copy(ui, repo, pats, opts, rename=False):
594 594 # called with the repo lock held
595 595 #
596 596 # hgsep => pathname that uses "/" to separate directories
597 597 # ossep => pathname that uses os.sep to separate directories
598 598 cwd = repo.getcwd()
599 599 targets = {}
600 600 after = opts.get("after")
601 601 dryrun = opts.get("dry_run")
602 602 wctx = repo[None]
603 603
604 604 def walkpat(pat):
605 605 srcs = []
606 606 if after:
607 607 badstates = '?'
608 608 else:
609 609 badstates = '?r'
610 610 m = scmutil.match(repo[None], [pat], opts, globbed=True)
611 611 for abs in repo.walk(m):
612 612 state = repo.dirstate[abs]
613 613 rel = m.rel(abs)
614 614 exact = m.exact(abs)
615 615 if state in badstates:
616 616 if exact and state == '?':
617 617 ui.warn(_('%s: not copying - file is not managed\n') % rel)
618 618 if exact and state == 'r':
619 619 ui.warn(_('%s: not copying - file has been marked for'
620 620 ' remove\n') % rel)
621 621 continue
622 622 # abs: hgsep
623 623 # rel: ossep
624 624 srcs.append((abs, rel, exact))
625 625 return srcs
626 626
627 627 # abssrc: hgsep
628 628 # relsrc: ossep
629 629 # otarget: ossep
630 630 def copyfile(abssrc, relsrc, otarget, exact):
631 631 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
632 632 if '/' in abstarget:
633 633 # We cannot normalize abstarget itself, this would prevent
634 634 # case only renames, like a => A.
635 635 abspath, absname = abstarget.rsplit('/', 1)
636 636 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
637 637 reltarget = repo.pathto(abstarget, cwd)
638 638 target = repo.wjoin(abstarget)
639 639 src = repo.wjoin(abssrc)
640 640 state = repo.dirstate[abstarget]
641 641
642 642 scmutil.checkportable(ui, abstarget)
643 643
644 644 # check for collisions
645 645 prevsrc = targets.get(abstarget)
646 646 if prevsrc is not None:
647 647 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
648 648 (reltarget, repo.pathto(abssrc, cwd),
649 649 repo.pathto(prevsrc, cwd)))
650 650 return
651 651
652 652 # check for overwrites
653 653 exists = os.path.lexists(target)
654 654 samefile = False
655 655 if exists and abssrc != abstarget:
656 656 if (repo.dirstate.normalize(abssrc) ==
657 657 repo.dirstate.normalize(abstarget)):
658 658 if not rename:
659 659 ui.warn(_("%s: can't copy - same file\n") % reltarget)
660 660 return
661 661 exists = False
662 662 samefile = True
663 663
664 664 if not after and exists or after and state in 'mn':
665 665 if not opts['force']:
666 666 if state in 'mn':
667 667 msg = _('%s: not overwriting - file already committed\n')
668 668 if after:
669 669 flags = '--after --force'
670 670 else:
671 671 flags = '--force'
672 672 if rename:
673 673 hint = _('(hg rename %s to replace the file by '
674 674 'recording a rename)\n') % flags
675 675 else:
676 676 hint = _('(hg copy %s to replace the file by '
677 677 'recording a copy)\n') % flags
678 678 else:
679 679 msg = _('%s: not overwriting - file exists\n')
680 680 if rename:
681 681 hint = _('(hg rename --after to record the rename)\n')
682 682 else:
683 683 hint = _('(hg copy --after to record the copy)\n')
684 684 ui.warn(msg % reltarget)
685 685 ui.warn(hint)
686 686 return
687 687
688 688 if after:
689 689 if not exists:
690 690 if rename:
691 691 ui.warn(_('%s: not recording move - %s does not exist\n') %
692 692 (relsrc, reltarget))
693 693 else:
694 694 ui.warn(_('%s: not recording copy - %s does not exist\n') %
695 695 (relsrc, reltarget))
696 696 return
697 697 elif not dryrun:
698 698 try:
699 699 if exists:
700 700 os.unlink(target)
701 701 targetdir = os.path.dirname(target) or '.'
702 702 if not os.path.isdir(targetdir):
703 703 os.makedirs(targetdir)
704 704 if samefile:
705 705 tmp = target + "~hgrename"
706 706 os.rename(src, tmp)
707 707 os.rename(tmp, target)
708 708 else:
709 709 util.copyfile(src, target)
710 710 srcexists = True
711 711 except IOError as inst:
712 712 if inst.errno == errno.ENOENT:
713 713 ui.warn(_('%s: deleted in working directory\n') % relsrc)
714 714 srcexists = False
715 715 else:
716 716 ui.warn(_('%s: cannot copy - %s\n') %
717 717 (relsrc, inst.strerror))
718 718 return True # report a failure
719 719
720 720 if ui.verbose or not exact:
721 721 if rename:
722 722 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
723 723 else:
724 724 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
725 725
726 726 targets[abstarget] = abssrc
727 727
728 728 # fix up dirstate
729 729 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
730 730 dryrun=dryrun, cwd=cwd)
731 731 if rename and not dryrun:
732 732 if not after and srcexists and not samefile:
733 733 repo.wvfs.unlinkpath(abssrc)
734 734 wctx.forget([abssrc])
735 735
736 736 # pat: ossep
737 737 # dest ossep
738 738 # srcs: list of (hgsep, hgsep, ossep, bool)
739 739 # return: function that takes hgsep and returns ossep
740 740 def targetpathfn(pat, dest, srcs):
741 741 if os.path.isdir(pat):
742 742 abspfx = pathutil.canonpath(repo.root, cwd, pat)
743 743 abspfx = util.localpath(abspfx)
744 744 if destdirexists:
745 745 striplen = len(os.path.split(abspfx)[0])
746 746 else:
747 747 striplen = len(abspfx)
748 748 if striplen:
749 749 striplen += len(pycompat.ossep)
750 750 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
751 751 elif destdirexists:
752 752 res = lambda p: os.path.join(dest,
753 753 os.path.basename(util.localpath(p)))
754 754 else:
755 755 res = lambda p: dest
756 756 return res
757 757
758 758 # pat: ossep
759 759 # dest ossep
760 760 # srcs: list of (hgsep, hgsep, ossep, bool)
761 761 # return: function that takes hgsep and returns ossep
762 762 def targetpathafterfn(pat, dest, srcs):
763 763 if matchmod.patkind(pat):
764 764 # a mercurial pattern
765 765 res = lambda p: os.path.join(dest,
766 766 os.path.basename(util.localpath(p)))
767 767 else:
768 768 abspfx = pathutil.canonpath(repo.root, cwd, pat)
769 769 if len(abspfx) < len(srcs[0][0]):
770 770 # A directory. Either the target path contains the last
771 771 # component of the source path or it does not.
772 772 def evalpath(striplen):
773 773 score = 0
774 774 for s in srcs:
775 775 t = os.path.join(dest, util.localpath(s[0])[striplen:])
776 776 if os.path.lexists(t):
777 777 score += 1
778 778 return score
779 779
780 780 abspfx = util.localpath(abspfx)
781 781 striplen = len(abspfx)
782 782 if striplen:
783 783 striplen += len(pycompat.ossep)
784 784 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
785 785 score = evalpath(striplen)
786 786 striplen1 = len(os.path.split(abspfx)[0])
787 787 if striplen1:
788 788 striplen1 += len(pycompat.ossep)
789 789 if evalpath(striplen1) > score:
790 790 striplen = striplen1
791 791 res = lambda p: os.path.join(dest,
792 792 util.localpath(p)[striplen:])
793 793 else:
794 794 # a file
795 795 if destdirexists:
796 796 res = lambda p: os.path.join(dest,
797 797 os.path.basename(util.localpath(p)))
798 798 else:
799 799 res = lambda p: dest
800 800 return res
801 801
802 802 pats = scmutil.expandpats(pats)
803 803 if not pats:
804 804 raise error.Abort(_('no source or destination specified'))
805 805 if len(pats) == 1:
806 806 raise error.Abort(_('no destination specified'))
807 807 dest = pats.pop()
808 808 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
809 809 if not destdirexists:
810 810 if len(pats) > 1 or matchmod.patkind(pats[0]):
811 811 raise error.Abort(_('with multiple sources, destination must be an '
812 812 'existing directory'))
813 813 if util.endswithsep(dest):
814 814 raise error.Abort(_('destination %s is not a directory') % dest)
815 815
816 816 tfn = targetpathfn
817 817 if after:
818 818 tfn = targetpathafterfn
819 819 copylist = []
820 820 for pat in pats:
821 821 srcs = walkpat(pat)
822 822 if not srcs:
823 823 continue
824 824 copylist.append((tfn(pat, dest, srcs), srcs))
825 825 if not copylist:
826 826 raise error.Abort(_('no files to copy'))
827 827
828 828 errors = 0
829 829 for targetpath, srcs in copylist:
830 830 for abssrc, relsrc, exact in srcs:
831 831 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
832 832 errors += 1
833 833
834 834 if errors:
835 835 ui.warn(_('(consider using --after)\n'))
836 836
837 837 return errors != 0
838 838
839 839 ## facility to let extensions process additional data into an import patch
840 840 # list of identifier to be executed in order
841 841 extrapreimport = [] # run before commit
842 842 extrapostimport = [] # run after commit
843 843 # mapping from identifier to actual import function
844 844 #
845 845 # 'preimport' are run before the commit is made and are provided the following
846 846 # arguments:
847 847 # - repo: the localrepository instance,
848 848 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
849 849 # - extra: the future extra dictionary of the changeset, please mutate it,
850 850 # - opts: the import options.
851 851 # XXX ideally, we would just pass a ctx ready to be computed, that would allow
852 852 # mutation of in memory commit and more. Feel free to rework the code to get
853 853 # there.
854 854 extrapreimportmap = {}
855 855 # 'postimport' are run after the commit is made and are provided the following
856 856 # argument:
857 857 # - ctx: the changectx created by import.
858 858 extrapostimportmap = {}
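# Editorial illustration of the hook protocol described above (hypothetical
# extension code, not part of this file):
#
#   def _copyticket(repo, patchdata, extra, opts):
#       # propagate a hypothetical "ticket" patch header into the changeset
#       if 'ticket' in patchdata:
#           extra['ticket'] = patchdata['ticket']
#
#   extrapreimport.append('ticket')
#   extrapreimportmap['ticket'] = _copyticket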
859 859
860 860 def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
861 861 """Utility function used by commands.import to import a single patch
862 862
863 863 This function is explicitly defined here to help the evolve extension to
864 864 wrap this part of the import logic.
865 865
866 866     The API is currently a bit ugly because it is a simple code translation from
867 867 the import command. Feel free to make it better.
868 868
869 869 :hunk: a patch (as a binary string)
870 870 :parents: nodes that will be parent of the created commit
871 871     :opts: the full dict of options passed to the import command
872 872 :msgs: list to save commit message to.
873 873 (used in case we need to save it when failing)
874 874 :updatefunc: a function that update a repo to a given node
875 875 updatefunc(<repo>, <node>)
876 876 """
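    # Editorial note (not in the original source): callers such as
    # commands.import invoke this once per patch and get back a tuple of
    # (summary message, committed node or None, rejects flag).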
877 877 # avoid cycle context -> subrepo -> cmdutil
878 878 from . import context
879 879 extractdata = patch.extract(ui, hunk)
880 880 tmpname = extractdata.get('filename')
881 881 message = extractdata.get('message')
882 882 user = opts.get('user') or extractdata.get('user')
883 883 date = opts.get('date') or extractdata.get('date')
884 884 branch = extractdata.get('branch')
885 885 nodeid = extractdata.get('nodeid')
886 886 p1 = extractdata.get('p1')
887 887 p2 = extractdata.get('p2')
888 888
889 889 nocommit = opts.get('no_commit')
890 890 importbranch = opts.get('import_branch')
891 891 update = not opts.get('bypass')
892 892 strip = opts["strip"]
893 893 prefix = opts["prefix"]
894 894 sim = float(opts.get('similarity') or 0)
895 895 if not tmpname:
896 896 return (None, None, False)
897 897
898 898 rejects = False
899 899
900 900 try:
901 901 cmdline_message = logmessage(ui, opts)
902 902 if cmdline_message:
903 903 # pickup the cmdline msg
904 904 message = cmdline_message
905 905 elif message:
906 906 # pickup the patch msg
907 907 message = message.strip()
908 908 else:
909 909 # launch the editor
910 910 message = None
911 911 ui.debug('message:\n%s\n' % message)
912 912
913 913 if len(parents) == 1:
914 914 parents.append(repo[nullid])
915 915 if opts.get('exact'):
916 916 if not nodeid or not p1:
917 917 raise error.Abort(_('not a Mercurial patch'))
918 918 p1 = repo[p1]
919 919 p2 = repo[p2 or nullid]
920 920 elif p2:
921 921 try:
922 922 p1 = repo[p1]
923 923 p2 = repo[p2]
924 924 # Without any options, consider p2 only if the
925 925 # patch is being applied on top of the recorded
926 926 # first parent.
927 927 if p1 != parents[0]:
928 928 p1 = parents[0]
929 929 p2 = repo[nullid]
930 930 except error.RepoError:
931 931 p1, p2 = parents
932 932 if p2.node() == nullid:
933 933 ui.warn(_("warning: import the patch as a normal revision\n"
934 934 "(use --exact to import the patch as a merge)\n"))
935 935 else:
936 936 p1, p2 = parents
937 937
938 938 n = None
939 939 if update:
940 940 if p1 != parents[0]:
941 941 updatefunc(repo, p1.node())
942 942 if p2 != parents[1]:
943 943 repo.setparents(p1.node(), p2.node())
944 944
945 945 if opts.get('exact') or importbranch:
946 946 repo.dirstate.setbranch(branch or 'default')
947 947
948 948 partial = opts.get('partial', False)
949 949 files = set()
950 950 try:
951 951 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
952 952 files=files, eolmode=None, similarity=sim / 100.0)
953 953 except patch.PatchError as e:
954 954 if not partial:
955 955 raise error.Abort(str(e))
956 956 if partial:
957 957 rejects = True
958 958
959 959 files = list(files)
960 960 if nocommit:
961 961 if message:
962 962 msgs.append(message)
963 963 else:
964 964 if opts.get('exact') or p2:
965 965 # If you got here, you either use --force and know what
966 966 # you are doing or used --exact or a merge patch while
967 967 # being updated to its first parent.
968 968 m = None
969 969 else:
970 970 m = scmutil.matchfiles(repo, files or [])
971 971 editform = mergeeditform(repo[None], 'import.normal')
972 972 if opts.get('exact'):
973 973 editor = None
974 974 else:
975 975 editor = getcommiteditor(editform=editform, **opts)
976 976 extra = {}
977 977 for idfunc in extrapreimport:
978 978 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
979 979 overrides = {}
980 980 if partial:
981 981 overrides[('ui', 'allowemptycommit')] = True
982 982 with repo.ui.configoverride(overrides, 'import'):
983 983 n = repo.commit(message, user,
984 984 date, match=m,
985 985 editor=editor, extra=extra)
986 986 for idfunc in extrapostimport:
987 987 extrapostimportmap[idfunc](repo[n])
988 988 else:
989 989 if opts.get('exact') or importbranch:
990 990 branch = branch or 'default'
991 991 else:
992 992 branch = p1.branch()
993 993 store = patch.filestore()
994 994 try:
995 995 files = set()
996 996 try:
997 997 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
998 998 files, eolmode=None)
999 999 except patch.PatchError as e:
1000 1000 raise error.Abort(str(e))
1001 1001 if opts.get('exact'):
1002 1002 editor = None
1003 1003 else:
1004 1004 editor = getcommiteditor(editform='import.bypass')
1005 1005 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1006 1006 message,
1007 1007 user,
1008 1008 date,
1009 1009 branch, files, store,
1010 1010 editor=editor)
1011 1011 n = memctx.commit()
1012 1012 finally:
1013 1013 store.close()
1014 1014 if opts.get('exact') and nocommit:
1015 1015 # --exact with --no-commit is still useful in that it does merge
1016 1016 # and branch bits
1017 1017 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1018 1018 elif opts.get('exact') and hex(n) != nodeid:
1019 1019 raise error.Abort(_('patch is damaged or loses information'))
1020 1020 msg = _('applied to working directory')
1021 1021 if n:
1022 1022 # i18n: refers to a short changeset id
1023 1023 msg = _('created %s') % short(n)
1024 1024 return (msg, n, rejects)
1025 1025 finally:
1026 1026 os.unlink(tmpname)
1027 1027
1028 1028 # facility to let extensions include additional data in an exported patch
1029 1029 # list of identifiers to be executed in order
1030 1030 extraexport = []
1031 1031 # mapping from identifier to actual export function
1032 1032 # function has to return a string to be added to the header or None
1033 1033 # it is given two arguments (sequencenumber, changectx)
1034 1034 extraexportmap = {}
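# Editorial illustration of the export hook protocol above (hypothetical
# extension code, not part of this file):
#
#   def _ticketheader(seqno, ctx):
#       # return an extra header line (written as "# <text>") or None
#       return ctx.extra().get('ticket')
#
#   extraexport.append('ticket')
#   extraexportmap['ticket'] = _ticketheader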
1035 1035
1036 1036 def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
1037 1037 opts=None, match=None):
1038 1038 '''export changesets as hg patches.'''
1039 1039
1040 1040 total = len(revs)
1041 1041 revwidth = max([len(str(rev)) for rev in revs])
1042 1042 filemode = {}
1043 1043
1044 1044 def single(rev, seqno, fp):
1045 1045 ctx = repo[rev]
1046 1046 node = ctx.node()
1047 1047 parents = [p.node() for p in ctx.parents() if p]
1048 1048 branch = ctx.branch()
1049 1049 if switch_parent:
1050 1050 parents.reverse()
1051 1051
1052 1052 if parents:
1053 1053 prev = parents[0]
1054 1054 else:
1055 1055 prev = nullid
1056 1056
1057 1057 shouldclose = False
1058 1058 if not fp and len(template) > 0:
1059 1059 desc_lines = ctx.description().rstrip().split('\n')
1060 1060             desc = desc_lines[0] # Commit always has a first line.
1061 1061 fp = makefileobj(repo, template, node, desc=desc, total=total,
1062 1062 seqno=seqno, revwidth=revwidth, mode='wb',
1063 1063 modemap=filemode)
1064 1064 shouldclose = True
1065 1065 if fp and not getattr(fp, 'name', '<unnamed>').startswith('<'):
1066 1066 repo.ui.note("%s\n" % fp.name)
1067 1067
1068 1068 if not fp:
1069 1069 write = repo.ui.write
1070 1070 else:
1071 1071 def write(s, **kw):
1072 1072 fp.write(s)
1073 1073
1074 1074 write("# HG changeset patch\n")
1075 1075 write("# User %s\n" % ctx.user())
1076 1076 write("# Date %d %d\n" % ctx.date())
1077 1077 write("# %s\n" % util.datestr(ctx.date()))
1078 1078 if branch and branch != 'default':
1079 1079 write("# Branch %s\n" % branch)
1080 1080 write("# Node ID %s\n" % hex(node))
1081 1081 write("# Parent %s\n" % hex(prev))
1082 1082 if len(parents) > 1:
1083 1083 write("# Parent %s\n" % hex(parents[1]))
1084 1084
1085 1085 for headerid in extraexport:
1086 1086 header = extraexportmap[headerid](seqno, ctx)
1087 1087 if header is not None:
1088 1088 write('# %s\n' % header)
1089 1089 write(ctx.description().rstrip())
1090 1090 write("\n\n")
1091 1091
1092 1092 for chunk, label in patch.diffui(repo, prev, node, match, opts=opts):
1093 1093 write(chunk, label=label)
1094 1094
1095 1095 if shouldclose:
1096 1096 fp.close()
1097 1097
1098 1098 for seqno, rev in enumerate(revs):
1099 1099 single(rev, seqno + 1, fp)
1100 1100
1101 1101 def diffordiffstat(ui, repo, diffopts, node1, node2, match,
1102 1102 changes=None, stat=False, fp=None, prefix='',
1103 1103 root='', listsubrepos=False):
1104 1104 '''show diff or diffstat.'''
1105 1105 if fp is None:
1106 1106 write = ui.write
1107 1107 else:
1108 1108 def write(s, **kw):
1109 1109 fp.write(s)
1110 1110
1111 1111 if root:
1112 1112 relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
1113 1113 else:
1114 1114 relroot = ''
1115 1115 if relroot != '':
1116 1116 # XXX relative roots currently don't work if the root is within a
1117 1117 # subrepo
1118 1118 uirelroot = match.uipath(relroot)
1119 1119 relroot += '/'
1120 1120 for matchroot in match.files():
1121 1121 if not matchroot.startswith(relroot):
1122 1122 ui.warn(_('warning: %s not inside relative root %s\n') % (
1123 1123 match.uipath(matchroot), uirelroot))
1124 1124
1125 1125 if stat:
1126 1126 diffopts = diffopts.copy(context=0)
1127 1127 width = 80
1128 1128 if not ui.plain():
1129 1129 width = ui.termwidth()
1130 1130 chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
1131 1131 prefix=prefix, relroot=relroot)
1132 1132 for chunk, label in patch.diffstatui(util.iterlines(chunks),
1133 1133 width=width):
1134 1134 write(chunk, label=label)
1135 1135 else:
1136 1136 for chunk, label in patch.diffui(repo, node1, node2, match,
1137 1137 changes, diffopts, prefix=prefix,
1138 1138 relroot=relroot):
1139 1139 write(chunk, label=label)
1140 1140
1141 1141 if listsubrepos:
1142 1142 ctx1 = repo[node1]
1143 1143 ctx2 = repo[node2]
1144 1144 for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
1145 1145 tempnode2 = node2
1146 1146 try:
1147 1147 if node2 is not None:
1148 1148 tempnode2 = ctx2.substate[subpath][1]
1149 1149 except KeyError:
1150 1150 # A subrepo that existed in node1 was deleted between node1 and
1151 1151 # node2 (inclusive). Thus, ctx2's substate won't contain that
1152 1152 # subpath. The best we can do is to ignore it.
1153 1153 tempnode2 = None
1154 1154 submatch = matchmod.subdirmatcher(subpath, match)
1155 1155 sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
1156 1156 stat=stat, fp=fp, prefix=prefix)
1157 1157
1158 1158 def _changesetlabels(ctx):
1159 1159 labels = ['log.changeset', 'changeset.%s' % ctx.phasestr()]
1160 1160 if ctx.obsolete():
1161 1161 labels.append('changeset.obsolete')
1162 1162 if ctx.troubled():
1163 1163 labels.append('changeset.troubled')
1164 1164 for trouble in ctx.troubles():
1165 1165 labels.append('trouble.%s' % trouble)
1166 1166 return ' '.join(labels)
1167 1167
1168 1168 class changeset_printer(object):
1169 1169 '''show changeset information when templating not requested.'''
1170 1170
1171 1171 def __init__(self, ui, repo, matchfn, diffopts, buffered):
1172 1172 self.ui = ui
1173 1173 self.repo = repo
1174 1174 self.buffered = buffered
1175 1175 self.matchfn = matchfn
1176 1176 self.diffopts = diffopts
1177 1177 self.header = {}
1178 1178 self.hunk = {}
1179 1179 self.lastheader = None
1180 1180 self.footer = None
1181 1181
1182 1182 def flush(self, ctx):
1183 1183 rev = ctx.rev()
1184 1184 if rev in self.header:
1185 1185 h = self.header[rev]
1186 1186 if h != self.lastheader:
1187 1187 self.lastheader = h
1188 1188 self.ui.write(h)
1189 1189 del self.header[rev]
1190 1190 if rev in self.hunk:
1191 1191 self.ui.write(self.hunk[rev])
1192 1192 del self.hunk[rev]
1193 1193 return 1
1194 1194 return 0
1195 1195
1196 1196 def close(self):
1197 1197 if self.footer:
1198 1198 self.ui.write(self.footer)
1199 1199
1200 1200 def show(self, ctx, copies=None, matchfn=None, **props):
1201 1201 if self.buffered:
1202 1202 self.ui.pushbuffer(labeled=True)
1203 1203 self._show(ctx, copies, matchfn, props)
1204 1204 self.hunk[ctx.rev()] = self.ui.popbuffer()
1205 1205 else:
1206 1206 self._show(ctx, copies, matchfn, props)
1207 1207
1208 1208 def _show(self, ctx, copies, matchfn, props):
1209 1209 '''show a single changeset or file revision'''
1210 1210 changenode = ctx.node()
1211 1211 rev = ctx.rev()
1212 1212 if self.ui.debugflag:
1213 1213 hexfunc = hex
1214 1214 else:
1215 1215 hexfunc = short
1216 1216 # as of now, wctx.node() and wctx.rev() return None, but we want to
1217 1217 # show the same values as {node} and {rev} templatekw
1218 1218 revnode = (scmutil.intrev(rev), hexfunc(bin(ctx.hex())))
1219 1219
1220 1220 if self.ui.quiet:
1221 1221 self.ui.write("%d:%s\n" % revnode, label='log.node')
1222 1222 return
1223 1223
1224 1224 date = util.datestr(ctx.date())
1225 1225
1226 1226 # i18n: column positioning for "hg log"
1227 1227 self.ui.write(_("changeset: %d:%s\n") % revnode,
1228 1228 label=_changesetlabels(ctx))
1229 1229
1230 1230 # branches are shown first before any other names due to backwards
1231 1231 # compatibility
1232 1232 branch = ctx.branch()
1233 1233 # don't show the default branch name
1234 1234 if branch != 'default':
1235 1235 # i18n: column positioning for "hg log"
1236 1236 self.ui.write(_("branch: %s\n") % branch,
1237 1237 label='log.branch')
1238 1238
1239 1239 for nsname, ns in self.repo.names.iteritems():
1240 1240 # branches has special logic already handled above, so here we just
1241 1241 # skip it
1242 1242 if nsname == 'branches':
1243 1243 continue
1244 1244 # we will use the templatename as the color name since those two
1245 1245 # should be the same
1246 1246 for name in ns.names(self.repo, changenode):
1247 1247 self.ui.write(ns.logfmt % name,
1248 1248 label='log.%s' % ns.colorname)
1249 1249 if self.ui.debugflag:
1250 1250 # i18n: column positioning for "hg log"
1251 1251 self.ui.write(_("phase: %s\n") % ctx.phasestr(),
1252 1252 label='log.phase')
1253 1253 for pctx in scmutil.meaningfulparents(self.repo, ctx):
1254 1254 label = 'log.parent changeset.%s' % pctx.phasestr()
1255 1255 # i18n: column positioning for "hg log"
1256 1256 self.ui.write(_("parent: %d:%s\n")
1257 1257 % (pctx.rev(), hexfunc(pctx.node())),
1258 1258 label=label)
1259 1259
1260 1260 if self.ui.debugflag and rev is not None:
1261 1261 mnode = ctx.manifestnode()
1262 1262 # i18n: column positioning for "hg log"
1263 1263 self.ui.write(_("manifest: %d:%s\n") %
1264 1264 (self.repo.manifestlog._revlog.rev(mnode),
1265 1265 hex(mnode)),
1266 1266 label='ui.debug log.manifest')
1267 1267 # i18n: column positioning for "hg log"
1268 1268 self.ui.write(_("user: %s\n") % ctx.user(),
1269 1269 label='log.user')
1270 1270 # i18n: column positioning for "hg log"
1271 1271 self.ui.write(_("date: %s\n") % date,
1272 1272 label='log.date')
1273 1273
1274 1274 if ctx.troubled():
1275 1275 # i18n: column positioning for "hg log"
1276 1276 self.ui.write(_("trouble: %s\n") % ', '.join(ctx.troubles()),
1277 1277 label='log.trouble')
1278 1278
1279 1279 if self.ui.debugflag:
1280 1280 files = ctx.p1().status(ctx)[:3]
1281 1281 for key, value in zip([# i18n: column positioning for "hg log"
1282 1282 _("files:"),
1283 1283 # i18n: column positioning for "hg log"
1284 1284 _("files+:"),
1285 1285 # i18n: column positioning for "hg log"
1286 1286 _("files-:")], files):
1287 1287 if value:
1288 1288 self.ui.write("%-12s %s\n" % (key, " ".join(value)),
1289 1289 label='ui.debug log.files')
1290 1290 elif ctx.files() and self.ui.verbose:
1291 1291 # i18n: column positioning for "hg log"
1292 1292 self.ui.write(_("files: %s\n") % " ".join(ctx.files()),
1293 1293 label='ui.note log.files')
1294 1294 if copies and self.ui.verbose:
1295 1295 copies = ['%s (%s)' % c for c in copies]
1296 1296 # i18n: column positioning for "hg log"
1297 1297 self.ui.write(_("copies: %s\n") % ' '.join(copies),
1298 1298 label='ui.note log.copies')
1299 1299
1300 1300 extra = ctx.extra()
1301 1301 if extra and self.ui.debugflag:
1302 1302 for key, value in sorted(extra.items()):
1303 1303 # i18n: column positioning for "hg log"
1304 1304 self.ui.write(_("extra: %s=%s\n")
1305 1305 % (key, util.escapestr(value)),
1306 1306 label='ui.debug log.extra')
1307 1307
1308 1308 description = ctx.description().strip()
1309 1309 if description:
1310 1310 if self.ui.verbose:
1311 1311 self.ui.write(_("description:\n"),
1312 1312 label='ui.note log.description')
1313 1313 self.ui.write(description,
1314 1314 label='ui.note log.description')
1315 1315 self.ui.write("\n\n")
1316 1316 else:
1317 1317 # i18n: column positioning for "hg log"
1318 1318 self.ui.write(_("summary: %s\n") %
1319 1319 description.splitlines()[0],
1320 1320 label='log.summary')
1321 1321 self.ui.write("\n")
1322 1322
1323 1323 self.showpatch(ctx, matchfn)
1324 1324
1325 1325 def showpatch(self, ctx, matchfn):
1326 1326 if not matchfn:
1327 1327 matchfn = self.matchfn
1328 1328 if matchfn:
1329 1329 stat = self.diffopts.get('stat')
1330 1330 diff = self.diffopts.get('patch')
1331 1331 diffopts = patch.diffallopts(self.ui, self.diffopts)
1332 1332 node = ctx.node()
1333 1333 prev = ctx.p1().node()
1334 1334 if stat:
1335 1335 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1336 1336 match=matchfn, stat=True)
1337 1337 if diff:
1338 1338 if stat:
1339 1339 self.ui.write("\n")
1340 1340 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1341 1341 match=matchfn, stat=False)
1342 1342 self.ui.write("\n")
1343 1343
1344 1344 class jsonchangeset(changeset_printer):
1345 1345 '''format changeset information.'''
1346 1346
1347 1347 def __init__(self, ui, repo, matchfn, diffopts, buffered):
1348 1348 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1349 1349 self.cache = {}
1350 1350 self._first = True
1351 1351
1352 1352 def close(self):
1353 1353 if not self._first:
1354 1354 self.ui.write("\n]\n")
1355 1355 else:
1356 1356 self.ui.write("[]\n")
1357 1357
1358 1358 def _show(self, ctx, copies, matchfn, props):
1359 1359 '''show a single changeset or file revision'''
1360 1360 rev = ctx.rev()
1361 1361 if rev is None:
1362 1362 jrev = jnode = 'null'
1363 1363 else:
1364 1364 jrev = '%d' % rev
1365 1365 jnode = '"%s"' % hex(ctx.node())
1366 1366 j = encoding.jsonescape
1367 1367
1368 1368 if self._first:
1369 1369 self.ui.write("[\n {")
1370 1370 self._first = False
1371 1371 else:
1372 1372 self.ui.write(",\n {")
1373 1373
1374 1374 if self.ui.quiet:
1375 1375 self.ui.write(('\n "rev": %s') % jrev)
1376 1376 self.ui.write((',\n "node": %s') % jnode)
1377 1377 self.ui.write('\n }')
1378 1378 return
1379 1379
1380 1380 self.ui.write(('\n "rev": %s') % jrev)
1381 1381 self.ui.write((',\n "node": %s') % jnode)
1382 1382 self.ui.write((',\n "branch": "%s"') % j(ctx.branch()))
1383 1383 self.ui.write((',\n "phase": "%s"') % ctx.phasestr())
1384 1384 self.ui.write((',\n "user": "%s"') % j(ctx.user()))
1385 1385 self.ui.write((',\n "date": [%d, %d]') % ctx.date())
1386 1386 self.ui.write((',\n "desc": "%s"') % j(ctx.description()))
1387 1387
1388 1388 self.ui.write((',\n "bookmarks": [%s]') %
1389 1389 ", ".join('"%s"' % j(b) for b in ctx.bookmarks()))
1390 1390 self.ui.write((',\n "tags": [%s]') %
1391 1391 ", ".join('"%s"' % j(t) for t in ctx.tags()))
1392 1392 self.ui.write((',\n "parents": [%s]') %
1393 1393 ", ".join('"%s"' % c.hex() for c in ctx.parents()))
1394 1394
1395 1395 if self.ui.debugflag:
1396 1396 if rev is None:
1397 1397 jmanifestnode = 'null'
1398 1398 else:
1399 1399 jmanifestnode = '"%s"' % hex(ctx.manifestnode())
1400 1400 self.ui.write((',\n "manifest": %s') % jmanifestnode)
1401 1401
1402 1402 self.ui.write((',\n "extra": {%s}') %
1403 1403 ", ".join('"%s": "%s"' % (j(k), j(v))
1404 1404 for k, v in ctx.extra().items()))
1405 1405
1406 1406 files = ctx.p1().status(ctx)
1407 1407 self.ui.write((',\n "modified": [%s]') %
1408 1408 ", ".join('"%s"' % j(f) for f in files[0]))
1409 1409 self.ui.write((',\n "added": [%s]') %
1410 1410 ", ".join('"%s"' % j(f) for f in files[1]))
1411 1411 self.ui.write((',\n "removed": [%s]') %
1412 1412 ", ".join('"%s"' % j(f) for f in files[2]))
1413 1413
1414 1414 elif self.ui.verbose:
1415 1415 self.ui.write((',\n "files": [%s]') %
1416 1416 ", ".join('"%s"' % j(f) for f in ctx.files()))
1417 1417
1418 1418 if copies:
1419 1419 self.ui.write((',\n "copies": {%s}') %
1420 1420 ", ".join('"%s": "%s"' % (j(k), j(v))
1421 1421 for k, v in copies))
1422 1422
1423 1423 matchfn = self.matchfn
1424 1424 if matchfn:
1425 1425 stat = self.diffopts.get('stat')
1426 1426 diff = self.diffopts.get('patch')
1427 1427 diffopts = patch.difffeatureopts(self.ui, self.diffopts, git=True)
1428 1428 node, prev = ctx.node(), ctx.p1().node()
1429 1429 if stat:
1430 1430 self.ui.pushbuffer()
1431 1431 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1432 1432 match=matchfn, stat=True)
1433 1433 self.ui.write((',\n "diffstat": "%s"')
1434 1434 % j(self.ui.popbuffer()))
1435 1435 if diff:
1436 1436 self.ui.pushbuffer()
1437 1437 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1438 1438 match=matchfn, stat=False)
1439 1439 self.ui.write((',\n "diff": "%s"') % j(self.ui.popbuffer()))
1440 1440
1441 1441 self.ui.write("\n }")
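# Taken together, _show() and close() emit a JSON array with one object per
# displayed changeset. In quiet mode each object carries only "rev" and
# "node"; if nothing was shown at all, close() writes "[]" instead.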
1442 1442
1443 1443 class changeset_templater(changeset_printer):
1444 1444 '''format changeset information.'''
1445 1445
1446 1446 def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
1447 1447 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1448 1448 assert not (tmpl and mapfile)
1449 1449 defaulttempl = templatekw.defaulttempl
1450 1450 if mapfile:
1451 1451 self.t = templater.templater.frommapfile(mapfile,
1452 1452 cache=defaulttempl)
1453 1453 else:
1454 1454 self.t = formatter.maketemplater(ui, 'changeset', tmpl,
1455 1455 cache=defaulttempl)
1456 1456
1457 1457 self._counter = itertools.count()
1458 1458 self.cache = {}
1459 1459
1460 1460 # find correct templates for current mode
1461 1461 tmplmodes = [
1462 1462 (True, None),
1463 1463 (self.ui.verbose, 'verbose'),
1464 1464 (self.ui.quiet, 'quiet'),
1465 1465 (self.ui.debugflag, 'debug'),
1466 1466 ]
1467 1467
1468 1468 self._parts = {'header': '', 'footer': '', 'changeset': 'changeset',
1469 1469 'docheader': '', 'docfooter': ''}
1470 1470 for mode, postfix in tmplmodes:
1471 1471 for t in self._parts:
1472 1472 cur = t
1473 1473 if postfix:
1474 1474 cur += "_" + postfix
1475 1475 if mode and cur in self.t:
1476 1476 self._parts[t] = cur
1477 1477
1478 1478 if self._parts['docheader']:
1479 1479 self.ui.write(templater.stringify(self.t(self._parts['docheader'])))
1480 1480
1481 1481 def close(self):
1482 1482 if self._parts['docfooter']:
1483 1483 if not self.footer:
1484 1484 self.footer = ""
1485 1485 self.footer += templater.stringify(self.t(self._parts['docfooter']))
1486 1486 return super(changeset_templater, self).close()
1487 1487
1488 1488 def _show(self, ctx, copies, matchfn, props):
1489 1489 '''show a single changeset or file revision'''
1490 1490 props = props.copy()
1491 1491 props.update(templatekw.keywords)
1492 1492 props['templ'] = self.t
1493 1493 props['ctx'] = ctx
1494 1494 props['repo'] = self.repo
1495 1495 props['ui'] = self.repo.ui
1496 1496 props['index'] = next(self._counter)
1497 1497 props['revcache'] = {'copies': copies}
1498 1498 props['cache'] = self.cache
1499 1499 props = pycompat.strkwargs(props)
1500 1500
1501 1501 # write header
1502 1502 if self._parts['header']:
1503 1503 h = templater.stringify(self.t(self._parts['header'], **props))
1504 1504 if self.buffered:
1505 1505 self.header[ctx.rev()] = h
1506 1506 else:
1507 1507 if self.lastheader != h:
1508 1508 self.lastheader = h
1509 1509 self.ui.write(h)
1510 1510
1511 1511 # write changeset metadata, then patch if requested
1512 1512 key = self._parts['changeset']
1513 1513 self.ui.write(templater.stringify(self.t(key, **props)))
1514 1514 self.showpatch(ctx, matchfn)
1515 1515
1516 1516 if self._parts['footer']:
1517 1517 if not self.footer:
1518 1518 self.footer = templater.stringify(
1519 1519 self.t(self._parts['footer'], **props))
1520 1520
1521 1521 def gettemplate(ui, tmpl, style):
1522 1522 """
1523 1523 Find the template matching the given template spec or style.
1524 1524 """
1525 1525
1526 1526 # ui settings
1527 1527 if not tmpl and not style: # templates are stronger than styles
1528 1528 tmpl = ui.config('ui', 'logtemplate')
1529 1529 if tmpl:
1530 1530 return templater.unquotestring(tmpl), None
1531 1531 else:
1532 1532 style = util.expandpath(ui.config('ui', 'style', ''))
1533 1533
1534 1534 if not tmpl and style:
1535 1535 mapfile = style
1536 1536 if not os.path.split(mapfile)[0]:
1537 1537 mapname = (templater.templatepath('map-cmdline.' + mapfile)
1538 1538 or templater.templatepath(mapfile))
1539 1539 if mapname:
1540 1540 mapfile = mapname
1541 1541 return None, mapfile
1542 1542
1543 1543 if not tmpl:
1544 1544 return None, None
1545 1545
1546 1546 return formatter.lookuptemplate(ui, 'changeset', tmpl)
1547 1547
1548 1548 def show_changeset(ui, repo, opts, buffered=False):
1549 1549 """show one changeset using template or regular display.
1550 1550
1551 1551 Display format will be the first non-empty hit of:
1552 1552 1. option 'template'
1553 1553 2. option 'style'
1554 1554 3. [ui] setting 'logtemplate'
1555 1555 4. [ui] setting 'style'
1556 1556 If all of these values are either unset or the empty string,
1557 1557 regular display via changeset_printer() is done.
1558 1558 """
1559 1559 # options
1560 1560 matchfn = None
1561 1561 if opts.get('patch') or opts.get('stat'):
1562 1562 matchfn = scmutil.matchall(repo)
1563 1563
1564 1564 if opts.get('template') == 'json':
1565 1565 return jsonchangeset(ui, repo, matchfn, opts, buffered)
1566 1566
1567 1567 tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))
1568 1568
1569 1569 if not tmpl and not mapfile:
1570 1570 return changeset_printer(ui, repo, matchfn, opts, buffered)
1571 1571
1572 1572 return changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile, buffered)
1573 1573
1574 1574 def showmarker(fm, marker, index=None):
1575 1575 """utility function to display obsolescence marker in a readable way
1576 1576
1577 1577 To be used by debug function."""
1578 1578 if index is not None:
1579 1579 fm.write('index', '%i ', index)
1580 1580 fm.write('precnode', '%s ', hex(marker.precnode()))
1581 1581 succs = marker.succnodes()
1582 1582 fm.condwrite(succs, 'succnodes', '%s ',
1583 1583 fm.formatlist(map(hex, succs), name='node'))
1584 1584 fm.write('flag', '%X ', marker.flags())
1585 1585 parents = marker.parentnodes()
1586 1586 if parents is not None:
1587 1587 fm.write('parentnodes', '{%s} ',
1588 1588 fm.formatlist(map(hex, parents), name='node', sep=', '))
1589 1589 fm.write('date', '(%s) ', fm.formatdate(marker.date()))
1590 1590 meta = marker.metadata().copy()
1591 1591 meta.pop('date', None)
1592 1592 fm.write('metadata', '{%s}', fm.formatdict(meta, fmt='%r: %r', sep=', '))
1593 1593 fm.plain('\n')
1594 1594
1595 1595 def finddate(ui, repo, date):
1596 1596 """Find the tipmost changeset that matches the given date spec"""
1597 1597
1598 1598 df = util.matchdate(date)
1599 1599 m = scmutil.matchall(repo)
1600 1600 results = {}
1601 1601
1602 1602 def prep(ctx, fns):
1603 1603 d = ctx.date()
1604 1604 if df(d[0]):
1605 1605 results[ctx.rev()] = d
1606 1606
1607 1607 for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
1608 1608 rev = ctx.rev()
1609 1609 if rev in results:
1610 1610 ui.status(_("found revision %s from %s\n") %
1611 1611 (rev, util.datestr(results[rev])))
1612 1612 return '%d' % rev
1613 1613
1614 1614 raise error.Abort(_("revision matching date not found"))
1615 1615
1616 1616 def increasingwindows(windowsize=8, sizelimit=512):
1617 1617 while True:
1618 1618 yield windowsize
1619 1619 if windowsize < sizelimit:
1620 1620 windowsize *= 2
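# The generator above yields 8, 16, 32, ..., doubling until the 512 cap is
# reached, after which it keeps yielding 512 for the rest of the walk.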
1621 1621
1622 1622 class FileWalkError(Exception):
1623 1623 pass
1624 1624
1625 1625 def walkfilerevs(repo, match, follow, revs, fncache):
1626 1626 '''Walks the file history for the matched files.
1627 1627
1628 1628 Returns the changeset revs that are involved in the file history.
1629 1629
1630 1630 Throws FileWalkError if the file history can't be walked using
1631 1631 filelogs alone.
1632 1632 '''
1633 1633 wanted = set()
1634 1634 copies = []
1635 1635 minrev, maxrev = min(revs), max(revs)
1636 1636 def filerevgen(filelog, last):
1637 1637 """
1638 1638 Only files, no patterns. Check the history of each file.
1639 1639
1640 1640 Examines filelog entries within the minrev..maxrev linkrev range.
1641 1641 Returns an iterator yielding (linkrev, parentlinkrevs, copied)
1642 1642 tuples in backwards (newest-first) order.
1643 1643 """
1644 1644 cl_count = len(repo)
1645 1645 revs = []
1646 1646 for j in xrange(0, last + 1):
1647 1647 linkrev = filelog.linkrev(j)
1648 1648 if linkrev < minrev:
1649 1649 continue
1650 1650 # only yield revs for which we have the changelog; this can
1651 1651 # happen while doing "hg log" during a pull or commit
1652 1652 if linkrev >= cl_count:
1653 1653 break
1654 1654
1655 1655 parentlinkrevs = []
1656 1656 for p in filelog.parentrevs(j):
1657 1657 if p != nullrev:
1658 1658 parentlinkrevs.append(filelog.linkrev(p))
1659 1659 n = filelog.node(j)
1660 1660 revs.append((linkrev, parentlinkrevs,
1661 1661 follow and filelog.renamed(n)))
1662 1662
1663 1663 return reversed(revs)
1664 1664 def iterfiles():
1665 1665 pctx = repo['.']
1666 1666 for filename in match.files():
1667 1667 if follow:
1668 1668 if filename not in pctx:
1669 1669 raise error.Abort(_('cannot follow file not in parent '
1670 1670 'revision: "%s"') % filename)
1671 1671 yield filename, pctx[filename].filenode()
1672 1672 else:
1673 1673 yield filename, None
1674 1674 for filename_node in copies:
1675 1675 yield filename_node
1676 1676
1677 1677 for file_, node in iterfiles():
1678 1678 filelog = repo.file(file_)
1679 1679 if not len(filelog):
1680 1680 if node is None:
1681 1681 # A zero count may be a directory or deleted file, so
1682 1682 # try to find matching entries on the slow path.
1683 1683 if follow:
1684 1684 raise error.Abort(
1685 1685 _('cannot follow nonexistent file: "%s"') % file_)
1686 1686 raise FileWalkError("Cannot walk via filelog")
1687 1687 else:
1688 1688 continue
1689 1689
1690 1690 if node is None:
1691 1691 last = len(filelog) - 1
1692 1692 else:
1693 1693 last = filelog.rev(node)
1694 1694
1695 1695 # keep track of all ancestors of the file
1696 1696 ancestors = {filelog.linkrev(last)}
1697 1697
1698 1698 # iterate from latest to oldest revision
1699 1699 for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
1700 1700 if not follow:
1701 1701 if rev > maxrev:
1702 1702 continue
1703 1703 else:
1704 1704 # Note that last might not be the first interesting
1705 1705 # rev to us:
1706 1706 # if the file has been changed after maxrev, we'll
1707 1707 # have linkrev(last) > maxrev, and we still need
1708 1708 # to explore the file graph
1709 1709 if rev not in ancestors:
1710 1710 continue
1711 1711 # XXX insert 1327 fix here
1712 1712 if flparentlinkrevs:
1713 1713 ancestors.update(flparentlinkrevs)
1714 1714
1715 1715 fncache.setdefault(rev, []).append(file_)
1716 1716 wanted.add(rev)
1717 1717 if copied:
1718 1718 copies.append(copied)
1719 1719
1720 1720 return wanted
1721 1721
1722 1722 class _followfilter(object):
1723 1723 def __init__(self, repo, onlyfirst=False):
1724 1724 self.repo = repo
1725 1725 self.startrev = nullrev
1726 1726 self.roots = set()
1727 1727 self.onlyfirst = onlyfirst
1728 1728
1729 1729 def match(self, rev):
1730 1730 def realparents(rev):
1731 1731 if self.onlyfirst:
1732 1732 return self.repo.changelog.parentrevs(rev)[0:1]
1733 1733 else:
1734 1734 return filter(lambda x: x != nullrev,
1735 1735 self.repo.changelog.parentrevs(rev))
1736 1736
1737 1737 if self.startrev == nullrev:
1738 1738 self.startrev = rev
1739 1739 return True
1740 1740
1741 1741 if rev > self.startrev:
1742 1742 # forward: all descendants
1743 1743 if not self.roots:
1744 1744 self.roots.add(self.startrev)
1745 1745 for parent in realparents(rev):
1746 1746 if parent in self.roots:
1747 1747 self.roots.add(rev)
1748 1748 return True
1749 1749 else:
1750 1750 # backwards: all parents
1751 1751 if not self.roots:
1752 1752 self.roots.update(realparents(self.startrev))
1753 1753 if rev in self.roots:
1754 1754 self.roots.remove(rev)
1755 1755 self.roots.update(realparents(rev))
1756 1756 return True
1757 1757
1758 1758 return False
1759 1759
1760 1760 def walkchangerevs(repo, match, opts, prepare):
1761 1761 '''Iterate over files and the revs in which they changed.
1762 1762
1763 1763 Callers most commonly need to iterate backwards over the history
1764 1764 in which they are interested. Doing so has awful (quadratic-looking)
1765 1765 performance, so we use iterators in a "windowed" way.
1766 1766
1767 1767 We walk a window of revisions in the desired order. Within the
1768 1768 window, we first walk forwards to gather data, then in the desired
1769 1769 order (usually backwards) to display it.
1770 1770
1771 1771 This function returns an iterator yielding contexts. Before
1772 1772 yielding each context, the iterator will first call the prepare
1773 1773 function on each context in the window in forward order.'''
1774 1774
1775 1775 follow = opts.get('follow') or opts.get('follow_first')
1776 1776 revs = _logrevs(repo, opts)
1777 1777 if not revs:
1778 1778 return []
1779 1779 wanted = set()
1780 1780 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1781 1781 opts.get('removed'))
1782 1782 fncache = {}
1783 1783 change = repo.changectx
1784 1784
1785 1785 # First step is to fill wanted, the set of revisions that we want to yield.
1786 1786 # When it does not induce extra cost, we also fill fncache for revisions in
1787 1787 # wanted: a cache of filenames that were changed (ctx.files()) and that
1788 1788 # match the file filtering conditions.
1789 1789
1790 1790 if match.always():
1791 1791 # No files, no patterns. Display all revs.
1792 1792 wanted = revs
1793 1793 elif not slowpath:
1794 1794 # We only have to read through the filelog to find wanted revisions
1795 1795
1796 1796 try:
1797 1797 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1798 1798 except FileWalkError:
1799 1799 slowpath = True
1800 1800
1801 1801 # We decided to fall back to the slowpath because at least one
1802 1802 # of the paths was not a file. Check to see if at least one of them
1803 1803 # existed in history, otherwise simply return
1804 1804 for path in match.files():
1805 1805 if path == '.' or path in repo.store:
1806 1806 break
1807 1807 else:
1808 1808 return []
1809 1809
1810 1810 if slowpath:
1811 1811 # We have to read the changelog to match filenames against
1812 1812 # changed files
1813 1813
1814 1814 if follow:
1815 1815 raise error.Abort(_('can only follow copies/renames for explicit '
1816 1816 'filenames'))
1817 1817
1818 1818 # The slow path checks files modified in every changeset.
1819 1819 # This is really slow on large repos, so compute the set lazily.
1820 1820 class lazywantedset(object):
1821 1821 def __init__(self):
1822 1822 self.set = set()
1823 1823 self.revs = set(revs)
1824 1824
1825 1825 # No need to worry about locality here because it will be accessed
1826 1826 # in the same order as the increasing window below.
1827 1827 def __contains__(self, value):
1828 1828 if value in self.set:
1829 1829 return True
1830 1830 elif not value in self.revs:
1831 1831 return False
1832 1832 else:
1833 1833 self.revs.discard(value)
1834 1834 ctx = change(value)
1835 1835 matches = filter(match, ctx.files())
1836 1836 if matches:
1837 1837 fncache[value] = matches
1838 1838 self.set.add(value)
1839 1839 return True
1840 1840 return False
1841 1841
1842 1842 def discard(self, value):
1843 1843 self.revs.discard(value)
1844 1844 self.set.discard(value)
1845 1845
1846 1846 wanted = lazywantedset()
1847 1847
1848 1848 # it might be worthwhile to do this in the iterator if the rev range
1849 1849 # is descending and the prune args are all within that range
1850 1850 for rev in opts.get('prune', ()):
1851 1851 rev = repo[rev].rev()
1852 1852 ff = _followfilter(repo)
1853 1853 stop = min(revs[0], revs[-1])
1854 1854 for x in xrange(rev, stop - 1, -1):
1855 1855 if ff.match(x):
1856 1856 wanted = wanted - [x]
1857 1857
1858 1858 # Now that wanted is correctly initialized, we can iterate over the
1859 1859 # revision range, yielding only revisions in wanted.
1860 1860 def iterate():
1861 1861 if follow and match.always():
1862 1862 ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
1863 1863 def want(rev):
1864 1864 return ff.match(rev) and rev in wanted
1865 1865 else:
1866 1866 def want(rev):
1867 1867 return rev in wanted
1868 1868
1869 1869 it = iter(revs)
1870 1870 stopiteration = False
1871 1871 for windowsize in increasingwindows():
1872 1872 nrevs = []
1873 1873 for i in xrange(windowsize):
1874 1874 rev = next(it, None)
1875 1875 if rev is None:
1876 1876 stopiteration = True
1877 1877 break
1878 1878 elif want(rev):
1879 1879 nrevs.append(rev)
1880 1880 for rev in sorted(nrevs):
1881 1881 fns = fncache.get(rev)
1882 1882 ctx = change(rev)
1883 1883 if not fns:
1884 1884 def fns_generator():
1885 1885 for f in ctx.files():
1886 1886 if match(f):
1887 1887 yield f
1888 1888 fns = fns_generator()
1889 1889 prepare(ctx, fns)
1890 1890 for rev in nrevs:
1891 1891 yield change(rev)
1892 1892
1893 1893 if stopiteration:
1894 1894 break
1895 1895
1896 1896 return iterate()
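# A hypothetical caller of walkchangerevs(), in the same style as finddate()
# above: the prepare callback runs on every context in a window before the
# contexts are yielded, so per-revision data can be collected there.
#
#     def matchedrevs(repo, m):
#         hits = []
#         def prep(ctx, fns):
#             hits.append(ctx.rev())
#         for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
#             pass
#         return hits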
1897 1897
1898 1898 def _makefollowlogfilematcher(repo, files, followfirst):
1899 1899 # When displaying a revision with --patch --follow FILE, we have
1900 1900 # to know which file of the revision must be diffed. With
1901 1901 # --follow, we want the names of the ancestors of FILE in the
1902 1902 # revision, stored in "fcache". "fcache" is populated by
1903 1903 # reproducing the graph traversal already done by --follow revset
1904 1904 # and relating revs to file names (which is not "correct" but
1905 1905 # good enough).
1906 1906 fcache = {}
1907 1907 fcacheready = [False]
1908 1908 pctx = repo['.']
1909 1909
1910 1910 def populate():
1911 1911 for fn in files:
1912 1912 fctx = pctx[fn]
1913 1913 fcache.setdefault(fctx.introrev(), set()).add(fctx.path())
1914 1914 for c in fctx.ancestors(followfirst=followfirst):
1915 1915 fcache.setdefault(c.rev(), set()).add(c.path())
1916 1916
1917 1917 def filematcher(rev):
1918 1918 if not fcacheready[0]:
1919 1919 # Lazy initialization
1920 1920 fcacheready[0] = True
1921 1921 populate()
1922 1922 return scmutil.matchfiles(repo, fcache.get(rev, []))
1923 1923
1924 1924 return filematcher
1925 1925
1926 1926 def _makenofollowlogfilematcher(repo, pats, opts):
1927 1927 '''hook for extensions to override the filematcher for non-follow cases'''
1928 1928 return None
1929 1929
1930 1930 def _makelogrevset(repo, pats, opts, revs):
1931 1931 """Return (expr, filematcher) where expr is a revset string built
1932 1932 from log options and file patterns or None. If --stat or --patch
1933 1933 are not passed, filematcher is None. Otherwise it is a callable
1934 1934 taking a revision number and returning a match object filtering
1935 1935 the files to be detailed when displaying the revision.
1936 1936 """
1937 1937 opt2revset = {
1938 1938 'no_merges': ('not merge()', None),
1939 1939 'only_merges': ('merge()', None),
1940 1940 '_ancestors': ('ancestors(%(val)s)', None),
1941 1941 '_fancestors': ('_firstancestors(%(val)s)', None),
1942 1942 '_descendants': ('descendants(%(val)s)', None),
1943 1943 '_fdescendants': ('_firstdescendants(%(val)s)', None),
1944 1944 '_matchfiles': ('_matchfiles(%(val)s)', None),
1945 1945 'date': ('date(%(val)r)', None),
1946 1946 'branch': ('branch(%(val)r)', ' or '),
1947 1947 '_patslog': ('filelog(%(val)r)', ' or '),
1948 1948 '_patsfollow': ('follow(%(val)r)', ' or '),
1949 1949 '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
1950 1950 'keyword': ('keyword(%(val)r)', ' or '),
1951 1951 'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
1952 1952 'user': ('user(%(val)r)', ' or '),
1953 1953 }
1954 1954
1955 1955 opts = dict(opts)
1956 1956 # follow or not follow?
1957 1957 follow = opts.get('follow') or opts.get('follow_first')
1958 1958 if opts.get('follow_first'):
1959 1959 followfirst = 1
1960 1960 else:
1961 1961 followfirst = 0
1962 1962 # --follow with FILE behavior depends on revs...
1963 1963 it = iter(revs)
1964 1964 startrev = next(it)
1965 1965 followdescendants = startrev < next(it, startrev)
1966 1966
1967 1967 # branch and only_branch are really aliases and must be handled at
1968 1968 # the same time
1969 1969 opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
1970 1970 opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
1971 1971 # pats/include/exclude are passed to match.match() directly in
1972 1972 # _matchfiles() revset but walkchangerevs() builds its matcher with
1973 1973 # scmutil.match(). The difference is input pats are globbed on
1974 1974 # platforms without shell expansion (windows).
1975 1975 wctx = repo[None]
1976 1976 match, pats = scmutil.matchandpats(wctx, pats, opts)
1977 1977 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1978 1978 opts.get('removed'))
1979 1979 if not slowpath:
1980 1980 for f in match.files():
1981 1981 if follow and f not in wctx:
1982 1982 # If the file exists, it may be a directory, so let it
1983 1983 # take the slow path.
1984 1984 if os.path.exists(repo.wjoin(f)):
1985 1985 slowpath = True
1986 1986 continue
1987 1987 else:
1988 1988 raise error.Abort(_('cannot follow file not in parent '
1989 1989 'revision: "%s"') % f)
1990 1990 filelog = repo.file(f)
1991 1991 if not filelog:
1992 1992 # A zero count may be a directory or deleted file, so
1993 1993 # try to find matching entries on the slow path.
1994 1994 if follow:
1995 1995 raise error.Abort(
1996 1996 _('cannot follow nonexistent file: "%s"') % f)
1997 1997 slowpath = True
1998 1998
1999 1999 # We decided to fall back to the slowpath because at least one
2000 2000 # of the paths was not a file. Check to see if at least one of them
2001 2001 # existed in history - in that case, we'll continue down the
2002 2002 # slowpath; otherwise, we can turn off the slowpath
2003 2003 if slowpath:
2004 2004 for path in match.files():
2005 2005 if path == '.' or path in repo.store:
2006 2006 break
2007 2007 else:
2008 2008 slowpath = False
2009 2009
2010 2010 fpats = ('_patsfollow', '_patsfollowfirst')
2011 2011 fnopats = (('_ancestors', '_fancestors'),
2012 2012 ('_descendants', '_fdescendants'))
2013 2013 if slowpath:
2014 2014 # See walkchangerevs() slow path.
2015 2015 #
2016 2016 # pats/include/exclude cannot be represented as separate
2017 2017 # revset expressions as their filtering logic applies at file
2018 2018 # level. For instance "-I a -X a" matches a revision touching
2019 2019 # "a" and "b" while "file(a) and not file(b)" does
2020 2020 # not. Besides, filesets are evaluated against the working
2021 2021 # directory.
2022 2022 matchargs = ['r:', 'd:relpath']
2023 2023 for p in pats:
2024 2024 matchargs.append('p:' + p)
2025 2025 for p in opts.get('include', []):
2026 2026 matchargs.append('i:' + p)
2027 2027 for p in opts.get('exclude', []):
2028 2028 matchargs.append('x:' + p)
2029 2029 matchargs = ','.join(('%r' % p) for p in matchargs)
2030 2030 opts['_matchfiles'] = matchargs
2031 2031 if follow:
2032 2032 opts[fnopats[0][followfirst]] = '.'
2033 2033 else:
2034 2034 if follow:
2035 2035 if pats:
2036 2036 # follow() revset interprets its file argument as a
2037 2037 # manifest entry, so use match.files(), not pats.
2038 2038 opts[fpats[followfirst]] = list(match.files())
2039 2039 else:
2040 2040 op = fnopats[followdescendants][followfirst]
2041 2041 opts[op] = 'rev(%d)' % startrev
2042 2042 else:
2043 2043 opts['_patslog'] = list(pats)
2044 2044
2045 2045 filematcher = None
2046 2046 if opts.get('patch') or opts.get('stat'):
2047 2047 # When following files, track renames via a special matcher.
2048 2048 # If we're forced to take the slowpath it means we're following
2049 2049 # at least one pattern/directory, so don't bother with rename tracking.
2050 2050 if follow and not match.always() and not slowpath:
2051 2051 # _makefollowlogfilematcher expects its files argument to be
2052 2052 # relative to the repo root, so use match.files(), not pats.
2053 2053 filematcher = _makefollowlogfilematcher(repo, match.files(),
2054 2054 followfirst)
2055 2055 else:
2056 2056 filematcher = _makenofollowlogfilematcher(repo, pats, opts)
2057 2057 if filematcher is None:
2058 2058 filematcher = lambda rev: match
2059 2059
2060 2060 expr = []
2061 2061 for op, val in sorted(opts.iteritems()):
2062 2062 if not val:
2063 2063 continue
2064 2064 if op not in opt2revset:
2065 2065 continue
2066 2066 revop, andor = opt2revset[op]
2067 2067 if '%(val)' not in revop:
2068 2068 expr.append(revop)
2069 2069 else:
2070 2070 if not isinstance(val, list):
2071 2071 e = revop % {'val': val}
2072 2072 else:
2073 2073 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
2074 2074 expr.append(e)
2075 2075
2076 2076 if expr:
2077 2077 expr = '(' + ' and '.join(expr) + ')'
2078 2078 else:
2079 2079 expr = None
2080 2080 return expr, filematcher
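# For instance, ``hg log -k bug -u alice`` with no file patterns should yield
# roughly expr == "((keyword('bug')) and (user('alice')))", and filematcher
# stays None because neither --stat nor --patch was given.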
2081 2081
2082 2082 def _logrevs(repo, opts):
2083 2083 # Default --rev value depends on --follow but --follow behavior
2084 2084 # depends on revisions resolved from --rev...
2085 2085 follow = opts.get('follow') or opts.get('follow_first')
2086 2086 if opts.get('rev'):
2087 2087 revs = scmutil.revrange(repo, opts['rev'])
2088 2088 elif follow and repo.dirstate.p1() == nullid:
2089 2089 revs = smartset.baseset()
2090 2090 elif follow:
2091 2091 revs = repo.revs('reverse(:.)')
2092 2092 else:
2093 2093 revs = smartset.spanset(repo)
2094 2094 revs.reverse()
2095 2095 return revs
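# In other words: plain ``hg log`` walks every revision newest-first,
# ``hg log -f`` walks ``reverse(:.)``, and an explicit --rev takes precedence
# over both.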
2096 2096
2097 2097 def getgraphlogrevs(repo, pats, opts):
2098 2098 """Return (revs, expr, filematcher) where revs is an iterable of
2099 2099 revision numbers, expr is a revset string built from log options
2100 2100 and file patterns or None, and used to filter 'revs'. If --stat or
2101 2101 --patch are not passed, filematcher is None. Otherwise it is a
2102 2102 callable taking a revision number and returning a match object
2103 2103 filtering the files to be detailed when displaying the revision.
2104 2104 """
2105 2105 limit = loglimit(opts)
2106 2106 revs = _logrevs(repo, opts)
2107 2107 if not revs:
2108 2108 return smartset.baseset(), None, None
2109 2109 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2110 2110 if opts.get('rev'):
2111 2111 # User-specified revs might be unsorted, but don't sort before
2112 2112 # _makelogrevset because it might depend on the order of revs
2113 2113 if not (revs.isdescending() or revs.istopo()):
2114 2114 revs.sort(reverse=True)
2115 2115 if expr:
2116 2116 matcher = revset.match(repo.ui, expr, order=revset.followorder)
2117 2117 revs = matcher(repo, revs)
2118 2118 if limit is not None:
2119 2119 limitedrevs = []
2120 2120 for idx, rev in enumerate(revs):
2121 2121 if idx >= limit:
2122 2122 break
2123 2123 limitedrevs.append(rev)
2124 2124 revs = smartset.baseset(limitedrevs)
2125 2125
2126 2126 return revs, expr, filematcher
2127 2127
2128 2128 def getlogrevs(repo, pats, opts):
2129 2129 """Return (revs, expr, filematcher) where revs is an iterable of
2130 2130 revision numbers, expr is a revset string built from log options
2131 2131 and file patterns or None, and used to filter 'revs'. If --stat or
2132 2132 --patch are not passed, filematcher is None. Otherwise it is a
2133 2133 callable taking a revision number and returning a match object
2134 2134 filtering the files to be detailed when displaying the revision.
2135 2135 """
2136 2136 limit = loglimit(opts)
2137 2137 revs = _logrevs(repo, opts)
2138 2138 if not revs:
2139 2139 return smartset.baseset([]), None, None
2140 2140 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2141 2141 if expr:
2142 2142 matcher = revset.match(repo.ui, expr, order=revset.followorder)
2143 2143 revs = matcher(repo, revs)
2144 2144 if limit is not None:
2145 2145 limitedrevs = []
2146 2146 for idx, r in enumerate(revs):
2147 2147 if limit <= idx:
2148 2148 break
2149 2149 limitedrevs.append(r)
2150 2150 revs = smartset.baseset(limitedrevs)
2151 2151
2152 2152 return revs, expr, filematcher
2153 2153
2154 2154 def _graphnodeformatter(ui, displayer):
2155 2155 spec = ui.config('ui', 'graphnodetemplate')
2156 2156 if not spec:
2157 2157 return templatekw.showgraphnode # fast path for "{graphnode}"
2158 2158
2159 2159 spec = templater.unquotestring(spec)
2160 2160 templ = formatter.gettemplater(ui, 'graphnode', spec)
2161 2161 cache = {}
2162 2162 if isinstance(displayer, changeset_templater):
2163 2163 cache = displayer.cache # reuse cache of slow templates
2164 2164 props = templatekw.keywords.copy()
2165 2165 props['templ'] = templ
2166 2166 props['cache'] = cache
2167 2167 def formatnode(repo, ctx):
2168 2168 props['ctx'] = ctx
2169 2169 props['repo'] = repo
2170 2170 props['ui'] = repo.ui
2171 2171 props['revcache'] = {}
2172 2172 return templater.stringify(templ('graphnode', **props))
2173 2173 return formatnode
2174 2174
2175 2175 def displaygraph(ui, repo, dag, displayer, edgefn, getrenamed=None,
2176 2176 filematcher=None):
2177 2177 formatnode = _graphnodeformatter(ui, displayer)
2178 2178 state = graphmod.asciistate()
2179 2179 styles = state['styles']
2180 2180
2181 2181 # only set graph styling if HGPLAIN is not set.
2182 2182 if ui.plain('graph'):
2183 2183 # set all edge styles to |, the default pre-3.8 behaviour
2184 2184 styles.update(dict.fromkeys(styles, '|'))
2185 2185 else:
2186 2186 edgetypes = {
2187 2187 'parent': graphmod.PARENT,
2188 2188 'grandparent': graphmod.GRANDPARENT,
2189 2189 'missing': graphmod.MISSINGPARENT
2190 2190 }
2191 2191 for name, key in edgetypes.items():
2192 2192 # experimental config: experimental.graphstyle.*
2193 2193 styles[key] = ui.config('experimental', 'graphstyle.%s' % name,
2194 2194 styles[key])
2195 2195 if not styles[key]:
2196 2196 styles[key] = None
2197 2197
2198 2198 # experimental config: experimental.graphshorten
2199 2199 state['graphshorten'] = ui.configbool('experimental', 'graphshorten')
2200 2200
2201 2201 for rev, type, ctx, parents in dag:
2202 2202 char = formatnode(repo, ctx)
2203 2203 copies = None
2204 2204 if getrenamed and ctx.rev():
2205 2205 copies = []
2206 2206 for fn in ctx.files():
2207 2207 rename = getrenamed(fn, ctx.rev())
2208 2208 if rename:
2209 2209 copies.append((fn, rename[0]))
2210 2210 revmatchfn = None
2211 2211 if filematcher is not None:
2212 2212 revmatchfn = filematcher(ctx.rev())
2213 2213 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
2214 2214 lines = displayer.hunk.pop(rev).split('\n')
2215 2215 if not lines[-1]:
2216 2216 del lines[-1]
2217 2217 displayer.flush(ctx)
2218 2218 edges = edgefn(type, char, lines, state, rev, parents)
2219 2219 for type, char, lines, coldata in edges:
2220 2220 graphmod.ascii(ui, state, type, char, lines, coldata)
2221 2221 displayer.close()
2222 2222
2223 2223 def graphlog(ui, repo, pats, opts):
2224 2224 # Parameters are identical to log command ones
2225 2225 revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
2226 2226 revdag = graphmod.dagwalker(repo, revs)
2227 2227
2228 2228 getrenamed = None
2229 2229 if opts.get('copies'):
2230 2230 endrev = None
2231 2231 if opts.get('rev'):
2232 2232 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
2233 2233 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
2234 2234
2235 2235 ui.pager('log')
2236 2236 displayer = show_changeset(ui, repo, opts, buffered=True)
2237 2237 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed,
2238 2238 filematcher)
2239 2239
2240 2240 def checkunsupportedgraphflags(pats, opts):
2241 2241 for op in ["newest_first"]:
2242 2242 if op in opts and opts[op]:
2243 2243 raise error.Abort(_("-G/--graph option is incompatible with --%s")
2244 2244 % op.replace("_", "-"))
2245 2245
2246 2246 def graphrevs(repo, nodes, opts):
2247 2247 limit = loglimit(opts)
2248 2248 nodes.reverse()
2249 2249 if limit is not None:
2250 2250 nodes = nodes[:limit]
2251 2251 return graphmod.nodes(repo, nodes)
2252 2252
2253 2253 def add(ui, repo, match, prefix, explicitonly, **opts):
2254 2254 join = lambda f: os.path.join(prefix, f)
2255 2255 bad = []
2256 2256
2257 2257 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2258 2258 names = []
2259 2259 wctx = repo[None]
2260 2260 cca = None
2261 2261 abort, warn = scmutil.checkportabilityalert(ui)
2262 2262 if abort or warn:
2263 2263 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2264 2264
2265 2265 badmatch = matchmod.badmatch(match, badfn)
2266 2266 dirstate = repo.dirstate
2267 2267 # We don't want to just call wctx.walk here, since it would return a lot of
2268 2268 # clean files, which we aren't interested in, and doing so takes time.
2269 2269 for f in sorted(dirstate.walk(badmatch, sorted(wctx.substate),
2270 2270 True, False, full=False)):
2271 2271 exact = match.exact(f)
2272 2272 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2273 2273 if cca:
2274 2274 cca(f)
2275 2275 names.append(f)
2276 2276 if ui.verbose or not exact:
2277 2277 ui.status(_('adding %s\n') % match.rel(f))
2278 2278
2279 2279 for subpath in sorted(wctx.substate):
2280 2280 sub = wctx.sub(subpath)
2281 2281 try:
2282 2282 submatch = matchmod.subdirmatcher(subpath, match)
2283 2283 if opts.get(r'subrepos'):
2284 2284 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2285 2285 else:
2286 2286 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2287 2287 except error.LookupError:
2288 2288 ui.status(_("skipping missing subrepository: %s\n")
2289 2289 % join(subpath))
2290 2290
2291 2291 if not opts.get(r'dry_run'):
2292 2292 rejected = wctx.add(names, prefix)
2293 2293 bad.extend(f for f in rejected if f in match.files())
2294 2294 return bad
2295 2295
2296 2296 def addwebdirpath(repo, serverpath, webconf):
2297 2297 webconf[serverpath] = repo.root
2298 2298 repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))
2299 2299
2300 2300 for r in repo.revs('filelog("path:.hgsub")'):
2301 2301 ctx = repo[r]
2302 2302 for subpath in ctx.substate:
2303 2303 ctx.sub(subpath).addwebdirpath(serverpath, webconf)
2304 2304
2305 2305 def forget(ui, repo, match, prefix, explicitonly):
2306 2306 join = lambda f: os.path.join(prefix, f)
2307 2307 bad = []
2308 2308 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2309 2309 wctx = repo[None]
2310 2310 forgot = []
2311 2311
2312 2312 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2313 2313 forget = sorted(s.modified + s.added + s.deleted + s.clean)
2314 2314 if explicitonly:
2315 2315 forget = [f for f in forget if match.exact(f)]
2316 2316
2317 2317 for subpath in sorted(wctx.substate):
2318 2318 sub = wctx.sub(subpath)
2319 2319 try:
2320 2320 submatch = matchmod.subdirmatcher(subpath, match)
2321 2321 subbad, subforgot = sub.forget(submatch, prefix)
2322 2322 bad.extend([subpath + '/' + f for f in subbad])
2323 2323 forgot.extend([subpath + '/' + f for f in subforgot])
2324 2324 except error.LookupError:
2325 2325 ui.status(_("skipping missing subrepository: %s\n")
2326 2326 % join(subpath))
2327 2327
2328 2328 if not explicitonly:
2329 2329 for f in match.files():
2330 2330 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2331 2331 if f not in forgot:
2332 2332 if repo.wvfs.exists(f):
2333 2333 # Don't complain if the exact case match wasn't given.
2334 2334 # But don't do this until after checking 'forgot', so
2335 2335 # that subrepo files aren't normalized, and this op is
2336 2336 # purely from data cached by the status walk above.
2337 2337 if repo.dirstate.normalize(f) in repo.dirstate:
2338 2338 continue
2339 2339 ui.warn(_('not removing %s: '
2340 2340 'file is already untracked\n')
2341 2341 % match.rel(f))
2342 2342 bad.append(f)
2343 2343
2344 2344 for f in forget:
2345 2345 if ui.verbose or not match.exact(f):
2346 2346 ui.status(_('removing %s\n') % match.rel(f))
2347 2347
2348 2348 rejected = wctx.forget(forget, prefix)
2349 2349 bad.extend(f for f in rejected if f in match.files())
2350 2350 forgot.extend(f for f in forget if f not in rejected)
2351 2351 return bad, forgot
2352 2352
2353 2353 def files(ui, ctx, m, fm, fmt, subrepos):
2354 2354 rev = ctx.rev()
2355 2355 ret = 1
2356 2356 ds = ctx.repo().dirstate
2357 2357
2358 2358 for f in ctx.matches(m):
2359 2359 if rev is None and ds[f] == 'r':
2360 2360 continue
2361 2361 fm.startitem()
2362 2362 if ui.verbose:
2363 2363 fc = ctx[f]
2364 2364 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2365 2365 fm.data(abspath=f)
2366 2366 fm.write('path', fmt, m.rel(f))
2367 2367 ret = 0
2368 2368
2369 2369 for subpath in sorted(ctx.substate):
2370 2370 submatch = matchmod.subdirmatcher(subpath, m)
2371 2371 if (subrepos or m.exact(subpath) or any(submatch.files())):
2372 2372 sub = ctx.sub(subpath)
2373 2373 try:
2374 2374 recurse = m.exact(subpath) or subrepos
2375 2375 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
2376 2376 ret = 0
2377 2377 except error.LookupError:
2378 2378 ui.status(_("skipping missing subrepository: %s\n")
2379 2379 % m.abs(subpath))
2380 2380
2381 2381 return ret
2382 2382
2383 2383 def remove(ui, repo, m, prefix, after, force, subrepos, warnings=None):
2384 2384 join = lambda f: os.path.join(prefix, f)
2385 2385 ret = 0
2386 2386 s = repo.status(match=m, clean=True)
2387 2387 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2388 2388
2389 2389 wctx = repo[None]
2390 2390
2391 2391 if warnings is None:
2392 2392 warnings = []
2393 2393 warn = True
2394 2394 else:
2395 2395 warn = False
2396 2396
2397 2397 subs = sorted(wctx.substate)
2398 2398 total = len(subs)
2399 2399 count = 0
2400 2400 for subpath in subs:
2401 2401 count += 1
2402 2402 submatch = matchmod.subdirmatcher(subpath, m)
2403 2403 if subrepos or m.exact(subpath) or any(submatch.files()):
2404 2404 ui.progress(_('searching'), count, total=total, unit=_('subrepos'))
2405 2405 sub = wctx.sub(subpath)
2406 2406 try:
2407 2407 if sub.removefiles(submatch, prefix, after, force, subrepos,
2408 2408 warnings):
2409 2409 ret = 1
2410 2410 except error.LookupError:
2411 2411 warnings.append(_("skipping missing subrepository: %s\n")
2412 2412 % join(subpath))
2413 2413 ui.progress(_('searching'), None)
2414 2414
2415 2415 # warn about failure to delete explicit files/dirs
2416 2416 deleteddirs = util.dirs(deleted)
2417 2417 files = m.files()
2418 2418 total = len(files)
2419 2419 count = 0
2420 2420 for f in files:
2421 2421 def insubrepo():
2422 2422 for subpath in wctx.substate:
2423 2423 if f.startswith(subpath + '/'):
2424 2424 return True
2425 2425 return False
2426 2426
2427 2427 count += 1
2428 2428 ui.progress(_('deleting'), count, total=total, unit=_('files'))
2429 2429 isdir = f in deleteddirs or wctx.hasdir(f)
2430 2430 if (f in repo.dirstate or isdir or f == '.'
2431 2431 or insubrepo() or f in subs):
2432 2432 continue
2433 2433
2434 2434 if repo.wvfs.exists(f):
2435 2435 if repo.wvfs.isdir(f):
2436 2436 warnings.append(_('not removing %s: no tracked files\n')
2437 2437 % m.rel(f))
2438 2438 else:
2439 2439 warnings.append(_('not removing %s: file is untracked\n')
2440 2440 % m.rel(f))
2441 2441 # missing files will generate a warning elsewhere
2442 2442 ret = 1
2443 2443 ui.progress(_('deleting'), None)
2444 2444
2445 2445 if force:
2446 2446 list = modified + deleted + clean + added
2447 2447 elif after:
2448 2448 list = deleted
2449 2449 remaining = modified + added + clean
2450 2450 total = len(remaining)
2451 2451 count = 0
2452 2452 for f in remaining:
2453 2453 count += 1
2454 2454 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2455 2455 warnings.append(_('not removing %s: file still exists\n')
2456 2456 % m.rel(f))
2457 2457 ret = 1
2458 2458 ui.progress(_('skipping'), None)
2459 2459 else:
2460 2460 list = deleted + clean
2461 2461 total = len(modified) + len(added)
2462 2462 count = 0
2463 2463 for f in modified:
2464 2464 count += 1
2465 2465 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2466 2466 warnings.append(_('not removing %s: file is modified (use -f'
2467 2467 ' to force removal)\n') % m.rel(f))
2468 2468 ret = 1
2469 2469 for f in added:
2470 2470 count += 1
2471 2471 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2472 2472 warnings.append(_("not removing %s: file has been marked for add"
2473 2473 " (use 'hg forget' to undo add)\n") % m.rel(f))
2474 2474 ret = 1
2475 2475 ui.progress(_('skipping'), None)
2476 2476
2477 2477 list = sorted(list)
2478 2478 total = len(list)
2479 2479 count = 0
2480 2480 for f in list:
2481 2481 count += 1
2482 2482 if ui.verbose or not m.exact(f):
2483 2483 ui.progress(_('deleting'), count, total=total, unit=_('files'))
2484 2484 ui.status(_('removing %s\n') % m.rel(f))
2485 2485 ui.progress(_('deleting'), None)
2486 2486
2487 2487 with repo.wlock():
2488 2488 if not after:
2489 2489 for f in list:
2490 2490 if f in added:
2491 2491 continue # we never unlink added files on remove
2492 2492 repo.wvfs.unlinkpath(f, ignoremissing=True)
2493 2493 repo[None].forget(list)
2494 2494
2495 2495 if warn:
2496 2496 for warning in warnings:
2497 2497 ui.warn(warning)
2498 2498
2499 2499 return ret
2500 2500
2501 2501 def cat(ui, repo, ctx, matcher, prefix, **opts):
2502 2502 err = 1
2503 2503
2504 2504 def write(path):
2505 2505 fp = makefileobj(repo, opts.get('output'), ctx.node(),
2506 2506 pathname=os.path.join(prefix, path))
2507 2507 data = ctx[path].data()
2508 2508 if opts.get('decode'):
2509 2509 data = repo.wwritedata(path, data)
2510 2510 fp.write(data)
2511 2511 fp.close()
2512 2512
2513 2513 # Automation often uses hg cat on single files, so special case it
2514 2514 # for performance to avoid the cost of parsing the manifest.
2515 2515 if len(matcher.files()) == 1 and not matcher.anypats():
2516 2516 file = matcher.files()[0]
2517 2517 mfl = repo.manifestlog
2518 2518 mfnode = ctx.manifestnode()
2519 2519 try:
2520 2520 if mfnode and mfl[mfnode].find(file)[0]:
2521 2521 write(file)
2522 2522 return 0
2523 2523 except KeyError:
2524 2524 pass
2525 2525
2526 2526 for abs in ctx.walk(matcher):
2527 2527 write(abs)
2528 2528 err = 0
2529 2529
2530 2530 for subpath in sorted(ctx.substate):
2531 2531 sub = ctx.sub(subpath)
2532 2532 try:
2533 2533 submatch = matchmod.subdirmatcher(subpath, matcher)
2534 2534
2535 2535 if not sub.cat(submatch, os.path.join(prefix, sub._path),
2536 2536 **opts):
2537 2537 err = 0
2538 2538 except error.RepoLookupError:
2539 2539 ui.status(_("skipping missing subrepository: %s\n")
2540 2540 % os.path.join(prefix, subpath))
2541 2541
2542 2542 return err
2543 2543
2544 2544 def commit(ui, repo, commitfunc, pats, opts):
2545 2545 '''commit the specified files or all outstanding changes'''
2546 2546 date = opts.get('date')
2547 2547 if date:
2548 2548 opts['date'] = util.parsedate(date)
2549 2549 message = logmessage(ui, opts)
2550 2550 matcher = scmutil.match(repo[None], pats, opts)
2551 2551
2552 2552 # extract addremove carefully -- this function can be called from a command
2553 2553 # that doesn't support addremove
2554 2554 if opts.get('addremove'):
2555 2555 if scmutil.addremove(repo, matcher, "", opts) != 0:
2556 2556 raise error.Abort(
2557 2557 _("failed to mark all new/missing files as added/removed"))
2558 2558
2559 2559 return commitfunc(ui, repo, message, matcher, opts)
2560 2560
2561 2561 def samefile(f, ctx1, ctx2):
2562 2562 if f in ctx1.manifest():
2563 2563 a = ctx1.filectx(f)
2564 2564 if f in ctx2.manifest():
2565 2565 b = ctx2.filectx(f)
2566 2566 return (not a.cmp(b)
2567 2567 and a.flags() == b.flags())
2568 2568 else:
2569 2569 return False
2570 2570 else:
2571 2571 return f not in ctx2.manifest()
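# samefile() therefore reports True only when the file has identical data and
# flags in both contexts, or is absent from both manifests.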
2572 2572
2573 2573 def amend(ui, repo, commitfunc, old, extra, pats, opts):
2574 2574 # avoid cycle context -> subrepo -> cmdutil
2575 2575 from . import context
2576 2576
2577 2577 # amend will reuse the existing user if not specified, but the obsolete
2578 2578 # marker creation requires that the current user's name is specified.
2579 2579 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2580 2580 ui.username() # raise exception if username not set
2581 2581
2582 2582 ui.note(_('amending changeset %s\n') % old)
2583 2583 base = old.p1()
2584 2584 createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
2585 2585
2586 2586 wlock = lock = newid = None
2587 2587 try:
2588 2588 wlock = repo.wlock()
2589 2589 lock = repo.lock()
2590 2590 with repo.transaction('amend') as tr:
2591 2591 # See if we got a message from -m or -l, if not, open the editor
2592 2592 # with the message of the changeset to amend
2593 2593 message = logmessage(ui, opts)
2594 2594 # ensure logfile does not conflict with later enforcement of the
2595 2595 # message. potential logfile content has been processed by
2596 2596 # `logmessage` anyway.
2597 2597 opts.pop('logfile')
2598 2598 # First, do a regular commit to record all changes in the working
2599 2599 # directory (if there are any)
2600 2600 ui.callhooks = False
2601 2601 activebookmark = repo._bookmarks.active
2602 2602 try:
2603 2603 repo._bookmarks.active = None
2604 2604 opts['message'] = 'temporary amend commit for %s' % old
2605 2605 node = commit(ui, repo, commitfunc, pats, opts)
2606 2606 finally:
2607 2607 repo._bookmarks.active = activebookmark
2608 2608 repo._bookmarks.recordchange(tr)
2609 2609 ui.callhooks = True
2610 2610 ctx = repo[node]
2611 2611
2612 2612 # Participating changesets:
2613 2613 #
2614 2614 # node/ctx o - new (intermediate) commit that contains changes
2615 2615 # | from working dir to go into amending commit
2616 2616 # | (or a workingctx if there were no changes)
2617 2617 # |
2618 2618 # old o - changeset to amend
2619 2619 # |
2620 2620 # base o - parent of amending changeset
2621 2621
2622 2622 # Update extra dict from amended commit (e.g. to preserve graft
2623 2623 # source)
2624 2624 extra.update(old.extra())
2625 2625
2626 2626 # Also update it from the intermediate commit or from the wctx
2627 2627 extra.update(ctx.extra())
2628 2628
2629 2629 if len(old.parents()) > 1:
2630 2630 # ctx.files() isn't reliable for merges, so fall back to the
2631 2631 # slower repo.status() method
2632 2632 files = set([fn for st in repo.status(base, old)[:3]
2633 2633 for fn in st])
2634 2634 else:
2635 2635 files = set(old.files())
2636 2636
2637 2637 # Second, we use either the commit we just did, or, if there were no
2638 2638 # changes, the parent of the working directory as the version of the
2639 2639 # files in the final amend commit
2640 2640 if node:
2641 2641 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
2642 2642
2643 2643 user = ctx.user()
2644 2644 date = ctx.date()
2645 2645 # Recompute copies (avoid recording a -> b -> a)
2646 2646 copied = copies.pathcopies(base, ctx)
2647 2647 if old.p2:
2648 2648 copied.update(copies.pathcopies(old.p2(), ctx))
2649 2649
2650 2650 # Prune files which were reverted by the updates: if old
2651 2651 # introduced file X and our intermediate commit, node,
2652 2652 # renamed that file, then those two files are the same and
2653 2653 # we can discard X from our list of files. Likewise if X
2654 2654 # was deleted, it's no longer relevant
2655 2655 files.update(ctx.files())
2656 2656 files = [f for f in files if not samefile(f, ctx, base)]
2657 2657
2658 2658 def filectxfn(repo, ctx_, path):
2659 2659 try:
2660 2660 fctx = ctx[path]
2661 2661 flags = fctx.flags()
2662 2662 mctx = context.memfilectx(repo,
2663 2663 fctx.path(), fctx.data(),
2664 2664 islink='l' in flags,
2665 2665 isexec='x' in flags,
2666 2666 copied=copied.get(path))
2667 2667 return mctx
2668 2668 except KeyError:
2669 2669 return None
2670 2670 else:
2671 2671 ui.note(_('copying changeset %s to %s\n') % (old, base))
2672 2672
2673 2673 # Use version of files as in the old cset
2674 2674 def filectxfn(repo, ctx_, path):
2675 2675 try:
2676 2676 return old.filectx(path)
2677 2677 except KeyError:
2678 2678 return None
2679 2679
2680 2680 user = opts.get('user') or old.user()
2681 2681 date = opts.get('date') or old.date()
2682 2682 editform = mergeeditform(old, 'commit.amend')
2683 2683 editor = getcommiteditor(editform=editform, **opts)
2684 2684 if not message:
2685 2685 editor = getcommiteditor(edit=True, editform=editform)
2686 2686 message = old.description()
2687 2687
2688 2688 pureextra = extra.copy()
2689 2689 extra['amend_source'] = old.hex()
2690 2690
2691 2691 new = context.memctx(repo,
2692 2692 parents=[base.node(), old.p2().node()],
2693 2693 text=message,
2694 2694 files=files,
2695 2695 filectxfn=filectxfn,
2696 2696 user=user,
2697 2697 date=date,
2698 2698 extra=extra,
2699 2699 editor=editor)
2700 2700
2701 2701 newdesc = changelog.stripdesc(new.description())
2702 2702 if ((not node)
2703 2703 and newdesc == old.description()
2704 2704 and user == old.user()
2705 2705 and date == old.date()
2706 2706 and pureextra == old.extra()):
2707 2707 # nothing changed. continuing here would create a new node
2708 2708 # anyway because of the amend_source noise.
2709 2709 #
2710 2710 # This is not what we expect from amend.
2711 2711 return old.node()
2712 2712
2713 2713 ph = repo.ui.config('phases', 'new-commit', phases.draft)
2714 2714 try:
2715 2715 if opts.get('secret'):
2716 2716 commitphase = 'secret'
2717 2717 else:
2718 2718 commitphase = old.phase()
2719 2719 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
2720 2720 newid = repo.commitctx(new)
2721 2721 finally:
2722 2722 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
2723 2723 if newid != old.node():
2724 2724 # Reroute the working copy parent to the new changeset
2725 2725 repo.setparents(newid, nullid)
2726 2726
2727 2727 # Move bookmarks from old parent to amend commit
2728 2728 bms = repo.nodebookmarks(old.node())
2729 2729 if bms:
2730 2730 marks = repo._bookmarks
2731 2731 for bm in bms:
2732 2732 ui.debug('moving bookmarks %r from %s to %s\n' %
2733 2733 (marks, old.hex(), hex(newid)))
2734 2734 marks[bm] = newid
2735 2735 marks.recordchange(tr)
2736 2736 # commit the whole amend process
2737 2737 if createmarkers:
2738 2738 # mark the new changeset as successor of the rewritten one
2739 2739 new = repo[newid]
2740 2740 obs = [(old, (new,))]
2741 2741 if node:
2742 2742 obs.append((ctx, ()))
2743 2743
2744 obsolete.createmarkers(repo, obs)
2744 obsolete.createmarkers(repo, obs, operation='amend')
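# passing operation= lets the newly created markers record which command
# produced them (here 'amend')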
2745 2745 if not createmarkers and newid != old.node():
2746 2746 # Strip the intermediate commit (if there was one) and the amended
2747 2747 # commit
2748 2748 if node:
2749 2749 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2750 2750 ui.note(_('stripping amended changeset %s\n') % old)
2751 2751 repair.strip(ui, repo, old.node(), topic='amend-backup')
2752 2752 finally:
2753 2753 lockmod.release(lock, wlock)
2754 2754 return newid
2755 2755
2756 2756 def commiteditor(repo, ctx, subs, editform=''):
2757 2757 if ctx.description():
2758 2758 return ctx.description()
2759 2759 return commitforceeditor(repo, ctx, subs, editform=editform,
2760 2760 unchangedmessagedetection=True)
2761 2761
2762 2762 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2763 2763 editform='', unchangedmessagedetection=False):
2764 2764 if not extramsg:
2765 2765 extramsg = _("Leave message empty to abort commit.")
2766 2766
2767 2767 forms = [e for e in editform.split('.') if e]
2768 2768 forms.insert(0, 'changeset')
2769 2769 templatetext = None
2770 2770 while forms:
2771 2771 tmpl = repo.ui.config('committemplate', '.'.join(forms))
2772 2772 if tmpl:
2773 2773 tmpl = templater.unquotestring(tmpl)
2774 2774 templatetext = committext = buildcommittemplate(
2775 2775 repo, ctx, subs, extramsg, tmpl)
2776 2776 break
2777 2777 forms.pop()
2778 2778 else:
2779 2779 committext = buildcommittext(repo, ctx, subs, extramsg)
2780 2780
2781 2781 # run editor in the repository root
2782 2782 olddir = pycompat.getcwd()
2783 2783 os.chdir(repo.root)
2784 2784
2785 2785 # make in-memory changes visible to external process
2786 2786 tr = repo.currenttransaction()
2787 2787 repo.dirstate.write(tr)
2788 2788 pending = tr and tr.writepending() and repo.root
2789 2789
2790 2790 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2791 2791 editform=editform, pending=pending,
2792 2792 repopath=repo.path)
2793 2793 text = editortext
2794 2794
2795 2795 # strip away anything below this special string (used for editors that want
2796 2796 # to display the diff)
2797 2797 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2798 2798 if stripbelow:
2799 2799 text = text[:stripbelow.start()]
2800 2800
2801 2801 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2802 2802 os.chdir(olddir)
2803 2803
2804 2804 if finishdesc:
2805 2805 text = finishdesc(text)
2806 2806 if not text.strip():
2807 2807 raise error.Abort(_("empty commit message"))
2808 2808 if unchangedmessagedetection and editortext == templatetext:
2809 2809 raise error.Abort(_("commit message unchanged"))
2810 2810
2811 2811 return text
2812 2812
2813 2813 def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
2814 2814 ui = repo.ui
2815 2815 tmpl, mapfile = gettemplate(ui, tmpl, None)
2816 2816
2817 2817 t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
2818 2818
2819 2819 for k, v in repo.ui.configitems('committemplate'):
2820 2820 if k != 'changeset':
2821 2821 t.t.cache[k] = v
2822 2822
2823 2823 if not extramsg:
2824 2824 extramsg = '' # ensure that extramsg is string
2825 2825
2826 2826 ui.pushbuffer()
2827 2827 t.show(ctx, extramsg=extramsg)
2828 2828 return ui.popbuffer()
2829 2829
2830 2830 def hgprefix(msg):
2831 2831 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2832 2832
2833 2833 def buildcommittext(repo, ctx, subs, extramsg):
2834 2834 edittext = []
2835 2835 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2836 2836 if ctx.description():
2837 2837 edittext.append(ctx.description())
2838 2838 edittext.append("")
2839 2839 edittext.append("") # Empty line between message and comments.
2840 2840 edittext.append(hgprefix(_("Enter commit message."
2841 2841 " Lines beginning with 'HG:' are removed.")))
2842 2842 edittext.append(hgprefix(extramsg))
2843 2843 edittext.append("HG: --")
2844 2844 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2845 2845 if ctx.p2():
2846 2846 edittext.append(hgprefix(_("branch merge")))
2847 2847 if ctx.branch():
2848 2848 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2849 2849 if bookmarks.isactivewdirparent(repo):
2850 2850 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2851 2851 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2852 2852 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2853 2853 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2854 2854 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2855 2855 if not added and not modified and not removed:
2856 2856 edittext.append(hgprefix(_("no files changed")))
2857 2857 edittext.append("")
2858 2858
2859 2859 return "\n".join(edittext)
2860 2860
2861 2861 def commitstatus(repo, node, branch, bheads=None, opts=None):
2862 2862 if opts is None:
2863 2863 opts = {}
2864 2864 ctx = repo[node]
2865 2865 parents = ctx.parents()
2866 2866
2867 2867 if (not opts.get('amend') and bheads and node not in bheads and not
2868 2868 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2869 2869 repo.ui.status(_('created new head\n'))
2870 2870 # The message is not printed for initial roots. For the other
2871 2871 # changesets, it is printed in the following situations:
2872 2872 #
2873 2873 # Par column: for the 2 parents with ...
2874 2874 # N: null or no parent
2875 2875 # B: parent is on another named branch
2876 2876 # C: parent is a regular non head changeset
2877 2877 # H: parent was a branch head of the current branch
2878 2878 # Msg column: whether we print "created new head" message
2879 2879 # In the following, it is assumed that there already exists some
2880 2880 # initial branch heads of the current branch, otherwise nothing is
2881 2881 # printed anyway.
2882 2882 #
2883 2883 # Par Msg Comment
2884 2884 # N N y additional topo root
2885 2885 #
2886 2886 # B N y additional branch root
2887 2887 # C N y additional topo head
2888 2888 # H N n usual case
2889 2889 #
2890 2890 # B B y weird additional branch root
2891 2891 # C B y branch merge
2892 2892 # H B n merge with named branch
2893 2893 #
2894 2894 # C C y additional head from merge
2895 2895 # C H n merge with a head
2896 2896 #
2897 2897 # H H n head merge: head count decreases
2898 2898
2899 2899 if not opts.get('close_branch'):
2900 2900 for r in parents:
2901 2901 if r.closesbranch() and r.branch() == branch:
2902 2902 repo.ui.status(_('reopening closed branch head %d\n') % r)
2903 2903
2904 2904 if repo.ui.debugflag:
2905 2905 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2906 2906 elif repo.ui.verbose:
2907 2907 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2908 2908
2909 2909 def postcommitstatus(repo, pats, opts):
2910 2910 return repo.status(match=scmutil.match(repo[None], pats, opts))
2911 2911
2912 2912 def revert(ui, repo, ctx, parents, *pats, **opts):
2913 2913 parent, p2 = parents
2914 2914 node = ctx.node()
2915 2915
2916 2916 mf = ctx.manifest()
2917 2917 if node == p2:
2918 2918 parent = p2
2919 2919
2920 2920 # need all matching names in dirstate and manifest of target rev,
2921 2921 # so have to walk both. do not print errors if files exist in one
2922 2922 # but not the other. In both cases, filesets should be evaluated against
2923 2923 # workingctx to get consistent result (issue4497). this means 'set:**'
2924 2924 # cannot be used to select missing files from target rev.
2925 2925
2926 2926 # `names` is a mapping for all elements in working copy and target revision
2927 2927 # The mapping is in the form:
2928 2928 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2929 2929 names = {}
2930 2930
2931 2931 with repo.wlock():
2932 2932 ## filling of the `names` mapping
2933 2933 # walk dirstate to fill `names`
2934 2934
2935 2935 interactive = opts.get('interactive', False)
2936 2936 wctx = repo[None]
2937 2937 m = scmutil.match(wctx, pats, opts)
2938 2938
2939 2939 # we'll need this later
2940 2940 targetsubs = sorted(s for s in wctx.substate if m(s))
2941 2941
2942 2942 if not m.always():
2943 2943 for abs in repo.walk(matchmod.badmatch(m, lambda x, y: False)):
2944 2944 names[abs] = m.rel(abs), m.exact(abs)
2945 2945
2946 2946 # walk target manifest to fill `names`
2947 2947
2948 2948 def badfn(path, msg):
2949 2949 if path in names:
2950 2950 return
2951 2951 if path in ctx.substate:
2952 2952 return
2953 2953 path_ = path + '/'
2954 2954 for f in names:
2955 2955 if f.startswith(path_):
2956 2956 return
2957 2957 ui.warn("%s: %s\n" % (m.rel(path), msg))
2958 2958
2959 2959 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2960 2960 if abs not in names:
2961 2961 names[abs] = m.rel(abs), m.exact(abs)
2962 2962
2963 2963 # Find status of all files in `names`.
2964 2964 m = scmutil.matchfiles(repo, names)
2965 2965
2966 2966 changes = repo.status(node1=node, match=m,
2967 2967 unknown=True, ignored=True, clean=True)
2968 2968 else:
2969 2969 changes = repo.status(node1=node, match=m)
2970 2970 for kind in changes:
2971 2971 for abs in kind:
2972 2972 names[abs] = m.rel(abs), m.exact(abs)
2973 2973
2974 2974 m = scmutil.matchfiles(repo, names)
2975 2975
2976 2976 modified = set(changes.modified)
2977 2977 added = set(changes.added)
2978 2978 removed = set(changes.removed)
2979 2979 _deleted = set(changes.deleted)
2980 2980 unknown = set(changes.unknown)
2981 2981 unknown.update(changes.ignored)
2982 2982 clean = set(changes.clean)
2983 2983 modadded = set()
2984 2984
2985 2985 # We need to account for the state of the file in the dirstate,
2986 2986 # even when we revert against something other than the parent. This will
2987 2987 # slightly alter the behavior of revert (doing a backup or not, delete
2988 2988 # or just forget etc).
2989 2989 if parent == node:
2990 2990 dsmodified = modified
2991 2991 dsadded = added
2992 2992 dsremoved = removed
2993 2993 # store all local modifications, useful later for rename detection
2994 2994 localchanges = dsmodified | dsadded
2995 2995 modified, added, removed = set(), set(), set()
2996 2996 else:
2997 2997 changes = repo.status(node1=parent, match=m)
2998 2998 dsmodified = set(changes.modified)
2999 2999 dsadded = set(changes.added)
3000 3000 dsremoved = set(changes.removed)
3001 3001 # store all local modifications, useful later for rename detection
3002 3002 localchanges = dsmodified | dsadded
3003 3003
3004 3004 # only take into account removes between wc and target
3005 3005 clean |= dsremoved - removed
3006 3006 dsremoved &= removed
3007 3007 # distinguish between dirstate removes and others
3008 3008 removed -= dsremoved
3009 3009
3010 3010 modadded = added & dsmodified
3011 3011 added -= modadded
3012 3012
3013 3013 # tell newly modified files apart.
3014 3014 dsmodified &= modified
3015 3015 dsmodified |= modified & dsadded # dirstate added may need backup
3016 3016 modified -= dsmodified
3017 3017
3018 3018 # We need to wait for some post-processing to update this set
3019 3019 # before making the distinction. The dirstate will be used for
3020 3020 # that purpose.
3021 3021 dsadded = added
3022 3022
3023 3023 # in case of merge, files that are actually added can be reported as
3024 3024 # modified; we need to post-process the result
3025 3025 if p2 != nullid:
3026 3026 mergeadd = set(dsmodified)
3027 3027 for path in dsmodified:
3028 3028 if path in mf:
3029 3029 mergeadd.remove(path)
3030 3030 dsadded |= mergeadd
3031 3031 dsmodified -= mergeadd
3032 3032
3033 3033 # if f is a rename, update `names` to also revert the source
3034 3034 cwd = repo.getcwd()
3035 3035 for f in localchanges:
3036 3036 src = repo.dirstate.copied(f)
3037 3037 # XXX should we check for rename down to target node?
3038 3038 if src and src not in names and repo.dirstate[src] == 'r':
3039 3039 dsremoved.add(src)
3040 3040 names[src] = (repo.pathto(src, cwd), True)
3041 3041
3042 3042 # determine the exact nature of the deleted files
3043 3043 deladded = set(_deleted)
3044 3044 for path in _deleted:
3045 3045 if path in mf:
3046 3046 deladded.remove(path)
3047 3047 deleted = _deleted - deladded
3048 3048
3049 3049 # distinguish between files to forget and the others
3050 3050 added = set()
3051 3051 for abs in dsadded:
3052 3052 if repo.dirstate[abs] != 'a':
3053 3053 added.add(abs)
3054 3054 dsadded -= added
3055 3055
3056 3056 for abs in deladded:
3057 3057 if repo.dirstate[abs] == 'a':
3058 3058 dsadded.add(abs)
3059 3059 deladded -= dsadded
3060 3060
3061 3061 # For files marked as removed, we check if an unknown file is present at
3062 3062 # the same path. If such a file exists, it may need to be backed up.
3063 3063 # Making the distinction at this stage helps keep the backup
3064 3064 # logic simpler.
3065 3065 removunk = set()
3066 3066 for abs in removed:
3067 3067 target = repo.wjoin(abs)
3068 3068 if os.path.lexists(target):
3069 3069 removunk.add(abs)
3070 3070 removed -= removunk
3071 3071
3072 3072 dsremovunk = set()
3073 3073 for abs in dsremoved:
3074 3074 target = repo.wjoin(abs)
3075 3075 if os.path.lexists(target):
3076 3076 dsremovunk.add(abs)
3077 3077 dsremoved -= dsremovunk
3078 3078
3079 3079 # actions to actually be performed by revert
3080 3080 # (<list of files>, <message>) tuple
3081 3081 actions = {'revert': ([], _('reverting %s\n')),
3082 3082 'add': ([], _('adding %s\n')),
3083 3083 'remove': ([], _('removing %s\n')),
3084 3084 'drop': ([], _('removing %s\n')),
3085 3085 'forget': ([], _('forgetting %s\n')),
3086 3086 'undelete': ([], _('undeleting %s\n')),
3087 3087 'noop': (None, _('no changes needed to %s\n')),
3088 3088 'unknown': (None, _('file not managed: %s\n')),
3089 3089 }
3090 3090
3091 3091 # "constants" that convey the backup strategy.
3092 3092 # All are set to `discard` if `no-backup` is set, to avoid checking
3093 3093 # no_backup lower in the code.
3094 3094 # These values are ordered for comparison purposes
3095 3095 backupinteractive = 3 # do backup if interactively modified
3096 3096 backup = 2 # unconditionally do backup
3097 3097 check = 1 # check if the existing file differs from target
3098 3098 discard = 0 # never do backup
3099 3099 if opts.get('no_backup'):
3100 3100 backupinteractive = backup = check = discard
3101 3101 if interactive:
3102 3102 dsmodifiedbackup = backupinteractive
3103 3103 else:
3104 3104 dsmodifiedbackup = backup
3105 3105 tobackup = set()
3106 3106
3107 3107 backupanddel = actions['remove']
3108 3108 if not opts.get('no_backup'):
3109 3109 backupanddel = actions['drop']
3110 3110
3111 3111 disptable = (
3112 3112 # dispatch table:
3113 3113 # file state
3114 3114 # action
3115 3115 # make backup
3116 3116
3117 3117 ## Sets that will result in file changes on disk
3118 3118 # Modified compared to target, no local change
3119 3119 (modified, actions['revert'], discard),
3120 3120 # Modified compared to target, but local file is deleted
3121 3121 (deleted, actions['revert'], discard),
3122 3122 # Modified compared to target, local change
3123 3123 (dsmodified, actions['revert'], dsmodifiedbackup),
3124 3124 # Added since target
3125 3125 (added, actions['remove'], discard),
3126 3126 # Added in working directory
3127 3127 (dsadded, actions['forget'], discard),
3128 3128 # Added since target, have local modification
3129 3129 (modadded, backupanddel, backup),
3130 3130 # Added since target but file is missing in working directory
3131 3131 (deladded, actions['drop'], discard),
3132 3132 # Removed since target, before working copy parent
3133 3133 (removed, actions['add'], discard),
3134 3134 # Same as `removed` but an unknown file exists at the same path
3135 3135 (removunk, actions['add'], check),
3136 3136 # Removed since target, marked as such in working copy parent
3137 3137 (dsremoved, actions['undelete'], discard),
3138 3138 # Same as `dsremoved` but an unknown file exists at the same path
3139 3139 (dsremovunk, actions['undelete'], check),
3140 3140 ## the following sets do not result in any file changes
3141 3141 # File with no modification
3142 3142 (clean, actions['noop'], discard),
3143 3143 # Existing file, not tracked anywhere
3144 3144 (unknown, actions['unknown'], discard),
3145 3145 )
3146 3146
3147 3147 for abs, (rel, exact) in sorted(names.items()):
3148 3148 # target file to be touched on disk (relative to cwd)
3149 3149 target = repo.wjoin(abs)
3150 3150 # search the entry in the dispatch table.
3151 3151 # if the file is in any of these sets, it was touched in the working
3152 3152 # directory parent and we are sure it needs to be reverted.
3153 3153 for table, (xlist, msg), dobackup in disptable:
3154 3154 if abs not in table:
3155 3155 continue
3156 3156 if xlist is not None:
3157 3157 xlist.append(abs)
3158 3158 if dobackup:
3159 3159 # If in interactive mode, don't automatically create
3160 3160 # .orig files (issue4793)
3161 3161 if dobackup == backupinteractive:
3162 3162 tobackup.add(abs)
3163 3163 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
3164 3164 bakname = scmutil.origpath(ui, repo, rel)
3165 3165 ui.note(_('saving current version of %s as %s\n') %
3166 3166 (rel, bakname))
3167 3167 if not opts.get('dry_run'):
3168 3168 if interactive:
3169 3169 util.copyfile(target, bakname)
3170 3170 else:
3171 3171 util.rename(target, bakname)
3172 3172 if ui.verbose or not exact:
3173 3173 if not isinstance(msg, basestring):
3174 3174 msg = msg(abs)
3175 3175 ui.status(msg % rel)
3176 3176 elif exact:
3177 3177 ui.warn(msg % rel)
3178 3178 break
3179 3179
3180 3180 if not opts.get('dry_run'):
3181 3181 needdata = ('revert', 'add', 'undelete')
3182 3182 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
3183 3183 _performrevert(repo, parents, ctx, actions, interactive, tobackup)
3184 3184
3185 3185 if targetsubs:
3186 3186 # Revert the subrepos on the revert list
3187 3187 for sub in targetsubs:
3188 3188 try:
3189 3189 wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
3190 3190 except KeyError:
3191 3191 raise error.Abort("subrepository '%s' does not exist in %s!"
3192 3192 % (sub, short(ctx.node())))
3193 3193
3194 3194 def _revertprefetch(repo, ctx, *files):
3195 3195 """Let extension changing the storage layer prefetch content"""
3196 3196 pass
3197 3197
3198 3198 def _performrevert(repo, parents, ctx, actions, interactive=False,
3199 3199 tobackup=None):
3200 3200 """function that actually performs all the actions computed for revert
3201 3201
3202 3202 This is an independent function to let extensions plug in and react to
3203 3203 the imminent revert.
3204 3204
3205 3205 Make sure you have the working directory locked when calling this function.
3206 3206 """
3207 3207 parent, p2 = parents
3208 3208 node = ctx.node()
3209 3209 excluded_files = []
3210 3210 matcher_opts = {"exclude": excluded_files}
3211 3211
3212 3212 def checkout(f):
3213 3213 fc = ctx[f]
3214 3214 repo.wwrite(f, fc.data(), fc.flags())
3215 3215
3216 3216 def doremove(f):
3217 3217 try:
3218 3218 repo.wvfs.unlinkpath(f)
3219 3219 except OSError:
3220 3220 pass
3221 3221 repo.dirstate.remove(f)
3222 3222
3223 3223 audit_path = pathutil.pathauditor(repo.root)
3224 3224 for f in actions['forget'][0]:
3225 3225 if interactive:
3226 3226 choice = repo.ui.promptchoice(
3227 3227 _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
3228 3228 if choice == 0:
3229 3229 repo.dirstate.drop(f)
3230 3230 else:
3231 3231 excluded_files.append(repo.wjoin(f))
3232 3232 else:
3233 3233 repo.dirstate.drop(f)
3234 3234 for f in actions['remove'][0]:
3235 3235 audit_path(f)
3236 3236 if interactive:
3237 3237 choice = repo.ui.promptchoice(
3238 3238 _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
3239 3239 if choice == 0:
3240 3240 doremove(f)
3241 3241 else:
3242 3242 excluded_files.append(repo.wjoin(f))
3243 3243 else:
3244 3244 doremove(f)
3245 3245 for f in actions['drop'][0]:
3246 3246 audit_path(f)
3247 3247 repo.dirstate.remove(f)
3248 3248
3249 3249 normal = None
3250 3250 if node == parent:
3251 3251 # We're reverting to our parent. If possible, we'd like status
3252 3252 # to report the file as clean. We have to use normallookup for
3253 3253 # merges to avoid losing information about merged/dirty files.
3254 3254 if p2 != nullid:
3255 3255 normal = repo.dirstate.normallookup
3256 3256 else:
3257 3257 normal = repo.dirstate.normal
3258 3258
3259 3259 newlyaddedandmodifiedfiles = set()
3260 3260 if interactive:
3261 3261 # Prompt the user for changes to revert
3262 3262 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
3263 3263 m = scmutil.match(ctx, torevert, matcher_opts)
3264 3264 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
3265 3265 diffopts.nodates = True
3266 3266 diffopts.git = True
3267 3267 operation = 'discard'
3268 3268 reversehunks = True
3269 3269 if node != parent:
3270 3270 operation = 'revert'
3271 3271 reversehunks = repo.ui.configbool('experimental',
3272 3272 'revertalternateinteractivemode',
3273 3273 True)
3274 3274 if reversehunks:
3275 3275 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3276 3276 else:
3277 3277 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3278 3278 originalchunks = patch.parsepatch(diff)
3279 3279
3280 3280 try:
3281 3281
3282 3282 chunks, opts = recordfilter(repo.ui, originalchunks,
3283 3283 operation=operation)
3284 3284 if reversehunks:
3285 3285 chunks = patch.reversehunks(chunks)
3286 3286
3287 3287 except patch.PatchError as err:
3288 3288 raise error.Abort(_('error parsing patch: %s') % err)
3289 3289
3290 3290 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3291 3291 if tobackup is None:
3292 3292 tobackup = set()
3293 3293 # Apply changes
3294 3294 fp = stringio()
3295 3295 for c in chunks:
3296 3296 # Create a backup file only if this hunk should be backed up
3297 3297 if ishunk(c) and c.header.filename() in tobackup:
3298 3298 abs = c.header.filename()
3299 3299 target = repo.wjoin(abs)
3300 3300 bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
3301 3301 util.copyfile(target, bakname)
3302 3302 tobackup.remove(abs)
3303 3303 c.write(fp)
3304 3304 dopatch = fp.tell()
3305 3305 fp.seek(0)
3306 3306 if dopatch:
3307 3307 try:
3308 3308 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3309 3309 except patch.PatchError as err:
3310 3310 raise error.Abort(str(err))
3311 3311 del fp
3312 3312 else:
3313 3313 for f in actions['revert'][0]:
3314 3314 checkout(f)
3315 3315 if normal:
3316 3316 normal(f)
3317 3317
3318 3318 for f in actions['add'][0]:
3319 3319 # Don't checkout modified files, they are already created by the diff
3320 3320 if f not in newlyaddedandmodifiedfiles:
3321 3321 checkout(f)
3322 3322 repo.dirstate.add(f)
3323 3323
3324 3324 normal = repo.dirstate.normallookup
3325 3325 if node == parent and p2 == nullid:
3326 3326 normal = repo.dirstate.normal
3327 3327 for f in actions['undelete'][0]:
3328 3328 checkout(f)
3329 3329 normal(f)
3330 3330
3331 3331 copied = copies.pathcopies(repo[parent], ctx)
3332 3332
3333 3333 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3334 3334 if f in copied:
3335 3335 repo.dirstate.copy(copied[f], f)
3336 3336
3337 3337 def command(table):
3338 3338 """Returns a function object to be used as a decorator for making commands.
3339 3339
3340 3340 This function receives a command table as its argument. The table should
3341 3341 be a dict.
3342 3342
3343 3343 The returned function can be used as a decorator for adding commands
3344 3344 to that command table. This function accepts multiple arguments to define
3345 3345 a command.
3346 3346
3347 3347 The first argument is the command name.
3348 3348
3349 3349 The options argument is an iterable of tuples defining command arguments.
3350 3350 See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.
3351 3351
3352 3352 The synopsis argument defines a short, one line summary of how to use the
3353 3353 command. This shows up in the help output.
3354 3354
3355 3355 The norepo argument defines whether the command does not require a
3356 3356 local repository. Most commands operate against a repository, thus the
3357 3357 default is False.
3358 3358
3359 3359 The optionalrepo argument defines whether the command optionally requires
3360 3360 a local repository.
3361 3361
3362 3362 The inferrepo argument defines whether to try to find a repository from the
3363 3363 command line arguments. If True, arguments will be examined for potential
3364 3364 repository locations. See ``findrepo()``. If a repository is found, it
3365 3365 will be used.
3366 3366 """
3367 3367 def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
3368 3368 inferrepo=False):
3369 3369 def decorator(func):
3370 3370 func.norepo = norepo
3371 3371 func.optionalrepo = optionalrepo
3372 3372 func.inferrepo = inferrepo
3373 3373 if synopsis:
3374 3374 table[name] = func, list(options), synopsis
3375 3375 else:
3376 3376 table[name] = func, list(options)
3377 3377 return func
3378 3378 return decorator
3379 3379
3380 3380 return cmd
3381 3381
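For illustration only (this snippet is not part of the changeset), a minimal sketch of how an extension might use the decorator returned by ``command()``; the command name, option and output below are hypothetical::

    cmdtable = {}
    command = cmdutil.command(cmdtable)   # assuming this module is 'cmdutil'

    @command('hello', [('g', 'greeting', 'Hello', 'greeting to print')],
             'hg hello [-g TEXT]', norepo=True)
    def hello(ui, *args, **opts):
        # norepo commands are invoked without a repo argument
        ui.write("%s, world\n" % (opts.get('greeting') or 'Hello'))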
3382 3382 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3383 3383 # commands.outgoing. "missing" is the "missing" attribute of the result of
3384 3384 # "findcommonoutgoing()"
3385 3385 outgoinghooks = util.hooks()
3386 3386
3387 3387 # a list of (ui, repo) functions called by commands.summary
3388 3388 summaryhooks = util.hooks()
3389 3389
3390 3390 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3391 3391 #
3392 3392 # functions should return tuple of booleans below, if 'changes' is None:
3393 3393 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3394 3394 #
3395 3395 # otherwise, 'changes' is a tuple of tuples below:
3396 3396 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3397 3397 # - (desturl, destbranch, destpeer, outgoing)
3398 3398 summaryremotehooks = util.hooks()
3399 3399
3400 3400 # A list of state files kept by multistep operations like graft.
3401 3401 # Since graft cannot be aborted, it is considered 'clearable' by update.
3402 3402 # note: bisect is intentionally excluded
3403 3403 # (state file, clearable, allowcommit, error, hint)
3404 3404 unfinishedstates = [
3405 3405 ('graftstate', True, False, _('graft in progress'),
3406 3406 _("use 'hg graft --continue' or 'hg update' to abort")),
3407 3407 ('updatestate', True, False, _('last update was interrupted'),
3408 3408 _("use 'hg update' to get a consistent checkout"))
3409 3409 ]
3410 3410
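As an illustrative aside (not part of this changeset), an extension that keeps its own multistep state file could register it by appending a tuple of the same shape; the state file name and messages are hypothetical::

    # (state file, clearable, allowcommit, error, hint)
    cmdutil.unfinishedstates.append(
        ('foostate', True, False, _('foo operation in progress'),
         _("use 'hg foo --continue' or 'hg update' to abort")))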
3411 3411 def checkunfinished(repo, commit=False):
3412 3412 '''Look for an unfinished multistep operation, like graft, and abort
3413 3413 if found. It's probably good to check this right before
3414 3414 bailifchanged().
3415 3415 '''
3416 3416 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3417 3417 if commit and allowcommit:
3418 3418 continue
3419 3419 if repo.vfs.exists(f):
3420 3420 raise error.Abort(msg, hint=hint)
3421 3421
3422 3422 def clearunfinished(repo):
3423 3423 '''Check for unfinished operations (as above), and clear the ones
3424 3424 that are clearable.
3425 3425 '''
3426 3426 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3427 3427 if not clearable and repo.vfs.exists(f):
3428 3428 raise error.Abort(msg, hint=hint)
3429 3429 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3430 3430 if clearable and repo.vfs.exists(f):
3431 3431 util.unlink(repo.vfs.join(f))
3432 3432
3433 3433 afterresolvedstates = [
3434 3434 ('graftstate',
3435 3435 _('hg graft --continue')),
3436 3436 ]
3437 3437
3438 3438 def howtocontinue(repo):
3439 3439 '''Check for an unfinished operation and return the command to finish
3440 3440 it.
3441 3441
3442 3442 afterresolvedstates tuples define a .hg/{file} and the corresponding
3443 3443 command needed to finish it.
3444 3444
3445 3445 Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
3446 3446 a boolean.
3447 3447 '''
3448 3448 contmsg = _("continue: %s")
3449 3449 for f, msg in afterresolvedstates:
3450 3450 if repo.vfs.exists(f):
3451 3451 return contmsg % msg, True
3452 3452 workingctx = repo[None]
3453 3453 dirty = any(repo.status()) or any(workingctx.sub(s).dirty()
3454 3454 for s in workingctx.substate)
3455 3455 if dirty:
3456 3456 return contmsg % _("hg commit"), False
3457 3457 return None, None
3458 3458
3459 3459 def checkafterresolved(repo):
3460 3460 '''Inform the user about the next action after completing hg resolve
3461 3461
3462 3462 If there's a matching afterresolvedstates, howtocontinue will yield
3463 3463 repo.ui.warn as the reporter.
3464 3464
3465 3465 Otherwise, it will yield repo.ui.note.
3466 3466 '''
3467 3467 msg, warning = howtocontinue(repo)
3468 3468 if msg is not None:
3469 3469 if warning:
3470 3470 repo.ui.warn("%s\n" % msg)
3471 3471 else:
3472 3472 repo.ui.note("%s\n" % msg)
3473 3473
3474 3474 def wrongtooltocontinue(repo, task):
3475 3475 '''Raise an abort suggesting how to properly continue if there is an
3476 3476 active task.
3477 3477
3478 3478 Uses howtocontinue() to find the active task.
3479 3479
3480 3480 If there's no task (repo.ui.note for 'hg commit'), it does not offer
3481 3481 a hint.
3482 3482 '''
3483 3483 after = howtocontinue(repo)
3484 3484 hint = None
3485 3485 if after[1]:
3486 3486 hint = after[0]
3487 3487 raise error.Abort(_('no %s in progress') % task, hint=hint)
@@ -1,1284 +1,1287 b''
1 1 # obsolete.py - obsolete markers handling
2 2 #
3 3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
4 4 # Logilab SA <contact@logilab.fr>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 """Obsolete marker handling
10 10
11 11 An obsolete marker maps an old changeset to a list of new
12 12 changesets. If the list of new changesets is empty, the old changeset
13 13 is said to be "killed". Otherwise, the old changeset is being
14 14 "replaced" by the new changesets.
15 15
16 16 Obsolete markers can be used to record and distribute changeset graph
17 17 transformations performed by history rewrite operations, and help
18 18 building new tools to reconcile conflicting rewrite actions. To
19 19 facilitate conflict resolution, markers include various annotations
20 20 besides old and new changeset identifiers, such as creation date or
21 21 author name.
22 22
23 23 The old obsoleted changeset is called a "precursor" and possible
24 24 replacements are called "successors". Markers that use changeset X as
25 25 a precursor are called "successor markers of X" because they hold
26 26 information about the successors of X. Markers that use changeset Y as
27 27 a successor are called "precursor markers of Y" because they hold
28 28 information about the precursors of Y.
29 29
30 30 Examples:
31 31
32 32 - When changeset A is replaced by changeset A', one marker is stored:
33 33
34 34 (A, (A',))
35 35
36 36 - When changesets A and B are folded into a new changeset C, two markers are
37 37 stored:
38 38
39 39 (A, (C,)) and (B, (C,))
40 40
41 41 - When changeset A is simply "pruned" from the graph, a marker is created:
42 42
43 43 (A, ())
44 44
45 45 - When changeset A is split into B and C, a single marker is used:
46 46
47 47 (A, (B, C))
48 48
49 49 We use a single marker to distinguish the "split" case from the "divergence"
50 50 case. If two independent operations rewrite the same changeset A into A' and
51 51 A'', we have an error case: divergent rewriting. We can detect it because
52 52 two markers will be created independently:
53 53
54 54 (A, (B,)) and (A, (C,))
55 55
56 56 Format
57 57 ------
58 58
59 59 Markers are stored in an append-only file stored in
60 60 '.hg/store/obsstore'.
61 61
62 62 The file starts with a version header:
63 63
64 64 - 1 unsigned byte: version number, starting at zero.
65 65
66 66 The header is followed by the markers. The marker format depends on the
67 67 version; see the comment associated with each format for details.
68 68
69 69 """
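As an illustrative aside (not part of the changeset), the scenarios above correspond to marker tuples of the following shape, using short placeholder names instead of real binary node identifiers::

    # amend: A replaced by A1
    markers = [('A', ('A1',))]
    # fold: A and B folded into C
    markers += [('A', ('C',)), ('B', ('C',))]
    # prune: A removed without a successor
    markers += [('A', ())]
    # split: A split into B and C (a single marker)
    markers += [('A', ('B', 'C'))]
    # divergence: two independent rewrites of A (an error case)
    markers += [('A', ('A1',)), ('A', ('A2',))]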
70 70 from __future__ import absolute_import
71 71
72 72 import errno
73 73 import struct
74 74
75 75 from .i18n import _
76 76 from . import (
77 77 error,
78 78 node,
79 79 parsers,
80 80 phases,
81 81 util,
82 82 )
83 83
84 84 _pack = struct.pack
85 85 _unpack = struct.unpack
86 86 _calcsize = struct.calcsize
87 87 propertycache = util.propertycache
88 88
89 89 # the obsolete feature is not mature enough to be enabled by default.
90 90 # you have to rely on a third party extension to enable this.
91 91 _enabled = False
92 92
93 93 # Options for obsolescence
94 94 createmarkersopt = 'createmarkers'
95 95 allowunstableopt = 'allowunstable'
96 96 exchangeopt = 'exchange'
97 97
98 98 ### obsolescence marker flag
99 99
100 100 ## bumpedfix flag
101 101 #
102 102 # When a changeset A' succeeds a changeset A which became public, we call A'
103 103 # "bumped" because it's a successor of a public changeset
104 104 #
105 105 # o A' (bumped)
106 106 # |`:
107 107 # | o A
108 108 # |/
109 109 # o Z
110 110 #
111 111 # The way to solve this situation is to create a new changeset Ad as a child
112 112 # of A. This changeset has the same content as A'. So the diff from A to A'
113 113 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
114 114 #
115 115 # o Ad
116 116 # |`:
117 117 # | x A'
118 118 # |'|
119 119 # o | A
120 120 # |/
121 121 # o Z
122 122 #
123 123 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
124 124 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
125 125 # This flag means that the successors express the changes between the public and
126 126 # bumped version and fix the situation, breaking the transitivity of
127 127 # "bumped" here.
128 128 bumpedfix = 1
129 129 usingsha256 = 2
130 130
131 131 ## Parsing and writing of version "0"
132 132 #
133 133 # The header is followed by the markers. Each marker is made of:
134 134 #
135 135 # - 1 uint8 : number of new changesets "N", can be zero.
136 136 #
137 137 # - 1 uint32: metadata size "M" in bytes.
138 138 #
139 139 # - 1 byte: a bit field. It is reserved for flags used in common
140 140 # obsolete marker operations, to avoid repeated decoding of metadata
141 141 # entries.
142 142 #
143 143 # - 20 bytes: obsoleted changeset identifier.
144 144 #
145 145 # - N*20 bytes: new changesets identifiers.
146 146 #
147 147 # - M bytes: metadata as a sequence of nul-terminated strings. Each
148 148 # string contains a key and a value, separated by a colon ':', without
149 149 # additional encoding. Keys cannot contain '\0' or ':' and values
150 150 # cannot contain '\0'.
151 151 _fm0version = 0
152 152 _fm0fixed = '>BIB20s'
153 153 _fm0node = '20s'
154 154 _fm0fsize = _calcsize(_fm0fixed)
155 155 _fm0fnodesize = _calcsize(_fm0node)
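As a hedged illustration (not part of the changeset), the fixed portion of a version-0 marker is 26 bytes, and a marker with one successor and one metadata entry could be packed by hand like this; the node values are made up::

    import struct
    fixed = '>BIB20s'            # numsuc, metadata size, flags, precursor node
    assert struct.calcsize(fixed) == 26
    meta = 'user:alice'          # single 'key:value' entry, no '\0' needed
    marker = (struct.pack(fixed, 1, len(meta), 0, '\x11' * 20)
              + '\x22' * 20      # one successor node
              + meta)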
156 156
157 157 def _fm0readmarkers(data, off):
158 158 # Loop on markers
159 159 l = len(data)
160 160 while off + _fm0fsize <= l:
161 161 # read fixed part
162 162 cur = data[off:off + _fm0fsize]
163 163 off += _fm0fsize
164 164 numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
165 165 # read replacement
166 166 sucs = ()
167 167 if numsuc:
168 168 s = (_fm0fnodesize * numsuc)
169 169 cur = data[off:off + s]
170 170 sucs = _unpack(_fm0node * numsuc, cur)
171 171 off += s
172 172 # read metadata
173 173 # (metadata will be decoded on demand)
174 174 metadata = data[off:off + mdsize]
175 175 if len(metadata) != mdsize:
176 176 raise error.Abort(_('parsing obsolete marker: metadata is too '
177 177 'short, %d bytes expected, got %d')
178 178 % (mdsize, len(metadata)))
179 179 off += mdsize
180 180 metadata = _fm0decodemeta(metadata)
181 181 try:
182 182 when, offset = metadata.pop('date', '0 0').split(' ')
183 183 date = float(when), int(offset)
184 184 except ValueError:
185 185 date = (0., 0)
186 186 parents = None
187 187 if 'p2' in metadata:
188 188 parents = (metadata.pop('p1', None), metadata.pop('p2', None))
189 189 elif 'p1' in metadata:
190 190 parents = (metadata.pop('p1', None),)
191 191 elif 'p0' in metadata:
192 192 parents = ()
193 193 if parents is not None:
194 194 try:
195 195 parents = tuple(node.bin(p) for p in parents)
196 196 # if parent content is not a nodeid, drop the data
197 197 for p in parents:
198 198 if len(p) != 20:
199 199 parents = None
200 200 break
201 201 except TypeError:
202 202 # if content cannot be translated to nodeid drop the data.
203 203 parents = None
204 204
205 205 metadata = tuple(sorted(metadata.iteritems()))
206 206
207 207 yield (pre, sucs, flags, metadata, date, parents)
208 208
209 209 def _fm0encodeonemarker(marker):
210 210 pre, sucs, flags, metadata, date, parents = marker
211 211 if flags & usingsha256:
212 212 raise error.Abort(_('cannot handle sha256 with old obsstore format'))
213 213 metadata = dict(metadata)
214 214 time, tz = date
215 215 metadata['date'] = '%r %i' % (time, tz)
216 216 if parents is not None:
217 217 if not parents:
218 218 # mark that we explicitly recorded no parents
219 219 metadata['p0'] = ''
220 220 for i, p in enumerate(parents, 1):
221 221 metadata['p%i' % i] = node.hex(p)
222 222 metadata = _fm0encodemeta(metadata)
223 223 numsuc = len(sucs)
224 224 format = _fm0fixed + (_fm0node * numsuc)
225 225 data = [numsuc, len(metadata), flags, pre]
226 226 data.extend(sucs)
227 227 return _pack(format, *data) + metadata
228 228
229 229 def _fm0encodemeta(meta):
230 230 """Return encoded metadata string to string mapping.
231 231
232 232 Assume no ':' in keys and no '\0' in either keys or values."""
233 233 for key, value in meta.iteritems():
234 234 if ':' in key or '\0' in key:
235 235 raise ValueError("':' and '\0' are forbidden in metadata key")
236 236 if '\0' in value:
237 237 raise ValueError("'\0' is forbidden in metadata value")
238 238 return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])
239 239
240 240 def _fm0decodemeta(data):
241 241 """Return string to string dictionary from encoded version."""
242 242 d = {}
243 243 for l in data.split('\0'):
244 244 if l:
245 245 key, value = l.split(':')
246 246 d[key] = value
247 247 return d
248 248
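For illustration only (a sketch, not part of the changeset), a round trip through the '\0'-separated "key:value" encoding described above, assuming keys and values that contain neither ':' nor '\0'::

    meta = {'user': 'alice', 'operation': 'amend'}
    encoded = '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])
    decoded = dict(entry.split(':', 1) for entry in encoded.split('\0'))
    assert decoded == meta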
249 249 ## Parsing and writing of version "1"
250 250 #
251 251 # The header is followed by the markers. Each marker is made of:
252 252 #
253 253 # - uint32: total size of the marker (including this field)
254 254 #
255 255 # - float64: date in seconds since epoch
256 256 #
257 257 # - int16: timezone offset in minutes
258 258 #
259 259 # - uint16: a bit field. It is reserved for flags used in common
260 260 # obsolete marker operations, to avoid repeated decoding of metadata
261 261 # entries.
262 262 #
263 263 # - uint8: number of successors "N", can be zero.
264 264 #
265 265 # - uint8: number of parents "P", can be zero.
266 266 #
267 267 # 0: parents data stored but no parent,
268 268 # 1: one parent stored,
269 269 # 2: two parents stored,
270 270 # 3: no parent data stored
271 271 #
272 272 # - uint8: number of metadata entries M
273 273 #
274 274 # - 20 or 32 bytes: precursor changeset identifier.
275 275 #
276 276 # - N*(20 or 32) bytes: successors changesets identifiers.
277 277 #
278 278 # - P*(20 or 32) bytes: parents of the precursors changesets.
279 279 #
280 280 # - M*(uint8, uint8): size of all metadata entries (key and value)
281 281 #
282 282 # - remaining bytes: the metadata, each (key, value) pair after the other.
283 283 _fm1version = 1
284 284 _fm1fixed = '>IdhHBBB20s'
285 285 _fm1nodesha1 = '20s'
286 286 _fm1nodesha256 = '32s'
287 287 _fm1nodesha1size = _calcsize(_fm1nodesha1)
288 288 _fm1nodesha256size = _calcsize(_fm1nodesha256)
289 289 _fm1fsize = _calcsize(_fm1fixed)
290 290 _fm1parentnone = 3
291 291 _fm1parentshift = 14
292 292 _fm1parentmask = (_fm1parentnone << _fm1parentshift)
293 293 _fm1metapair = 'BB'
294 294 _fm1metapairsize = _calcsize('BB')
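Another illustrative aside (not part of the changeset): the fixed portion of a version-1 marker is 39 bytes and is decoded with a single struct call; note that the "number of parents" byte uses the value 3 (_fm1parentnone) to mean "no parent data recorded", while 0 means "recorded, but no parent"::

    import struct
    # size, date, tz, flags, #successors, #parents, #metadata, precursor node
    assert struct.calcsize('>IdhHBBB20s') == 39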
295 295
296 296 def _fm1purereadmarkers(data, off):
297 297 # make some global constants local for performance
298 298 noneflag = _fm1parentnone
299 299 sha2flag = usingsha256
300 300 sha1size = _fm1nodesha1size
301 301 sha2size = _fm1nodesha256size
302 302 sha1fmt = _fm1nodesha1
303 303 sha2fmt = _fm1nodesha256
304 304 metasize = _fm1metapairsize
305 305 metafmt = _fm1metapair
306 306 fsize = _fm1fsize
307 307 unpack = _unpack
308 308
309 309 # Loop on markers
310 310 stop = len(data) - _fm1fsize
311 311 ufixed = struct.Struct(_fm1fixed).unpack
312 312
313 313 while off <= stop:
314 314 # read fixed part
315 315 o1 = off + fsize
316 316 t, secs, tz, flags, numsuc, numpar, nummeta, prec = ufixed(data[off:o1])
317 317
318 318 if flags & sha2flag:
319 319 # FIXME: prec was read as a SHA1, needs to be amended
320 320
321 321 # read 0 or more successors
322 322 if numsuc == 1:
323 323 o2 = o1 + sha2size
324 324 sucs = (data[o1:o2],)
325 325 else:
326 326 o2 = o1 + sha2size * numsuc
327 327 sucs = unpack(sha2fmt * numsuc, data[o1:o2])
328 328
329 329 # read parents
330 330 if numpar == noneflag:
331 331 o3 = o2
332 332 parents = None
333 333 elif numpar == 1:
334 334 o3 = o2 + sha2size
335 335 parents = (data[o2:o3],)
336 336 else:
337 337 o3 = o2 + sha2size * numpar
338 338 parents = unpack(sha2fmt * numpar, data[o2:o3])
339 339 else:
340 340 # read 0 or more successors
341 341 if numsuc == 1:
342 342 o2 = o1 + sha1size
343 343 sucs = (data[o1:o2],)
344 344 else:
345 345 o2 = o1 + sha1size * numsuc
346 346 sucs = unpack(sha1fmt * numsuc, data[o1:o2])
347 347
348 348 # read parents
349 349 if numpar == noneflag:
350 350 o3 = o2
351 351 parents = None
352 352 elif numpar == 1:
353 353 o3 = o2 + sha1size
354 354 parents = (data[o2:o3],)
355 355 else:
356 356 o3 = o2 + sha1size * numpar
357 357 parents = unpack(sha1fmt * numpar, data[o2:o3])
358 358
359 359 # read metadata
360 360 off = o3 + metasize * nummeta
361 361 metapairsize = unpack('>' + (metafmt * nummeta), data[o3:off])
362 362 metadata = []
363 363 for idx in xrange(0, len(metapairsize), 2):
364 364 o1 = off + metapairsize[idx]
365 365 o2 = o1 + metapairsize[idx + 1]
366 366 metadata.append((data[off:o1], data[o1:o2]))
367 367 off = o2
368 368
369 369 yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)
370 370
371 371 def _fm1encodeonemarker(marker):
372 372 pre, sucs, flags, metadata, date, parents = marker
373 373 # determine node size
374 374 _fm1node = _fm1nodesha1
375 375 if flags & usingsha256:
376 376 _fm1node = _fm1nodesha256
377 377 numsuc = len(sucs)
378 378 numextranodes = numsuc
379 379 if parents is None:
380 380 numpar = _fm1parentnone
381 381 else:
382 382 numpar = len(parents)
383 383 numextranodes += numpar
384 384 formatnodes = _fm1node * numextranodes
385 385 formatmeta = _fm1metapair * len(metadata)
386 386 format = _fm1fixed + formatnodes + formatmeta
387 387 # tz is stored in minutes so we divide by 60
388 388 tz = date[1]//60
389 389 data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
390 390 data.extend(sucs)
391 391 if parents is not None:
392 392 data.extend(parents)
393 393 totalsize = _calcsize(format)
394 394 for key, value in metadata:
395 395 lk = len(key)
396 396 lv = len(value)
397 397 data.append(lk)
398 398 data.append(lv)
399 399 totalsize += lk + lv
400 400 data[0] = totalsize
401 401 data = [_pack(format, *data)]
402 402 for key, value in metadata:
403 403 data.append(key)
404 404 data.append(value)
405 405 return ''.join(data)
406 406
407 407 def _fm1readmarkers(data, off):
408 408 native = getattr(parsers, 'fm1readmarkers', None)
409 409 if not native:
410 410 return _fm1purereadmarkers(data, off)
411 411 stop = len(data) - _fm1fsize
412 412 return native(data, off, stop)
413 413
414 414 # mapping to read/write various marker formats
415 415 # <version> -> (decoder, encoder)
416 416 formats = {_fm0version: (_fm0readmarkers, _fm0encodeonemarker),
417 417 _fm1version: (_fm1readmarkers, _fm1encodeonemarker)}
418 418
419 419 @util.nogc
420 420 def _readmarkers(data):
421 421 """Read and enumerate markers from raw data"""
422 422 off = 0
423 423 diskversion = _unpack('>B', data[off:off + 1])[0]
424 424 off += 1
425 425 if diskversion not in formats:
426 426 raise error.Abort(_('parsing obsolete marker: unknown version %r')
427 427 % diskversion)
428 428 return diskversion, formats[diskversion][0](data, off)
429 429
430 430 def encodemarkers(markers, addheader=False, version=_fm0version):
431 431 # Kept separate from flushmarkers(), it will be reused for
432 432 # markers exchange.
433 433 encodeone = formats[version][1]
434 434 if addheader:
435 435 yield _pack('>B', version)
436 436 for marker in markers:
437 437 yield encodeone(marker)
438 438
439 439
440 440 class marker(object):
441 441 """Wrap obsolete marker raw data"""
442 442
443 443 def __init__(self, repo, data):
444 444 # the repo argument will be used to create changectx in later version
445 445 self._repo = repo
446 446 self._data = data
447 447 self._decodedmeta = None
448 448
449 449 def __hash__(self):
450 450 return hash(self._data)
451 451
452 452 def __eq__(self, other):
453 453 if type(other) != type(self):
454 454 return False
455 455 return self._data == other._data
456 456
457 457 def precnode(self):
458 458 """Precursor changeset node identifier"""
459 459 return self._data[0]
460 460
461 461 def succnodes(self):
462 462 """List of successor changesets node identifiers"""
463 463 return self._data[1]
464 464
465 465 def parentnodes(self):
466 466 """Parents of the precursors (None if not recorded)"""
467 467 return self._data[5]
468 468
469 469 def metadata(self):
470 470 """Decoded metadata dictionary"""
471 471 return dict(self._data[3])
472 472
473 473 def date(self):
474 474 """Creation date as (unixtime, offset)"""
475 475 return self._data[4]
476 476
477 477 def flags(self):
478 478 """The flags field of the marker"""
479 479 return self._data[2]
480 480
481 481 @util.nogc
482 482 def _addsuccessors(successors, markers):
483 483 for mark in markers:
484 484 successors.setdefault(mark[0], set()).add(mark)
485 485
486 486 @util.nogc
487 487 def _addprecursors(precursors, markers):
488 488 for mark in markers:
489 489 for suc in mark[1]:
490 490 precursors.setdefault(suc, set()).add(mark)
491 491
492 492 @util.nogc
493 493 def _addchildren(children, markers):
494 494 for mark in markers:
495 495 parents = mark[5]
496 496 if parents is not None:
497 497 for p in parents:
498 498 children.setdefault(p, set()).add(mark)
499 499
500 500 def _checkinvalidmarkers(markers):
501 501 """search for markers with invalid data and raise an error if needed
502 502
503 503 Exists as a separate function to allow the evolve extension a more
504 504 subtle handling.
505 505 """
506 506 for mark in markers:
507 507 if node.nullid in mark[1]:
508 508 raise error.Abort(_('bad obsolescence marker detected: '
509 509 'invalid successors nullid'))
510 510
511 511 class obsstore(object):
512 512 """Store obsolete markers
513 513
514 514 Markers can be accessed with three mappings:
515 515 - precursors[x] -> set(markers on precursors edges of x)
516 516 - successors[x] -> set(markers on successors edges of x)
517 517 - children[x] -> set(markers on precursors edges of children(x))
518 518 """
519 519
520 520 fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
521 521 # prec: nodeid, precursor changesets
522 522 # succs: tuple of nodeid, successor changesets (0-N length)
523 523 # flag: integer, flag field carrying modifier for the markers (see doc)
524 524 # meta: binary blob, encoded metadata dictionary
525 525 # date: (float, int) tuple, date of marker creation
526 526 # parents: (tuple of nodeid) or None, parents of precursors
527 527 # None is used when no data has been recorded
528 528
529 529 def __init__(self, svfs, defaultformat=_fm1version, readonly=False):
530 530 # caches for various obsolescence related data
531 531 self.caches = {}
532 532 self.svfs = svfs
533 533 self._version = defaultformat
534 534 self._readonly = readonly
535 535
536 536 def __iter__(self):
537 537 return iter(self._all)
538 538
539 539 def __len__(self):
540 540 return len(self._all)
541 541
542 542 def __nonzero__(self):
543 543 if not self._cached('_all'):
544 544 try:
545 545 return self.svfs.stat('obsstore').st_size > 1
546 546 except OSError as inst:
547 547 if inst.errno != errno.ENOENT:
548 548 raise
549 549 # just build an empty _all list if no obsstore exists, which
550 550 # avoids further stat() syscalls
551 551 pass
552 552 return bool(self._all)
553 553
554 554 __bool__ = __nonzero__
555 555
556 556 @property
557 557 def readonly(self):
558 558 """True if marker creation is disabled
559 559
560 560 Remove me in the future when obsolete markers are always on."""
561 561 return self._readonly
562 562
563 563 def create(self, transaction, prec, succs=(), flag=0, parents=None,
564 564 date=None, metadata=None):
565 565 """obsolete: add a new obsolete marker
566 566
567 567 * ensuring it is hashable
568 568 * check mandatory metadata
569 569 * encode metadata
570 570
571 571 If you are a human writing code that creates markers, you want to use the
572 572 `createmarkers` function in this module instead.
573 573
574 574 return True if a new marker has been added, False if the marker
575 575 already existed (no op).
576 576 """
577 577 if metadata is None:
578 578 metadata = {}
579 579 if date is None:
580 580 if 'date' in metadata:
581 581 # as a courtesy for out-of-tree extensions
582 582 date = util.parsedate(metadata.pop('date'))
583 583 else:
584 584 date = util.makedate()
585 585 if len(prec) != 20:
586 586 raise ValueError(prec)
587 587 for succ in succs:
588 588 if len(succ) != 20:
589 589 raise ValueError(succ)
590 590 if prec in succs:
591 591 raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
592 592
593 593 metadata = tuple(sorted(metadata.iteritems()))
594 594
595 595 marker = (str(prec), tuple(succs), int(flag), metadata, date, parents)
596 596 return bool(self.add(transaction, [marker]))
597 597
598 598 def add(self, transaction, markers):
599 599 """Add new markers to the store
600 600
601 601 Take care of filtering duplicates.
602 602 Return the number of new markers."""
603 603 if self._readonly:
604 604 raise error.Abort(_('creating obsolete markers is not enabled on '
605 605 'this repo'))
606 606 known = set(self._all)
607 607 new = []
608 608 for m in markers:
609 609 if m not in known:
610 610 known.add(m)
611 611 new.append(m)
612 612 if new:
613 613 f = self.svfs('obsstore', 'ab')
614 614 try:
615 615 offset = f.tell()
616 616 transaction.add('obsstore', offset)
617 617 # offset == 0: new file - add the version header
618 618 for bytes in encodemarkers(new, offset == 0, self._version):
619 619 f.write(bytes)
620 620 finally:
621 621 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
622 622 # call 'filecacheentry.refresh()' here
623 623 f.close()
624 624 self._addmarkers(new)
625 625 # new markers *may* have changed several sets. invalidate the cache.
626 626 self.caches.clear()
627 627 # records the number of new markers for the transaction hooks
628 628 previous = int(transaction.hookargs.get('new_obsmarkers', '0'))
629 629 transaction.hookargs['new_obsmarkers'] = str(previous + len(new))
630 630 return len(new)
631 631
632 632 def mergemarkers(self, transaction, data):
633 633 """merge a binary stream of markers inside the obsstore
634 634
635 635 Returns the number of new markers added."""
636 636 version, markers = _readmarkers(data)
637 637 return self.add(transaction, markers)
638 638
639 639 @propertycache
640 640 def _all(self):
641 641 data = self.svfs.tryread('obsstore')
642 642 if not data:
643 643 return []
644 644 self._version, markers = _readmarkers(data)
645 645 markers = list(markers)
646 646 _checkinvalidmarkers(markers)
647 647 return markers
648 648
649 649 @propertycache
650 650 def successors(self):
651 651 successors = {}
652 652 _addsuccessors(successors, self._all)
653 653 return successors
654 654
655 655 @propertycache
656 656 def precursors(self):
657 657 precursors = {}
658 658 _addprecursors(precursors, self._all)
659 659 return precursors
660 660
661 661 @propertycache
662 662 def children(self):
663 663 children = {}
664 664 _addchildren(children, self._all)
665 665 return children
666 666
667 667 def _cached(self, attr):
668 668 return attr in self.__dict__
669 669
670 670 def _addmarkers(self, markers):
671 671 markers = list(markers) # to allow repeated iteration
672 672 self._all.extend(markers)
673 673 if self._cached('successors'):
674 674 _addsuccessors(self.successors, markers)
675 675 if self._cached('precursors'):
676 676 _addprecursors(self.precursors, markers)
677 677 if self._cached('children'):
678 678 _addchildren(self.children, markers)
679 679 _checkinvalidmarkers(markers)
680 680
681 681 def relevantmarkers(self, nodes):
682 682 """return a set of all obsolescence markers relevant to a set of nodes.
683 683
684 684 "relevant" to a set of nodes means:
685 685
686 686 - markers that use one of these nodes as a successor
687 687 - prune markers of direct children of these nodes
688 688 - recursive application of the two rules to the precursors of these markers
689 689
690 690 It is a set so you cannot rely on order."""
691 691
692 692 pendingnodes = set(nodes)
693 693 seenmarkers = set()
694 694 seennodes = set(pendingnodes)
695 695 precursorsmarkers = self.precursors
696 696 children = self.children
697 697 while pendingnodes:
698 698 direct = set()
699 699 for current in pendingnodes:
700 700 direct.update(precursorsmarkers.get(current, ()))
701 701 pruned = [m for m in children.get(current, ()) if not m[1]]
702 702 direct.update(pruned)
703 703 direct -= seenmarkers
704 704 pendingnodes = set([m[0] for m in direct])
705 705 seenmarkers |= direct
706 706 pendingnodes -= seennodes
707 707 seennodes |= pendingnodes
708 708 return seenmarkers
709 709
710 710 def commonversion(versions):
711 711 """Return the newest version listed in both versions and our local formats.
712 712
713 713 Returns None if no common version exists.
714 714 """
715 715 versions.sort(reverse=True)
716 716 # search for highest version known on both side
717 717 for v in versions:
718 718 if v in formats:
719 719 return v
720 720 return None
721 721
722 722 # arbitrarily picked to fit into the 8K limit from HTTP servers
723 723 # you have to take into account:
724 724 # - the version header
725 725 # - the base85 encoding
726 726 _maxpayload = 5300
727 727
728 728 def _pushkeyescape(markers):
729 729 """encode markers into a dict suitable for pushkey exchange
730 730
731 731 - binary data is base85 encoded
732 732 - split into chunks smaller than 5300 bytes
733 733 keys = {}
734 734 parts = []
735 735 currentlen = _maxpayload * 2 # ensure we create a new part
736 736 for marker in markers:
737 737 nextdata = _fm0encodeonemarker(marker)
738 738 if (len(nextdata) + currentlen > _maxpayload):
739 739 currentpart = []
740 740 currentlen = 0
741 741 parts.append(currentpart)
742 742 currentpart.append(nextdata)
743 743 currentlen += len(nextdata)
744 744 for idx, part in enumerate(reversed(parts)):
745 745 data = ''.join([_pack('>B', _fm0version)] + part)
746 746 keys['dump%i' % idx] = util.b85encode(data)
747 747 return keys
748 748
749 749 def listmarkers(repo):
750 750 """List markers over pushkey"""
751 751 if not repo.obsstore:
752 752 return {}
753 753 return _pushkeyescape(sorted(repo.obsstore))
754 754
755 755 def pushmarker(repo, key, old, new):
756 756 """Push markers over pushkey"""
757 757 if not key.startswith('dump'):
758 758 repo.ui.warn(_('unknown key: %r') % key)
759 759 return 0
760 760 if old:
761 761 repo.ui.warn(_('unexpected old value for %r') % key)
762 762 return 0
763 763 data = util.b85decode(new)
764 764 lock = repo.lock()
765 765 try:
766 766 tr = repo.transaction('pushkey: obsolete markers')
767 767 try:
768 768 repo.obsstore.mergemarkers(tr, data)
769 769 tr.close()
770 770 return 1
771 771 finally:
772 772 tr.release()
773 773 finally:
774 774 lock.release()
775 775
776 776 def getmarkers(repo, nodes=None):
777 777 """returns markers known in a repository
778 778
779 779 If <nodes> is specified, only markers "relevant" to those nodes are
780 780 returned"""
781 781 if nodes is None:
782 782 rawmarkers = repo.obsstore
783 783 else:
784 784 rawmarkers = repo.obsstore.relevantmarkers(nodes)
785 785
786 786 for markerdata in rawmarkers:
787 787 yield marker(repo, markerdata)
788 788
789 789 def relevantmarkers(repo, node):
790 790 """all obsolete markers relevant to some revision"""
791 791 for markerdata in repo.obsstore.relevantmarkers(node):
792 792 yield marker(repo, markerdata)
793 793
794 794
795 795 def precursormarkers(ctx):
796 796 """obsolete marker marking this changeset as a successor"""
797 797 for data in ctx.repo().obsstore.precursors.get(ctx.node(), ()):
798 798 yield marker(ctx.repo(), data)
799 799
800 800 def successormarkers(ctx):
801 801 """obsolete marker making this changeset obsolete"""
802 802 for data in ctx.repo().obsstore.successors.get(ctx.node(), ()):
803 803 yield marker(ctx.repo(), data)
804 804
805 805 def allsuccessors(obsstore, nodes, ignoreflags=0):
806 806 """Yield a node for every successor of <nodes>.
807 807
808 808 Some successors may be unknown locally.
809 809
810 810 This is a linear yield unsuited to detecting split changesets. It includes
811 811 initial nodes too."""
812 812 remaining = set(nodes)
813 813 seen = set(remaining)
814 814 while remaining:
815 815 current = remaining.pop()
816 816 yield current
817 817 for mark in obsstore.successors.get(current, ()):
818 818 # ignore marker flagged with specified flag
819 819 if mark[2] & ignoreflags:
820 820 continue
821 821 for suc in mark[1]:
822 822 if suc not in seen:
823 823 seen.add(suc)
824 824 remaining.add(suc)
825 825
826 826 def allprecursors(obsstore, nodes, ignoreflags=0):
827 827 """Yield a node for every precursor of <nodes>.
828 828
829 829 Some precursors may be unknown locally.
830 830
831 831 This is a linear yield unsuited to detecting folded changesets. It includes
832 832 initial nodes too."""
833 833
834 834 remaining = set(nodes)
835 835 seen = set(remaining)
836 836 while remaining:
837 837 current = remaining.pop()
838 838 yield current
839 839 for mark in obsstore.precursors.get(current, ()):
840 840 # ignore marker flagged with specified flag
841 841 if mark[2] & ignoreflags:
842 842 continue
843 843 suc = mark[0]
844 844 if suc not in seen:
845 845 seen.add(suc)
846 846 remaining.add(suc)
847 847
848 848 def foreground(repo, nodes):
849 849 """return all nodes in the "foreground" of other nodes
850 850
851 851 The foreground of a revision is anything reachable using parent -> children
852 852 or precursor -> successor relation. It is very similar to "descendant" but
853 853 augmented with obsolescence information.
854 854
855 855 Beware that obsolescence cycles may arise in complex situations.
856 856 """
857 857 repo = repo.unfiltered()
858 858 foreground = set(repo.set('%ln::', nodes))
859 859 if repo.obsstore:
860 860 # We only need this complicated logic if there is obsolescence
861 861 # XXX will probably deserve an optimised revset.
862 862 nm = repo.changelog.nodemap
863 863 plen = -1
864 864 # compute the whole set of successors or descendants
865 865 while len(foreground) != plen:
866 866 plen = len(foreground)
867 867 succs = set(c.node() for c in foreground)
868 868 mutable = [c.node() for c in foreground if c.mutable()]
869 869 succs.update(allsuccessors(repo.obsstore, mutable))
870 870 known = (n for n in succs if n in nm)
871 871 foreground = set(repo.set('%ln::', known))
872 872 return set(c.node() for c in foreground)
873 873
874 874
875 875 def successorssets(repo, initialnode, cache=None):
876 876 """Return set of all latest successors of initial nodes
877 877
878 878 The successors set of a changeset A is the group of revisions that succeed
879 879 A. They succeed A as a consistent whole, each revision being only a partial
880 880 replacement. The successors set contains non-obsolete changesets only.
881 881
882 882 This function returns the full list of successor sets which is why it
883 883 returns a list of tuples and not just a single tuple. Each tuple is a valid
884 884 successors set. Note that (A,) may be a valid successors set for changeset A
885 885 (see below).
886 886
887 887 In most cases, a changeset A will have a single element (e.g. the changeset
888 888 A is replaced by A') in its successors set. Though, it is also common for a
889 889 changeset A to have no elements in its successor set (e.g. the changeset
890 890 has been pruned). Therefore, the returned list of successors sets will be
891 891 [(A',)] or [], respectively.
892 892
893 893 When a changeset A is split into A' and B', however, it will result in a
894 894 successors set containing more than a single element, i.e. [(A',B')].
895 895 Divergent changesets will result in multiple successors sets, i.e. [(A',),
896 896 (A'')].
897 897
898 898 If a changeset A is not obsolete, then it will conceptually have no
899 899 successors set. To distinguish this from a pruned changeset, the successor
900 900 set will contain itself only, i.e. [(A,)].
901 901
902 902 Finally, successors unknown locally are considered to be pruned (obsoleted
903 903 without any successors).
904 904
905 905 The optional `cache` parameter is a dictionary that may contain precomputed
906 906 successors sets. It is meant to reuse the computation of a previous call to
907 907 `successorssets` when multiple calls are made at the same time. The cache
908 908 dictionary is updated in place. The caller is responsible for its life
909 909 span. Code that makes multiple calls to `successorssets` *must* use this
910 910 cache mechanism or suffer terrible performance.
911 911 """
912 912
913 913 succmarkers = repo.obsstore.successors
914 914
915 915 # Stack of nodes we search successors sets for
916 916 toproceed = [initialnode]
917 917 # set version of above list for fast loop detection
918 918 # element added to "toproceed" must be added here
919 919 stackedset = set(toproceed)
920 920 if cache is None:
921 921 cache = {}
922 922
923 923 # This while loop is the flattened version of a recursive search for
924 924 # successors sets
925 925 #
926 926 # def successorssets(x):
927 927 # successors = directsuccessors(x)
928 928 # ss = [[]]
929 929 # for succ in directsuccessors(x):
930 930 # # product as in itertools cartesian product
931 931 # ss = product(ss, successorssets(succ))
932 932 # return ss
933 933 #
934 934 # But we can not use plain recursive calls here:
935 935 # - that would blow the python call stack
936 936 # - obsolescence markers may have cycles, we need to handle them.
937 937 #
938 938 # The `toproceed` list acts as our call stack. Every node we search
939 939 # successors sets for is stacked there.
940 940 #
941 941 # The `stackedset` is a set version of this stack, used to check whether a
942 942 # node is already stacked. This check is used to detect cycles and prevent
943 943 # infinite loops.
944 944 #
945 945 # The successors sets of all nodes are stored in the `cache` dictionary.
946 946 #
947 947 # After this while loop ends we use the cache to return the successors sets
948 948 # for the node requested by the caller.
949 949 while toproceed:
950 950 # Every iteration tries to compute the successors sets of the topmost
951 951 # node of the stack: CURRENT.
952 952 #
953 953 # There are four possible outcomes:
954 954 #
955 955 # 1) We already know the successors sets of CURRENT:
956 956 # -> mission accomplished, pop it from the stack.
957 957 # 2) Node is not obsolete:
958 958 # -> the node is its own successors set. Add it to the cache.
959 959 # 3) We do not know the successors sets of some direct successors of CURRENT:
960 960 # -> We add those successors to the stack.
961 961 # 4) We know successors sets of all direct successors of CURRENT:
962 962 # -> We can compute CURRENT successors set and add it to the
963 963 # cache.
964 964 #
965 965 current = toproceed[-1]
966 966 if current in cache:
967 967 # case (1): We already know the successors sets
968 968 stackedset.remove(toproceed.pop())
969 969 elif current not in succmarkers:
970 970 # case (2): The node is not obsolete.
971 971 if current in repo:
972 972 # We have a valid final successor.
973 973 cache[current] = [(current,)]
974 974 else:
975 975 # Final obsolete version is unknown locally.
976 976 # Do not count that as a valid successor
977 977 cache[current] = []
978 978 else:
979 979 # cases (3) and (4)
980 980 #
981 981 # We proceed in two phases. Phase 1 aims to distinguish case (3)
982 982 # from case (4):
983 983 #
984 984 # For each direct successors of CURRENT, we check whether its
985 985 # successors sets are known. If they are not, we stack the
986 986 # unknown node and proceed to the next iteration of the while
987 987 # loop. (case 3)
988 988 #
989 989 # During this step, we may detect obsolescence cycles: a node
990 990 # with unknown successors sets but already in the call stack.
991 991 # In such a situation, we arbitrarily set the successors sets of
992 992 # the node to nothing (node pruned) to break the cycle.
993 993 #
994 994 # If no break was encountered we proceed to phase 2.
995 995 #
996 996 # Phase 2 computes successors sets of CURRENT (case 4); see details
997 997 # in phase 2 itself.
998 998 #
999 999 # Note the two levels of iteration in each phase.
1000 1000 # - The first one handles obsolescence markers using CURRENT as
1001 1001 # precursor (successors markers of CURRENT).
1002 1002 #
1003 1003 # Having multiple entries here means divergence.
1004 1004 #
1005 1005 # - The second one handles successors defined in each marker.
1006 1006 #
1007 1007 # Having none means a pruned node, multiple successors mean a split,
1008 1008 # and a single successor is a standard replacement.
1009 1009 #
1010 1010 for mark in sorted(succmarkers[current]):
1011 1011 for suc in mark[1]:
1012 1012 if suc not in cache:
1013 1013 if suc in stackedset:
1014 1014 # cycle breaking
1015 1015 cache[suc] = []
1016 1016 else:
1017 1017 # case (3) If we have not computed successors sets
1018 1018 # of one of those successors we add it to the
1019 1019 # `toproceed` stack and stop all work for this
1020 1020 # iteration.
1021 1021 toproceed.append(suc)
1022 1022 stackedset.add(suc)
1023 1023 break
1024 1024 else:
1025 1025 continue
1026 1026 break
1027 1027 else:
1028 1028 # case (4): we know all successors sets of all direct
1029 1029 # successors
1030 1030 #
1031 1031 # Successors set contributed by each marker depends on the
1032 1032 # successors sets of all its "successors" nodes.
1033 1033 #
1034 1034 # Each different marker is a divergence in the obsolescence
1035 1035 # history. It contributes successors sets distinct from other
1036 1036 # markers.
1037 1037 #
1038 1038 # Within a marker, a successor may have divergent successors
1039 1039 # sets. In such a case, the marker will contribute multiple
1040 1040 # divergent successors sets. If multiple successors have
1041 1041 # divergent successors sets, a Cartesian product is used.
1042 1042 #
1043 1043 # At the end we post-process successors sets to remove
1044 1044 # duplicated entries and successors sets that are strict subsets
1045 1045 # of other ones.
1046 1046 succssets = []
1047 1047 for mark in sorted(succmarkers[current]):
1048 1048 # successors sets contributed by this marker
1049 1049 markss = [[]]
1050 1050 for suc in mark[1]:
1051 1051 # Cartesian product with previous successors sets
1052 1052 productresult = []
1053 1053 for prefix in markss:
1054 1054 for suffix in cache[suc]:
1055 1055 newss = list(prefix)
1056 1056 for part in suffix:
1057 1057 # do not duplicate entries in the successors set;
1058 1058 # the first entry wins.
1059 1059 if part not in newss:
1060 1060 newss.append(part)
1061 1061 productresult.append(newss)
1062 1062 markss = productresult
1063 1063 succssets.extend(markss)
1064 1064 # remove duplicated and subset
1065 1065 seen = []
1066 1066 final = []
1067 1067 candidate = sorted(((set(s), s) for s in succssets if s),
1068 1068 key=lambda x: len(x[1]), reverse=True)
1069 1069 for setversion, listversion in candidate:
1070 1070 for seenset in seen:
1071 1071 if setversion.issubset(seenset):
1072 1072 break
1073 1073 else:
1074 1074 final.append(listversion)
1075 1075 seen.append(setversion)
1076 1076 final.reverse() # put small successors set first
1077 1077 cache[current] = final
1078 1078 return cache[initialnode]
1079 1079
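A sketch of the cache contract described in the docstring above (assumptions: ``repo`` is a loaded repository and ``nodes_of_interest`` is some iterable of binary node ids)::

    from mercurial import obsolete
    from mercurial.node import short

    cache = {}                        # shared across calls, as required above
    for node in nodes_of_interest:
        ssets = obsolete.successorssets(repo, node, cache)
        if not ssets:
            print('%s is pruned' % short(node))
        elif ssets == [(node,)]:
            print('%s is not obsolete' % short(node))
        elif len(ssets) > 1:
            print('%s has divergent successors' % short(node))
        else:
            print('%s was rewritten (possibly split)' % short(node))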
1080 1080 # mapping of 'set-name' -> <function to compute this set>
1081 1081 cachefuncs = {}
1082 1082 def cachefor(name):
1083 1083 """Decorator to register a function as computing the cache for a set"""
1084 1084 def decorator(func):
1085 1085 assert name not in cachefuncs
1086 1086 cachefuncs[name] = func
1087 1087 return func
1088 1088 return decorator
1089 1089
1090 1090 def getrevs(repo, name):
1091 1091 """Return the set of revisions that belong to the <name> set
1092 1092
1093 1093 Such access may compute the set and cache it for future use"""
1094 1094 repo = repo.unfiltered()
1095 1095 if not repo.obsstore:
1096 1096 return frozenset()
1097 1097 if name not in repo.obsstore.caches:
1098 1098 repo.obsstore.caches[name] = cachefuncs[name](repo)
1099 1099 return repo.obsstore.caches[name]
1100 1100
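For example, an extension could register its own cached set with the same decorator; this is only a sketch, and the set name and revset are made up::

    from mercurial.obsolete import cachefor, getrevs

    @cachefor('obsoleteheads')          # hypothetical set name
    def _computeobsoleteheadset(repo):
        """obsolete revisions that are also heads (illustrative only)"""
        return set(repo.revs('head()')) & getrevs(repo, 'obsolete')

    # consumers then simply ask for it and benefit from the cache:
    #   getrevs(repo, 'obsoleteheads')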
1101 1101 # To keep things simple, we need to invalidate the obsolescence caches when:
1102 1102 #
1103 1103 # - a new changeset is added
1104 1104 # - the public phase is changed
1105 1105 # - obsolescence markers are added
1106 1106 # - strip is used on a repo
1107 1107 def clearobscaches(repo):
1108 1108 """Remove all obsolescence-related caches from a repo
1109 1109
1110 1110 This removes all caches in the obsstore if the obsstore already exists on
1111 1111 the repo.
1112 1112
1113 1113 (We could be smarter here given the exact event that triggered the cache
1114 1114 clearing)"""
1115 1115 # only clear caches if there is obsstore data in this repo
1116 1116 if 'obsstore' in repo._filecache:
1117 1117 repo.obsstore.caches.clear()
1118 1118
1119 1119 @cachefor('obsolete')
1120 1120 def _computeobsoleteset(repo):
1121 1121 """the set of obsolete revisions"""
1122 1122 obs = set()
1123 1123 getnode = repo.changelog.node
1124 1124 notpublic = repo._phasecache.getrevset(repo, (phases.draft, phases.secret))
1125 1125 for r in notpublic:
1126 1126 if getnode(r) in repo.obsstore.successors:
1127 1127 obs.add(r)
1128 1128 return obs
1129 1129
1130 1130 @cachefor('unstable')
1131 1131 def _computeunstableset(repo):
1132 1132 """the set of non-obsolete revisions with obsolete parents"""
1133 1133 revs = [(ctx.rev(), ctx) for ctx in
1134 1134 repo.set('(not public()) and (not obsolete())')]
1135 1135 revs.sort(key=lambda x:x[0])
1136 1136 unstable = set()
1137 1137 for rev, ctx in revs:
1138 1138 # A rev is unstable if one of its parents is obsolete or unstable;
1139 1139 # this works since we traverse in increasing revision order
1140 1140 if any((x.obsolete() or (x.rev() in unstable))
1141 1141 for x in ctx.parents()):
1142 1142 unstable.add(rev)
1143 1143 return unstable
1144 1144
1145 1145 @cachefor('suspended')
1146 1146 def _computesuspendedset(repo):
1147 1147 """the set of obsolete changesets with non-obsolete descendants"""
1148 1148 suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
1149 1149 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
1150 1150
1151 1151 @cachefor('extinct')
1152 1152 def _computeextinctset(repo):
1153 1153 """the set of obsolete changesets without non-obsolete descendants"""
1154 1154 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
1155 1155
1156 1156
1157 1157 @cachefor('bumped')
1158 1158 def _computebumpedset(repo):
1159 1159 """the set of revs trying to obsolete public revisions"""
1160 1160 bumped = set()
1161 1161 # util function (avoid attribute lookup in the loop)
1162 1162 phase = repo._phasecache.phase # would be faster to grab the full list
1163 1163 public = phases.public
1164 1164 cl = repo.changelog
1165 1165 torev = cl.nodemap.get
1166 1166 for ctx in repo.set('(not public()) and (not obsolete())'):
1167 1167 rev = ctx.rev()
1168 1168 # We only evaluate mutable, non-obsolete revisions
1169 1169 node = ctx.node()
1170 1170 # (future) A cache of precursors may be worthwhile if splits are very common
1171 1171 for pnode in allprecursors(repo.obsstore, [node],
1172 1172 ignoreflags=bumpedfix):
1173 1173 prev = torev(pnode) # unfiltered! but so is phasecache
1174 1174 if (prev is not None) and (phase(repo, prev) <= public):
1175 1175 # we have a public precursor
1176 1176 bumped.add(rev)
1177 1177 break # Next draft!
1178 1178 return bumped
1179 1179
1180 1180 @cachefor('divergent')
1181 1181 def _computedivergentset(repo):
1182 1182 """the set of revs that compete to be the final successor of some revision.
1183 1183 """
1184 1184 divergent = set()
1185 1185 obsstore = repo.obsstore
1186 1186 newermap = {}
1187 1187 for ctx in repo.set('(not public()) - obsolete()'):
1188 1188 mark = obsstore.precursors.get(ctx.node(), ())
1189 1189 toprocess = set(mark)
1190 1190 seen = set()
1191 1191 while toprocess:
1192 1192 prec = toprocess.pop()[0]
1193 1193 if prec in seen:
1194 1194 continue # emergency protection against marker cycles
1195 1195 seen.add(prec)
1196 1196 if prec not in newermap:
1197 1197 successorssets(repo, prec, newermap)
1198 1198 newer = [n for n in newermap[prec] if n]
1199 1199 if len(newer) > 1:
1200 1200 divergent.add(ctx.rev())
1201 1201 break
1202 1202 toprocess.update(obsstore.precursors.get(prec, ()))
1203 1203 return divergent
1204 1204
1205 1205
1206 def createmarkers(repo, relations, flag=0, date=None, metadata=None):
1206 def createmarkers(repo, relations, flag=0, date=None, metadata=None,
1207 operation=None):
1207 1208 """Add obsolete markers between changesets in a repo
1208 1209
1209 1210 <relations> must be an iterable of (<old>, (<new>, ...)[, {metadata}])
1210 1211 tuples. `old` and `news` are changectx. metadata is an optional dictionary
1211 1212 containing metadata for this marker only. It is merged with the global
1212 1213 metadata specified through the `metadata` argument of this function.
1213 1214
1214 1215 Trying to obsolete a public changeset will raise an exception.
1215 1216
1216 1217 Current user and date are used except if specified otherwise in the
1217 1218 metadata attribute.
1218 1219
1219 1220 This function operates within a transaction of its own, but does
1220 1221 not take any lock on the repo.
1221 1222 """
1222 1223 # prepare metadata
1223 1224 if metadata is None:
1224 1225 metadata = {}
1225 1226 if 'user' not in metadata:
1226 1227 metadata['user'] = repo.ui.username()
1228 if operation:
1229 metadata['operation'] = operation
1227 1230 tr = repo.transaction('add-obsolescence-marker')
1228 1231 try:
1229 1232 markerargs = []
1230 1233 for rel in relations:
1231 1234 prec = rel[0]
1232 1235 sucs = rel[1]
1233 1236 localmetadata = metadata.copy()
1234 1237 if 2 < len(rel):
1235 1238 localmetadata.update(rel[2])
1236 1239
1237 1240 if not prec.mutable():
1238 1241 raise error.Abort(_("cannot obsolete public changeset: %s")
1239 1242 % prec,
1240 1243 hint="see 'hg help phases' for details")
1241 1244 nprec = prec.node()
1242 1245 nsucs = tuple(s.node() for s in sucs)
1243 1246 npare = None
1244 1247 if not nsucs:
1245 1248 npare = tuple(p.node() for p in prec.parents())
1246 1249 if nprec in nsucs:
1247 1250 raise error.Abort(_("changeset %s cannot obsolete itself")
1248 1251 % prec)
1249 1252
1250 1253 # Creating the marker causes the hidden cache to become invalid,
1251 1254 # which causes recomputation when we ask for prec.parents() above.
1252 1255 # Resulting in n^2 behavior. So let's prepare all of the args
1253 1256 # first, then create the markers.
1254 1257 markerargs.append((nprec, nsucs, npare, localmetadata))
1255 1258
1256 1259 for args in markerargs:
1257 1260 nprec, nsucs, npare, localmetadata = args
1258 1261 repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
1259 1262 date=date, metadata=localmetadata)
1260 1263 repo.filteredrevcache.clear()
1261 1264 tr.close()
1262 1265 finally:
1263 1266 tr.release()
1264 1267
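With the new ``operation`` argument, a rewriting command can tag its markers like this (a sketch only; ``repo``, ``old`` and ``new`` are assumed to be pre-existing repository and changectx objects, and the per-relation metadata key is an arbitrary example)::

    # one marker: ``old`` was rewritten into ``new`` by an amend
    createmarkers(repo, [(old, (new,))], operation='amend')

    # per-relation metadata can still be supplied alongside the operation
    createmarkers(repo, [(old, (new,), {'note': 'fix typo'})],
                  operation='histedit')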
1265 1268 def isenabled(repo, option):
1266 1269 """Returns True if the given repository has the given obsolete option
1267 1270 enabled.
1268 1271 """
1269 1272 result = set(repo.ui.configlist('experimental', 'evolution'))
1270 1273 if 'all' in result:
1271 1274 return True
1272 1275
1273 1276 # For migration purposes, temporarily return true if the config hasn't been
1274 1277 # set but _enabled is true.
1275 1278 if len(result) == 0 and _enabled:
1276 1279 return True
1277 1280
1278 1281 # createmarkers must be enabled if other options are enabled
1279 1282 if ((allowunstableopt in result or exchangeopt in result) and
1280 1283 not createmarkersopt in result):
1281 1284 raise error.Abort(_("'createmarkers' obsolete option must be enabled "
1282 1285 "if other obsolete options are enabled"))
1283 1286
1284 1287 return option in result
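A rough sketch of the usual guard around marker creation (``repo`` and ``relations`` are assumed to exist; ``createmarkersopt`` is the module-level constant referenced above)::

    from mercurial import obsolete

    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        obsolete.createmarkers(repo, relations, operation='histedit')
    else:
        # without marker support the caller has to fall back to stripping
        # the replaced changesets instead
        pass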
@@ -1,576 +1,576 b''
1 1 $ . "$TESTDIR/histedit-helpers.sh"
2 2
3 3 Enable obsolete
4 4
5 5 $ cat >> $HGRCPATH << EOF
6 6 > [ui]
7 7 > logtemplate= {rev}:{node|short} {desc|firstline}
8 8 > [phases]
9 9 > publish=False
10 10 > [experimental]
11 11 > evolution=createmarkers,allowunstable
12 12 > [extensions]
13 13 > histedit=
14 14 > rebase=
15 15 > EOF
16 16
17 17 Test that histedit learns about obsolescence not stored in histedit state
18 18 $ hg init boo
19 19 $ cd boo
20 20 $ echo a > a
21 21 $ hg ci -Am a
22 22 adding a
23 23 $ echo a > b
24 24 $ echo a > c
25 25 $ echo a > c
26 26 $ hg ci -Am b
27 27 adding b
28 28 adding c
29 29 $ echo a > d
30 30 $ hg ci -Am c
31 31 adding d
32 32 $ echo "pick `hg log -r 0 -T '{node|short}'`" > plan
33 33 $ echo "pick `hg log -r 2 -T '{node|short}'`" >> plan
34 34 $ echo "edit `hg log -r 1 -T '{node|short}'`" >> plan
35 35 $ hg histedit -r 'all()' --commands plan
36 36 Editing (1b2d564fad96), you may commit or record as needed now.
37 37 (hg histedit --continue to resume)
38 38 [1]
39 39 $ hg st
40 40 A b
41 41 A c
42 42 ? plan
43 43 $ hg commit --amend b
44 44 $ hg histedit --continue
45 45 $ hg log -G
46 46 @ 6:46abc7c4d873 b
47 47 |
48 48 o 5:49d44ab2be1b c
49 49 |
50 50 o 0:cb9a9f314b8b a
51 51
52 52 $ hg debugobsolete
53 e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'user': 'test'} (glob)
54 3e30a45cf2f719e96ab3922dfe039cfd047956ce 0 {e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf} (*) {'user': 'test'} (glob)
55 1b2d564fad96311b45362f17c2aa855150efb35f 46abc7c4d8738e8563e577f7889e1b6db3da4199 0 (*) {'user': 'test'} (glob)
56 114f4176969ef342759a8a57e6bccefc4234829b 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'user': 'test'} (glob)
53 e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'operation': 'amend', 'user': 'test'} (glob)
54 3e30a45cf2f719e96ab3922dfe039cfd047956ce 0 {e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf} (*) {'operation': 'amend', 'user': 'test'} (glob)
55 1b2d564fad96311b45362f17c2aa855150efb35f 46abc7c4d8738e8563e577f7889e1b6db3da4199 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
56 114f4176969ef342759a8a57e6bccefc4234829b 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
57 57
58 58 With some node gone missing during the edit.
59 59
60 60 $ echo "pick `hg log -r 0 -T '{node|short}'`" > plan
61 61 $ echo "pick `hg log -r 6 -T '{node|short}'`" >> plan
62 62 $ echo "edit `hg log -r 5 -T '{node|short}'`" >> plan
63 63 $ hg histedit -r 'all()' --commands plan
64 64 Editing (49d44ab2be1b), you may commit or record as needed now.
65 65 (hg histedit --continue to resume)
66 66 [1]
67 67 $ hg st
68 68 A b
69 69 A d
70 70 ? plan
71 71 $ hg commit --amend -X . -m XXXXXX
72 72 $ hg commit --amend -X . -m b2
73 73 $ hg --hidden --config extensions.strip= strip 'desc(XXXXXX)' --no-backup
74 74 $ hg histedit --continue
75 75 $ hg log -G
76 76 @ 9:273c1f3b8626 c
77 77 |
78 78 o 8:aba7da937030 b2
79 79 |
80 80 o 0:cb9a9f314b8b a
81 81
82 82 $ hg debugobsolete
83 e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'user': 'test'} (glob)
84 3e30a45cf2f719e96ab3922dfe039cfd047956ce 0 {e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf} (*) {'user': 'test'} (glob)
85 1b2d564fad96311b45362f17c2aa855150efb35f 46abc7c4d8738e8563e577f7889e1b6db3da4199 0 (*) {'user': 'test'} (glob)
86 114f4176969ef342759a8a57e6bccefc4234829b 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'user': 'test'} (glob)
87 76f72745eac0643d16530e56e2f86e36e40631f1 2ca853e48edbd6453a0674dc0fe28a0974c51b9c 0 (*) {'user': 'test'} (glob)
88 2ca853e48edbd6453a0674dc0fe28a0974c51b9c aba7da93703075eec9fb1dbaf143ff2bc1c49d46 0 (*) {'user': 'test'} (glob)
89 49d44ab2be1b67a79127568a67c9c99430633b48 273c1f3b86267ed3ec684bb13af1fa4d6ba56e02 0 (*) {'user': 'test'} (glob)
90 46abc7c4d8738e8563e577f7889e1b6db3da4199 aba7da93703075eec9fb1dbaf143ff2bc1c49d46 0 (*) {'user': 'test'} (glob)
83 e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'operation': 'amend', 'user': 'test'} (glob)
84 3e30a45cf2f719e96ab3922dfe039cfd047956ce 0 {e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf} (*) {'operation': 'amend', 'user': 'test'} (glob)
85 1b2d564fad96311b45362f17c2aa855150efb35f 46abc7c4d8738e8563e577f7889e1b6db3da4199 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
86 114f4176969ef342759a8a57e6bccefc4234829b 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
87 76f72745eac0643d16530e56e2f86e36e40631f1 2ca853e48edbd6453a0674dc0fe28a0974c51b9c 0 (*) {'operation': 'amend', 'user': 'test'} (glob)
88 2ca853e48edbd6453a0674dc0fe28a0974c51b9c aba7da93703075eec9fb1dbaf143ff2bc1c49d46 0 (*) {'operation': 'amend', 'user': 'test'} (glob)
89 49d44ab2be1b67a79127568a67c9c99430633b48 273c1f3b86267ed3ec684bb13af1fa4d6ba56e02 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
90 46abc7c4d8738e8563e577f7889e1b6db3da4199 aba7da93703075eec9fb1dbaf143ff2bc1c49d46 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
91 91 $ cd ..
92 92
93 93 Base setup for the rest of the testing
94 94 ======================================
95 95
96 96 $ hg init base
97 97 $ cd base
98 98
99 99 $ for x in a b c d e f ; do
100 100 > echo $x > $x
101 101 > hg add $x
102 102 > hg ci -m $x
103 103 > done
104 104
105 105 $ hg log --graph
106 106 @ 5:652413bf663e f
107 107 |
108 108 o 4:e860deea161a e
109 109 |
110 110 o 3:055a42cdd887 d
111 111 |
112 112 o 2:177f92b77385 c
113 113 |
114 114 o 1:d2ae7f538514 b
115 115 |
116 116 o 0:cb9a9f314b8b a
117 117
118 118
119 119 $ HGEDITOR=cat hg histedit 1
120 120 pick d2ae7f538514 1 b
121 121 pick 177f92b77385 2 c
122 122 pick 055a42cdd887 3 d
123 123 pick e860deea161a 4 e
124 124 pick 652413bf663e 5 f
125 125
126 126 # Edit history between d2ae7f538514 and 652413bf663e
127 127 #
128 128 # Commits are listed from least to most recent
129 129 #
130 130 # You can reorder changesets by reordering the lines
131 131 #
132 132 # Commands:
133 133 #
134 134 # e, edit = use commit, but stop for amending
135 135 # m, mess = edit commit message without changing commit content
136 136 # p, pick = use commit
137 137 # d, drop = remove commit from history
138 138 # f, fold = use commit, but combine it with the one above
139 139 # r, roll = like fold, but discard this commit's description and date
140 140 #
141 141 $ hg histedit 1 --commands - --verbose <<EOF | grep histedit
142 142 > pick 177f92b77385 2 c
143 143 > drop d2ae7f538514 1 b
144 144 > pick 055a42cdd887 3 d
145 145 > fold e860deea161a 4 e
146 146 > pick 652413bf663e 5 f
147 147 > EOF
148 148 [1]
149 149 $ hg log --graph --hidden
150 150 @ 10:cacdfd884a93 f
151 151 |
152 152 o 9:59d9f330561f d
153 153 |
154 154 | x 8:b558abc46d09 fold-temp-revision e860deea161a
155 155 | |
156 156 | x 7:96e494a2d553 d
157 157 |/
158 158 o 6:b346ab9a313d c
159 159 |
160 160 | x 5:652413bf663e f
161 161 | |
162 162 | x 4:e860deea161a e
163 163 | |
164 164 | x 3:055a42cdd887 d
165 165 | |
166 166 | x 2:177f92b77385 c
167 167 | |
168 168 | x 1:d2ae7f538514 b
169 169 |/
170 170 o 0:cb9a9f314b8b a
171 171
172 172 $ hg debugobsolete
173 96e494a2d553dd05902ba1cee1d94d4cb7b8faed 0 {b346ab9a313db8537ecf96fca3ca3ca984ef3bd7} (*) {'user': 'test'} (glob)
174 b558abc46d09c30f57ac31e85a8a3d64d2e906e4 0 {96e494a2d553dd05902ba1cee1d94d4cb7b8faed} (*) {'user': 'test'} (glob)
175 d2ae7f538514cd87c17547b0de4cea71fe1af9fb 0 {cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b} (*) {'user': 'test'} (glob)
176 177f92b773850b59254aa5e923436f921b55483b b346ab9a313db8537ecf96fca3ca3ca984ef3bd7 0 (*) {'user': 'test'} (glob)
177 055a42cdd88768532f9cf79daa407fc8d138de9b 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'user': 'test'} (glob)
178 e860deea161a2f77de56603b340ebbb4536308ae 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'user': 'test'} (glob)
179 652413bf663ef2a641cab26574e46d5f5a64a55a cacdfd884a9321ec4e1de275ef3949fa953a1f83 0 (*) {'user': 'test'} (glob)
173 96e494a2d553dd05902ba1cee1d94d4cb7b8faed 0 {b346ab9a313db8537ecf96fca3ca3ca984ef3bd7} (*) {'operation': 'histedit', 'user': 'test'} (glob)
174 b558abc46d09c30f57ac31e85a8a3d64d2e906e4 0 {96e494a2d553dd05902ba1cee1d94d4cb7b8faed} (*) {'operation': 'histedit', 'user': 'test'} (glob)
175 d2ae7f538514cd87c17547b0de4cea71fe1af9fb 0 {cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b} (*) {'operation': 'histedit', 'user': 'test'} (glob)
176 177f92b773850b59254aa5e923436f921b55483b b346ab9a313db8537ecf96fca3ca3ca984ef3bd7 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
177 055a42cdd88768532f9cf79daa407fc8d138de9b 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
178 e860deea161a2f77de56603b340ebbb4536308ae 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
179 652413bf663ef2a641cab26574e46d5f5a64a55a cacdfd884a9321ec4e1de275ef3949fa953a1f83 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
180 180
181 181
182 182 Ensure hidden revision does not prevent histedit
183 183 -------------------------------------------------
184 184
185 185 create a hidden revision
186 186
187 187 $ hg histedit 6 --commands - << EOF
188 188 > pick b346ab9a313d 6 c
189 189 > drop 59d9f330561f 7 d
190 190 > pick cacdfd884a93 8 f
191 191 > EOF
192 192 $ hg log --graph
193 193 @ 11:c13eb81022ca f
194 194 |
195 195 o 6:b346ab9a313d c
196 196 |
197 197 o 0:cb9a9f314b8b a
198 198
199 199 check hidden revisions are ignored (6 has hidden children 7 and 8)
200 200
201 201 $ hg histedit 6 --commands - << EOF
202 202 > pick b346ab9a313d 6 c
203 203 > pick c13eb81022ca 8 f
204 204 > EOF
205 205
206 206
207 207
208 208 Test that rewriting that leaves instability behind is allowed
209 209 ---------------------------------------------------------------------
210 210
211 211 $ hg up '.^'
212 212 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
213 213 $ hg log -r 'children(.)'
214 214 11:c13eb81022ca f (no-eol)
215 215 $ hg histedit -r '.' --commands - <<EOF
216 216 > edit b346ab9a313d 6 c
217 217 > EOF
218 218 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
219 219 adding c
220 220 Editing (b346ab9a313d), you may commit or record as needed now.
221 221 (hg histedit --continue to resume)
222 222 [1]
223 223 $ echo c >> c
224 224 $ hg histedit --continue
225 225
226 226 $ hg log -r 'unstable()'
227 227 11:c13eb81022ca f (no-eol)
228 228
229 229 stabilise
230 230
231 231 $ hg rebase -r 'unstable()' -d .
232 232 rebasing 11:c13eb81022ca "f"
233 233 $ hg up tip -q
234 234
235 235 Test dropping of changeset on the top of the stack
236 236 -------------------------------------------------------
237 237
238 238 Nothing is rewritten below; the working directory parent must be changed for the
239 239 dropped changeset to be hidden.
240 240
241 241 $ cd ..
242 242 $ hg clone base droplast
243 243 updating to branch default
244 244 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
245 245 $ cd droplast
246 246 $ hg histedit -r '40db8afa467b' --commands - << EOF
247 247 > pick 40db8afa467b 10 c
248 248 > drop b449568bf7fc 11 f
249 249 > EOF
250 250 $ hg log -G
251 251 @ 12:40db8afa467b c
252 252 |
253 253 o 0:cb9a9f314b8b a
254 254
255 255
256 256 With rewritten ancestors
257 257
258 258 $ echo e > e
259 259 $ hg add e
260 260 $ hg commit -m g
261 261 $ echo f > f
262 262 $ hg add f
263 263 $ hg commit -m h
264 264 $ hg histedit -r '40db8afa467b' --commands - << EOF
265 265 > pick 47a8561c0449 12 g
266 266 > pick 40db8afa467b 10 c
267 267 > drop 1b3b05f35ff0 13 h
268 268 > EOF
269 269 $ hg log -G
270 270 @ 17:ee6544123ab8 c
271 271 |
272 272 o 16:269e713e9eae g
273 273 |
274 274 o 0:cb9a9f314b8b a
275 275
276 276 $ cd ../base
277 277
278 278
279 279
280 280 Test phases support
281 281 ===========================================
282 282
283 283 Check that histedit respects immutability
284 284 -------------------------------------------
285 285
286 286 $ cat >> $HGRCPATH << EOF
287 287 > [ui]
288 288 > logtemplate= {rev}:{node|short} ({phase}) {desc|firstline}\n
289 289 > EOF
290 290
291 291 $ hg ph -pv '.^'
292 292 phase changed for 2 changesets
293 293 $ hg log -G
294 294 @ 13:b449568bf7fc (draft) f
295 295 |
296 296 o 12:40db8afa467b (public) c
297 297 |
298 298 o 0:cb9a9f314b8b (public) a
299 299
300 300 $ hg histedit -r '.~2'
301 301 abort: cannot edit public changeset: cb9a9f314b8b
302 302 (see 'hg help phases' for details)
303 303 [255]
304 304
305 305
306 306 Prepare further testing
307 307 -------------------------------------------
308 308
309 309 $ for x in g h i j k ; do
310 310 > echo $x > $x
311 311 > hg add $x
312 312 > hg ci -m $x
313 313 > done
314 314 $ hg phase --force --secret .~2
315 315 $ hg log -G
316 316 @ 18:ee118ab9fa44 (secret) k
317 317 |
318 318 o 17:3a6c53ee7f3d (secret) j
319 319 |
320 320 o 16:b605fb7503f2 (secret) i
321 321 |
322 322 o 15:7395e1ff83bd (draft) h
323 323 |
324 324 o 14:6b70183d2492 (draft) g
325 325 |
326 326 o 13:b449568bf7fc (draft) f
327 327 |
328 328 o 12:40db8afa467b (public) c
329 329 |
330 330 o 0:cb9a9f314b8b (public) a
331 331
332 332 $ cd ..
333 333
334 334 simple phase conservation
335 335 -------------------------------------------
336 336
337 337 Resulting changeset should conserve the phase of the original one whatever the
338 338 phases.new-commit option is.
339 339
340 340 New-commit as draft (default)
341 341
342 342 $ cp -R base simple-draft
343 343 $ cd simple-draft
344 344 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
345 345 > edit b449568bf7fc 11 f
346 346 > pick 6b70183d2492 12 g
347 347 > pick 7395e1ff83bd 13 h
348 348 > pick b605fb7503f2 14 i
349 349 > pick 3a6c53ee7f3d 15 j
350 350 > pick ee118ab9fa44 16 k
351 351 > EOF
352 352 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
353 353 adding f
354 354 Editing (b449568bf7fc), you may commit or record as needed now.
355 355 (hg histedit --continue to resume)
356 356 [1]
357 357 $ echo f >> f
358 358 $ hg histedit --continue
359 359 $ hg log -G
360 360 @ 24:12e89af74238 (secret) k
361 361 |
362 362 o 23:636a8687b22e (secret) j
363 363 |
364 364 o 22:ccaf0a38653f (secret) i
365 365 |
366 366 o 21:11a89d1c2613 (draft) h
367 367 |
368 368 o 20:c1dec7ca82ea (draft) g
369 369 |
370 370 o 19:087281e68428 (draft) f
371 371 |
372 372 o 12:40db8afa467b (public) c
373 373 |
374 374 o 0:cb9a9f314b8b (public) a
375 375
376 376 $ cd ..
377 377
378 378
379 379 New-commit as secret (config)
380 380
381 381 $ cp -R base simple-secret
382 382 $ cd simple-secret
383 383 $ cat >> .hg/hgrc << EOF
384 384 > [phases]
385 385 > new-commit=secret
386 386 > EOF
387 387 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
388 388 > edit b449568bf7fc 11 f
389 389 > pick 6b70183d2492 12 g
390 390 > pick 7395e1ff83bd 13 h
391 391 > pick b605fb7503f2 14 i
392 392 > pick 3a6c53ee7f3d 15 j
393 393 > pick ee118ab9fa44 16 k
394 394 > EOF
395 395 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
396 396 adding f
397 397 Editing (b449568bf7fc), you may commit or record as needed now.
398 398 (hg histedit --continue to resume)
399 399 [1]
400 400 $ echo f >> f
401 401 $ hg histedit --continue
402 402 $ hg log -G
403 403 @ 24:12e89af74238 (secret) k
404 404 |
405 405 o 23:636a8687b22e (secret) j
406 406 |
407 407 o 22:ccaf0a38653f (secret) i
408 408 |
409 409 o 21:11a89d1c2613 (draft) h
410 410 |
411 411 o 20:c1dec7ca82ea (draft) g
412 412 |
413 413 o 19:087281e68428 (draft) f
414 414 |
415 415 o 12:40db8afa467b (public) c
416 416 |
417 417 o 0:cb9a9f314b8b (public) a
418 418
419 419 $ cd ..
420 420
421 421
422 422 Changeset reordering
423 423 -------------------------------------------
424 424
425 425 If a secret changeset is put before a draft one, all descendants should be secret.
426 426 It seems more important to preserve the secret phase.
427 427
428 428 $ cp -R base reorder
429 429 $ cd reorder
430 430 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
431 431 > pick b449568bf7fc 11 f
432 432 > pick 3a6c53ee7f3d 15 j
433 433 > pick 6b70183d2492 12 g
434 434 > pick b605fb7503f2 14 i
435 435 > pick 7395e1ff83bd 13 h
436 436 > pick ee118ab9fa44 16 k
437 437 > EOF
438 438 $ hg log -G
439 439 @ 23:558246857888 (secret) k
440 440 |
441 441 o 22:28bd44768535 (secret) h
442 442 |
443 443 o 21:d5395202aeb9 (secret) i
444 444 |
445 445 o 20:21edda8e341b (secret) g
446 446 |
447 447 o 19:5ab64f3a4832 (secret) j
448 448 |
449 449 o 13:b449568bf7fc (draft) f
450 450 |
451 451 o 12:40db8afa467b (public) c
452 452 |
453 453 o 0:cb9a9f314b8b (public) a
454 454
455 455 $ cd ..
456 456
457 457 Changeset folding
458 458 -------------------------------------------
459 459
460 460 Folding a secret changeset with a draft one turns the result secret (again,
461 461 better safe than sorry). Folding between same-phase changesets still works.
462 462
463 463 Note that there are a few reorderings in this series for more extensive testing
464 464
465 465 $ cp -R base folding
466 466 $ cd folding
467 467 $ cat >> .hg/hgrc << EOF
468 468 > [phases]
469 469 > new-commit=secret
470 470 > EOF
471 471 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
472 472 > pick 7395e1ff83bd 13 h
473 473 > fold b449568bf7fc 11 f
474 474 > pick 6b70183d2492 12 g
475 475 > fold 3a6c53ee7f3d 15 j
476 476 > pick b605fb7503f2 14 i
477 477 > fold ee118ab9fa44 16 k
478 478 > EOF
479 479 $ hg log -G
480 480 @ 27:f9daec13fb98 (secret) i
481 481 |
482 482 o 24:49807617f46a (secret) g
483 483 |
484 484 o 21:050280826e04 (draft) h
485 485 |
486 486 o 12:40db8afa467b (public) c
487 487 |
488 488 o 0:cb9a9f314b8b (public) a
489 489
490 490 $ hg co 49807617f46a
491 491 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
492 492 $ echo wat >> wat
493 493 $ hg add wat
494 494 $ hg ci -m 'add wat'
495 495 created new head
496 496 $ hg merge f9daec13fb98
497 497 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
498 498 (branch merge, don't forget to commit)
499 499 $ hg ci -m 'merge'
500 500 $ echo not wat > wat
501 501 $ hg ci -m 'modify wat'
502 502 $ hg histedit 050280826e04
503 503 abort: cannot edit history that contains merges
504 504 [255]
505 505 $ cd ..
506 506
507 507 Check abort behavior
508 508 -------------------------------------------
509 509
510 510 We check that abort properly cleans the repository so the same histedit can be
511 511 attempted later.
512 512
513 513 $ cp -R base abort
514 514 $ cd abort
515 515 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
516 516 > pick b449568bf7fc 13 f
517 517 > pick 7395e1ff83bd 15 h
518 518 > pick 6b70183d2492 14 g
519 519 > pick b605fb7503f2 16 i
520 520 > roll 3a6c53ee7f3d 17 j
521 521 > edit ee118ab9fa44 18 k
522 522 > EOF
523 523 Editing (ee118ab9fa44), you may commit or record as needed now.
524 524 (hg histedit --continue to resume)
525 525 [1]
526 526
527 527 $ hg histedit --abort
528 528 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
529 529 saved backup bundle to $TESTTMP/abort/.hg/strip-backup/4dc06258baa6-dff4ef05-backup.hg (glob)
530 530
531 531 $ hg log -G
532 532 @ 18:ee118ab9fa44 (secret) k
533 533 |
534 534 o 17:3a6c53ee7f3d (secret) j
535 535 |
536 536 o 16:b605fb7503f2 (secret) i
537 537 |
538 538 o 15:7395e1ff83bd (draft) h
539 539 |
540 540 o 14:6b70183d2492 (draft) g
541 541 |
542 542 o 13:b449568bf7fc (draft) f
543 543 |
544 544 o 12:40db8afa467b (public) c
545 545 |
546 546 o 0:cb9a9f314b8b (public) a
547 547
548 548 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
549 549 > pick b449568bf7fc 13 f
550 550 > pick 7395e1ff83bd 15 h
551 551 > pick 6b70183d2492 14 g
552 552 > pick b605fb7503f2 16 i
553 553 > pick 3a6c53ee7f3d 17 j
554 554 > edit ee118ab9fa44 18 k
555 555 > EOF
556 556 Editing (ee118ab9fa44), you may commit or record as needed now.
557 557 (hg histedit --continue to resume)
558 558 [1]
559 559 $ hg histedit --continue
560 560 $ hg log -G
561 561 @ 23:175d6b286a22 (secret) k
562 562 |
563 563 o 22:44ca09d59ae4 (secret) j
564 564 |
565 565 o 21:31747692a644 (secret) i
566 566 |
567 567 o 20:9985cd4f21fa (draft) g
568 568 |
569 569 o 19:4dc06258baa6 (draft) h
570 570 |
571 571 o 13:b449568bf7fc (draft) f
572 572 |
573 573 o 12:40db8afa467b (public) c
574 574 |
575 575 o 0:cb9a9f314b8b (public) a
576 576
@@ -1,1284 +1,1284 b''
1 1 $ cat >> $HGRCPATH << EOF
2 2 > [phases]
3 3 > # public changeset are not obsolete
4 4 > publish=false
5 5 > [ui]
6 6 > logtemplate="{rev}:{node|short} ({phase}{if(obsolete, ' *{obsolete}*')}{if(troubles, ' {troubles}')}) [{tags} {bookmarks}] {desc|firstline}\n"
7 7 > EOF
8 8 $ mkcommit() {
9 9 > echo "$1" > "$1"
10 10 > hg add "$1"
11 11 > hg ci -m "add $1"
12 12 > }
13 13 $ getid() {
14 14 > hg log -T "{node}\n" --hidden -r "desc('$1')"
15 15 > }
16 16
17 17 $ cat > debugkeys.py <<EOF
18 18 > def reposetup(ui, repo):
19 19 > class debugkeysrepo(repo.__class__):
20 20 > def listkeys(self, namespace):
21 21 > ui.write('listkeys %s\n' % (namespace,))
22 22 > return super(debugkeysrepo, self).listkeys(namespace)
23 23 >
24 24 > if repo.local():
25 25 > repo.__class__ = debugkeysrepo
26 26 > EOF
27 27
28 28 $ hg init tmpa
29 29 $ cd tmpa
30 30 $ mkcommit kill_me
31 31
32 32 Checking that the feature is properly disabled
33 33
34 34 $ hg debugobsolete -d '0 0' `getid kill_me` -u babar
35 35 abort: creating obsolete markers is not enabled on this repo
36 36 [255]
37 37
38 38 Enabling it
39 39
40 40 $ cat >> $HGRCPATH << EOF
41 41 > [experimental]
42 42 > evolution=createmarkers,exchange
43 43 > EOF
44 44
45 45 Killing a single changeset without replacement
46 46
47 47 $ hg debugobsolete 0
48 48 abort: changeset references must be full hexadecimal node identifiers
49 49 [255]
50 50 $ hg debugobsolete '00'
51 51 abort: changeset references must be full hexadecimal node identifiers
52 52 [255]
53 53 $ hg debugobsolete -d '0 0' `getid kill_me` -u babar
54 54 $ hg debugobsolete
55 55 97b7c2d76b1845ed3eb988cd612611e72406cef0 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'babar'}
56 56
57 57 (test that mercurial is not confused)
58 58
59 59 $ hg up null --quiet # having 0 as parent prevents it to be hidden
60 60 $ hg tip
61 61 -1:000000000000 (public) [tip ]
62 62 $ hg up --hidden tip --quiet
63 63
64 64 Killing a single changeset with itself should fail
65 65 (simple local safeguard)
66 66
67 67 $ hg debugobsolete `getid kill_me` `getid kill_me`
68 68 abort: bad obsmarker input: in-marker cycle with 97b7c2d76b1845ed3eb988cd612611e72406cef0
69 69 [255]
70 70
71 71 $ cd ..
72 72
73 73 Killing a single changeset with replacement
74 74 (and testing the format option)
75 75
76 76 $ hg init tmpb
77 77 $ cd tmpb
78 78 $ mkcommit a
79 79 $ mkcommit b
80 80 $ mkcommit original_c
81 81 $ hg up "desc('b')"
82 82 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
83 83 $ mkcommit new_c
84 84 created new head
85 85 $ hg log -r 'hidden()' --template '{rev}:{node|short} {desc}\n' --hidden
86 86 $ hg debugobsolete --config format.obsstore-version=0 --flag 12 `getid original_c` `getid new_c` -d '121 120'
87 87 $ hg log -r 'hidden()' --template '{rev}:{node|short} {desc}\n' --hidden
88 88 2:245bde4270cd add original_c
89 89 $ hg debugrevlog -cd
90 90 # rev p1rev p2rev start end deltastart base p1 p2 rawsize totalsize compression heads chainlen
91 91 0 -1 -1 0 59 0 0 0 0 58 58 0 1 0
92 92 1 0 -1 59 118 59 59 0 0 58 116 0 1 0
93 93 2 1 -1 118 193 118 118 59 0 76 192 0 1 0
94 94 3 1 -1 193 260 193 193 59 0 66 258 0 2 0
95 95 $ hg debugobsolete
96 96 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
97 97
98 98 (check for version number of the obsstore)
99 99
100 100 $ dd bs=1 count=1 if=.hg/store/obsstore 2>/dev/null
101 101 \x00 (no-eol) (esc)
102 102
103 103 do it again (it reads the obsstore before adding a new changeset)
104 104
105 105 $ hg up '.^'
106 106 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
107 107 $ mkcommit new_2_c
108 108 created new head
109 109 $ hg debugobsolete -d '1337 0' `getid new_c` `getid new_2_c`
110 110 $ hg debugobsolete
111 111 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
112 112 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
113 113
114 114 Register two markers with a missing node
115 115
116 116 $ hg up '.^'
117 117 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
118 118 $ mkcommit new_3_c
119 119 created new head
120 120 $ hg debugobsolete -d '1338 0' `getid new_2_c` 1337133713371337133713371337133713371337
121 121 $ hg debugobsolete -d '1339 0' 1337133713371337133713371337133713371337 `getid new_3_c`
122 122 $ hg debugobsolete
123 123 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
124 124 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
125 125 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
126 126 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
127 127
128 128 Test the --index option of debugobsolete command
129 129 $ hg debugobsolete --index
130 130 0 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
131 131 1 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
132 132 2 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
133 133 3 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
134 134
135 135 Refuse pathological nullid successors
136 136 $ hg debugobsolete -d '9001 0' 1337133713371337133713371337133713371337 0000000000000000000000000000000000000000
137 137 transaction abort!
138 138 rollback completed
139 139 abort: bad obsolescence marker detected: invalid successors nullid
140 140 [255]
141 141
142 142 Check that graphlog detects that a changeset is obsolete:
143 143
144 144 $ hg log -G
145 145 @ 5:5601fb93a350 (draft) [tip ] add new_3_c
146 146 |
147 147 o 1:7c3bad9141dc (draft) [ ] add b
148 148 |
149 149 o 0:1f0dee641bb7 (draft) [ ] add a
150 150
151 151
152 152 check that heads does not report them
153 153
154 154 $ hg heads
155 155 5:5601fb93a350 (draft) [tip ] add new_3_c
156 156 $ hg heads --hidden
157 157 5:5601fb93a350 (draft) [tip ] add new_3_c
158 158 4:ca819180edb9 (draft *obsolete*) [ ] add new_2_c
159 159 3:cdbce2fbb163 (draft *obsolete*) [ ] add new_c
160 160 2:245bde4270cd (draft *obsolete*) [ ] add original_c
161 161
162 162
163 163 check that summary does not report them
164 164
165 165 $ hg init ../sink
166 166 $ echo '[paths]' >> .hg/hgrc
167 167 $ echo 'default=../sink' >> .hg/hgrc
168 168 $ hg summary --remote
169 169 parent: 5:5601fb93a350 tip
170 170 add new_3_c
171 171 branch: default
172 172 commit: (clean)
173 173 update: (current)
174 174 phases: 3 draft
175 175 remote: 3 outgoing
176 176
177 177 $ hg summary --remote --hidden
178 178 parent: 5:5601fb93a350 tip
179 179 add new_3_c
180 180 branch: default
181 181 commit: (clean)
182 182 update: 3 new changesets, 4 branch heads (merge)
183 183 phases: 6 draft
184 184 remote: 3 outgoing
185 185
186 186 check that various commands work well with filtering
187 187
188 188 $ hg tip
189 189 5:5601fb93a350 (draft) [tip ] add new_3_c
190 190 $ hg log -r 6
191 191 abort: unknown revision '6'!
192 192 [255]
193 193 $ hg log -r 4
194 194 abort: hidden revision '4'!
195 195 (use --hidden to access hidden revisions)
196 196 [255]
197 197 $ hg debugrevspec 'rev(6)'
198 198 $ hg debugrevspec 'rev(4)'
199 199 $ hg debugrevspec 'null'
200 200 -1
201 201
202 202 Check that public changesets are not accounted as obsolete:
203 203
204 204 $ hg --hidden phase --public 2
205 205 $ hg log -G
206 206 @ 5:5601fb93a350 (draft bumped) [tip ] add new_3_c
207 207 |
208 208 | o 2:245bde4270cd (public) [ ] add original_c
209 209 |/
210 210 o 1:7c3bad9141dc (public) [ ] add b
211 211 |
212 212 o 0:1f0dee641bb7 (public) [ ] add a
213 213
214 214
215 215 And that bumped changesets are detected
216 216 --------------------------------------
217 217
218 218 If we didn't filter obsolete changesets out, 3 and 4 would show up too. Also
219 219 note that the bumped changeset (5:5601fb93a350) is not a direct successor of
220 220 the public changeset.
221 221
222 222 $ hg log --hidden -r 'bumped()'
223 223 5:5601fb93a350 (draft bumped) [tip ] add new_3_c
224 224
225 225 And that we can't push bumped changesets
226 226
227 227 $ hg push ../tmpa -r 0 --force #(make repo related)
228 228 pushing to ../tmpa
229 229 searching for changes
230 230 warning: repository is unrelated
231 231 adding changesets
232 232 adding manifests
233 233 adding file changes
234 234 added 1 changesets with 1 changes to 1 files (+1 heads)
235 235 $ hg push ../tmpa
236 236 pushing to ../tmpa
237 237 searching for changes
238 238 abort: push includes bumped changeset: 5601fb93a350!
239 239 [255]
240 240
241 241 Fixing "bumped" situation
242 242 We need to create a clone of 5 and add a special marker with a flag
243 243
244 244 $ hg summary
245 245 parent: 5:5601fb93a350 tip (bumped)
246 246 add new_3_c
247 247 branch: default
248 248 commit: (clean)
249 249 update: 1 new changesets, 2 branch heads (merge)
250 250 phases: 1 draft
251 251 bumped: 1 changesets
252 252 $ hg up '5^'
253 253 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
254 254 $ hg revert -ar 5
255 255 adding new_3_c
256 256 $ hg ci -m 'add n3w_3_c'
257 257 created new head
258 258 $ hg debugobsolete -d '1338 0' --flags 1 `getid new_3_c` `getid n3w_3_c`
259 259 $ hg log -r 'bumped()'
260 260 $ hg log -G
261 261 @ 6:6f9641995072 (draft) [tip ] add n3w_3_c
262 262 |
263 263 | o 2:245bde4270cd (public) [ ] add original_c
264 264 |/
265 265 o 1:7c3bad9141dc (public) [ ] add b
266 266 |
267 267 o 0:1f0dee641bb7 (public) [ ] add a
268 268
269 269
270 270 $ cd ..
271 271
272 272 Revision 0 is hidden
273 273 --------------------
274 274
275 275 $ hg init rev0hidden
276 276 $ cd rev0hidden
277 277
278 278 $ mkcommit kill0
279 279 $ hg up -q null
280 280 $ hg debugobsolete `getid kill0`
281 281 $ mkcommit a
282 282 $ mkcommit b
283 283
284 284 Should pick the first visible revision as "repo" node
285 285
286 286 $ hg archive ../archive-null
287 287 $ cat ../archive-null/.hg_archival.txt
288 288 repo: 1f0dee641bb7258c56bd60e93edfa2405381c41e
289 289 node: 7c3bad9141dcb46ff89abf5f61856facd56e476c
290 290 branch: default
291 291 latesttag: null
292 292 latesttagdistance: 2
293 293 changessincelatesttag: 2
294 294
295 295
296 296 $ cd ..
297 297
298 298 Exchange Test
299 299 ============================
300 300
301 301 Destination repo does not have any data
302 302 ---------------------------------------
303 303
304 304 Simple incoming test
305 305
306 306 $ hg init tmpc
307 307 $ cd tmpc
308 308 $ hg incoming ../tmpb
309 309 comparing with ../tmpb
310 310 0:1f0dee641bb7 (public) [ ] add a
311 311 1:7c3bad9141dc (public) [ ] add b
312 312 2:245bde4270cd (public) [ ] add original_c
313 313 6:6f9641995072 (draft) [tip ] add n3w_3_c
314 314
315 315 Try to pull markers
316 316 (extinct changesets are excluded but markers are pushed)
317 317
318 318 $ hg pull ../tmpb
319 319 pulling from ../tmpb
320 320 requesting all changes
321 321 adding changesets
322 322 adding manifests
323 323 adding file changes
324 324 added 4 changesets with 4 changes to 4 files (+1 heads)
325 325 5 new obsolescence markers
326 326 (run 'hg heads' to see heads, 'hg merge' to merge)
327 327 $ hg debugobsolete
328 328 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
329 329 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
330 330 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
331 331 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
332 332 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
333 333
334 334 Rollback//Transaction support
335 335
336 336 $ hg debugobsolete -d '1340 0' aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
337 337 $ hg debugobsolete
338 338 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
339 339 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
340 340 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
341 341 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
342 342 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
343 343 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb 0 (Thu Jan 01 00:22:20 1970 +0000) {'user': 'test'}
344 344 $ hg rollback -n
345 345 repository tip rolled back to revision 3 (undo debugobsolete)
346 346 $ hg rollback
347 347 repository tip rolled back to revision 3 (undo debugobsolete)
348 348 $ hg debugobsolete
349 349 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
350 350 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
351 351 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
352 352 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
353 353 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
354 354
355 355 $ cd ..
356 356
357 357 Try to push markers
358 358
359 359 $ hg init tmpd
360 360 $ hg -R tmpb push tmpd
361 361 pushing to tmpd
362 362 searching for changes
363 363 adding changesets
364 364 adding manifests
365 365 adding file changes
366 366 added 4 changesets with 4 changes to 4 files (+1 heads)
367 367 5 new obsolescence markers
368 368 $ hg -R tmpd debugobsolete | sort
369 369 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
370 370 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
371 371 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
372 372 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
373 373 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
374 374
375 375 Check obsolete keys are exchanged only if source has an obsolete store
376 376
377 377 $ hg init empty
378 378 $ hg --config extensions.debugkeys=debugkeys.py -R empty push tmpd
379 379 pushing to tmpd
380 380 listkeys phases
381 381 listkeys bookmarks
382 382 no changes found
383 383 listkeys phases
384 384 [1]
385 385
386 386 clone support
387 387 (markers are copied and extinct changesets are included to allow hardlinks)
388 388
389 389 $ hg clone tmpb clone-dest
390 390 updating to branch default
391 391 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
392 392 $ hg -R clone-dest log -G --hidden
393 393 @ 6:6f9641995072 (draft) [tip ] add n3w_3_c
394 394 |
395 395 | x 5:5601fb93a350 (draft *obsolete*) [ ] add new_3_c
396 396 |/
397 397 | x 4:ca819180edb9 (draft *obsolete*) [ ] add new_2_c
398 398 |/
399 399 | x 3:cdbce2fbb163 (draft *obsolete*) [ ] add new_c
400 400 |/
401 401 | o 2:245bde4270cd (public) [ ] add original_c
402 402 |/
403 403 o 1:7c3bad9141dc (public) [ ] add b
404 404 |
405 405 o 0:1f0dee641bb7 (public) [ ] add a
406 406
407 407 $ hg -R clone-dest debugobsolete
408 408 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
409 409 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
410 410 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
411 411 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
412 412 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
413 413
414 414
415 415 Destination repo has existing data
416 416 ---------------------------------------
417 417
418 418 On pull
419 419
420 420 $ hg init tmpe
421 421 $ cd tmpe
422 422 $ hg debugobsolete -d '1339 0' 1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00
423 423 $ hg pull ../tmpb
424 424 pulling from ../tmpb
425 425 requesting all changes
426 426 adding changesets
427 427 adding manifests
428 428 adding file changes
429 429 added 4 changesets with 4 changes to 4 files (+1 heads)
430 430 5 new obsolescence markers
431 431 (run 'hg heads' to see heads, 'hg merge' to merge)
432 432 $ hg debugobsolete
433 433 1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
434 434 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
435 435 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
436 436 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
437 437 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
438 438 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
439 439
440 440
441 441 On push
442 442
443 443 $ hg push ../tmpc
444 444 pushing to ../tmpc
445 445 searching for changes
446 446 no changes found
447 447 1 new obsolescence markers
448 448 [1]
449 449 $ hg -R ../tmpc debugobsolete
450 450 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
451 451 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
452 452 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
453 453 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
454 454 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
455 455 1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
456 456
457 457 detect outgoing obsolete and unstable
458 458 ---------------------------------------
459 459
460 460
461 461 $ hg log -G
462 462 o 3:6f9641995072 (draft) [tip ] add n3w_3_c
463 463 |
464 464 | o 2:245bde4270cd (public) [ ] add original_c
465 465 |/
466 466 o 1:7c3bad9141dc (public) [ ] add b
467 467 |
468 468 o 0:1f0dee641bb7 (public) [ ] add a
469 469
470 470 $ hg up 'desc("n3w_3_c")'
471 471 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
472 472 $ mkcommit original_d
473 473 $ mkcommit original_e
474 474 $ hg debugobsolete --record-parents `getid original_d` -d '0 0'
475 475 $ hg debugobsolete | grep `getid original_d`
476 476 94b33453f93bdb8d457ef9b770851a618bf413e1 0 {6f96419950729f3671185b847352890f074f7557} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
477 477 $ hg log -r 'obsolete()'
478 478 4:94b33453f93b (draft *obsolete*) [ ] add original_d
479 479 $ hg summary
480 480 parent: 5:cda648ca50f5 tip (unstable)
481 481 add original_e
482 482 branch: default
483 483 commit: (clean)
484 484 update: 1 new changesets, 2 branch heads (merge)
485 485 phases: 3 draft
486 486 unstable: 1 changesets
487 487 $ hg log -G -r '::unstable()'
488 488 @ 5:cda648ca50f5 (draft unstable) [tip ] add original_e
489 489 |
490 490 x 4:94b33453f93b (draft *obsolete*) [ ] add original_d
491 491 |
492 492 o 3:6f9641995072 (draft) [ ] add n3w_3_c
493 493 |
494 494 o 1:7c3bad9141dc (public) [ ] add b
495 495 |
496 496 o 0:1f0dee641bb7 (public) [ ] add a
497 497
498 498
499 499 refuse to push obsolete changeset
500 500
501 501 $ hg push ../tmpc/ -r 'desc("original_d")'
502 502 pushing to ../tmpc/
503 503 searching for changes
504 504 abort: push includes obsolete changeset: 94b33453f93b!
505 505 [255]
506 506
507 507 refuse to push unstable changeset
508 508
509 509 $ hg push ../tmpc/
510 510 pushing to ../tmpc/
511 511 searching for changes
512 512 abort: push includes unstable changeset: cda648ca50f5!
513 513 [255]
514 514
515 515 Test that extinct changesets are properly detected
516 516
517 517 $ hg log -r 'extinct()'
518 518
519 519 Don't try to push extinct changesets
520 520
521 521 $ hg init ../tmpf
522 522 $ hg out ../tmpf
523 523 comparing with ../tmpf
524 524 searching for changes
525 525 0:1f0dee641bb7 (public) [ ] add a
526 526 1:7c3bad9141dc (public) [ ] add b
527 527 2:245bde4270cd (public) [ ] add original_c
528 528 3:6f9641995072 (draft) [ ] add n3w_3_c
529 529 4:94b33453f93b (draft *obsolete*) [ ] add original_d
530 530 5:cda648ca50f5 (draft unstable) [tip ] add original_e
531 531 $ hg push ../tmpf -f # -f because we push unstable too
532 532 pushing to ../tmpf
533 533 searching for changes
534 534 adding changesets
535 535 adding manifests
536 536 adding file changes
537 537 added 6 changesets with 6 changes to 6 files (+1 heads)
538 538 7 new obsolescence markers
539 539
540 540 no warning displayed
541 541
542 542 $ hg push ../tmpf
543 543 pushing to ../tmpf
544 544 searching for changes
545 545 no changes found
546 546 [1]
547 547
548 548 Do not warn about new head when the new head is a successor of a remote one
549 549
550 550 $ hg log -G
551 551 @ 5:cda648ca50f5 (draft unstable) [tip ] add original_e
552 552 |
553 553 x 4:94b33453f93b (draft *obsolete*) [ ] add original_d
554 554 |
555 555 o 3:6f9641995072 (draft) [ ] add n3w_3_c
556 556 |
557 557 | o 2:245bde4270cd (public) [ ] add original_c
558 558 |/
559 559 o 1:7c3bad9141dc (public) [ ] add b
560 560 |
561 561 o 0:1f0dee641bb7 (public) [ ] add a
562 562
563 563 $ hg up -q 'desc(n3w_3_c)'
564 564 $ mkcommit obsolete_e
565 565 created new head
566 566 $ hg debugobsolete `getid 'original_e'` `getid 'obsolete_e'`
567 567 $ hg outgoing ../tmpf # incidental hg outgoing testing
568 568 comparing with ../tmpf
569 569 searching for changes
570 570 6:3de5eca88c00 (draft) [tip ] add obsolete_e
571 571 $ hg push ../tmpf
572 572 pushing to ../tmpf
573 573 searching for changes
574 574 adding changesets
575 575 adding manifests
576 576 adding file changes
577 577 added 1 changesets with 1 changes to 1 files (+1 heads)
578 578 1 new obsolescence markers
579 579
580 580 test relevance computation
581 581 ---------------------------------------
582 582
583 583 Checking simple case of "marker relevance".
584 584
585 585
586 586 Reminder of the repo situation
587 587
588 588 $ hg log --hidden --graph
589 589 @ 6:3de5eca88c00 (draft) [tip ] add obsolete_e
590 590 |
591 591 | x 5:cda648ca50f5 (draft *obsolete*) [ ] add original_e
592 592 | |
593 593 | x 4:94b33453f93b (draft *obsolete*) [ ] add original_d
594 594 |/
595 595 o 3:6f9641995072 (draft) [ ] add n3w_3_c
596 596 |
597 597 | o 2:245bde4270cd (public) [ ] add original_c
598 598 |/
599 599 o 1:7c3bad9141dc (public) [ ] add b
600 600 |
601 601 o 0:1f0dee641bb7 (public) [ ] add a
602 602
603 603
604 604 List of all markers
605 605
606 606 $ hg debugobsolete
607 607 1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
608 608 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
609 609 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
610 610 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
611 611 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
612 612 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
613 613 94b33453f93bdb8d457ef9b770851a618bf413e1 0 {6f96419950729f3671185b847352890f074f7557} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
614 614 cda648ca50f50482b7055c0b0c4c117bba6733d9 3de5eca88c00aa039da7399a220f4a5221faa585 0 (*) {'user': 'test'} (glob)
615 615
616 616 List of changesets with no chain
617 617
618 618 $ hg debugobsolete --hidden --rev ::2
619 619
620 620 List of changesets that are included in a marker chain
621 621
622 622 $ hg debugobsolete --hidden --rev 6
623 623 cda648ca50f50482b7055c0b0c4c117bba6733d9 3de5eca88c00aa039da7399a220f4a5221faa585 0 (*) {'user': 'test'} (glob)
624 624
625 625 List of changesets with a longer chain (including a pruned child)
626 626
627 627 $ hg debugobsolete --hidden --rev 3
628 628 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
629 629 1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
630 630 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
631 631 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
632 632 94b33453f93bdb8d457ef9b770851a618bf413e1 0 {6f96419950729f3671185b847352890f074f7557} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
633 633 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
634 634 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
635 635
636 636 List of both
637 637
638 638 $ hg debugobsolete --hidden --rev 3::6
639 639 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
640 640 1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
641 641 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
642 642 5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
643 643 94b33453f93bdb8d457ef9b770851a618bf413e1 0 {6f96419950729f3671185b847352890f074f7557} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
644 644 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
645 645 cda648ca50f50482b7055c0b0c4c117bba6733d9 3de5eca88c00aa039da7399a220f4a5221faa585 0 (*) {'user': 'test'} (glob)
646 646 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
647 647
648 648 List of all markers in JSON
649 649
650 650 $ hg debugobsolete -Tjson
651 651 [
652 652 {
653 653 "date": [1339.0, 0],
654 654 "flag": 0,
655 655 "metadata": {"user": "test"},
656 656 "precnode": "1339133913391339133913391339133913391339",
657 657 "succnodes": ["ca819180edb99ed25ceafb3e9584ac287e240b00"]
658 658 },
659 659 {
660 660 "date": [1339.0, 0],
661 661 "flag": 0,
662 662 "metadata": {"user": "test"},
663 663 "precnode": "1337133713371337133713371337133713371337",
664 664 "succnodes": ["5601fb93a350734d935195fee37f4054c529ff39"]
665 665 },
666 666 {
667 667 "date": [121.0, 120],
668 668 "flag": 12,
669 669 "metadata": {"user": "test"},
670 670 "precnode": "245bde4270cd1072a27757984f9cda8ba26f08ca",
671 671 "succnodes": ["cdbce2fbb16313928851e97e0d85413f3f7eb77f"]
672 672 },
673 673 {
674 674 "date": [1338.0, 0],
675 675 "flag": 1,
676 676 "metadata": {"user": "test"},
677 677 "precnode": "5601fb93a350734d935195fee37f4054c529ff39",
678 678 "succnodes": ["6f96419950729f3671185b847352890f074f7557"]
679 679 },
680 680 {
681 681 "date": [1338.0, 0],
682 682 "flag": 0,
683 683 "metadata": {"user": "test"},
684 684 "precnode": "ca819180edb99ed25ceafb3e9584ac287e240b00",
685 685 "succnodes": ["1337133713371337133713371337133713371337"]
686 686 },
687 687 {
688 688 "date": [1337.0, 0],
689 689 "flag": 0,
690 690 "metadata": {"user": "test"},
691 691 "precnode": "cdbce2fbb16313928851e97e0d85413f3f7eb77f",
692 692 "succnodes": ["ca819180edb99ed25ceafb3e9584ac287e240b00"]
693 693 },
694 694 {
695 695 "date": [0.0, 0],
696 696 "flag": 0,
697 697 "metadata": {"user": "test"},
698 698 "parentnodes": ["6f96419950729f3671185b847352890f074f7557"],
699 699 "precnode": "94b33453f93bdb8d457ef9b770851a618bf413e1",
700 700 "succnodes": []
701 701 },
702 702 {
703 703 "date": *, (glob)
704 704 "flag": 0,
705 705 "metadata": {"user": "test"},
706 706 "precnode": "cda648ca50f50482b7055c0b0c4c117bba6733d9",
707 707 "succnodes": ["3de5eca88c00aa039da7399a220f4a5221faa585"]
708 708 }
709 709 ]
710 710
711 711 Template keywords
712 712
713 713 $ hg debugobsolete -r6 -T '{succnodes % "{node|short}"} {date|shortdate}\n'
714 714 3de5eca88c00 ????-??-?? (glob)
715 715 $ hg debugobsolete -r6 -T '{join(metadata % "{key}={value}", " ")}\n'
716 716 user=test
717 717 $ hg debugobsolete -r6 -T '{metadata}\n'
718 718 'user': 'test'
719 719 $ hg debugobsolete -r6 -T '{flag} {get(metadata, "user")}\n'
720 720 0 test
721 721
722 722 Test the debug output for exchange
723 723 ----------------------------------
724 724
725 725 $ hg pull ../tmpb --config 'experimental.obsmarkers-exchange-debug=True' # bundle2
726 726 pulling from ../tmpb
727 727 searching for changes
728 728 no changes found
729 729 obsmarker-exchange: 346 bytes received
730 730
731 731 check hgweb does not explode
732 732 ====================================
733 733
734 734 $ hg unbundle $TESTDIR/bundles/hgweb+obs.hg
735 735 adding changesets
736 736 adding manifests
737 737 adding file changes
738 738 added 62 changesets with 63 changes to 9 files (+60 heads)
739 739 (run 'hg heads .' to see heads, 'hg merge' to merge)
740 740 $ for node in `hg log -r 'desc(babar_)' --template '{node}\n'`;
741 741 > do
742 742 > hg debugobsolete $node
743 743 > done
744 744 $ hg up tip
745 745 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
746 746
747 747 #if serve
748 748
749 749 $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
750 750 $ cat hg.pid >> $DAEMON_PIDS
751 751
752 752 check changelog view
753 753
754 754 $ get-with-headers.py --headeronly localhost:$HGPORT 'shortlog/'
755 755 200 Script output follows
756 756
757 757 check graph view
758 758
759 759 $ get-with-headers.py --headeronly localhost:$HGPORT 'graph'
760 760 200 Script output follows
761 761
762 762 check filelog view
763 763
764 764 $ get-with-headers.py --headeronly localhost:$HGPORT 'log/'`hg log -r . -T "{node}"`/'babar'
765 765 200 Script output follows
766 766
767 767 $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/68'
768 768 200 Script output follows
769 769 $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/67'
770 770 404 Not Found
771 771 [1]
772 772
773 773 check the web.view config option:
774 774
775 775 $ killdaemons.py hg.pid
776 776 $ cat >> .hg/hgrc << EOF
777 777 > [web]
778 778 > view=all
779 779 > EOF
780 780 $ wait
781 781 $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
782 782 $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/67'
783 783 200 Script output follows
784 784 $ killdaemons.py hg.pid
785 785
786 786 Checking the _enable=False warning if obsolete markers exist
787 787
788 788 $ echo '[experimental]' >> $HGRCPATH
789 789 $ echo "evolution=" >> $HGRCPATH
790 790 $ hg log -r tip
791 791 obsolete feature not enabled but 68 markers found!
792 792 68:c15e9edfca13 (draft) [tip ] add celestine
793 793
794 794 reenable for later test
795 795
796 796 $ echo '[experimental]' >> $HGRCPATH
797 797 $ echo "evolution=createmarkers,exchange" >> $HGRCPATH
798 798
799 799 $ rm hg.pid access.log errors.log
800 800 #endif
801 801
802 802 Several troubles on the same changeset (create an unstable and bumped changeset)
803 803
804 804 $ hg debugobsolete `getid obsolete_e`
805 805 $ hg debugobsolete `getid original_c` `getid babar`
806 806 $ hg log --config ui.logtemplate= -r 'bumped() and unstable()'
807 807 changeset: 7:50c51b361e60
808 808 user: test
809 809 date: Thu Jan 01 00:00:00 1970 +0000
810 810 trouble: unstable, bumped
811 811 summary: add babar
812 812
813 813
814 814 test the "obsolete" templatekw
815 815
816 816 $ hg log -r 'obsolete()'
817 817 6:3de5eca88c00 (draft *obsolete*) [ ] add obsolete_e
818 818
819 819 test the "troubles" templatekw
820 820
821 821 $ hg log -r 'bumped() and unstable()'
822 822 7:50c51b361e60 (draft unstable bumped) [ ] add babar
823 823
824 824 test the default cmdline template
825 825
826 826 $ hg log -T default -r 'bumped()'
827 827 changeset: 7:50c51b361e60
828 828 user: test
829 829 date: Thu Jan 01 00:00:00 1970 +0000
830 830 trouble: unstable, bumped
831 831 summary: add babar
832 832
833 833 $ hg log -T default -r 'obsolete()'
834 834 changeset: 6:3de5eca88c00
835 835 parent: 3:6f9641995072
836 836 user: test
837 837 date: Thu Jan 01 00:00:00 1970 +0000
838 838 summary: add obsolete_e
839 839
840 840
841 841 test summary output
842 842
843 843 $ hg up -r 'bumped() and unstable()'
844 844 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
845 845 $ hg summary
846 846 parent: 7:50c51b361e60 (unstable, bumped)
847 847 add babar
848 848 branch: default
849 849 commit: (clean)
850 850 update: 2 new changesets (update)
851 851 phases: 4 draft
852 852 unstable: 2 changesets
853 853 bumped: 1 changesets
854 854 $ hg up -r 'obsolete()'
855 855 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
856 856 $ hg summary
857 857 parent: 6:3de5eca88c00 (obsolete)
858 858 add obsolete_e
859 859 branch: default
860 860 commit: (clean)
861 861 update: 3 new changesets (update)
862 862 phases: 4 draft
863 863 unstable: 2 changesets
864 864 bumped: 1 changesets
865 865
866 866 Test incoming/outgoing with changesets obsoleted remotely, known locally
867 867 ===============================================================================
868 868
869 869 This tests issue 3805
870 870
871 871 $ hg init repo-issue3805
872 872 $ cd repo-issue3805
873 873 $ echo "base" > base
874 874 $ hg ci -Am "base"
875 875 adding base
876 876 $ echo "foo" > foo
877 877 $ hg ci -Am "A"
878 878 adding foo
879 879 $ hg clone . ../other-issue3805
880 880 updating to branch default
881 881 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
882 882 $ echo "bar" >> foo
883 883 $ hg ci --amend
884 884 $ cd ../other-issue3805
885 885 $ hg log -G
886 886 @ 1:29f0c6921ddd (draft) [tip ] A
887 887 |
888 888 o 0:d20a80d4def3 (draft) [ ] base
889 889
890 890 $ hg log -G -R ../repo-issue3805
891 891 @ 3:323a9c3ddd91 (draft) [tip ] A
892 892 |
893 893 o 0:d20a80d4def3 (draft) [ ] base
894 894
895 895 $ hg incoming
896 896 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
897 897 searching for changes
898 898 3:323a9c3ddd91 (draft) [tip ] A
899 899 $ hg incoming --bundle ../issue3805.hg
900 900 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
901 901 searching for changes
902 902 3:323a9c3ddd91 (draft) [tip ] A
903 903 $ hg outgoing
904 904 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
905 905 searching for changes
906 906 1:29f0c6921ddd (draft) [tip ] A
907 907
908 908 #if serve
909 909
910 910 $ hg serve -R ../repo-issue3805 -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
911 911 $ cat hg.pid >> $DAEMON_PIDS
912 912
913 913 $ hg incoming http://localhost:$HGPORT
914 914 comparing with http://localhost:$HGPORT/
915 915 searching for changes
916 916 2:323a9c3ddd91 (draft) [tip ] A
917 917 $ hg outgoing http://localhost:$HGPORT
918 918 comparing with http://localhost:$HGPORT/
919 919 searching for changes
920 920 1:29f0c6921ddd (draft) [tip ] A
921 921
922 922 $ killdaemons.py
923 923
924 924 #endif
925 925
926 926 This tests issue 3814
927 927
928 928 (nothing to push but a locally hidden changeset)
929 929
930 930 $ cd ..
931 931 $ hg init repo-issue3814
932 932 $ cd repo-issue3805
933 933 $ hg push -r 323a9c3ddd91 ../repo-issue3814
934 934 pushing to ../repo-issue3814
935 935 searching for changes
936 936 adding changesets
937 937 adding manifests
938 938 adding file changes
939 939 added 2 changesets with 2 changes to 2 files
940 940 2 new obsolescence markers
941 941 $ hg out ../repo-issue3814
942 942 comparing with ../repo-issue3814
943 943 searching for changes
944 944 no changes found
945 945 [1]
946 946
947 947 Test that a local tag blocks a changeset from being hidden
948 948
949 949 $ hg tag -l visible -r 1 --hidden
950 950 $ hg log -G
951 951 @ 3:323a9c3ddd91 (draft) [tip ] A
952 952 |
953 953 | x 1:29f0c6921ddd (draft *obsolete*) [visible ] A
954 954 |/
955 955 o 0:d20a80d4def3 (draft) [ ] base
956 956
957 957 Test that removing a local tag does not cause some commands to fail
958 958
959 959 $ hg tag -l -r tip tiptag
960 960 $ hg tags
961 961 tiptag 3:323a9c3ddd91
962 962 tip 3:323a9c3ddd91
963 963 visible 1:29f0c6921ddd
964 964 $ hg --config extensions.strip= strip -r tip --no-backup
965 965 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
966 966 $ hg tags
967 967 visible 1:29f0c6921ddd
968 968 tip 1:29f0c6921ddd
969 969
970 970 Test bundle overlay onto hidden revision
971 971
972 972 $ cd ..
973 973 $ hg init repo-bundleoverlay
974 974 $ cd repo-bundleoverlay
975 975 $ echo "A" > foo
976 976 $ hg ci -Am "A"
977 977 adding foo
978 978 $ echo "B" >> foo
979 979 $ hg ci -m "B"
980 980 $ hg up 0
981 981 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
982 982 $ echo "C" >> foo
983 983 $ hg ci -m "C"
984 984 created new head
985 985 $ hg log -G
986 986 @ 2:c186d7714947 (draft) [tip ] C
987 987 |
988 988 | o 1:44526ebb0f98 (draft) [ ] B
989 989 |/
990 990 o 0:4b34ecfb0d56 (draft) [ ] A
991 991
992 992
993 993 $ hg clone -r1 . ../other-bundleoverlay
994 994 adding changesets
995 995 adding manifests
996 996 adding file changes
997 997 added 2 changesets with 2 changes to 1 files
998 998 updating to branch default
999 999 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1000 1000 $ cd ../other-bundleoverlay
1001 1001 $ echo "B+" >> foo
1002 1002 $ hg ci --amend -m "B+"
1003 1003 $ hg log -G --hidden
1004 1004 @ 3:b7d587542d40 (draft) [tip ] B+
1005 1005 |
1006 1006 | x 2:eb95e9297e18 (draft *obsolete*) [ ] temporary amend commit for 44526ebb0f98
1007 1007 | |
1008 1008 | x 1:44526ebb0f98 (draft *obsolete*) [ ] B
1009 1009 |/
1010 1010 o 0:4b34ecfb0d56 (draft) [ ] A
1011 1011
1012 1012
1013 1013 $ hg incoming ../repo-bundleoverlay --bundle ../bundleoverlay.hg
1014 1014 comparing with ../repo-bundleoverlay
1015 1015 searching for changes
1016 1016 1:44526ebb0f98 (draft) [ ] B
1017 1017 2:c186d7714947 (draft) [tip ] C
1018 1018 $ hg log -G -R ../bundleoverlay.hg
1019 1019 o 4:c186d7714947 (draft) [tip ] C
1020 1020 |
1021 1021 | @ 3:b7d587542d40 (draft) [ ] B+
1022 1022 |/
1023 1023 o 0:4b34ecfb0d56 (draft) [ ] A
1024 1024
1025 1025
1026 1026 #if serve
1027 1027
1028 1028 Test issue 4506
1029 1029
1030 1030 $ cd ..
1031 1031 $ hg init repo-issue4506
1032 1032 $ cd repo-issue4506
1033 1033 $ echo "0" > foo
1034 1034 $ hg add foo
1035 1035 $ hg ci -m "content-0"
1036 1036
1037 1037 $ hg up null
1038 1038 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
1039 1039 $ echo "1" > bar
1040 1040 $ hg add bar
1041 1041 $ hg ci -m "content-1"
1042 1042 created new head
1043 1043 $ hg up 0
1044 1044 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
1045 1045 $ hg graft 1
1046 1046 grafting 1:1c9eddb02162 "content-1" (tip)
1047 1047
1048 1048 $ hg debugobsolete `hg log -r1 -T'{node}'` `hg log -r2 -T'{node}'`
1049 1049
1050 1050 $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
1051 1051 $ cat hg.pid >> $DAEMON_PIDS
1052 1052
1053 1053 $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/1'
1054 1054 404 Not Found
1055 1055 [1]
1056 1056 $ get-with-headers.py --headeronly localhost:$HGPORT 'file/tip/bar'
1057 1057 200 Script output follows
1058 1058 $ get-with-headers.py --headeronly localhost:$HGPORT 'annotate/tip/bar'
1059 1059 200 Script output follows
1060 1060
1061 1061 $ killdaemons.py
1062 1062
1063 1063 #endif
1064 1064
1065 1065 Test heads computation on pending index changes with obsolescence markers
1066 1066 $ cd ..
1067 1067 $ cat >$TESTTMP/test_extension.py << EOF
1068 1068 > from mercurial import cmdutil
1069 1069 > from mercurial.i18n import _
1070 1070 >
1071 1071 > cmdtable = {}
1072 1072 > command = cmdutil.command(cmdtable)
1073 1073 > @command("amendtransient",[], _('hg amendtransient [rev]'))
1074 1074 > def amend(ui, repo, *pats, **opts):
1075 1075 > def commitfunc(ui, repo, message, match, opts):
1076 1076 > return repo.commit(message, repo['.'].user(), repo['.'].date(), match)
1077 1077 > opts['message'] = 'Test'
1078 1078 > opts['logfile'] = None
1079 1079 > cmdutil.amend(ui, repo, commitfunc, repo['.'], {}, pats, opts)
1080 1080 > ui.write('%s\n' % repo.changelog.headrevs())
1081 1081 > EOF
1082 1082 $ cat >> $HGRCPATH << EOF
1083 1083 > [extensions]
1084 1084 > testextension=$TESTTMP/test_extension.py
1085 1085 > EOF
1086 1086 $ hg init repo-issue-nativerevs-pending-changes
1087 1087 $ cd repo-issue-nativerevs-pending-changes
1088 1088 $ mkcommit a
1089 1089 $ mkcommit b
1090 1090 $ hg up ".^"
1091 1091 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
1092 1092 $ echo aa > a
1093 1093 $ hg amendtransient
1094 1094 [1, 3]
1095 1095
1096 1096 Check that corrupted hidden cache does not crash
1097 1097
1098 1098 $ printf "" > .hg/cache/hidden
1099 1099 $ hg log -r . -T '{node}' --debug
1100 1100 corrupted hidden cache
1101 1101 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1102 1102 $ hg log -r . -T '{node}' --debug
1103 1103 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1104 1104
1105 1105 #if unix-permissions
1106 1106 Check that wrong hidden cache permission does not crash
1107 1107
1108 1108 $ chmod 000 .hg/cache/hidden
1109 1109 $ hg log -r . -T '{node}' --debug
1110 1110 cannot read hidden cache
1111 1111 error writing hidden changesets cache
1112 1112 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1113 1113 #endif
1114 1114
1115 1115 Test cache consistency for the visible filter
1116 1116 1) We want to make sure that the cached filtered revs are invalidated when
1117 1117 bookmarks change
1118 1118 $ cd ..
1119 1119 $ cat >$TESTTMP/test_extension.py << EOF
1120 1120 > import weakref
1121 1121 > from mercurial import cmdutil, extensions, bookmarks, repoview
1122 1122 > def _bookmarkchanged(orig, bkmstoreinst, *args, **kwargs):
1123 1123 > reporef = weakref.ref(bkmstoreinst._repo)
1124 1124 > def trhook(tr):
1125 1125 > repo = reporef()
1126 1126 > hidden1 = repoview.computehidden(repo)
1127 1127 > hidden = repoview.filterrevs(repo, 'visible')
1128 1128 > if sorted(hidden1) != sorted(hidden):
1129 1129 > print "cache inconsistency"
1130 1130 > bkmstoreinst._repo.currenttransaction().addpostclose('test_extension', trhook)
1131 1131 > orig(bkmstoreinst, *args, **kwargs)
1132 1132 > def extsetup(ui):
1133 1133 > extensions.wrapfunction(bookmarks.bmstore, 'recordchange',
1134 1134 > _bookmarkchanged)
1135 1135 > EOF
1136 1136
1137 1137 $ hg init repo-cache-inconsistency
1138 1138 $ cd repo-issue-nativerevs-pending-changes
1139 1139 $ mkcommit a
1140 1140 a already tracked!
1141 1141 $ mkcommit b
1142 1142 $ hg id
1143 1143 13bedc178fce tip
1144 1144 $ echo "hello" > b
1145 1145 $ hg commit --amend -m "message"
1146 1146 $ hg book bookb -r 13bedc178fce --hidden
1147 1147 $ hg log -r 13bedc178fce
1148 1148 5:13bedc178fce (draft *obsolete*) [ bookb] add b
1149 1149 $ hg book -d bookb
1150 1150 $ hg log -r 13bedc178fce
1151 1151 abort: hidden revision '13bedc178fce'!
1152 1152 (use --hidden to access hidden revisions)
1153 1153 [255]
1154 1154
1155 1155 Empty out the test extension, as it isn't compatible with later parts
1156 1156 of the test.
1157 1157 $ echo > $TESTTMP/test_extension.py
1158 1158
1159 1159 Test ability to pull changeset with locally applying obsolescence markers
1160 1160 (issue4945)
1161 1161
1162 1162 $ cd ..
1163 1163 $ hg init issue4845
1164 1164 $ cd issue4845
1165 1165
1166 1166 $ echo foo > f0
1167 1167 $ hg add f0
1168 1168 $ hg ci -m '0'
1169 1169 $ echo foo > f1
1170 1170 $ hg add f1
1171 1171 $ hg ci -m '1'
1172 1172 $ echo foo > f2
1173 1173 $ hg add f2
1174 1174 $ hg ci -m '2'
1175 1175
1176 1176 $ echo bar > f2
1177 1177 $ hg commit --amend --config experimental.evolution=createmarkers
1178 1178 $ hg log -G
1179 1179 @ 4:b0551702f918 (draft) [tip ] 2
1180 1180 |
1181 1181 o 1:e016b03fd86f (draft) [ ] 1
1182 1182 |
1183 1183 o 0:a78f55e5508c (draft) [ ] 0
1184 1184
1185 1185 $ hg log -G --hidden
1186 1186 @ 4:b0551702f918 (draft) [tip ] 2
1187 1187 |
1188 1188 | x 3:f27abbcc1f77 (draft *obsolete*) [ ] temporary amend commit for e008cf283490
1189 1189 | |
1190 1190 | x 2:e008cf283490 (draft *obsolete*) [ ] 2
1191 1191 |/
1192 1192 o 1:e016b03fd86f (draft) [ ] 1
1193 1193 |
1194 1194 o 0:a78f55e5508c (draft) [ ] 0
1195 1195
1196 1196
1197 1197 $ hg strip -r 1 --config extensions.strip=
1198 1198 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
1199 1199 saved backup bundle to $TESTTMP/tmpe/issue4845/.hg/strip-backup/e016b03fd86f-c41c6bcc-backup.hg (glob)
1200 1200 $ hg log -G
1201 1201 @ 0:a78f55e5508c (draft) [tip ] 0
1202 1202
1203 1203 $ hg log -G --hidden
1204 1204 @ 0:a78f55e5508c (draft) [tip ] 0
1205 1205
1206 1206
1207 1207 $ hg pull .hg/strip-backup/*
1208 1208 pulling from .hg/strip-backup/e016b03fd86f-c41c6bcc-backup.hg
1209 1209 searching for changes
1210 1210 adding changesets
1211 1211 adding manifests
1212 1212 adding file changes
1213 1213 added 2 changesets with 2 changes to 2 files
1214 1214 (run 'hg update' to get a working copy)
1215 1215 $ hg log -G
1216 1216 o 2:b0551702f918 (draft) [tip ] 2
1217 1217 |
1218 1218 o 1:e016b03fd86f (draft) [ ] 1
1219 1219 |
1220 1220 @ 0:a78f55e5508c (draft) [ ] 0
1221 1221
1222 1222 $ hg log -G --hidden
1223 1223 o 2:b0551702f918 (draft) [tip ] 2
1224 1224 |
1225 1225 o 1:e016b03fd86f (draft) [ ] 1
1226 1226 |
1227 1227 @ 0:a78f55e5508c (draft) [ ] 0
1228 1228
1229 1229 Test that 'hg debugobsolete --index --rev' can show indices of obsmarkers when
1230 1230 only a subset of those are displayed (because of --rev option)
1231 1231 $ hg init doindexrev
1232 1232 $ cd doindexrev
1233 1233 $ echo a > a
1234 1234 $ hg ci -Am a
1235 1235 adding a
1236 1236 $ hg ci --amend -m aa
1237 1237 $ echo b > b
1238 1238 $ hg ci -Am b
1239 1239 adding b
1240 1240 $ hg ci --amend -m bb
1241 1241 $ echo c > c
1242 1242 $ hg ci -Am c
1243 1243 adding c
1244 1244 $ hg ci --amend -m cc
1245 1245 $ echo d > d
1246 1246 $ hg ci -Am d
1247 1247 adding d
1248 1248 $ hg ci --amend -m dd
1249 1249 $ hg debugobsolete --index --rev "3+7"
1250 1 6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1 d27fb9b066076fd921277a4b9e8b9cb48c95bc6a 0 \(.*\) {'user': 'test'} (re)
1251 3 4715cf767440ed891755448016c2b8cf70760c30 7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d 0 \(.*\) {'user': 'test'} (re)
1250 1 6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1 d27fb9b066076fd921277a4b9e8b9cb48c95bc6a 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1251 3 4715cf767440ed891755448016c2b8cf70760c30 7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1252 1252 $ hg debugobsolete --index --rev "3+7" -Tjson
1253 1253 [
1254 1254 {
1255 1255 "date": *, (glob)
1256 1256 "flag": 0,
1257 1257 "index": 1,
1258 "metadata": {"user": "test"},
1258 "metadata": {"operation": "amend", "user": "test"},
1259 1259 "precnode": "6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1",
1260 1260 "succnodes": ["d27fb9b066076fd921277a4b9e8b9cb48c95bc6a"]
1261 1261 },
1262 1262 {
1263 1263 "date": *, (glob)
1264 1264 "flag": 0,
1265 1265 "index": 3,
1266 "metadata": {"user": "test"},
1266 "metadata": {"operation": "amend", "user": "test"},
1267 1267 "precnode": "4715cf767440ed891755448016c2b8cf70760c30",
1268 1268 "succnodes": ["7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d"]
1269 1269 }
1270 1270 ]
1271 1271
1272 1272 Test the --delete option of the debugobsolete command
1273 1273 $ hg debugobsolete --index
1274 0 cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b f9bd49731b0b175e42992a3c8fa6c678b2bc11f1 0 \(.*\) {'user': 'test'} (re)
1275 1 6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1 d27fb9b066076fd921277a4b9e8b9cb48c95bc6a 0 \(.*\) {'user': 'test'} (re)
1276 2 1ab51af8f9b41ef8c7f6f3312d4706d870b1fb74 29346082e4a9e27042b62d2da0e2de211c027621 0 \(.*\) {'user': 'test'} (re)
1277 3 4715cf767440ed891755448016c2b8cf70760c30 7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d 0 \(.*\) {'user': 'test'} (re)
1274 0 cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b f9bd49731b0b175e42992a3c8fa6c678b2bc11f1 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1275 1 6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1 d27fb9b066076fd921277a4b9e8b9cb48c95bc6a 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1276 2 1ab51af8f9b41ef8c7f6f3312d4706d870b1fb74 29346082e4a9e27042b62d2da0e2de211c027621 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1277 3 4715cf767440ed891755448016c2b8cf70760c30 7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1278 1278 $ hg debugobsolete --delete 1 --delete 3
1279 1279 deleted 2 obsolescence markers
1280 1280 $ hg debugobsolete
1281 cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b f9bd49731b0b175e42992a3c8fa6c678b2bc11f1 0 \(.*\) {'user': 'test'} (re)
1282 1ab51af8f9b41ef8c7f6f3312d4706d870b1fb74 29346082e4a9e27042b62d2da0e2de211c027621 0 \(.*\) {'user': 'test'} (re)
1281 cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b f9bd49731b0b175e42992a3c8fa6c678b2bc11f1 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1282 1ab51af8f9b41ef8c7f6f3312d4706d870b1fb74 29346082e4a9e27042b62d2da0e2de211c027621 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1283 1283 $ cd ..
1284 1284
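The changed hunks above show each marker's metadata gaining an 'operation' key (for
example {'operation': 'amend', 'user': 'test'}). As a hedged illustration only, not
part of this change, here is a minimal sketch of an extension that records such a
marker through Mercurial's internal API. It assumes obsolete.createmarkers() accepts
a metadata dict, as it does in this era of Mercurial; the command name, revision
arguments and the 'myop' value are hypothetical.

    from mercurial import cmdutil, obsolete
    from mercurial.i18n import _

    cmdtable = {}
    command = cmdutil.command(cmdtable)

    @command('markwithoperation', [], _('hg markwithoperation OLDREV NEWREV'))
    def markwithoperation(ui, repo, oldrev, newrev):
        """record OLDREV as superseded by NEWREV with an 'operation' entry

        Hypothetical example command; 'myop' stands in for the
        'amend'/'rebase'/'histedit' strings the updated tests expect.
        """
        old = repo[oldrev]
        new = repo[newrev]
        with repo.lock():
            # createmarkers merges the metadata dict into every marker it
            # writes and adds the current user automatically, so
            # `hg debugobsolete` would show {'operation': 'myop', 'user': ...}
            obsolete.createmarkers(repo, [(old, (new,))],
                                   metadata={'operation': 'myop'})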
@@ -1,980 +1,980 b''
1 1 ==========================
2 2 Test rebase with obsolete
3 3 ==========================
4 4
5 5 Enable obsolete
6 6
7 7 $ cat >> $HGRCPATH << EOF
8 8 > [ui]
9 9 > logtemplate= {rev}:{node|short} {desc|firstline}
10 10 > [experimental]
11 11 > evolution=createmarkers,allowunstable
12 12 > [phases]
13 13 > publish=False
14 14 > [extensions]
15 15 > rebase=
16 16 > EOF
17 17
18 18 Setup rebase canonical repo
19 19
20 20 $ hg init base
21 21 $ cd base
22 22 $ hg unbundle "$TESTDIR/bundles/rebase.hg"
23 23 adding changesets
24 24 adding manifests
25 25 adding file changes
26 26 added 8 changesets with 7 changes to 7 files (+2 heads)
27 27 (run 'hg heads' to see heads, 'hg merge' to merge)
28 28 $ hg up tip
29 29 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
30 30 $ hg log -G
31 31 @ 7:02de42196ebe H
32 32 |
33 33 | o 6:eea13746799a G
34 34 |/|
35 35 o | 5:24b6387c8c8c F
36 36 | |
37 37 | o 4:9520eea781bc E
38 38 |/
39 39 | o 3:32af7686d403 D
40 40 | |
41 41 | o 2:5fddd98957c8 C
42 42 | |
43 43 | o 1:42ccdea3bb16 B
44 44 |/
45 45 o 0:cd010b8cd998 A
46 46
47 47 $ cd ..
48 48
49 49 simple rebase
50 50 ---------------------------------
51 51
52 52 $ hg clone base simple
53 53 updating to branch default
54 54 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
55 55 $ cd simple
56 56 $ hg up 32af7686d403
57 57 3 files updated, 0 files merged, 2 files removed, 0 files unresolved
58 58 $ hg rebase -d eea13746799a
59 59 rebasing 1:42ccdea3bb16 "B"
60 60 rebasing 2:5fddd98957c8 "C"
61 61 rebasing 3:32af7686d403 "D"
62 62 $ hg log -G
63 63 @ 10:8eeb3c33ad33 D
64 64 |
65 65 o 9:2327fea05063 C
66 66 |
67 67 o 8:e4e5be0395b2 B
68 68 |
69 69 | o 7:02de42196ebe H
70 70 | |
71 71 o | 6:eea13746799a G
72 72 |\|
73 73 | o 5:24b6387c8c8c F
74 74 | |
75 75 o | 4:9520eea781bc E
76 76 |/
77 77 o 0:cd010b8cd998 A
78 78
79 79 $ hg log --hidden -G
80 80 @ 10:8eeb3c33ad33 D
81 81 |
82 82 o 9:2327fea05063 C
83 83 |
84 84 o 8:e4e5be0395b2 B
85 85 |
86 86 | o 7:02de42196ebe H
87 87 | |
88 88 o | 6:eea13746799a G
89 89 |\|
90 90 | o 5:24b6387c8c8c F
91 91 | |
92 92 o | 4:9520eea781bc E
93 93 |/
94 94 | x 3:32af7686d403 D
95 95 | |
96 96 | x 2:5fddd98957c8 C
97 97 | |
98 98 | x 1:42ccdea3bb16 B
99 99 |/
100 100 o 0:cd010b8cd998 A
101 101
102 102 $ hg debugobsolete
103 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 e4e5be0395b2cbd471ed22a26b1b6a1a0658a794 0 (*) {'user': 'test'} (glob)
104 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 2327fea05063f39961b14cb69435a9898dc9a245 0 (*) {'user': 'test'} (glob)
105 32af7686d403cf45b5d95f2d70cebea587ac806a 8eeb3c33ad33d452c89e5dcf611c347f978fb42b 0 (*) {'user': 'test'} (glob)
103 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 e4e5be0395b2cbd471ed22a26b1b6a1a0658a794 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
104 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 2327fea05063f39961b14cb69435a9898dc9a245 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
105 32af7686d403cf45b5d95f2d70cebea587ac806a 8eeb3c33ad33d452c89e5dcf611c347f978fb42b 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
106 106
107 107
108 108 $ cd ..
109 109
110 110 empty changeset
111 111 ---------------------------------
112 112
113 113 $ hg clone base empty
114 114 updating to branch default
115 115 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
116 116 $ cd empty
117 117 $ hg up eea13746799a
118 118 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
119 119
120 120 We make a copy of both the first changeset in the rebased set and some other
121 121 changeset in that set.
122 122
123 123 $ hg graft 42ccdea3bb16 32af7686d403
124 124 grafting 1:42ccdea3bb16 "B"
125 125 grafting 3:32af7686d403 "D"
126 126 $ hg rebase -s 42ccdea3bb16 -d .
127 127 rebasing 1:42ccdea3bb16 "B"
128 128 note: rebase of 1:42ccdea3bb16 created no changes to commit
129 129 rebasing 2:5fddd98957c8 "C"
130 130 rebasing 3:32af7686d403 "D"
131 131 note: rebase of 3:32af7686d403 created no changes to commit
132 132 $ hg log -G
133 133 o 10:5ae4c968c6ac C
134 134 |
135 135 @ 9:08483444fef9 D
136 136 |
137 137 o 8:8877864f1edb B
138 138 |
139 139 | o 7:02de42196ebe H
140 140 | |
141 141 o | 6:eea13746799a G
142 142 |\|
143 143 | o 5:24b6387c8c8c F
144 144 | |
145 145 o | 4:9520eea781bc E
146 146 |/
147 147 o 0:cd010b8cd998 A
148 148
149 149 $ hg log --hidden -G
150 150 o 10:5ae4c968c6ac C
151 151 |
152 152 @ 9:08483444fef9 D
153 153 |
154 154 o 8:8877864f1edb B
155 155 |
156 156 | o 7:02de42196ebe H
157 157 | |
158 158 o | 6:eea13746799a G
159 159 |\|
160 160 | o 5:24b6387c8c8c F
161 161 | |
162 162 o | 4:9520eea781bc E
163 163 |/
164 164 | x 3:32af7686d403 D
165 165 | |
166 166 | x 2:5fddd98957c8 C
167 167 | |
168 168 | x 1:42ccdea3bb16 B
169 169 |/
170 170 o 0:cd010b8cd998 A
171 171
172 172 $ hg debugobsolete
173 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'user': 'test'} (glob)
174 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'user': 'test'} (glob)
175 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'user': 'test'} (glob)
173 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'operation': 'rebase', 'user': 'test'} (glob)
174 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
175 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'operation': 'rebase', 'user': 'test'} (glob)
176 176
177 177
178 178 More complex case where part of the rebase set was already rebased
179 179
180 180 $ hg rebase --rev 'desc(D)' --dest 'desc(H)'
181 181 rebasing 9:08483444fef9 "D"
182 182 $ hg debugobsolete
183 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'user': 'test'} (glob)
184 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'user': 'test'} (glob)
185 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'user': 'test'} (glob)
186 08483444fef91d6224f6655ee586a65d263ad34c 4596109a6a4328c398bde3a4a3b6737cfade3003 0 (*) {'user': 'test'} (glob)
183 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'operation': 'rebase', 'user': 'test'} (glob)
184 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
185 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'operation': 'rebase', 'user': 'test'} (glob)
186 08483444fef91d6224f6655ee586a65d263ad34c 4596109a6a4328c398bde3a4a3b6737cfade3003 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
187 187 $ hg log -G
188 188 @ 11:4596109a6a43 D
189 189 |
190 190 | o 10:5ae4c968c6ac C
191 191 | |
192 192 | x 9:08483444fef9 D
193 193 | |
194 194 | o 8:8877864f1edb B
195 195 | |
196 196 o | 7:02de42196ebe H
197 197 | |
198 198 | o 6:eea13746799a G
199 199 |/|
200 200 o | 5:24b6387c8c8c F
201 201 | |
202 202 | o 4:9520eea781bc E
203 203 |/
204 204 o 0:cd010b8cd998 A
205 205
206 206 $ hg rebase --source 'desc(B)' --dest 'tip' --config experimental.rebaseskipobsolete=True
207 207 rebasing 8:8877864f1edb "B"
208 208 note: not rebasing 9:08483444fef9 "D", already in destination as 11:4596109a6a43 "D"
209 209 rebasing 10:5ae4c968c6ac "C"
210 210 $ hg debugobsolete
211 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'user': 'test'} (glob)
212 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'user': 'test'} (glob)
213 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'user': 'test'} (glob)
214 08483444fef91d6224f6655ee586a65d263ad34c 4596109a6a4328c398bde3a4a3b6737cfade3003 0 (*) {'user': 'test'} (glob)
215 8877864f1edb05d0e07dc4ba77b67a80a7b86672 462a34d07e599b87ea08676a449373fe4e2e1347 0 (*) {'user': 'test'} (glob)
216 5ae4c968c6aca831df823664e706c9d4aa34473d 98f6af4ee9539e14da4465128f894c274900b6e5 0 (*) {'user': 'test'} (glob)
211 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'operation': 'rebase', 'user': 'test'} (glob)
212 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
213 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'operation': 'rebase', 'user': 'test'} (glob)
214 08483444fef91d6224f6655ee586a65d263ad34c 4596109a6a4328c398bde3a4a3b6737cfade3003 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
215 8877864f1edb05d0e07dc4ba77b67a80a7b86672 462a34d07e599b87ea08676a449373fe4e2e1347 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
216 5ae4c968c6aca831df823664e706c9d4aa34473d 98f6af4ee9539e14da4465128f894c274900b6e5 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
217 217 $ hg log --rev 'divergent()'
218 218 $ hg log -G
219 219 o 13:98f6af4ee953 C
220 220 |
221 221 o 12:462a34d07e59 B
222 222 |
223 223 @ 11:4596109a6a43 D
224 224 |
225 225 o 7:02de42196ebe H
226 226 |
227 227 | o 6:eea13746799a G
228 228 |/|
229 229 o | 5:24b6387c8c8c F
230 230 | |
231 231 | o 4:9520eea781bc E
232 232 |/
233 233 o 0:cd010b8cd998 A
234 234
235 235 $ hg log --style default --debug -r 4596109a6a4328c398bde3a4a3b6737cfade3003
236 236 changeset: 11:4596109a6a4328c398bde3a4a3b6737cfade3003
237 237 phase: draft
238 238 parent: 7:02de42196ebee42ef284b6780a87cdc96e8eaab6
239 239 parent: -1:0000000000000000000000000000000000000000
240 240 manifest: 11:a91006e3a02f1edf631f7018e6e5684cf27dd905
241 241 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
242 242 date: Sat Apr 30 15:24:48 2011 +0200
243 243 files+: D
244 244 extra: branch=default
245 245 extra: rebase_source=08483444fef91d6224f6655ee586a65d263ad34c
246 246 extra: source=32af7686d403cf45b5d95f2d70cebea587ac806a
247 247 description:
248 248 D
249 249
250 250
251 251 $ hg up -qr 'desc(G)'
252 252 $ hg graft 4596109a6a4328c398bde3a4a3b6737cfade3003
253 253 grafting 11:4596109a6a43 "D"
254 254 $ hg up -qr 'desc(E)'
255 255 $ hg rebase -s tip -d .
256 256 rebasing 14:9e36056a46e3 "D" (tip)
257 257 $ hg log --style default --debug -r tip
258 258 changeset: 15:627d4614809036ba22b9e7cb31638ddc06ab99ab
259 259 tag: tip
260 260 phase: draft
261 261 parent: 4:9520eea781bcca16c1e15acc0ba14335a0e8e5ba
262 262 parent: -1:0000000000000000000000000000000000000000
263 263 manifest: 15:648e8ede73ae3e497d093d3a4c8fcc2daa864f42
264 264 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
265 265 date: Sat Apr 30 15:24:48 2011 +0200
266 266 files+: D
267 267 extra: branch=default
268 268 extra: intermediate-source=4596109a6a4328c398bde3a4a3b6737cfade3003
269 269 extra: rebase_source=9e36056a46e37c9776168c7375734eebc70e294f
270 270 extra: source=32af7686d403cf45b5d95f2d70cebea587ac806a
271 271 description:
272 272 D
273 273
274 274
275 275 Start rebase from a commit that is obsolete but not hidden only because it's
276 276 a working copy parent. We should be moved back to the starting commit as usual
277 277 even though it is hidden (until we're moved there).
278 278
279 279 $ hg --hidden up -qr 'first(hidden())'
280 280 $ hg rebase --rev 13 --dest 15
281 281 rebasing 13:98f6af4ee953 "C"
282 282 $ hg log -G
283 283 o 16:294a2b93eb4d C
284 284 |
285 285 o 15:627d46148090 D
286 286 |
287 287 | o 12:462a34d07e59 B
288 288 | |
289 289 | o 11:4596109a6a43 D
290 290 | |
291 291 | o 7:02de42196ebe H
292 292 | |
293 293 +---o 6:eea13746799a G
294 294 | |/
295 295 | o 5:24b6387c8c8c F
296 296 | |
297 297 o | 4:9520eea781bc E
298 298 |/
299 299 | @ 1:42ccdea3bb16 B
300 300 |/
301 301 o 0:cd010b8cd998 A
302 302
303 303
304 304 $ cd ..
305 305
306 306 collapse rebase
307 307 ---------------------------------
308 308
309 309 $ hg clone base collapse
310 310 updating to branch default
311 311 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
312 312 $ cd collapse
313 313 $ hg rebase -s 42ccdea3bb16 -d eea13746799a --collapse
314 314 rebasing 1:42ccdea3bb16 "B"
315 315 rebasing 2:5fddd98957c8 "C"
316 316 rebasing 3:32af7686d403 "D"
317 317 $ hg log -G
318 318 o 8:4dc2197e807b Collapsed revision
319 319 |
320 320 | @ 7:02de42196ebe H
321 321 | |
322 322 o | 6:eea13746799a G
323 323 |\|
324 324 | o 5:24b6387c8c8c F
325 325 | |
326 326 o | 4:9520eea781bc E
327 327 |/
328 328 o 0:cd010b8cd998 A
329 329
330 330 $ hg log --hidden -G
331 331 o 8:4dc2197e807b Collapsed revision
332 332 |
333 333 | @ 7:02de42196ebe H
334 334 | |
335 335 o | 6:eea13746799a G
336 336 |\|
337 337 | o 5:24b6387c8c8c F
338 338 | |
339 339 o | 4:9520eea781bc E
340 340 |/
341 341 | x 3:32af7686d403 D
342 342 | |
343 343 | x 2:5fddd98957c8 C
344 344 | |
345 345 | x 1:42ccdea3bb16 B
346 346 |/
347 347 o 0:cd010b8cd998 A
348 348
349 349 $ hg id --debug -r tip
350 350 4dc2197e807bae9817f09905b50ab288be2dbbcf tip
351 351 $ hg debugobsolete
352 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'user': 'test'} (glob)
353 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'user': 'test'} (glob)
354 32af7686d403cf45b5d95f2d70cebea587ac806a 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'user': 'test'} (glob)
352 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
353 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
354 32af7686d403cf45b5d95f2d70cebea587ac806a 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
355 355
356 356 $ cd ..
357 357
358 358 Rebase set has hidden descendants
359 359 ---------------------------------
360 360
361 361 We rebase a changeset which has hidden descendants. The hidden changesets must
362 362 not be rebased.
363 363
364 364 $ hg clone base hidden
365 365 updating to branch default
366 366 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
367 367 $ cd hidden
368 368 $ hg rebase -s 5fddd98957c8 -d eea13746799a
369 369 rebasing 2:5fddd98957c8 "C"
370 370 rebasing 3:32af7686d403 "D"
371 371 $ hg rebase -s 42ccdea3bb16 -d 02de42196ebe
372 372 rebasing 1:42ccdea3bb16 "B"
373 373 $ hg log -G
374 374 o 10:7c6027df6a99 B
375 375 |
376 376 | o 9:cf44d2f5a9f4 D
377 377 | |
378 378 | o 8:e273c5e7d2d2 C
379 379 | |
380 380 @ | 7:02de42196ebe H
381 381 | |
382 382 | o 6:eea13746799a G
383 383 |/|
384 384 o | 5:24b6387c8c8c F
385 385 | |
386 386 | o 4:9520eea781bc E
387 387 |/
388 388 o 0:cd010b8cd998 A
389 389
390 390 $ hg log --hidden -G
391 391 o 10:7c6027df6a99 B
392 392 |
393 393 | o 9:cf44d2f5a9f4 D
394 394 | |
395 395 | o 8:e273c5e7d2d2 C
396 396 | |
397 397 @ | 7:02de42196ebe H
398 398 | |
399 399 | o 6:eea13746799a G
400 400 |/|
401 401 o | 5:24b6387c8c8c F
402 402 | |
403 403 | o 4:9520eea781bc E
404 404 |/
405 405 | x 3:32af7686d403 D
406 406 | |
407 407 | x 2:5fddd98957c8 C
408 408 | |
409 409 | x 1:42ccdea3bb16 B
410 410 |/
411 411 o 0:cd010b8cd998 A
412 412
413 413 $ hg debugobsolete
414 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b e273c5e7d2d29df783dce9f9eaa3ac4adc69c15d 0 (*) {'user': 'test'} (glob)
415 32af7686d403cf45b5d95f2d70cebea587ac806a cf44d2f5a9f4297a62be94cbdd3dff7c7dc54258 0 (*) {'user': 'test'} (glob)
416 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 7c6027df6a99d93f461868e5433f63bde20b6dfb 0 (*) {'user': 'test'} (glob)
414 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b e273c5e7d2d29df783dce9f9eaa3ac4adc69c15d 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
415 32af7686d403cf45b5d95f2d70cebea587ac806a cf44d2f5a9f4297a62be94cbdd3dff7c7dc54258 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
416 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 7c6027df6a99d93f461868e5433f63bde20b6dfb 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
417 417
418 418 Test that rewriting that leaves instability behind is allowed
419 419 ---------------------------------------------------------------------
420 420
421 421 $ hg log -r 'children(8)'
422 422 9:cf44d2f5a9f4 D (no-eol)
423 423 $ hg rebase -r 8
424 424 rebasing 8:e273c5e7d2d2 "C"
425 425 $ hg log -G
426 426 o 11:0d8f238b634c C
427 427 |
428 428 o 10:7c6027df6a99 B
429 429 |
430 430 | o 9:cf44d2f5a9f4 D
431 431 | |
432 432 | x 8:e273c5e7d2d2 C
433 433 | |
434 434 @ | 7:02de42196ebe H
435 435 | |
436 436 | o 6:eea13746799a G
437 437 |/|
438 438 o | 5:24b6387c8c8c F
439 439 | |
440 440 | o 4:9520eea781bc E
441 441 |/
442 442 o 0:cd010b8cd998 A
443 443
444 444
445 445
446 446 Test multiple root handling
447 447 ------------------------------------
448 448
449 449 $ hg rebase --dest 4 --rev '7+11+9'
450 450 rebasing 9:cf44d2f5a9f4 "D"
451 451 rebasing 7:02de42196ebe "H"
452 452 not rebasing ignored 10:7c6027df6a99 "B"
453 453 rebasing 11:0d8f238b634c "C" (tip)
454 454 $ hg log -G
455 455 o 14:1e8370e38cca C
456 456 |
457 457 @ 13:bfe264faf697 H
458 458 |
459 459 | o 12:102b4c1d889b D
460 460 |/
461 461 | o 10:7c6027df6a99 B
462 462 | |
463 463 | x 7:02de42196ebe H
464 464 | |
465 465 +---o 6:eea13746799a G
466 466 | |/
467 467 | o 5:24b6387c8c8c F
468 468 | |
469 469 o | 4:9520eea781bc E
470 470 |/
471 471 o 0:cd010b8cd998 A
472 472
473 473 $ cd ..
474 474
475 475 test of a rebase dropping a merge
476 476
477 477 (setup)
478 478
479 479 $ hg init dropmerge
480 480 $ cd dropmerge
481 481 $ hg unbundle "$TESTDIR/bundles/rebase.hg"
482 482 adding changesets
483 483 adding manifests
484 484 adding file changes
485 485 added 8 changesets with 7 changes to 7 files (+2 heads)
486 486 (run 'hg heads' to see heads, 'hg merge' to merge)
487 487 $ hg up 3
488 488 4 files updated, 0 files merged, 0 files removed, 0 files unresolved
489 489 $ hg merge 7
490 490 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
491 491 (branch merge, don't forget to commit)
492 492 $ hg ci -m 'M'
493 493 $ echo I > I
494 494 $ hg add I
495 495 $ hg ci -m I
496 496 $ hg log -G
497 497 @ 9:4bde274eefcf I
498 498 |
499 499 o 8:53a6a128b2b7 M
500 500 |\
501 501 | o 7:02de42196ebe H
502 502 | |
503 503 | | o 6:eea13746799a G
504 504 | |/|
505 505 | o | 5:24b6387c8c8c F
506 506 | | |
507 507 | | o 4:9520eea781bc E
508 508 | |/
509 509 o | 3:32af7686d403 D
510 510 | |
511 511 o | 2:5fddd98957c8 C
512 512 | |
513 513 o | 1:42ccdea3bb16 B
514 514 |/
515 515 o 0:cd010b8cd998 A
516 516
517 517 (actual test)
518 518
519 519 $ hg rebase --dest 6 --rev '((desc(H) + desc(D))::) - desc(M)'
520 520 rebasing 3:32af7686d403 "D"
521 521 rebasing 7:02de42196ebe "H"
522 522 not rebasing ignored 8:53a6a128b2b7 "M"
523 523 rebasing 9:4bde274eefcf "I" (tip)
524 524 $ hg log -G
525 525 @ 12:acd174b7ab39 I
526 526 |
527 527 o 11:6c11a6218c97 H
528 528 |
529 529 | o 10:b5313c85b22e D
530 530 |/
531 531 | o 8:53a6a128b2b7 M
532 532 | |\
533 533 | | x 7:02de42196ebe H
534 534 | | |
535 535 o---+ 6:eea13746799a G
536 536 | | |
537 537 | | o 5:24b6387c8c8c F
538 538 | | |
539 539 o---+ 4:9520eea781bc E
540 540 / /
541 541 x | 3:32af7686d403 D
542 542 | |
543 543 o | 2:5fddd98957c8 C
544 544 | |
545 545 o | 1:42ccdea3bb16 B
546 546 |/
547 547 o 0:cd010b8cd998 A
548 548
549 549
550 550 Test hidden changesets in the rebase set (issue4504)
551 551
552 552 $ hg up --hidden 9
553 553 3 files updated, 0 files merged, 1 files removed, 0 files unresolved
554 554 $ echo J > J
555 555 $ hg add J
556 556 $ hg commit -m J
557 557 $ hg debugobsolete `hg log --rev . -T '{node}'`
558 558
559 559 $ hg rebase --rev .~1::. --dest 'max(desc(D))' --traceback --config experimental.rebaseskipobsolete=off
560 560 rebasing 9:4bde274eefcf "I"
561 561 rebasing 13:06edfc82198f "J" (tip)
562 562 $ hg log -G
563 563 @ 15:5ae8a643467b J
564 564 |
565 565 o 14:9ad579b4a5de I
566 566 |
567 567 | o 12:acd174b7ab39 I
568 568 | |
569 569 | o 11:6c11a6218c97 H
570 570 | |
571 571 o | 10:b5313c85b22e D
572 572 |/
573 573 | o 8:53a6a128b2b7 M
574 574 | |\
575 575 | | x 7:02de42196ebe H
576 576 | | |
577 577 o---+ 6:eea13746799a G
578 578 | | |
579 579 | | o 5:24b6387c8c8c F
580 580 | | |
581 581 o---+ 4:9520eea781bc E
582 582 / /
583 583 x | 3:32af7686d403 D
584 584 | |
585 585 o | 2:5fddd98957c8 C
586 586 | |
587 587 o | 1:42ccdea3bb16 B
588 588 |/
589 589 o 0:cd010b8cd998 A
590 590
591 591 $ hg up 14 -C
592 592 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
593 593 $ echo "K" > K
594 594 $ hg add K
595 595 $ hg commit --amend -m "K"
596 596 $ echo "L" > L
597 597 $ hg add L
598 598 $ hg commit -m "L"
599 599 $ hg up '.^'
600 600 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
601 601 $ echo "M" > M
602 602 $ hg add M
603 603 $ hg commit --amend -m "M"
604 604 $ hg log -G
605 605 @ 20:bfaedf8eb73b M
606 606 |
607 607 | o 18:97219452e4bd L
608 608 | |
609 609 | x 17:fc37a630c901 K
610 610 |/
611 611 | o 15:5ae8a643467b J
612 612 | |
613 613 | x 14:9ad579b4a5de I
614 614 |/
615 615 | o 12:acd174b7ab39 I
616 616 | |
617 617 | o 11:6c11a6218c97 H
618 618 | |
619 619 o | 10:b5313c85b22e D
620 620 |/
621 621 | o 8:53a6a128b2b7 M
622 622 | |\
623 623 | | x 7:02de42196ebe H
624 624 | | |
625 625 o---+ 6:eea13746799a G
626 626 | | |
627 627 | | o 5:24b6387c8c8c F
628 628 | | |
629 629 o---+ 4:9520eea781bc E
630 630 / /
631 631 x | 3:32af7686d403 D
632 632 | |
633 633 o | 2:5fddd98957c8 C
634 634 | |
635 635 o | 1:42ccdea3bb16 B
636 636 |/
637 637 o 0:cd010b8cd998 A
638 638
639 639 $ hg rebase -s 14 -d 18 --config experimental.rebaseskipobsolete=True
640 640 note: not rebasing 14:9ad579b4a5de "I", already in destination as 17:fc37a630c901 "K"
641 641 rebasing 15:5ae8a643467b "J"
642 642
643 643 $ cd ..
644 644
645 645 Skip obsolete changeset even with multiple hops
646 646 -----------------------------------------------
647 647
648 648 setup
649 649
650 650 $ hg init obsskip
651 651 $ cd obsskip
652 652 $ cat << EOF >> .hg/hgrc
653 653 > [experimental]
654 654 > rebaseskipobsolete = True
655 655 > [extensions]
656 656 > strip =
657 657 > EOF
658 658 $ echo A > A
659 659 $ hg add A
660 660 $ hg commit -m A
661 661 $ echo B > B
662 662 $ hg add B
663 663 $ hg commit -m B0
664 664 $ hg commit --amend -m B1
665 665 $ hg commit --amend -m B2
666 666 $ hg up --hidden 'desc(B0)'
667 667 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
668 668 $ echo C > C
669 669 $ hg add C
670 670 $ hg commit -m C
671 671
672 672 Rebase finds its way in a chain of markers
673 673
674 674 $ hg rebase -d 'desc(B2)'
675 675 note: not rebasing 1:a8b11f55fb19 "B0", already in destination as 3:261e70097290 "B2"
676 676 rebasing 4:212cb178bcbb "C" (tip)
677 677
678 678 Even when the chain includes a missing node
679 679
680 680 $ hg up --hidden 'desc(B0)'
681 681 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
682 682 $ echo D > D
683 683 $ hg add D
684 684 $ hg commit -m D
685 685 $ hg --hidden strip -r 'desc(B1)'
686 686 saved backup bundle to $TESTTMP/obsskip/.hg/strip-backup/86f6414ccda7-b1c452ee-backup.hg (glob)
687 687
688 688 $ hg rebase -d 'desc(B2)'
689 689 note: not rebasing 1:a8b11f55fb19 "B0", already in destination as 2:261e70097290 "B2"
690 690 rebasing 5:1a79b7535141 "D" (tip)
691 691 $ hg up 4
692 692 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
693 693 $ echo "O" > O
694 694 $ hg add O
695 695 $ hg commit -m O
696 696 $ echo "P" > P
697 697 $ hg add P
698 698 $ hg commit -m P
699 699 $ hg log -G
700 700 @ 8:8d47583e023f P
701 701 |
702 702 o 7:360bbaa7d3ce O
703 703 |
704 704 | o 6:9c48361117de D
705 705 | |
706 706 o | 4:ff2c4d47b71d C
707 707 |/
708 708 o 2:261e70097290 B2
709 709 |
710 710 o 0:4a2df7238c3b A
711 711
712 712 $ hg debugobsolete `hg log -r 7 -T '{node}\n'` --config experimental.evolution=all
713 713 $ hg rebase -d 6 -r "4::"
714 714 rebasing 4:ff2c4d47b71d "C"
715 715 note: not rebasing 7:360bbaa7d3ce "O", it has no successor
716 716 rebasing 8:8d47583e023f "P" (tip)
717 717
718 718 If all the changesets to be rebased are obsolete and present in the destination, we
719 719 should display a friendly error message
720 720
721 721 $ hg log -G
722 722 @ 10:121d9e3bc4c6 P
723 723 |
724 724 o 9:4be60e099a77 C
725 725 |
726 726 o 6:9c48361117de D
727 727 |
728 728 o 2:261e70097290 B2
729 729 |
730 730 o 0:4a2df7238c3b A
731 731
732 732
733 733 $ hg up 9
734 734 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
735 735 $ echo "non-relevant change" > nonrelevant
736 736 $ hg add nonrelevant
737 737 $ hg commit -m nonrelevant
738 738 created new head
739 739 $ hg debugobsolete `hg log -r 11 -T '{node}\n'` --config experimental.evolution=all
740 740 $ hg rebase -r . -d 10
741 741 note: not rebasing 11:f44da1f4954c "nonrelevant" (tip), it has no successor
742 742
743 743 If a rebase is going to create divergence, it should abort
744 744
745 745 $ hg log -G
746 746 @ 11:f44da1f4954c nonrelevant
747 747 |
748 748 | o 10:121d9e3bc4c6 P
749 749 |/
750 750 o 9:4be60e099a77 C
751 751 |
752 752 o 6:9c48361117de D
753 753 |
754 754 o 2:261e70097290 B2
755 755 |
756 756 o 0:4a2df7238c3b A
757 757
758 758
759 759 $ hg up 9
760 760 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
761 761 $ echo "john" > doe
762 762 $ hg add doe
763 763 $ hg commit -m "john doe"
764 764 created new head
765 765 $ hg up 10
766 766 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
767 767 $ echo "foo" > bar
768 768 $ hg add bar
769 769 $ hg commit --amend -m "10'"
770 770 $ hg up 10 --hidden
771 771 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
772 772 $ echo "bar" > foo
773 773 $ hg add foo
774 774 $ hg commit -m "bar foo"
775 775 $ hg log -G
776 776 @ 15:73568ab6879d bar foo
777 777 |
778 778 | o 14:77d874d096a2 10'
779 779 | |
780 780 | | o 12:3eb461388009 john doe
781 781 | |/
782 782 x | 10:121d9e3bc4c6 P
783 783 |/
784 784 o 9:4be60e099a77 C
785 785 |
786 786 o 6:9c48361117de D
787 787 |
788 788 o 2:261e70097290 B2
789 789 |
790 790 o 0:4a2df7238c3b A
791 791
792 792 $ hg summary
793 793 parent: 15:73568ab6879d tip (unstable)
794 794 bar foo
795 795 branch: default
796 796 commit: (clean)
797 797 update: 2 new changesets, 3 branch heads (merge)
798 798 phases: 8 draft
799 799 unstable: 1 changesets
800 800 $ hg rebase -s 10 -d 12
801 801 abort: this rebase will cause divergences from: 121d9e3bc4c6
802 802 (to force the rebase please set experimental.allowdivergence=True)
803 803 [255]
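
Rebasing 10 would give the already-obsolete "P" a second successor (14 is the
first one), which is the divergence the abort message warns about. When
divergence is genuinely wanted, the option can also be set persistently; a
sketch of the hgrc form, equivalent to the --config flag used further below:

  [experimental]
  allowdivergence = True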
804 804 $ hg log -G
805 805 @ 15:73568ab6879d bar foo
806 806 |
807 807 | o 14:77d874d096a2 10'
808 808 | |
809 809 | | o 12:3eb461388009 john doe
810 810 | |/
811 811 x | 10:121d9e3bc4c6 P
812 812 |/
813 813 o 9:4be60e099a77 C
814 814 |
815 815 o 6:9c48361117de D
816 816 |
817 817 o 2:261e70097290 B2
818 818 |
819 819 o 0:4a2df7238c3b A
820 820
821 821 With experimental.allowdivergence=True, rebase can create divergence
822 822
823 823 $ hg rebase -s 10 -d 12 --config experimental.allowdivergence=True
824 824 rebasing 10:121d9e3bc4c6 "P"
825 825 rebasing 15:73568ab6879d "bar foo" (tip)
826 826 $ hg summary
827 827 parent: 17:61bd55f69bc4 tip
828 828 bar foo
829 829 branch: default
830 830 commit: (clean)
831 831 update: 1 new changesets, 2 branch heads (merge)
832 832 phases: 8 draft
833 833 divergent: 2 changesets
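
The two divergent changesets reported here are the rebased copy of "P" and the
pre-existing amended version "10'": both are now successors of the same
changeset. As an aside, they can be listed with a revset (named divergent() in
Mercurial releases of this era, later renamed contentdivergent()):

  hg log -r "divergent()"    # list changesets involved in content divergence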
834 834
835 835 rebase --continue + revs skipped because their successors are in the destination
836 836 We make a change in trunk and work on conflicting changes to make the rebase abort.
837 837
838 838 $ hg log -G -r 17::
839 839 @ 17:61bd55f69bc4 bar foo
840 840 |
841 841 ~
842 842
843 843 Create the two changes in trunk
844 844 $ printf "a" > willconflict
845 845 $ hg add willconflict
846 846 $ hg commit -m "willconflict first version"
847 847
848 848 $ printf "dummy" > C
849 849 $ hg commit -m "dummy change successor"
850 850
851 851 Create the changes that we will rebase
852 852 $ hg update -C 17 -q
853 853 $ printf "b" > willconflict
854 854 $ hg add willconflict
855 855 $ hg commit -m "willconflict second version"
856 856 created new head
857 857 $ printf "dummy" > K
858 858 $ hg add K
859 859 $ hg commit -m "dummy change"
860 860 $ printf "dummy" > L
861 861 $ hg add L
862 862 $ hg commit -m "dummy change"
863 863 $ hg debugobsolete `hg log -r ".^" -T '{node}'` `hg log -r 19 -T '{node}'` --config experimental.evolution=all
864 864
865 865 $ hg log -G -r 17::
866 866 @ 22:7bdc8a87673d dummy change
867 867 |
868 868 x 21:8b31da3c4919 dummy change
869 869 |
870 870 o 20:b82fb57ea638 willconflict second version
871 871 |
872 872 | o 19:601db7a18f51 dummy change successor
873 873 | |
874 874 | o 18:357ddf1602d5 willconflict first version
875 875 |/
876 876 o 17:61bd55f69bc4 bar foo
877 877 |
878 878 ~
879 879 $ hg rebase -r ".^^ + .^ + ." -d 19
880 880 rebasing 20:b82fb57ea638 "willconflict second version"
881 881 merging willconflict
882 882 warning: conflicts while merging willconflict! (edit, then use 'hg resolve --mark')
883 883 unresolved conflicts (see hg resolve, then hg rebase --continue)
884 884 [1]
885 885
886 886 $ hg resolve --mark willconflict
887 887 (no more unresolved files)
888 888 continue: hg rebase --continue
889 889 $ hg rebase --continue
890 890 rebasing 20:b82fb57ea638 "willconflict second version"
891 891 note: not rebasing 21:8b31da3c4919 "dummy change", already in destination as 19:601db7a18f51 "dummy change successor"
892 892 rebasing 22:7bdc8a87673d "dummy change" (tip)
893 893 $ cd ..
894 894
895 895 rebase source is obsoleted (issue5198)
896 896 --------------------------------------
897 897
898 898 $ hg clone base amended
899 899 updating to branch default
900 900 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
901 901 $ cd amended
902 902 $ hg up 9520eea781bc
903 903 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
904 904 $ echo 1 >> E
905 905 $ hg commit --amend -m "E'"
906 906 $ hg log -G
907 907 @ 9:69abe8906104 E'
908 908 |
909 909 | o 7:02de42196ebe H
910 910 | |
911 911 | | o 6:eea13746799a G
912 912 | |/|
913 913 | o | 5:24b6387c8c8c F
914 914 |/ /
915 915 | x 4:9520eea781bc E
916 916 |/
917 917 | o 3:32af7686d403 D
918 918 | |
919 919 | o 2:5fddd98957c8 C
920 920 | |
921 921 | o 1:42ccdea3bb16 B
922 922 |/
923 923 o 0:cd010b8cd998 A
924 924
925 925 $ hg rebase -d . -s 9520eea781bc
926 926 note: not rebasing 4:9520eea781bc "E", already in destination as 9:69abe8906104 "E'"
927 927 rebasing 6:eea13746799a "G"
928 928 $ hg log -G
929 929 o 10:17be06e82e95 G
930 930 |\
931 931 | @ 9:69abe8906104 E'
932 932 | |
933 933 +---o 7:02de42196ebe H
934 934 | |
935 935 o | 5:24b6387c8c8c F
936 936 |/
937 937 | o 3:32af7686d403 D
938 938 | |
939 939 | o 2:5fddd98957c8 C
940 940 | |
941 941 | o 1:42ccdea3bb16 B
942 942 |/
943 943 o 0:cd010b8cd998 A
944 944
945 945 $ cd ..
946 946
947 947 Test that bookmark is moved and working dir is updated when all changesets have
948 948 equivalents in destination
949 949 $ hg init rbsrepo && cd rbsrepo
950 950 $ echo "[experimental]" > .hg/hgrc
951 951 $ echo "evolution=all" >> .hg/hgrc
952 952 $ echo "rebaseskipobsolete=on" >> .hg/hgrc
953 953 $ echo root > root && hg ci -Am root
954 954 adding root
955 955 $ echo a > a && hg ci -Am a
956 956 adding a
957 957 $ hg up 0
958 958 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
959 959 $ echo b > b && hg ci -Am b
960 960 adding b
961 961 created new head
962 962 $ hg rebase -r 2 -d 1
963 963 rebasing 2:1e9a3c00cbe9 "b" (tip)
964 964 $ hg log -r . # working dir is at rev 3 (successor of 2)
965 965 3:be1832deae9a b (no-eol)
966 966 $ hg book -r 2 mybook --hidden # rev 2 has a bookmark on it now
967 967 $ hg up 2 && hg log -r . # working dir is at rev 2 again
968 968 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
969 969 2:1e9a3c00cbe9 b (no-eol)
970 970 $ hg rebase -r 2 -d 3
971 971 note: not rebasing 2:1e9a3c00cbe9 "b" (mybook), already in destination as 3:be1832deae9a "b"
972 972 Check that working directory was updated to rev 3 although rev 2 was skipped
973 973 during the rebase operation
974 974 $ hg log -r .
975 975 3:be1832deae9a b (no-eol)
976 976
977 977 Check that bookmark was moved to rev 3 although rev 2 was skipped
978 978 during the rebase operation
979 979 $ hg bookmarks
980 980 mybook 3:be1832deae9a