merge: have merge.update use a matcher instead of partial fn...
Augie Fackler
r27344:43c00ca8 default

The requested changes are too big and content was truncated.

@@ -1,1414 +1,1414 b''
1 1 # histedit.py - interactive history editing for mercurial
2 2 #
3 3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """interactive history editing
8 8
9 9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 10 is as follows, assuming the following history::
11 11
12 12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 13 | Add delta
14 14 |
15 15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 16 | Add gamma
17 17 |
18 18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 19 | Add beta
20 20 |
21 21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 22 Add alpha
23 23
24 24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 25 file open in your editor::
26 26
27 27 pick c561b4e977df Add beta
28 28 pick 030b686bedc4 Add gamma
29 29 pick 7c2fd3b9020c Add delta
30 30
31 31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 32 #
33 33 # Commits are listed from least to most recent
34 34 #
35 35 # Commands:
36 36 # p, pick = use commit
37 37 # e, edit = use commit, but stop for amending
38 38 # f, fold = use commit, but combine it with the one above
39 39 # r, roll = like fold, but discard this commit's description
40 40 # d, drop = remove commit from history
41 41 # m, mess = edit commit message without changing commit content
42 42 #
43 43
44 44 In this file, lines beginning with ``#`` are ignored. You must specify a rule
45 45 for each revision in your history. For example, if you had meant to add gamma
46 46 before beta, and then wanted to add delta in the same revision as beta, you
47 47 would reorganize the file to look like this::
48 48
49 49 pick 030b686bedc4 Add gamma
50 50 pick c561b4e977df Add beta
51 51 fold 7c2fd3b9020c Add delta
52 52
53 53 # Edit history between c561b4e977df and 7c2fd3b9020c
54 54 #
55 55 # Commits are listed from least to most recent
56 56 #
57 57 # Commands:
58 58 # p, pick = use commit
59 59 # e, edit = use commit, but stop for amending
60 60 # f, fold = use commit, but combine it with the one above
61 61 # r, roll = like fold, but discard this commit's description
62 62 # d, drop = remove commit from history
63 63 # m, mess = edit commit message without changing commit content
64 64 #
65 65
66 66 At which point you close the editor and ``histedit`` starts working. When you
67 67 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
68 68 those revisions together, offering you a chance to clean up the commit message::
69 69
70 70 Add beta
71 71 ***
72 72 Add delta
73 73
74 74 Edit the commit message to your liking, then close the editor. For
75 75 this example, let's assume that the commit message was changed to
76 76 ``Add beta and delta.`` After histedit has run and had a chance to
77 77 remove any old or temporary revisions it needed, the history looks
78 78 like this::
79 79
80 80 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
81 81 | Add beta and delta.
82 82 |
83 83 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
84 84 | Add gamma
85 85 |
86 86 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
87 87 Add alpha
88 88
89 89 Note that ``histedit`` does *not* remove any revisions (even its own temporary
90 90 ones) until after it has completed all the editing operations, so it will
91 91 probably perform several strip operations when it's done. For the above example,
92 92 it had to run strip twice. Strip can be slow depending on a variety of factors,
93 93 so you might need to be a little patient. You can choose to keep the original
94 94 revisions by passing the ``--keep`` flag.
95 95
96 96 The ``edit`` operation will drop you back to a command prompt,
97 97 allowing you to edit files freely, or even use ``hg record`` to commit
98 98 some changes as a separate commit. When you're done, any remaining
99 99 uncommitted changes will be committed as well. When done, run ``hg
100 100 histedit --continue`` to finish this step. You'll be prompted for a
101 101 new commit message, but the default commit message will be the
102 102 original message for the ``edit``-ed revision.
103 103
104 104 The ``message`` operation will give you a chance to revise a commit
105 105 message without changing the contents. It's a shortcut for doing
106 106 ``edit`` immediately followed by ``hg histedit --continue``.
107 107
108 108 If ``histedit`` encounters a conflict when moving a revision (while
109 109 handling ``pick`` or ``fold``), it'll stop in a similar manner to
110 110 ``edit`` with the difference that it won't prompt you for a commit
111 111 message when done. If you decide at this point that you don't like how
112 112 much work it will be to rearrange history, or that you made a mistake,
113 113 you can use ``hg histedit --abort`` to abandon the new changes you
114 114 have made and return to the state before you attempted to edit your
115 115 history.
116 116
117 117 If we clone the histedit-ed example repository above and add four more
118 118 changes, such that we have the following history::
119 119
120 120 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
121 121 | Add theta
122 122 |
123 123 o 5 140988835471 2009-04-27 18:04 -0500 stefan
124 124 | Add eta
125 125 |
126 126 o 4 122930637314 2009-04-27 18:04 -0500 stefan
127 127 | Add zeta
128 128 |
129 129 o 3 836302820282 2009-04-27 18:04 -0500 stefan
130 130 | Add epsilon
131 131 |
132 132 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
133 133 | Add beta and delta.
134 134 |
135 135 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
136 136 | Add gamma
137 137 |
138 138 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
139 139 Add alpha
140 140
141 141 If you run ``hg histedit --outgoing`` on the clone then it is the same
142 142 as running ``hg histedit 836302820282``. If you plan to push to a
143 143 repository that Mercurial does not detect to be related to the source
144 144 repo, you can add a ``--force`` option.
145 145
146 146 Histedit rule lines are truncated to 80 characters by default. You
147 147 can customize this behavior by setting a different length in your
148 148 configuration file::
149 149
150 150 [histedit]
151 151 linelen = 120 # truncate rule lines at 120 characters
152 152
153 153 ``hg histedit`` attempts to automatically choose an appropriate base
154 154 revision to use. To change which base revision is used, define a
155 155 revset in your configuration file::
156 156
157 157 [histedit]
158 158 defaultrev = only(.) & draft()
159 159 """
160 160
161 161 try:
162 162 import cPickle as pickle
163 163 pickle.dump # import now
164 164 except ImportError:
165 165 import pickle
166 166 import errno
167 167 import os
168 168 import sys
169 169
170 170 from mercurial import bundle2
171 171 from mercurial import cmdutil
172 172 from mercurial import discovery
173 173 from mercurial import error
174 174 from mercurial import copies
175 175 from mercurial import context
176 176 from mercurial import destutil
177 177 from mercurial import exchange
178 178 from mercurial import extensions
179 179 from mercurial import hg
180 180 from mercurial import node
181 181 from mercurial import repair
182 182 from mercurial import scmutil
183 183 from mercurial import util
184 184 from mercurial import obsolete
185 185 from mercurial import merge as mergemod
186 186 from mercurial.lock import release
187 187 from mercurial.i18n import _
188 188
189 189 cmdtable = {}
190 190 command = cmdutil.command(cmdtable)
191 191
192 192 class _constraints(object):
193 193 # aborts if there are multiple rules for one node
194 194 noduplicates = 'noduplicates'
195 195 # abort if the node does belong to the edited stack
196 196 forceother = 'forceother'
197 197 # abort if the node doesn't belong to the edited stack
198 198 noother = 'noother'
199 199
200 200 @classmethod
201 201 def known(cls):
202 202 return set([v for k, v in cls.__dict__.items() if k[0] != '_'])
203 203
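The ``known()`` classmethod above enumerates every public class attribute as the set of known constraint names. A self-contained sketch of that pattern (with an added string-type filter, so the classmethod itself is excluded; the real code relies on membership checks tolerating extra entries):

```python
# Sketch of the class-attribute enumeration used by _constraints.known():
# every public string-valued class attribute is collected into a set.

class Constraints(object):  # illustrative mirror of histedit's _constraints
    noduplicates = 'noduplicates'
    forceother = 'forceother'
    noother = 'noother'

    @classmethod
    def known(cls):
        # skip dunder/private names; the isinstance check also skips
        # this classmethod itself (an addition over the original)
        return set(v for k, v in cls.__dict__.items()
                   if k[0] != '_' and isinstance(v, str))

print(sorted(Constraints.known()))
```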
204 204 # Note for extension authors: ONLY specify testedwith = 'internal' for
205 205 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
206 206 # be specifying the version(s) of Mercurial they are tested with, or
207 207 # leave the attribute unspecified.
208 208 testedwith = 'internal'
209 209
210 210 # i18n: command names and abbreviations must remain untranslated
211 211 editcomment = _("""# Edit history between %s and %s
212 212 #
213 213 # Commits are listed from least to most recent
214 214 #
215 215 # Commands:
216 216 # p, pick = use commit
217 217 # e, edit = use commit, but stop for amending
218 218 # f, fold = use commit, but combine it with the one above
219 219 # r, roll = like fold, but discard this commit's description
220 220 # d, drop = remove commit from history
221 221 # m, mess = edit commit message without changing commit content
222 222 #
223 223 """)
224 224
225 225 class histeditstate(object):
226 226 def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
227 227 topmost=None, replacements=None, lock=None, wlock=None):
228 228 self.repo = repo
229 229 self.actions = actions
230 230 self.keep = keep
231 231 self.topmost = topmost
232 232 self.parentctxnode = parentctxnode
233 233 self.lock = lock
234 234 self.wlock = wlock
235 235 self.backupfile = None
236 236 if replacements is None:
237 237 self.replacements = []
238 238 else:
239 239 self.replacements = replacements
240 240
241 241 def read(self):
242 242 """Load histedit state from disk and set fields appropriately."""
243 243 try:
244 244 fp = self.repo.vfs('histedit-state', 'r')
245 245 except IOError as err:
246 246 if err.errno != errno.ENOENT:
247 247 raise
248 248 raise error.Abort(_('no histedit in progress'))
249 249
250 250 try:
251 251 data = pickle.load(fp)
252 252 parentctxnode, rules, keep, topmost, replacements = data
253 253 backupfile = None
254 254 except pickle.UnpicklingError:
255 255 data = self._load()
256 256 parentctxnode, rules, keep, topmost, replacements, backupfile = data
257 257
258 258 self.parentctxnode = parentctxnode
259 259 rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
260 260 actions = parserules(rules, self)
261 261 self.actions = actions
262 262 self.keep = keep
263 263 self.topmost = topmost
264 264 self.replacements = replacements
265 265 self.backupfile = backupfile
266 266
267 267 def write(self):
268 268 fp = self.repo.vfs('histedit-state', 'w')
269 269 fp.write('v1\n')
270 270 fp.write('%s\n' % node.hex(self.parentctxnode))
271 271 fp.write('%s\n' % node.hex(self.topmost))
272 272 fp.write('%s\n' % self.keep)
273 273 fp.write('%d\n' % len(self.actions))
274 274 for action in self.actions:
275 275 fp.write('%s\n' % action.tostate())
276 276 fp.write('%d\n' % len(self.replacements))
277 277 for replacement in self.replacements:
278 278 fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
279 279 for r in replacement[1])))
280 280 backupfile = self.backupfile
281 281 if not backupfile:
282 282 backupfile = ''
283 283 fp.write('%s\n' % backupfile)
284 284 fp.close()
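The ``write()`` method above emits a simple line-based "v1" layout. A hedged sketch of that serialization, using hex strings directly instead of Mercurial's binary node values and a plain file-like object instead of the repo vfs:

```python
# Sketch of the line-based 'v1' state layout written above (illustrative;
# hex strings stand in for binary nodes, StringIO for the repo vfs file).
import io

def write_state(fp, parent_hex, topmost_hex, keep, actions, replacements,
                backupfile):
    fp.write('v1\n')
    fp.write('%s\n' % parent_hex)
    fp.write('%s\n' % topmost_hex)
    fp.write('%s\n' % keep)
    fp.write('%d\n' % len(actions))
    for verb, node_hex in actions:
        # each action is stored as two lines: verb, then node hash
        fp.write('%s\n%s\n' % (verb, node_hex))
    fp.write('%d\n' % len(replacements))
    for old_hex, succ_hexes in replacements:
        # original hash immediately followed by concatenated successors
        fp.write('%s%s\n' % (old_hex, ''.join(succ_hexes)))
    fp.write('%s\n' % (backupfile or ''))

buf = io.StringIO()
write_state(buf, 'a' * 40, 'b' * 40, True,
            [('pick', 'c' * 40)], [('c' * 40, ('d' * 40,))], None)
print(buf.getvalue())
```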
285 285
286 286 def _load(self):
287 287 fp = self.repo.vfs('histedit-state', 'r')
288 288 lines = [l[:-1] for l in fp.readlines()]
289 289
290 290 index = 0
291 291 lines[index] # version number
292 292 index += 1
293 293
294 294 parentctxnode = node.bin(lines[index])
295 295 index += 1
296 296
297 297 topmost = node.bin(lines[index])
298 298 index += 1
299 299
300 300 keep = lines[index] == 'True'
301 301 index += 1
302 302
303 303 # Rules
304 304 rules = []
305 305 rulelen = int(lines[index])
306 306 index += 1
307 307 for i in xrange(rulelen):
308 308 ruleaction = lines[index]
309 309 index += 1
310 310 rule = lines[index]
311 311 index += 1
312 312 rules.append((ruleaction, rule))
313 313
314 314 # Replacements
315 315 replacements = []
316 316 replacementlen = int(lines[index])
317 317 index += 1
318 318 for i in xrange(replacementlen):
319 319 replacement = lines[index]
320 320 original = node.bin(replacement[:40])
321 321 succ = [node.bin(replacement[i:i + 40]) for i in
322 322 range(40, len(replacement), 40)]
323 323 replacements.append((original, succ))
324 324 index += 1
325 325
326 326 backupfile = lines[index]
327 327 index += 1
328 328
329 329 fp.close()
330 330
331 331 return parentctxnode, rules, keep, topmost, replacements, backupfile
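The replacement decoding in ``_load()`` above relies on every node hash being exactly 40 hex characters: the first 40 characters of a line are the original node, and the remainder splits into 40-character successor hashes. A small sketch of just that slicing:

```python
# Sketch of decoding one replacement line as _load() does: the first 40
# hex characters are the original node, the rest splits into 40-character
# successor hashes.

def decode_replacement(line):
    original = line[:40]
    succ = [line[i:i + 40] for i in range(40, len(line), 40)]
    return original, succ

line = 'a' * 40 + 'b' * 40 + 'c' * 40
print(decode_replacement(line))
```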
332 332
333 333 def clear(self):
334 334 if self.inprogress():
335 335 self.repo.vfs.unlink('histedit-state')
336 336
337 337 def inprogress(self):
338 338 return self.repo.vfs.exists('histedit-state')
339 339
340 340
341 341 class histeditaction(object):
342 342 def __init__(self, state, node):
343 343 self.state = state
344 344 self.repo = state.repo
345 345 self.node = node
346 346
347 347 @classmethod
348 348 def fromrule(cls, state, rule):
349 349 """Parses the given rule, returning an instance of the histeditaction.
350 350 """
351 351 rulehash = rule.strip().split(' ', 1)[0]
352 352 return cls(state, node.bin(rulehash))
353 353
354 354 def verify(self):
355 355 """ Verifies semantic correctness of the rule"""
356 356 repo = self.repo
357 357 ha = node.hex(self.node)
358 358 try:
359 359 self.node = repo[ha].node()
360 360 except error.RepoError:
361 361 raise error.Abort(_('unknown changeset %s listed')
362 362 % ha[:12])
363 363
364 364 def torule(self):
365 365 """build a histedit rule line for an action
366 366
367 367 by default lines are in the form:
368 368 <hash> <rev> <summary>
369 369 """
370 370 ctx = self.repo[self.node]
371 371 summary = ''
372 372 if ctx.description():
373 373 summary = ctx.description().splitlines()[0]
374 374 line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
375 375 # trim to 75 columns by default so it's not stupidly wide in my editor
376 376 # (the 5 more are left for verb)
377 377 maxlen = self.repo.ui.configint('histedit', 'linelen', default=80)
378 378 maxlen = max(maxlen, 22) # avoid truncating hash
379 379 return util.ellipsis(line, maxlen)
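The ``torule()`` method above builds the rule line, clamps the configured maximum to at least 22 columns so the hash is never truncated, and ellipsizes. A sketch of that logic, approximating ``util.ellipsis`` with plain slicing (the real helper is more careful about multibyte text):

```python
# Sketch of torule()'s truncation: build '<verb> <hash> <rev> <summary>',
# clamp the configured width to >= 22 so the hash survives, then ellipsize.
# util.ellipsis is approximated here with simple slicing.

def to_rule(verb, short_hash, rev, summary, configured_len=80):
    line = '%s %s %d %s' % (verb, short_hash, rev, summary)
    maxlen = max(configured_len, 22)  # avoid truncating the hash
    if len(line) <= maxlen:
        return line
    return line[:maxlen - 3] + '...'

print(to_rule('pick', 'c561b4e977df', 1, 'Add beta'))
print(to_rule('pick', 'c561b4e977df', 1, 'x' * 100, configured_len=30))
```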
380 380
381 381 def tostate(self):
382 382 """Print an action in format used by histedit state files
383 383 (the first line is a verb, the remainder is the second)
384 384 """
385 385 return "%s\n%s" % (self.verb, node.hex(self.node))
386 386
387 387 def constraints(self):
388 388 """Return a set of constrains that this action should be verified for
389 389 """
390 390 return set([_constraints.noduplicates, _constraints.noother])
391 391
392 392 def nodetoverify(self):
393 393 """Returns a node associated with the action that will be used for
394 394 verification purposes.
395 395
396 396 If the action doesn't correspond to a node, it should return None
397 397 """
398 398 return self.node
399 399
400 400 def run(self):
401 401 """Runs the action. The default behavior is simply apply the action's
402 402 rulectx onto the current parentctx."""
403 403 self.applychange()
404 404 self.continuedirty()
405 405 return self.continueclean()
406 406
407 407 def applychange(self):
408 408 """Applies the changes from this action's rulectx onto the current
409 409 parentctx, but does not commit them."""
410 410 repo = self.repo
411 411 rulectx = repo[self.node]
412 412 hg.update(repo, self.state.parentctxnode)
413 413 stats = applychanges(repo.ui, repo, rulectx, {})
414 414 if stats and stats[3] > 0:
415 415 raise error.InterventionRequired(_('Fix up the change and run '
416 416 'hg histedit --continue'))
417 417
418 418 def continuedirty(self):
419 419 """Continues the action when changes have been applied to the working
420 420 copy. The default behavior is to commit the dirty changes."""
421 421 repo = self.repo
422 422 rulectx = repo[self.node]
423 423
424 424 editor = self.commiteditor()
425 425 commit = commitfuncfor(repo, rulectx)
426 426
427 427 commit(text=rulectx.description(), user=rulectx.user(),
428 428 date=rulectx.date(), extra=rulectx.extra(), editor=editor)
429 429
430 430 def commiteditor(self):
431 431 """The editor to be used to edit the commit message."""
432 432 return False
433 433
434 434 def continueclean(self):
435 435 """Continues the action when the working copy is clean. The default
436 436 behavior is to accept the current commit as the new version of the
437 437 rulectx."""
438 438 ctx = self.repo['.']
439 439 if ctx.node() == self.state.parentctxnode:
440 440 self.repo.ui.warn(_('%s: empty changeset\n') %
441 441 node.short(self.node))
442 442 return ctx, [(self.node, tuple())]
443 443 if ctx.node() == self.node:
444 444 # Nothing changed
445 445 return ctx, []
446 446 return ctx, [(self.node, (ctx.node(),))]
447 447
448 448 def commitfuncfor(repo, src):
449 449 """Build a commit function for the replacement of <src>
450 450
451 451 This function ensures we apply the same treatment to all changesets.
452 452
453 453 - Add a 'histedit_source' entry in extra.
454 454
455 455 Note that fold has its own separate logic because its handling is a bit
456 456 different and not easily factored out of the fold method.
457 457 """
458 458 phasemin = src.phase()
459 459 def commitfunc(**kwargs):
460 460 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
461 461 try:
462 462 repo.ui.setconfig('phases', 'new-commit', phasemin,
463 463 'histedit')
464 464 extra = kwargs.get('extra', {}).copy()
465 465 extra['histedit_source'] = src.hex()
466 466 kwargs['extra'] = extra
467 467 return repo.commit(**kwargs)
468 468 finally:
469 469 repo.ui.restoreconfig(phasebackup)
470 470 return commitfunc
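``commitfuncfor()`` above overrides ``phases.new-commit`` around each commit and always restores it in a ``finally`` block, even if the commit raises. A sketch of that backup/override/restore closure pattern, with a plain dict standing in for Mercurial's ``ui`` config and a hypothetical ``'deadbeef'`` source hash:

```python
# Sketch of the backup/override/restore pattern in commitfuncfor(): the
# config override is always undone, even if the wrapped call raises.
# A plain dict stands in for Mercurial's ui; 'deadbeef' is a hypothetical
# histedit_source value.

def make_commitfunc(config, phasemin, commit):
    def commitfunc(**kwargs):
        backup = config.get('phases.new-commit')    # backupconfig analogue
        try:
            config['phases.new-commit'] = phasemin  # setconfig analogue
            extra = dict(kwargs.get('extra', {}))
            extra['histedit_source'] = 'deadbeef'   # hypothetical hash
            kwargs['extra'] = extra
            return commit(**kwargs)
        finally:
            config['phases.new-commit'] = backup    # restoreconfig analogue
    return commitfunc

cfg = {'phases.new-commit': 'draft'}
fn = make_commitfunc(cfg, 'secret', lambda **kw: kw['extra'])
result = fn(text='msg')
print(result, cfg['phases.new-commit'])
```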
471 471
472 472 def applychanges(ui, repo, ctx, opts):
473 473 """Merge changeset from ctx (only) in the current working directory"""
474 474 wcpar = repo.dirstate.parents()[0]
475 475 if ctx.p1().node() == wcpar:
476 476 # the edit is "in place"; we do not need to perform any merge,
477 477 # just apply the changes onto the parent for editing
478 478 cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
479 479 stats = None
480 480 else:
481 481 try:
482 482 # ui.forcemerge is an internal variable, do not document
483 483 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
484 484 'histedit')
485 485 stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
486 486 finally:
487 487 repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
488 488 return stats
489 489
490 490 def collapse(repo, first, last, commitopts, skipprompt=False):
491 491 """collapse the set of revisions from first to last as new one.
492 492
493 493 Expected commit options are:
494 494 - message
495 495 - date
496 496 - username
497 497 Commit message is edited in all cases.
498 498
499 499 This function works in memory."""
500 500 ctxs = list(repo.set('%d::%d', first, last))
501 501 if not ctxs:
502 502 return None
503 503 for c in ctxs:
504 504 if not c.mutable():
505 505 raise error.Abort(
506 506 _("cannot fold into public change %s") % node.short(c.node()))
507 507 base = first.parents()[0]
508 508
509 509 # commit a new version of the old changeset, including the update
510 510 # collect all files which might be affected
511 511 files = set()
512 512 for ctx in ctxs:
513 513 files.update(ctx.files())
514 514
515 515 # Recompute copies (avoid recording a -> b -> a)
516 516 copied = copies.pathcopies(base, last)
517 517
518 518 # prune files which were reverted by the updates
519 519 def samefile(f):
520 520 if f in last.manifest():
521 521 a = last.filectx(f)
522 522 if f in base.manifest():
523 523 b = base.filectx(f)
524 524 return (a.data() == b.data()
525 525 and a.flags() == b.flags())
526 526 else:
527 527 return False
528 528 else:
529 529 return f not in base.manifest()
530 530 files = [f for f in files if not samefile(f)]
531 531 # commit version of these files as defined by head
532 532 headmf = last.manifest()
533 533 def filectxfn(repo, ctx, path):
534 534 if path in headmf:
535 535 fctx = last[path]
536 536 flags = fctx.flags()
537 537 mctx = context.memfilectx(repo,
538 538 fctx.path(), fctx.data(),
539 539 islink='l' in flags,
540 540 isexec='x' in flags,
541 541 copied=copied.get(path))
542 542 return mctx
543 543 return None
544 544
545 545 if commitopts.get('message'):
546 546 message = commitopts['message']
547 547 else:
548 548 message = first.description()
549 549 user = commitopts.get('user')
550 550 date = commitopts.get('date')
551 551 extra = commitopts.get('extra')
552 552
553 553 parents = (first.p1().node(), first.p2().node())
554 554 editor = None
555 555 if not skipprompt:
556 556 editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
557 557 new = context.memctx(repo,
558 558 parents=parents,
559 559 text=message,
560 560 files=files,
561 561 filectxfn=filectxfn,
562 562 user=user,
563 563 date=date,
564 564 extra=extra,
565 565 editor=editor)
566 566 return repo.commitctx(new)
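The ``samefile()`` helper inside ``collapse()`` above prunes files whose final content matches the base revision (or that exist in neither), so reverted changes are not recorded in the folded commit. A sketch of that pruning with manifests modeled as plain path-to-content dicts (the real code also compares file flags):

```python
# Sketch of collapse()'s samefile() pruning: drop a file when its final
# content matches the base revision, or when it exists in neither.
# Manifests are modeled as simple path -> content dicts; the real code
# also compares flags.

def prune_unchanged(files, base_manifest, last_manifest):
    def samefile(f):
        if f in last_manifest:
            # present at the end: unchanged only if base has identical data
            return last_manifest[f] == base_manifest.get(f, object())
        # absent at the end: "unchanged" only if it never existed in base
        return f not in base_manifest
    return [f for f in files if not samefile(f)]

base = {'a': 'one', 'b': 'two'}
last = {'a': 'one', 'b': 'TWO', 'c': 'new'}
# 'a' reverted, 'b' modified, 'c' added, 'd' never existed
print(prune_unchanged(['a', 'b', 'c', 'd'], base, last))
```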
567 567
568 568 def _isdirtywc(repo):
569 569 return repo[None].dirty(missing=True)
570 570
571 571 def abortdirty():
572 572 raise error.Abort(_('working copy has pending changes'),
573 573 hint=_('amend, commit, or revert them and run histedit '
574 574 '--continue, or abort with histedit --abort'))
575 575
576 576
577 577 actiontable = {}
578 578 actionlist = []
579 579
580 580 def addhisteditaction(verbs):
581 581 def wrap(cls):
582 582 cls.verb = verbs[0]
583 583 for verb in verbs:
584 584 actiontable[verb] = cls
585 585 actionlist.append(cls)
586 586 return cls
587 587 return wrap
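``addhisteditaction`` above is a parameterized class decorator: it records each action class under all of its verbs in ``actiontable``, appends it to ``actionlist``, and returns the class unchanged. A self-contained sketch of the same registry pattern:

```python
# Sketch of the addhisteditaction registration pattern: a parameterized
# class decorator that records each action class under all of its verbs.

actiontable = {}
actionlist = []

def register(verbs):
    def wrap(cls):
        cls.verb = verbs[0]          # primary verb becomes the class's verb
        for verb in verbs:
            actiontable[verb] = cls  # every alias maps to the same class
        actionlist.append(cls)
        return cls                   # class itself is returned unchanged
    return wrap

@register(['pick', 'p'])
class Pick(object):
    pass

print(actiontable['p'] is actiontable['pick'], Pick.verb)
```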
588 588
589 589
590 590 @addhisteditaction(['pick', 'p'])
591 591 class pick(histeditaction):
592 592 def run(self):
593 593 rulectx = self.repo[self.node]
594 594 if rulectx.parents()[0].node() == self.state.parentctxnode:
595 595 self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
596 596 return rulectx, []
597 597
598 598 return super(pick, self).run()
599 599
600 600 @addhisteditaction(['edit', 'e'])
601 601 class edit(histeditaction):
602 602 def run(self):
603 603 repo = self.repo
604 604 rulectx = repo[self.node]
605 605 hg.update(repo, self.state.parentctxnode)
606 606 applychanges(repo.ui, repo, rulectx, {})
607 607 raise error.InterventionRequired(
608 608 _('Make changes as needed; you may commit or record as needed '
609 609 'now.\nWhen you are finished, run hg histedit --continue to '
610 610 'resume.'))
611 611
612 612 def commiteditor(self):
613 613 return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')
614 614
615 615 @addhisteditaction(['fold', 'f'])
616 616 class fold(histeditaction):
617 617 def continuedirty(self):
618 618 repo = self.repo
619 619 rulectx = repo[self.node]
620 620
621 621 commit = commitfuncfor(repo, rulectx)
622 622 commit(text='fold-temp-revision %s' % node.short(self.node),
623 623 user=rulectx.user(), date=rulectx.date(),
624 624 extra=rulectx.extra())
625 625
626 626 def continueclean(self):
627 627 repo = self.repo
628 628 ctx = repo['.']
629 629 rulectx = repo[self.node]
630 630 parentctxnode = self.state.parentctxnode
631 631 if ctx.node() == parentctxnode:
632 632 repo.ui.warn(_('%s: empty changeset\n') %
633 633 node.short(self.node))
634 634 return ctx, [(self.node, (parentctxnode,))]
635 635
636 636 parentctx = repo[parentctxnode]
637 637 newcommits = set(c.node() for c in repo.set('(%d::. - %d)', parentctx,
638 638 parentctx))
639 639 if not newcommits:
640 640 repo.ui.warn(_('%s: cannot fold - working copy is not a '
641 641 'descendant of previous commit %s\n') %
642 642 (node.short(self.node), node.short(parentctxnode)))
643 643 return ctx, [(self.node, (ctx.node(),))]
644 644
645 645 middlecommits = newcommits.copy()
646 646 middlecommits.discard(ctx.node())
647 647
648 648 return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
649 649 middlecommits)
650 650
651 651 def skipprompt(self):
652 652 """Returns true if the rule should skip the message editor.
653 653
654 654 For example, 'fold' wants to show an editor, but 'rollup'
655 655 doesn't want to.
656 656 """
657 657 return False
658 658
659 659 def mergedescs(self):
660 660 """Returns true if the rule should merge messages of multiple changes.
661 661
662 662 This exists mainly so that 'rollup' rules can be a subclass of
663 663 'fold'.
664 664 """
665 665 return True
666 666
667 667 def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
668 668 parent = ctx.parents()[0].node()
669 669 hg.update(repo, parent)
670 670 ### prepare new commit data
671 671 commitopts = {}
672 672 commitopts['user'] = ctx.user()
673 673 # commit message
674 674 if not self.mergedescs():
675 675 newmessage = ctx.description()
676 676 else:
677 677 newmessage = '\n***\n'.join(
678 678 [ctx.description()] +
679 679 [repo[r].description() for r in internalchanges] +
680 680 [oldctx.description()]) + '\n'
681 681 commitopts['message'] = newmessage
682 682 # date
683 683 commitopts['date'] = max(ctx.date(), oldctx.date())
684 684 extra = ctx.extra().copy()
685 685 # histedit_source
686 686 # note: ctx is likely a temporary commit but that's the best we can do
687 687 # here. This is sufficient to solve issue3681 anyway.
688 688 extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
689 689 commitopts['extra'] = extra
690 690 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
691 691 try:
692 692 phasemin = max(ctx.phase(), oldctx.phase())
693 693 repo.ui.setconfig('phases', 'new-commit', phasemin, 'histedit')
694 694 n = collapse(repo, ctx, repo[newnode], commitopts,
695 695 skipprompt=self.skipprompt())
696 696 finally:
697 697 repo.ui.restoreconfig(phasebackup)
698 698 if n is None:
699 699 return ctx, []
700 700 hg.update(repo, n)
701 701 replacements = [(oldctx.node(), (newnode,)),
702 702 (ctx.node(), (n,)),
703 703 (newnode, (n,)),
704 704 ]
705 705 for ich in internalchanges:
706 706 replacements.append((ich, (n,)))
707 707 return repo[n], replacements
708 708
709 709 class base(histeditaction):
710 710 def constraints(self):
711 711 return set([_constraints.forceother])
712 712
713 713 def run(self):
714 714 if self.repo['.'].node() != self.node:
715 mergemod.update(self.repo, self.node, False, True, False)
716 # branchmerge, force, partial)
715 mergemod.update(self.repo, self.node, False, True)
716 # branchmerge, force)
717 717 return self.continueclean()
718 718
719 719 def continuedirty(self):
720 720 abortdirty()
721 721
722 722 def continueclean(self):
723 723 basectx = self.repo['.']
724 724 return basectx, []
725 725
726 726 @addhisteditaction(['_multifold'])
727 727 class _multifold(fold):
728 728 """fold subclass used for when multiple folds happen in a row
729 729
730 730 We only want to fire the editor for the folded message once when
731 731 (say) four changes are folded down into a single change. This is
732 732 similar to rollup, but we should preserve both messages so that
733 733 when the last fold operation runs we can show the user all the
734 734 commit messages in their editor.
735 735 """
736 736 def skipprompt(self):
737 737 return True
738 738
739 739 @addhisteditaction(["roll", "r"])
740 740 class rollup(fold):
741 741 def mergedescs(self):
742 742 return False
743 743
744 744 def skipprompt(self):
745 745 return True
746 746
747 747 @addhisteditaction(["drop", "d"])
748 748 class drop(histeditaction):
749 749 def run(self):
750 750 parentctx = self.repo[self.state.parentctxnode]
751 751 return parentctx, [(self.node, tuple())]
752 752
753 753 @addhisteditaction(["mess", "m"])
754 754 class message(histeditaction):
755 755 def commiteditor(self):
756 756 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
757 757
758 758 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
759 759 """utility function to find the first outgoing changeset
760 760
761 761 Used by initialization code"""
762 762 if opts is None:
763 763 opts = {}
764 764 dest = ui.expandpath(remote or 'default-push', remote or 'default')
765 765 dest, revs = hg.parseurl(dest, None)[:2]
766 766 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
767 767
768 768 revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
769 769 other = hg.peer(repo, opts, dest)
770 770
771 771 if revs:
772 772 revs = [repo.lookup(rev) for rev in revs]
773 773
774 774 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
775 775 if not outgoing.missing:
776 776 raise error.Abort(_('no outgoing ancestors'))
777 777 roots = list(repo.revs("roots(%ln)", outgoing.missing))
778 778 if 1 < len(roots):
779 779 msg = _('there are ambiguous outgoing revisions')
780 780 hint = _('see "hg help histedit" for more detail')
781 781 raise error.Abort(msg, hint=hint)
782 782 return repo.lookup(roots[0])
783 783
784 784
785 785 @command('histedit',
786 786 [('', 'commands', '',
787 787 _('read history edits from the specified file'), _('FILE')),
788 788 ('c', 'continue', False, _('continue an edit already in progress')),
789 789 ('', 'edit-plan', False, _('edit remaining actions list')),
790 790 ('k', 'keep', False,
791 791 _("don't strip old nodes after edit is complete")),
792 792 ('', 'abort', False, _('abort an edit in progress')),
793 793 ('o', 'outgoing', False, _('changesets not found in destination')),
794 794 ('f', 'force', False,
795 795 _('force outgoing even for unrelated repositories')),
796 796 ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
797 797 _("[ANCESTOR] | --outgoing [URL]"))
798 798 def histedit(ui, repo, *freeargs, **opts):
799 799 """interactively edit changeset history
800 800
801 801 This command edits changesets between an ANCESTOR and the parent of
802 802 the working directory.
803 803
804 804 The value from the "histedit.defaultrev" config option is used as a
805 805 revset to select the base revision when ANCESTOR is not specified.
806 806 The first revision returned by the revset is used. By default, this
807 807 selects the editable history that is unique to the ancestry of the
808 808 working directory.
809 809
810 810 With --outgoing, this edits changesets not found in the
811 811 destination repository. If URL of the destination is omitted, the
812 812 'default-push' (or 'default') path will be used.
813 813
814 814 For safety, this command is also aborted if there are ambiguous
815 815 outgoing revisions which may confuse users: for example, if there
816 816 are multiple branches containing outgoing revisions.
817 817
818 818 Use "min(outgoing() and ::.)" or similar revset specification
819 819 instead of --outgoing to specify edit target revision exactly in
820 820 such ambiguous situation. See :hg:`help revsets` for detail about
821 821 selecting revisions.
822 822
823 823 .. container:: verbose
824 824
825 825 Examples:
826 826
827 827 - A number of changes have been made.
828 828 Revision 3 is no longer needed.
829 829
830 830 Start history editing from revision 3::
831 831
832 832 hg histedit -r 3
833 833
834 834 An editor opens, containing the list of revisions,
835 835 with specific actions specified::
836 836
837 837 pick 5339bf82f0ca 3 Zworgle the foobar
838 838 pick 8ef592ce7cc4 4 Bedazzle the zerlog
839 839 pick 0a9639fcda9d 5 Morgify the cromulancy
840 840
841 841 Additional information about the possible actions
842 842 to take appears below the list of revisions.
843 843
844 844 To remove revision 3 from the history,
845 845 its action (at the beginning of the relevant line)
846 846 is changed to 'drop'::
847 847
848 848 drop 5339bf82f0ca 3 Zworgle the foobar
849 849 pick 8ef592ce7cc4 4 Bedazzle the zerlog
850 850 pick 0a9639fcda9d 5 Morgify the cromulancy
851 851
852 852 - A number of changes have been made.
853 853 Revisions 2 and 4 need to be swapped.
854 854
855 855 Start history editing from revision 2::
856 856
857 857 hg histedit -r 2
858 858
859 859 An editor opens, containing the list of revisions,
860 860 with specific actions specified::
861 861
862 862 pick 252a1af424ad 2 Blorb a morgwazzle
863 863 pick 5339bf82f0ca 3 Zworgle the foobar
864 864 pick 8ef592ce7cc4 4 Bedazzle the zerlog
865 865
866 866 To swap revisions 2 and 4, their lines are swapped
867 867 in the editor::
868 868
869 869 pick 8ef592ce7cc4 4 Bedazzle the zerlog
870 870 pick 5339bf82f0ca 3 Zworgle the foobar
871 871 pick 252a1af424ad 2 Blorb a morgwazzle
872 872
873 873 Returns 0 on success, 1 if user intervention is required (not only
874 874 for the intentional "edit" command, but also when resolving unexpected
875 875 conflicts).
876 876 """
877 877 state = histeditstate(repo)
878 878 try:
879 879 state.wlock = repo.wlock()
880 880 state.lock = repo.lock()
881 881 _histedit(ui, repo, state, *freeargs, **opts)
882 882 except error.Abort:
883 883 if repo.vfs.exists('histedit-last-edit.txt'):
884 884 ui.warn(_('warning: histedit rules saved '
885 885 'to: .hg/histedit-last-edit.txt\n'))
886 886 raise
887 887 finally:
888 888 release(state.lock, state.wlock)
889 889
890 890 def _histedit(ui, repo, state, *freeargs, **opts):
891 891 # TODO only abort if we try to histedit mq patches, not just
892 892 # blanket if mq patches are applied somewhere
893 893 mq = getattr(repo, 'mq', None)
894 894 if mq and mq.applied:
895 895 raise error.Abort(_('source has mq patches applied'))
896 896
897 897 # basic argument incompatibility processing
898 898 outg = opts.get('outgoing')
899 899 cont = opts.get('continue')
900 900 editplan = opts.get('edit_plan')
901 901 abort = opts.get('abort')
902 902 force = opts.get('force')
903 903 rules = opts.get('commands', '')
904 904 revs = opts.get('rev', [])
905 905 goal = 'new' # this invocation's goal: new, continue, or abort
906 906 if force and not outg:
907 907 raise error.Abort(_('--force only allowed with --outgoing'))
908 908 if cont:
909 909 if any((outg, abort, revs, freeargs, rules, editplan)):
910 910 raise error.Abort(_('no arguments allowed with --continue'))
911 911 goal = 'continue'
912 912 elif abort:
913 913 if any((outg, revs, freeargs, rules, editplan)):
914 914 raise error.Abort(_('no arguments allowed with --abort'))
915 915 goal = 'abort'
916 916 elif editplan:
917 917 if any((outg, revs, freeargs)):
918 918 raise error.Abort(_('only --commands argument allowed with '
919 919 '--edit-plan'))
920 920 goal = 'edit-plan'
921 921 else:
922 922 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
923 923 raise error.Abort(_('history edit already in progress, try '
924 924 '--continue or --abort'))
925 925 if outg:
926 926 if revs:
927 927 raise error.Abort(_('no revisions allowed with --outgoing'))
928 928 if len(freeargs) > 1:
929 929 raise error.Abort(
930 930 _('only one repo argument allowed with --outgoing'))
931 931 else:
932 932 revs.extend(freeargs)
933 933 if len(revs) == 0:
934 934 defaultrev = destutil.desthistedit(ui, repo)
935 935 if defaultrev is not None:
936 936 revs.append(defaultrev)
937 937
938 938 if len(revs) != 1:
939 939 raise error.Abort(
940 940 _('histedit requires exactly one ancestor revision'))
941 941
942 942
943 943 replacements = []
944 944 state.keep = opts.get('keep', False)
945 945 supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
946 946
947 947 # rebuild state
948 948 if goal == 'continue':
949 949 state.read()
950 950 state = bootstrapcontinue(ui, state, opts)
951 951 elif goal == 'edit-plan':
952 952 state.read()
953 953 if not rules:
954 954 comment = editcomment % (node.short(state.parentctxnode),
955 955 node.short(state.topmost))
956 956 rules = ruleeditor(repo, ui, state.actions, comment)
957 957 else:
958 958 if rules == '-':
959 959 f = sys.stdin
960 960 else:
961 961 f = open(rules)
962 962 rules = f.read()
963 963 f.close()
964 964 actions = parserules(rules, state)
965 965 ctxs = [repo[act.nodetoverify()]
966 966 for act in state.actions if act.nodetoverify()]
967 967 verifyactions(actions, state, ctxs)
968 968 state.actions = actions
969 969 state.write()
970 970 return
971 971 elif goal == 'abort':
972 972 try:
973 973 state.read()
974 974 tmpnodes, leafs = newnodestoabort(state)
975 975 ui.debug('restore wc to old parent %s\n'
976 976 % node.short(state.topmost))
977 977
978 978 # Recover our old commits if necessary
979 979 if state.topmost not in repo and state.backupfile:
980 980 backupfile = repo.join(state.backupfile)
981 981 f = hg.openpath(ui, backupfile)
982 982 gen = exchange.readbundle(ui, f, backupfile)
983 983 tr = repo.transaction('histedit.abort')
984 984 try:
985 985 if isinstance(gen, bundle2.unbundle20):
986 986 bundle2.applybundle(repo, gen, tr,
987 987 source='histedit',
988 988 url='bundle:' + backupfile)
989 989 else:
990 990 gen.apply(repo, 'histedit', 'bundle:' + backupfile)
991 991 tr.close()
992 992 finally:
993 993 tr.release()
994 994
995 995 os.remove(backupfile)
996 996
997 997 # check whether we should update away
998 998 if repo.unfiltered().revs('parents() and (%n or %ln::)',
999 999 state.parentctxnode, leafs | tmpnodes):
1000 1000 hg.clean(repo, state.topmost)
1001 1001 cleanupnode(ui, repo, 'created', tmpnodes)
1002 1002 cleanupnode(ui, repo, 'temp', leafs)
1003 1003 except Exception:
1004 1004 if state.inprogress():
1005 1005 ui.warn(_('warning: encountered an exception during histedit '
1006 1006 '--abort; the repository may not have been completely '
1007 1007 'cleaned up\n'))
1008 1008 raise
1009 1009 finally:
1010 1010 state.clear()
1011 1011 return
1012 1012 else:
1013 1013 cmdutil.checkunfinished(repo)
1014 1014 cmdutil.bailifchanged(repo)
1015 1015
1016 1016 if repo.vfs.exists('histedit-last-edit.txt'):
1017 1017 repo.vfs.unlink('histedit-last-edit.txt')
1018 1018 topmost, empty = repo.dirstate.parents()
1019 1019 if outg:
1020 1020 if freeargs:
1021 1021 remote = freeargs[0]
1022 1022 else:
1023 1023 remote = None
1024 1024 root = findoutgoing(ui, repo, remote, force, opts)
1025 1025 else:
1026 1026 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1027 1027 if len(rr) != 1:
1028 1028 raise error.Abort(_('The specified revisions must have '
1029 1029 'exactly one common root'))
1030 1030 root = rr[0].node()
1031 1031
1032 1032 revs = between(repo, root, topmost, state.keep)
1033 1033 if not revs:
1034 1034 raise error.Abort(_('%s is not an ancestor of working directory') %
1035 1035 node.short(root))
1036 1036
1037 1037 ctxs = [repo[r] for r in revs]
1038 1038 if not rules:
1039 1039 comment = editcomment % (node.short(root), node.short(topmost))
1040 1040 actions = [pick(state, r) for r in revs]
1041 1041 rules = ruleeditor(repo, ui, actions, comment)
1042 1042 else:
1043 1043 if rules == '-':
1044 1044 f = sys.stdin
1045 1045 else:
1046 1046 f = open(rules)
1047 1047 rules = f.read()
1048 1048 f.close()
1049 1049 actions = parserules(rules, state)
1050 1050 verifyactions(actions, state, ctxs)
1051 1051
1052 1052 parentctxnode = repo[root].parents()[0].node()
1053 1053
1054 1054 state.parentctxnode = parentctxnode
1055 1055 state.actions = actions
1056 1056 state.topmost = topmost
1057 1057 state.replacements = replacements
1058 1058
1059 1059 # Create a backup so we can always abort completely.
1060 1060 backupfile = None
1061 1061 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1062 1062 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1063 1063 'histedit')
1064 1064 state.backupfile = backupfile
1065 1065
1066 1066 # preprocess rules so that we can hide inner folds from the user
1067 1067 # and only show one editor
1068 1068 actions = state.actions[:]
1069 1069 for idx, (action, nextact) in enumerate(
1070 1070 zip(actions, actions[1:] + [None])):
1071 1071 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1072 1072 state.actions[idx].__class__ = _multifold
1073 1073
1074 1074 while state.actions:
1075 1075 state.write()
1076 1076 actobj = state.actions.pop(0)
1077 1077 ui.debug('histedit: processing %s %s\n' % (actobj.verb,
1078 1078 actobj.torule()))
1079 1079 parentctx, replacement_ = actobj.run()
1080 1080 state.parentctxnode = parentctx.node()
1081 1081 state.replacements.extend(replacement_)
1082 1082 state.write()
1083 1083
1084 1084 hg.update(repo, state.parentctxnode)
1085 1085
1086 1086 mapping, tmpnodes, created, ntm = processreplacement(state)
1087 1087 if mapping:
1088 1088 for prec, succs in mapping.iteritems():
1089 1089 if not succs:
1090 1090 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1091 1091 else:
1092 1092 ui.debug('histedit: %s is replaced by %s\n' % (
1093 1093 node.short(prec), node.short(succs[0])))
1094 1094 if len(succs) > 1:
1095 1095 m = 'histedit: %s'
1096 1096 for n in succs[1:]:
1097 1097 ui.debug(m % node.short(n))
1098 1098
1099 1099 if supportsmarkers:
1100 1100 # Only create markers if the temp nodes weren't already removed.
1101 1101 obsolete.createmarkers(repo, ((repo[t],()) for t in sorted(tmpnodes)
1102 1102 if t in repo))
1103 1103 else:
1104 1104 cleanupnode(ui, repo, 'temp', tmpnodes)
1105 1105
1106 1106 if not state.keep:
1107 1107 if mapping:
1108 1108 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1109 1109 # TODO update mq state
1110 1110 if supportsmarkers:
1111 1111 markers = []
1112 1112 # sort by revision number because it sounds "right"
1113 1113 for prec in sorted(mapping, key=repo.changelog.rev):
1114 1114 succs = mapping[prec]
1115 1115 markers.append((repo[prec],
1116 1116 tuple(repo[s] for s in succs)))
1117 1117 if markers:
1118 1118 obsolete.createmarkers(repo, markers)
1119 1119 else:
1120 1120 cleanupnode(ui, repo, 'replaced', mapping)
1121 1121
1122 1122 state.clear()
1123 1123 if os.path.exists(repo.sjoin('undo')):
1124 1124 os.unlink(repo.sjoin('undo'))
1125 1125
1126 1126 def bootstrapcontinue(ui, state, opts):
1127 1127 repo = state.repo
1128 1128 if state.actions:
1129 1129 actobj = state.actions.pop(0)
1130 1130
1131 1131 if _isdirtywc(repo):
1132 1132 actobj.continuedirty()
1133 1133 if _isdirtywc(repo):
1134 1134 abortdirty()
1135 1135
1136 1136 parentctx, replacements = actobj.continueclean()
1137 1137
1138 1138 state.parentctxnode = parentctx.node()
1139 1139 state.replacements.extend(replacements)
1140 1140
1141 1141 return state
1142 1142
1143 1143 def between(repo, old, new, keep):
1144 1144 """select and validate the set of revision to edit
1145 1145
1146 1146 When keep is false, the specified set can't have children."""
1147 1147 ctxs = list(repo.set('%n::%n', old, new))
1148 1148 if ctxs and not keep:
1149 1149 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1150 1150 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1151 1151 raise error.Abort(_('cannot edit history that would orphan nodes'))
1152 1152 if repo.revs('(%ld) and merge()', ctxs):
1153 1153 raise error.Abort(_('cannot edit history that contains merges'))
1154 1154 root = ctxs[0] # list is already sorted by repo.set
1155 1155 if not root.mutable():
1156 1156 raise error.Abort(_('cannot edit public changeset: %s') % root,
1157 1157 hint=_('see "hg help phases" for details'))
1158 1158 return [c.node() for c in ctxs]
1159 1159
1160 1160 def ruleeditor(repo, ui, actions, editcomment=""):
1161 1161 """open an editor to edit rules
1162 1162
1163 1163 rules are in the format [ [act, ctx], ...] like in state.rules
1164 1164 """
1165 1165 rules = '\n'.join([act.torule() for act in actions])
1166 1166 rules += '\n\n'
1167 1167 rules += editcomment
1168 1168 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'})
1169 1169
1170 1170 # Save edit rules in .hg/histedit-last-edit.txt in case
1171 1171 # the user needs to ask for help after something
1172 1172 # surprising happens.
1173 1173 f = open(repo.join('histedit-last-edit.txt'), 'w')
1174 1174 f.write(rules)
1175 1175 f.close()
1176 1176
1177 1177 return rules
1178 1178
1179 1179 def parserules(rules, state):
1180 1180 """Read the histedit rules string and return list of action objects """
1181 1181 rules = [l for l in (r.strip() for r in rules.splitlines())
1182 1182 if l and not l.startswith('#')]
1183 1183 actions = []
1184 1184 for r in rules:
1185 1185 if ' ' not in r:
1186 1186 raise error.Abort(_('malformed line "%s"') % r)
1187 1187 verb, rest = r.split(' ', 1)
1188 1188
1189 1189 if verb not in actiontable:
1190 1190 raise error.Abort(_('unknown action "%s"') % verb)
1191 1191
1192 1192 action = actiontable[verb].fromrule(state, rest)
1193 1193 actions.append(action)
1194 1194 return actions
1195 1195
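As a standalone illustration of the parsing done by parserules() above, the following sketch mirrors its behavior using plain tuples instead of histedit action objects; `KNOWN_VERBS` is a hypothetical stand-in for the real actiontable:

```python
# Hypothetical stand-in for histedit's actiontable: only the verbs matter here.
KNOWN_VERBS = {'pick', 'edit', 'fold', 'roll', 'drop', 'mess'}

def parse_rule_lines(text):
    """Parse histedit-style rule lines into (verb, rest) tuples.

    Comment lines (starting with '#') and blank lines are ignored,
    mirroring parserules() above.
    """
    rules = [l for l in (r.strip() for r in text.splitlines())
             if l and not l.startswith('#')]
    actions = []
    for r in rules:
        if ' ' not in r:
            raise ValueError('malformed line "%s"' % r)
        verb, rest = r.split(' ', 1)
        if verb not in KNOWN_VERBS:
            raise ValueError('unknown action "%s"' % verb)
        actions.append((verb, rest))
    return actions
```

Feeding it the sample plan from the module docstring yields one (verb, rest) tuple per non-comment line.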
1196 1196 def verifyactions(actions, state, ctxs):
1197 1197 """Verify that there exists exactly one action per given changeset and
1198 1198 other constraints.
1199 1199
1200 1200 Will abort if there are too many or too few rules, a malformed rule,
1201 1201 or a rule on a changeset outside of the user-given range.
1202 1202 """
1203 1203 expected = set(c.hex() for c in ctxs)
1204 1204 seen = set()
1205 1205 for action in actions:
1206 1206 action.verify()
1207 1207 constraints = action.constraints()
1208 1208 for constraint in constraints:
1209 1209 if constraint not in _constraints.known():
1210 1210 raise error.Abort(_('unknown constraint "%s"') % constraint)
1211 1211
1212 1212 nodetoverify = action.nodetoverify()
1213 1213 if nodetoverify is not None:
1214 1214 ha = node.hex(nodetoverify)
1215 1215 if _constraints.noother in constraints and ha not in expected:
1216 1216 raise error.Abort(
1217 1217 _('may not use "%s" with changesets '
1218 1218 'other than the ones listed') % action.verb)
1219 1219 if _constraints.forceother in constraints and ha in expected:
1220 1220 raise error.Abort(
1221 1221 _('may not use "%s" with changesets '
1222 1222 'within the edited list') % action.verb)
1223 1223 if _constraints.noduplicates in constraints and ha in seen:
1224 1224 raise error.Abort(_('duplicated command for changeset %s') %
1225 1225 ha[:12])
1226 1226 seen.add(ha)
1227 1227 missing = sorted(expected - seen) # sort to stabilize output
1228 1228 if missing:
1229 1229 raise error.Abort(_('missing rules for changeset %s') %
1230 1230 missing[0][:12],
1231 1231 hint=_('use "drop %s" to discard the change') % missing[0][:12])
1232 1232
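The duplicate/missing checks of verifyactions() can be sketched in isolation. Here `actions` is a list of (verb, hash) pairs and `expected` a set of changeset hashes; both are hypothetical simplifications of the real action and context objects:

```python
def verify_rule_coverage(actions, expected):
    """Check that each expected changeset gets exactly one rule,
    mirroring the duplicate/missing logic of verifyactions() above."""
    seen = set()
    for verb, ha in actions:
        if ha not in expected:
            raise ValueError('may not use "%s" with changesets '
                             'other than the ones listed' % verb)
        if ha in seen:
            raise ValueError('duplicated command for changeset %s' % ha[:12])
        seen.add(ha)
    missing = sorted(expected - seen)  # sort to stabilize output
    if missing:
        raise ValueError('missing rules for changeset %s' % missing[0][:12])
```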
1233 1233 def newnodestoabort(state):
1234 1234 """process the list of replacements to return
1235 1235
1236 1236 1) the list of final nodes
1237 1237 2) the list of temporary nodes
1238 1238
1239 1239 This is meant to be used on abort, as less data is required in this case.
1240 1240 """
1241 1241 replacements = state.replacements
1242 1242 allsuccs = set()
1243 1243 replaced = set()
1244 1244 for rep in replacements:
1245 1245 allsuccs.update(rep[1])
1246 1246 replaced.add(rep[0])
1247 1247 newnodes = allsuccs - replaced
1248 1248 tmpnodes = allsuccs & replaced
1249 1249 return newnodes, tmpnodes
1250 1250
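The set arithmetic in newnodestoabort() can be shown standalone: a node that appears as a successor but was never itself replaced is a final new node, while one that is both a successor and replaced was only an intermediate. This is a minimal sketch using plain string node names:

```python
def classify_replacements(replacements):
    """Split replacement successors into (new, temporary) node sets,
    using the same set arithmetic as newnodestoabort() above."""
    allsuccs = set()
    replaced = set()
    for prec, succs in replacements:
        allsuccs.update(succs)
        replaced.add(prec)
    # a successor that was itself replaced was only an intermediate node
    return allsuccs - replaced, allsuccs & replaced
```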
1251 1251
1252 1252 def processreplacement(state):
1253 1253 """process the list of replacements to return
1254 1254
1255 1255 1) the final mapping between original and created nodes
1256 1256 2) the list of temporary nodes created by histedit
1257 1257 3) the list of new commits created by histedit"""
1258 1258 replacements = state.replacements
1259 1259 allsuccs = set()
1260 1260 replaced = set()
1261 1261 fullmapping = {}
1262 1262 # initialize basic set
1263 1263 # fullmapping records all operations recorded in replacement
1264 1264 for rep in replacements:
1265 1265 allsuccs.update(rep[1])
1266 1266 replaced.add(rep[0])
1267 1267 fullmapping.setdefault(rep[0], set()).update(rep[1])
1268 1268 new = allsuccs - replaced
1269 1269 tmpnodes = allsuccs & replaced
1270 1270 # Reduce fullmapping into direct relations between original nodes
1271 1271 # and the final nodes created during history editing.
1272 1272 # Dropped changesets are replaced by an empty list.
1273 1273 toproceed = set(fullmapping)
1274 1274 final = {}
1275 1275 while toproceed:
1276 1276 for x in list(toproceed):
1277 1277 succs = fullmapping[x]
1278 1278 for s in list(succs):
1279 1279 if s in toproceed:
1280 1280 # non final node with unknown closure
1281 1281 # We can't process this now
1282 1282 break
1283 1283 elif s in final:
1284 1284 # non final node, replace with closure
1285 1285 succs.remove(s)
1286 1286 succs.update(final[s])
1287 1287 else:
1288 1288 final[x] = succs
1289 1289 toproceed.remove(x)
1290 1290 # remove tmpnodes from final mapping
1291 1291 for n in tmpnodes:
1292 1292 del final[n]
1293 1293 # we expect all changes involved in final to exist in the repo
1294 1294 # turn `final` into list (topologically sorted)
1295 1295 nm = state.repo.changelog.nodemap
1296 1296 for prec, succs in final.items():
1297 1297 final[prec] = sorted(succs, key=nm.get)
1298 1298
1299 1299 # compute the topmost element (necessary for bookmarks)
1300 1300 if new:
1301 1301 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1302 1302 elif not final:
1303 1303 # Nothing was rewritten at all; we won't need `newtopmost`.
1304 1304 # It is the same as `oldtopmost`, and `processreplacement` knows it.
1305 1305 newtopmost = None
1306 1306 else:
1307 1307 # Every changeset was dropped; the new topmost is the parent of the root.
1308 1308 r = state.repo.changelog.rev
1309 1309 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1310 1310
1311 1311 return final, tmpnodes, new, newtopmost
1312 1312
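The closure reduction inside processreplacement() can be sketched on its own: intermediate successors are repeatedly expanded until every original node maps directly to the final nodes that replaced it, then the temporary nodes are dropped. A minimal version using plain node names:

```python
def reduce_mapping(replacements):
    """Reduce a replacement log to a direct old -> final-nodes mapping,
    mirroring the closure reduction in processreplacement() above."""
    fullmapping = {}
    allsuccs = set()
    replaced = set()
    for prec, succs in replacements:
        allsuccs.update(succs)
        replaced.add(prec)
        fullmapping.setdefault(prec, set()).update(succs)
    tmpnodes = allsuccs & replaced  # intermediate nodes to drop at the end
    toproceed = set(fullmapping)
    final = {}
    while toproceed:
        for x in list(toproceed):
            succs = fullmapping[x]
            for s in list(succs):
                if s in toproceed:
                    break  # successor not resolved yet, retry later
                elif s in final:
                    succs.remove(s)  # replace intermediate with its closure
                    succs.update(final[s])
            else:
                final[x] = succs
                toproceed.remove(x)
    for n in tmpnodes:
        del final[n]
    return final
```

Here `a -> t -> b` (where `t` is a temporary node) collapses to `a -> b`, and a dropped changeset keeps its empty successor set.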
1313 1313 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1314 1314 """Move bookmark from old to newly created node"""
1315 1315 if not mapping:
1316 1316 # if nothing got rewritten there is no purpose for this function
1317 1317 return
1318 1318 moves = []
1319 1319 for bk, old in sorted(repo._bookmarks.iteritems()):
1320 1320 if old == oldtopmost:
1321 1321 # special case: ensure the bookmark stays on tip.
1322 1322 #
1323 1323 # This is arguably a feature and we may only want that for the
1324 1324 # active bookmark. But the behavior is kept compatible with the old
1325 1325 # version for now.
1326 1326 moves.append((bk, newtopmost))
1327 1327 continue
1328 1328 base = old
1329 1329 new = mapping.get(base, None)
1330 1330 if new is None:
1331 1331 continue
1332 1332 while not new:
1333 1333 # base was killed, try its parent
1334 1334 base = repo[base].p1().node()
1335 1335 new = mapping.get(base, (base,))
1336 1336 # nothing to move
1337 1337 moves.append((bk, new[-1]))
1338 1338 if moves:
1339 1339 lock = tr = None
1340 1340 try:
1341 1341 lock = repo.lock()
1342 1342 tr = repo.transaction('histedit')
1343 1343 marks = repo._bookmarks
1344 1344 for mark, new in moves:
1345 1345 old = marks[mark]
1346 1346 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1347 1347 % (mark, node.short(old), node.short(new)))
1348 1348 marks[mark] = new
1349 1349 marks.recordchange(tr)
1350 1350 tr.close()
1351 1351 finally:
1352 1352 release(tr, lock)
1353 1353
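The retargeting walk in movebookmarks() can be illustrated standalone. `parents` here is a hypothetical child -> parent map standing in for `repo[base].p1().node()`:

```python
def retarget_bookmark(old, mapping, parents):
    """Find the node a bookmark should move to, mirroring the walk in
    movebookmarks() above: if the bookmarked node was dropped (maps to
    an empty list), climb the parent map until a survivor is found."""
    new = mapping.get(old)
    if new is None:
        return old  # the bookmarked node was not rewritten
    while not new:
        # the bookmarked node was dropped: retry with its parent
        old = parents[old]
        new = mapping.get(old, (old,))
    return new[-1]
```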
1354 1354 def cleanupnode(ui, repo, name, nodes):
1355 1355 """strip a group of nodes from the repository
1356 1356
1357 1357 The set of nodes to strip may contain unknown nodes."""
1358 1358 ui.debug('should strip %s nodes %s\n' %
1359 1359 (name, ', '.join([node.short(n) for n in nodes])))
1360 1360 lock = None
1361 1361 try:
1362 1362 lock = repo.lock()
1363 1363 # do not let filtering get in the way of the cleanse
1364 1364 # we should probably get rid of obsolescence marker created during the
1365 1365 # histedit, but we currently do not have such information.
1366 1366 repo = repo.unfiltered()
1367 1367 # Find all nodes that need to be stripped
1368 1368 # (we use %lr instead of %ln to silently ignore unknown items)
1369 1369 nm = repo.changelog.nodemap
1370 1370 nodes = sorted(n for n in nodes if n in nm)
1371 1371 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1372 1372 for c in roots:
1373 1373 # We should process nodes in reverse order to strip the tipmost first
1374 1374 # (which would reduce bundle overhead), but that triggers a bug in
1375 1375 # the changegroup hook.
1376 1376 repair.strip(ui, repo, c)
1377 1377 finally:
1378 1378 release(lock)
1379 1379
1380 1380 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1381 1381 if isinstance(nodelist, str):
1382 1382 nodelist = [nodelist]
1383 1383 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1384 1384 state = histeditstate(repo)
1385 1385 state.read()
1386 1386 histedit_nodes = set([action.nodetoverify() for action
1387 1387 in state.actions if action.nodetoverify()])
1388 1388 strip_nodes = set([repo[n].node() for n in nodelist])
1389 1389 common_nodes = histedit_nodes & strip_nodes
1390 1390 if common_nodes:
1391 1391 raise error.Abort(_("histedit in progress, can't strip %s")
1392 1392 % ', '.join(node.short(x) for x in common_nodes))
1393 1393 return orig(ui, repo, nodelist, *args, **kwargs)
1394 1394
1395 1395 extensions.wrapfunction(repair, 'strip', stripwrapper)
1396 1396
1397 1397 def summaryhook(ui, repo):
1398 1398 if not os.path.exists(repo.join('histedit-state')):
1399 1399 return
1400 1400 state = histeditstate(repo)
1401 1401 state.read()
1402 1402 if state.actions:
1403 1403 # i18n: column positioning for "hg summary"
1404 1404 ui.write(_('hist: %s (histedit --continue)\n') %
1405 1405 (ui.label(_('%d remaining'), 'histedit.remaining') %
1406 1406 len(state.actions)))
1407 1407
1408 1408 def extsetup(ui):
1409 1409 cmdutil.summaryhooks.add('histedit', summaryhook)
1410 1410 cmdutil.unfinishedstates.append(
1411 1411 ['histedit-state', False, True, _('histedit in progress'),
1412 1412 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1413 1413 if ui.configbool("experimental", "histeditng"):
1414 1414 globals()['base'] = addhisteditaction(['base', 'b'])(base)
@@ -1,1430 +1,1433 b''
1 1 # Copyright 2009-2010 Gregory P. Ward
2 2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 3 # Copyright 2010-2011 Fog Creek Software
4 4 # Copyright 2010-2011 Unity Technologies
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''Overridden Mercurial commands and functions for the largefiles extension'''
10 10
11 11 import os
12 12 import copy
13 13
14 14 from mercurial import hg, util, cmdutil, scmutil, match as match_, \
15 15 archival, pathutil, revset, error
16 16 from mercurial.i18n import _
17 17
18 18 import lfutil
19 19 import lfcommands
20 20 import basestore
21 21
22 22 # -- Utility functions: commonly/repeatedly needed functionality ---------------
23 23
24 24 def composelargefilematcher(match, manifest):
25 25 '''create a matcher that matches only the largefiles in the original
26 26 matcher'''
27 27 m = copy.copy(match)
28 28 lfile = lambda f: lfutil.standin(f) in manifest
29 29 m._files = filter(lfile, m._files)
30 30 m._fileroots = set(m._files)
31 31 m._always = False
32 32 origmatchfn = m.matchfn
33 33 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
34 34 return m
35 35
36 36 def composenormalfilematcher(match, manifest, exclude=None):
37 37 excluded = set()
38 38 if exclude is not None:
39 39 excluded.update(exclude)
40 40
41 41 m = copy.copy(match)
42 42 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
43 43 manifest or f in excluded)
44 44 m._files = filter(notlfile, m._files)
45 45 m._fileroots = set(m._files)
46 46 m._always = False
47 47 origmatchfn = m.matchfn
48 48 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
49 49 return m
50 50
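The matcher composition used by composelargefilematcher() and composenormalfilematcher() boils down to wrapping one match function with an extra predicate. A minimal sketch, with hypothetical predicates instead of the real manifest lookups:

```python
def compose_matcher(matchfn, predicate):
    """Wrap a match function so a file must also satisfy `predicate`,
    as the two compose*matcher helpers above do with largefile standins."""
    return lambda f: predicate(f) and matchfn(f)
```

For example, composing a suffix match with a "not a standin" predicate rejects files under a standin directory while keeping ordinary matches.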
51 51 def installnormalfilesmatchfn(manifest):
52 52 '''installmatchfn with a matchfn that ignores all largefiles'''
53 53 def overridematch(ctx, pats=(), opts=None, globbed=False,
54 54 default='relpath', badfn=None):
55 55 if opts is None:
56 56 opts = {}
57 57 match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
58 58 return composenormalfilematcher(match, manifest)
59 59 oldmatch = installmatchfn(overridematch)
60 60
61 61 def installmatchfn(f):
62 62 '''monkey patch the scmutil module with a custom match function.
63 63 Warning: it is monkey patching the _module_ at runtime! Not thread safe!'''
64 64 oldmatch = scmutil.match
65 65 setattr(f, 'oldmatch', oldmatch)
66 66 scmutil.match = f
67 67 return oldmatch
68 68
69 69 def restorematchfn():
70 70 '''restores scmutil.match to what it was before installmatchfn
71 71 was called. no-op if scmutil.match is its original function.
72 72
73 73 Note that n calls to installmatchfn will require n calls to
74 74 restore the original matchfn.'''
75 75 scmutil.match = getattr(scmutil.match, 'oldmatch')
76 76
77 77 def installmatchandpatsfn(f):
78 78 oldmatchandpats = scmutil.matchandpats
79 79 setattr(f, 'oldmatchandpats', oldmatchandpats)
80 80 scmutil.matchandpats = f
81 81 return oldmatchandpats
82 82
83 83 def restorematchandpatsfn():
84 84 '''restores scmutil.matchandpats to what it was before
85 85 installmatchandpatsfn was called. No-op if scmutil.matchandpats
86 86 is its original function.
87 87
88 88 Note that n calls to installmatchandpatsfn will require n calls
89 89 to restore the original matchfn.'''
90 90 scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
91 91 scmutil.matchandpats)
92 92
93 93 def addlargefiles(ui, repo, isaddremove, matcher, **opts):
94 94 large = opts.get('large')
95 95 lfsize = lfutil.getminsize(
96 96 ui, lfutil.islfilesrepo(repo), opts.get('lfsize'))
97 97
98 98 lfmatcher = None
99 99 if lfutil.islfilesrepo(repo):
100 100 lfpats = ui.configlist(lfutil.longname, 'patterns', default=[])
101 101 if lfpats:
102 102 lfmatcher = match_.match(repo.root, '', list(lfpats))
103 103
104 104 lfnames = []
105 105 m = matcher
106 106
107 107 wctx = repo[None]
108 108 for f in repo.walk(match_.badmatch(m, lambda x, y: None)):
109 109 exact = m.exact(f)
110 110 lfile = lfutil.standin(f) in wctx
111 111 nfile = f in wctx
112 112 exists = lfile or nfile
113 113
114 114 # addremove in core gets fancy with the name, add doesn't
115 115 if isaddremove:
116 116 name = m.uipath(f)
117 117 else:
118 118 name = m.rel(f)
119 119
120 120 # Don't warn the user when they attempt to add a normal tracked file.
121 121 # The normal add code will do that for us.
122 122 if exact and exists:
123 123 if lfile:
124 124 ui.warn(_('%s already a largefile\n') % name)
125 125 continue
126 126
127 127 if (exact or not exists) and not lfutil.isstandin(f):
128 128 # In case the file was removed previously, but not committed
129 129 # (issue3507)
130 130 if not repo.wvfs.exists(f):
131 131 continue
132 132
133 133 abovemin = (lfsize and
134 134 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
135 135 if large or abovemin or (lfmatcher and lfmatcher(f)):
136 136 lfnames.append(f)
137 137 if ui.verbose or not exact:
138 138 ui.status(_('adding %s as a largefile\n') % name)
139 139
140 140 bad = []
141 141
142 142 # Need to lock, otherwise there could be a race condition between
143 143 # when standins are created and added to the repo.
144 144 wlock = repo.wlock()
145 145 try:
146 146 if not opts.get('dry_run'):
147 147 standins = []
148 148 lfdirstate = lfutil.openlfdirstate(ui, repo)
149 149 for f in lfnames:
150 150 standinname = lfutil.standin(f)
151 151 lfutil.writestandin(repo, standinname, hash='',
152 152 executable=lfutil.getexecutable(repo.wjoin(f)))
153 153 standins.append(standinname)
154 154 if lfdirstate[f] == 'r':
155 155 lfdirstate.normallookup(f)
156 156 else:
157 157 lfdirstate.add(f)
158 158 lfdirstate.write()
159 159 bad += [lfutil.splitstandin(f)
160 160 for f in repo[None].add(standins)
161 161 if f in m.files()]
162 162
163 163 added = [f for f in lfnames if f not in bad]
164 164 finally:
165 165 wlock.release()
166 166 return added, bad
167 167
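The decision inside addlargefiles() for whether a file becomes a largefile has three triggers: an explicit --large flag, a size at or above the configured minimum (in MiB), or a configured pattern match. A minimal sketch of just that predicate, with simplified scalar parameters:

```python
def is_largefile(size_bytes, large_flag, minsize_mib, pattern_match):
    """Mirror the large/abovemin/lfmatcher test in addlargefiles() above.

    minsize_mib of 0 disables the size check, like an unset lfsize.
    """
    abovemin = minsize_mib and size_bytes >= minsize_mib * 1024 * 1024
    return bool(large_flag or abovemin or pattern_match)
```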
168 168 def removelargefiles(ui, repo, isaddremove, matcher, **opts):
169 169 after = opts.get('after')
170 170 m = composelargefilematcher(matcher, repo[None].manifest())
171 171 try:
172 172 repo.lfstatus = True
173 173 s = repo.status(match=m, clean=not isaddremove)
174 174 finally:
175 175 repo.lfstatus = False
176 176 manifest = repo[None].manifest()
177 177 modified, added, deleted, clean = [[f for f in list
178 178 if lfutil.standin(f) in manifest]
179 179 for list in (s.modified, s.added,
180 180 s.deleted, s.clean)]
181 181
182 182 def warn(files, msg):
183 183 for f in files:
184 184 ui.warn(msg % m.rel(f))
185 185 return int(len(files) > 0)
186 186
187 187 result = 0
188 188
189 189 if after:
190 190 remove = deleted
191 191 result = warn(modified + added + clean,
192 192 _('not removing %s: file still exists\n'))
193 193 else:
194 194 remove = deleted + clean
195 195 result = warn(modified, _('not removing %s: file is modified (use -f'
196 196 ' to force removal)\n'))
197 197 result = warn(added, _('not removing %s: file has been marked for add'
198 198 ' (use forget to undo)\n')) or result
199 199
200 200 # Need to lock because standin files are deleted then removed from the
201 201 # repository and we could race in-between.
202 202 wlock = repo.wlock()
203 203 try:
204 204 lfdirstate = lfutil.openlfdirstate(ui, repo)
205 205 for f in sorted(remove):
206 206 if ui.verbose or not m.exact(f):
207 207 # addremove in core gets fancy with the name, remove doesn't
208 208 if isaddremove:
209 209 name = m.uipath(f)
210 210 else:
211 211 name = m.rel(f)
212 212 ui.status(_('removing %s\n') % name)
213 213
214 214 if not opts.get('dry_run'):
215 215 if not after:
216 216 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
217 217
218 218 if opts.get('dry_run'):
219 219 return result
220 220
221 221 remove = [lfutil.standin(f) for f in remove]
222 222 # If this is being called by addremove, let the original addremove
223 223 # function handle this.
224 224 if not isaddremove:
225 225 for f in remove:
226 226 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
227 227 repo[None].forget(remove)
228 228
229 229 for f in remove:
230 230 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
231 231 False)
232 232
233 233 lfdirstate.write()
234 234 finally:
235 235 wlock.release()
236 236
237 237 return result
238 238
239 239 # For overriding mercurial.hgweb.webcommands so that largefiles will
240 240 # appear at their right place in the manifests.
241 241 def decodepath(orig, path):
242 242 return lfutil.splitstandin(path) or path
243 243
244 244 # -- Wrappers: modify existing commands --------------------------------
245 245
246 246 def overrideadd(orig, ui, repo, *pats, **opts):
247 247 if opts.get('normal') and opts.get('large'):
248 248 raise error.Abort(_('--normal cannot be used with --large'))
249 249 return orig(ui, repo, *pats, **opts)
250 250
251 251 def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
252 252 # The --normal flag short circuits this override
253 253 if opts.get('normal'):
254 254 return orig(ui, repo, matcher, prefix, explicitonly, **opts)
255 255
256 256 ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
257 257 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
258 258 ladded)
259 259 bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
260 260
261 261 bad.extend(lbad)
262 262 return bad
263 263
264 264 def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos):
265 265 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
266 266 result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos)
267 267 return removelargefiles(ui, repo, False, matcher, after=after,
268 268 force=force) or result
269 269
270 270 def overridestatusfn(orig, repo, rev2, **opts):
271 271 try:
272 272 repo._repo.lfstatus = True
273 273 return orig(repo, rev2, **opts)
274 274 finally:
275 275 repo._repo.lfstatus = False
276 276
277 277 def overridestatus(orig, ui, repo, *pats, **opts):
278 278 try:
279 279 repo.lfstatus = True
280 280 return orig(ui, repo, *pats, **opts)
281 281 finally:
282 282 repo.lfstatus = False
283 283
284 284 def overridedirty(orig, repo, ignoreupdate=False):
285 285 try:
286 286 repo._repo.lfstatus = True
287 287 return orig(repo, ignoreupdate)
288 288 finally:
289 289 repo._repo.lfstatus = False
290 290
def overridelog(orig, ui, repo, *pats, **opts):
    def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
                             default='relpath', badfn=None):
        """Matcher that merges root directory with .hglf, suitable for log.
        It is still possible to match .hglf directly.
        For any listed files run log on the standin too.
        matchfn tries both the given filename and with .hglf stripped.
        """
        if opts is None:
            opts = {}
        matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
                                       badfn=badfn)
        m, p = copy.copy(matchandpats)

        if m.always():
            # We want to match everything anyway, so there's no benefit trying
            # to add standins.
            return matchandpats

        pats = set(p)

        def fixpats(pat, tostandin=lfutil.standin):
            if pat.startswith('set:'):
                return pat

            kindpat = match_._patsplit(pat, None)

            if kindpat[0] is not None:
                return kindpat[0] + ':' + tostandin(kindpat[1])
            return tostandin(kindpat[1])

        if m._cwd:
            hglf = lfutil.shortname
            back = util.pconvert(m.rel(hglf)[:-len(hglf)])

            def tostandin(f):
                # The file may already be a standin, so truncate the back
                # prefix and test before mangling it. This avoids turning
                # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
                if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
                    return f

                # An absolute path is from outside the repo, so truncate the
                # path to the root before building the standin. Otherwise cwd
                # is somewhere in the repo, relative to root, and needs to be
                # prepended before building the standin.
                if os.path.isabs(m._cwd):
                    f = f[len(back):]
                else:
                    f = m._cwd + '/' + f
                return back + lfutil.standin(f)

            pats.update(fixpats(f, tostandin) for f in p)
        else:
            def tostandin(f):
                if lfutil.splitstandin(f):
                    return f
                return lfutil.standin(f)
            pats.update(fixpats(f, tostandin) for f in p)

        for i in range(0, len(m._files)):
            # Don't add '.hglf' to m.files, since that is already covered by '.'
            if m._files[i] == '.':
                continue
            standin = lfutil.standin(m._files[i])
            # If the "standin" is a directory, append instead of replace to
            # support naming a directory on the command line with only
            # largefiles. The original directory is kept to support normal
            # files.
            if standin in repo[ctx.node()]:
                m._files[i] = standin
            elif m._files[i] not in repo[ctx.node()] \
                    and repo.wvfs.isdir(standin):
                m._files.append(standin)

        m._fileroots = set(m._files)
        m._always = False
        origmatchfn = m.matchfn
        def lfmatchfn(f):
            lf = lfutil.splitstandin(f)
            if lf is not None and origmatchfn(lf):
                return True
            r = origmatchfn(f)
            return r
        m.matchfn = lfmatchfn

        ui.debug('updated patterns: %s\n' % sorted(pats))
        return m, pats

    # For hg log --patch, the match object is used in two different senses:
    # (1) to determine what revisions should be printed out, and
    # (2) to determine what files to print out diffs for.
    # The magic matchandpats override should be used for case (1) but not for
    # case (2).
    def overridemakelogfilematcher(repo, pats, opts, badfn=None):
        wctx = repo[None]
        match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
        return lambda rev: match

    oldmatchandpats = installmatchandpatsfn(overridematchandpats)
    oldmakelogfilematcher = cmdutil._makenofollowlogfilematcher
    setattr(cmdutil, '_makenofollowlogfilematcher', overridemakelogfilematcher)

    try:
        return orig(ui, repo, *pats, **opts)
    finally:
        restorematchandpatsfn()
        setattr(cmdutil, '_makenofollowlogfilematcher', oldmakelogfilematcher)

def overrideverify(orig, ui, repo, *pats, **opts):
    large = opts.pop('large', False)
    all = opts.pop('lfa', False)
    contents = opts.pop('lfc', False)

    result = orig(ui, repo, *pats, **opts)
    if large or all or contents:
        result = result or lfcommands.verifylfiles(ui, repo, all, contents)
    return result

def overridedebugstate(orig, ui, repo, *pats, **opts):
    large = opts.pop('large', False)
    if large:
        class fakerepo(object):
            dirstate = lfutil.openlfdirstate(ui, repo)
        orig(ui, fakerepo, *pats, **opts)
    else:
        orig(ui, repo, *pats, **opts)

# Before starting the manifest merge, merge.updates will call
# _checkunknownfile to check if there are any files in the merged-in
# changeset that collide with unknown files in the working copy.
#
# The largefiles are seen as unknown, so this prevents us from merging
# in a file 'foo' if we already have a largefile with the same name.
#
# The overridden function filters the unknown files by removing any
# largefiles. This makes the merge proceed and we can then handle this
# case further in the overridden calculateupdates function below.
def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
    if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
        return False
    return origfn(repo, wctx, mctx, f, f2)

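The check above relies on the standin naming convention used throughout this module: a largefile `foo` is tracked through a small standin file under `.hglf/`. A minimal sketch of that mapping follows; the real implementations live in `lfutil.standin` and `lfutil.splitstandin`, and the constant and helper names below are purely illustrative.

```python
# Illustrative sketch of the standin path convention (not the real
# lfutil API): largefile 'foo' <-> standin '.hglf/foo'.
SHORTNAME = '.hglf'

def standin(filename):
    # Map a largefile name to its standin path.
    return SHORTNAME + '/' + filename

def splitstandin(filename):
    # Return the largefile name for a standin path, or None if the
    # path is not a standin at all.
    if filename.startswith(SHORTNAME + '/'):
        return filename[len(SHORTNAME) + 1:]
    return None
```

This is why `overridecheckunknownfile` can treat a colliding "unknown" file as already tracked: if the standin for the normalized name exists in the working context, the file is a known largefile, not a stray.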
# The manifest merge handles conflicts on the manifest level. We want
# to handle changes in largefile-ness of files at this level too.
#
# The strategy is to run the original calculateupdates and then process
# the action list it outputs. There are two cases we need to deal with:
#
# 1. Normal file in p1, largefile in p2. Here the largefile is
#    detected via its standin file, which will enter the working copy
#    with a "get" action. It is not "merge" since the standin is all
#    Mercurial is concerned with at this level -- the link to the
#    existing normal file is not relevant here.
#
# 2. Largefile in p1, normal file in p2. Here we get a "merge" action
#    since the largefile will be present in the working copy and
#    different from the normal file in p2. Mercurial therefore
#    triggers a merge action.
#
# In both cases, we prompt the user and emit new actions to either
# remove the standin (if the normal file was kept) or to remove the
# normal file and get the standin (if the largefile was kept). The
# default prompt answer is to use the largefile version since it was
# presumably changed on purpose.
#
# Finally, the merge.applyupdates function will then take care of
# writing the files into the working copy and lfcommands.updatelfiles
# will update the largefiles.
def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
                             partial, acceptremote, followcopies):
    overwrite = force and not branchmerge
    actions, diverge, renamedelete = origfn(
        repo, p1, p2, pas, branchmerge, force, partial, acceptremote,
        followcopies)

    if overwrite:
        return actions, diverge, renamedelete

    # Convert to dictionary with filename as key and action as value.
    lfiles = set()
    for f in actions:
        splitstandin = f and lfutil.splitstandin(f)
        if splitstandin in p1:
            lfiles.add(splitstandin)
        elif lfutil.standin(f) in p1:
            lfiles.add(f)

    for lfile in lfiles:
        standin = lfutil.standin(lfile)
        (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
        (sm, sargs, smsg) = actions.get(standin, (None, None, None))
        if sm in ('g', 'dc') and lm != 'r':
            if sm == 'dc':
                f1, f2, fa, move, anc = sargs
                sargs = (p2[f2].flags(),)
            # Case 1: normal file in the working copy, largefile in
            # the second parent
            usermsg = _('remote turned local normal file %s into a largefile\n'
                        'use (l)argefile or keep (n)ormal file?'
                        '$$ &Largefile $$ &Normal file') % lfile
            if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
                actions[lfile] = ('r', None, 'replaced by standin')
                actions[standin] = ('g', sargs, 'replaces standin')
            else: # keep local normal file
                actions[lfile] = ('k', None, 'replaces standin')
                if branchmerge:
                    actions[standin] = ('k', None, 'replaced by non-standin')
                else:
                    actions[standin] = ('r', None, 'replaced by non-standin')
        elif lm in ('g', 'dc') and sm != 'r':
            if lm == 'dc':
                f1, f2, fa, move, anc = largs
                largs = (p2[f2].flags(),)
            # Case 2: largefile in the working copy, normal file in
            # the second parent
            usermsg = _('remote turned local largefile %s into a normal file\n'
                        'keep (l)argefile or use (n)ormal file?'
                        '$$ &Largefile $$ &Normal file') % lfile
            if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
                if branchmerge:
                    # largefile can be restored from standin safely
                    actions[lfile] = ('k', None, 'replaced by standin')
                    actions[standin] = ('k', None, 'replaces standin')
                else:
                    # "lfile" should be marked as "removed" without
                    # removal of itself
                    actions[lfile] = ('lfmr', None,
                                      'forget non-standin largefile')

                    # linear-merge should treat this largefile as 're-added'
                    actions[standin] = ('a', None, 'keep standin')
            else: # pick remote normal file
                actions[lfile] = ('g', largs, 'replaces standin')
                actions[standin] = ('r', None, 'replaced by non-standin')

    return actions, diverge, renamedelete

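The per-file reconciliation above always ends with exactly one member of the lfile/standin pair carrying a "get"/"keep" action and the other a removal. A toy version of that rewrite, mirroring "Case 1" for a non-branch merge, makes the invariant explicit; the function name and the boolean standing in for the prompt answer are illustrative, but the action codes match the ones used above.

```python
# Illustrative sketch (not the real merge API): reconcile the action
# table entries for a largefile and its standin. keeplargefile stands
# in for the user's prompt answer in "Case 1" above.
def reconcile(actions, lfile, standin, keeplargefile):
    if keeplargefile:
        # remove the normal file, fetch the standin
        actions[lfile] = ('r', None, 'replaced by standin')
        actions[standin] = ('g', None, 'replaces standin')
    else:
        # keep the normal file, drop the standin (non-branch merge)
        actions[lfile] = ('k', None, 'replaces standin')
        actions[standin] = ('r', None, 'replaced by non-standin')
    return actions
```

Whichever branch is taken, `merge.applyupdates` then sees a consistent table: it never has to both get a standin and keep the conflicting normal file.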
def mergerecordupdates(orig, repo, actions, branchmerge):
    if 'lfmr' in actions:
        lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
        for lfile, args, msg in actions['lfmr']:
            # this should be executed before 'orig', to execute 'remove'
            # before all other actions
            repo.dirstate.remove(lfile)
            # make sure lfile doesn't get synclfdirstate'd as normal
            lfdirstate.add(lfile)
        lfdirstate.write()

    return orig(repo, actions, branchmerge)


# Override filemerge to prompt the user about how they wish to merge
# largefiles. This will handle identical edits without prompting the user.
def overridefilemerge(origfn, premerge, repo, mynode, orig, fcd, fco, fca,
                      labels=None):
    if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
        return origfn(premerge, repo, mynode, orig, fcd, fco, fca,
                      labels=labels)

    ahash = fca.data().strip().lower()
    dhash = fcd.data().strip().lower()
    ohash = fco.data().strip().lower()
    if (ohash != ahash and
        ohash != dhash and
        (dhash == ahash or
         repo.ui.promptchoice(
             _('largefile %s has a merge conflict\nancestor was %s\n'
               'keep (l)ocal %s or\ntake (o)ther %s?'
               '$$ &Local $$ &Other') %
             (lfutil.splitstandin(orig), ahash, dhash, ohash),
             0) == 1)):
        repo.wwrite(fcd.path(), fco.data(), fco.flags())
    return True, 0, False

def copiespathcopies(orig, ctx1, ctx2, match=None):
    copies = orig(ctx1, ctx2, match=match)
    updated = {}

    for k, v in copies.iteritems():
        updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v

    return updated

# Copy first changes the matchers to match standins instead of
# largefiles. Then it overrides util.copyfile so that it checks
# whether the destination largefile already exists. It also keeps a
# list of copied files so that the largefiles can be copied and the
# dirstate updated.
def overridecopy(orig, ui, repo, pats, opts, rename=False):
    # doesn't remove largefile on rename
    if len(pats) < 2:
        # this isn't legal, let the original function deal with it
        return orig(ui, repo, pats, opts, rename)

    # This could copy both lfiles and normal files in one command,
    # but we don't want to do that. First replace their matcher to
    # only match normal files and run it, then replace it to just
    # match largefiles and run it again.
    nonormalfiles = False
    nolfiles = False
    installnormalfilesmatchfn(repo[None].manifest())
    try:
        result = orig(ui, repo, pats, opts, rename)
    except error.Abort as e:
        if str(e) != _('no files to copy'):
            raise e
        else:
            nonormalfiles = True
            result = 0
    finally:
        restorematchfn()

    # The first rename can cause our current working directory to be removed.
    # In that case there is nothing left to copy/rename so just quit.
    try:
        repo.getcwd()
    except OSError:
        return result

    def makestandin(relpath):
        path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
        return os.path.join(repo.wjoin(lfutil.standin(path)))

    fullpats = scmutil.expandpats(pats)
    dest = fullpats[-1]

    if os.path.isdir(dest):
        if not os.path.isdir(makestandin(dest)):
            os.makedirs(makestandin(dest))

    try:
        # When we call orig below it creates the standins but we don't add
        # them to the dir state until later so lock during that time.
        wlock = repo.wlock()

        manifest = repo[None].manifest()
        def overridematch(ctx, pats=(), opts=None, globbed=False,
                          default='relpath', badfn=None):
            if opts is None:
                opts = {}
            newpats = []
            # The patterns were previously mangled to add the standin
            # directory; we need to remove that now
            for pat in pats:
                if match_.patkind(pat) is None and lfutil.shortname in pat:
                    newpats.append(pat.replace(lfutil.shortname, ''))
                else:
                    newpats.append(pat)
            match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
            m = copy.copy(match)
            lfile = lambda f: lfutil.standin(f) in manifest
            m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
            m._fileroots = set(m._files)
            origmatchfn = m.matchfn
            m.matchfn = lambda f: (lfutil.isstandin(f) and
                                   (f in manifest) and
                                   origmatchfn(lfutil.splitstandin(f)) or
                                   None)
            return m
        oldmatch = installmatchfn(overridematch)
        listpats = []
        for pat in pats:
            if match_.patkind(pat) is not None:
                listpats.append(pat)
            else:
                listpats.append(makestandin(pat))

        try:
            origcopyfile = util.copyfile
            copiedfiles = []
            def overridecopyfile(src, dest):
                if (lfutil.shortname in src and
                    dest.startswith(repo.wjoin(lfutil.shortname))):
                    destlfile = dest.replace(lfutil.shortname, '')
                    if not opts['force'] and os.path.exists(destlfile):
                        raise IOError('',
                                      _('destination largefile already exists'))
                copiedfiles.append((src, dest))
                origcopyfile(src, dest)

            util.copyfile = overridecopyfile
            result += orig(ui, repo, listpats, opts, rename)
        finally:
            util.copyfile = origcopyfile

        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for (src, dest) in copiedfiles:
            if (lfutil.shortname in src and
                dest.startswith(repo.wjoin(lfutil.shortname))):
                srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
                destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
                destlfiledir = os.path.dirname(repo.wjoin(destlfile)) or '.'
                if not os.path.isdir(destlfiledir):
                    os.makedirs(destlfiledir)
                if rename:
                    os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))

                    # The file is gone, but this deletes any empty parent
                    # directories as a side-effect.
                    util.unlinkpath(repo.wjoin(srclfile), True)
                    lfdirstate.remove(srclfile)
                else:
                    util.copyfile(repo.wjoin(srclfile),
                                  repo.wjoin(destlfile))

                lfdirstate.add(destlfile)
        lfdirstate.write()
    except error.Abort as e:
        if str(e) != _('no files to copy'):
            raise e
        else:
            nolfiles = True
    finally:
        restorematchfn()
        wlock.release()

    if nolfiles and nonormalfiles:
        raise error.Abort(_('no files to copy'))

    return result

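The copy override above temporarily swaps `util.copyfile` for a wrapper that records every `(src, dest)` pair before delegating to the original, restoring it in a `finally` block so an aborted copy cannot leave the module patched. A standalone sketch of that swap-and-restore pattern; the namespace class and function names here are illustrative stand-ins, not the real `mercurial.util` module.

```python
class FakeUtil(object):
    # Illustrative stand-in namespace for the real mercurial.util module.
    @staticmethod
    def copyfile(src, dest):
        pass  # the real function would copy bytes on disk

def copywithlog(util, pairs):
    # Swap in a recording wrapper, run the copies, and always restore.
    origcopyfile = util.copyfile
    copiedfiles = []
    def overridecopyfile(src, dest):
        copiedfiles.append((src, dest))
        origcopyfile(src, dest)
    util.copyfile = overridecopyfile
    try:
        for src, dest in pairs:
            util.copyfile(src, dest)
    finally:
        util.copyfile = origcopyfile  # restore even if a copy raised
    return copiedfiles
```

The recorded list plays the same role as `copiedfiles` above: after the wrapped operation finishes, it drives the follow-up work (copying the actual largefiles and updating the lfdirstate).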
# When the user calls revert, we have to be careful to not revert any
# changes to other largefiles accidentally. This means we have to keep
# track of the largefiles that are being reverted so we only pull down
# the necessary largefiles.
#
# Standins are only updated (to match the hash of largefiles) before
# commits. Update the standins then run the original revert, changing
# the matcher to hit standins instead of largefiles. Based on the
# resulting standins update the largefiles.
def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
    # Because we put the standins in a bad state (by updating them)
    # and then return them to a correct state we need to lock to
    # prevent others from changing them in their incorrect state.
    wlock = repo.wlock()
    try:
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        s = lfutil.lfdirstatestatus(lfdirstate, repo)
        lfdirstate.write()
        for lfile in s.modified:
            lfutil.updatestandin(repo, lfutil.standin(lfile))
        for lfile in s.deleted:
            if (os.path.exists(repo.wjoin(lfutil.standin(lfile)))):
                os.unlink(repo.wjoin(lfutil.standin(lfile)))

        oldstandins = lfutil.getstandinsstate(repo)

        def overridematch(mctx, pats=(), opts=None, globbed=False,
                          default='relpath', badfn=None):
            if opts is None:
                opts = {}
            match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
            m = copy.copy(match)

            # revert supports recursing into subrepos, and though largefiles
            # currently doesn't work correctly in that case, this match is
            # called, so the lfdirstate above may not be the correct one for
            # this invocation of match.
            lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
                                               False)

            def tostandin(f):
                standin = lfutil.standin(f)
                if standin in ctx or standin in mctx:
                    return standin
                elif standin in repo[None] or lfdirstate[f] == 'r':
                    return None
                return f
            m._files = [tostandin(f) for f in m._files]
            m._files = [f for f in m._files if f is not None]
            m._fileroots = set(m._files)
            origmatchfn = m.matchfn
            def matchfn(f):
                if lfutil.isstandin(f):
                    return (origmatchfn(lfutil.splitstandin(f)) and
                            (f in ctx or f in mctx))
                return origmatchfn(f)
            m.matchfn = matchfn
            return m
        oldmatch = installmatchfn(overridematch)
        try:
            orig(ui, repo, ctx, parents, *pats, **opts)
        finally:
            restorematchfn()

        newstandins = lfutil.getstandinsstate(repo)
        filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
        # lfdirstate should be 'normallookup'-ed for updated files,
        # because reverting doesn't touch dirstate for 'normal' files
        # when target revision is explicitly specified: in such case,
        # 'n' and valid timestamp in dirstate doesn't ensure 'clean'
        # of target (standin) file.
        lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
                                normallookup=True)

    finally:
        wlock.release()

# after pulling changesets, we need to take some extra care to get
# largefiles updated remotely
def overridepull(orig, ui, repo, source=None, **opts):
    revsprepull = len(repo)
    if not source:
        source = 'default'
    repo.lfpullsource = source
    result = orig(ui, repo, source, **opts)
    revspostpull = len(repo)
    lfrevs = opts.get('lfrev', [])
    if opts.get('all_largefiles'):
        lfrevs.append('pulled()')
    if lfrevs and revspostpull > revsprepull:
        numcached = 0
        repo.firstpulled = revsprepull # for pulled() revset expression
        try:
            for rev in scmutil.revrange(repo, lfrevs):
                ui.note(_('pulling largefiles for revision %s\n') % rev)
                (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
                numcached += len(cached)
        finally:
            del repo.firstpulled
        ui.status(_("%d largefiles cached\n") % numcached)
    return result

def pulledrevsetsymbol(repo, subset, x):
    """``pulled()``
    Changesets that have just been pulled.

    Only available with largefiles from pull --lfrev expressions.

    .. container:: verbose

      Some examples:

      - pull largefiles for all new changesets::

          hg pull --lfrev "pulled()"

      - pull largefiles for all new branch heads::

          hg pull --lfrev "head(pulled()) and not closed()"

    """

    try:
        firstpulled = repo.firstpulled
    except AttributeError:
        raise error.Abort(_("pulled() only available in --lfrev"))
    return revset.baseset([r for r in subset if r >= firstpulled])

def overrideclone(orig, ui, source, dest=None, **opts):
    d = dest
    if d is None:
        d = hg.defaultdest(source)
    if opts.get('all_largefiles') and not hg.islocal(d):
        raise error.Abort(_(
            '--all-largefiles is incompatible with non-local destination %s') %
            d)

    return orig(ui, source, dest, **opts)

def hgclone(orig, ui, opts, *args, **kwargs):
    result = orig(ui, opts, *args, **kwargs)

    if result is not None:
        sourcerepo, destrepo = result
        repo = destrepo.local()

        # When cloning to a remote repo (like through SSH), no repo is available
        # from the peer. Therefore the largefiles can't be downloaded and the
        # hgrc can't be updated.
        if not repo:
            return result

        # If largefiles is required for this repo, permanently enable it locally
        if 'largefiles' in repo.requirements:
            fp = repo.vfs('hgrc', 'a', text=True)
            try:
                fp.write('\n[extensions]\nlargefiles=\n')
            finally:
                fp.close()

        # Caching is implicitly limited to 'rev' option, since the dest repo was
        # truncated at that point. The user may expect a download count with
        # this option, so attempt whether or not this is a largefile repo.
        if opts.get('all_largefiles'):
            success, missing = lfcommands.downloadlfiles(ui, repo, None)

            if missing != 0:
                return None

    return result

def overriderebase(orig, ui, repo, **opts):
    if not util.safehasattr(repo, '_largefilesenabled'):
        return orig(ui, repo, **opts)

    resuming = opts.get('continue')
    repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
    repo._lfstatuswriters.append(lambda *msg, **opts: None)
    try:
        return orig(ui, repo, **opts)
    finally:
        repo._lfstatuswriters.pop()
        repo._lfcommithooks.pop()

def overridearchivecmd(orig, ui, repo, dest, **opts):
    repo.unfiltered().lfstatus = True

    try:
        return orig(ui, repo.unfiltered(), dest, **opts)
    finally:
        repo.unfiltered().lfstatus = False

def hgwebarchive(orig, web, req, tmpl):
    web.repo.lfstatus = True

    try:
        return orig(web, req, tmpl)
    finally:
        web.repo.lfstatus = False

def overridearchive(orig, repo, dest, node, kind, decode=True, matchfn=None,
                    prefix='', mtime=None, subrepos=None):
    # For some reason setting repo.lfstatus in hgwebarchive only changes the
    # unfiltered repo's attr, so check that as well.
    if not repo.lfstatus and not repo.unfiltered().lfstatus:
        return orig(repo, dest, node, kind, decode, matchfn, prefix, mtime,
                    subrepos)

    # No need to lock because we are only reading history and
    # largefile caches, neither of which are modified.
    if node is not None:
        lfcommands.cachelfiles(repo.ui, repo, node)

    if kind not in archival.archivers:
        raise error.Abort(_("unknown archive type '%s'") % kind)

    ctx = repo[node]

    if kind == 'files':
        if prefix:
            raise error.Abort(
                _('cannot give prefix when archiving to files'))
    else:
        prefix = archival.tidyprefix(dest, kind, prefix)

    def write(name, mode, islink, getdata):
        if matchfn and not matchfn(name):
            return
        data = getdata()
        if decode:
            data = repo.wwritedata(name, data)
        archiver.addfile(prefix + name, mode, islink, data)

    archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])

    if repo.ui.configbool("ui", "archivemeta", True):
        write('.hg_archival.txt', 0o644, False,
              lambda: archival.buildmetadata(ctx))

    for f in ctx:
        ff = ctx.flags(f)
        getdata = ctx[f].data
        if lfutil.isstandin(f):
            if node is not None:
                path = lfutil.findfile(repo, getdata().strip())

                if path is None:
                    raise error.Abort(
                        _('largefile %s not found in repo store or system cache')
                        % lfutil.splitstandin(f))
            else:
                path = lfutil.splitstandin(f)

            f = lfutil.splitstandin(f)

            def getdatafn():
                fd = None
                try:
                    fd = open(path, 'rb')
                    return fd.read()
                finally:
                    if fd:
                        fd.close()

            getdata = getdatafn
        write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)

    if subrepos:
        for subpath in sorted(ctx.substate):
            sub = ctx.workingsub(subpath)
            submatch = match_.narrowmatcher(subpath, matchfn)
            sub._repo.lfstatus = True
            sub.archive(archiver, prefix, submatch)

    archiver.done()

def hgsubrepoarchive(orig, repo, archiver, prefix, match=None):
    if not repo._repo.lfstatus:
        return orig(repo, archiver, prefix, match)

    repo._get(repo._state + ('hg',))
    rev = repo._state[1]
    ctx = repo._repo[rev]

    if ctx.node() is not None:
        lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())

    def write(name, mode, islink, getdata):
        # At this point, the standin has been replaced with the largefile name,
        # so the normal matcher works here without the lfutil variants.
        if match and not match(f):
            return
        data = getdata()

        archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)

    for f in ctx:
        ff = ctx.flags(f)
        getdata = ctx[f].data
        if lfutil.isstandin(f):
            if ctx.node() is not None:
                path = lfutil.findfile(repo._repo, getdata().strip())

                if path is None:
                    raise error.Abort(
                        _('largefile %s not found in repo store or system cache')
                        % lfutil.splitstandin(f))
            else:
                path = lfutil.splitstandin(f)

            f = lfutil.splitstandin(f)

            def getdatafn():
                fd = None
                try:
                    fd = open(os.path.join(prefix, path), 'rb')
                    return fd.read()
                finally:
                    if fd:
                        fd.close()

            getdata = getdatafn

        write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)

    for subpath in sorted(ctx.substate):
        sub = ctx.workingsub(subpath)
        submatch = match_.narrowmatcher(subpath, match)
        sub._repo.lfstatus = True
        sub.archive(archiver, prefix + repo._path + '/', submatch)

# If a largefile is modified, the change is not reflected in its
# standin until a commit. cmdutil.bailifchanged() raises an exception
# if the repo has uncommitted changes. Wrap it to also check if
# largefiles were changed. This is used by bisect, backout and fetch.
def overridebailifchanged(orig, repo, *args, **kwargs):
    orig(repo, *args, **kwargs)
    repo.lfstatus = True
    s = repo.status()
    repo.lfstatus = False
    if s.modified or s.added or s.removed or s.deleted:
        raise error.Abort(_('uncommitted changes'))

def cmdutilforget(orig, ui, repo, match, prefix, explicitonly):
    normalmatcher = composenormalfilematcher(match, repo[None].manifest())
    bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly)
    m = composelargefilematcher(match, repo[None].manifest())

    try:
        repo.lfstatus = True
        s = repo.status(match=m, clean=True)
    finally:
        repo.lfstatus = False
    forget = sorted(s.modified + s.added + s.deleted + s.clean)
    forget = [f for f in forget if lfutil.standin(f) in repo[None].manifest()]

    for f in forget:
        if lfutil.standin(f) not in repo.dirstate and not \
                repo.wvfs.isdir(lfutil.standin(f)):
            ui.warn(_('not removing %s: file is already untracked\n')
                    % m.rel(f))
            bad.append(f)

    for f in forget:
        if ui.verbose or not m.exact(f):
            ui.status(_('removing %s\n') % m.rel(f))

    # Need to lock because standin files are deleted then removed from the
    # repository and we could race in-between.
    wlock = repo.wlock()
    try:
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for f in forget:
            if lfdirstate[f] == 'a':
                lfdirstate.drop(f)
            else:
                lfdirstate.remove(f)
        lfdirstate.write()
        standins = [lfutil.standin(f) for f in forget]
        for f in standins:
            util.unlinkpath(repo.wjoin(f), ignoremissing=True)
        rejected = repo[None].forget(standins)
    finally:
        wlock.release()

    bad.extend(f for f in rejected if f in m.files())
    forgot.extend(f for f in forget if f not in rejected)
    return bad, forgot

1102 1102 def _getoutgoings(repo, other, missing, addfunc):
1103 1103 """get pairs of filename and largefile hash in outgoing revisions
1104 1104 in 'missing'.
1105 1105
1106 1106 largefiles already existing on 'other' repository are ignored.
1107 1107
1108 1108 'addfunc' is invoked with each unique pairs of filename and
1109 1109 largefile hash value.
1110 1110 """
1111 1111 knowns = set()
1112 1112 lfhashes = set()
1113 1113 def dedup(fn, lfhash):
1114 1114 k = (fn, lfhash)
1115 1115 if k not in knowns:
1116 1116 knowns.add(k)
1117 1117 lfhashes.add(lfhash)
1118 1118 lfutil.getlfilestoupload(repo, missing, dedup)
1119 1119 if lfhashes:
1120 1120 lfexists = basestore._openstore(repo, other).exists(lfhashes)
1121 1121 for fn, lfhash in knowns:
1122 1122 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1123 1123 addfunc(fn, lfhash)
1124 1124
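The `dedup` helper above implements a small callback-filtering pattern: the producer may report the same (filename, hash) pair many times, and only the first occurrence reaches `addfunc`. A generic self-contained sketch of the same pattern (the names here are illustrative, not Mercurial APIs):

```python
# Wrap a consumer callback so the producer's duplicate reports
# are forwarded at most once per unique (fn, lfhash) pair.
def dedup_calls(producer, addfunc):
    seen = set()
    def dedup(fn, lfhash):
        key = (fn, lfhash)
        if key not in seen:
            seen.add(key)
            addfunc(fn, lfhash)
    producer(dedup)

pairs = []
def fake_producer(cb):
    # a largefile hash can be reported from many revisions
    for p in [('a', 'h1'), ('a', 'h1'), ('b', 'h2')]:
        cb(*p)

dedup_calls(fake_producer, lambda fn, h: pairs.append((fn, h)))
print(pairs)  # [('a', 'h1'), ('b', 'h2')]
```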
1125 1125 def outgoinghook(ui, repo, other, opts, missing):
1126 1126 if opts.pop('large', None):
1127 1127 lfhashes = set()
1128 1128 if ui.debugflag:
1129 1129 toupload = {}
1130 1130 def addfunc(fn, lfhash):
1131 1131 if fn not in toupload:
1132 1132 toupload[fn] = []
1133 1133 toupload[fn].append(lfhash)
1134 1134 lfhashes.add(lfhash)
1135 1135 def showhashes(fn):
1136 1136 for lfhash in sorted(toupload[fn]):
1137 1137 ui.debug(' %s\n' % (lfhash))
1138 1138 else:
1139 1139 toupload = set()
1140 1140 def addfunc(fn, lfhash):
1141 1141 toupload.add(fn)
1142 1142 lfhashes.add(lfhash)
1143 1143 def showhashes(fn):
1144 1144 pass
1145 1145 _getoutgoings(repo, other, missing, addfunc)
1146 1146
1147 1147 if not toupload:
1148 1148 ui.status(_('largefiles: no files to upload\n'))
1149 1149 else:
1150 1150 ui.status(_('largefiles to upload (%d entities):\n')
1151 1151 % (len(lfhashes)))
1152 1152 for file in sorted(toupload):
1153 1153 ui.status(lfutil.splitstandin(file) + '\n')
1154 1154 showhashes(file)
1155 1155 ui.status('\n')
1156 1156
1157 1157 def summaryremotehook(ui, repo, opts, changes):
1158 1158 largeopt = opts.get('large', False)
1159 1159 if changes is None:
1160 1160 if largeopt:
1161 1161 return (False, True) # only outgoing check is needed
1162 1162 else:
1163 1163 return (False, False)
1164 1164 elif largeopt:
1165 1165 url, branch, peer, outgoing = changes[1]
1166 1166 if peer is None:
1167 1167 # i18n: column positioning for "hg summary"
1168 1168 ui.status(_('largefiles: (no remote repo)\n'))
1169 1169 return
1170 1170
1171 1171 toupload = set()
1172 1172 lfhashes = set()
1173 1173 def addfunc(fn, lfhash):
1174 1174 toupload.add(fn)
1175 1175 lfhashes.add(lfhash)
1176 1176 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1177 1177
1178 1178 if not toupload:
1179 1179 # i18n: column positioning for "hg summary"
1180 1180 ui.status(_('largefiles: (no files to upload)\n'))
1181 1181 else:
1182 1182 # i18n: column positioning for "hg summary"
1183 1183 ui.status(_('largefiles: %d entities for %d files to upload\n')
1184 1184 % (len(lfhashes), len(toupload)))
1185 1185
1186 1186 def overridesummary(orig, ui, repo, *pats, **opts):
1187 1187 try:
1188 1188 repo.lfstatus = True
1189 1189 orig(ui, repo, *pats, **opts)
1190 1190 finally:
1191 1191 repo.lfstatus = False
1192 1192
1193 1193 def scmutiladdremove(orig, repo, matcher, prefix, opts=None, dry_run=None,
1194 1194 similarity=None):
1195 1195 if opts is None:
1196 1196 opts = {}
1197 1197 if not lfutil.islfilesrepo(repo):
1198 1198 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1199 1199 # Get the list of missing largefiles so we can remove them
1200 1200 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1201 1201 unsure, s = lfdirstate.status(match_.always(repo.root, repo.getcwd()), [],
1202 1202 False, False, False)
1203 1203
1204 1204 # Call into the normal remove code, but leave the removal of the standin
1205 1205 # to the original addremove. Monkey patching here makes sure
1206 1206 # we don't remove the standin in the largefiles code, preventing a very
1207 1207 # confused state later.
1208 1208 if s.deleted:
1209 1209 m = copy.copy(matcher)
1210 1210
1211 1211 # The m._files and m._map attributes are not changed to the deleted list
1212 1212 # because that affects the m.exact() test, which in turn governs whether
1213 1213 # or not the file name is printed, and how. Simply limit the original
1214 1214 # matches to those in the deleted status list.
1215 1215 matchfn = m.matchfn
1216 1216 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1217 1217
1218 1218 removelargefiles(repo.ui, repo, True, m, **opts)
1219 1219 # Call into the normal add code, and any files that *should* be added as
1220 1220 # largefiles will be
1221 1221 added, bad = addlargefiles(repo.ui, repo, True, matcher, **opts)
1222 1222 # Now that we've handled largefiles, hand off to the original addremove
1223 1223 # function to take care of the rest. Make sure it doesn't do anything with
1224 1224 # largefiles by passing a matcher that will ignore them.
1225 1225 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1226 1226 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1227 1227
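The `s.deleted` narrowing above relies on copying the matcher and wrapping only its `matchfn`, leaving `m._files` and `m.exact()` intact so name printing is unaffected. A toy sketch of that trick (the `Matcher` class below is a stand-in, not Mercurial's match object):

```python
import copy

class Matcher(object):
    # minimal stand-in for mercurial's match object
    def __init__(self, files):
        self._files = set(files)
    def matchfn(self, f):
        return f in self._files

m = Matcher(['a', 'b', 'c'])
deleted = {'b'}

# copy, then shadow matchfn on the copy only, limiting matches
# to the deleted set while delegating to the original predicate
narrowed = copy.copy(m)
matchfn = narrowed.matchfn
narrowed.matchfn = lambda f: f in deleted and matchfn(f)

print([f for f in ('a', 'b', 'c') if narrowed.matchfn(f)])  # ['b']
print([f for f in ('a', 'b', 'c') if m.matchfn(f)])         # ['a', 'b', 'c']
```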
1228 1228 # Calling purge with --all will cause the largefiles to be deleted.
1229 1229 # Override repo.status to prevent this from happening.
1230 1230 def overridepurge(orig, ui, repo, *dirs, **opts):
1231 1231 # XXX Monkey patching a repoview will not work. The assigned attribute will
1232 1232 # be set on the unfiltered repo, but we will only lookup attributes in the
1233 1233 # unfiltered repo if the lookup in the repoview object itself fails. As the
1234 1234 # monkey patched method exists on the repoview class the lookup will not
1235 1235 # fail. As a result, the original version will shadow the monkey patched
1236 1236 # one, defeating the monkey patch.
1237 1237 #
1238 1238 # As a workaround we use an unfiltered repo here. We should do something
1239 1239 # cleaner instead.
1240 1240 repo = repo.unfiltered()
1241 1241 oldstatus = repo.status
1242 1242 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1243 1243 clean=False, unknown=False, listsubrepos=False):
1244 1244 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1245 1245 listsubrepos)
1246 1246 lfdirstate = lfutil.openlfdirstate(ui, repo)
1247 1247 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1248 1248 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1249 1249 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1250 1250 unknown, ignored, r.clean)
1251 1251 repo.status = overridestatus
1252 1252 orig(ui, repo, *dirs, **opts)
1253 1253 repo.status = oldstatus
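The XXX comment above describes a general hazard with attribute-forwarding proxies. A stripped-down illustration, assuming a hypothetical proxy (not Mercurial's actual repoview class): assignment is forwarded to the wrapped object, but lookup finds the proxy class's own method first, so the patch never takes effect.

```python
class Repo(object):
    def status(self):
        return 'original'

class RepoView(object):
    # hypothetical attribute-forwarding proxy, loosely modeled on
    # the monkey-patching problem described in the comment above
    def __init__(self, repo):
        object.__setattr__(self, '_repo', repo)
    def status(self):  # exists on the proxy class itself
        return type(self._repo).status(self._repo)
    def __getattr__(self, name):  # consulted only when lookup fails
        return getattr(self._repo, name)
    def __setattr__(self, name, value):  # assignments land on the repo
        setattr(self._repo, name, value)

view = RepoView(Repo())
view.status = lambda: 'patched'  # stored on the wrapped Repo instance
print(view.status())  # 'original' -- the class method shadows the patch
```

Hence the workaround in `overridepurge`: patch the unfiltered repo directly, where assignment and lookup hit the same object.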
1254 1254 def overriderollback(orig, ui, repo, **opts):
1255 1255 wlock = repo.wlock()
1256 1256 try:
1257 1257 before = repo.dirstate.parents()
1258 1258 orphans = set(f for f in repo.dirstate
1259 1259 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1260 1260 result = orig(ui, repo, **opts)
1261 1261 after = repo.dirstate.parents()
1262 1262 if before == after:
1263 1263 return result # no need to restore standins
1264 1264
1265 1265 pctx = repo['.']
1266 1266 for f in repo.dirstate:
1267 1267 if lfutil.isstandin(f):
1268 1268 orphans.discard(f)
1269 1269 if repo.dirstate[f] == 'r':
1270 1270 repo.wvfs.unlinkpath(f, ignoremissing=True)
1271 1271 elif f in pctx:
1272 1272 fctx = pctx[f]
1273 1273 repo.wwrite(f, fctx.data(), fctx.flags())
1274 1274 else:
1275 1275 # content of standin is not so important in 'a',
1276 1276 # 'm' or 'n' (coming from the 2nd parent) cases
1277 1277 lfutil.writestandin(repo, f, '', False)
1278 1278 for standin in orphans:
1279 1279 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1280 1280
1281 1281 lfdirstate = lfutil.openlfdirstate(ui, repo)
1282 1282 orphans = set(lfdirstate)
1283 1283 lfiles = lfutil.listlfiles(repo)
1284 1284 for file in lfiles:
1285 1285 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1286 1286 orphans.discard(file)
1287 1287 for lfile in orphans:
1288 1288 lfdirstate.drop(lfile)
1289 1289 lfdirstate.write()
1290 1290 finally:
1291 1291 wlock.release()
1292 1292 return result
1293 1293
1294 1294 def overridetransplant(orig, ui, repo, *revs, **opts):
1295 1295 resuming = opts.get('continue')
1296 1296 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1297 1297 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1298 1298 try:
1299 1299 result = orig(ui, repo, *revs, **opts)
1300 1300 finally:
1301 1301 repo._lfstatuswriters.pop()
1302 1302 repo._lfcommithooks.pop()
1303 1303 return result
1304 1304
1305 1305 def overridecat(orig, ui, repo, file1, *pats, **opts):
1306 1306 ctx = scmutil.revsingle(repo, opts.get('rev'))
1307 1307 err = 1
1308 1308 notbad = set()
1309 1309 m = scmutil.match(ctx, (file1,) + pats, opts)
1310 1310 origmatchfn = m.matchfn
1311 1311 def lfmatchfn(f):
1312 1312 if origmatchfn(f):
1313 1313 return True
1314 1314 lf = lfutil.splitstandin(f)
1315 1315 if lf is None:
1316 1316 return False
1317 1317 notbad.add(lf)
1318 1318 return origmatchfn(lf)
1319 1319 m.matchfn = lfmatchfn
1320 1320 origbadfn = m.bad
1321 1321 def lfbadfn(f, msg):
1322 1322 if f not in notbad:
1323 1323 origbadfn(f, msg)
1324 1324 m.bad = lfbadfn
1325 1325
1326 1326 origvisitdirfn = m.visitdir
1327 1327 def lfvisitdirfn(dir):
1328 1328 if dir == lfutil.shortname:
1329 1329 return True
1330 1330 ret = origvisitdirfn(dir)
1331 1331 if ret:
1332 1332 return ret
1333 1333 lf = lfutil.splitstandin(dir)
1334 1334 if lf is None:
1335 1335 return False
1336 1336 return origvisitdirfn(lf)
1337 1337 m.visitdir = lfvisitdirfn
1338 1338
1339 1339 for f in ctx.walk(m):
1340 1340 fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
1341 1341 pathname=f)
1342 1342 lf = lfutil.splitstandin(f)
1343 1343 if lf is None or origmatchfn(f):
1344 1344 # duplicating unreachable code from commands.cat
1345 1345 data = ctx[f].data()
1346 1346 if opts.get('decode'):
1347 1347 data = repo.wwritedata(f, data)
1348 1348 fp.write(data)
1349 1349 else:
1350 1350 hash = lfutil.readstandin(repo, lf, ctx.rev())
1351 1351 if not lfutil.inusercache(repo.ui, hash):
1352 1352 store = basestore._openstore(repo)
1353 1353 success, missing = store.get([(lf, hash)])
1354 1354 if len(success) != 1:
1355 1355 raise error.Abort(
1356 1356 _('largefile %s is not in cache and could not be '
1357 1357 'downloaded') % lf)
1358 1358 path = lfutil.usercachepath(repo.ui, hash)
1359 1359 fpin = open(path, "rb")
1360 1360 for chunk in util.filechunkiter(fpin, 128 * 1024):
1361 1361 fp.write(chunk)
1362 1362 fpin.close()
1363 1363 fp.close()
1364 1364 err = 0
1365 1365 return err
1366 1366
1367 def mergeupdate(orig, repo, node, branchmerge, force, partial,
1367 def mergeupdate(orig, repo, node, branchmerge, force,
1368 1368 *args, **kwargs):
1369 matcher = kwargs.get('matcher', None)
1370 # note if this is a partial update
1371 partial = matcher and not matcher.always()
1369 1372 wlock = repo.wlock()
1370 1373 try:
1371 1374 # branch | | |
1372 1375 # merge | force | partial | action
1373 1376 # -------+-------+---------+--------------
1374 1377 # x | x | x | linear-merge
1375 1378 # o | x | x | branch-merge
1376 1379 # x | o | x | overwrite (as clean update)
1377 1380 # o | o | x | force-branch-merge (*1)
1378 1381 # x | x | o | (*)
1379 1382 # o | x | o | (*)
1380 1383 # x | o | o | overwrite (as revert)
1381 1384 # o | o | o | (*)
1382 1385 #
1383 1386 # (*) don't care
1384 1387 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1385 1388
1386 1389 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1387 1390 unsure, s = lfdirstate.status(match_.always(repo.root,
1388 1391 repo.getcwd()),
1389 1392 [], False, False, False)
1390 1393 pctx = repo['.']
1391 1394 for lfile in unsure + s.modified:
1392 1395 lfileabs = repo.wvfs.join(lfile)
1393 1396 if not os.path.exists(lfileabs):
1394 1397 continue
1395 1398 lfhash = lfutil.hashrepofile(repo, lfile)
1396 1399 standin = lfutil.standin(lfile)
1397 1400 lfutil.writestandin(repo, standin, lfhash,
1398 1401 lfutil.getexecutable(lfileabs))
1399 1402 if (standin in pctx and
1400 1403 lfhash == lfutil.readstandin(repo, lfile, '.')):
1401 1404 lfdirstate.normal(lfile)
1402 1405 for lfile in s.added:
1403 1406 lfutil.updatestandin(repo, lfutil.standin(lfile))
1404 1407 lfdirstate.write()
1405 1408
1406 1409 oldstandins = lfutil.getstandinsstate(repo)
1407 1410
1408 result = orig(repo, node, branchmerge, force, partial, *args, **kwargs)
1411 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1409 1412
1410 1413 newstandins = lfutil.getstandinsstate(repo)
1411 1414 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1412 1415 if branchmerge or force or partial:
1413 1416 filelist.extend(s.deleted + s.removed)
1414 1417
1415 1418 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1416 1419 normallookup=partial)
1417 1420
1418 1421 return result
1419 1422 finally:
1420 1423 wlock.release()
1421 1424
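The wrapper above adapts to the API change named in this changeset: `merge.update` takes a `matcher` keyword instead of a `partial` function, and "partial" is recovered as "a matcher was supplied and it does not match everything". A hedged sketch of that predicate, with stand-in matcher classes (not Mercurial's):

```python
class AlwaysMatcher(object):
    # stand-in for a matcher covering the whole working copy
    def always(self):
        return True

class PatternMatcher(object):
    # stand-in for a matcher restricted to some patterns
    def always(self):
        return False

def ispartial(matcher):
    # mirrors: partial = matcher and not matcher.always()
    return bool(matcher and not matcher.always())

print(ispartial(None))              # False: plain full update
print(ispartial(AlwaysMatcher()))   # False: matcher covers everything
print(ispartial(PatternMatcher())) # True: restricted update
```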
1422 1425 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1423 1426 result = orig(repo, files, *args, **kwargs)
1424 1427
1425 1428 filelist = [lfutil.splitstandin(f) for f in files if lfutil.isstandin(f)]
1426 1429 if filelist:
1427 1430 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1428 1431 printmessage=False, normallookup=True)
1429 1432
1430 1433 return result
@@ -1,1249 +1,1249 @@
1 1 # rebase.py - rebasing feature for mercurial
2 2 #
3 3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to move sets of revisions to a different ancestor
9 9
10 10 This extension lets you rebase changesets in an existing Mercurial
11 11 repository.
12 12
13 13 For more information:
14 14 https://mercurial-scm.org/wiki/RebaseExtension
15 15 '''
16 16
17 17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
18 18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
19 19 from mercurial import copies, repoview, revset
20 20 from mercurial.commands import templateopts
21 21 from mercurial.node import nullrev, nullid, hex, short
22 22 from mercurial.lock import release
23 23 from mercurial.i18n import _
24 24 import os, errno
25 25
26 26 # The following constants are used throughout the rebase module. The ordering of
27 27 # their values must be maintained.
28 28
29 29 # Indicates that a revision needs to be rebased
30 30 revtodo = -1
31 31 nullmerge = -2
32 32 revignored = -3
33 33 # successor in rebase destination
34 34 revprecursor = -4
35 35 # plain prune (no successor)
36 36 revpruned = -5
37 37 revskipped = (revignored, revprecursor, revpruned)
38 38
39 39 cmdtable = {}
40 40 command = cmdutil.command(cmdtable)
41 41 # Note for extension authors: ONLY specify testedwith = 'internal' for
42 42 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
43 43 # be specifying the version(s) of Mercurial they are tested with, or
44 44 # leave the attribute unspecified.
45 45 testedwith = 'internal'
46 46
47 47 def _nothingtorebase():
48 48 return 1
49 49
50 50 def _makeextrafn(copiers):
51 51 """make an extrafn out of the given copy-functions.
52 52
53 53 A copy function takes a context and an extra dict, and mutates the
54 54 extra dict as needed based on the given context.
55 55 """
56 56 def extrafn(ctx, extra):
57 57 for c in copiers:
58 58 c(ctx, extra)
59 59 return extrafn
60 60
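`_makeextrafn` folds a list of copier callbacks into a single function that mutates the `extra` dict in order. A self-contained sketch of the same composition (the copier names below are made up for illustration):

```python
def makeextrafn(copiers):
    # compose copiers; each one mutates 'extra' based on 'ctx'
    def extrafn(ctx, extra):
        for c in copiers:
            c(ctx, extra)
    return extrafn

def addsource(ctx, extra):
    extra['rebase_source'] = ctx

def addflag(ctx, extra):
    extra['flag'] = '1'

extra = {}
makeextrafn([addsource, addflag])('abc123', extra)
print(sorted(extra.items()))  # [('flag', '1'), ('rebase_source', 'abc123')]
```

This is how e.g. hgsubversion can chain its own `extrafn` (see the `opts.get('extrafn')` hook further down) without overwriting rebase's.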
61 61 def _destrebase(repo):
62 62 # Destination defaults to the latest revision in the
63 63 # current branch
64 64 branch = repo[None].branch()
65 65 return repo[branch].rev()
66 66
67 67 def _revsetdestrebase(repo, subset, x):
68 68 # ``_rebasedefaultdest()``
69 69
70 70 # default destination for rebase.
71 71 # # XXX: Currently private because I expect the signature to change.
72 72 # # XXX: - taking rev as arguments,
73 73 # # XXX: - bailing out in case of ambiguity vs returning all data.
74 74 # # XXX: - probably merging with the merge destination.
75 75 # i18n: "_rebasedefaultdest" is a keyword
76 76 revset.getargs(x, 0, 0, _("_rebasedefaultdest takes no arguments"))
77 77 return subset & revset.baseset([_destrebase(repo)])
78 78
79 79 @command('rebase',
80 80 [('s', 'source', '',
81 81 _('rebase the specified changeset and descendants'), _('REV')),
82 82 ('b', 'base', '',
83 83 _('rebase everything from branching point of specified changeset'),
84 84 _('REV')),
85 85 ('r', 'rev', [],
86 86 _('rebase these revisions'),
87 87 _('REV')),
88 88 ('d', 'dest', '',
89 89 _('rebase onto the specified changeset'), _('REV')),
90 90 ('', 'collapse', False, _('collapse the rebased changesets')),
91 91 ('m', 'message', '',
92 92 _('use text as collapse commit message'), _('TEXT')),
93 93 ('e', 'edit', False, _('invoke editor on commit messages')),
94 94 ('l', 'logfile', '',
95 95 _('read collapse commit message from file'), _('FILE')),
96 96 ('k', 'keep', False, _('keep original changesets')),
97 97 ('', 'keepbranches', False, _('keep original branch names')),
98 98 ('D', 'detach', False, _('(DEPRECATED)')),
99 99 ('i', 'interactive', False, _('(DEPRECATED)')),
100 100 ('t', 'tool', '', _('specify merge tool')),
101 101 ('c', 'continue', False, _('continue an interrupted rebase')),
102 102 ('a', 'abort', False, _('abort an interrupted rebase'))] +
103 103 templateopts,
104 104 _('[-s REV | -b REV] [-d REV] [OPTION]'))
105 105 def rebase(ui, repo, **opts):
106 106 """move changeset (and descendants) to a different branch
107 107
108 108 Rebase uses repeated merging to graft changesets from one part of
109 109 history (the source) onto another (the destination). This can be
110 110 useful for linearizing *local* changes relative to a master
111 111 development tree.
112 112
113 113 You should not rebase changesets that have already been shared
114 114 with others. Doing so will force everybody else to perform the
115 115 same rebase or they will end up with duplicated changesets after
116 116 pulling in your rebased changesets.
117 117
118 118 In its default configuration, Mercurial will prevent you from
119 119 rebasing published changes. See :hg:`help phases` for details.
120 120
121 121 If you don't specify a destination changeset (``-d/--dest``),
122 122 rebase uses the current branch tip as the destination. (The
123 123 destination changeset is not modified by rebasing, but new
124 124 changesets are added as its descendants.)
125 125
126 126 You can specify which changesets to rebase in two ways: as a
127 127 "source" changeset or as a "base" changeset. Both are shorthand
128 128 for a topologically related set of changesets (the "source
129 129 branch"). If you specify source (``-s/--source``), rebase will
130 130 rebase that changeset and all of its descendants onto dest. If you
131 131 specify base (``-b/--base``), rebase will select ancestors of base
132 132 back to but not including the common ancestor with dest. Thus,
133 133 ``-b`` is less precise but more convenient than ``-s``: you can
134 134 specify any changeset in the source branch, and rebase will select
135 135 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
136 136 uses the parent of the working directory as the base.
137 137
138 138 For advanced usage, a third way is available through the ``--rev``
139 139 option. It allows you to specify an arbitrary set of changesets to
140 140 rebase. Descendants of revs you specify with this option are not
141 141 automatically included in the rebase.
142 142
143 143 By default, rebase recreates the changesets in the source branch
144 144 as descendants of dest and then destroys the originals. Use
145 145 ``--keep`` to preserve the original source changesets. Some
146 146 changesets in the source branch (e.g. merges from the destination
147 147 branch) may be dropped if they no longer contribute any change.
148 148
149 149 One result of the rules for selecting the destination changeset
150 150 and source branch is that, unlike ``merge``, rebase will do
151 151 nothing if you are at the branch tip of a named branch
152 152 with two heads. You need to explicitly specify source and/or
153 153 destination (or ``update`` to the other head, if it's the head of
154 154 the intended source branch).
155 155
156 156 If a rebase is interrupted to manually resolve a merge, it can be
157 157 continued with --continue/-c or aborted with --abort/-a.
158 158
159 159 .. container:: verbose
160 160
161 161 Examples:
162 162
163 163 - move "local changes" (current commit back to branching point)
164 164 to the current branch tip after a pull::
165 165
166 166 hg rebase
167 167
168 168 - move a single changeset to the stable branch::
169 169
170 170 hg rebase -r 5f493448 -d stable
171 171
172 172 - splice a commit and all its descendants onto another part of history::
173 173
174 174 hg rebase --source c0c3 --dest 4cf9
175 175
176 176 - rebase everything on a branch marked by a bookmark onto the
177 177 default branch::
178 178
179 179 hg rebase --base myfeature --dest default
180 180
181 181 - collapse a sequence of changes into a single commit::
182 182
183 183 hg rebase --collapse -r 1520:1525 -d .
184 184
185 185 - move a named branch while preserving its name::
186 186
187 187 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
188 188
189 189 Returns 0 on success, 1 if nothing to rebase or there are
190 190 unresolved conflicts.
191 191
192 192 """
193 193 originalwd = target = None
194 194 activebookmark = None
195 195 external = nullrev
196 196 # Mapping between the old revision id and either what is the new rebased
197 197 # revision or what needs to be done with the old revision. The state dict
198 198 # will be what contains most of the rebase progress state.
199 199 state = {}
200 200 skipped = set()
201 201 targetancestors = set()
202 202
203 203
204 204 lock = wlock = None
205 205 try:
206 206 wlock = repo.wlock()
207 207 lock = repo.lock()
208 208
209 209 # Validate input and define rebasing points
210 210 destf = opts.get('dest', None)
211 211 srcf = opts.get('source', None)
212 212 basef = opts.get('base', None)
213 213 revf = opts.get('rev', [])
214 214 contf = opts.get('continue')
215 215 abortf = opts.get('abort')
216 216 collapsef = opts.get('collapse', False)
217 217 collapsemsg = cmdutil.logmessage(ui, opts)
218 218 date = opts.get('date', None)
219 219 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
220 220 extrafns = []
221 221 if e:
222 222 extrafns = [e]
223 223 keepf = opts.get('keep', False)
224 224 keepbranchesf = opts.get('keepbranches', False)
225 225 # keepopen is not meant for use on the command line, but by
226 226 # other extensions
227 227 keepopen = opts.get('keepopen', False)
228 228
229 229 if opts.get('interactive'):
230 230 try:
231 231 if extensions.find('histedit'):
232 232 enablehistedit = ''
233 233 except KeyError:
234 234 enablehistedit = " --config extensions.histedit="
235 235 help = "hg%s help -e histedit" % enablehistedit
236 236 msg = _("interactive history editing is supported by the "
237 237 "'histedit' extension (see \"%s\")") % help
238 238 raise error.Abort(msg)
239 239
240 240 if collapsemsg and not collapsef:
241 241 raise error.Abort(
242 242 _('message can only be specified with collapse'))
243 243
244 244 if contf or abortf:
245 245 if contf and abortf:
246 246 raise error.Abort(_('cannot use both abort and continue'))
247 247 if collapsef:
248 248 raise error.Abort(
249 249 _('cannot use collapse with continue or abort'))
250 250 if srcf or basef or destf:
251 251 raise error.Abort(
252 252 _('abort and continue do not allow specifying revisions'))
253 253 if abortf and opts.get('tool', False):
254 254 ui.warn(_('tool option will be ignored\n'))
255 255
256 256 try:
257 257 (originalwd, target, state, skipped, collapsef, keepf,
258 258 keepbranchesf, external, activebookmark) = restorestatus(repo)
259 259 except error.RepoLookupError:
260 260 if abortf:
261 261 clearstatus(repo)
262 262 repo.ui.warn(_('rebase aborted (no revision is removed,'
263 263 ' only broken state is cleared)\n'))
264 264 return 0
265 265 else:
266 266 msg = _('cannot continue inconsistent rebase')
267 267 hint = _('use "hg rebase --abort" to clear broken state')
268 268 raise error.Abort(msg, hint=hint)
269 269 if abortf:
270 270 return abort(repo, originalwd, target, state,
271 271 activebookmark=activebookmark)
272 272 else:
273 273 if srcf and basef:
274 274 raise error.Abort(_('cannot specify both a '
275 275 'source and a base'))
276 276 if revf and basef:
277 277 raise error.Abort(_('cannot specify both a '
278 278 'revision and a base'))
279 279 if revf and srcf:
280 280 raise error.Abort(_('cannot specify both a '
281 281 'revision and a source'))
282 282
283 283 cmdutil.checkunfinished(repo)
284 284 cmdutil.bailifchanged(repo)
285 285
286 286 if destf:
287 287 dest = scmutil.revsingle(repo, destf)
288 288 else:
289 289 dest = repo[_destrebase(repo)]
290 290 destf = str(dest)
291 291
292 292 if revf:
293 293 rebaseset = scmutil.revrange(repo, revf)
294 294 if not rebaseset:
295 295 ui.status(_('empty "rev" revision set - '
296 296 'nothing to rebase\n'))
297 297 return _nothingtorebase()
298 298 elif srcf:
299 299 src = scmutil.revrange(repo, [srcf])
300 300 if not src:
301 301 ui.status(_('empty "source" revision set - '
302 302 'nothing to rebase\n'))
303 303 return _nothingtorebase()
304 304 rebaseset = repo.revs('(%ld)::', src)
305 305 assert rebaseset
306 306 else:
307 307 base = scmutil.revrange(repo, [basef or '.'])
308 308 if not base:
309 309 ui.status(_('empty "base" revision set - '
310 310 "can't compute rebase set\n"))
311 311 return _nothingtorebase()
312 312 commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
313 313 if commonanc is not None:
314 314 rebaseset = repo.revs('(%d::(%ld) - %d)::',
315 315 commonanc, base, commonanc)
316 316 else:
317 317 rebaseset = []
318 318
319 319 if not rebaseset:
320 320 # transform to list because smartsets are not comparable to
321 321 # lists. This should be improved to honor laziness of
322 322 # smartset.
323 323 if list(base) == [dest.rev()]:
324 324 if basef:
325 325 ui.status(_('nothing to rebase - %s is both "base"'
326 326 ' and destination\n') % dest)
327 327 else:
328 328 ui.status(_('nothing to rebase - working directory '
329 329 'parent is also destination\n'))
330 330 elif not repo.revs('%ld - ::%d', base, dest):
331 331 if basef:
332 332 ui.status(_('nothing to rebase - "base" %s is '
333 333 'already an ancestor of destination '
334 334 '%s\n') %
335 335 ('+'.join(str(repo[r]) for r in base),
336 336 dest))
337 337 else:
338 338 ui.status(_('nothing to rebase - working '
339 339 'directory parent is already an '
340 340 'ancestor of destination %s\n') % dest)
341 341 else: # can it happen?
342 342 ui.status(_('nothing to rebase from %s to %s\n') %
343 343 ('+'.join(str(repo[r]) for r in base), dest))
344 344 return _nothingtorebase()
345 345
346 346 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
347 347 if (not (keepf or allowunstable)
348 348 and repo.revs('first(children(%ld) - %ld)',
349 349 rebaseset, rebaseset)):
350 350 raise error.Abort(
351 351 _("can't remove original changesets with"
352 352 " unrebased descendants"),
353 353 hint=_('use --keep to keep original changesets'))
354 354
355 355 obsoletenotrebased = {}
356 356 if ui.configbool('experimental', 'rebaseskipobsolete'):
357 357 rebasesetrevs = set(rebaseset)
358 358 obsoletenotrebased = _computeobsoletenotrebased(repo,
359 359 rebasesetrevs,
360 360 dest)
361 361
362 362 # - plain prune (no successor) changesets are rebased
363 363 # - split changesets are not rebased if at least one of the
364 364 # changeset resulting from the split is an ancestor of dest
365 365 rebaseset = rebasesetrevs - set(obsoletenotrebased)
366 366 result = buildstate(repo, dest, rebaseset, collapsef,
367 367 obsoletenotrebased)
368 368
369 369 if not result:
370 370 # Empty state built, nothing to rebase
371 371 ui.status(_('nothing to rebase\n'))
372 372 return _nothingtorebase()
373 373
374 374 root = min(rebaseset)
375 375 if not keepf and not repo[root].mutable():
376 376 raise error.Abort(_("can't rebase public changeset %s")
377 377 % repo[root],
378 378 hint=_('see "hg help phases" for details'))
379 379
380 380 originalwd, target, state = result
381 381 if collapsef:
382 382 targetancestors = repo.changelog.ancestors([target],
383 383 inclusive=True)
384 384 external = externalparent(repo, state, targetancestors)
385 385
386 386 if dest.closesbranch() and not keepbranchesf:
387 387 ui.status(_('reopening closed branch head %s\n') % dest)
388 388
389 389 if keepbranchesf and collapsef:
390 390 branches = set()
391 391 for rev in state:
392 392 branches.add(repo[rev].branch())
393 393 if len(branches) > 1:
394 394 raise error.Abort(_('cannot collapse multiple named '
395 395 'branches'))
396 396
397 397 # Rebase
398 398 if not targetancestors:
399 399 targetancestors = repo.changelog.ancestors([target], inclusive=True)
400 400
401 401 # Keep track of the current bookmarks in order to reset them later
402 402 currentbookmarks = repo._bookmarks.copy()
403 403 activebookmark = activebookmark or repo._activebookmark
404 404 if activebookmark:
405 405 bookmarks.deactivate(repo)
406 406
407 407 extrafn = _makeextrafn(extrafns)
408 408
409 409 sortedstate = sorted(state)
410 410 total = len(sortedstate)
411 411 pos = 0
412 412 for rev in sortedstate:
413 413 ctx = repo[rev]
414 414 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
415 415 ctx.description().split('\n', 1)[0])
416 416 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
417 417 if names:
418 418 desc += ' (%s)' % ' '.join(names)
419 419 pos += 1
420 420 if state[rev] == revtodo:
421 421 ui.status(_('rebasing %s\n') % desc)
422 422 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
423 423 _('changesets'), total)
424 424 p1, p2, base = defineparents(repo, rev, target, state,
425 425 targetancestors)
426 426 storestatus(repo, originalwd, target, state, collapsef, keepf,
427 427 keepbranchesf, external, activebookmark)
428 428 if len(repo[None].parents()) == 2:
429 429 repo.ui.debug('resuming interrupted rebase\n')
430 430 else:
431 431 try:
432 432 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
433 433 'rebase')
434 434 stats = rebasenode(repo, rev, p1, base, state,
435 435 collapsef, target)
436 436 if stats and stats[3] > 0:
437 437 raise error.InterventionRequired(
438 438 _('unresolved conflicts (see hg '
439 439 'resolve, then hg rebase --continue)'))
440 440 finally:
441 441 ui.setconfig('ui', 'forcemerge', '', 'rebase')
442 442 if not collapsef:
443 443 merging = p2 != nullrev
444 444 editform = cmdutil.mergeeditform(merging, 'rebase')
445 445 editor = cmdutil.getcommiteditor(editform=editform, **opts)
446 446 newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
447 447 editor=editor,
448 448 keepbranches=keepbranchesf,
449 449 date=date)
450 450 else:
451 451 # Skip commit if we are collapsing
452 452 repo.dirstate.beginparentchange()
453 453 repo.setparents(repo[p1].node())
454 454 repo.dirstate.endparentchange()
455 455 newnode = None
456 456 # Update the state
457 457 if newnode is not None:
458 458 state[rev] = repo[newnode].rev()
459 459 ui.debug('rebased as %s\n' % short(newnode))
460 460 else:
461 461 if not collapsef:
462 462 ui.warn(_('note: rebase of %d:%s created no changes '
463 463 'to commit\n') % (rev, ctx))
464 464 skipped.add(rev)
465 465 state[rev] = p1
466 466 ui.debug('next revision set to %s\n' % p1)
467 467 elif state[rev] == nullmerge:
468 468 ui.debug('ignoring null merge rebase of %s\n' % rev)
469 469 elif state[rev] == revignored:
470 470 ui.status(_('not rebasing ignored %s\n') % desc)
471 471 elif state[rev] == revprecursor:
472 472 targetctx = repo[obsoletenotrebased[rev]]
473 473 desctarget = '%d:%s "%s"' % (targetctx.rev(), targetctx,
474 474 targetctx.description().split('\n', 1)[0])
475 475 msg = _('note: not rebasing %s, already in destination as %s\n')
476 476 ui.status(msg % (desc, desctarget))
477 477 elif state[rev] == revpruned:
478 478 msg = _('note: not rebasing %s, it has no successor\n')
479 479 ui.status(msg % desc)
480 480 else:
481 481 ui.status(_('already rebased %s as %s\n') %
482 482 (desc, repo[state[rev]]))
483 483
484 484 ui.progress(_('rebasing'), None)
485 485 ui.note(_('rebase merging completed\n'))
486 486
487 487 if collapsef and not keepopen:
488 488 p1, p2, _base = defineparents(repo, min(state), target,
489 489 state, targetancestors)
490 490 editopt = opts.get('edit')
491 491 editform = 'rebase.collapse'
492 492 if collapsemsg:
493 493 commitmsg = collapsemsg
494 494 else:
495 495 commitmsg = 'Collapsed revision'
496 496 for rebased in state:
497 497 if rebased not in skipped and state[rebased] > nullmerge:
498 498 commitmsg += '\n* %s' % repo[rebased].description()
499 499 editopt = True
500 500 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
501 501 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
502 502 extrafn=extrafn, editor=editor,
503 503 keepbranches=keepbranchesf,
504 504 date=date)
505 505 if newnode is None:
506 506 newrev = target
507 507 else:
508 508 newrev = repo[newnode].rev()
509 509 for oldrev in state.iterkeys():
510 510 if state[oldrev] > nullmerge:
511 511 state[oldrev] = newrev
512 512
513 513 if 'qtip' in repo.tags():
514 514 updatemq(repo, state, skipped, **opts)
515 515
516 516 if currentbookmarks:
517 517 # Nodeids are needed to reset bookmarks
518 518 nstate = {}
519 519 for k, v in state.iteritems():
520 520 if v > nullmerge:
521 521 nstate[repo[k].node()] = repo[v].node()
522 522 # XXX this is the same as dest.node() for the non-continue path --
523 523 # this should probably be cleaned up
524 524 targetnode = repo[target].node()
525 525
526 526 # restore original working directory
527 527 # (we do this before stripping)
528 528 newwd = state.get(originalwd, originalwd)
529 529 if newwd < 0:
530 530 # original directory is a parent of rebase set root or ignored
531 531 newwd = originalwd
532 532 if newwd not in [c.rev() for c in repo[None].parents()]:
533 533 ui.note(_("update back to initial working directory parent\n"))
534 534 hg.updaterepo(repo, newwd, False)
535 535
536 536 if not keepf:
537 537 collapsedas = None
538 538 if collapsef:
539 539 collapsedas = newnode
540 540 clearrebased(ui, repo, state, skipped, collapsedas)
541 541
542 542 tr = None
543 543 try:
544 544 tr = repo.transaction('bookmark')
545 545 if currentbookmarks:
546 546 updatebookmarks(repo, targetnode, nstate, currentbookmarks, tr)
547 547 if activebookmark not in repo._bookmarks:
548 548 # active bookmark was divergent one and has been deleted
549 549 activebookmark = None
550 550 tr.close()
551 551 finally:
552 552 release(tr)
553 553 clearstatus(repo)
554 554
555 555 ui.note(_("rebase completed\n"))
556 556 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
557 557 if skipped:
558 558 ui.note(_("%d revisions have been skipped\n") % len(skipped))
559 559
560 560 if (activebookmark and
561 561 repo['.'].node() == repo._bookmarks[activebookmark]):
562 562 bookmarks.activate(repo, activebookmark)
563 563
564 564 finally:
565 565 release(lock, wlock)
566 566
567 567 def externalparent(repo, state, targetancestors):
568 568 """Return the revision that should be used as the second parent
569 569 when the revisions in state are collapsed on top of targetancestors.
570 570 Abort if there is more than one parent.
571 571 """
572 572 parents = set()
573 573 source = min(state)
574 574 for rev in state:
575 575 if rev == source:
576 576 continue
577 577 for p in repo[rev].parents():
578 578 if (p.rev() not in state
579 579 and p.rev() not in targetancestors):
580 580 parents.add(p.rev())
581 581 if not parents:
582 582 return nullrev
583 583 if len(parents) == 1:
584 584 return parents.pop()
585 585 raise error.Abort(_('unable to collapse on top of %s, there is more '
586 586 'than one external parent: %s') %
587 587 (max(targetancestors),
588 588 ', '.join(str(p) for p in sorted(parents))))
589 589
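The external-parent rule in `externalparent` above can be sketched outside Mercurial as a plain function — a hypothetical helper, not part of the Mercurial API; `parents_of` and the revision numbers are purely illustrative:

```python
# Sketch of the rule above: collect parents of the rebased set that lie
# outside both the set and the target's ancestors; the collapse is only
# well defined when there is at most one such "external" parent.
def external_parent(state, parents_of, targetancestors, nullrev=-1):
    source = min(state)
    external = set()
    for rev in state:
        if rev == source:
            continue
        for p in parents_of(rev):
            if p not in state and p not in targetancestors:
                external.add(p)
    if not external:
        return nullrev
    if len(external) == 1:
        return external.pop()
    raise ValueError('more than one external parent: %s'
                     % ', '.join(str(p) for p in sorted(external)))
```

With a linear set {2, 3} whose only out-of-set, out-of-ancestors parent is 5, this returns 5; with no such parent it returns the null revision.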
590 590 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
591 591 keepbranches=False, date=None):
592 592 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
593 593 but also store useful information in extra.
594 594 Return node of committed revision.'''
595 595 dsguard = cmdutil.dirstateguard(repo, 'rebase')
596 596 try:
597 597 repo.setparents(repo[p1].node(), repo[p2].node())
598 598 ctx = repo[rev]
599 599 if commitmsg is None:
600 600 commitmsg = ctx.description()
601 601 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
602 602 extra = ctx.extra().copy()
603 603 if not keepbranches:
604 604 del extra['branch']
605 605 extra['rebase_source'] = ctx.hex()
606 606 if extrafn:
607 607 extrafn(ctx, extra)
608 608
609 609 backup = repo.ui.backupconfig('phases', 'new-commit')
610 610 try:
611 611 targetphase = max(ctx.phase(), phases.draft)
612 612 repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
613 613 if keepbranch:
614 614 repo.ui.setconfig('ui', 'allowemptycommit', True)
615 615 # Commit might fail if unresolved files exist
616 616 if date is None:
617 617 date = ctx.date()
618 618 newnode = repo.commit(text=commitmsg, user=ctx.user(),
619 619 date=date, extra=extra, editor=editor)
620 620 finally:
621 621 repo.ui.restoreconfig(backup)
622 622
623 623 repo.dirstate.setbranch(repo[newnode].branch())
624 624 dsguard.close()
625 625 return newnode
626 626 finally:
627 627 release(dsguard)
628 628
629 629 def rebasenode(repo, rev, p1, base, state, collapse, target):
630 630 'Rebase a single revision rev on top of p1 using base as merge ancestor'
631 631 # Merge phase
632 632 # Update to target and merge it with local
633 633 if repo['.'].rev() != p1:
634 634 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
635 merge.update(repo, p1, False, True, False)
635 merge.update(repo, p1, False, True)
636 636 else:
637 637 repo.ui.debug(" already in target\n")
638 638 repo.dirstate.write(repo.currenttransaction())
639 639 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
640 640 if base is not None:
641 641 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
642 642 # When collapsing in-place, the parent is the common ancestor, we
643 643 # have to allow merging with it.
644 stats = merge.update(repo, rev, True, True, False, base, collapse,
644 stats = merge.update(repo, rev, True, True, base, collapse,
645 645 labels=['dest', 'source'])
646 646 if collapse:
647 647 copies.duplicatecopies(repo, rev, target)
648 648 else:
649 649 # If we're not using --collapse, we need to
650 650 # duplicate copies between the revision we're
651 651 # rebasing and its first parent, but *not*
652 652 # duplicate any copies that have already been
653 653 # performed in the destination.
654 654 p1rev = repo[rev].p1().rev()
655 655 copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
656 656 return stats
657 657
658 658 def nearestrebased(repo, rev, state):
659 659 """return the nearest ancestors of rev in the rebase result"""
660 660 rebased = [r for r in state if state[r] > nullmerge]
661 661 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
662 662 if candidates:
663 663 return state[candidates.first()]
664 664 else:
665 665 return None
666 666
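A minimal model of `nearestrebased` above, assuming an `ancestors_of` callback in place of the `max(%ld and (::%d))` revset — both names are illustrative, not Mercurial API:

```python
# Among already-rebased revisions (state value above nullmerge) that are
# ancestors of rev, pick the highest one and return its rebased location.
def nearest_rebased(state, ancestors_of, rev, nullmerge=-2):
    rebased = [r for r in state if state[r] > nullmerge]
    candidates = [r for r in rebased if r in ancestors_of(rev)]
    if candidates:
        return state[max(candidates)]
    return None
```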
667 667 def defineparents(repo, rev, target, state, targetancestors):
668 668 'Return the new parent relationship of the revision that will be rebased'
669 669 parents = repo[rev].parents()
670 670 p1 = p2 = nullrev
671 671
672 672 p1n = parents[0].rev()
673 673 if p1n in targetancestors:
674 674 p1 = target
675 675 elif p1n in state:
676 676 if state[p1n] == nullmerge:
677 677 p1 = target
678 678 elif state[p1n] in revskipped:
679 679 p1 = nearestrebased(repo, p1n, state)
680 680 if p1 is None:
681 681 p1 = target
682 682 else:
683 683 p1 = state[p1n]
684 684 else: # p1n external
685 685 p1 = target
686 686 p2 = p1n
687 687
688 688 if len(parents) == 2 and parents[1].rev() not in targetancestors:
689 689 p2n = parents[1].rev()
690 690 # interesting second parent
691 691 if p2n in state:
692 692 if p1 == target: # p1n in targetancestors or external
693 693 p1 = state[p2n]
694 694 elif state[p2n] in revskipped:
695 695 p2 = nearestrebased(repo, p2n, state)
696 696 if p2 is None:
697 697 # no ancestors rebased yet, detach
698 698 p2 = target
699 699 else:
700 700 p2 = state[p2n]
701 701 else: # p2n external
702 702 if p2 != nullrev: # p1n external too => rev is a merged revision
703 703 raise error.Abort(_('cannot use revision %d as base, result '
704 704 'would have 3 parents') % rev)
705 705 p2 = p2n
706 706 repo.ui.debug(" future parents are %d and %d\n" %
707 707 (repo[p1].rev(), repo[p2].rev()))
708 708
709 709 if rev == min(state):
710 710 # Case (1) initial changeset of a non-detaching rebase.
711 711 # Let the merge mechanism find the base itself.
712 712 base = None
713 713 elif not repo[rev].p2():
714 714 # Case (2) detaching the node with a single parent, use this parent
715 715 base = repo[rev].p1().rev()
716 716 else:
717 717 # Assuming there is a p1, this is the case where there also is a p2.
718 718 # We are thus rebasing a merge and need to pick the right merge base.
719 719 #
720 720 # Imagine we have:
721 721 # - M: current rebase revision in this step
722 722 # - A: one parent of M
723 723 # - B: other parent of M
724 724 # - D: destination of this merge step (p1 var)
725 725 #
726 726 # Consider the case where D is a descendant of A or B and the other is
727 727 # 'outside'. In this case, the right merge base is the D ancestor.
728 728 #
729 729 # An informal proof, assuming A is 'outside' and B is the D ancestor:
730 730 #
731 731 # If we pick B as the base, the merge involves:
732 732 # - changes from B to M (actual changeset payload)
733 733 # - changes from B to D (induced by rebase, as D is a rebased
734 734 # version of B)
735 735 # Which exactly represents the rebase operation.
736 736 #
737 737 # If we pick A as the base, the merge involves:
738 738 # - changes from A to M (actual changeset payload)
739 739 # - changes from A to D (which include changes between unrelated A and B
740 740 # plus changes induced by rebase)
741 741 # Which does not represent anything sensible and creates a lot of
742 742 # conflicts. A is thus not the right choice - B is.
743 743 #
744 744 # Note: The base found in this 'proof' is only correct in the specified
745 745 # case. This base does not make sense if D is not a descendant of A or B
746 746 # or if the other parent is not 'outside' (especially not if the other
747 747 # parent has been rebased). The current implementation does not
748 748 # make it feasible to consider different cases separately. In these
749 749 # other cases we currently just leave it to the user to correctly
750 750 # resolve an impossible merge using a wrong ancestor.
751 751 for p in repo[rev].parents():
752 752 if state.get(p.rev()) == p1:
753 753 base = p.rev()
754 754 break
755 755 else: # fallback when base not found
756 756 base = None
757 757
758 758 # Raise because this function is called wrong (see issue 4106)
759 759 raise AssertionError('no base found to rebase on '
760 760 '(defineparents called wrong)')
761 761 return p1, p2, base
762 762
763 763 def isagitpatch(repo, patchname):
764 764 'Return true if the given patch is in git format'
765 765 mqpatch = os.path.join(repo.mq.path, patchname)
766 766 for line in patch.linereader(file(mqpatch, 'rb')):
767 767 if line.startswith('diff --git'):
768 768 return True
769 769 return False
770 770
771 771 def updatemq(repo, state, skipped, **opts):
772 772 'Update rebased mq patches - finalize and then import them'
773 773 mqrebase = {}
774 774 mq = repo.mq
775 775 original_series = mq.fullseries[:]
776 776 skippedpatches = set()
777 777
778 778 for p in mq.applied:
779 779 rev = repo[p.node].rev()
780 780 if rev in state:
781 781 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
782 782 (rev, p.name))
783 783 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
784 784 else:
785 785 # Applied but not rebased, not sure this should happen
786 786 skippedpatches.add(p.name)
787 787
788 788 if mqrebase:
789 789 mq.finish(repo, mqrebase.keys())
790 790
791 791 # We must start import from the newest revision
792 792 for rev in sorted(mqrebase, reverse=True):
793 793 if rev not in skipped:
794 794 name, isgit = mqrebase[rev]
795 795 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
796 796 (name, state[rev], repo[state[rev]]))
797 797 mq.qimport(repo, (), patchname=name, git=isgit,
798 798 rev=[str(state[rev])])
799 799 else:
800 800 # Rebased and skipped
801 801 skippedpatches.add(mqrebase[rev][0])
802 802
803 803 # Patches were either applied and rebased and imported in
804 804 # order, applied and removed or unapplied. Discard the removed
805 805 # ones while preserving the original series order and guards.
806 806 newseries = [s for s in original_series
807 807 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
808 808 mq.fullseries[:] = newseries
809 809 mq.seriesdirty = True
810 810 mq.savedirty()
811 811
812 812 def updatebookmarks(repo, targetnode, nstate, originalbookmarks, tr):
813 813 'Move bookmarks to their correct changesets, and delete divergent ones'
814 814 marks = repo._bookmarks
815 815 for k, v in originalbookmarks.iteritems():
816 816 if v in nstate:
817 817 # update the bookmarks for revs that have moved
818 818 marks[k] = nstate[v]
819 819 bookmarks.deletedivergent(repo, [targetnode], k)
820 820 marks.recordchange(tr)
821 821
822 822 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
823 823 external, activebookmark):
824 824 'Store the current status to allow recovery'
825 825 f = repo.vfs("rebasestate", "w")
826 826 f.write(repo[originalwd].hex() + '\n')
827 827 f.write(repo[target].hex() + '\n')
828 828 f.write(repo[external].hex() + '\n')
829 829 f.write('%d\n' % int(collapse))
830 830 f.write('%d\n' % int(keep))
831 831 f.write('%d\n' % int(keepbranches))
832 832 f.write('%s\n' % (activebookmark or ''))
833 833 for d, v in state.iteritems():
834 834 oldrev = repo[d].hex()
835 835 if v >= 0:
836 836 newrev = repo[v].hex()
837 837 elif v == revtodo:
838 838 # To maintain format compatibility, we have to use nullid.
839 839 # Please do remove this special case when upgrading the format.
840 840 newrev = hex(nullid)
841 841 else:
842 842 newrev = v
843 843 f.write("%s:%s\n" % (oldrev, newrev))
844 844 f.close()
845 845 repo.ui.debug('rebase status stored\n')
846 846
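The `rebasestate` layout written by `storestatus` above is seven header lines (originalwd, target, external, collapse, keep, keepbranches, active bookmark) followed by `oldrev:newrev` pairs. A simplified round-trip of that format — hypothetical helpers, with hex node ids abbreviated — might look like:

```python
# Serialize and parse the simplified layout: seven header lines, then one
# oldrev:newrev pair per rebased revision.
def dump_state(originalwd, target, external, collapse, keep,
               keepbranches, bookmark, pairs):
    lines = [originalwd, target, external,
             str(int(collapse)), str(int(keep)), str(int(keepbranches)),
             bookmark or '']
    lines += ['%s:%s' % (old, new) for old, new in pairs]
    return '\n'.join(lines) + '\n'

def parse_state(text):
    lines = text.splitlines()
    return lines[:7], [tuple(l.split(':')) for l in lines[7:]]
```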
847 847 def clearstatus(repo):
848 848 'Remove the status files'
849 849 _clearrebasesetvisibiliy(repo)
850 850 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
851 851
852 852 def restorestatus(repo):
853 853 'Restore a previously stored status'
854 854 keepbranches = None
855 855 target = None
856 856 collapse = False
857 857 external = nullrev
858 858 activebookmark = None
859 859 state = {}
860 860
861 861 try:
862 862 f = repo.vfs("rebasestate")
863 863 for i, l in enumerate(f.read().splitlines()):
864 864 if i == 0:
865 865 originalwd = repo[l].rev()
866 866 elif i == 1:
867 867 target = repo[l].rev()
868 868 elif i == 2:
869 869 external = repo[l].rev()
870 870 elif i == 3:
871 871 collapse = bool(int(l))
872 872 elif i == 4:
873 873 keep = bool(int(l))
874 874 elif i == 5:
875 875 keepbranches = bool(int(l))
876 876 elif i == 6 and not (len(l) == 81 and ':' in l):
877 877 # line 6 is a recent addition, so for backwards compatibility
878 878 # check that the line doesn't look like the oldrev:newrev lines
879 879 activebookmark = l
880 880 else:
881 881 oldrev, newrev = l.split(':')
882 882 if newrev in (str(nullmerge), str(revignored),
883 883 str(revprecursor), str(revpruned)):
884 884 state[repo[oldrev].rev()] = int(newrev)
885 885 elif newrev == nullid:
886 886 state[repo[oldrev].rev()] = revtodo
887 887 # Legacy compat special case
888 888 else:
889 889 state[repo[oldrev].rev()] = repo[newrev].rev()
890 890
891 891 except IOError as err:
892 892 if err.errno != errno.ENOENT:
893 893 raise
894 894 raise error.Abort(_('no rebase in progress'))
895 895
896 896 if keepbranches is None:
897 897 raise error.Abort(_('.hg/rebasestate is incomplete'))
898 898
899 899 skipped = set()
900 900 # recompute the set of skipped revs
901 901 if not collapse:
902 902 seen = set([target])
903 903 for old, new in sorted(state.items()):
904 904 if new != revtodo and new in seen:
905 905 skipped.add(old)
906 906 seen.add(new)
907 907 repo.ui.debug('computed skipped revs: %s\n' %
908 908 (' '.join(str(r) for r in sorted(skipped)) or None))
909 909 repo.ui.debug('rebase status resumed\n')
910 910 _setrebasesetvisibility(repo, state.keys())
911 911 return (originalwd, target, state, skipped,
912 912 collapse, keep, keepbranches, external, activebookmark)
913 913
914 914 def needupdate(repo, state):
915 915 '''check whether we should `update --clean` away from a merge, or if
916 916 somehow the working dir got forcibly updated, e.g. by older hg'''
917 917 parents = [p.rev() for p in repo[None].parents()]
918 918
919 919 # Are we in a merge state at all?
920 920 if len(parents) < 2:
921 921 return False
922 922
923 923 # We should be standing on the first as-of-yet unrebased commit.
924 924 firstunrebased = min([old for old, new in state.iteritems()
925 925 if new == nullrev])
926 926 if firstunrebased in parents:
927 927 return True
928 928
929 929 return False
930 930
931 931 def abort(repo, originalwd, target, state, activebookmark=None):
932 932 '''Restore the repository to its original state. Additional args:
933 933
934 934 activebookmark: the name of the bookmark that should be active after the
935 935 restore'''
936 936
937 937 try:
938 938 # If the first commits in the rebased set get skipped during the rebase,
939 939 # their values within the state mapping will be the target rev id. The
940 940 # dstates list must not contain the target rev (issue4896)
941 941 dstates = [s for s in state.values() if s >= 0 and s != target]
942 942 immutable = [d for d in dstates if not repo[d].mutable()]
943 943 cleanup = True
944 944 if immutable:
945 945 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
946 946 % ', '.join(str(repo[r]) for r in immutable),
947 947 hint=_('see "hg help phases" for details'))
948 948 cleanup = False
949 949
950 950 descendants = set()
951 951 if dstates:
952 952 descendants = set(repo.changelog.descendants(dstates))
953 953 if descendants - set(dstates):
954 954 repo.ui.warn(_("warning: new changesets detected on target branch, "
955 955 "can't strip\n"))
956 956 cleanup = False
957 957
958 958 if cleanup:
959 959 # Update away from the rebase if necessary
960 960 if needupdate(repo, state):
961 merge.update(repo, originalwd, False, True, False)
961 merge.update(repo, originalwd, False, True)
962 962
963 963 # Strip from the first rebased revision
964 964 rebased = filter(lambda x: x >= 0 and x != target, state.values())
965 965 if rebased:
966 966 strippoints = [
967 967 c.node() for c in repo.set('roots(%ld)', rebased)]
968 968 # no backup of rebased cset versions needed
969 969 repair.strip(repo.ui, repo, strippoints)
970 970
971 971 if activebookmark and activebookmark in repo._bookmarks:
972 972 bookmarks.activate(repo, activebookmark)
973 973
974 974 finally:
975 975 clearstatus(repo)
976 976 repo.ui.warn(_('rebase aborted\n'))
977 977 return 0
978 978
979 979 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
980 980 '''Define which revisions are going to be rebased and where
981 981
982 982 repo: repo
983 983 dest: context
984 984 rebaseset: set of rev
985 985 '''
986 986 _setrebasesetvisibility(repo, rebaseset)
987 987
988 988 # This check isn't strictly necessary, since mq detects commits over an
989 989 # applied patch. But it prevents messing up the working directory when
990 990 # a partially completed rebase is blocked by mq.
991 991 if 'qtip' in repo.tags() and (dest.node() in
992 992 [s.node for s in repo.mq.applied]):
993 993 raise error.Abort(_('cannot rebase onto an applied mq patch'))
994 994
995 995 roots = list(repo.set('roots(%ld)', rebaseset))
996 996 if not roots:
997 997 raise error.Abort(_('no matching revisions'))
998 998 roots.sort()
999 999 state = {}
1000 1000 detachset = set()
1001 1001 for root in roots:
1002 1002 commonbase = root.ancestor(dest)
1003 1003 if commonbase == root:
1004 1004 raise error.Abort(_('source is ancestor of destination'))
1005 1005 if commonbase == dest:
1006 1006 samebranch = root.branch() == dest.branch()
1007 1007 if not collapse and samebranch and root in dest.children():
1008 1008 repo.ui.debug('source is a child of destination\n')
1009 1009 return None
1010 1010
1011 1011 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
1012 1012 state.update(dict.fromkeys(rebaseset, revtodo))
1013 1013 # Rebase tries to turn <dest> into a parent of <root> while
1014 1014 # preserving the number of parents of rebased changesets:
1015 1015 #
1016 1016 # - A changeset with a single parent will always be rebased as a
1017 1017 # changeset with a single parent.
1018 1018 #
1019 1019 # - A merge will be rebased as merge unless its parents are both
1020 1020 # ancestors of <dest> or are themselves in the rebased set and
1021 1021 # pruned while rebased.
1022 1022 #
1023 1023 # If one parent of <root> is an ancestor of <dest>, the rebased
1024 1024 # version of this parent will be <dest>. This is always true with
1025 1025 # --base option.
1026 1026 #
1027 1027 # Otherwise, we need to *replace* the original parents with
1028 1028 # <dest>. This "detaches" the rebased set from its former location
1029 1029 # and rebases it onto <dest>. Changes introduced by ancestors of
1030 1030 # <root> not common with <dest> (the detachset, marked as
1031 1031 # nullmerge) are "removed" from the rebased changesets.
1032 1032 #
1033 1033 # - If <root> has a single parent, set it to <dest>.
1034 1034 #
1035 1035 # - If <root> is a merge, we cannot decide which parent to
1036 1036 # replace, the rebase operation is not clearly defined.
1037 1037 #
1038 1038 # The table below sums up this behavior:
1039 1039 #
1040 1040 # +------------------+----------------------+-------------------------+
1041 1041 # | | one parent | merge |
1042 1042 # +------------------+----------------------+-------------------------+
1043 1043 # | parent in | new parent is <dest> | parents in ::<dest> are |
1044 1044 # | ::<dest> | | remapped to <dest> |
1045 1045 # +------------------+----------------------+-------------------------+
1046 1046 # | unrelated source | new parent is <dest> | ambiguous, abort |
1047 1047 # +------------------+----------------------+-------------------------+
1048 1048 #
1049 1049 # The actual abort is handled by `defineparents`
1050 1050 if len(root.parents()) <= 1:
1051 1051 # ancestors of <root> not ancestors of <dest>
1052 1052 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1053 1053 [root.rev()]))
1054 1054 for r in detachset:
1055 1055 if r not in state:
1056 1056 state[r] = nullmerge
1057 1057 if len(roots) > 1:
1058 1058 # If we have multiple roots, we may have "holes" in the rebase set.
1059 1059 # Rebase roots that descend from those "holes" should not be detached as
1060 1060 # other roots are. We use the special `revignored` to inform rebase that
1061 1061 # the revision should be ignored but that `defineparents` should search
1062 1062 # a rebase destination that makes sense regarding the rebased topology.
1063 1063 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1064 1064 for ignored in set(rebasedomain) - set(rebaseset):
1065 1065 state[ignored] = revignored
1066 1066 for r in obsoletenotrebased:
1067 1067 if obsoletenotrebased[r] is None:
1068 1068 state[r] = revpruned
1069 1069 else:
1070 1070 state[r] = revprecursor
1071 1071 return repo['.'].rev(), dest.rev(), state
1072 1072
1073 1073 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1074 1074 """dispose of rebased revisions at the end of the rebase
1075 1075 
1076 1076 If `collapsedas` is not None, the rebase was a collapse whose result is the
1077 1077 `collapsedas` node."""
1078 1078 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1079 1079 markers = []
1080 1080 for rev, newrev in sorted(state.items()):
1081 1081 if newrev >= 0:
1082 1082 if rev in skipped:
1083 1083 succs = ()
1084 1084 elif collapsedas is not None:
1085 1085 succs = (repo[collapsedas],)
1086 1086 else:
1087 1087 succs = (repo[newrev],)
1088 1088 markers.append((repo[rev], succs))
1089 1089 if markers:
1090 1090 obsolete.createmarkers(repo, markers)
1091 1091 else:
1092 1092 rebased = [rev for rev in state if state[rev] > nullmerge]
1093 1093 if rebased:
1094 1094 stripped = []
1095 1095 for root in repo.set('roots(%ld)', rebased):
1096 1096 if set(repo.changelog.descendants([root.rev()])) - set(state):
1097 1097 ui.warn(_("warning: new changesets detected "
1098 1098 "on source branch, not stripping\n"))
1099 1099 else:
1100 1100 stripped.append(root.node())
1101 1101 if stripped:
1102 1102 # backup the old csets by default
1103 1103 repair.strip(ui, repo, stripped, "all")
1104 1104
1105 1105
1106 1106 def pullrebase(orig, ui, repo, *args, **opts):
1107 1107 'Call rebase after pull if the latter has been invoked with --rebase'
1108 1108 ret = None
1109 1109 if opts.get('rebase'):
1110 1110 wlock = lock = None
1111 1111 try:
1112 1112 wlock = repo.wlock()
1113 1113 lock = repo.lock()
1114 1114 if opts.get('update'):
1115 1115 del opts['update']
1116 1116 ui.debug('--update and --rebase are not compatible, ignoring '
1117 1117 'the update flag\n')
1118 1118
1119 1119 movemarkfrom = repo['.'].node()
1120 1120 revsprepull = len(repo)
1121 1121 origpostincoming = commands.postincoming
1122 1122 def _dummy(*args, **kwargs):
1123 1123 pass
1124 1124 commands.postincoming = _dummy
1125 1125 try:
1126 1126 ret = orig(ui, repo, *args, **opts)
1127 1127 finally:
1128 1128 commands.postincoming = origpostincoming
1129 1129 revspostpull = len(repo)
1130 1130 if revspostpull > revsprepull:
1131 1131 # --rev option from pull conflicts with rebase's own --rev
1132 1132 # dropping it
1133 1133 if 'rev' in opts:
1134 1134 del opts['rev']
1135 1135 # positional argument from pull conflicts with rebase's own
1136 1136 # --source.
1137 1137 if 'source' in opts:
1138 1138 del opts['source']
1139 1139 rebase(ui, repo, **opts)
1140 1140 branch = repo[None].branch()
1141 1141 dest = repo[branch].rev()
1142 1142 if dest != repo['.'].rev():
1143 1143 # there was nothing to rebase, so we force an update
1144 1144 hg.update(repo, dest)
1145 1145 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
1146 1146 ui.status(_("updating bookmark %s\n")
1147 1147 % repo._activebookmark)
1148 1148 finally:
1149 1149 release(lock, wlock)
1150 1150 else:
1151 1151 if opts.get('tool'):
1152 1152 raise error.Abort(_('--tool can only be used with --rebase'))
1153 1153 ret = orig(ui, repo, *args, **opts)
1154 1154
1155 1155 return ret
1156 1156
1157 1157 def _setrebasesetvisibility(repo, revs):
1158 1158 """store the currently rebased set on the repo object
1159 1159
1160 1160 This is used by another function to prevent rebased revisions from
1161 1161 becoming hidden (see issue4505)"""
1162 1162 repo = repo.unfiltered()
1163 1163 revs = set(revs)
1164 1164 repo._rebaseset = revs
1165 1165 # invalidate cache if visibility changes
1166 1166 hiddens = repo.filteredrevcache.get('visible', set())
1167 1167 if revs & hiddens:
1168 1168 repo.invalidatevolatilesets()
1169 1169
1170 1170 def _clearrebasesetvisibiliy(repo):
1171 1171 """remove rebaseset data from the repo"""
1172 1172 repo = repo.unfiltered()
1173 1173 if '_rebaseset' in vars(repo):
1174 1174 del repo._rebaseset
1175 1175
1176 1176 def _rebasedvisible(orig, repo):
1177 1177 """ensure rebased revs stay visible (see issue4505)"""
1178 1178 blockers = orig(repo)
1179 1179 blockers.update(getattr(repo, '_rebaseset', ()))
1180 1180 return blockers
1181 1181
1182 1182 def _computeobsoletenotrebased(repo, rebasesetrevs, dest):
1183 1183 """return a mapping obsolete => successor for all obsolete nodes to be
1184 1184 rebased that have a successor in the destination
1185 1185
1186 1186 obsolete => None entries in the mapping indicate nodes with no successor"""
1187 1187 obsoletenotrebased = {}
1188 1188
1189 1189 # Build a mapping successor => obsolete nodes for the obsolete
1190 1190 # nodes to be rebased
1191 1191 allsuccessors = {}
1192 1192 cl = repo.changelog
1193 1193 for r in rebasesetrevs:
1194 1194 n = repo[r]
1195 1195 if n.obsolete():
1196 1196 node = cl.node(r)
1197 1197 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1198 1198 try:
1199 1199 allsuccessors[cl.rev(s)] = cl.rev(node)
1200 1200 except LookupError:
1201 1201 pass
1202 1202
1203 1203 if allsuccessors:
1204 1204 # Look for successors of obsolete nodes to be rebased among
1205 1205 # the ancestors of dest
1206 1206 ancs = cl.ancestors([repo[dest].rev()],
1207 1207 stoprev=min(allsuccessors),
1208 1208 inclusive=True)
1209 1209 for s in allsuccessors:
1210 1210 if s in ancs:
1211 1211 obsoletenotrebased[allsuccessors[s]] = s
1212 1212 elif (s == allsuccessors[s] and
1213 1213 allsuccessors.values().count(s) == 1):
1214 1214 # plain prune
1215 1215 obsoletenotrebased[s] = None
1216 1216
1217 1217 return obsoletenotrebased
1218 1218
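The mapping built by `_computeobsoletenotrebased` above can be sketched with plain dicts — an illustrative `successors_of` callback stands in for the obsstore lookup, and a pruned node is modeled as being its own only successor, mirroring how `obsolete.allsuccessors` reports it:

```python
# For each obsolete rev in the rebase set, record its successor when that
# successor is already an ancestor of the destination; record None for
# plain prunes (a node whose only "successor" is itself).
def obsolete_not_rebased(rebaseset, successors_of, destancestors):
    allsuccessors = {}  # successor -> obsolete rev
    for r in rebaseset:
        for s in successors_of(r):
            allsuccessors[s] = r
    result = {}
    for s, obs in allsuccessors.items():
        if s in destancestors:
            result[obs] = s
        elif s == obs and list(allsuccessors.values()).count(s) == 1:
            result[s] = None
    return result
```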
1219 1219 def summaryhook(ui, repo):
1220 1220 if not os.path.exists(repo.join('rebasestate')):
1221 1221 return
1222 1222 try:
1223 1223 state = restorestatus(repo)[2]
1224 1224 except error.RepoLookupError:
1225 1225 # i18n: column positioning for "hg summary"
1226 1226 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1227 1227 ui.write(msg)
1228 1228 return
1229 1229 numrebased = len([i for i in state.itervalues() if i >= 0])
1230 1230 # i18n: column positioning for "hg summary"
1231 1231 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1232 1232 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1233 1233 ui.label(_('%d remaining'), 'rebase.remaining') %
1234 1234 (len(state) - numrebased)))
1235 1235
1236 1236 def uisetup(ui):
1237 1237 # Replace pull with a decorator to provide --rebase option
1238 1238 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1239 1239 entry[1].append(('', 'rebase', None,
1240 1240 _("rebase working directory to branch head")))
1241 1241 entry[1].append(('t', 'tool', '',
1242 1242 _("specify merge tool for rebase")))
1243 1243 cmdutil.summaryhooks.add('rebase', summaryhook)
1244 1244 cmdutil.unfinishedstates.append(
1245 1245 ['rebasestate', False, False, _('rebase in progress'),
1246 1246 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1247 1247 # ensure rebased rev are not hidden
1248 1248 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1249 1249 revset.symbols['_destrebase'] = _revsetdestrebase
@@ -1,721 +1,721 b''
1 1 # Patch transplanting extension for Mercurial
2 2 #
3 3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to transplant changesets from another branch
9 9
10 10 This extension allows you to transplant changes to another parent revision,
11 11 possibly in another repository. The transplant is done using 'diff' patches.
12 12
13 13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 14 map from a changeset hash to its hash in the source repository.
15 15 '''
16 16
17 17 from mercurial.i18n import _
18 18 import os, tempfile
19 19 from mercurial.node import short
20 20 from mercurial import bundlerepo, hg, merge, match
21 21 from mercurial import patch, revlog, scmutil, util, error, cmdutil
22 22 from mercurial import revset, templatekw, exchange
23 23 from mercurial import lock as lockmod
24 24
25 25 class TransplantError(error.Abort):
26 26 pass
27 27
28 28 cmdtable = {}
29 29 command = cmdutil.command(cmdtable)
30 30 # Note for extension authors: ONLY specify testedwith = 'internal' for
31 31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 32 # be specifying the version(s) of Mercurial they are tested with, or
33 33 # leave the attribute unspecified.
34 34 testedwith = 'internal'
35 35
36 36 class transplantentry(object):
37 37 def __init__(self, lnode, rnode):
38 38 self.lnode = lnode
39 39 self.rnode = rnode
40 40
41 41 class transplants(object):
42 42 def __init__(self, path=None, transplantfile=None, opener=None):
43 43 self.path = path
44 44 self.transplantfile = transplantfile
45 45 self.opener = opener
46 46
47 47 if not opener:
48 48 self.opener = scmutil.opener(self.path)
49 49 self.transplants = {}
50 50 self.dirty = False
51 51 self.read()
52 52
53 53 def read(self):
54 54 abspath = os.path.join(self.path, self.transplantfile)
55 55 if self.transplantfile and os.path.exists(abspath):
56 56 for line in self.opener.read(self.transplantfile).splitlines():
57 57 lnode, rnode = map(revlog.bin, line.split(':'))
58 58 list = self.transplants.setdefault(rnode, [])
59 59 list.append(transplantentry(lnode, rnode))
60 60
61 61 def write(self):
62 62 if self.dirty and self.transplantfile:
63 63 if not os.path.isdir(self.path):
64 64 os.mkdir(self.path)
65 65 fp = self.opener(self.transplantfile, 'w')
66 66 for list in self.transplants.itervalues():
67 67 for t in list:
68 68 l, r = map(revlog.hex, (t.lnode, t.rnode))
69 69 fp.write(l + ':' + r + '\n')
70 70 fp.close()
71 71 self.dirty = False
72 72
73 73 def get(self, rnode):
74 74 return self.transplants.get(rnode) or []
75 75
76 76 def set(self, lnode, rnode):
77 77 list = self.transplants.setdefault(rnode, [])
78 78 list.append(transplantentry(lnode, rnode))
79 79 self.dirty = True
80 80
81 81 def remove(self, transplant):
82 82 list = self.transplants.get(transplant.rnode)
83 83 if list:
84 84 del list[list.index(transplant)]
85 85 self.dirty = True
86 86
87 87 class transplanter(object):
88 88 def __init__(self, ui, repo, opts):
89 89 self.ui = ui
90 90 self.path = repo.join('transplant')
91 91 self.opener = scmutil.opener(self.path)
92 92 self.transplants = transplants(self.path, 'transplants',
93 93 opener=self.opener)
94 94 def getcommiteditor():
95 95 editform = cmdutil.mergeeditform(repo[None], 'transplant')
96 96 return cmdutil.getcommiteditor(editform=editform, **opts)
97 97 self.getcommiteditor = getcommiteditor
98 98
99 99 def applied(self, repo, node, parent):
100 100 '''returns True if a node is already an ancestor of parent
101 101 or is parent or has already been transplanted'''
102 102 if hasnode(repo, parent):
103 103 parentrev = repo.changelog.rev(parent)
104 104 if hasnode(repo, node):
105 105 rev = repo.changelog.rev(node)
106 106 reachable = repo.changelog.ancestors([parentrev], rev,
107 107 inclusive=True)
108 108 if rev in reachable:
109 109 return True
110 110 for t in self.transplants.get(node):
111 111 # it might have been stripped
112 112 if not hasnode(repo, t.lnode):
113 113 self.transplants.remove(t)
114 114 return False
115 115 lnoderev = repo.changelog.rev(t.lnode)
116 116 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
117 117 inclusive=True):
118 118 return True
119 119 return False
120 120
121 121 def apply(self, repo, source, revmap, merges, opts=None):
122 122 '''apply the revisions in revmap one by one in revision order'''
123 123 if opts is None:
124 124 opts = {}
125 125 revs = sorted(revmap)
126 126 p1, p2 = repo.dirstate.parents()
127 127 pulls = []
128 128 diffopts = patch.difffeatureopts(self.ui, opts)
129 129 diffopts.git = True
130 130
131 131 lock = tr = None
132 132 try:
133 133 lock = repo.lock()
134 134 tr = repo.transaction('transplant')
135 135 for rev in revs:
136 136 node = revmap[rev]
137 137 revstr = '%s:%s' % (rev, short(node))
138 138
139 139 if self.applied(repo, node, p1):
140 140 self.ui.warn(_('skipping already applied revision %s\n') %
141 141 revstr)
142 142 continue
143 143
144 144 parents = source.changelog.parents(node)
145 145 if not (opts.get('filter') or opts.get('log')):
146 146 # If the changeset parent is the same as the
147 147 # wdir's parent, just pull it.
148 148 if parents[0] == p1:
149 149 pulls.append(node)
150 150 p1 = node
151 151 continue
152 152 if pulls:
153 153 if source != repo:
154 154 exchange.pull(repo, source.peer(), heads=pulls)
155 merge.update(repo, pulls[-1], False, False, None)
155 merge.update(repo, pulls[-1], False, False)
156 156 p1, p2 = repo.dirstate.parents()
157 157 pulls = []
158 158
159 159 domerge = False
160 160 if node in merges:
161 161 # pulling all the merge revs at once would mean we
162 162 # couldn't transplant after the latest even if
163 163 # transplants before them fail.
164 164 domerge = True
165 165 if not hasnode(repo, node):
166 166 exchange.pull(repo, source.peer(), heads=[node])
167 167
168 168 skipmerge = False
169 169 if parents[1] != revlog.nullid:
170 170 if not opts.get('parent'):
171 171 self.ui.note(_('skipping merge changeset %s:%s\n')
172 172 % (rev, short(node)))
173 173 skipmerge = True
174 174 else:
175 175 parent = source.lookup(opts['parent'])
176 176 if parent not in parents:
177 177 raise error.Abort(_('%s is not a parent of %s') %
178 178 (short(parent), short(node)))
179 179 else:
180 180 parent = parents[0]
181 181
182 182 if skipmerge:
183 183 patchfile = None
184 184 else:
185 185 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
186 186 fp = os.fdopen(fd, 'w')
187 187 gen = patch.diff(source, parent, node, opts=diffopts)
188 188 for chunk in gen:
189 189 fp.write(chunk)
190 190 fp.close()
191 191
192 192 del revmap[rev]
193 193 if patchfile or domerge:
194 194 try:
195 195 try:
196 196 n = self.applyone(repo, node,
197 197 source.changelog.read(node),
198 198 patchfile, merge=domerge,
199 199 log=opts.get('log'),
200 200 filter=opts.get('filter'))
201 201 except TransplantError:
202 202 # Do not rollback, it is up to the user to
203 203 # fix the merge or cancel everything
204 204 tr.close()
205 205 raise
206 206 if n and domerge:
207 207 self.ui.status(_('%s merged at %s\n') % (revstr,
208 208 short(n)))
209 209 elif n:
210 210 self.ui.status(_('%s transplanted to %s\n')
211 211 % (short(node),
212 212 short(n)))
213 213 finally:
214 214 if patchfile:
215 215 os.unlink(patchfile)
216 216 tr.close()
217 217 if pulls:
218 218 exchange.pull(repo, source.peer(), heads=pulls)
219 merge.update(repo, pulls[-1], False, False, None)
219 merge.update(repo, pulls[-1], False, False)
220 220 finally:
221 221 self.saveseries(revmap, merges)
222 222 self.transplants.write()
223 223 if tr:
224 224 tr.release()
225 225 if lock:
226 226 lock.release()
227 227
228 228 def filter(self, filter, node, changelog, patchfile):
229 229 '''arbitrarily rewrite changeset before applying it'''
230 230
231 231 self.ui.status(_('filtering %s\n') % patchfile)
232 232 user, date, msg = (changelog[1], changelog[2], changelog[4])
233 233 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
234 234 fp = os.fdopen(fd, 'w')
235 235 fp.write("# HG changeset patch\n")
236 236 fp.write("# User %s\n" % user)
237 237 fp.write("# Date %d %d\n" % date)
238 238 fp.write(msg + '\n')
239 239 fp.close()
240 240
241 241 try:
242 242 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
243 243 util.shellquote(patchfile)),
244 244 environ={'HGUSER': changelog[1],
245 245 'HGREVISION': revlog.hex(node),
246 246 },
247 247 onerr=error.Abort, errprefix=_('filter failed'))
248 248 user, date, msg = self.parselog(file(headerfile))[1:4]
249 249 finally:
250 250 os.unlink(headerfile)
251 251
252 252 return (user, date, msg)
253 253
254 254 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
255 255 filter=None):
256 256 '''apply the patch in patchfile to the repository as a transplant'''
257 257 (manifest, user, (time, timezone), files, message) = cl[:5]
258 258 date = "%d %d" % (time, timezone)
259 259 extra = {'transplant_source': node}
260 260 if filter:
261 261 (user, date, message) = self.filter(filter, node, cl, patchfile)
262 262
263 263 if log:
264 264 # we don't translate messages inserted into commits
265 265 message += '\n(transplanted from %s)' % revlog.hex(node)
266 266
267 267 self.ui.status(_('applying %s\n') % short(node))
268 268 self.ui.note('%s %s\n%s\n' % (user, date, message))
269 269
270 270 if not patchfile and not merge:
271 271 raise error.Abort(_('can only omit patchfile if merging'))
272 272 if patchfile:
273 273 try:
274 274 files = set()
275 275 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
276 276 files = list(files)
277 277 except Exception as inst:
278 278 seriespath = os.path.join(self.path, 'series')
279 279 if os.path.exists(seriespath):
280 280 os.unlink(seriespath)
281 281 p1 = repo.dirstate.p1()
282 282 p2 = node
283 283 self.log(user, date, message, p1, p2, merge=merge)
284 284 self.ui.write(str(inst) + '\n')
285 285 raise TransplantError(_('fix up the merge and run '
286 286 'hg transplant --continue'))
287 287 else:
288 288 files = None
289 289 if merge:
290 290 p1, p2 = repo.dirstate.parents()
291 291 repo.setparents(p1, node)
292 292 m = match.always(repo.root, '')
293 293 else:
294 294 m = match.exact(repo.root, '', files)
295 295
296 296 n = repo.commit(message, user, date, extra=extra, match=m,
297 297 editor=self.getcommiteditor())
298 298 if not n:
299 299 self.ui.warn(_('skipping emptied changeset %s\n') % short(node))
300 300 return None
301 301 if not merge:
302 302 self.transplants.set(n, node)
303 303
304 304 return n
305 305
306 306 def resume(self, repo, source, opts):
307 307 '''recover last transaction and apply remaining changesets'''
308 308 if os.path.exists(os.path.join(self.path, 'journal')):
309 309 n, node = self.recover(repo, source, opts)
310 310 if n:
311 311 self.ui.status(_('%s transplanted as %s\n') % (short(node),
312 312 short(n)))
313 313 else:
314 314 self.ui.status(_('%s skipped due to empty diff\n')
315 315 % (short(node),))
316 316 seriespath = os.path.join(self.path, 'series')
317 317 if not os.path.exists(seriespath):
318 318 self.transplants.write()
319 319 return
320 320 nodes, merges = self.readseries()
321 321 revmap = {}
322 322 for n in nodes:
323 323 revmap[source.changelog.rev(n)] = n
324 324 os.unlink(seriespath)
325 325
326 326 self.apply(repo, source, revmap, merges, opts)
327 327
328 328 def recover(self, repo, source, opts):
329 329 '''commit working directory using journal metadata'''
330 330 node, user, date, message, parents = self.readlog()
331 331 merge = False
332 332
333 333 if not user or not date or not message or not parents[0]:
334 334 raise error.Abort(_('transplant log file is corrupt'))
335 335
336 336 parent = parents[0]
337 337 if len(parents) > 1:
338 338 if opts.get('parent'):
339 339 parent = source.lookup(opts['parent'])
340 340 if parent not in parents:
341 341 raise error.Abort(_('%s is not a parent of %s') %
342 342 (short(parent), short(node)))
343 343 else:
344 344 merge = True
345 345
346 346 extra = {'transplant_source': node}
347 347 try:
348 348 p1, p2 = repo.dirstate.parents()
349 349 if p1 != parent:
350 350 raise error.Abort(_('working directory not at transplant '
351 351 'parent %s') % revlog.hex(parent))
352 352 if merge:
353 353 repo.setparents(p1, parents[1])
354 354 modified, added, removed, deleted = repo.status()[:4]
355 355 if merge or modified or added or removed or deleted:
356 356 n = repo.commit(message, user, date, extra=extra,
357 357 editor=self.getcommiteditor())
358 358 if not n:
359 359 raise error.Abort(_('commit failed'))
360 360 if not merge:
361 361 self.transplants.set(n, node)
362 362 else:
363 363 n = None
364 364 self.unlog()
365 365
366 366 return n, node
367 367 finally:
368 368 # TODO: get rid of this meaningless try/finally enclosing.
369 369 # this is kept only to reduce changes in a patch.
370 370 pass
371 371
372 372 def readseries(self):
373 373 nodes = []
374 374 merges = []
375 375 cur = nodes
376 376 for line in self.opener.read('series').splitlines():
377 377 if line.startswith('# Merges'):
378 378 cur = merges
379 379 continue
380 380 cur.append(revlog.bin(line))
381 381
382 382 return (nodes, merges)
383 383
384 384 def saveseries(self, revmap, merges):
385 385 if not revmap:
386 386 return
387 387
388 388 if not os.path.isdir(self.path):
389 389 os.mkdir(self.path)
390 390 series = self.opener('series', 'w')
391 391 for rev in sorted(revmap):
392 392 series.write(revlog.hex(revmap[rev]) + '\n')
393 393 if merges:
394 394 series.write('# Merges\n')
395 395 for m in merges:
396 396 series.write(revlog.hex(m) + '\n')
397 397 series.close()
398 398
399 399 def parselog(self, fp):
400 400 parents = []
401 401 message = []
402 402 node = revlog.nullid
403 403 inmsg = False
404 404 user = None
405 405 date = None
406 406 for line in fp.read().splitlines():
407 407 if inmsg:
408 408 message.append(line)
409 409 elif line.startswith('# User '):
410 410 user = line[7:]
411 411 elif line.startswith('# Date '):
412 412 date = line[7:]
413 413 elif line.startswith('# Node ID '):
414 414 node = revlog.bin(line[10:])
415 415 elif line.startswith('# Parent '):
416 416 parents.append(revlog.bin(line[9:]))
417 417 elif not line.startswith('# '):
418 418 inmsg = True
419 419 message.append(line)
420 420 if None in (user, date):
421 421 raise error.Abort(_("filter corrupted changeset (no user or date)"))
422 422 return (node, user, date, '\n'.join(message), parents)
423 423
424 424 def log(self, user, date, message, p1, p2, merge=False):
425 425 '''journal changelog metadata for later recover'''
426 426
427 427 if not os.path.isdir(self.path):
428 428 os.mkdir(self.path)
429 429 fp = self.opener('journal', 'w')
430 430 fp.write('# User %s\n' % user)
431 431 fp.write('# Date %s\n' % date)
432 432 fp.write('# Node ID %s\n' % revlog.hex(p2))
433 433 fp.write('# Parent ' + revlog.hex(p1) + '\n')
434 434 if merge:
435 435 fp.write('# Parent ' + revlog.hex(p2) + '\n')
436 436 fp.write(message.rstrip() + '\n')
437 437 fp.close()
438 438
439 439 def readlog(self):
440 440 return self.parselog(self.opener('journal'))
441 441
442 442 def unlog(self):
443 443 '''remove changelog journal'''
444 444 absdst = os.path.join(self.path, 'journal')
445 445 if os.path.exists(absdst):
446 446 os.unlink(absdst)
447 447
448 448 def transplantfilter(self, repo, source, root):
449 449 def matchfn(node):
450 450 if self.applied(repo, node, root):
451 451 return False
452 452 if source.changelog.parents(node)[1] != revlog.nullid:
453 453 return False
454 454 extra = source.changelog.read(node)[5]
455 455 cnode = extra.get('transplant_source')
456 456 if cnode and self.applied(repo, cnode, root):
457 457 return False
458 458 return True
459 459
460 460 return matchfn
461 461
462 462 def hasnode(repo, node):
463 463 try:
464 464 return repo.changelog.rev(node) is not None
465 465 except error.RevlogError:
466 466 return False
467 467
468 468 def browserevs(ui, repo, nodes, opts):
469 469 '''interactively transplant changesets'''
470 470 displayer = cmdutil.show_changeset(ui, repo, opts)
471 471 transplants = []
472 472 merges = []
473 473 prompt = _('apply changeset? [ynmpcq?]:'
474 474 '$$ &yes, transplant this changeset'
475 475 '$$ &no, skip this changeset'
476 476 '$$ &merge at this changeset'
477 477 '$$ show &patch'
478 478 '$$ &commit selected changesets'
479 479 '$$ &quit and cancel transplant'
480 480 '$$ &? (show this help)')
481 481 for node in nodes:
482 482 displayer.show(repo[node])
483 483 action = None
484 484 while not action:
485 485 action = 'ynmpcq?'[ui.promptchoice(prompt)]
486 486 if action == '?':
487 487 for c, t in ui.extractchoices(prompt)[1]:
488 488 ui.write('%s: %s\n' % (c, t))
489 489 action = None
490 490 elif action == 'p':
491 491 parent = repo.changelog.parents(node)[0]
492 492 for chunk in patch.diff(repo, parent, node):
493 493 ui.write(chunk)
494 494 action = None
495 495 if action == 'y':
496 496 transplants.append(node)
497 497 elif action == 'm':
498 498 merges.append(node)
499 499 elif action == 'c':
500 500 break
501 501 elif action == 'q':
502 502 transplants = ()
503 503 merges = ()
504 504 break
505 505 displayer.close()
506 506 return (transplants, merges)
507 507
508 508 @command('transplant',
509 509 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
510 510 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
511 511 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
512 512 ('p', 'prune', [], _('skip over REV'), _('REV')),
513 513 ('m', 'merge', [], _('merge at REV'), _('REV')),
514 514 ('', 'parent', '',
515 515 _('parent to choose when transplanting merge'), _('REV')),
516 516 ('e', 'edit', False, _('invoke editor on commit messages')),
517 517 ('', 'log', None, _('append transplant info to log message')),
518 518 ('c', 'continue', None, _('continue last transplant session '
519 519 'after fixing conflicts')),
520 520 ('', 'filter', '',
521 521 _('filter changesets through command'), _('CMD'))],
522 522 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
523 523 '[-m REV] [REV]...'))
524 524 def transplant(ui, repo, *revs, **opts):
525 525 '''transplant changesets from another branch
526 526
527 527 Selected changesets will be applied on top of the current working
528 528 directory with the log of the original changeset. The changesets
529 529 are copied and will thus appear twice in the history with different
530 530 identities.
531 531
532 532 Consider using the graft command if everything is inside the same
533 533 repository - it will use merges and will usually give a better result.
534 534 Use the rebase extension if the changesets are unpublished and you want
535 535 to move them instead of copying them.
536 536
537 537 If --log is specified, log messages will have a comment appended
538 538 of the form::
539 539
540 540 (transplanted from CHANGESETHASH)
541 541
542 542 You can rewrite the changelog message with the --filter option.
543 543 Its argument will be invoked with the current changelog message as
544 544 $1 and the patch as $2.
545 545
546 546 --source/-s specifies another repository to use for selecting changesets,
547 547 just as if it temporarily had been pulled.
548 548 If --branch/-b is specified, these revisions will be used as
549 549 heads when deciding which changesets to transplant, just as if only
550 550 these revisions had been pulled.
551 551 If --all/-a is specified, all the revisions up to the heads specified
552 552 with --branch will be transplanted.
553 553
554 554 Example:
555 555
556 556 - transplant all changes up to REV on top of your current revision::
557 557
558 558 hg transplant --branch REV --all
559 559
560 560 You can optionally mark selected transplanted changesets as merge
561 561 changesets. You will not be prompted to transplant any ancestors
562 562 of a merged transplant, and you can merge descendants of them
563 563 normally instead of transplanting them.
564 564
565 565 Merge changesets may be transplanted directly by specifying the
566 566 proper parent changeset by calling :hg:`transplant --parent`.
567 567
568 568 If no merges or revisions are provided, :hg:`transplant` will
569 569 start an interactive changeset browser.
570 570
571 571 If a changeset application fails, you can fix the merge by hand
572 572 and then resume where you left off by calling :hg:`transplant
573 573 --continue/-c`.
574 574 '''
575 575 wlock = None
576 576 try:
577 577 wlock = repo.wlock()
578 578 return _dotransplant(ui, repo, *revs, **opts)
579 579 finally:
580 580 lockmod.release(wlock)
581 581
582 582 def _dotransplant(ui, repo, *revs, **opts):
583 583 def incwalk(repo, csets, match=util.always):
584 584 for node in csets:
585 585 if match(node):
586 586 yield node
587 587
588 588 def transplantwalk(repo, dest, heads, match=util.always):
589 589 '''Yield all nodes that are ancestors of a head but not ancestors
590 590 of dest.
591 591 If no heads are specified, the heads of repo will be used.'''
592 592 if not heads:
593 593 heads = repo.heads()
594 594 ancestors = []
595 595 ctx = repo[dest]
596 596 for head in heads:
597 597 ancestors.append(ctx.ancestor(repo[head]).node())
598 598 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
599 599 if match(node):
600 600 yield node
601 601
602 602 def checkopts(opts, revs):
603 603 if opts.get('continue'):
604 604 if opts.get('branch') or opts.get('all') or opts.get('merge'):
605 605 raise error.Abort(_('--continue is incompatible with '
606 606 '--branch, --all and --merge'))
607 607 return
608 608 if not (opts.get('source') or revs or
609 609 opts.get('merge') or opts.get('branch')):
610 610 raise error.Abort(_('no source URL, branch revision, or revision '
611 611 'list provided'))
612 612 if opts.get('all'):
613 613 if not opts.get('branch'):
614 614 raise error.Abort(_('--all requires a branch revision'))
615 615 if revs:
616 616 raise error.Abort(_('--all is incompatible with a '
617 617 'revision list'))
618 618
619 619 checkopts(opts, revs)
620 620
621 621 if not opts.get('log'):
622 622 # deprecated config: transplant.log
623 623 opts['log'] = ui.config('transplant', 'log')
624 624 if not opts.get('filter'):
625 625 # deprecated config: transplant.filter
626 626 opts['filter'] = ui.config('transplant', 'filter')
627 627
628 628 tp = transplanter(ui, repo, opts)
629 629
630 630 cmdutil.checkunfinished(repo)
631 631 p1, p2 = repo.dirstate.parents()
632 632 if len(repo) > 0 and p1 == revlog.nullid:
633 633 raise error.Abort(_('no revision checked out'))
634 634 if not opts.get('continue'):
635 635 if p2 != revlog.nullid:
636 636 raise error.Abort(_('outstanding uncommitted merges'))
637 637 m, a, r, d = repo.status()[:4]
638 638 if m or a or r or d:
639 639 raise error.Abort(_('outstanding local changes'))
640 640
641 641 sourcerepo = opts.get('source')
642 642 if sourcerepo:
643 643 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
644 644 heads = map(peer.lookup, opts.get('branch', ()))
645 645 target = set(heads)
646 646 for r in revs:
647 647 try:
648 648 target.add(peer.lookup(r))
649 649 except error.RepoError:
650 650 pass
651 651 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
652 652 onlyheads=sorted(target), force=True)
653 653 else:
654 654 source = repo
655 655 heads = map(source.lookup, opts.get('branch', ()))
656 656 cleanupfn = None
657 657
658 658 try:
659 659 if opts.get('continue'):
660 660 tp.resume(repo, source, opts)
661 661 return
662 662
663 663 tf = tp.transplantfilter(repo, source, p1)
664 664 if opts.get('prune'):
665 665 prune = set(source.lookup(r)
666 666 for r in scmutil.revrange(source, opts.get('prune')))
667 667 matchfn = lambda x: tf(x) and x not in prune
668 668 else:
669 669 matchfn = tf
670 670 merges = map(source.lookup, opts.get('merge', ()))
671 671 revmap = {}
672 672 if revs:
673 673 for r in scmutil.revrange(source, revs):
674 674 revmap[int(r)] = source.lookup(r)
675 675 elif opts.get('all') or not merges:
676 676 if source != repo:
677 677 alltransplants = incwalk(source, csets, match=matchfn)
678 678 else:
679 679 alltransplants = transplantwalk(source, p1, heads,
680 680 match=matchfn)
681 681 if opts.get('all'):
682 682 revs = alltransplants
683 683 else:
684 684 revs, newmerges = browserevs(ui, source, alltransplants, opts)
685 685 merges.extend(newmerges)
686 686 for r in revs:
687 687 revmap[source.changelog.rev(r)] = r
688 688 for r in merges:
689 689 revmap[source.changelog.rev(r)] = r
690 690
691 691 tp.apply(repo, source, revmap, merges, opts)
692 692 finally:
693 693 if cleanupfn:
694 694 cleanupfn()
695 695
696 696 def revsettransplanted(repo, subset, x):
697 697 """``transplanted([set])``
698 698 Transplanted changesets in set, or all transplanted changesets.
699 699 """
700 700 if x:
701 701 s = revset.getset(repo, subset, x)
702 702 else:
703 703 s = subset
704 704 return revset.baseset([r for r in s if
705 705 repo[r].extra().get('transplant_source')])
706 706
707 707 def kwtransplanted(repo, ctx, **args):
708 708 """:transplanted: String. The node identifier of the transplanted
709 709 changeset if any."""
710 710 n = ctx.extra().get('transplant_source')
711 711 return n and revlog.hex(n) or ''
712 712
713 713 def extsetup(ui):
714 714 revset.symbols['transplanted'] = revsettransplanted
715 715 templatekw.keywords['transplanted'] = kwtransplanted
716 716 cmdutil.unfinishedstates.append(
717 717 ['series', True, False, _('transplant in progress'),
718 718 _("use 'hg transplant --continue' or 'hg update' to abort")])
719 719
720 720 # tell hggettext to extract docstrings from these functions:
721 721 i18nfunctions = [revsettransplanted, kwtransplanted]
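The `merge.update` call sites changed in this file (and in `cmdutil.py` below) reflect the commit's subject: callers now hand `update` a matcher object instead of a bare partial function. A minimal sketch of that pattern, with an illustrative `ExactMatcher` standing in for what `scmutil.matchfiles` returns (names here are assumptions, not Mercurial's API):

```python
class ExactMatcher:
    """Match only an explicit set of files, loosely mimicking the
    matcher that scmutil.matchfiles(repo, files) would return."""
    def __init__(self, files):
        self._files = set(files)

    def __call__(self, path):
        # matchers are callable predicates, like the old partial fn...
        return path in self._files

    def files(self):
        # ...but they also expose the matched set, which a plain
        # lambda could not, letting the callee iterate instead of probe
        return sorted(self._files)

backups = {'a.txt': '/tmp/x1', 'b.txt': '/tmp/x2'}

# Before: an opaque predicate closed over the backup dict
choices = lambda key: key in backups

# After: a matcher object carrying the same decision plus the file list
m = ExactMatcher(backups.keys())
```

Both `choices('a.txt')` and `m('a.txt')` answer membership the same way, but only the matcher can enumerate its files, which is what lets `merge.update` drop the extra `partial`-function parameter seen removed in the hunks above.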
@@ -1,3422 +1,3422 @@
1 1 # cmdutil.py - help for command processing in mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from node import hex, bin, nullid, nullrev, short
9 9 from i18n import _
10 10 import os, sys, errno, re, tempfile, cStringIO, shutil
11 11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
12 12 import match as matchmod
13 13 import repair, graphmod, revset, phases, obsolete, pathutil
14 14 import changelog
15 15 import bookmarks
16 16 import encoding
17 17 import formatter
18 18 import crecord as crecordmod
19 19 import lock as lockmod
20 20
21 21 def ishunk(x):
22 22 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
23 23 return isinstance(x, hunkclasses)
24 24
25 25 def newandmodified(chunks, originalchunks):
26 26 newlyaddedandmodifiedfiles = set()
27 27 for chunk in chunks:
28 28 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
29 29 originalchunks:
30 30 newlyaddedandmodifiedfiles.add(chunk.header.filename())
31 31 return newlyaddedandmodifiedfiles
32 32
33 33 def parsealiases(cmd):
34 34 return cmd.lstrip("^").split("|")
35 35
36 36 def setupwrapcolorwrite(ui):
37 37 # wrap ui.write so diff output can be labeled/colorized
38 38 def wrapwrite(orig, *args, **kw):
39 39 label = kw.pop('label', '')
40 40 for chunk, l in patch.difflabel(lambda: args):
41 41 orig(chunk, label=label + l)
42 42
43 43 oldwrite = ui.write
44 44 def wrap(*args, **kwargs):
45 45 return wrapwrite(oldwrite, *args, **kwargs)
46 46 setattr(ui, 'write', wrap)
47 47 return oldwrite
48 48
49 49 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
50 50 if usecurses:
51 51 if testfile:
52 52 recordfn = crecordmod.testdecorator(testfile,
53 53 crecordmod.testchunkselector)
54 54 else:
55 55 recordfn = crecordmod.chunkselector
56 56
57 57 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
58 58
59 59 else:
60 60 return patch.filterpatch(ui, originalhunks, operation)
61 61
62 62 def recordfilter(ui, originalhunks, operation=None):
63 63 """ Prompts the user to filter the originalhunks and return a list of
64 64 selected hunks.
65 65 *operation* is used for ui purposes to indicate to the user
66 66 what kind of filtering they are doing: reverting, committing, shelving, etc.
67 67 *operation* has to be a translated string.
68 68 """
69 69 usecurses = ui.configbool('experimental', 'crecord', False)
70 70 testfile = ui.config('experimental', 'crecordtest', None)
71 71 oldwrite = setupwrapcolorwrite(ui)
72 72 try:
73 73 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
74 74 testfile, operation)
75 75 finally:
76 76 ui.write = oldwrite
77 77 return newchunks, newopts
78 78
79 79 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
80 80 filterfn, *pats, **opts):
81 81 import merge as mergemod
82 82
83 83 if not ui.interactive():
84 84 if cmdsuggest:
85 85 msg = _('running non-interactively, use %s instead') % cmdsuggest
86 86 else:
87 87 msg = _('running non-interactively')
88 88 raise error.Abort(msg)
89 89
90 90 # make sure username is set before going interactive
91 91 if not opts.get('user'):
92 92 ui.username() # raise exception, username not provided
93 93
94 94 def recordfunc(ui, repo, message, match, opts):
95 95 """This is the generic record driver.
96 96
97 97 Its job is to interactively filter local changes, and
98 98 accordingly prepare working directory into a state in which the
99 99 job can be delegated to a non-interactive commit command such as
100 100 'commit' or 'qrefresh'.
101 101
102 102 After the actual job is done by non-interactive command, the
103 103 working directory is restored to its original state.
104 104
105 105 In the end we'll record interesting changes, and everything else
106 106 will be left in place, so the user can continue working.
107 107 """
108 108
109 109 checkunfinished(repo, commit=True)
110 110 merge = len(repo[None].parents()) > 1
111 111 if merge:
112 112 raise error.Abort(_('cannot partially commit a merge '
113 113 '(use "hg commit" instead)'))
114 114
115 115 status = repo.status(match=match)
116 116 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
117 117 diffopts.nodates = True
118 118 diffopts.git = True
119 119 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
120 120 originalchunks = patch.parsepatch(originaldiff)
121 121
122 122 # 1. filter patch, so we have an intending-to-apply subset of it
123 123 try:
124 124 chunks, newopts = filterfn(ui, originalchunks)
125 125 except patch.PatchError as err:
126 126 raise error.Abort(_('error parsing patch: %s') % err)
127 127 opts.update(newopts)
128 128
129 129 # We need to keep a backup of files that have been newly added and
130 130 # modified during the recording process because there is a previous
131 131 # version without the edit in the workdir
132 132 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
133 133 contenders = set()
134 134 for h in chunks:
135 135 try:
136 136 contenders.update(set(h.files()))
137 137 except AttributeError:
138 138 pass
139 139
140 140 changed = status.modified + status.added + status.removed
141 141 newfiles = [f for f in changed if f in contenders]
142 142 if not newfiles:
143 143 ui.status(_('no changes to record\n'))
144 144 return 0
145 145
146 146 modified = set(status.modified)
147 147
148 148 # 2. backup changed files, so we can restore them in the end
149 149
150 150 if backupall:
151 151 tobackup = changed
152 152 else:
153 153 tobackup = [f for f in newfiles if f in modified or f in \
154 154 newlyaddedandmodifiedfiles]
155 155 backups = {}
156 156 if tobackup:
157 157 backupdir = repo.join('record-backups')
158 158 try:
159 159 os.mkdir(backupdir)
160 160 except OSError as err:
161 161 if err.errno != errno.EEXIST:
162 162 raise
163 163 try:
164 164 # backup continues
165 165 for f in tobackup:
166 166 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
167 167 dir=backupdir)
168 168 os.close(fd)
169 169 ui.debug('backup %r as %r\n' % (f, tmpname))
170 170 util.copyfile(repo.wjoin(f), tmpname)
171 171 shutil.copystat(repo.wjoin(f), tmpname)
172 172 backups[f] = tmpname
173 173
174 174 fp = cStringIO.StringIO()
175 175 for c in chunks:
176 176 fname = c.filename()
177 177 if fname in backups:
178 178 c.write(fp)
179 179 dopatch = fp.tell()
180 180 fp.seek(0)
181 181
182 182 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
183 183 # 3a. apply filtered patch to clean repo (clean)
184 184 if backups:
185 185 # Equivalent to hg.revert
186 choices = lambda key: key in backups
186 m = scmutil.matchfiles(repo, backups.keys())
187 187 mergemod.update(repo, repo.dirstate.p1(),
188 False, True, choices)
188 False, True, matcher=m)
189 189
190 190 # 3b. (apply)
191 191 if dopatch:
192 192 try:
193 193 ui.debug('applying patch\n')
194 194 ui.debug(fp.getvalue())
195 195 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
196 196 except patch.PatchError as err:
197 197 raise error.Abort(str(err))
198 198 del fp
199 199
200 200 # 4. We prepared working directory according to filtered
201 201 # patch. Now is the time to delegate the job to
202 202 # commit/qrefresh or the like!
203 203
204 204 # Make all of the pathnames absolute.
205 205 newfiles = [repo.wjoin(nf) for nf in newfiles]
206 206 return commitfunc(ui, repo, *newfiles, **opts)
207 207 finally:
208 208 # 5. finally restore backed-up files
209 209 try:
210 210 dirstate = repo.dirstate
211 211 for realname, tmpname in backups.iteritems():
212 212 ui.debug('restoring %r to %r\n' % (tmpname, realname))
213 213
214 214 if dirstate[realname] == 'n':
215 215 # without normallookup, restoring timestamp
216 216 # may cause partially committed files
217 217 # to be treated as unmodified
218 218 dirstate.normallookup(realname)
219 219
220 220 util.copyfile(tmpname, repo.wjoin(realname))
221 221 # Our calls to copystat() here and above are a
222 222 # hack to trick any editors that have f open into
223 223 # thinking that we haven't modified it.
224 224 #
225 225 # Also note that this is racy as an editor could
226 226 # notice the file's mtime before we've finished
227 227 # writing it.
228 228 shutil.copystat(tmpname, repo.wjoin(realname))
229 229 os.unlink(tmpname)
230 230 if tobackup:
231 231 os.rmdir(backupdir)
232 232 except OSError:
233 233 pass
234 234
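The backup/restore dance in steps 2 and 5 above can be sketched independently of Mercurial: each file is copied to a uniquely named temp file via `tempfile.mkstemp`, its stat data is mirrored with `shutil.copystat` (so editors watching the mtime see no change), and on the way out the backups are copied back and deleted. This is a minimal sketch under the assumption of plain absolute paths (the real code resolves paths through the repo and dirstate):

```python
import os
import shutil
import tempfile

def backup_files(paths, backupdir):
    """Copy each file to a mkstemp-named file in backupdir; return {path: tmpname}."""
    os.makedirs(backupdir, exist_ok=True)
    backups = {}
    for f in paths:
        fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_') + '.',
                                       dir=backupdir)
        os.close(fd)  # mkstemp opens the file; we only want the name
        shutil.copyfile(f, tmpname)
        shutil.copystat(f, tmpname)  # preserve mtime/mode alongside content
        backups[f] = tmpname
    return backups

def restore_files(backups):
    """Copy each backup over its original, mirror stat data, drop the temp file."""
    for realname, tmpname in backups.items():
        shutil.copyfile(tmpname, realname)
        shutil.copystat(tmpname, realname)
        os.unlink(tmpname)
```

The mkstemp prefix derived from the original filename is only a debugging aid; uniqueness comes from mkstemp itself.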
235 235 def recordinwlock(ui, repo, message, match, opts):
236 236 wlock = repo.wlock()
237 237 try:
238 238 return recordfunc(ui, repo, message, match, opts)
239 239 finally:
240 240 wlock.release()
241 241
242 242 return commit(ui, repo, recordinwlock, pats, opts)
243 243
244 244 def findpossible(cmd, table, strict=False):
245 245 """
246 246 Return cmd -> (aliases, command table entry)
247 247 for each matching command.
248 248 Return debug commands (or their aliases) only if no normal command matches.
249 249 """
250 250 choice = {}
251 251 debugchoice = {}
252 252
253 253 if cmd in table:
254 254 # short-circuit exact matches, "log" alias beats "^log|history"
255 255 keys = [cmd]
256 256 else:
257 257 keys = table.keys()
258 258
259 259 allcmds = []
260 260 for e in keys:
261 261 aliases = parsealiases(e)
262 262 allcmds.extend(aliases)
263 263 found = None
264 264 if cmd in aliases:
265 265 found = cmd
266 266 elif not strict:
267 267 for a in aliases:
268 268 if a.startswith(cmd):
269 269 found = a
270 270 break
271 271 if found is not None:
272 272 if aliases[0].startswith("debug") or found.startswith("debug"):
273 273 debugchoice[found] = (aliases, table[e])
274 274 else:
275 275 choice[found] = (aliases, table[e])
276 276
277 277 if not choice and debugchoice:
278 278 choice = debugchoice
279 279
280 280 return choice, allcmds
281 281
282 282 def findcmd(cmd, table, strict=True):
283 283 """Return (aliases, command table entry) for command string."""
284 284 choice, allcmds = findpossible(cmd, table, strict)
285 285
286 286 if cmd in choice:
287 287 return choice[cmd]
288 288
289 289 if len(choice) > 1:
290 290 clist = choice.keys()
291 291 clist.sort()
292 292 raise error.AmbiguousCommand(cmd, clist)
293 293
294 294 if choice:
295 295 return choice.values()[0]
296 296
297 297 raise error.UnknownCommand(cmd, allcmds)
298 298
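The resolution order implemented by findpossible/findcmd — exact alias matches win, otherwise any alias with the typed string as a prefix matches, and debug* commands are returned only when nothing else matched — can be sketched with a simplified table layout ('|'-separated aliases, no '^' markers, no exact-match short-circuit at the table-key level):

```python
def findpossible(cmd, table, strict=False):
    """Map each matching command name to (aliases, table entry).

    Exact alias matches win; otherwise any alias with `cmd` as a prefix
    matches (unless strict). debug* commands are only returned when no
    normal command matched.
    """
    choice, debugchoice = {}, {}
    for name, entry in table.items():
        aliases = name.split('|')
        found = None
        if cmd in aliases:
            found = cmd
        elif not strict:
            found = next((a for a in aliases if a.startswith(cmd)), None)
        if found is not None:
            target = debugchoice if aliases[0].startswith('debug') else choice
            target[found] = (aliases, entry)
    return choice or debugchoice
```

A result with more than one key is the ambiguous case that findcmd turns into AmbiguousCommand.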
299 299 def findrepo(p):
300 300 while not os.path.isdir(os.path.join(p, ".hg")):
301 301 oldp, p = p, os.path.dirname(p)
302 302 if p == oldp:
303 303 return None
304 304
305 305 return p
306 306
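findrepo's upward walk relies on `os.path.dirname` being a fixed point only at the filesystem root; a standalone sketch with the marker directory made a parameter (the parameter is an illustrative addition):

```python
import os

def findrepo(p, marker='.hg'):
    """Walk upward from p until a directory containing `marker` is found.

    Returns the containing directory, or None once the walk reaches the
    filesystem root (where dirname() stops changing the path).
    """
    while not os.path.isdir(os.path.join(p, marker)):
        oldp, p = p, os.path.dirname(p)
        if p == oldp:  # dirname() is a fixed point only at the root
            return None
    return p
```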
307 307 def bailifchanged(repo, merge=True):
308 308 if merge and repo.dirstate.p2() != nullid:
309 309 raise error.Abort(_('outstanding uncommitted merge'))
310 310 modified, added, removed, deleted = repo.status()[:4]
311 311 if modified or added or removed or deleted:
312 312 raise error.Abort(_('uncommitted changes'))
313 313 ctx = repo[None]
314 314 for s in sorted(ctx.substate):
315 315 ctx.sub(s).bailifchanged()
316 316
317 317 def logmessage(ui, opts):
318 318 """ get the log message according to -m and -l option """
319 319 message = opts.get('message')
320 320 logfile = opts.get('logfile')
321 321
322 322 if message and logfile:
323 323 raise error.Abort(_('options --message and --logfile are mutually '
324 324 'exclusive'))
325 325 if not message and logfile:
326 326 try:
327 327 if logfile == '-':
328 328 message = ui.fin.read()
329 329 else:
330 330 message = '\n'.join(util.readfile(logfile).splitlines())
331 331 except IOError as inst:
332 332 raise error.Abort(_("can't read commit message '%s': %s") %
333 333 (logfile, inst.strerror))
334 334 return message
335 335
336 336 def mergeeditform(ctxorbool, baseformname):
337 337 """return appropriate editform name (referencing a committemplate)
338 338
339 339 'ctxorbool' is either a ctx to be committed, or a bool indicating
340 340 whether a merge is being committed.
341 341
342 342 This returns baseformname with '.merge' appended if it is a merge,
343 343 otherwise '.normal' is appended.
344 344 """
345 345 if isinstance(ctxorbool, bool):
346 346 if ctxorbool:
347 347 return baseformname + ".merge"
348 348 elif 1 < len(ctxorbool.parents()):
349 349 return baseformname + ".merge"
350 350
351 351 return baseformname + ".normal"
352 352
353 353 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
354 354 editform='', **opts):
355 355 """get appropriate commit message editor according to '--edit' option
356 356
357 357 'finishdesc' is a function to be called with edited commit message
358 358 (= 'description' of the new changeset) just after editing, but
359 359 before checking for emptiness. It should return the actual text
360 360 to be stored into history. This allows the description to be
361 361 changed before it is stored.
362 362
363 363 'extramsg' is an extra message to be shown in the editor instead of
364 364 the 'Leave message empty to abort commit' line. The 'HG: ' prefix
365 365 and EOL are added automatically.
366 366
367 367 'editform' is a dot-separated list of names, to distinguish
368 368 the purpose of commit text editing.
369 369
370 370 'getcommiteditor' returns 'commitforceeditor' regardless of
371 371 'edit', if either 'finishdesc' or 'extramsg' is specified,
372 372 because they are specific to usage in MQ.
373 373 """
374 374 if edit or finishdesc or extramsg:
375 375 return lambda r, c, s: commitforceeditor(r, c, s,
376 376 finishdesc=finishdesc,
377 377 extramsg=extramsg,
378 378 editform=editform)
379 379 elif editform:
380 380 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
381 381 else:
382 382 return commiteditor
383 383
384 384 def loglimit(opts):
385 385 """get the log limit according to option -l/--limit"""
386 386 limit = opts.get('limit')
387 387 if limit:
388 388 try:
389 389 limit = int(limit)
390 390 except ValueError:
391 391 raise error.Abort(_('limit must be a positive integer'))
392 392 if limit <= 0:
393 393 raise error.Abort(_('limit must be positive'))
394 394 else:
395 395 limit = None
396 396 return limit
397 397
398 398 def makefilename(repo, pat, node, desc=None,
399 399 total=None, seqno=None, revwidth=None, pathname=None):
400 400 node_expander = {
401 401 'H': lambda: hex(node),
402 402 'R': lambda: str(repo.changelog.rev(node)),
403 403 'h': lambda: short(node),
404 404 'm': lambda: re.sub('[^\w]', '_', str(desc))
405 405 }
406 406 expander = {
407 407 '%': lambda: '%',
408 408 'b': lambda: os.path.basename(repo.root),
409 409 }
410 410
411 411 try:
412 412 if node:
413 413 expander.update(node_expander)
414 414 if node:
415 415 expander['r'] = (lambda:
416 416 str(repo.changelog.rev(node)).zfill(revwidth or 0))
417 417 if total is not None:
418 418 expander['N'] = lambda: str(total)
419 419 if seqno is not None:
420 420 expander['n'] = lambda: str(seqno)
421 421 if total is not None and seqno is not None:
422 422 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
423 423 if pathname is not None:
424 424 expander['s'] = lambda: os.path.basename(pathname)
425 425 expander['d'] = lambda: os.path.dirname(pathname) or '.'
426 426 expander['p'] = lambda: pathname
427 427
428 428 newname = []
429 429 patlen = len(pat)
430 430 i = 0
431 431 while i < patlen:
432 432 c = pat[i]
433 433 if c == '%':
434 434 i += 1
435 435 c = pat[i]
436 436 c = expander[c]()
437 437 newname.append(c)
438 438 i += 1
439 439 return ''.join(newname)
440 440 except KeyError as inst:
441 441 raise error.Abort(_("invalid format spec '%%%s' in output filename") %
442 442 inst.args[0])
443 443
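makefilename's expansion loop is a generic single-character %-spec expander driven by a dict of callables. A sketch of just that loop, decoupled from the repo-specific expanders (the expander map in the test is hypothetical):

```python
def expandformat(pat, expander):
    """Expand %X specs in pat using a map of single chars to callables.

    Unknown specs raise ValueError, mirroring the Abort in makefilename.
    A trailing lone '%' raises IndexError, as in the original loop.
    """
    out = []
    i, n = 0, len(pat)
    while i < n:
        c = pat[i]
        if c == '%':
            i += 1
            spec = pat[i]
            try:
                c = expander[spec]()
            except KeyError:
                raise ValueError("invalid format spec '%%%s'" % spec)
        out.append(c)
        i += 1
    return ''.join(out)
```

Callables (rather than precomputed strings) keep expansion lazy: '%R' never hits the changelog unless the pattern actually uses it.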
444 444 def makefileobj(repo, pat, node=None, desc=None, total=None,
445 445 seqno=None, revwidth=None, mode='wb', modemap=None,
446 446 pathname=None):
447 447
448 448 writable = mode not in ('r', 'rb')
449 449
450 450 if not pat or pat == '-':
451 451 if writable:
452 452 fp = repo.ui.fout
453 453 else:
454 454 fp = repo.ui.fin
455 455 if util.safehasattr(fp, 'fileno'):
456 456 return os.fdopen(os.dup(fp.fileno()), mode)
457 457 else:
458 458 # if this fp can't be duped properly, return
459 459 # a dummy object that can be closed
460 460 class wrappedfileobj(object):
461 461 noop = lambda x: None
462 462 def __init__(self, f):
463 463 self.f = f
464 464 def __getattr__(self, attr):
465 465 if attr == 'close':
466 466 return self.noop
467 467 else:
468 468 return getattr(self.f, attr)
469 469
470 470 return wrappedfileobj(fp)
471 471 if util.safehasattr(pat, 'write') and writable:
472 472 return pat
473 473 if util.safehasattr(pat, 'read') and 'r' in mode:
474 474 return pat
475 475 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
476 476 if modemap is not None:
477 477 mode = modemap.get(fn, mode)
478 478 if mode == 'wb':
479 479 modemap[fn] = 'ab'
480 480 return open(fn, mode)
481 481
482 482 def openrevlog(repo, cmd, file_, opts):
483 483 """opens the changelog, manifest, a filelog or a given revlog"""
484 484 cl = opts['changelog']
485 485 mf = opts['manifest']
486 486 dir = opts['dir']
487 487 msg = None
488 488 if cl and mf:
489 489 msg = _('cannot specify --changelog and --manifest at the same time')
490 490 elif cl and dir:
491 491 msg = _('cannot specify --changelog and --dir at the same time')
492 492 elif cl or mf:
493 493 if file_:
494 494 msg = _('cannot specify filename with --changelog or --manifest')
495 495 elif not repo:
496 496 msg = _('cannot specify --changelog or --manifest or --dir '
497 497 'without a repository')
498 498 if msg:
499 499 raise error.Abort(msg)
500 500
501 501 r = None
502 502 if repo:
503 503 if cl:
504 504 r = repo.unfiltered().changelog
505 505 elif dir:
506 506 if 'treemanifest' not in repo.requirements:
507 507 raise error.Abort(_("--dir can only be used on repos with "
508 508 "treemanifest enabled"))
509 509 dirlog = repo.dirlog(file_)
510 510 if len(dirlog):
511 511 r = dirlog
512 512 elif mf:
513 513 r = repo.manifest
514 514 elif file_:
515 515 filelog = repo.file(file_)
516 516 if len(filelog):
517 517 r = filelog
518 518 if not r:
519 519 if not file_:
520 520 raise error.CommandError(cmd, _('invalid arguments'))
521 521 if not os.path.isfile(file_):
522 522 raise error.Abort(_("revlog '%s' not found") % file_)
523 523 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
524 524 file_[:-2] + ".i")
525 525 return r
526 526
527 527 def copy(ui, repo, pats, opts, rename=False):
528 528 # called with the repo lock held
529 529 #
530 530 # hgsep => pathname that uses "/" to separate directories
531 531 # ossep => pathname that uses os.sep to separate directories
532 532 cwd = repo.getcwd()
533 533 targets = {}
534 534 after = opts.get("after")
535 535 dryrun = opts.get("dry_run")
536 536 wctx = repo[None]
537 537
538 538 def walkpat(pat):
539 539 srcs = []
540 540 if after:
541 541 badstates = '?'
542 542 else:
543 543 badstates = '?r'
544 544 m = scmutil.match(repo[None], [pat], opts, globbed=True)
545 545 for abs in repo.walk(m):
546 546 state = repo.dirstate[abs]
547 547 rel = m.rel(abs)
548 548 exact = m.exact(abs)
549 549 if state in badstates:
550 550 if exact and state == '?':
551 551 ui.warn(_('%s: not copying - file is not managed\n') % rel)
552 552 if exact and state == 'r':
553 553 ui.warn(_('%s: not copying - file has been marked for'
554 554 ' remove\n') % rel)
555 555 continue
556 556 # abs: hgsep
557 557 # rel: ossep
558 558 srcs.append((abs, rel, exact))
559 559 return srcs
560 560
561 561 # abssrc: hgsep
562 562 # relsrc: ossep
563 563 # otarget: ossep
564 564 def copyfile(abssrc, relsrc, otarget, exact):
565 565 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
566 566 if '/' in abstarget:
567 567 # We cannot normalize abstarget itself, this would prevent
568 568 # case only renames, like a => A.
569 569 abspath, absname = abstarget.rsplit('/', 1)
570 570 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
571 571 reltarget = repo.pathto(abstarget, cwd)
572 572 target = repo.wjoin(abstarget)
573 573 src = repo.wjoin(abssrc)
574 574 state = repo.dirstate[abstarget]
575 575
576 576 scmutil.checkportable(ui, abstarget)
577 577
578 578 # check for collisions
579 579 prevsrc = targets.get(abstarget)
580 580 if prevsrc is not None:
581 581 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
582 582 (reltarget, repo.pathto(abssrc, cwd),
583 583 repo.pathto(prevsrc, cwd)))
584 584 return
585 585
586 586 # check for overwrites
587 587 exists = os.path.lexists(target)
588 588 samefile = False
589 589 if exists and abssrc != abstarget:
590 590 if (repo.dirstate.normalize(abssrc) ==
591 591 repo.dirstate.normalize(abstarget)):
592 592 if not rename:
593 593 ui.warn(_("%s: can't copy - same file\n") % reltarget)
594 594 return
595 595 exists = False
596 596 samefile = True
597 597
598 598 if not after and exists or after and state in 'mn':
599 599 if not opts['force']:
600 600 ui.warn(_('%s: not overwriting - file exists\n') %
601 601 reltarget)
602 602 return
603 603
604 604 if after:
605 605 if not exists:
606 606 if rename:
607 607 ui.warn(_('%s: not recording move - %s does not exist\n') %
608 608 (relsrc, reltarget))
609 609 else:
610 610 ui.warn(_('%s: not recording copy - %s does not exist\n') %
611 611 (relsrc, reltarget))
612 612 return
613 613 elif not dryrun:
614 614 try:
615 615 if exists:
616 616 os.unlink(target)
617 617 targetdir = os.path.dirname(target) or '.'
618 618 if not os.path.isdir(targetdir):
619 619 os.makedirs(targetdir)
620 620 if samefile:
621 621 tmp = target + "~hgrename"
622 622 os.rename(src, tmp)
623 623 os.rename(tmp, target)
624 624 else:
625 625 util.copyfile(src, target)
626 626 srcexists = True
627 627 except IOError as inst:
628 628 if inst.errno == errno.ENOENT:
629 629 ui.warn(_('%s: deleted in working directory\n') % relsrc)
630 630 srcexists = False
631 631 else:
632 632 ui.warn(_('%s: cannot copy - %s\n') %
633 633 (relsrc, inst.strerror))
634 634 return True # report a failure
635 635
636 636 if ui.verbose or not exact:
637 637 if rename:
638 638 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
639 639 else:
640 640 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
641 641
642 642 targets[abstarget] = abssrc
643 643
644 644 # fix up dirstate
645 645 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
646 646 dryrun=dryrun, cwd=cwd)
647 647 if rename and not dryrun:
648 648 if not after and srcexists and not samefile:
649 649 util.unlinkpath(repo.wjoin(abssrc))
650 650 wctx.forget([abssrc])
651 651
652 652 # pat: ossep
653 653 # dest ossep
654 654 # srcs: list of (hgsep, hgsep, ossep, bool)
655 655 # return: function that takes hgsep and returns ossep
656 656 def targetpathfn(pat, dest, srcs):
657 657 if os.path.isdir(pat):
658 658 abspfx = pathutil.canonpath(repo.root, cwd, pat)
659 659 abspfx = util.localpath(abspfx)
660 660 if destdirexists:
661 661 striplen = len(os.path.split(abspfx)[0])
662 662 else:
663 663 striplen = len(abspfx)
664 664 if striplen:
665 665 striplen += len(os.sep)
666 666 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
667 667 elif destdirexists:
668 668 res = lambda p: os.path.join(dest,
669 669 os.path.basename(util.localpath(p)))
670 670 else:
671 671 res = lambda p: dest
672 672 return res
673 673
674 674 # pat: ossep
675 675 # dest ossep
676 676 # srcs: list of (hgsep, hgsep, ossep, bool)
677 677 # return: function that takes hgsep and returns ossep
678 678 def targetpathafterfn(pat, dest, srcs):
679 679 if matchmod.patkind(pat):
680 680 # a mercurial pattern
681 681 res = lambda p: os.path.join(dest,
682 682 os.path.basename(util.localpath(p)))
683 683 else:
684 684 abspfx = pathutil.canonpath(repo.root, cwd, pat)
685 685 if len(abspfx) < len(srcs[0][0]):
686 686 # A directory. Either the target path contains the last
687 687 # component of the source path or it does not.
688 688 def evalpath(striplen):
689 689 score = 0
690 690 for s in srcs:
691 691 t = os.path.join(dest, util.localpath(s[0])[striplen:])
692 692 if os.path.lexists(t):
693 693 score += 1
694 694 return score
695 695
696 696 abspfx = util.localpath(abspfx)
697 697 striplen = len(abspfx)
698 698 if striplen:
699 699 striplen += len(os.sep)
700 700 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
701 701 score = evalpath(striplen)
702 702 striplen1 = len(os.path.split(abspfx)[0])
703 703 if striplen1:
704 704 striplen1 += len(os.sep)
705 705 if evalpath(striplen1) > score:
706 706 striplen = striplen1
707 707 res = lambda p: os.path.join(dest,
708 708 util.localpath(p)[striplen:])
709 709 else:
710 710 # a file
711 711 if destdirexists:
712 712 res = lambda p: os.path.join(dest,
713 713 os.path.basename(util.localpath(p)))
714 714 else:
715 715 res = lambda p: dest
716 716 return res
717 717
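The evalpath heuristic in targetpathafterfn decides how much of the common source prefix to strip by scoring each candidate strip length against paths that already exist at the destination: whichever strip length reproduces more existing targets wins. A pure sketch of that scoring idea, with an `exists` predicate standing in for the filesystem check (names and signature are illustrative):

```python
import os

def choosestriplen(abspfx, dest, srcs, exists):
    """Pick the strip length whose resulting targets match more existing paths.

    abspfx: common source prefix; srcs: source paths sharing that prefix;
    exists: predicate for 'already present at the destination'.
    """
    def score(striplen):
        return sum(1 for s in srcs
                   if exists(os.path.join(dest, s[striplen:])))

    # candidate 1: strip the whole prefix
    striplen = len(abspfx) + len(os.sep) if abspfx else 0
    # candidate 2: keep the last component of the prefix
    head = os.path.split(abspfx)[0]
    striplen1 = len(head) + len(os.sep) if head else 0
    if score(striplen1) > score(striplen):
        striplen = striplen1
    return striplen
```

This is why `hg mv --after` can reconstruct both `mv src/dir dst` (files land under `dst/dir/`) and `mv src/dir/* dst` (files land directly in `dst/`) from the state on disk.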
718 718 pats = scmutil.expandpats(pats)
719 719 if not pats:
720 720 raise error.Abort(_('no source or destination specified'))
721 721 if len(pats) == 1:
722 722 raise error.Abort(_('no destination specified'))
723 723 dest = pats.pop()
724 724 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
725 725 if not destdirexists:
726 726 if len(pats) > 1 or matchmod.patkind(pats[0]):
727 727 raise error.Abort(_('with multiple sources, destination must be an '
728 728 'existing directory'))
729 729 if util.endswithsep(dest):
730 730 raise error.Abort(_('destination %s is not a directory') % dest)
731 731
732 732 tfn = targetpathfn
733 733 if after:
734 734 tfn = targetpathafterfn
735 735 copylist = []
736 736 for pat in pats:
737 737 srcs = walkpat(pat)
738 738 if not srcs:
739 739 continue
740 740 copylist.append((tfn(pat, dest, srcs), srcs))
741 741 if not copylist:
742 742 raise error.Abort(_('no files to copy'))
743 743
744 744 errors = 0
745 745 for targetpath, srcs in copylist:
746 746 for abssrc, relsrc, exact in srcs:
747 747 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
748 748 errors += 1
749 749
750 750 if errors:
751 751 ui.warn(_('(consider using --after)\n'))
752 752
753 753 return errors != 0
754 754
755 755 def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
756 756 runargs=None, appendpid=False):
757 757 '''Run a command as a service.'''
758 758
759 759 def writepid(pid):
760 760 if opts['pid_file']:
761 761 if appendpid:
762 762 mode = 'a'
763 763 else:
764 764 mode = 'w'
765 765 fp = open(opts['pid_file'], mode)
766 766 fp.write(str(pid) + '\n')
767 767 fp.close()
768 768
769 769 if opts['daemon'] and not opts['daemon_pipefds']:
770 770 # Signal child process startup with file removal
771 771 lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
772 772 os.close(lockfd)
773 773 try:
774 774 if not runargs:
775 775 runargs = util.hgcmd() + sys.argv[1:]
776 776 runargs.append('--daemon-pipefds=%s' % lockpath)
777 777 # Don't pass --cwd to the child process, because we've already
778 778 # changed directory.
779 779 for i in xrange(1, len(runargs)):
780 780 if runargs[i].startswith('--cwd='):
781 781 del runargs[i]
782 782 break
783 783 elif runargs[i].startswith('--cwd'):
784 784 del runargs[i:i + 2]
785 785 break
786 786 def condfn():
787 787 return not os.path.exists(lockpath)
788 788 pid = util.rundetached(runargs, condfn)
789 789 if pid < 0:
790 790 raise error.Abort(_('child process failed to start'))
791 791 writepid(pid)
792 792 finally:
793 793 try:
794 794 os.unlink(lockpath)
795 795 except OSError as e:
796 796 if e.errno != errno.ENOENT:
797 797 raise
798 798 if parentfn:
799 799 return parentfn(pid)
800 800 else:
801 801 return
802 802
803 803 if initfn:
804 804 initfn()
805 805
806 806 if not opts['daemon']:
807 807 writepid(os.getpid())
808 808
809 809 if opts['daemon_pipefds']:
810 810 lockpath = opts['daemon_pipefds']
811 811 try:
812 812 os.setsid()
813 813 except AttributeError:
814 814 pass
815 815 os.unlink(lockpath)
816 816 util.hidewindow()
817 817 sys.stdout.flush()
818 818 sys.stderr.flush()
819 819
820 820 nullfd = os.open(os.devnull, os.O_RDWR)
821 821 logfilefd = nullfd
822 822 if logfile:
823 823 logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
824 824 os.dup2(nullfd, 0)
825 825 os.dup2(logfilefd, 1)
826 826 os.dup2(logfilefd, 2)
827 827 if nullfd not in (0, 1, 2):
828 828 os.close(nullfd)
829 829 if logfile and logfilefd not in (0, 1, 2):
830 830 os.close(logfilefd)
831 831
832 832 if runfn:
833 833 return runfn()
834 834
835 835 ## facility to let extension process additional data into an import patch
836 836 # list of identifier to be executed in order
837 837 extrapreimport = [] # run before commit
838 838 extrapostimport = [] # run after commit
839 839 # mapping from identifier to actual import function
840 840 #
841 841 # 'preimport' are run before the commit is made and are provided the following
842 842 # arguments:
843 843 # - repo: the localrepository instance,
844 844 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
845 845 # - extra: the future extra dictionary of the changeset, please mutate it,
846 846 # - opts: the import options.
847 847 # XXX ideally, we would just pass a ctx ready to be computed, that would allow
848 848 # mutation of in memory commit and more. Feel free to rework the code to get
849 849 # there.
850 850 extrapreimportmap = {}
851 851 # 'postimport' are run after the commit is made and are provided the following
852 852 # argument:
853 853 # - ctx: the changectx created by import.
854 854 extrapostimportmap = {}
855 855
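The extrapreimport/extrapreimportmap pair above is an ordered hook registry: a list fixes execution order, a dict maps identifiers to callables, and each hook mutates the future changeset's `extra` dict in place. A sketch of the registration/dispatch protocol (the helper names here are illustrative, not part of the API):

```python
_preimport = []     # hook identifiers, run in registration order
_preimportmap = {}  # identifier -> callable(repo, patchdata, extra, opts)

def registerpreimport(name, fn):
    """Register a pre-commit import hook, the way an extension would
    append to extrapreimport and extrapreimportmap."""
    _preimport.append(name)
    _preimportmap[name] = fn

def runpreimport(repo, patchdata, extra, opts):
    """Run each registered hook in order; hooks mutate `extra` in place."""
    for name in _preimport:
        _preimportmap[name](repo, patchdata, extra, opts)
```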
856 856 def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
857 857 """Utility function used by commands.import to import a single patch
858 858
859 859 This function is explicitly defined here to help the evolve extension
860 860 wrap this part of the import logic.
861 861
862 862 The API is currently a bit ugly because it is a simple code
863 863 translation from the import command. Feel free to make it better.
864 864
865 865 :hunk: a patch (as a binary string)
866 866 :parents: nodes that will be parent of the created commit
867 867 :opts: the full dict of options passed to the import command
868 868 :msgs: list to save commit message to.
869 869 (used in case we need to save it when failing)
870 870 :updatefunc: a function that updates a repo to a given node
871 871 updatefunc(<repo>, <node>)
872 872 """
873 873 # avoid cycle context -> subrepo -> cmdutil
874 874 import context
875 875 extractdata = patch.extract(ui, hunk)
876 876 tmpname = extractdata.get('filename')
877 877 message = extractdata.get('message')
878 878 user = extractdata.get('user')
879 879 date = extractdata.get('date')
880 880 branch = extractdata.get('branch')
881 881 nodeid = extractdata.get('nodeid')
882 882 p1 = extractdata.get('p1')
883 883 p2 = extractdata.get('p2')
884 884
885 885 update = not opts.get('bypass')
886 886 strip = opts["strip"]
887 887 prefix = opts["prefix"]
888 888 sim = float(opts.get('similarity') or 0)
889 889 if not tmpname:
890 890 return (None, None, False)
891 891 msg = _('applied to working directory')
892 892
893 893 rejects = False
894 894
895 895 try:
896 896 cmdline_message = logmessage(ui, opts)
897 897 if cmdline_message:
898 898 # pickup the cmdline msg
899 899 message = cmdline_message
900 900 elif message:
901 901 # pickup the patch msg
902 902 message = message.strip()
903 903 else:
904 904 # launch the editor
905 905 message = None
906 906 ui.debug('message:\n%s\n' % message)
907 907
908 908 if len(parents) == 1:
909 909 parents.append(repo[nullid])
910 910 if opts.get('exact'):
911 911 if not nodeid or not p1:
912 912 raise error.Abort(_('not a Mercurial patch'))
913 913 p1 = repo[p1]
914 914 p2 = repo[p2 or nullid]
915 915 elif p2:
916 916 try:
917 917 p1 = repo[p1]
918 918 p2 = repo[p2]
919 919 # Without any options, consider p2 only if the
920 920 # patch is being applied on top of the recorded
921 921 # first parent.
922 922 if p1 != parents[0]:
923 923 p1 = parents[0]
924 924 p2 = repo[nullid]
925 925 except error.RepoError:
926 926 p1, p2 = parents
927 927 if p2.node() == nullid:
928 928 ui.warn(_("warning: import the patch as a normal revision\n"
929 929 "(use --exact to import the patch as a merge)\n"))
930 930 else:
931 931 p1, p2 = parents
932 932
933 933 n = None
934 934 if update:
935 935 if p1 != parents[0]:
936 936 updatefunc(repo, p1.node())
937 937 if p2 != parents[1]:
938 938 repo.setparents(p1.node(), p2.node())
939 939
940 940 if opts.get('exact') or opts.get('import_branch'):
941 941 repo.dirstate.setbranch(branch or 'default')
942 942
943 943 partial = opts.get('partial', False)
944 944 files = set()
945 945 try:
946 946 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
947 947 files=files, eolmode=None, similarity=sim / 100.0)
948 948 except patch.PatchError as e:
949 949 if not partial:
950 950 raise error.Abort(str(e))
951 951 if partial:
952 952 rejects = True
953 953
954 954 files = list(files)
955 955 if opts.get('no_commit'):
956 956 if message:
957 957 msgs.append(message)
958 958 else:
959 959 if opts.get('exact') or p2:
960 960 # If you got here, you either use --force and know what
961 961 # you are doing or used --exact or a merge patch while
962 962 # being updated to its first parent.
963 963 m = None
964 964 else:
965 965 m = scmutil.matchfiles(repo, files or [])
966 966 editform = mergeeditform(repo[None], 'import.normal')
967 967 if opts.get('exact'):
968 968 editor = None
969 969 else:
970 970 editor = getcommiteditor(editform=editform, **opts)
971 971 allowemptyback = repo.ui.backupconfig('ui', 'allowemptycommit')
972 972 extra = {}
973 973 for idfunc in extrapreimport:
974 974 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
975 975 try:
976 976 if partial:
977 977 repo.ui.setconfig('ui', 'allowemptycommit', True)
978 978 n = repo.commit(message, opts.get('user') or user,
979 979 opts.get('date') or date, match=m,
980 980 editor=editor, extra=extra)
981 981 for idfunc in extrapostimport:
982 982 extrapostimportmap[idfunc](repo[n])
983 983 finally:
984 984 repo.ui.restoreconfig(allowemptyback)
985 985 else:
986 986 if opts.get('exact') or opts.get('import_branch'):
987 987 branch = branch or 'default'
988 988 else:
989 989 branch = p1.branch()
990 990 store = patch.filestore()
991 991 try:
992 992 files = set()
993 993 try:
994 994 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
995 995 files, eolmode=None)
996 996 except patch.PatchError as e:
997 997 raise error.Abort(str(e))
998 998 if opts.get('exact'):
999 999 editor = None
1000 1000 else:
1001 1001 editor = getcommiteditor(editform='import.bypass')
1002 1002 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1003 1003 message,
1004 1004 opts.get('user') or user,
1005 1005 opts.get('date') or date,
1006 1006 branch, files, store,
1007 1007 editor=editor)
1008 1008 n = memctx.commit()
1009 1009 finally:
1010 1010 store.close()
1011 1011 if opts.get('exact') and opts.get('no_commit'):
1012 1012 # --exact with --no-commit is still useful in that it does merge
1013 1013 # and branch bits
1014 1014 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1015 1015 elif opts.get('exact') and hex(n) != nodeid:
1016 1016 raise error.Abort(_('patch is damaged or loses information'))
1017 1017 if n:
1018 1018 # i18n: refers to a short changeset id
1019 1019 msg = _('created %s') % short(n)
1020 1020 return (msg, n, rejects)
1021 1021 finally:
1022 1022 os.unlink(tmpname)
1023 1023
1024 1024 # facility to let extensions include additional data in an exported patch
1025 1025 # list of identifiers to be executed in order
1026 1026 extraexport = []
1027 1027 # mapping from identifier to actual export function
1028 1028 # each function has to return a string to be added to the header, or None
1029 1029 # it is given two arguments (sequencenumber, changectx)
1030 1030 extraexportmap = {}
1031 1031
1032 1032 def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
1033 1033 opts=None, match=None):
1034 1034 '''export changesets as hg patches.'''
1035 1035
1036 1036 total = len(revs)
1037 1037 revwidth = max([len(str(rev)) for rev in revs])
1038 1038 filemode = {}
1039 1039
1040 1040 def single(rev, seqno, fp):
1041 1041 ctx = repo[rev]
1042 1042 node = ctx.node()
1043 1043 parents = [p.node() for p in ctx.parents() if p]
1044 1044 branch = ctx.branch()
1045 1045 if switch_parent:
1046 1046 parents.reverse()
1047 1047
1048 1048 if parents:
1049 1049 prev = parents[0]
1050 1050 else:
1051 1051 prev = nullid
1052 1052
1053 1053 shouldclose = False
1054 1054 if not fp and len(template) > 0:
1055 1055 desc_lines = ctx.description().rstrip().split('\n')
1056 1056 desc = desc_lines[0] # Commit always has a first line.
1057 1057 fp = makefileobj(repo, template, node, desc=desc, total=total,
1058 1058 seqno=seqno, revwidth=revwidth, mode='wb',
1059 1059 modemap=filemode)
1060 1060 if fp != template:
1061 1061 shouldclose = True
1062 1062 if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
1063 1063 repo.ui.note("%s\n" % fp.name)
1064 1064
1065 1065 if not fp:
1066 1066 write = repo.ui.write
1067 1067 else:
1068 1068 def write(s, **kw):
1069 1069 fp.write(s)
1070 1070
1071 1071 write("# HG changeset patch\n")
1072 1072 write("# User %s\n" % ctx.user())
1073 1073 write("# Date %d %d\n" % ctx.date())
1074 1074 write("# %s\n" % util.datestr(ctx.date()))
1075 1075 if branch and branch != 'default':
1076 1076 write("# Branch %s\n" % branch)
1077 1077 write("# Node ID %s\n" % hex(node))
1078 1078 write("# Parent %s\n" % hex(prev))
1079 1079 if len(parents) > 1:
1080 1080 write("# Parent %s\n" % hex(parents[1]))
1081 1081
1082 1082 for headerid in extraexport:
1083 1083 header = extraexportmap[headerid](seqno, ctx)
1084 1084 if header is not None:
1085 1085 write('# %s\n' % header)
1086 1086 write(ctx.description().rstrip())
1087 1087 write("\n\n")
1088 1088
1089 1089 for chunk, label in patch.diffui(repo, prev, node, match, opts=opts):
1090 1090 write(chunk, label=label)
1091 1091
1092 1092 if shouldclose:
1093 1093 fp.close()
1094 1094
1095 1095 for seqno, rev in enumerate(revs):
1096 1096 single(rev, seqno + 1, fp)
1097 1097
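The `single` helper above writes the standard "HG changeset patch" header before the diff body. Assembling just that header can be sketched as a pure function (field values in the test are hypothetical; the human-readable date line written from util.datestr is omitted here):

```python
NULLID = '0' * 40  # hex of the null node

def patchheader(user, date, node, parents, branch=None):
    """Render the export header block, mirroring the write() calls above."""
    lines = ['# HG changeset patch',
             '# User %s' % user,
             '# Date %d %d' % date]  # unix timestamp, tz offset in seconds
    if branch and branch != 'default':
        lines.append('# Branch %s' % branch)
    lines.append('# Node ID %s' % node)
    lines.append('# Parent %s' % (parents[0] if parents else NULLID))
    if len(parents) > 1:
        lines.append('# Parent %s' % parents[1])
    return '\n'.join(lines) + '\n\n'
```

The second "# Parent" line is only emitted for merges, which is how `hg import --exact` can reconstruct merge parentage from the header alone.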
1098 1098 def diffordiffstat(ui, repo, diffopts, node1, node2, match,
1099 1099 changes=None, stat=False, fp=None, prefix='',
1100 1100 root='', listsubrepos=False):
1101 1101 '''show diff or diffstat.'''
1102 1102 if fp is None:
1103 1103 write = ui.write
1104 1104 else:
1105 1105 def write(s, **kw):
1106 1106 fp.write(s)
1107 1107
1108 1108 if root:
1109 1109 relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
1110 1110 else:
1111 1111 relroot = ''
1112 1112 if relroot != '':
1113 1113 # XXX relative roots currently don't work if the root is within a
1114 1114 # subrepo
1115 1115 uirelroot = match.uipath(relroot)
1116 1116 relroot += '/'
1117 1117 for matchroot in match.files():
1118 1118 if not matchroot.startswith(relroot):
1119 1119 ui.warn(_('warning: %s not inside relative root %s\n') % (
1120 1120 match.uipath(matchroot), uirelroot))
1121 1121
1122 1122 if stat:
1123 1123 diffopts = diffopts.copy(context=0)
1124 1124 width = 80
1125 1125 if not ui.plain():
1126 1126 width = ui.termwidth()
1127 1127 chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
1128 1128 prefix=prefix, relroot=relroot)
1129 1129 for chunk, label in patch.diffstatui(util.iterlines(chunks),
1130 1130 width=width,
1131 1131 git=diffopts.git):
1132 1132 write(chunk, label=label)
1133 1133 else:
1134 1134 for chunk, label in patch.diffui(repo, node1, node2, match,
1135 1135 changes, diffopts, prefix=prefix,
1136 1136 relroot=relroot):
1137 1137 write(chunk, label=label)
1138 1138
1139 1139 if listsubrepos:
1140 1140 ctx1 = repo[node1]
1141 1141 ctx2 = repo[node2]
1142 1142 for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
1143 1143 tempnode2 = node2
1144 1144 try:
1145 1145 if node2 is not None:
1146 1146 tempnode2 = ctx2.substate[subpath][1]
1147 1147 except KeyError:
1148 1148 # A subrepo that existed in node1 was deleted between node1 and
1149 1149 # node2 (inclusive). Thus, ctx2's substate won't contain that
1150 1150 # subpath. The best we can do is to ignore it.
1151 1151 tempnode2 = None
1152 1152 submatch = matchmod.narrowmatcher(subpath, match)
1153 1153 sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
1154 1154 stat=stat, fp=fp, prefix=prefix)
1155 1155
1156 1156 class changeset_printer(object):
1157 1157 '''show changeset information when templating not requested.'''
1158 1158
1159 1159 def __init__(self, ui, repo, matchfn, diffopts, buffered):
1160 1160 self.ui = ui
1161 1161 self.repo = repo
1162 1162 self.buffered = buffered
1163 1163 self.matchfn = matchfn
1164 1164 self.diffopts = diffopts
1165 1165 self.header = {}
1166 1166 self.hunk = {}
1167 1167 self.lastheader = None
1168 1168 self.footer = None
1169 1169
1170 1170 def flush(self, ctx):
1171 1171 rev = ctx.rev()
1172 1172 if rev in self.header:
1173 1173 h = self.header[rev]
1174 1174 if h != self.lastheader:
1175 1175 self.lastheader = h
1176 1176 self.ui.write(h)
1177 1177 del self.header[rev]
1178 1178 if rev in self.hunk:
1179 1179 self.ui.write(self.hunk[rev])
1180 1180 del self.hunk[rev]
1181 1181 return 1
1182 1182 return 0
1183 1183
1184 1184 def close(self):
1185 1185 if self.footer:
1186 1186 self.ui.write(self.footer)
1187 1187
1188 1188 def show(self, ctx, copies=None, matchfn=None, **props):
1189 1189 if self.buffered:
1190 1190 self.ui.pushbuffer(labeled=True)
1191 1191 self._show(ctx, copies, matchfn, props)
1192 1192 self.hunk[ctx.rev()] = self.ui.popbuffer()
1193 1193 else:
1194 1194 self._show(ctx, copies, matchfn, props)
1195 1195
1196 1196 def _show(self, ctx, copies, matchfn, props):
1197 1197 '''show a single changeset or file revision'''
1198 1198 changenode = ctx.node()
1199 1199 rev = ctx.rev()
1200 1200 if self.ui.debugflag:
1201 1201 hexfunc = hex
1202 1202 else:
1203 1203 hexfunc = short
1204 1204 # as of now, wctx.node() and wctx.rev() return None, but we want to
1205 1205 # show the same values as {node} and {rev} templatekw
1206 1206 revnode = (scmutil.intrev(rev), hexfunc(bin(ctx.hex())))
1207 1207
1208 1208 if self.ui.quiet:
1209 1209 self.ui.write("%d:%s\n" % revnode, label='log.node')
1210 1210 return
1211 1211
1212 1212 date = util.datestr(ctx.date())
1213 1213
1214 1214 # i18n: column positioning for "hg log"
1215 1215 self.ui.write(_("changeset: %d:%s\n") % revnode,
1216 1216 label='log.changeset changeset.%s' % ctx.phasestr())
1217 1217
1218 1218 # branches are shown first before any other names due to backwards
1219 1219 # compatibility
1220 1220 branch = ctx.branch()
1221 1221 # don't show the default branch name
1222 1222 if branch != 'default':
1223 1223 # i18n: column positioning for "hg log"
1224 1224 self.ui.write(_("branch: %s\n") % branch,
1225 1225 label='log.branch')
1226 1226
1227 1227 for name, ns in self.repo.names.iteritems():
1228 1228 # branches has special logic already handled above, so here we just
1229 1229 # skip it
1230 1230 if name == 'branches':
1231 1231 continue
1232 1232 # we will use the templatename as the color name since those two
1233 1233 # should be the same
1234 1234 for name in ns.names(self.repo, changenode):
1235 1235 self.ui.write(ns.logfmt % name,
1236 1236 label='log.%s' % ns.colorname)
1237 1237 if self.ui.debugflag:
1238 1238 # i18n: column positioning for "hg log"
1239 1239 self.ui.write(_("phase: %s\n") % ctx.phasestr(),
1240 1240 label='log.phase')
1241 1241 for pctx in scmutil.meaningfulparents(self.repo, ctx):
1242 1242 label = 'log.parent changeset.%s' % pctx.phasestr()
1243 1243 # i18n: column positioning for "hg log"
1244 1244 self.ui.write(_("parent: %d:%s\n")
1245 1245 % (pctx.rev(), hexfunc(pctx.node())),
1246 1246 label=label)
1247 1247
1248 1248 if self.ui.debugflag and rev is not None:
1249 1249 mnode = ctx.manifestnode()
1250 1250 # i18n: column positioning for "hg log"
1251 1251 self.ui.write(_("manifest: %d:%s\n") %
1252 1252 (self.repo.manifest.rev(mnode), hex(mnode)),
1253 1253 label='ui.debug log.manifest')
1254 1254 # i18n: column positioning for "hg log"
1255 1255 self.ui.write(_("user: %s\n") % ctx.user(),
1256 1256 label='log.user')
1257 1257 # i18n: column positioning for "hg log"
1258 1258 self.ui.write(_("date: %s\n") % date,
1259 1259 label='log.date')
1260 1260
1261 1261 if self.ui.debugflag:
1262 1262 files = ctx.p1().status(ctx)[:3]
1263 1263 for key, value in zip([# i18n: column positioning for "hg log"
1264 1264 _("files:"),
1265 1265 # i18n: column positioning for "hg log"
1266 1266 _("files+:"),
1267 1267 # i18n: column positioning for "hg log"
1268 1268 _("files-:")], files):
1269 1269 if value:
1270 1270 self.ui.write("%-12s %s\n" % (key, " ".join(value)),
1271 1271 label='ui.debug log.files')
1272 1272 elif ctx.files() and self.ui.verbose:
1273 1273 # i18n: column positioning for "hg log"
1274 1274 self.ui.write(_("files: %s\n") % " ".join(ctx.files()),
1275 1275 label='ui.note log.files')
1276 1276 if copies and self.ui.verbose:
1277 1277 copies = ['%s (%s)' % c for c in copies]
1278 1278 # i18n: column positioning for "hg log"
1279 1279 self.ui.write(_("copies: %s\n") % ' '.join(copies),
1280 1280 label='ui.note log.copies')
1281 1281
1282 1282 extra = ctx.extra()
1283 1283 if extra and self.ui.debugflag:
1284 1284 for key, value in sorted(extra.items()):
1285 1285 # i18n: column positioning for "hg log"
1286 1286 self.ui.write(_("extra: %s=%s\n")
1287 1287 % (key, value.encode('string_escape')),
1288 1288 label='ui.debug log.extra')
1289 1289
1290 1290 description = ctx.description().strip()
1291 1291 if description:
1292 1292 if self.ui.verbose:
1293 1293 self.ui.write(_("description:\n"),
1294 1294 label='ui.note log.description')
1295 1295 self.ui.write(description,
1296 1296 label='ui.note log.description')
1297 1297 self.ui.write("\n\n")
1298 1298 else:
1299 1299 # i18n: column positioning for "hg log"
1300 1300 self.ui.write(_("summary: %s\n") %
1301 1301 description.splitlines()[0],
1302 1302 label='log.summary')
1303 1303 self.ui.write("\n")
1304 1304
1305 1305 self.showpatch(ctx, matchfn)
1306 1306
1307 1307 def showpatch(self, ctx, matchfn):
1308 1308 if not matchfn:
1309 1309 matchfn = self.matchfn
1310 1310 if matchfn:
1311 1311 stat = self.diffopts.get('stat')
1312 1312 diff = self.diffopts.get('patch')
1313 1313 diffopts = patch.diffallopts(self.ui, self.diffopts)
1314 1314 node = ctx.node()
1315 1315 prev = ctx.p1()
1316 1316 if stat:
1317 1317 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1318 1318 match=matchfn, stat=True)
1319 1319 if diff:
1320 1320 if stat:
1321 1321 self.ui.write("\n")
1322 1322 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1323 1323 match=matchfn, stat=False)
1324 1324 self.ui.write("\n")
1325 1325
1326 1326 class jsonchangeset(changeset_printer):
1327 1327 '''format changeset information.'''
1328 1328
1329 1329 def __init__(self, ui, repo, matchfn, diffopts, buffered):
1330 1330 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1331 1331 self.cache = {}
1332 1332 self._first = True
1333 1333
1334 1334 def close(self):
1335 1335 if not self._first:
1336 1336 self.ui.write("\n]\n")
1337 1337 else:
1338 1338 self.ui.write("[]\n")
1339 1339
1340 1340 def _show(self, ctx, copies, matchfn, props):
1341 1341 '''show a single changeset or file revision'''
1342 1342 rev = ctx.rev()
1343 1343 if rev is None:
1344 1344 jrev = jnode = 'null'
1345 1345 else:
1346 1346 jrev = str(rev)
1347 1347 jnode = '"%s"' % hex(ctx.node())
1348 1348 j = encoding.jsonescape
1349 1349
1350 1350 if self._first:
1351 1351 self.ui.write("[\n {")
1352 1352 self._first = False
1353 1353 else:
1354 1354 self.ui.write(",\n {")
1355 1355
1356 1356 if self.ui.quiet:
1357 1357 self.ui.write('\n "rev": %s' % jrev)
1358 1358 self.ui.write(',\n "node": %s' % jnode)
1359 1359 self.ui.write('\n }')
1360 1360 return
1361 1361
1362 1362 self.ui.write('\n "rev": %s' % jrev)
1363 1363 self.ui.write(',\n "node": %s' % jnode)
1364 1364 self.ui.write(',\n "branch": "%s"' % j(ctx.branch()))
1365 1365 self.ui.write(',\n "phase": "%s"' % ctx.phasestr())
1366 1366 self.ui.write(',\n "user": "%s"' % j(ctx.user()))
1367 1367 self.ui.write(',\n "date": [%d, %d]' % ctx.date())
1368 1368 self.ui.write(',\n "desc": "%s"' % j(ctx.description()))
1369 1369
1370 1370 self.ui.write(',\n "bookmarks": [%s]' %
1371 1371 ", ".join('"%s"' % j(b) for b in ctx.bookmarks()))
1372 1372 self.ui.write(',\n "tags": [%s]' %
1373 1373 ", ".join('"%s"' % j(t) for t in ctx.tags()))
1374 1374 self.ui.write(',\n "parents": [%s]' %
1375 1375 ", ".join('"%s"' % c.hex() for c in ctx.parents()))
1376 1376
1377 1377 if self.ui.debugflag:
1378 1378 if rev is None:
1379 1379 jmanifestnode = 'null'
1380 1380 else:
1381 1381 jmanifestnode = '"%s"' % hex(ctx.manifestnode())
1382 1382 self.ui.write(',\n "manifest": %s' % jmanifestnode)
1383 1383
1384 1384 self.ui.write(',\n "extra": {%s}' %
1385 1385 ", ".join('"%s": "%s"' % (j(k), j(v))
1386 1386 for k, v in ctx.extra().items()))
1387 1387
1388 1388 files = ctx.p1().status(ctx)
1389 1389 self.ui.write(',\n "modified": [%s]' %
1390 1390 ", ".join('"%s"' % j(f) for f in files[0]))
1391 1391 self.ui.write(',\n "added": [%s]' %
1392 1392 ", ".join('"%s"' % j(f) for f in files[1]))
1393 1393 self.ui.write(',\n "removed": [%s]' %
1394 1394 ", ".join('"%s"' % j(f) for f in files[2]))
1395 1395
1396 1396 elif self.ui.verbose:
1397 1397 self.ui.write(',\n "files": [%s]' %
1398 1398 ", ".join('"%s"' % j(f) for f in ctx.files()))
1399 1399
1400 1400 if copies:
1401 1401 self.ui.write(',\n "copies": {%s}' %
1402 1402 ", ".join('"%s": "%s"' % (j(k), j(v))
1403 1403 for k, v in copies))
1404 1404
1405 1405 matchfn = self.matchfn
1406 1406 if matchfn:
1407 1407 stat = self.diffopts.get('stat')
1408 1408 diff = self.diffopts.get('patch')
1409 1409 diffopts = patch.difffeatureopts(self.ui, self.diffopts, git=True)
1410 1410 node, prev = ctx.node(), ctx.p1().node()
1411 1411 if stat:
1412 1412 self.ui.pushbuffer()
1413 1413 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1414 1414 match=matchfn, stat=True)
1415 1415 self.ui.write(',\n "diffstat": "%s"' % j(self.ui.popbuffer()))
1416 1416 if diff:
1417 1417 self.ui.pushbuffer()
1418 1418 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1419 1419 match=matchfn, stat=False)
1420 1420 self.ui.write(',\n "diff": "%s"' % j(self.ui.popbuffer()))
1421 1421
1422 1422 self.ui.write("\n }")
1423 1423
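The jsonchangeset class above streams a JSON array without buffering it: the first record opens the array with "[\n {", later records are joined with ",\n {", and close() either terminates the array or emits "[]" when nothing was written. A minimal standalone sketch of that open/close protocol (the `jsonlist` helper name is hypothetical, not part of Mercurial):

```python
import json

class jsonlist(object):
    # Sketch of the streaming-array pattern used by jsonchangeset:
    # track whether any record has been emitted yet so the opening
    # bracket and record separators can be written incrementally.
    def __init__(self, write):
        self.write = write
        self._first = True

    def item(self, body):
        if self._first:
            self.write("[\n {")
            self._first = False
        else:
            self.write(",\n {")
        self.write(body)
        self.write("\n }")

    def close(self):
        if not self._first:
            self.write("\n]\n")
        else:
            self.write("[]\n")

out = []
w = jsonlist(out.append)
for rev in (0, 1):
    w.item('\n  "rev": %d' % rev)
w.close()
text = "".join(out)
```

The result parses as a well-formed JSON array even though it was emitted one changeset at a time.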
1424 1424 class changeset_templater(changeset_printer):
1425 1425 '''format changeset information.'''
1426 1426
1427 1427 def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
1428 1428 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1429 1429 formatnode = (lambda x: x) if ui.debugflag else (lambda x: x[:12])
1430 1430 defaulttempl = {
1431 1431 'parent': '{rev}:{node|formatnode} ',
1432 1432 'manifest': '{rev}:{node|formatnode}',
1433 1433 'file_copy': '{name} ({source})',
1434 1434 'extra': '{key}={value|stringescape}'
1435 1435 }
1436 1436 # filecopy is preserved for compatibility reasons
1437 1437 defaulttempl['filecopy'] = defaulttempl['file_copy']
1438 1438 self.t = templater.templater(mapfile, {'formatnode': formatnode},
1439 1439 cache=defaulttempl)
1440 1440 if tmpl:
1441 1441 self.t.cache['changeset'] = tmpl
1442 1442
1443 1443 self.cache = {}
1444 1444
1445 1445 # find correct templates for current mode
1446 1446 tmplmodes = [
1447 1447 (True, None),
1448 1448 (self.ui.verbose, 'verbose'),
1449 1449 (self.ui.quiet, 'quiet'),
1450 1450 (self.ui.debugflag, 'debug'),
1451 1451 ]
1452 1452
1453 1453 self._parts = {'header': '', 'footer': '', 'changeset': 'changeset',
1454 1454 'docheader': '', 'docfooter': ''}
1455 1455 for mode, postfix in tmplmodes:
1456 1456 for t in self._parts:
1457 1457 cur = t
1458 1458 if postfix:
1459 1459 cur += "_" + postfix
1460 1460 if mode and cur in self.t:
1461 1461 self._parts[t] = cur
1462 1462
1463 1463 if self._parts['docheader']:
1464 1464 self.ui.write(templater.stringify(self.t(self._parts['docheader'])))
1465 1465
1466 1466 def close(self):
1467 1467 if self._parts['docfooter']:
1468 1468 if not self.footer:
1469 1469 self.footer = ""
1470 1470 self.footer += templater.stringify(self.t(self._parts['docfooter']))
1471 1471 return super(changeset_templater, self).close()
1472 1472
1473 1473 def _show(self, ctx, copies, matchfn, props):
1474 1474 '''show a single changeset or file revision'''
1475 1475 props = props.copy()
1476 1476 props.update(templatekw.keywords)
1477 1477 props['templ'] = self.t
1478 1478 props['ctx'] = ctx
1479 1479 props['repo'] = self.repo
1480 1480 props['revcache'] = {'copies': copies}
1481 1481 props['cache'] = self.cache
1482 1482
1483 1483 try:
1484 1484 # write header
1485 1485 if self._parts['header']:
1486 1486 h = templater.stringify(self.t(self._parts['header'], **props))
1487 1487 if self.buffered:
1488 1488 self.header[ctx.rev()] = h
1489 1489 else:
1490 1490 if self.lastheader != h:
1491 1491 self.lastheader = h
1492 1492 self.ui.write(h)
1493 1493
1494 1494 # write changeset metadata, then patch if requested
1495 1495 key = self._parts['changeset']
1496 1496 self.ui.write(templater.stringify(self.t(key, **props)))
1497 1497 self.showpatch(ctx, matchfn)
1498 1498
1499 1499 if self._parts['footer']:
1500 1500 if not self.footer:
1501 1501 self.footer = templater.stringify(
1502 1502 self.t(self._parts['footer'], **props))
1503 1503 except KeyError as inst:
1504 1504 msg = _("%s: no key named '%s'")
1505 1505 raise error.Abort(msg % (self.t.mapfile, inst.args[0]))
1506 1506 except SyntaxError as inst:
1507 1507 raise error.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))
1508 1508
1509 1509 def gettemplate(ui, tmpl, style):
1510 1510 """
1511 1511 Find the template matching the given template spec or style.
1512 1512 """
1513 1513
1514 1514 # ui settings
1515 1515 if not tmpl and not style: # template are stronger than style
1516 1516 tmpl = ui.config('ui', 'logtemplate')
1517 1517 if tmpl:
1518 1518 try:
1519 1519 tmpl = templater.unquotestring(tmpl)
1520 1520 except SyntaxError:
1521 1521 pass
1522 1522 return tmpl, None
1523 1523 else:
1524 1524 style = util.expandpath(ui.config('ui', 'style', ''))
1525 1525
1526 1526 if not tmpl and style:
1527 1527 mapfile = style
1528 1528 if not os.path.split(mapfile)[0]:
1529 1529 mapname = (templater.templatepath('map-cmdline.' + mapfile)
1530 1530 or templater.templatepath(mapfile))
1531 1531 if mapname:
1532 1532 mapfile = mapname
1533 1533 return None, mapfile
1534 1534
1535 1535 if not tmpl:
1536 1536 return None, None
1537 1537
1538 1538 return formatter.lookuptemplate(ui, 'changeset', tmpl)
1539 1539
1540 1540 def show_changeset(ui, repo, opts, buffered=False):
1541 1541 """show one changeset using template or regular display.
1542 1542
1543 1543 Display format will be the first non-empty hit of:
1544 1544 1. option 'template'
1545 1545 2. option 'style'
1546 1546 3. [ui] setting 'logtemplate'
1547 1547 4. [ui] setting 'style'
1548 1548 If all of these values are either unset or the empty string,
1549 1549 regular display via changeset_printer() is done.
1550 1550 """
1551 1551 # options
1552 1552 matchfn = None
1553 1553 if opts.get('patch') or opts.get('stat'):
1554 1554 matchfn = scmutil.matchall(repo)
1555 1555
1556 1556 if opts.get('template') == 'json':
1557 1557 return jsonchangeset(ui, repo, matchfn, opts, buffered)
1558 1558
1559 1559 tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))
1560 1560
1561 1561 if not tmpl and not mapfile:
1562 1562 return changeset_printer(ui, repo, matchfn, opts, buffered)
1563 1563
1564 1564 try:
1565 1565 t = changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile,
1566 1566 buffered)
1567 1567 except SyntaxError as inst:
1568 1568 raise error.Abort(inst.args[0])
1569 1569 return t
1570 1570
1571 1571 def showmarker(ui, marker):
1572 1572 """utility function to display an obsolescence marker in a readable way
1573 1573
1574 1574 To be used by debug functions."""
1575 1575 ui.write(hex(marker.precnode()))
1576 1576 for repl in marker.succnodes():
1577 1577 ui.write(' ')
1578 1578 ui.write(hex(repl))
1579 1579 ui.write(' %X ' % marker.flags())
1580 1580 parents = marker.parentnodes()
1581 1581 if parents is not None:
1582 1582 ui.write('{%s} ' % ', '.join(hex(p) for p in parents))
1583 1583 ui.write('(%s) ' % util.datestr(marker.date()))
1584 1584 ui.write('{%s}' % (', '.join('%r: %r' % t for t in
1585 1585 sorted(marker.metadata().items())
1586 1586 if t[0] != 'date')))
1587 1587 ui.write('\n')
1588 1588
1589 1589 def finddate(ui, repo, date):
1590 1590 """Find the tipmost changeset that matches the given date spec"""
1591 1591
1592 1592 df = util.matchdate(date)
1593 1593 m = scmutil.matchall(repo)
1594 1594 results = {}
1595 1595
1596 1596 def prep(ctx, fns):
1597 1597 d = ctx.date()
1598 1598 if df(d[0]):
1599 1599 results[ctx.rev()] = d
1600 1600
1601 1601 for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
1602 1602 rev = ctx.rev()
1603 1603 if rev in results:
1604 1604 ui.status(_("found revision %s from %s\n") %
1605 1605 (rev, util.datestr(results[rev])))
1606 1606 return str(rev)
1607 1607
1608 1608 raise error.Abort(_("revision matching date not found"))
1609 1609
1610 1610 def increasingwindows(windowsize=8, sizelimit=512):
1611 1611 while True:
1612 1612 yield windowsize
1613 1613 if windowsize < sizelimit:
1614 1614 windowsize *= 2
1615 1615
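The generator above grows the batch size geometrically and then plateaus at the cap, so early output appears quickly while later batches amortize per-window overhead. A standalone sketch of the same behavior:

```python
from itertools import islice

def increasingwindows(windowsize=8, sizelimit=512):
    # Yield a window size, doubling it after each yield until the
    # cap is reached; from then on keep yielding the cap forever.
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

# The first windows grow geometrically: 8, 16, 32, ... up to 512,
# then stay at 512.
sizes = list(islice(increasingwindows(), 9))
```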
1616 1616 class FileWalkError(Exception):
1617 1617 pass
1618 1618
1619 1619 def walkfilerevs(repo, match, follow, revs, fncache):
1620 1620 '''Walks the file history for the matched files.
1621 1621
1622 1622 Returns the changeset revs that are involved in the file history.
1623 1623
1624 1624 Throws FileWalkError if the file history can't be walked using
1625 1625 filelogs alone.
1626 1626 '''
1627 1627 wanted = set()
1628 1628 copies = []
1629 1629 minrev, maxrev = min(revs), max(revs)
1630 1630 def filerevgen(filelog, last):
1631 1631 """
1632 1632 Only files, no patterns. Check the history of each file.
1633 1633
1634 1634 Examines filelog entries within minrev, maxrev linkrev range
1635 1635 Returns an iterator yielding (linkrev, parentlinkrevs, copied)
1636 1636 tuples in backwards order
1637 1637 """
1638 1638 cl_count = len(repo)
1639 1639 revs = []
1640 1640 for j in xrange(0, last + 1):
1641 1641 linkrev = filelog.linkrev(j)
1642 1642 if linkrev < minrev:
1643 1643 continue
1644 1644 # only yield rev for which we have the changelog, it can
1645 1645 # happen while doing "hg log" during a pull or commit
1646 1646 if linkrev >= cl_count:
1647 1647 break
1648 1648
1649 1649 parentlinkrevs = []
1650 1650 for p in filelog.parentrevs(j):
1651 1651 if p != nullrev:
1652 1652 parentlinkrevs.append(filelog.linkrev(p))
1653 1653 n = filelog.node(j)
1654 1654 revs.append((linkrev, parentlinkrevs,
1655 1655 follow and filelog.renamed(n)))
1656 1656
1657 1657 return reversed(revs)
1658 1658 def iterfiles():
1659 1659 pctx = repo['.']
1660 1660 for filename in match.files():
1661 1661 if follow:
1662 1662 if filename not in pctx:
1663 1663 raise error.Abort(_('cannot follow file not in parent '
1664 1664 'revision: "%s"') % filename)
1665 1665 yield filename, pctx[filename].filenode()
1666 1666 else:
1667 1667 yield filename, None
1668 1668 for filename_node in copies:
1669 1669 yield filename_node
1670 1670
1671 1671 for file_, node in iterfiles():
1672 1672 filelog = repo.file(file_)
1673 1673 if not len(filelog):
1674 1674 if node is None:
1675 1675 # A zero count may be a directory or deleted file, so
1676 1676 # try to find matching entries on the slow path.
1677 1677 if follow:
1678 1678 raise error.Abort(
1679 1679 _('cannot follow nonexistent file: "%s"') % file_)
1680 1680 raise FileWalkError("Cannot walk via filelog")
1681 1681 else:
1682 1682 continue
1683 1683
1684 1684 if node is None:
1685 1685 last = len(filelog) - 1
1686 1686 else:
1687 1687 last = filelog.rev(node)
1688 1688
1689 1689 # keep track of all ancestors of the file
1690 1690 ancestors = set([filelog.linkrev(last)])
1691 1691
1692 1692 # iterate from latest to oldest revision
1693 1693 for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
1694 1694 if not follow:
1695 1695 if rev > maxrev:
1696 1696 continue
1697 1697 else:
1698 1698 # Note that last might not be the first interesting
1699 1699 # rev to us:
1700 1700 # if the file has been changed after maxrev, we'll
1701 1701 # have linkrev(last) > maxrev, and we still need
1702 1702 # to explore the file graph
1703 1703 if rev not in ancestors:
1704 1704 continue
1705 1705 # XXX insert 1327 fix here
1706 1706 if flparentlinkrevs:
1707 1707 ancestors.update(flparentlinkrevs)
1708 1708
1709 1709 fncache.setdefault(rev, []).append(file_)
1710 1710 wanted.add(rev)
1711 1711 if copied:
1712 1712 copies.append(copied)
1713 1713
1714 1714 return wanted
1715 1715
1716 1716 class _followfilter(object):
1717 1717 def __init__(self, repo, onlyfirst=False):
1718 1718 self.repo = repo
1719 1719 self.startrev = nullrev
1720 1720 self.roots = set()
1721 1721 self.onlyfirst = onlyfirst
1722 1722
1723 1723 def match(self, rev):
1724 1724 def realparents(rev):
1725 1725 if self.onlyfirst:
1726 1726 return self.repo.changelog.parentrevs(rev)[0:1]
1727 1727 else:
1728 1728 return filter(lambda x: x != nullrev,
1729 1729 self.repo.changelog.parentrevs(rev))
1730 1730
1731 1731 if self.startrev == nullrev:
1732 1732 self.startrev = rev
1733 1733 return True
1734 1734
1735 1735 if rev > self.startrev:
1736 1736 # forward: all descendants
1737 1737 if not self.roots:
1738 1738 self.roots.add(self.startrev)
1739 1739 for parent in realparents(rev):
1740 1740 if parent in self.roots:
1741 1741 self.roots.add(rev)
1742 1742 return True
1743 1743 else:
1744 1744 # backwards: all parents
1745 1745 if not self.roots:
1746 1746 self.roots.update(realparents(self.startrev))
1747 1747 if rev in self.roots:
1748 1748 self.roots.remove(rev)
1749 1749 self.roots.update(realparents(rev))
1750 1750 return True
1751 1751
1752 1752 return False
1753 1753
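The forward branch of _followfilter.match() keeps a growing set of "roots": a revision scanned in increasing order is accepted when any of its parents is already tracked, which selects exactly the descendants of the start revision. A simplified sketch of that idea over a plain parent map (the `followdescendants` helper and the example graph are hypothetical):

```python
def followdescendants(parentrevs, startrev, revs):
    # Scan revisions in increasing order; a revision joins the
    # tracked set when it is the start or when one of its parents
    # is already tracked (i.e. it descends from startrev).
    roots = {startrev}
    kept = []
    for rev in sorted(revs):
        if rev == startrev or any(p in roots for p in parentrevs[rev]):
            roots.add(rev)
            kept.append(rev)
    return kept

# Hypothetical graph: 0 <- 1 <- 2, and 3 branches off 0 separately.
parents = {0: [], 1: [0], 2: [1], 3: [0]}
kept = followdescendants(parents, 1, [0, 1, 2, 3])
```

Revision 3 is excluded because its only parent (0) is not a descendant of the start revision 1.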
1754 1754 def walkchangerevs(repo, match, opts, prepare):
1755 1755 '''Iterate over files and the revs in which they changed.
1756 1756
1757 1757 Callers most commonly need to iterate backwards over the history
1758 1758 in which they are interested. Doing so has awful (quadratic-looking)
1759 1759 performance, so we use iterators in a "windowed" way.
1760 1760
1761 1761 We walk a window of revisions in the desired order. Within the
1762 1762 window, we first walk forwards to gather data, then in the desired
1763 1763 order (usually backwards) to display it.
1764 1764
1765 1765 This function returns an iterator yielding contexts. Before
1766 1766 yielding each context, the iterator will first call the prepare
1767 1767 function on each context in the window in forward order.'''
1768 1768
1769 1769 follow = opts.get('follow') or opts.get('follow_first')
1770 1770 revs = _logrevs(repo, opts)
1771 1771 if not revs:
1772 1772 return []
1773 1773 wanted = set()
1774 1774 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1775 1775 opts.get('removed'))
1776 1776 fncache = {}
1777 1777 change = repo.changectx
1778 1778
1779 1779 # First step is to fill wanted, the set of revisions that we want to yield.
1780 1780 # When it does not induce extra cost, we also fill fncache for revisions in
1781 1781 # wanted: a cache of filenames that were changed (ctx.files()) and that
1782 1782 # match the file filtering conditions.
1783 1783
1784 1784 if match.always():
1785 1785 # No files, no patterns. Display all revs.
1786 1786 wanted = revs
1787 1787 elif not slowpath:
1788 1788 # We only have to read through the filelog to find wanted revisions
1789 1789
1790 1790 try:
1791 1791 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1792 1792 except FileWalkError:
1793 1793 slowpath = True
1794 1794
1795 1795 # We decided to fall back to the slowpath because at least one
1796 1796 # of the paths was not a file. Check to see if at least one of them
1797 1797 # existed in history, otherwise simply return
1798 1798 for path in match.files():
1799 1799 if path == '.' or path in repo.store:
1800 1800 break
1801 1801 else:
1802 1802 return []
1803 1803
1804 1804 if slowpath:
1805 1805 # We have to read the changelog to match filenames against
1806 1806 # changed files
1807 1807
1808 1808 if follow:
1809 1809 raise error.Abort(_('can only follow copies/renames for explicit '
1810 1810 'filenames'))
1811 1811
1812 1812 # The slow path checks files modified in every changeset.
1813 1813 # This is really slow on large repos, so compute the set lazily.
1814 1814 class lazywantedset(object):
1815 1815 def __init__(self):
1816 1816 self.set = set()
1817 1817 self.revs = set(revs)
1818 1818
1819 1819 # No need to worry about locality here because it will be accessed
1820 1820 # in the same order as the increasing window below.
1821 1821 def __contains__(self, value):
1822 1822 if value in self.set:
1823 1823 return True
1824 1824 elif value not in self.revs:
1825 1825 return False
1826 1826 else:
1827 1827 self.revs.discard(value)
1828 1828 ctx = change(value)
1829 1829 matches = filter(match, ctx.files())
1830 1830 if matches:
1831 1831 fncache[value] = matches
1832 1832 self.set.add(value)
1833 1833 return True
1834 1834 return False
1835 1835
1836 1836 def discard(self, value):
1837 1837 self.revs.discard(value)
1838 1838 self.set.discard(value)
1839 1839
1840 1840 wanted = lazywantedset()
1841 1841
1842 1842 # it might be worthwhile to do this in the iterator if the rev range
1843 1843 # is descending and the prune args are all within that range
1844 1844 for rev in opts.get('prune', ()):
1845 1845 rev = repo[rev].rev()
1846 1846 ff = _followfilter(repo)
1847 1847 stop = min(revs[0], revs[-1])
1848 1848 for x in xrange(rev, stop - 1, -1):
1849 1849 if ff.match(x):
1850 1850 wanted = wanted - [x]
1851 1851
1852 1852 # Now that wanted is correctly initialized, we can iterate over the
1853 1853 # revision range, yielding only revisions in wanted.
1854 1854 def iterate():
1855 1855 if follow and match.always():
1856 1856 ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
1857 1857 def want(rev):
1858 1858 return ff.match(rev) and rev in wanted
1859 1859 else:
1860 1860 def want(rev):
1861 1861 return rev in wanted
1862 1862
1863 1863 it = iter(revs)
1864 1864 stopiteration = False
1865 1865 for windowsize in increasingwindows():
1866 1866 nrevs = []
1867 1867 for i in xrange(windowsize):
1868 1868 rev = next(it, None)
1869 1869 if rev is None:
1870 1870 stopiteration = True
1871 1871 break
1872 1872 elif want(rev):
1873 1873 nrevs.append(rev)
1874 1874 for rev in sorted(nrevs):
1875 1875 fns = fncache.get(rev)
1876 1876 ctx = change(rev)
1877 1877 if not fns:
1878 1878 def fns_generator():
1879 1879 for f in ctx.files():
1880 1880 if match(f):
1881 1881 yield f
1882 1882 fns = fns_generator()
1883 1883 prepare(ctx, fns)
1884 1884 for rev in nrevs:
1885 1885 yield change(rev)
1886 1886
1887 1887 if stopiteration:
1888 1888 break
1889 1889
1890 1890 return iterate()
1891 1891
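The iterate() closure above realizes the windowed strategy described in the walkchangerevs docstring: pull revisions from the (usually descending) input in growing windows, run the prepare pass over each window in ascending order, then yield the window in the original order. A simplified, self-contained sketch of that shape (the `windowedwalk` helper name is hypothetical):

```python
def windowedwalk(revs, want, windowsizes):
    # Consume revisions window by window; within each window, a
    # forward (ascending) pass would run the "prepare" callback,
    # then revisions are yielded in their original order.
    it = iter(revs)
    done = False
    for size in windowsizes:
        window = []
        for _ in range(size):
            rev = next(it, None)
            if rev is None:
                done = True
                break
            if want(rev):
                window.append(rev)
        for rev in sorted(window):
            pass  # prepare phase runs here, in forward order
        for rev in window:  # display order (as given, often descending)
            yield rev
        if done:
            break

out = list(windowedwalk(range(10, 0, -1), lambda r: r % 2 == 0, [2, 4, 8]))
```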
1892 1892 def _makefollowlogfilematcher(repo, files, followfirst):
1893 1893 # When displaying a revision with --patch --follow FILE, we have
1894 1894 # to know which file of the revision must be diffed. With
1895 1895 # --follow, we want the names of the ancestors of FILE in the
1896 1896 # revision, stored in "fcache". "fcache" is populated by
1897 1897 # reproducing the graph traversal already done by --follow revset
1898 1898 # and relating linkrevs to file names (which is not "correct" but
1899 1899 # good enough).
1900 1900 fcache = {}
1901 1901 fcacheready = [False]
1902 1902 pctx = repo['.']
1903 1903
1904 1904 def populate():
1905 1905 for fn in files:
1906 1906 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
1907 1907 for c in i:
1908 1908 fcache.setdefault(c.linkrev(), set()).add(c.path())
1909 1909
1910 1910 def filematcher(rev):
1911 1911 if not fcacheready[0]:
1912 1912 # Lazy initialization
1913 1913 fcacheready[0] = True
1914 1914 populate()
1915 1915 return scmutil.matchfiles(repo, fcache.get(rev, []))
1916 1916
1917 1917 return filematcher
1918 1918
1919 1919 def _makenofollowlogfilematcher(repo, pats, opts):
1920 1920 '''hook for extensions to override the filematcher for non-follow cases'''
1921 1921 return None
1922 1922
1923 1923 def _makelogrevset(repo, pats, opts, revs):
1924 1924 """Return (expr, filematcher) where expr is a revset string built
1925 1925 from log options and file patterns or None. If --stat or --patch
1926 1926 are not passed filematcher is None. Otherwise it is a callable
1927 1927 taking a revision number and returning a match object filtering
1928 1928 the files to be detailed when displaying the revision.
1929 1929 """
1930 1930 opt2revset = {
1931 1931 'no_merges': ('not merge()', None),
1932 1932 'only_merges': ('merge()', None),
1933 1933 '_ancestors': ('ancestors(%(val)s)', None),
1934 1934 '_fancestors': ('_firstancestors(%(val)s)', None),
1935 1935 '_descendants': ('descendants(%(val)s)', None),
1936 1936 '_fdescendants': ('_firstdescendants(%(val)s)', None),
1937 1937 '_matchfiles': ('_matchfiles(%(val)s)', None),
1938 1938 'date': ('date(%(val)r)', None),
1939 1939 'branch': ('branch(%(val)r)', ' or '),
1940 1940 '_patslog': ('filelog(%(val)r)', ' or '),
1941 1941 '_patsfollow': ('follow(%(val)r)', ' or '),
1942 1942 '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
1943 1943 'keyword': ('keyword(%(val)r)', ' or '),
1944 1944 'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
1945 1945 'user': ('user(%(val)r)', ' or '),
1946 1946 }
1947 1947
1948 1948 opts = dict(opts)
1949 1949 # follow or not follow?
1950 1950 follow = opts.get('follow') or opts.get('follow_first')
1951 1951 if opts.get('follow_first'):
1952 1952 followfirst = 1
1953 1953 else:
1954 1954 followfirst = 0
1955 1955 # --follow with FILE behavior depends on revs...
1956 1956 it = iter(revs)
1957 1957 startrev = next(it)
1958 1958 followdescendants = startrev < next(it, startrev)
1959 1959
1960 1960 # branch and only_branch are really aliases and must be handled at
1961 1961 # the same time
1962 1962 opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
1963 1963 opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
1964 1964 # pats/include/exclude are passed to match.match() directly in
1965 1965 # _matchfiles() revset but walkchangerevs() builds its matcher with
1966 1966 # scmutil.match(). The difference is input pats are globbed on
1967 1967 # platforms without shell expansion (windows).
1968 1968 wctx = repo[None]
1969 1969 match, pats = scmutil.matchandpats(wctx, pats, opts)
1970 1970 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1971 1971 opts.get('removed'))
1972 1972 if not slowpath:
1973 1973 for f in match.files():
1974 1974 if follow and f not in wctx:
1975 1975 # If the file exists, it may be a directory, so let it
1976 1976 # take the slow path.
1977 1977 if os.path.exists(repo.wjoin(f)):
1978 1978 slowpath = True
1979 1979 continue
1980 1980 else:
1981 1981 raise error.Abort(_('cannot follow file not in parent '
1982 1982 'revision: "%s"') % f)
1983 1983 filelog = repo.file(f)
1984 1984 if not filelog:
1985 1985 # A zero count may be a directory or deleted file, so
1986 1986 # try to find matching entries on the slow path.
1987 1987 if follow:
1988 1988 raise error.Abort(
1989 1989 _('cannot follow nonexistent file: "%s"') % f)
1990 1990 slowpath = True
1991 1991
1992 1992 # We decided to fall back to the slowpath because at least one
1993 1993 # of the paths was not a file. Check to see if at least one of them
1994 1994 # existed in history - in that case, we'll continue down the
1995 1995 # slowpath; otherwise, we can turn off the slowpath
1996 1996 if slowpath:
1997 1997 for path in match.files():
1998 1998 if path == '.' or path in repo.store:
1999 1999 break
2000 2000 else:
2001 2001 slowpath = False
2002 2002
2003 2003 fpats = ('_patsfollow', '_patsfollowfirst')
2004 2004 fnopats = (('_ancestors', '_fancestors'),
2005 2005 ('_descendants', '_fdescendants'))
2006 2006 if slowpath:
2007 2007 # See walkchangerevs() slow path.
2008 2008 #
2009 2009 # pats/include/exclude cannot be represented as separate
2010 2010 # revset expressions as their filtering logic applies at file
2011 2011 # level. For instance "-I a -X a" matches a revision touching
2012 2012 # "a" and "b" while "file(a) and not file(b)" does
2013 2013 # not. Besides, filesets are evaluated against the working
2014 2014 # directory.
2015 2015 matchargs = ['r:', 'd:relpath']
2016 2016 for p in pats:
2017 2017 matchargs.append('p:' + p)
2018 2018 for p in opts.get('include', []):
2019 2019 matchargs.append('i:' + p)
2020 2020 for p in opts.get('exclude', []):
2021 2021 matchargs.append('x:' + p)
2022 2022 matchargs = ','.join(('%r' % p) for p in matchargs)
2023 2023 opts['_matchfiles'] = matchargs
2024 2024 if follow:
2025 2025 opts[fnopats[0][followfirst]] = '.'
2026 2026 else:
2027 2027 if follow:
2028 2028 if pats:
2029 2029 # follow() revset interprets its file argument as a
2030 2030 # manifest entry, so use match.files(), not pats.
2031 2031 opts[fpats[followfirst]] = list(match.files())
2032 2032 else:
2033 2033 op = fnopats[followdescendants][followfirst]
2034 2034 opts[op] = 'rev(%d)' % startrev
2035 2035 else:
2036 2036 opts['_patslog'] = list(pats)
2037 2037
2038 2038 filematcher = None
2039 2039 if opts.get('patch') or opts.get('stat'):
2040 2040 # When following files, track renames via a special matcher.
2041 2041 # If we're forced to take the slowpath it means we're following
2042 2042 # at least one pattern/directory, so don't bother with rename tracking.
2043 2043 if follow and not match.always() and not slowpath:
2044 2044 # _makefollowlogfilematcher expects its files argument to be
2045 2045 # relative to the repo root, so use match.files(), not pats.
2046 2046 filematcher = _makefollowlogfilematcher(repo, match.files(),
2047 2047 followfirst)
2048 2048 else:
2049 2049 filematcher = _makenofollowlogfilematcher(repo, pats, opts)
2050 2050 if filematcher is None:
2051 2051 filematcher = lambda rev: match
2052 2052
2053 2053 expr = []
2054 2054 for op, val in sorted(opts.iteritems()):
2055 2055 if not val:
2056 2056 continue
2057 2057 if op not in opt2revset:
2058 2058 continue
2059 2059 revop, andor = opt2revset[op]
2060 2060 if '%(val)' not in revop:
2061 2061 expr.append(revop)
2062 2062 else:
2063 2063 if not isinstance(val, list):
2064 2064 e = revop % {'val': val}
2065 2065 else:
2066 2066 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
2067 2067 expr.append(e)
2068 2068
2069 2069 if expr:
2070 2070 expr = '(' + ' and '.join(expr) + ')'
2071 2071 else:
2072 2072 expr = None
2073 2073 return expr, filematcher
2074 2074
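The option-to-revset translation at the end of _makelogrevset can be hard to follow inline. The following is a standalone sketch (not Mercurial's actual API; the trimmed `opt2revset` table and the `buildrevsetexpr` name are illustrative): each active option maps to a revset fragment, list-valued options are joined with their own connective, and the fragments are AND-ed together.

```python
# Illustrative subset of the opt2revset table used above.
opt2revset = {
    'user': ('user(%(val)r)', ' or '),
    'branch': ('branch(%(val)r)', ' or '),
    'no_merges': ('not merge()', None),
}

def buildrevsetexpr(opts):
    """Assemble a revset expression string from log-style options."""
    expr = []
    for op, val in sorted(opts.items()):
        if not val or op not in opt2revset:
            continue
        revop, andor = opt2revset[op]
        if '%(val)' not in revop:
            # Flag-style option: the fragment takes no value.
            expr.append(revop)
        elif not isinstance(val, list):
            expr.append(revop % {'val': val})
        else:
            # Repeatable option: join each value with the option's connective.
            expr.append('(' + andor.join(revop % {'val': v} for v in val) + ')')
    if expr:
        return '(' + ' and '.join(expr) + ')'
    return None
```

For example, `--user alice --user bob --no-merges` yields `(not merge() and (user('alice') or user('bob')))`, which is then handed to `revset.match()`.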
2075 2075 def _logrevs(repo, opts):
2076 2076 # Default --rev value depends on --follow but --follow behavior
2077 2077 # depends on revisions resolved from --rev...
2078 2078 follow = opts.get('follow') or opts.get('follow_first')
2079 2079 if opts.get('rev'):
2080 2080 revs = scmutil.revrange(repo, opts['rev'])
2081 2081 elif follow and repo.dirstate.p1() == nullid:
2082 2082 revs = revset.baseset()
2083 2083 elif follow:
2084 2084 revs = repo.revs('reverse(:.)')
2085 2085 else:
2086 2086 revs = revset.spanset(repo)
2087 2087 revs.reverse()
2088 2088 return revs
2089 2089
2090 2090 def getgraphlogrevs(repo, pats, opts):
2091 2091 """Return (revs, expr, filematcher) where revs is an iterable of
2092 2092 revision numbers, expr is a revset string built from log options
2093 2093 and file patterns or None, and used to filter 'revs'. If --stat or
2094 2094 --patch are not passed, filematcher is None. Otherwise it is a
2095 2095 callable taking a revision number and returning a match object
2096 2096 filtering the files to be detailed when displaying the revision.
2097 2097 """
2098 2098 limit = loglimit(opts)
2099 2099 revs = _logrevs(repo, opts)
2100 2100 if not revs:
2101 2101 return revset.baseset(), None, None
2102 2102 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2103 2103 if opts.get('rev'):
2104 2104 # User-specified revs might be unsorted, but don't sort before
2105 2105 # _makelogrevset because it might depend on the order of revs
2106 2106 revs.sort(reverse=True)
2107 2107 if expr:
2108 2108 # Revset matchers often operate faster on revisions in changelog
2109 2109 # order, because most filters deal with the changelog.
2110 2110 revs.reverse()
2111 2111 matcher = revset.match(repo.ui, expr)
2112 2112 # Revset matches can reorder revisions. "A or B" typically
2113 2113 # returns the revision matching A then the revision matching B. Sort
2114 2114 # again to fix that.
2115 2115 revs = matcher(repo, revs)
2116 2116 revs.sort(reverse=True)
2117 2117 if limit is not None:
2118 2118 limitedrevs = []
2119 2119 for idx, rev in enumerate(revs):
2120 2120 if idx >= limit:
2121 2121 break
2122 2122 limitedrevs.append(rev)
2123 2123 revs = revset.baseset(limitedrevs)
2124 2124
2125 2125 return revs, expr, filematcher
2126 2126
2127 2127 def getlogrevs(repo, pats, opts):
2128 2128 """Return (revs, expr, filematcher) where revs is an iterable of
2129 2129 revision numbers, expr is a revset string built from log options
2130 2130 and file patterns or None, and used to filter 'revs'. If --stat or
2131 2131 --patch are not passed, filematcher is None. Otherwise it is a
2132 2132 callable taking a revision number and returning a match object
2133 2133 filtering the files to be detailed when displaying the revision.
2134 2134 """
2135 2135 limit = loglimit(opts)
2136 2136 revs = _logrevs(repo, opts)
2137 2137 if not revs:
2138 2138 return revset.baseset([]), None, None
2139 2139 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2140 2140 if expr:
2141 2141 # Revset matchers often operate faster on revisions in changelog
2142 2142 # order, because most filters deal with the changelog.
2143 2143 if not opts.get('rev'):
2144 2144 revs.reverse()
2145 2145 matcher = revset.match(repo.ui, expr)
2146 2146 # Revset matches can reorder revisions. "A or B" typically
2147 2147 # returns the revision matching A then the revision matching B. Sort
2148 2148 # again to fix that.
2149 2149 revs = matcher(repo, revs)
2150 2150 if not opts.get('rev'):
2151 2151 revs.sort(reverse=True)
2152 2152 if limit is not None:
2153 2153 limitedrevs = []
2154 2154 for idx, r in enumerate(revs):
2155 2155 if limit <= idx:
2156 2156 break
2157 2157 limitedrevs.append(r)
2158 2158 revs = revset.baseset(limitedrevs)
2159 2159
2160 2160 return revs, expr, filematcher
2161 2161
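Both getlogrevs and getgraphlogrevs end with the same limit-slicing loop. Extracted as a pure function (the `limitrevs` name is hypothetical), it takes at most `limit` revisions in iteration order without consuming the rest of a potentially lazy revset:

```python
def limitrevs(revs, limit):
    """Return up to `limit` items of `revs`, preserving order.

    A limit of None means "no limit", matching loglimit()'s convention.
    """
    if limit is None:
        return list(revs)
    limited = []
    for idx, rev in enumerate(revs):
        if idx >= limit:
            break
        limited.append(rev)
    return limited
```

The explicit loop (rather than slicing) matters because `revs` may be a lazy smartset for which early termination avoids evaluating filters on revisions past the limit.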
2162 2162 def _graphnodeformatter(ui, displayer):
2163 2163 spec = ui.config('ui', 'graphnodetemplate')
2164 2164 if not spec:
2165 2165 return templatekw.showgraphnode # fast path for "{graphnode}"
2166 2166
2167 2167 templ = formatter.gettemplater(ui, 'graphnode', spec)
2168 2168 cache = {}
2169 2169 if isinstance(displayer, changeset_templater):
2170 2170 cache = displayer.cache # reuse cache of slow templates
2171 2171 props = templatekw.keywords.copy()
2172 2172 props['templ'] = templ
2173 2173 props['cache'] = cache
2174 2174 def formatnode(repo, ctx):
2175 2175 props['ctx'] = ctx
2176 2176 props['repo'] = repo
2177 2177 props['revcache'] = {}
2178 2178 return templater.stringify(templ('graphnode', **props))
2179 2179 return formatnode
2180 2180
2181 2181 def displaygraph(ui, repo, dag, displayer, edgefn, getrenamed=None,
2182 2182 filematcher=None):
2183 2183 formatnode = _graphnodeformatter(ui, displayer)
2184 2184 seen, state = [], graphmod.asciistate()
2185 2185 for rev, type, ctx, parents in dag:
2186 2186 char = formatnode(repo, ctx)
2187 2187 copies = None
2188 2188 if getrenamed and ctx.rev():
2189 2189 copies = []
2190 2190 for fn in ctx.files():
2191 2191 rename = getrenamed(fn, ctx.rev())
2192 2192 if rename:
2193 2193 copies.append((fn, rename[0]))
2194 2194 revmatchfn = None
2195 2195 if filematcher is not None:
2196 2196 revmatchfn = filematcher(ctx.rev())
2197 2197 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
2198 2198 lines = displayer.hunk.pop(rev).split('\n')
2199 2199 if not lines[-1]:
2200 2200 del lines[-1]
2201 2201 displayer.flush(ctx)
2202 2202 edges = edgefn(type, char, lines, seen, rev, parents)
2203 2203 for type, char, lines, coldata in edges:
2204 2204 graphmod.ascii(ui, state, type, char, lines, coldata)
2205 2205 displayer.close()
2206 2206
2207 2207 def graphlog(ui, repo, *pats, **opts):
2208 2208 # Parameters are identical to log command ones
2209 2209 revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
2210 2210 revdag = graphmod.dagwalker(repo, revs)
2211 2211
2212 2212 getrenamed = None
2213 2213 if opts.get('copies'):
2214 2214 endrev = None
2215 2215 if opts.get('rev'):
2216 2216 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
2217 2217 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
2218 2218 displayer = show_changeset(ui, repo, opts, buffered=True)
2219 2219 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed,
2220 2220 filematcher)
2221 2221
2222 2222 def checkunsupportedgraphflags(pats, opts):
2223 2223 for op in ["newest_first"]:
2224 2224 if op in opts and opts[op]:
2225 2225 raise error.Abort(_("-G/--graph option is incompatible with --%s")
2226 2226 % op.replace("_", "-"))
2227 2227
2228 2228 def graphrevs(repo, nodes, opts):
2229 2229 limit = loglimit(opts)
2230 2230 nodes.reverse()
2231 2231 if limit is not None:
2232 2232 nodes = nodes[:limit]
2233 2233 return graphmod.nodes(repo, nodes)
2234 2234
2235 2235 def add(ui, repo, match, prefix, explicitonly, **opts):
2236 2236 join = lambda f: os.path.join(prefix, f)
2237 2237 bad = []
2238 2238
2239 2239 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2240 2240 names = []
2241 2241 wctx = repo[None]
2242 2242 cca = None
2243 2243 abort, warn = scmutil.checkportabilityalert(ui)
2244 2244 if abort or warn:
2245 2245 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2246 2246
2247 2247 badmatch = matchmod.badmatch(match, badfn)
2248 2248 dirstate = repo.dirstate
2249 2249 # We don't want to just call wctx.walk here, since it would return a lot of
2250 2250 # clean files, which we aren't interested in and takes time.
2251 2251 for f in sorted(dirstate.walk(badmatch, sorted(wctx.substate),
2252 2252 True, False, full=False)):
2253 2253 exact = match.exact(f)
2254 2254 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2255 2255 if cca:
2256 2256 cca(f)
2257 2257 names.append(f)
2258 2258 if ui.verbose or not exact:
2259 2259 ui.status(_('adding %s\n') % match.rel(f))
2260 2260
2261 2261 for subpath in sorted(wctx.substate):
2262 2262 sub = wctx.sub(subpath)
2263 2263 try:
2264 2264 submatch = matchmod.narrowmatcher(subpath, match)
2265 2265 if opts.get('subrepos'):
2266 2266 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2267 2267 else:
2268 2268 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2269 2269 except error.LookupError:
2270 2270 ui.status(_("skipping missing subrepository: %s\n")
2271 2271 % join(subpath))
2272 2272
2273 2273 if not opts.get('dry_run'):
2274 2274 rejected = wctx.add(names, prefix)
2275 2275 bad.extend(f for f in rejected if f in match.files())
2276 2276 return bad
2277 2277
2278 2278 def forget(ui, repo, match, prefix, explicitonly):
2279 2279 join = lambda f: os.path.join(prefix, f)
2280 2280 bad = []
2281 2281 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2282 2282 wctx = repo[None]
2283 2283 forgot = []
2284 2284
2285 2285 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2286 2286 forget = sorted(s[0] + s[1] + s[3] + s[6])
2287 2287 if explicitonly:
2288 2288 forget = [f for f in forget if match.exact(f)]
2289 2289
2290 2290 for subpath in sorted(wctx.substate):
2291 2291 sub = wctx.sub(subpath)
2292 2292 try:
2293 2293 submatch = matchmod.narrowmatcher(subpath, match)
2294 2294 subbad, subforgot = sub.forget(submatch, prefix)
2295 2295 bad.extend([subpath + '/' + f for f in subbad])
2296 2296 forgot.extend([subpath + '/' + f for f in subforgot])
2297 2297 except error.LookupError:
2298 2298 ui.status(_("skipping missing subrepository: %s\n")
2299 2299 % join(subpath))
2300 2300
2301 2301 if not explicitonly:
2302 2302 for f in match.files():
2303 2303 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2304 2304 if f not in forgot:
2305 2305 if repo.wvfs.exists(f):
2306 2306 # Don't complain if the exact case match wasn't given.
2307 2307 # But don't do this until after checking 'forgot', so
2308 2308 # that subrepo files aren't normalized, and this op is
2309 2309 # purely from data cached by the status walk above.
2310 2310 if repo.dirstate.normalize(f) in repo.dirstate:
2311 2311 continue
2312 2312 ui.warn(_('not removing %s: '
2313 2313 'file is already untracked\n')
2314 2314 % match.rel(f))
2315 2315 bad.append(f)
2316 2316
2317 2317 for f in forget:
2318 2318 if ui.verbose or not match.exact(f):
2319 2319 ui.status(_('removing %s\n') % match.rel(f))
2320 2320
2321 2321 rejected = wctx.forget(forget, prefix)
2322 2322 bad.extend(f for f in rejected if f in match.files())
2323 2323 forgot.extend(f for f in forget if f not in rejected)
2324 2324 return bad, forgot
2325 2325
2326 2326 def files(ui, ctx, m, fm, fmt, subrepos):
2327 2327 rev = ctx.rev()
2328 2328 ret = 1
2329 2329 ds = ctx.repo().dirstate
2330 2330
2331 2331 for f in ctx.matches(m):
2332 2332 if rev is None and ds[f] == 'r':
2333 2333 continue
2334 2334 fm.startitem()
2335 2335 if ui.verbose:
2336 2336 fc = ctx[f]
2337 2337 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2338 2338 fm.data(abspath=f)
2339 2339 fm.write('path', fmt, m.rel(f))
2340 2340 ret = 0
2341 2341
2342 2342 for subpath in sorted(ctx.substate):
2343 2343 def matchessubrepo(subpath):
2344 2344 return (m.always() or m.exact(subpath)
2345 2345 or any(f.startswith(subpath + '/') for f in m.files()))
2346 2346
2347 2347 if subrepos or matchessubrepo(subpath):
2348 2348 sub = ctx.sub(subpath)
2349 2349 try:
2350 2350 submatch = matchmod.narrowmatcher(subpath, m)
2351 2351 if sub.printfiles(ui, submatch, fm, fmt, subrepos) == 0:
2352 2352 ret = 0
2353 2353 except error.LookupError:
2354 2354 ui.status(_("skipping missing subrepository: %s\n")
2355 2355 % m.abs(subpath))
2356 2356
2357 2357 return ret
2358 2358
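The matchessubrepo predicate defined inside files() above reduces to a small pure function. A sketch with hypothetical parameter names (the real code queries the match object's `always()`, `exact()`, and `files()` methods): a subrepo is visited when the matcher matches everything, names the subrepo exactly, or names a file underneath it.

```python
def matchessubrepo(always, exactpaths, files, subpath):
    """Decide whether a subrepo at `subpath` should be descended into."""
    return (always
            or subpath in exactpaths
            or any(f.startswith(subpath + '/') for f in files))
```

Note the `subpath + '/'` suffix: it keeps a subrepo named `sub` from matching an unrelated file like `subdir/x`.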
2359 2359 def remove(ui, repo, m, prefix, after, force, subrepos):
2360 2360 join = lambda f: os.path.join(prefix, f)
2361 2361 ret = 0
2362 2362 s = repo.status(match=m, clean=True)
2363 2363 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2364 2364
2365 2365 wctx = repo[None]
2366 2366
2367 2367 for subpath in sorted(wctx.substate):
2368 2368 def matchessubrepo(matcher, subpath):
2369 2369 if matcher.exact(subpath):
2370 2370 return True
2371 2371 for f in matcher.files():
2372 2372 if f.startswith(subpath):
2373 2373 return True
2374 2374 return False
2375 2375
2376 2376 if subrepos or matchessubrepo(m, subpath):
2377 2377 sub = wctx.sub(subpath)
2378 2378 try:
2379 2379 submatch = matchmod.narrowmatcher(subpath, m)
2380 2380 if sub.removefiles(submatch, prefix, after, force, subrepos):
2381 2381 ret = 1
2382 2382 except error.LookupError:
2383 2383 ui.status(_("skipping missing subrepository: %s\n")
2384 2384 % join(subpath))
2385 2385
2386 2386 # warn about failure to delete explicit files/dirs
2387 2387 deleteddirs = util.dirs(deleted)
2388 2388 for f in m.files():
2389 2389 def insubrepo():
2390 2390 for subpath in wctx.substate:
2391 2391 if f.startswith(subpath):
2392 2392 return True
2393 2393 return False
2394 2394
2395 2395 isdir = f in deleteddirs or wctx.hasdir(f)
2396 2396 if f in repo.dirstate or isdir or f == '.' or insubrepo():
2397 2397 continue
2398 2398
2399 2399 if repo.wvfs.exists(f):
2400 2400 if repo.wvfs.isdir(f):
2401 2401 ui.warn(_('not removing %s: no tracked files\n')
2402 2402 % m.rel(f))
2403 2403 else:
2404 2404 ui.warn(_('not removing %s: file is untracked\n')
2405 2405 % m.rel(f))
2406 2406 # missing files will generate a warning elsewhere
2407 2407 ret = 1
2408 2408
2409 2409 if force:
2410 2410 list = modified + deleted + clean + added
2411 2411 elif after:
2412 2412 list = deleted
2413 2413 for f in modified + added + clean:
2414 2414 ui.warn(_('not removing %s: file still exists\n') % m.rel(f))
2415 2415 ret = 1
2416 2416 else:
2417 2417 list = deleted + clean
2418 2418 for f in modified:
2419 2419 ui.warn(_('not removing %s: file is modified (use -f'
2420 2420 ' to force removal)\n') % m.rel(f))
2421 2421 ret = 1
2422 2422 for f in added:
2423 2423 ui.warn(_('not removing %s: file has been marked for add'
2424 2424 ' (use forget to undo)\n') % m.rel(f))
2425 2425 ret = 1
2426 2426
2427 2427 for f in sorted(list):
2428 2428 if ui.verbose or not m.exact(f):
2429 2429 ui.status(_('removing %s\n') % m.rel(f))
2430 2430
2431 2431 wlock = repo.wlock()
2432 2432 try:
2433 2433 if not after:
2434 2434 for f in list:
2435 2435 if f in added:
2436 2436 continue # we never unlink added files on remove
2437 2437 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
2438 2438 repo[None].forget(list)
2439 2439 finally:
2440 2440 wlock.release()
2441 2441
2442 2442 return ret
2443 2443
2444 2444 def cat(ui, repo, ctx, matcher, prefix, **opts):
2445 2445 err = 1
2446 2446
2447 2447 def write(path):
2448 2448 fp = makefileobj(repo, opts.get('output'), ctx.node(),
2449 2449 pathname=os.path.join(prefix, path))
2450 2450 data = ctx[path].data()
2451 2451 if opts.get('decode'):
2452 2452 data = repo.wwritedata(path, data)
2453 2453 fp.write(data)
2454 2454 fp.close()
2455 2455
2456 2456 # Automation often uses hg cat on single files, so special case it
2457 2457 # for performance to avoid the cost of parsing the manifest.
2458 2458 if len(matcher.files()) == 1 and not matcher.anypats():
2459 2459 file = matcher.files()[0]
2460 2460 mf = repo.manifest
2461 2461 mfnode = ctx.manifestnode()
2462 2462 if mfnode and mf.find(mfnode, file)[0]:
2463 2463 write(file)
2464 2464 return 0
2465 2465
2466 2466 # Don't warn about "missing" files that are really in subrepos
2467 2467 def badfn(path, msg):
2468 2468 for subpath in ctx.substate:
2469 2469 if path.startswith(subpath):
2470 2470 return
2471 2471 matcher.bad(path, msg)
2472 2472
2473 2473 for abs in ctx.walk(matchmod.badmatch(matcher, badfn)):
2474 2474 write(abs)
2475 2475 err = 0
2476 2476
2477 2477 for subpath in sorted(ctx.substate):
2478 2478 sub = ctx.sub(subpath)
2479 2479 try:
2480 2480 submatch = matchmod.narrowmatcher(subpath, matcher)
2481 2481
2482 2482 if not sub.cat(submatch, os.path.join(prefix, sub._path),
2483 2483 **opts):
2484 2484 err = 0
2485 2485 except error.RepoLookupError:
2486 2486 ui.status(_("skipping missing subrepository: %s\n")
2487 2487 % os.path.join(prefix, subpath))
2488 2488
2489 2489 return err
2490 2490
2491 2491 def commit(ui, repo, commitfunc, pats, opts):
2492 2492 '''commit the specified files or all outstanding changes'''
2493 2493 date = opts.get('date')
2494 2494 if date:
2495 2495 opts['date'] = util.parsedate(date)
2496 2496 message = logmessage(ui, opts)
2497 2497 matcher = scmutil.match(repo[None], pats, opts)
2498 2498
2499 2499 # extract addremove carefully -- this function can be called from a command
2500 2500 # that doesn't support addremove
2501 2501 if opts.get('addremove'):
2502 2502 if scmutil.addremove(repo, matcher, "", opts) != 0:
2503 2503 raise error.Abort(
2504 2504 _("failed to mark all new/missing files as added/removed"))
2505 2505
2506 2506 return commitfunc(ui, repo, message, matcher, opts)
2507 2507
2508 2508 def amend(ui, repo, commitfunc, old, extra, pats, opts):
2509 2509 # avoid cycle context -> subrepo -> cmdutil
2510 2510 import context
2511 2511
2512 2512 # amend will reuse the existing user if not specified, but the obsolete
2513 2513 # marker creation requires that the current user's name is specified.
2514 2514 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2515 2515 ui.username() # raise exception if username not set
2516 2516
2517 2517 ui.note(_('amending changeset %s\n') % old)
2518 2518 base = old.p1()
2519 2519 createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
2520 2520
2521 2521 wlock = lock = newid = None
2522 2522 try:
2523 2523 wlock = repo.wlock()
2524 2524 lock = repo.lock()
2525 2525 tr = repo.transaction('amend')
2526 2526 try:
2527 2527 # See if we got a message from -m or -l, if not, open the editor
2528 2528 # with the message of the changeset to amend
2529 2529 message = logmessage(ui, opts)
2530 2530 # ensure logfile does not conflict with later enforcement of the
2531 2531 # message. potential logfile content has been processed by
2532 2532 # `logmessage` anyway.
2533 2533 opts.pop('logfile')
2534 2534 # First, do a regular commit to record all changes in the working
2535 2535 # directory (if there are any)
2536 2536 ui.callhooks = False
2537 2537 activebookmark = repo._activebookmark
2538 2538 try:
2539 2539 repo._activebookmark = None
2540 2540 opts['message'] = 'temporary amend commit for %s' % old
2541 2541 node = commit(ui, repo, commitfunc, pats, opts)
2542 2542 finally:
2543 2543 repo._activebookmark = activebookmark
2544 2544 ui.callhooks = True
2545 2545 ctx = repo[node]
2546 2546
2547 2547 # Participating changesets:
2548 2548 #
2549 2549 # node/ctx o - new (intermediate) commit that contains changes
2550 2550 # | from working dir to go into amending commit
2551 2551 # | (or a workingctx if there were no changes)
2552 2552 # |
2553 2553 # old o - changeset to amend
2554 2554 # |
2555 2555 # base o - parent of amending changeset
2556 2556
2557 2557 # Update extra dict from amended commit (e.g. to preserve graft
2558 2558 # source)
2559 2559 extra.update(old.extra())
2560 2560
2561 2561 # Also update it from the intermediate commit or from the wctx
2562 2562 extra.update(ctx.extra())
2563 2563
2564 2564 if len(old.parents()) > 1:
2565 2565 # ctx.files() isn't reliable for merges, so fall back to the
2566 2566 # slower repo.status() method
2567 2567 files = set([fn for st in repo.status(base, old)[:3]
2568 2568 for fn in st])
2569 2569 else:
2570 2570 files = set(old.files())
2571 2571
2572 2572 # Second, we use either the commit we just did, or if there were no
2573 2573 # changes the parent of the working directory as the version of the
2574 2574 # files in the final amend commit
2575 2575 if node:
2576 2576 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
2577 2577
2578 2578 user = ctx.user()
2579 2579 date = ctx.date()
2580 2580 # Recompute copies (avoid recording a -> b -> a)
2581 2581 copied = copies.pathcopies(base, ctx)
2582 2582 if old.p2():
2583 2583 copied.update(copies.pathcopies(old.p2(), ctx))
2584 2584
2585 2585 # Prune files which were reverted by the updates: if old
2586 2586 # introduced file X and our intermediate commit, node,
2587 2587 # renamed that file, then those two files are the same and
2588 2588 # we can discard X from our list of files. Likewise if X
2589 2589 # was deleted, it's no longer relevant
2590 2590 files.update(ctx.files())
2591 2591
2592 2592 def samefile(f):
2593 2593 if f in ctx.manifest():
2594 2594 a = ctx.filectx(f)
2595 2595 if f in base.manifest():
2596 2596 b = base.filectx(f)
2597 2597 return (not a.cmp(b)
2598 2598 and a.flags() == b.flags())
2599 2599 else:
2600 2600 return False
2601 2601 else:
2602 2602 return f not in base.manifest()
2603 2603 files = [f for f in files if not samefile(f)]
2604 2604
2605 2605 def filectxfn(repo, ctx_, path):
2606 2606 try:
2607 2607 fctx = ctx[path]
2608 2608 flags = fctx.flags()
2609 2609 mctx = context.memfilectx(repo,
2610 2610 fctx.path(), fctx.data(),
2611 2611 islink='l' in flags,
2612 2612 isexec='x' in flags,
2613 2613 copied=copied.get(path))
2614 2614 return mctx
2615 2615 except KeyError:
2616 2616 return None
2617 2617 else:
2618 2618 ui.note(_('copying changeset %s to %s\n') % (old, base))
2619 2619
2620 2620 # Use version of files as in the old cset
2621 2621 def filectxfn(repo, ctx_, path):
2622 2622 try:
2623 2623 return old.filectx(path)
2624 2624 except KeyError:
2625 2625 return None
2626 2626
2627 2627 user = opts.get('user') or old.user()
2628 2628 date = opts.get('date') or old.date()
2629 2629 editform = mergeeditform(old, 'commit.amend')
2630 2630 editor = getcommiteditor(editform=editform, **opts)
2631 2631 if not message:
2632 2632 editor = getcommiteditor(edit=True, editform=editform)
2633 2633 message = old.description()
2634 2634
2635 2635 pureextra = extra.copy()
2636 2636 if 'amend_source' in pureextra:
2637 2637 del pureextra['amend_source']
2638 2638 pureoldextra = old.extra()
2639 2639 if 'amend_source' in pureoldextra:
2640 2640 del pureoldextra['amend_source']
2641 2641 extra['amend_source'] = old.hex()
2642 2642
2643 2643 new = context.memctx(repo,
2644 2644 parents=[base.node(), old.p2().node()],
2645 2645 text=message,
2646 2646 files=files,
2647 2647 filectxfn=filectxfn,
2648 2648 user=user,
2649 2649 date=date,
2650 2650 extra=extra,
2651 2651 editor=editor)
2652 2652
2653 2653 newdesc = changelog.stripdesc(new.description())
2654 2654 if ((not node)
2655 2655 and newdesc == old.description()
2656 2656 and user == old.user()
2657 2657 and date == old.date()
2658 2658 and pureextra == pureoldextra):
2659 2659 # nothing changed. continuing here would create a new node
2660 2660 # anyway because of the amend_source noise.
2661 2661 #
2662 2662 # This is not what we expect from amend.
2663 2663 return old.node()
2664 2664
2665 2665 ph = repo.ui.config('phases', 'new-commit', phases.draft)
2666 2666 try:
2667 2667 if opts.get('secret'):
2668 2668 commitphase = 'secret'
2669 2669 else:
2670 2670 commitphase = old.phase()
2671 2671 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
2672 2672 newid = repo.commitctx(new)
2673 2673 finally:
2674 2674 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
2675 2675 if newid != old.node():
2676 2676 # Reroute the working copy parent to the new changeset
2677 2677 repo.setparents(newid, nullid)
2678 2678
2679 2679 # Move bookmarks from old parent to amend commit
2680 2680 bms = repo.nodebookmarks(old.node())
2681 2681 if bms:
2682 2682 marks = repo._bookmarks
2683 2683 for bm in bms:
2684 2684 ui.debug('moving bookmarks %r from %s to %s\n' %
2685 2685 (marks, old.hex(), hex(newid)))
2686 2686 marks[bm] = newid
2687 2687 marks.recordchange(tr)
2688 2688 # commit the whole amend process
2689 2689 if createmarkers:
2690 2690 # mark the new changeset as successor of the rewritten one
2691 2691 new = repo[newid]
2692 2692 obs = [(old, (new,))]
2693 2693 if node:
2694 2694 obs.append((ctx, ()))
2695 2695
2696 2696 obsolete.createmarkers(repo, obs)
2697 2697 tr.close()
2698 2698 finally:
2699 2699 tr.release()
2700 2700 if not createmarkers and newid != old.node():
2701 2701 # Strip the intermediate commit (if there was one) and the amended
2702 2702 # commit
2703 2703 if node:
2704 2704 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2705 2705 ui.note(_('stripping amended changeset %s\n') % old)
2706 2706 repair.strip(ui, repo, old.node(), topic='amend-backup')
2707 2707 finally:
2708 2708 lockmod.release(lock, wlock)
2709 2709 return newid
2710 2710
2711 2711 def commiteditor(repo, ctx, subs, editform=''):
2712 2712 if ctx.description():
2713 2713 return ctx.description()
2714 2714 return commitforceeditor(repo, ctx, subs, editform=editform,
2715 2715 unchangedmessagedetection=True)
2716 2716
2717 2717 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2718 2718 editform='', unchangedmessagedetection=False):
2719 2719 if not extramsg:
2720 2720 extramsg = _("Leave message empty to abort commit.")
2721 2721
2722 2722 forms = [e for e in editform.split('.') if e]
2723 2723 forms.insert(0, 'changeset')
2724 2724 templatetext = None
2725 2725 while forms:
2726 2726 tmpl = repo.ui.config('committemplate', '.'.join(forms))
2727 2727 if tmpl:
2728 2728 templatetext = committext = buildcommittemplate(
2729 2729 repo, ctx, subs, extramsg, tmpl)
2730 2730 break
2731 2731 forms.pop()
2732 2732 else:
2733 2733 committext = buildcommittext(repo, ctx, subs, extramsg)
2734 2734
2735 2735 # run editor in the repository root
2736 2736 olddir = os.getcwd()
2737 2737 os.chdir(repo.root)
2738 2738
2739 2739 # make in-memory changes visible to external process
2740 2740 tr = repo.currenttransaction()
2741 2741 repo.dirstate.write(tr)
2742 2742 pending = tr and tr.writepending() and repo.root
2743 2743
2744 2744 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2745 2745 editform=editform, pending=pending)
2746 2746 text = re.sub("(?m)^HG:.*(\n|$)", "", editortext)
2747 2747 os.chdir(olddir)
2748 2748
2749 2749 if finishdesc:
2750 2750 text = finishdesc(text)
2751 2751 if not text.strip():
2752 2752 raise error.Abort(_("empty commit message"))
2753 2753 if unchangedmessagedetection and editortext == templatetext:
2754 2754 raise error.Abort(_("commit message unchanged"))
2755 2755
2756 2756 return text
2757 2757
2758 2758 def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
2759 2759 ui = repo.ui
2760 2760 tmpl, mapfile = gettemplate(ui, tmpl, None)
2761 2761
2762 2762 try:
2763 2763 t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
2764 2764 except SyntaxError as inst:
2765 2765 raise error.Abort(inst.args[0])
2766 2766
2767 2767 for k, v in repo.ui.configitems('committemplate'):
2768 2768 if k != 'changeset':
2769 2769 t.t.cache[k] = v
2770 2770
2771 2771 if not extramsg:
2772 2772 extramsg = '' # ensure that extramsg is string
2773 2773
2774 2774 ui.pushbuffer()
2775 2775 t.show(ctx, extramsg=extramsg)
2776 2776 return ui.popbuffer()
2777 2777
2778 2778 def hgprefix(msg):
2779 2779 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2780 2780
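hgprefix silently drops empty lines because of the `if a` filter, which is why a blank line in an extramsg never produces a bare "HG: " line. Repeating the one-liner from above to show that behavior:

```python
def hgprefix(msg):
    # Prefix each non-empty line with "HG: "; empty lines are dropped.
    return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
```

So `hgprefix("a\n\nb")` collapses the blank middle line to give `"HG: a\nHG: b"`.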
2781 2781 def buildcommittext(repo, ctx, subs, extramsg):
2782 2782 edittext = []
2783 2783 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2784 2784 if ctx.description():
2785 2785 edittext.append(ctx.description())
2786 2786 edittext.append("")
2787 2787 edittext.append("") # Empty line between message and comments.
2788 2788 edittext.append(hgprefix(_("Enter commit message."
2789 2789 " Lines beginning with 'HG:' are removed.")))
2790 2790 edittext.append(hgprefix(extramsg))
2791 2791 edittext.append("HG: --")
2792 2792 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2793 2793 if ctx.p2():
2794 2794 edittext.append(hgprefix(_("branch merge")))
2795 2795 if ctx.branch():
2796 2796 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2797 2797 if bookmarks.isactivewdirparent(repo):
2798 2798 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2799 2799 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2800 2800 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2801 2801 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2802 2802 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2803 2803 if not added and not modified and not removed:
2804 2804 edittext.append(hgprefix(_("no files changed")))
2805 2805 edittext.append("")
2806 2806
2807 2807 return "\n".join(edittext)
2808 2808
2809 2809 def commitstatus(repo, node, branch, bheads=None, opts=None):
2810 2810 if opts is None:
2811 2811 opts = {}
2812 2812 ctx = repo[node]
2813 2813 parents = ctx.parents()
2814 2814
2815 2815 if (not opts.get('amend') and bheads and node not in bheads and not
2816 2816 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2817 2817 repo.ui.status(_('created new head\n'))
2818 2818 # The message is not printed for initial roots. For the other
2819 2819 # changesets, it is printed in the following situations:
2820 2820 #
2821 2821 # Par column: for the 2 parents with ...
2822 2822 # N: null or no parent
2823 2823 # B: parent is on another named branch
2824 2824 # C: parent is a regular non head changeset
2825 2825 # H: parent was a branch head of the current branch
2826 2826 # Msg column: whether we print "created new head" message
2827 2827 # In the following, it is assumed that there already exists some
2828 2828 # initial branch heads of the current branch, otherwise nothing is
2829 2829 # printed anyway.
2830 2830 #
2831 2831 # Par Msg Comment
2832 2832 # N N y additional topo root
2833 2833 #
2834 2834 # B N y additional branch root
2835 2835 # C N y additional topo head
2836 2836 # H N n usual case
2837 2837 #
2838 2838 # B B y weird additional branch root
2839 2839 # C B y branch merge
2840 2840 # H B n merge with named branch
2841 2841 #
2842 2842 # C C y additional head from merge
2843 2843 # C H n merge with a head
2844 2844 #
2845 2845 # H H n head merge: head count decreases
2846 2846
2847 2847 if not opts.get('close_branch'):
2848 2848 for r in parents:
2849 2849 if r.closesbranch() and r.branch() == branch:
2850 2850 repo.ui.status(_('reopening closed branch head %d\n') % r)
2851 2851
2852 2852 if repo.ui.debugflag:
2853 2853 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2854 2854 elif repo.ui.verbose:
2855 2855 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2856 2856
2857 2857 def revert(ui, repo, ctx, parents, *pats, **opts):
2858 2858 parent, p2 = parents
2859 2859 node = ctx.node()
2860 2860
2861 2861 mf = ctx.manifest()
2862 2862 if node == p2:
2863 2863 parent = p2
2864 2864 if node == parent:
2865 2865 pmf = mf
2866 2866 else:
2867 2867 pmf = None
2868 2868
2869 2869 # need all matching names in dirstate and manifest of target rev,
2870 2870 # so have to walk both. do not print errors if files exist in one
2871 2871 # but not other. in both cases, filesets should be evaluated against
2872 2872 # workingctx to get consistent result (issue4497). this means 'set:**'
2873 2873 # cannot be used to select missing files from target rev.
2874 2874
2875 2875 # `names` is a mapping for all elements in working copy and target revision
2876 2876 # The mapping is in the form:
2877 2877 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2878 2878 names = {}
2879 2879
2880 2880 wlock = repo.wlock()
2881 2881 try:
2882 2882 ## filling of the `names` mapping
2883 2883 # walk dirstate to fill `names`
2884 2884
2885 2885 interactive = opts.get('interactive', False)
2886 2886 wctx = repo[None]
2887 2887 m = scmutil.match(wctx, pats, opts)
2888 2888
2889 2889 # we'll need this later
2890 2890 targetsubs = sorted(s for s in wctx.substate if m(s))
2891 2891
2892 2892 if not m.always():
2893 2893 for abs in repo.walk(matchmod.badmatch(m, lambda x, y: False)):
2894 2894 names[abs] = m.rel(abs), m.exact(abs)
2895 2895
2896 2896 # walk target manifest to fill `names`
2897 2897
2898 2898 def badfn(path, msg):
2899 2899 if path in names:
2900 2900 return
2901 2901 if path in ctx.substate:
2902 2902 return
2903 2903 path_ = path + '/'
2904 2904 for f in names:
2905 2905 if f.startswith(path_):
2906 2906 return
2907 2907 ui.warn("%s: %s\n" % (m.rel(path), msg))
2908 2908
2909 2909 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2910 2910 if abs not in names:
2911 2911 names[abs] = m.rel(abs), m.exact(abs)
2912 2912
2913 2913 # Find the status of all files in `names`.
2914 2914 m = scmutil.matchfiles(repo, names)
2915 2915
2916 2916 changes = repo.status(node1=node, match=m,
2917 2917 unknown=True, ignored=True, clean=True)
2918 2918 else:
2919 2919 changes = repo.status(node1=node, match=m)
2920 2920 for kind in changes:
2921 2921 for abs in kind:
2922 2922 names[abs] = m.rel(abs), m.exact(abs)
2923 2923
2924 2924 m = scmutil.matchfiles(repo, names)
2925 2925
2926 2926 modified = set(changes.modified)
2927 2927 added = set(changes.added)
2928 2928 removed = set(changes.removed)
2929 2929 _deleted = set(changes.deleted)
2930 2930 unknown = set(changes.unknown)
2931 2931 unknown.update(changes.ignored)
2932 2932 clean = set(changes.clean)
2933 2933 modadded = set()
2934 2934
2935 2935 # split between files known in target manifest and the others
2936 2936 smf = set(mf)
2937 2937
2938 2938 # determine the exact nature of the deleted files
2939 2939 deladded = _deleted - smf
2940 2940 deleted = _deleted - deladded
2941 2941
2942 2942 # We need to account for the state of the file in the dirstate,
2943 2943 # even when we revert against something other than the parent. This
2944 2944 # will slightly alter the behavior of revert (backing up or not,
2945 2945 # deleting or just forgetting, etc).
2946 2946 if parent == node:
2947 2947 dsmodified = modified
2948 2948 dsadded = added
2949 2949 dsremoved = removed
2950 2950 # store all local modifications, useful later for rename detection
2951 2951 localchanges = dsmodified | dsadded
2952 2952 modified, added, removed = set(), set(), set()
2953 2953 else:
2954 2954 changes = repo.status(node1=parent, match=m)
2955 2955 dsmodified = set(changes.modified)
2956 2956 dsadded = set(changes.added)
2957 2957 dsremoved = set(changes.removed)
2958 2958 # store all local modifications, useful later for rename detection
2959 2959 localchanges = dsmodified | dsadded
2960 2960
2961 2961 # only take into account for removes between wc and target
2962 2962 clean |= dsremoved - removed
2963 2963 dsremoved &= removed
2964 2964 # distinguish between dirstate removes and the others
2965 2965 removed -= dsremoved
2966 2966
2967 2967 modadded = added & dsmodified
2968 2968 added -= modadded
2969 2969
2970 2970 # tell newly modified files apart.
2971 2971 dsmodified &= modified
2972 2972 dsmodified |= modified & dsadded # dirstate added may need backup
2973 2973 modified -= dsmodified
2974 2974
2975 2975 # We need to wait for some post-processing to update this set
2976 2976 # before making the distinction. The dirstate will be used for
2977 2977 # that purpose.
2978 2978 dsadded = added
2979 2979
2980 2980 # in case of merge, files that are actually added can be reported as
2981 2981 # modified, we need to post-process the result
2982 2982 if p2 != nullid:
2983 2983 if pmf is None:
2984 2984 # only need parent manifest in the merge case,
2985 2985 # so do not read by default
2986 2986 pmf = repo[parent].manifest()
2987 2987 mergeadd = dsmodified - set(pmf)
2988 2988 dsadded |= mergeadd
2989 2989 dsmodified -= mergeadd
2990 2990
2991 2991 # if f is a rename, update `names` to also revert the source
2992 2992 cwd = repo.getcwd()
2993 2993 for f in localchanges:
2994 2994 src = repo.dirstate.copied(f)
2995 2995 # XXX should we check for rename down to target node?
2996 2996 if src and src not in names and repo.dirstate[src] == 'r':
2997 2997 dsremoved.add(src)
2998 2998 names[src] = (repo.pathto(src, cwd), True)
2999 2999
3000 3000 # distinguish between files to forget and the others
3001 3001 added = set()
3002 3002 for abs in dsadded:
3003 3003 if repo.dirstate[abs] != 'a':
3004 3004 added.add(abs)
3005 3005 dsadded -= added
3006 3006
3007 3007 for abs in deladded:
3008 3008 if repo.dirstate[abs] == 'a':
3009 3009 dsadded.add(abs)
3010 3010 deladded -= dsadded
3011 3011
3012 3012 # For files marked as removed, we check if an unknown file is present
3013 3013 # at the same path. If such a file exists it may need to be backed up.
3014 3014 # Making the distinction at this stage keeps the backup logic
3015 3015 # simpler.
3016 3016 removunk = set()
3017 3017 for abs in removed:
3018 3018 target = repo.wjoin(abs)
3019 3019 if os.path.lexists(target):
3020 3020 removunk.add(abs)
3021 3021 removed -= removunk
3022 3022
3023 3023 dsremovunk = set()
3024 3024 for abs in dsremoved:
3025 3025 target = repo.wjoin(abs)
3026 3026 if os.path.lexists(target):
3027 3027 dsremovunk.add(abs)
3028 3028 dsremoved -= dsremovunk
3029 3029
3030 3030 # action to be actually performed by revert
3031 3031 # (<list of files>, <message>) tuple
3032 3032 actions = {'revert': ([], _('reverting %s\n')),
3033 3033 'add': ([], _('adding %s\n')),
3034 3034 'remove': ([], _('removing %s\n')),
3035 3035 'drop': ([], _('removing %s\n')),
3036 3036 'forget': ([], _('forgetting %s\n')),
3037 3037 'undelete': ([], _('undeleting %s\n')),
3038 3038 'noop': (None, _('no changes needed to %s\n')),
3039 3039 'unknown': (None, _('file not managed: %s\n')),
3040 3040 }
3041 3041
3042 3042 # "constants" that convey the backup strategy.
3043 3043 # All set to `discard` if `no-backup` is set, to avoid checking
3044 3044 # no_backup lower in the code.
3045 3045 # These values are ordered for comparison purposes
3046 3046 backup = 2 # unconditionally do backup
3047 3047 check = 1 # check if the existing file differs from target
3048 3048 discard = 0 # never do backup
3049 3049 if opts.get('no_backup'):
3050 3050 backup = check = discard
3051 3051
3052 3052 backupanddel = actions['remove']
3053 3053 if not opts.get('no_backup'):
3054 3054 backupanddel = actions['drop']
3055 3055
3056 3056 disptable = (
3057 3057 # dispatch table:
3058 3058 # file state
3059 3059 # action
3060 3060 # make backup
3061 3061
3062 3062 ## Sets whose results will change files on disk
3063 3063 # Modified compared to target, no local change
3064 3064 (modified, actions['revert'], discard),
3065 3065 # Modified compared to target, but local file is deleted
3066 3066 (deleted, actions['revert'], discard),
3067 3067 # Modified compared to target, local change
3068 3068 (dsmodified, actions['revert'], backup),
3069 3069 # Added since target
3070 3070 (added, actions['remove'], discard),
3071 3071 # Added in working directory
3072 3072 (dsadded, actions['forget'], discard),
3073 3073 # Added since target, have local modification
3074 3074 (modadded, backupanddel, backup),
3075 3075 # Added since target but file is missing in working directory
3076 3076 (deladded, actions['drop'], discard),
3077 3077 # Removed since target, before working copy parent
3078 3078 (removed, actions['add'], discard),
3079 3079 # Same as `removed` but an unknown file exists at the same path
3080 3080 (removunk, actions['add'], check),
3081 3081 # Removed since target, marked as such in working copy parent
3082 3082 (dsremoved, actions['undelete'], discard),
3083 3083 # Same as `dsremoved` but an unknown file exists at the same path
3084 3084 (dsremovunk, actions['undelete'], check),
3085 3085 ## the following sets do not result in any file changes
3086 3086 # File with no modification
3087 3087 (clean, actions['noop'], discard),
3088 3088 # Existing file, not tracked anywhere
3089 3089 (unknown, actions['unknown'], discard),
3090 3090 )
3091 3091
3092 3092 for abs, (rel, exact) in sorted(names.items()):
3093 3093 # target file to be touched on disk (relative to cwd)
3094 3094 target = repo.wjoin(abs)
3095 3095 # search the entry in the dispatch table.
3096 3096 # if the file is in any of these sets, it was touched in the working
3097 3097 # directory parent and we are sure it needs to be reverted.
3098 3098 for table, (xlist, msg), dobackup in disptable:
3099 3099 if abs not in table:
3100 3100 continue
3101 3101 if xlist is not None:
3102 3102 xlist.append(abs)
3103 3103 if dobackup and (backup <= dobackup
3104 3104 or wctx[abs].cmp(ctx[abs])):
3105 3105 bakname = origpath(ui, repo, rel)
3106 3106 ui.note(_('saving current version of %s as %s\n') %
3107 3107 (rel, bakname))
3108 3108 if not opts.get('dry_run'):
3109 3109 if interactive:
3110 3110 util.copyfile(target, bakname)
3111 3111 else:
3112 3112 util.rename(target, bakname)
3113 3113 if ui.verbose or not exact:
3114 3114 if not isinstance(msg, basestring):
3115 3115 msg = msg(abs)
3116 3116 ui.status(msg % rel)
3117 3117 elif exact:
3118 3118 ui.warn(msg % rel)
3119 3119 break
3120 3120
3121 3121 if not opts.get('dry_run'):
3122 3122 needdata = ('revert', 'add', 'undelete')
3123 3123 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
3124 3124 _performrevert(repo, parents, ctx, actions, interactive)
3125 3125
3126 3126 if targetsubs:
3127 3127 # Revert the subrepos on the revert list
3128 3128 for sub in targetsubs:
3129 3129 try:
3130 3130 wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
3131 3131 except KeyError:
3132 3132 raise error.Abort("subrepository '%s' does not exist in %s!"
3133 3133 % (sub, short(ctx.node())))
3134 3134 finally:
3135 3135 wlock.release()
3136 3136
3137 3137 def origpath(ui, repo, filepath):
3138 3138 '''customize where .orig files are created
3139 3139
3140 3140 Fetch user defined path from config file: [ui] origbackuppath = <path>
3141 3141 Fall back to default (filepath) if not specified
3142 3142 '''
3143 3143 origbackuppath = ui.config('ui', 'origbackuppath', None)
3144 3144 if origbackuppath is None:
3145 3145 return filepath + ".orig"
3146 3146
3147 3147 filepathfromroot = os.path.relpath(filepath, start=repo.root)
3148 3148 fullorigpath = repo.wjoin(origbackuppath, filepathfromroot)
3149 3149
3150 3150 origbackupdir = repo.vfs.dirname(fullorigpath)
3151 3151 if not repo.vfs.exists(origbackupdir):
3152 3152 ui.note(_('creating directory: %s\n') % origbackupdir)
3153 3153 util.makedirs(origbackupdir)
3154 3154
3155 3155 return fullorigpath + ".orig"
3156 3156
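The backup-path computation in ``origpath`` above can be sketched outside Mercurial. This is a hedged stand-in: ``origpath_sketch`` is a hypothetical helper, plain ``os.path.join`` replaces ``repo.wjoin``, and the directory-creation step of the real function is omitted.

```python
import os.path

def origpath_sketch(origbackuppath, root, filepath):
    # No [ui] origbackuppath configured: put the backup next to the file.
    if origbackuppath is None:
        return filepath + ".orig"
    # Otherwise mirror the repo-relative path under the configured
    # backup directory (os.path.join stands in for repo.wjoin here).
    rel = os.path.relpath(filepath, start=root)
    return os.path.join(root, origbackuppath, rel) + ".orig"

# e.g. origpath_sketch(None, "/repo", "/repo/a/b.txt")
# -> "/repo/a/b.txt.orig"
```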
3157 3157 def _revertprefetch(repo, ctx, *files):
3158 3158 """Let extensions changing the storage layer prefetch content"""
3159 3159 pass
3160 3160
3161 3161 def _performrevert(repo, parents, ctx, actions, interactive=False):
3162 3162 """function that actually performs all the actions computed for revert
3163 3163
3164 3164 This is an independent function to let extensions plug in and react to
3165 3165 the imminent revert.
3166 3166
3167 3167 Make sure you have the working directory locked when calling this function.
3168 3168 """
3169 3169 parent, p2 = parents
3170 3170 node = ctx.node()
3171 3171 def checkout(f):
3172 3172 fc = ctx[f]
3173 3173 repo.wwrite(f, fc.data(), fc.flags())
3174 3174
3175 3175 audit_path = pathutil.pathauditor(repo.root)
3176 3176 for f in actions['forget'][0]:
3177 3177 repo.dirstate.drop(f)
3178 3178 for f in actions['remove'][0]:
3179 3179 audit_path(f)
3180 3180 try:
3181 3181 util.unlinkpath(repo.wjoin(f))
3182 3182 except OSError:
3183 3183 pass
3184 3184 repo.dirstate.remove(f)
3185 3185 for f in actions['drop'][0]:
3186 3186 audit_path(f)
3187 3187 repo.dirstate.remove(f)
3188 3188
3189 3189 normal = None
3190 3190 if node == parent:
3191 3191 # We're reverting to our parent. If possible, we'd like status
3192 3192 # to report the file as clean. We have to use normallookup for
3193 3193 # merges to avoid losing information about merged/dirty files.
3194 3194 if p2 != nullid:
3195 3195 normal = repo.dirstate.normallookup
3196 3196 else:
3197 3197 normal = repo.dirstate.normal
3198 3198
3199 3199 newlyaddedandmodifiedfiles = set()
3200 3200 if interactive:
3201 3201 # Prompt the user for changes to revert
3202 3202 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
3203 3203 m = scmutil.match(ctx, torevert, {})
3204 3204 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
3205 3205 diffopts.nodates = True
3206 3206 diffopts.git = True
3207 3207 reversehunks = repo.ui.configbool('experimental',
3208 3208 'revertalternateinteractivemode',
3209 3209 True)
3210 3210 if reversehunks:
3211 3211 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3212 3212 else:
3213 3213 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3214 3214 originalchunks = patch.parsepatch(diff)
3215 3215
3216 3216 try:
3217 3217
3218 3218 chunks, opts = recordfilter(repo.ui, originalchunks)
3219 3219 if reversehunks:
3220 3220 chunks = patch.reversehunks(chunks)
3221 3221
3222 3222 except patch.PatchError as err:
3223 3223 raise error.Abort(_('error parsing patch: %s') % err)
3224 3224
3225 3225 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3226 3226 # Apply changes
3227 3227 fp = cStringIO.StringIO()
3228 3228 for c in chunks:
3229 3229 c.write(fp)
3230 3230 dopatch = fp.tell()
3231 3231 fp.seek(0)
3232 3232 if dopatch:
3233 3233 try:
3234 3234 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3235 3235 except patch.PatchError as err:
3236 3236 raise error.Abort(str(err))
3237 3237 del fp
3238 3238 else:
3239 3239 for f in actions['revert'][0]:
3240 3240 checkout(f)
3241 3241 if normal:
3242 3242 normal(f)
3243 3243
3244 3244 for f in actions['add'][0]:
3245 3245 # Don't checkout modified files, they are already created by the diff
3246 3246 if f not in newlyaddedandmodifiedfiles:
3247 3247 checkout(f)
3248 3248 repo.dirstate.add(f)
3249 3249
3250 3250 normal = repo.dirstate.normallookup
3251 3251 if node == parent and p2 == nullid:
3252 3252 normal = repo.dirstate.normal
3253 3253 for f in actions['undelete'][0]:
3254 3254 checkout(f)
3255 3255 normal(f)
3256 3256
3257 3257 copied = copies.pathcopies(repo[parent], ctx)
3258 3258
3259 3259 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3260 3260 if f in copied:
3261 3261 repo.dirstate.copy(copied[f], f)
3262 3262
3263 3263 def command(table):
3264 3264 """Returns a function object to be used as a decorator for making commands.
3265 3265
3266 3266 This function receives a command table as its argument. The table should
3267 3267 be a dict.
3268 3268
3269 3269 The returned function can be used as a decorator for adding commands
3270 3270 to that command table. This function accepts multiple arguments to define
3271 3271 a command.
3272 3272
3273 3273 The first argument is the command name.
3274 3274
3275 3275 The options argument is an iterable of tuples defining command arguments.
3276 3276 See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.
3277 3277
3278 3278 The synopsis argument defines a short, one line summary of how to use the
3279 3279 command. This shows up in the help output.
3280 3280
3281 3281 The norepo argument defines whether the command does not require a
3282 3282 local repository. Most commands operate against a repository, thus the
3283 3283 default is False.
3284 3284
3285 3285 The optionalrepo argument defines whether the command optionally requires
3286 3286 a local repository.
3287 3287
3288 3288 The inferrepo argument defines whether to try to find a repository from the
3289 3289 command line arguments. If True, arguments will be examined for potential
3290 3290 repository locations. See ``findrepo()``. If a repository is found, it
3291 3291 will be used.
3292 3292 """
3293 3293 def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
3294 3294 inferrepo=False):
3295 3295 def decorator(func):
3296 3296 if synopsis:
3297 3297 table[name] = func, list(options), synopsis
3298 3298 else:
3299 3299 table[name] = func, list(options)
3300 3300
3301 3301 if norepo:
3302 3302 # Avoid import cycle.
3303 3303 import commands
3304 3304 commands.norepo += ' %s' % ' '.join(parsealiases(name))
3305 3305
3306 3306 if optionalrepo:
3307 3307 import commands
3308 3308 commands.optionalrepo += ' %s' % ' '.join(parsealiases(name))
3309 3309
3310 3310 if inferrepo:
3311 3311 import commands
3312 3312 commands.inferrepo += ' %s' % ' '.join(parsealiases(name))
3313 3313
3314 3314 return func
3315 3315 return decorator
3316 3316
3317 3317 return cmd
3318 3318
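The decorator factory above can be exercised against a toy table. This sketch keeps only the table-registration core (the ``norepo``/``optionalrepo``/``inferrepo`` branches and their ``commands`` bookkeeping are dropped), and ``hello`` is a made-up command, not part of Mercurial:

```python
# Build a command table and a decorator bound to it, mirroring the
# pattern above (extra registration branches omitted).
table = {}

def command(table):
    def cmd(name, options=(), synopsis=None):
        def decorator(func):
            # With a synopsis, the entry is a 3-tuple; otherwise a 2-tuple.
            if synopsis:
                table[name] = func, list(options), synopsis
            else:
                table[name] = func, list(options)
            return func
        return decorator
    return cmd

cmd = command(table)

# Hypothetical command: name, fancyopts-style option tuples, synopsis.
@cmd('hello', [('g', 'greeting', 'Hi', 'greeting to use')],
     'hg hello [NAME]')
def hello(ui, name='world', greeting='Hi'):
    return '%s, %s' % (greeting, name)
```

After the decorator runs, ``table['hello']`` holds the function, its option list, and the synopsis, ready for a dispatcher to look up.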
3319 3319 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3320 3320 # commands.outgoing. "missing" is "missing" of the result of
3321 3321 # "findcommonoutgoing()"
3322 3322 outgoinghooks = util.hooks()
3323 3323
3324 3324 # a list of (ui, repo) functions called by commands.summary
3325 3325 summaryhooks = util.hooks()
3326 3326
3327 3327 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3328 3328 #
3329 3329 # functions should return tuple of booleans below, if 'changes' is None:
3330 3330 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3331 3331 #
3332 3332 # otherwise, 'changes' is a tuple of tuples below:
3333 3333 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3334 3334 # - (desturl, destbranch, destpeer, outgoing)
3335 3335 summaryremotehooks = util.hooks()
3336 3336
3337 3337 # A list of state files kept by multistep operations like graft.
3338 3338 # Since graft cannot be aborted, it is considered 'clearable' by update.
3339 3339 # note: bisect is intentionally excluded
3340 3340 # (state file, clearable, allowcommit, error, hint)
3341 3341 unfinishedstates = [
3342 3342 ('graftstate', True, False, _('graft in progress'),
3343 3343 _("use 'hg graft --continue' or 'hg update' to abort")),
3344 3344 ('updatestate', True, False, _('last update was interrupted'),
3345 3345 _("use 'hg update' to get a consistent checkout"))
3346 3346 ]
3347 3347
3348 3348 def checkunfinished(repo, commit=False):
3349 3349 '''Look for an unfinished multistep operation, like graft, and abort
3350 3350 if found. It's probably good to check this right before
3351 3351 bailifchanged().
3352 3352 '''
3353 3353 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3354 3354 if commit and allowcommit:
3355 3355 continue
3356 3356 if repo.vfs.exists(f):
3357 3357 raise error.Abort(msg, hint=hint)
3358 3358
3359 3359 def clearunfinished(repo):
3360 3360 '''Check for unfinished operations (as above), and clear the ones
3361 3361 that are clearable.
3362 3362 '''
3363 3363 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3364 3364 if not clearable and repo.vfs.exists(f):
3365 3365 raise error.Abort(msg, hint=hint)
3366 3366 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3367 3367 if clearable and repo.vfs.exists(f):
3368 3368 util.unlink(repo.join(f))
3369 3369
3370 3370 class dirstateguard(object):
3371 3371 '''Restore dirstate on unexpected failure.
3372 3372
3373 3373 At construction, this class does:
3374 3374
3375 3375 - write current ``repo.dirstate`` out, and
3376 3376 - save ``.hg/dirstate`` into the backup file
3377 3377
3378 3378 This restores ``.hg/dirstate`` from backup file, if ``release()``
3379 3379 is invoked before ``close()``.
3380 3380
3381 3381 This just removes the backup file at ``close()`` before ``release()``.
3382 3382 '''
3383 3383
3384 3384 def __init__(self, repo, name):
3385 3385 self._repo = repo
3386 3386 self._suffix = '.backup.%s.%d' % (name, id(self))
3387 3387 repo.dirstate._savebackup(repo.currenttransaction(), self._suffix)
3388 3388 self._active = True
3389 3389 self._closed = False
3390 3390
3391 3391 def __del__(self):
3392 3392 if self._active: # still active
3393 3393 # this may occur, even if this class is used correctly:
3394 3394 # for example, releasing other resources like transaction
3395 3395 # may raise exception before ``dirstateguard.release`` in
3396 3396 # ``release(tr, ....)``.
3397 3397 self._abort()
3398 3398
3399 3399 def close(self):
3400 3400 if not self._active: # already inactivated
3401 3401 msg = (_("can't close already inactivated backup: dirstate%s")
3402 3402 % self._suffix)
3403 3403 raise error.Abort(msg)
3404 3404
3405 3405 self._repo.dirstate._clearbackup(self._repo.currenttransaction(),
3406 3406 self._suffix)
3407 3407 self._active = False
3408 3408 self._closed = True
3409 3409
3410 3410 def _abort(self):
3411 3411 self._repo.dirstate._restorebackup(self._repo.currenttransaction(),
3412 3412 self._suffix)
3413 3413 self._active = False
3414 3414
3415 3415 def release(self):
3416 3416 if not self._closed:
3417 3417 if not self._active: # already inactivated
3418 3418 msg = (_("can't release already inactivated backup:"
3419 3419 " dirstate%s")
3420 3420 % self._suffix)
3421 3421 raise error.Abort(msg)
3422 3422 self._abort()
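The close-before-release protocol that ``dirstateguard`` implements can be illustrated with a generic stand-in. Here a plain dict replaces the dirstate backup file, and ``guard`` is a hypothetical class, not Mercurial API:

```python
class guard(object):
    """Minimal sketch of the dirstateguard protocol: snapshot state at
    construction, close() marks success, release() restores the
    snapshot unless close() ran first."""
    def __init__(self, state):
        self._state = state
        self._backup = dict(state)  # stands in for the dirstate backup file
        self._closed = False

    def close(self):
        self._closed = True         # success: backup no longer needed

    def release(self):
        if not self._closed:        # failure path: roll back
            self._state.clear()
            self._state.update(self._backup)

state = {'a': 1}
g = guard(state)
state['a'] = 2    # risky mutation while the guard is active
g.release()       # close() was never called, so the change is undone
# state == {'a': 1}
```

Callers always invoke ``release()`` in a ``finally`` block; only the success path calls ``close()`` first, which is what makes the rollback automatic on any exception.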