update: clarify update() call sites by specifying argument names...
Martin von Zweigbergk
r40402:b14fdf1f default

The requested changes are too big and the diff shown below was truncated.

@@ -1,1659 +1,1658 b''
1 1 # histedit.py - interactive history editing for mercurial
2 2 #
3 3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """interactive history editing
8 8
9 9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 10 is as follows, assuming the following history::
11 11
12 12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 13 | Add delta
14 14 |
15 15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 16 | Add gamma
17 17 |
18 18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 19 | Add beta
20 20 |
21 21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 22 Add alpha
23 23
24 24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 25 file open in your editor::
26 26
27 27 pick c561b4e977df Add beta
28 28 pick 030b686bedc4 Add gamma
29 29 pick 7c2fd3b9020c Add delta
30 30
31 31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 32 #
33 33 # Commits are listed from least to most recent
34 34 #
35 35 # Commands:
36 36 # p, pick = use commit
37 37 # e, edit = use commit, but stop for amending
38 38 # f, fold = use commit, but combine it with the one above
39 39 # r, roll = like fold, but discard this commit's description and date
40 40 # d, drop = remove commit from history
41 41 # m, mess = edit commit message without changing commit content
42 42 # b, base = checkout changeset and apply further changesets from there
43 43 #
44 44
45 45 In this file, lines beginning with ``#`` are ignored. You must specify a rule
46 46 for each revision in your history. For example, if you had meant to add gamma
47 47 before beta, and then wanted to add delta in the same revision as beta, you
48 48 would reorganize the file to look like this::
49 49
50 50 pick 030b686bedc4 Add gamma
51 51 pick c561b4e977df Add beta
52 52 fold 7c2fd3b9020c Add delta
53 53
54 54 # Edit history between c561b4e977df and 7c2fd3b9020c
55 55 #
56 56 # Commits are listed from least to most recent
57 57 #
58 58 # Commands:
59 59 # p, pick = use commit
60 60 # e, edit = use commit, but stop for amending
61 61 # f, fold = use commit, but combine it with the one above
62 62 # r, roll = like fold, but discard this commit's description and date
63 63 # d, drop = remove commit from history
64 64 # m, mess = edit commit message without changing commit content
65 65 # b, base = checkout changeset and apply further changesets from there
66 66 #
67 67
68 68 At which point you close the editor and ``histedit`` starts working. When you
69 69 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
70 70 those revisions together, offering you a chance to clean up the commit message::
71 71
72 72 Add beta
73 73 ***
74 74 Add delta
75 75
76 76 Edit the commit message to your liking, then close the editor. The date used
77 77 for the commit will be the later of the two commits' dates. For this example,
78 78 let's assume that the commit message was changed to ``Add beta and delta.``
79 79 After histedit has run and had a chance to remove any old or temporary
80 80 revisions it needed, the history looks like this::
81 81
82 82 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
83 83 | Add beta and delta.
84 84 |
85 85 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
86 86 | Add gamma
87 87 |
88 88 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
89 89 Add alpha
90 90
91 91 Note that ``histedit`` does *not* remove any revisions (even its own temporary
92 92 ones) until after it has completed all the editing operations, so it will
93 93 probably perform several strip operations when it's done. For the above example,
94 94 it had to run strip twice. Strip can be slow depending on a variety of factors,
95 95 so you might need to be a little patient. You can choose to keep the original
96 96 revisions by passing the ``--keep`` flag.
97 97
98 98 The ``edit`` operation will drop you back to a command prompt,
99 99 allowing you to edit files freely, or even use ``hg record`` to commit
100 100 some changes as a separate commit. When you're done, any remaining
101 101 uncommitted changes will be committed as well. Then run ``hg
102 102 histedit --continue`` to finish this step. If there are uncommitted
103 103 changes, you'll be prompted for a new commit message, but the default
104 104 commit message will be the original message for the ``edit``-ed
105 105 revision, and the date of the original commit will be preserved.
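
A minimal ``edit`` round-trip might look like this (a sketch; the hash is
taken from the example above and your output will differ)::

  hg histedit c561b4e977df      (change 'pick' to 'edit' for one changeset)
  ... histedit stops at that changeset; amend or record as needed ...
  hg histedit --continue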
106 106
107 107 The ``message`` operation will give you a chance to revise a commit
108 108 message without changing the contents. It's a shortcut for doing
109 109 ``edit`` immediately followed by ``hg histedit --continue``.
110 110
111 111 If ``histedit`` encounters a conflict when moving a revision (while
112 112 handling ``pick`` or ``fold``), it'll stop in a similar manner to
113 113 ``edit`` with the difference that it won't prompt you for a commit
114 114 message when done. If you decide at this point that you don't like how
115 115 much work it will be to rearrange history, or that you made a mistake,
116 116 you can use ``hg histedit --abort`` to abandon the new changes you
117 117 have made and return to the state before you attempted to edit your
118 118 history.
119 119
120 120 If we clone the histedit-ed example repository above and add four more
121 121 changes, such that we have the following history::
122 122
123 123 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
124 124 | Add theta
125 125 |
126 126 o 5 140988835471 2009-04-27 18:04 -0500 stefan
127 127 | Add eta
128 128 |
129 129 o 4 122930637314 2009-04-27 18:04 -0500 stefan
130 130 | Add zeta
131 131 |
132 132 o 3 836302820282 2009-04-27 18:04 -0500 stefan
133 133 | Add epsilon
134 134 |
135 135 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
136 136 | Add beta and delta.
137 137 |
138 138 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
139 139 | Add gamma
140 140 |
141 141 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
142 142 Add alpha
143 143
144 144 If you run ``hg histedit --outgoing`` on the clone then it is the same
145 145 as running ``hg histedit 836302820282``. If you plan to push to a
146 146 repository that Mercurial does not detect to be related to the source
147 147 repo, you can add a ``--force`` option.
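
For instance (a sketch; ``OTHERREPO`` stands in for whatever path or URL
you push to)::

  hg histedit --outgoing
  hg histedit --outgoing --force OTHERREPO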
148 148
149 149 Config
150 150 ------
151 151
152 152 Histedit rule lines are truncated to 80 characters by default. You
153 153 can customize this behavior by setting a different length in your
154 154 configuration file::
155 155
156 156 [histedit]
157 157 linelen = 120 # truncate rule lines at 120 characters
158 158
159 159 ``hg histedit`` attempts to automatically choose an appropriate base
160 160 revision to use. To change which base revision is used, define a
161 161 revset in your configuration file::
162 162
163 163 [histedit]
164 164 defaultrev = only(.) & draft()
165 165
166 166 By default, each edited revision needs to be present in the histedit commands.
167 167 To remove a revision you need to use the ``drop`` operation. You can configure
168 168 the drop to be implicit for missing commits by adding::
169 169
170 170 [histedit]
171 171 dropmissing = True
172 172
173 173 By default, histedit will close the transaction after each action. For
174 174 performance purposes, you can configure histedit to use a single transaction
175 175 across the entire histedit. WARNING: This setting introduces a significant risk
176 176 of losing the work you've done in a histedit if the histedit aborts
177 177 unexpectedly::
178 178
179 179 [histedit]
180 180 singletransaction = True
181 181
182 182 """
183 183
184 184 from __future__ import absolute_import
185 185
186 186 import os
187 187
188 188 from mercurial.i18n import _
189 189 from mercurial import (
190 190 bundle2,
191 191 cmdutil,
192 192 context,
193 193 copies,
194 194 destutil,
195 195 discovery,
196 196 error,
197 197 exchange,
198 198 extensions,
199 199 hg,
200 200 lock,
201 201 merge as mergemod,
202 202 mergeutil,
203 203 node,
204 204 obsolete,
205 205 pycompat,
206 206 registrar,
207 207 repair,
208 208 scmutil,
209 209 state as statemod,
210 210 util,
211 211 )
212 212 from mercurial.utils import (
213 213 stringutil,
214 214 )
215 215
216 216 pickle = util.pickle
217 217 release = lock.release
218 218 cmdtable = {}
219 219 command = registrar.command(cmdtable)
220 220
221 221 configtable = {}
222 222 configitem = registrar.configitem(configtable)
223 223 configitem('experimental', 'histedit.autoverb',
224 224 default=False,
225 225 )
226 226 configitem('histedit', 'defaultrev',
227 227 default=None,
228 228 )
229 229 configitem('histedit', 'dropmissing',
230 230 default=False,
231 231 )
232 232 configitem('histedit', 'linelen',
233 233 default=80,
234 234 )
235 235 configitem('histedit', 'singletransaction',
236 236 default=False,
237 237 )
238 238
239 239 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
240 240 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
241 241 # be specifying the version(s) of Mercurial they are tested with, or
242 242 # leave the attribute unspecified.
243 243 testedwith = 'ships-with-hg-core'
244 244
245 245 actiontable = {}
246 246 primaryactions = set()
247 247 secondaryactions = set()
248 248 tertiaryactions = set()
249 249 internalactions = set()
250 250
251 251 def geteditcomment(ui, first, last):
252 252 """ construct the editor comment
253 253 The comment includes::
254 254 - an intro
255 255 - sorted primary commands
256 256 - sorted short commands
257 257 - sorted long commands
258 258 - additional hints
259 259
260 260 Commands are only included once.
261 261 """
262 262 intro = _("""Edit history between %s and %s
263 263
264 264 Commits are listed from least to most recent
265 265
266 266 You can reorder changesets by reordering the lines
267 267
268 268 Commands:
269 269 """)
270 270 actions = []
271 271 def addverb(v):
272 272 a = actiontable[v]
273 273 lines = a.message.split("\n")
274 274 if len(a.verbs):
275 275 v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
276 276 actions.append(" %s = %s" % (v, lines[0]))
277 277 actions.extend([' %s' % l for l in lines[1:]])
278 278
279 279 for v in (
280 280 sorted(primaryactions) +
281 281 sorted(secondaryactions) +
282 282 sorted(tertiaryactions)
283 283 ):
284 284 addverb(v)
285 285 actions.append('')
286 286
287 287 hints = []
288 288 if ui.configbool('histedit', 'dropmissing'):
289 289 hints.append("Deleting a changeset from the list "
290 290 "will DISCARD it from the edited history!")
291 291
292 292 lines = (intro % (first, last)).split('\n') + actions + hints
293 293
294 294 return ''.join(['# %s\n' % l if l else '#\n' for l in lines])
295 295
296 296 class histeditstate(object):
297 297 def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
298 298 topmost=None, replacements=None, lock=None, wlock=None):
299 299 self.repo = repo
300 300 self.actions = actions
301 301 self.keep = keep
302 302 self.topmost = topmost
303 303 self.parentctxnode = parentctxnode
304 304 self.lock = lock
305 305 self.wlock = wlock
306 306 self.backupfile = None
307 307 self.stateobj = statemod.cmdstate(repo, 'histedit-state')
308 308 if replacements is None:
309 309 self.replacements = []
310 310 else:
311 311 self.replacements = replacements
312 312
313 313 def read(self):
314 314 """Load histedit state from disk and set fields appropriately."""
315 315 if not self.stateobj.exists():
316 316 cmdutil.wrongtooltocontinue(self.repo, _('histedit'))
317 317
318 318 data = self._read()
319 319
320 320 self.parentctxnode = data['parentctxnode']
321 321 actions = parserules(data['rules'], self)
322 322 self.actions = actions
323 323 self.keep = data['keep']
324 324 self.topmost = data['topmost']
325 325 self.replacements = data['replacements']
326 326 self.backupfile = data['backupfile']
327 327
328 328 def _read(self):
329 329 fp = self.repo.vfs.read('histedit-state')
330 330 if fp.startswith('v1\n'):
331 331 data = self._load()
332 332 parentctxnode, rules, keep, topmost, replacements, backupfile = data
333 333 else:
334 334 data = pickle.loads(fp)
335 335 parentctxnode, rules, keep, topmost, replacements = data
336 336 backupfile = None
337 337 rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
338 338
339 339 return {'parentctxnode': parentctxnode, "rules": rules, "keep": keep,
340 340 "topmost": topmost, "replacements": replacements,
341 341 "backupfile": backupfile}
342 342
343 343 def write(self, tr=None):
344 344 if tr:
345 345 tr.addfilegenerator('histedit-state', ('histedit-state',),
346 346 self._write, location='plain')
347 347 else:
348 348 with self.repo.vfs("histedit-state", "w") as f:
349 349 self._write(f)
350 350
351 351 def _write(self, fp):
352 352 fp.write('v1\n')
353 353 fp.write('%s\n' % node.hex(self.parentctxnode))
354 354 fp.write('%s\n' % node.hex(self.topmost))
355 355 fp.write('%s\n' % ('True' if self.keep else 'False'))
356 356 fp.write('%d\n' % len(self.actions))
357 357 for action in self.actions:
358 358 fp.write('%s\n' % action.tostate())
359 359 fp.write('%d\n' % len(self.replacements))
360 360 for replacement in self.replacements:
361 361 fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
362 362 for r in replacement[1])))
363 363 backupfile = self.backupfile
364 364 if not backupfile:
365 365 backupfile = ''
366 366 fp.write('%s\n' % backupfile)
367 367
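    # A sketch of the on-disk layout written by _write() above and parsed by
    # _load() below (editor's summary of the code, not normative docs):
    #
    #   v1
    #   <hex of parentctxnode>
    #   <hex of topmost>
    #   True|False                 (the 'keep' flag)
    #   <number of actions>
    #   <verb>\n<hex node>         (one pair per action, via tostate())
    #   <number of replacements>
    #   <old hex><succ hex>...     (concatenated 40-char hashes per line)
    #   <backup file name, or an empty line>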
368 368 def _load(self):
369 369 fp = self.repo.vfs('histedit-state', 'r')
370 370 lines = [l[:-1] for l in fp.readlines()]
371 371
372 372 index = 0
373 373 lines[index] # version number
374 374 index += 1
375 375
376 376 parentctxnode = node.bin(lines[index])
377 377 index += 1
378 378
379 379 topmost = node.bin(lines[index])
380 380 index += 1
381 381
382 382 keep = lines[index] == 'True'
383 383 index += 1
384 384
385 385 # Rules
386 386 rules = []
387 387 rulelen = int(lines[index])
388 388 index += 1
389 389 for i in pycompat.xrange(rulelen):
390 390 ruleaction = lines[index]
391 391 index += 1
392 392 rule = lines[index]
393 393 index += 1
394 394 rules.append((ruleaction, rule))
395 395
396 396 # Replacements
397 397 replacements = []
398 398 replacementlen = int(lines[index])
399 399 index += 1
400 400 for i in pycompat.xrange(replacementlen):
401 401 replacement = lines[index]
402 402 original = node.bin(replacement[:40])
403 403 succ = [node.bin(replacement[i:i + 40]) for i in
404 404 range(40, len(replacement), 40)]
405 405 replacements.append((original, succ))
406 406 index += 1
407 407
408 408 backupfile = lines[index]
409 409 index += 1
410 410
411 411 fp.close()
412 412
413 413 return parentctxnode, rules, keep, topmost, replacements, backupfile
414 414
415 415 def clear(self):
416 416 if self.inprogress():
417 417 self.repo.vfs.unlink('histedit-state')
418 418
419 419 def inprogress(self):
420 420 return self.repo.vfs.exists('histedit-state')
421 421
422 422
423 423 class histeditaction(object):
424 424 def __init__(self, state, node):
425 425 self.state = state
426 426 self.repo = state.repo
427 427 self.node = node
428 428
429 429 @classmethod
430 430 def fromrule(cls, state, rule):
431 431 """Parses the given rule, returning an instance of the histeditaction.
432 432 """
433 433 ruleid = rule.strip().split(' ', 1)[0]
434 434 # ruleid can be anything from rev numbers, hashes, "bookmarks" etc
435 435 # Validate the rule id and get the rulehash
436 436 try:
437 437 rev = node.bin(ruleid)
438 438 except TypeError:
439 439 try:
440 440 _ctx = scmutil.revsingle(state.repo, ruleid)
441 441 rulehash = _ctx.hex()
442 442 rev = node.bin(rulehash)
443 443 except error.RepoLookupError:
444 444 raise error.ParseError(_("invalid changeset %s") % ruleid)
445 445 return cls(state, rev)
446 446
447 447 def verify(self, prev, expected, seen):
448 448 """ Verifies semantic correctness of the rule"""
449 449 repo = self.repo
450 450 ha = node.hex(self.node)
451 451 self.node = scmutil.resolvehexnodeidprefix(repo, ha)
452 452 if self.node is None:
453 453 raise error.ParseError(_('unknown changeset %s listed') % ha[:12])
454 454 self._verifynodeconstraints(prev, expected, seen)
455 455
456 456 def _verifynodeconstraints(self, prev, expected, seen):
457 457 # by default commands need a node in the edited list
458 458 if self.node not in expected:
459 459 raise error.ParseError(_('%s "%s" changeset was not a candidate')
460 460 % (self.verb, node.short(self.node)),
461 461 hint=_('only use listed changesets'))
462 462 # and only one command per node
463 463 if self.node in seen:
464 464 raise error.ParseError(_('duplicated command for changeset %s') %
465 465 node.short(self.node))
466 466
467 467 def torule(self):
468 468 """build a histedit rule line for an action
469 469
470 470 by default lines are in the form:
471 471 <verb> <hash> <rev> <summary>
472 472 """
473 473 ctx = self.repo[self.node]
474 474 summary = _getsummary(ctx)
475 475 line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
476 476 # trim to 75 columns by default so it's not stupidly wide in my editor
477 477 # (the 5 more are left for verb)
478 478 maxlen = self.repo.ui.configint('histedit', 'linelen')
479 479 maxlen = max(maxlen, 22) # avoid truncating hash
480 480 return stringutil.ellipsis(line, maxlen)
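
    # Editor's illustration (hashes from the module docstring): torule()
    # renders something like
    #     pick 7c2fd3b9020c 3 Add delta
    # truncated to histedit.linelen columns by the ellipsis call above.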
481 481
482 482 def tostate(self):
483 483 """Print an action in format used by histedit state files
484 484 (the first line is a verb, the remainder is the second)
485 485 """
486 486 return "%s\n%s" % (self.verb, node.hex(self.node))
487 487
488 488 def run(self):
489 489 """Runs the action. The default behavior is simply apply the action's
490 490 rulectx onto the current parentctx."""
491 491 self.applychange()
492 492 self.continuedirty()
493 493 return self.continueclean()
494 494
495 495 def applychange(self):
496 496 """Applies the changes from this action's rulectx onto the current
497 497 parentctx, but does not commit them."""
498 498 repo = self.repo
499 499 rulectx = repo[self.node]
500 500 repo.ui.pushbuffer(error=True, labeled=True)
501 501 hg.update(repo, self.state.parentctxnode, quietempty=True)
502 502 stats = applychanges(repo.ui, repo, rulectx, {})
503 503 repo.dirstate.setbranch(rulectx.branch())
504 504 if stats.unresolvedcount:
505 505 buf = repo.ui.popbuffer()
506 506 repo.ui.write(buf)
507 507 raise error.InterventionRequired(
508 508 _('Fix up the change (%s %s)') %
509 509 (self.verb, node.short(self.node)),
510 510 hint=_('hg histedit --continue to resume'))
511 511 else:
512 512 repo.ui.popbuffer()
513 513
514 514 def continuedirty(self):
515 515 """Continues the action when changes have been applied to the working
516 516 copy. The default behavior is to commit the dirty changes."""
517 517 repo = self.repo
518 518 rulectx = repo[self.node]
519 519
520 520 editor = self.commiteditor()
521 521 commit = commitfuncfor(repo, rulectx)
522 522
523 523 commit(text=rulectx.description(), user=rulectx.user(),
524 524 date=rulectx.date(), extra=rulectx.extra(), editor=editor)
525 525
526 526 def commiteditor(self):
527 527 """The editor to be used to edit the commit message."""
528 528 return False
529 529
530 530 def continueclean(self):
531 531 """Continues the action when the working copy is clean. The default
532 532 behavior is to accept the current commit as the new version of the
533 533 rulectx."""
534 534 ctx = self.repo['.']
535 535 if ctx.node() == self.state.parentctxnode:
536 536 self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
537 537 node.short(self.node))
538 538 return ctx, [(self.node, tuple())]
539 539 if ctx.node() == self.node:
540 540 # Nothing changed
541 541 return ctx, []
542 542 return ctx, [(self.node, (ctx.node(),))]
543 543
544 544 def commitfuncfor(repo, src):
545 545 """Build a commit function for the replacement of <src>
546 546
547 547 This function ensures we apply the same treatment to all changesets.
548 548
549 549 - Add a 'histedit_source' entry in extra.
550 550
551 551 Note that fold has its own separate logic because its handling is a bit
552 552 different and not easily factored out of the fold method.
553 553 """
554 554 phasemin = src.phase()
555 555 def commitfunc(**kwargs):
556 556 overrides = {('phases', 'new-commit'): phasemin}
557 557 with repo.ui.configoverride(overrides, 'histedit'):
558 558 extra = kwargs.get(r'extra', {}).copy()
559 559 extra['histedit_source'] = src.hex()
560 560 kwargs[r'extra'] = extra
561 561 return repo.commit(**kwargs)
562 562 return commitfunc
563 563
564 564 def applychanges(ui, repo, ctx, opts):
565 565 """Merge changeset from ctx (only) in the current working directory"""
566 566 wcpar = repo.dirstate.parents()[0]
567 567 if ctx.p1().node() == wcpar:
568 568 # edits are "in place"; we do not need to perform any merge,
569 569 # just apply the changes onto the parent for editing
570 570 cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
571 571 stats = mergemod.updateresult(0, 0, 0, 0)
572 572 else:
573 573 try:
574 574 # ui.forcemerge is an internal variable, do not document
575 575 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
576 576 'histedit')
577 577 stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
578 578 finally:
579 579 repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
580 580 return stats
581 581
582 582 def collapse(repo, firstctx, lastctx, commitopts, skipprompt=False):
583 583 """collapse the set of revisions from first to last as new one.
584 584
585 585 Expected commit options are:
586 586 - message
587 587 - date
588 588 - username
589 589 Commit message is edited in all cases.
590 590
591 591 This function works in memory."""
592 592 ctxs = list(repo.set('%d::%d', firstctx.rev(), lastctx.rev()))
593 593 if not ctxs:
594 594 return None
595 595 for c in ctxs:
596 596 if not c.mutable():
597 597 raise error.ParseError(
598 598 _("cannot fold into public change %s") % node.short(c.node()))
599 599 base = firstctx.parents()[0]
600 600
601 601 # commit a new version of the old changeset, including the update
602 602 # collect all files which might be affected
603 603 files = set()
604 604 for ctx in ctxs:
605 605 files.update(ctx.files())
606 606
607 607 # Recompute copies (avoid recording a -> b -> a)
608 608 copied = copies.pathcopies(base, lastctx)
609 609
610 610 # prune files which were reverted by the updates
611 611 files = [f for f in files if not cmdutil.samefile(f, lastctx, base)]
612 612 # commit version of these files as defined by head
613 613 headmf = lastctx.manifest()
614 614 def filectxfn(repo, ctx, path):
615 615 if path in headmf:
616 616 fctx = lastctx[path]
617 617 flags = fctx.flags()
618 618 mctx = context.memfilectx(repo, ctx,
619 619 fctx.path(), fctx.data(),
620 620 islink='l' in flags,
621 621 isexec='x' in flags,
622 622 copied=copied.get(path))
623 623 return mctx
624 624 return None
625 625
626 626 if commitopts.get('message'):
627 627 message = commitopts['message']
628 628 else:
629 629 message = firstctx.description()
630 630 user = commitopts.get('user')
631 631 date = commitopts.get('date')
632 632 extra = commitopts.get('extra')
633 633
634 634 parents = (firstctx.p1().node(), firstctx.p2().node())
635 635 editor = None
636 636 if not skipprompt:
637 637 editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
638 638 new = context.memctx(repo,
639 639 parents=parents,
640 640 text=message,
641 641 files=files,
642 642 filectxfn=filectxfn,
643 643 user=user,
644 644 date=date,
645 645 extra=extra,
646 646 editor=editor)
647 647 return repo.commitctx(new)
648 648
649 649 def _isdirtywc(repo):
650 650 return repo[None].dirty(missing=True)
651 651
652 652 def abortdirty():
653 653 raise error.Abort(_('working copy has pending changes'),
654 654 hint=_('amend, commit, or revert them and run histedit '
655 655 '--continue, or abort with histedit --abort'))
656 656
657 657 def action(verbs, message, priority=False, internal=False):
658 658 def wrap(cls):
659 659 assert not priority or not internal
660 660 verb = verbs[0]
661 661 if priority:
662 662 primaryactions.add(verb)
663 663 elif internal:
664 664 internalactions.add(verb)
665 665 elif len(verbs) > 1:
666 666 secondaryactions.add(verb)
667 667 else:
668 668 tertiaryactions.add(verb)
669 669
670 670 cls.verb = verb
671 671 cls.verbs = verbs
672 672 cls.message = message
673 673 for verb in verbs:
674 674 actiontable[verb] = cls
675 675 return cls
676 676 return wrap
677 677
678 678 @action(['pick', 'p'],
679 679 _('use commit'),
680 680 priority=True)
681 681 class pick(histeditaction):
682 682 def run(self):
683 683 rulectx = self.repo[self.node]
684 684 if rulectx.parents()[0].node() == self.state.parentctxnode:
685 685 self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
686 686 return rulectx, []
687 687
688 688 return super(pick, self).run()
689 689
690 690 @action(['edit', 'e'],
691 691 _('use commit, but stop for amending'),
692 692 priority=True)
693 693 class edit(histeditaction):
694 694 def run(self):
695 695 repo = self.repo
696 696 rulectx = repo[self.node]
697 697 hg.update(repo, self.state.parentctxnode, quietempty=True)
698 698 applychanges(repo.ui, repo, rulectx, {})
699 699 raise error.InterventionRequired(
700 700 _('Editing (%s), you may commit or record as needed now.')
701 701 % node.short(self.node),
702 702 hint=_('hg histedit --continue to resume'))
703 703
704 704 def commiteditor(self):
705 705 return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')
706 706
707 707 @action(['fold', 'f'],
708 708 _('use commit, but combine it with the one above'))
709 709 class fold(histeditaction):
710 710 def verify(self, prev, expected, seen):
711 711 """ Verifies semantic correctness of the fold rule"""
712 712 super(fold, self).verify(prev, expected, seen)
713 713 repo = self.repo
714 714 if not prev:
715 715 c = repo[self.node].parents()[0]
716 716 elif prev.verb not in ('pick', 'base'):
717 717 return
718 718 else:
719 719 c = repo[prev.node]
720 720 if not c.mutable():
721 721 raise error.ParseError(
722 722 _("cannot fold into public change %s") % node.short(c.node()))
723 723
724 724
725 725 def continuedirty(self):
726 726 repo = self.repo
727 727 rulectx = repo[self.node]
728 728
729 729 commit = commitfuncfor(repo, rulectx)
730 730 commit(text='fold-temp-revision %s' % node.short(self.node),
731 731 user=rulectx.user(), date=rulectx.date(),
732 732 extra=rulectx.extra())
733 733
734 734 def continueclean(self):
735 735 repo = self.repo
736 736 ctx = repo['.']
737 737 rulectx = repo[self.node]
738 738 parentctxnode = self.state.parentctxnode
739 739 if ctx.node() == parentctxnode:
740 740 repo.ui.warn(_('%s: empty changeset\n') %
741 741 node.short(self.node))
742 742 return ctx, [(self.node, (parentctxnode,))]
743 743
744 744 parentctx = repo[parentctxnode]
745 745 newcommits = set(c.node() for c in repo.set('(%d::. - %d)',
746 746 parentctx.rev(),
747 747 parentctx.rev()))
748 748 if not newcommits:
749 749 repo.ui.warn(_('%s: cannot fold - working copy is not a '
750 750 'descendant of previous commit %s\n') %
751 751 (node.short(self.node), node.short(parentctxnode)))
752 752 return ctx, [(self.node, (ctx.node(),))]
753 753
754 754 middlecommits = newcommits.copy()
755 755 middlecommits.discard(ctx.node())
756 756
757 757 return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
758 758 middlecommits)
759 759
760 760 def skipprompt(self):
761 761 """Returns true if the rule should skip the message editor.
762 762
763 763 For example, 'fold' wants to show an editor, but 'rollup'
764 764 doesn't want to.
765 765 """
766 766 return False
767 767
768 768 def mergedescs(self):
769 769 """Returns true if the rule should merge messages of multiple changes.
770 770
771 771 This exists mainly so that 'rollup' rules can be a subclass of
772 772 'fold'.
773 773 """
774 774 return True
775 775
776 776 def firstdate(self):
777 777 """Returns true if the rule should preserve the date of the first
778 778 change.
779 779
780 780 This exists mainly so that 'rollup' rules can be a subclass of
781 781 'fold'.
782 782 """
783 783 return False
784 784
785 785 def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
786 786 parent = ctx.parents()[0].node()
787 787 hg.updaterepo(repo, parent, overwrite=False)
788 788 ### prepare new commit data
789 789 commitopts = {}
790 790 commitopts['user'] = ctx.user()
791 791 # commit message
792 792 if not self.mergedescs():
793 793 newmessage = ctx.description()
794 794 else:
795 795 newmessage = '\n***\n'.join(
796 796 [ctx.description()] +
797 797 [repo[r].description() for r in internalchanges] +
798 798 [oldctx.description()]) + '\n'
799 799 commitopts['message'] = newmessage
800 800 # date
801 801 if self.firstdate():
802 802 commitopts['date'] = ctx.date()
803 803 else:
804 804 commitopts['date'] = max(ctx.date(), oldctx.date())
805 805 extra = ctx.extra().copy()
806 806 # histedit_source
807 807 # note: ctx is likely a temporary commit but that's the best we can do
808 808 # here. This is sufficient to solve issue3681 anyway.
809 809 extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
810 810 commitopts['extra'] = extra
811 811 phasemin = max(ctx.phase(), oldctx.phase())
812 812 overrides = {('phases', 'new-commit'): phasemin}
813 813 with repo.ui.configoverride(overrides, 'histedit'):
814 814 n = collapse(repo, ctx, repo[newnode], commitopts,
815 815 skipprompt=self.skipprompt())
816 816 if n is None:
817 817 return ctx, []
818 818 hg.updaterepo(repo, n, overwrite=False)
819 819 replacements = [(oldctx.node(), (newnode,)),
820 820 (ctx.node(), (n,)),
821 821 (newnode, (n,)),
822 822 ]
823 823 for ich in internalchanges:
824 824 replacements.append((ich, (n,)))
825 825 return repo[n], replacements
826 826
827 827 @action(['base', 'b'],
828 828 _('checkout changeset and apply further changesets from there'))
829 829 class base(histeditaction):
830 830
831 831 def run(self):
832 832 if self.repo['.'].node() != self.node:
833 mergemod.update(self.repo, self.node, False, True)
834 # branchmerge, force)
833 mergemod.update(self.repo, self.node, branchmerge=False, force=True)
835 834 return self.continueclean()
836 835
837 836 def continuedirty(self):
838 837 abortdirty()
839 838
840 839 def continueclean(self):
841 840 basectx = self.repo['.']
842 841 return basectx, []
843 842
844 843 def _verifynodeconstraints(self, prev, expected, seen):
845 844 # base can only be used with a node not in the edited set
846 845 if self.node in expected:
847 846 msg = _('%s "%s" changeset was an edited list candidate')
848 847 raise error.ParseError(
849 848 msg % (self.verb, node.short(self.node)),
850 849 hint=_('base must only use unlisted changesets'))
851 850
852 851 @action(['_multifold'],
853 852 _(
854 853 """fold subclass used for when multiple folds happen in a row
855 854
856 855 We only want to fire the editor for the folded message once when
857 856 (say) four changes are folded down into a single change. This is
858 857 similar to rollup, but we should preserve both messages so that
859 858 when the last fold operation runs we can show the user all the
860 859 commit messages in their editor.
861 860 """),
862 861 internal=True)
863 862 class _multifold(fold):
864 863 def skipprompt(self):
865 864 return True
866 865
867 866 @action(["roll", "r"],
868 867 _("like fold, but discard this commit's description and date"))
869 868 class rollup(fold):
870 869 def mergedescs(self):
871 870 return False
872 871
873 872 def skipprompt(self):
874 873 return True
875 874
876 875 def firstdate(self):
877 876 return True
878 877
879 878 @action(["drop", "d"],
880 879 _('remove commit from history'))
881 880 class drop(histeditaction):
882 881 def run(self):
883 882 parentctx = self.repo[self.state.parentctxnode]
884 883 return parentctx, [(self.node, tuple())]
885 884
886 885 @action(["mess", "m"],
887 886 _('edit commit message without changing commit content'),
888 887 priority=True)
889 888 class message(histeditaction):
890 889 def commiteditor(self):
891 890 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
892 891
893 892 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
894 893 """utility function to find the first outgoing changeset
895 894
896 895 Used by initialization code"""
897 896 if opts is None:
898 897 opts = {}
899 898 dest = ui.expandpath(remote or 'default-push', remote or 'default')
900 899 dest, branches = hg.parseurl(dest, None)[:2]
901 900 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
902 901
903 902 revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
904 903 other = hg.peer(repo, opts, dest)
905 904
906 905 if revs:
907 906 revs = [repo.lookup(rev) for rev in revs]
908 907
909 908 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
910 909 if not outgoing.missing:
911 910 raise error.Abort(_('no outgoing ancestors'))
912 911 roots = list(repo.revs("roots(%ln)", outgoing.missing))
913 912 if len(roots) > 1:
914 913 msg = _('there are ambiguous outgoing revisions')
915 914 hint = _("see 'hg help histedit' for more detail")
916 915 raise error.Abort(msg, hint=hint)
917 916 return repo[roots[0]].node()
918 917
919 918 @command('histedit',
920 919 [('', 'commands', '',
921 920 _('read history edits from the specified file'), _('FILE')),
922 921 ('c', 'continue', False, _('continue an edit already in progress')),
923 922 ('', 'edit-plan', False, _('edit remaining actions list')),
924 923 ('k', 'keep', False,
925 924 _("don't strip old nodes after edit is complete")),
926 925 ('', 'abort', False, _('abort an edit in progress')),
927 926 ('o', 'outgoing', False, _('changesets not found in destination')),
928 927 ('f', 'force', False,
929 928 _('force outgoing even for unrelated repositories')),
930 929 ('r', 'rev', [], _('first revision to be edited'), _('REV'))] +
931 930 cmdutil.formatteropts,
932 931 _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"),
933 932 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
934 933 def histedit(ui, repo, *freeargs, **opts):
935 934 """interactively edit changeset history
936 935
937 936 This command lets you edit a linear series of changesets (up to
938 937 and including the working directory, which should be clean).
939 938 You can:
940 939
941 940 - `pick` to [re]order a changeset
942 941
943 942 - `drop` to omit changeset
944 943
945 944 - `mess` to reword the changeset commit message
946 945
947 946 - `fold` to combine it with the preceding changeset (using the later date)
948 947
949 948 - `roll` like fold, but discarding this commit's description and date
950 949
951 950 - `edit` to edit this changeset (preserving date)
952 951
953 952 - `base` to checkout changeset and apply further changesets from there
954 953
955 954 There are a number of ways to select the root changeset:
956 955
957 956 - Specify ANCESTOR directly
958 957
959 958 - Use --outgoing -- it will be the first linear changeset not
960 959 included in destination. (See :hg:`help config.paths.default-push`)
961 960
962 961 - Otherwise, the value from the "histedit.defaultrev" config option
963 962 is used as a revset to select the base revision when ANCESTOR is not
964 963 specified. The first revision returned by the revset is used. By
965 964 default, this selects the editable history that is unique to the
966 965 ancestry of the working directory.
967 966
968 967 .. container:: verbose
969 968
970 969 If you use --outgoing, this command will abort if there are ambiguous
971 970 outgoing revisions. For example, if there are multiple branches
972 971 containing outgoing revisions.
973 972
974 973 Use "min(outgoing() and ::.)" or similar revset specification
975 974 instead of --outgoing to specify edit target revision exactly in
976 975 such ambiguous situation. See :hg:`help revsets` for detail about
977 976 selecting revisions.
978 977
979 978 .. container:: verbose
980 979
981 980 Examples:
982 981
983 982 - A number of changes have been made.
984 983 Revision 3 is no longer needed.
985 984
986 985 Start history editing from revision 3::
987 986
988 987 hg histedit -r 3
989 988
990 989 An editor opens, containing the list of revisions,
991 990 with specific actions specified::
992 991
993 992 pick 5339bf82f0ca 3 Zworgle the foobar
994 993 pick 8ef592ce7cc4 4 Bedazzle the zerlog
995 994 pick 0a9639fcda9d 5 Morgify the cromulancy
996 995
997 996 Additional information about the possible actions
998 997 to take appears below the list of revisions.
999 998
1000 999 To remove revision 3 from the history,
1001 1000 its action (at the beginning of the relevant line)
1002 1001 is changed to 'drop'::
1003 1002
1004 1003 drop 5339bf82f0ca 3 Zworgle the foobar
1005 1004 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1006 1005 pick 0a9639fcda9d 5 Morgify the cromulancy
1007 1006
1008 1007 - A number of changes have been made.
1009 1008 Revision 2 and 4 need to be swapped.
1010 1009
1011 1010 Start history editing from revision 2::
1012 1011
1013 1012 hg histedit -r 2
1014 1013
1015 1014 An editor opens, containing the list of revisions,
1016 1015 with specific actions specified::
1017 1016
1018 1017 pick 252a1af424ad 2 Blorb a morgwazzle
1019 1018 pick 5339bf82f0ca 3 Zworgle the foobar
1020 1019 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1021 1020
1022 1021 To swap revision 2 and 4, its lines are swapped
1023 1022 in the editor::
1024 1023
1025 1024 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1026 1025 pick 5339bf82f0ca 3 Zworgle the foobar
1027 1026 pick 252a1af424ad 2 Blorb a morgwazzle
1028 1027
1029 1028 Returns 0 on success, 1 if user intervention is required (not only
1030 1029 for intentional "edit" command, but also for resolving unexpected
1031 1030 conflicts).
1032 1031 """
1033 1032 state = histeditstate(repo)
1034 1033 try:
1035 1034 state.wlock = repo.wlock()
1036 1035 state.lock = repo.lock()
1037 1036 _histedit(ui, repo, state, *freeargs, **opts)
1038 1037 finally:
1039 1038 release(state.lock, state.wlock)
1040 1039
1041 1040 goalcontinue = 'continue'
1042 1041 goalabort = 'abort'
1043 1042 goaleditplan = 'edit-plan'
1044 1043 goalnew = 'new'
1045 1044
1046 1045 def _getgoal(opts):
1047 1046 if opts.get('continue'):
1048 1047 return goalcontinue
1049 1048 if opts.get('abort'):
1050 1049 return goalabort
1051 1050 if opts.get('edit_plan'):
1052 1051 return goaleditplan
1053 1052 return goalnew
1054 1053
1055 1054 def _readfile(ui, path):
1056 1055 if path == '-':
1057 1056 with ui.timeblockedsection('histedit'):
1058 1057 return ui.fin.read()
1059 1058 else:
1060 1059 with open(path, 'rb') as f:
1061 1060 return f.read()
1062 1061
1063 1062 def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
1064 1063 # TODO only abort if we try to histedit mq patches, not just
1065 1064 # blanket if mq patches are applied somewhere
1066 1065 mq = getattr(repo, 'mq', None)
1067 1066 if mq and mq.applied:
1068 1067 raise error.Abort(_('source has mq patches applied'))
1069 1068
1070 1069 # basic argument incompatibility processing
1071 1070 outg = opts.get('outgoing')
1072 1071 editplan = opts.get('edit_plan')
1073 1072 abort = opts.get('abort')
1074 1073 force = opts.get('force')
1075 1074 if force and not outg:
1076 1075 raise error.Abort(_('--force only allowed with --outgoing'))
1077 1076 if goal == 'continue':
1078 1077 if any((outg, abort, revs, freeargs, rules, editplan)):
1079 1078 raise error.Abort(_('no arguments allowed with --continue'))
1080 1079 elif goal == 'abort':
1081 1080 if any((outg, revs, freeargs, rules, editplan)):
1082 1081 raise error.Abort(_('no arguments allowed with --abort'))
1083 1082 elif goal == 'edit-plan':
1084 1083 if any((outg, revs, freeargs)):
1085 1084 raise error.Abort(_('only --commands argument allowed with '
1086 1085 '--edit-plan'))
1087 1086 else:
1088 1087 if state.inprogress():
1089 1088 raise error.Abort(_('history edit already in progress, try '
1090 1089 '--continue or --abort'))
1091 1090 if outg:
1092 1091 if revs:
1093 1092 raise error.Abort(_('no revisions allowed with --outgoing'))
1094 1093 if len(freeargs) > 1:
1095 1094 raise error.Abort(
1096 1095 _('only one repo argument allowed with --outgoing'))
1097 1096 else:
1098 1097 revs.extend(freeargs)
1099 1098 if len(revs) == 0:
1100 1099 defaultrev = destutil.desthistedit(ui, repo)
1101 1100 if defaultrev is not None:
1102 1101 revs.append(defaultrev)
1103 1102
1104 1103 if len(revs) != 1:
1105 1104 raise error.Abort(
1106 1105 _('histedit requires exactly one ancestor revision'))
1107 1106
1108 1107 def _histedit(ui, repo, state, *freeargs, **opts):
1109 1108 opts = pycompat.byteskwargs(opts)
1110 1109 fm = ui.formatter('histedit', opts)
1111 1110 fm.startitem()
1112 1111 goal = _getgoal(opts)
1113 1112 revs = opts.get('rev', [])
1114 1113 # experimental config: ui.history-editing-backup
1115 1114 nobackup = not ui.configbool('ui', 'history-editing-backup')
1116 1115 rules = opts.get('commands', '')
1117 1116 state.keep = opts.get('keep', False)
1118 1117
1119 1118 _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)
1120 1119
1121 1120 # rebuild state
1122 1121 if goal == goalcontinue:
1123 1122 state.read()
1124 1123 state = bootstrapcontinue(ui, state, opts)
1125 1124 elif goal == goaleditplan:
1126 1125 _edithisteditplan(ui, repo, state, rules)
1127 1126 return
1128 1127 elif goal == goalabort:
1129 1128 _aborthistedit(ui, repo, state, nobackup=nobackup)
1130 1129 return
1131 1130 else:
1132 1131 # goal == goalnew
1133 1132 _newhistedit(ui, repo, state, revs, freeargs, opts)
1134 1133
1135 1134 _continuehistedit(ui, repo, state)
1136 1135 _finishhistedit(ui, repo, state, fm)
1137 1136 fm.end()
1138 1137
1139 1138 def _continuehistedit(ui, repo, state):
1140 1139 """This function runs after either:
1141 1140 - bootstrapcontinue (if the goal is 'continue')
1142 1141 - _newhistedit (if the goal is 'new')
1143 1142 """
1144 1143 # preprocess rules so that we can hide inner folds from the user
1145 1144 # and only show one editor
1146 1145 actions = state.actions[:]
1147 1146 for idx, (action, nextact) in enumerate(
1148 1147 zip(actions, actions[1:] + [None])):
1149 1148 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1150 1149 state.actions[idx].__class__ = _multifold
1151 1150
1152 1151 # Force an initial state file write, so the user can run --abort/continue
1153 1152 # even if there's an exception before the first transaction serialize.
1154 1153 state.write()
1155 1154
1156 1155 tr = None
1157 1156 # Don't use singletransaction by default since it rolls the entire
1158 1157 # transaction back if an unexpected exception happens (like a
1159 1158 # pretxncommit hook throws, or the user aborts the commit msg editor).
1160 1159 if ui.configbool("histedit", "singletransaction"):
1161 1160 # Don't use a 'with' for the transaction, since actions may close
1162 1161 # and reopen a transaction. For example, if the action executes an
1163 1162 # external process it may choose to commit the transaction first.
1164 1163 tr = repo.transaction('histedit')
1165 1164 progress = ui.makeprogress(_("editing"), unit=_('changes'),
1166 1165 total=len(state.actions))
1167 1166 with progress, util.acceptintervention(tr):
1168 1167 while state.actions:
1169 1168 state.write(tr=tr)
1170 1169 actobj = state.actions[0]
1171 1170 progress.increment(item=actobj.torule())
1172 1171 ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
1173 1172 actobj.torule()))
1174 1173 parentctx, replacement_ = actobj.run()
1175 1174 state.parentctxnode = parentctx.node()
1176 1175 state.replacements.extend(replacement_)
1177 1176 state.actions.pop(0)
1178 1177
1179 1178 state.write()
1180 1179
1181 1180 def _finishhistedit(ui, repo, state, fm):
1182 1181 """This action runs when histedit is finishing its session"""
1183 1182 hg.updaterepo(repo, state.parentctxnode, overwrite=False)
1184 1183
1185 1184 mapping, tmpnodes, created, ntm = processreplacement(state)
1186 1185 if mapping:
1187 1186 for prec, succs in mapping.iteritems():
1188 1187 if not succs:
1189 1188 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1190 1189 else:
1191 1190 ui.debug('histedit: %s is replaced by %s\n' % (
1192 1191 node.short(prec), node.short(succs[0])))
1193 1192 if len(succs) > 1:
1194 1193 m = 'histedit: %s'
1195 1194 for n in succs[1:]:
1196 1195 ui.debug(m % node.short(n))
1197 1196
1198 1197 if not state.keep:
1199 1198 if mapping:
1200 1199 movetopmostbookmarks(repo, state.topmost, ntm)
1201 1200 # TODO update mq state
1202 1201 else:
1203 1202 mapping = {}
1204 1203
1205 1204 for n in tmpnodes:
1206 1205 if n in repo:
1207 1206 mapping[n] = ()
1208 1207
1209 1208 # remove entries about unknown nodes
1210 1209 nodemap = repo.unfiltered().changelog.nodemap
1211 1210 mapping = {k: v for k, v in mapping.items()
1212 1211 if k in nodemap and all(n in nodemap for n in v)}
1213 1212 scmutil.cleanupnodes(repo, mapping, 'histedit')
1214 1213 hf = fm.hexfunc
1215 1214 fl = fm.formatlist
1216 1215 fd = fm.formatdict
1217 1216 nodechanges = fd({hf(oldn): fl([hf(n) for n in newn], name='node')
1218 1217 for oldn, newn in mapping.iteritems()},
1219 1218 key="oldnode", value="newnodes")
1220 1219 fm.data(nodechanges=nodechanges)
1221 1220
1222 1221 state.clear()
1223 1222 if os.path.exists(repo.sjoin('undo')):
1224 1223 os.unlink(repo.sjoin('undo'))
1225 1224 if repo.vfs.exists('histedit-last-edit.txt'):
1226 1225 repo.vfs.unlink('histedit-last-edit.txt')
1227 1226
1228 1227 def _aborthistedit(ui, repo, state, nobackup=False):
1229 1228 try:
1230 1229 state.read()
1231 1230 __, leafs, tmpnodes, __ = processreplacement(state)
1232 1231 ui.debug('restore wc to old parent %s\n'
1233 1232 % node.short(state.topmost))
1234 1233
1235 1234 # Recover our old commits if necessary
1236 1235 if not state.topmost in repo and state.backupfile:
1237 1236 backupfile = repo.vfs.join(state.backupfile)
1238 1237 f = hg.openpath(ui, backupfile)
1239 1238 gen = exchange.readbundle(ui, f, backupfile)
1240 1239 with repo.transaction('histedit.abort') as tr:
1241 1240 bundle2.applybundle(repo, gen, tr, source='histedit',
1242 1241 url='bundle:' + backupfile)
1243 1242
1244 1243 os.remove(backupfile)
1245 1244
1246 1245 # check whether we should update away
1247 1246 if repo.unfiltered().revs('parents() and (%n or %ln::)',
1248 1247 state.parentctxnode, leafs | tmpnodes):
1249 1248 hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
1250 1249 cleanupnode(ui, repo, tmpnodes, nobackup=nobackup)
1251 1250 cleanupnode(ui, repo, leafs, nobackup=nobackup)
1252 1251 except Exception:
1253 1252 if state.inprogress():
1254 1253 ui.warn(_('warning: encountered an exception during histedit '
1255 1254 '--abort; the repository may not have been completely '
1256 1255 'cleaned up\n'))
1257 1256 raise
1258 1257 finally:
1259 1258 state.clear()
1260 1259
1261 1260 def _edithisteditplan(ui, repo, state, rules):
1262 1261 state.read()
1263 1262 if not rules:
1264 1263 comment = geteditcomment(ui,
1265 1264 node.short(state.parentctxnode),
1266 1265 node.short(state.topmost))
1267 1266 rules = ruleeditor(repo, ui, state.actions, comment)
1268 1267 else:
1269 1268 rules = _readfile(ui, rules)
1270 1269 actions = parserules(rules, state)
1271 1270 ctxs = [repo[act.node] \
1272 1271 for act in state.actions if act.node]
1273 1272 warnverifyactions(ui, repo, actions, state, ctxs)
1274 1273 state.actions = actions
1275 1274 state.write()
1276 1275
1277 1276 def _newhistedit(ui, repo, state, revs, freeargs, opts):
1278 1277 outg = opts.get('outgoing')
1279 1278 rules = opts.get('commands', '')
1280 1279 force = opts.get('force')
1281 1280
1282 1281 cmdutil.checkunfinished(repo)
1283 1282 cmdutil.bailifchanged(repo)
1284 1283
1285 1284 topmost, empty = repo.dirstate.parents()
1286 1285 if outg:
1287 1286 if freeargs:
1288 1287 remote = freeargs[0]
1289 1288 else:
1290 1289 remote = None
1291 1290 root = findoutgoing(ui, repo, remote, force, opts)
1292 1291 else:
1293 1292 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1294 1293 if len(rr) != 1:
1295 1294 raise error.Abort(_('The specified revisions must have '
1296 1295 'exactly one common root'))
1297 1296 root = rr[0].node()
1298 1297
1299 1298 revs = between(repo, root, topmost, state.keep)
1300 1299 if not revs:
1301 1300 raise error.Abort(_('%s is not an ancestor of working directory') %
1302 1301 node.short(root))
1303 1302
1304 1303 ctxs = [repo[r] for r in revs]
1305 1304 if not rules:
1306 1305 comment = geteditcomment(ui, node.short(root), node.short(topmost))
1307 1306 actions = [pick(state, r) for r in revs]
1308 1307 rules = ruleeditor(repo, ui, actions, comment)
1309 1308 else:
1310 1309 rules = _readfile(ui, rules)
1311 1310 actions = parserules(rules, state)
1312 1311 warnverifyactions(ui, repo, actions, state, ctxs)
1313 1312
1314 1313 parentctxnode = repo[root].parents()[0].node()
1315 1314
1316 1315 state.parentctxnode = parentctxnode
1317 1316 state.actions = actions
1318 1317 state.topmost = topmost
1319 1318 state.replacements = []
1320 1319
1321 1320 ui.log("histedit", "%d actions to histedit", len(actions),
1322 1321 histedit_num_actions=len(actions))
1323 1322
1324 1323 # Create a backup so we can always abort completely.
1325 1324 backupfile = None
1326 1325 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1327 1326 backupfile = repair.backupbundle(repo, [parentctxnode],
1328 1327 [topmost], root, 'histedit')
1329 1328 state.backupfile = backupfile
1330 1329
1331 1330 def _getsummary(ctx):
1332 1331 # a common pattern is to extract the summary but default to the empty
1333 1332 # string
1334 1333 summary = ctx.description() or ''
1335 1334 if summary:
1336 1335 summary = summary.splitlines()[0]
1337 1336 return summary
1338 1337
1339 1338 def bootstrapcontinue(ui, state, opts):
1340 1339 repo = state.repo
1341 1340
1342 1341 ms = mergemod.mergestate.read(repo)
1343 1342 mergeutil.checkunresolved(ms)
1344 1343
1345 1344 if state.actions:
1346 1345 actobj = state.actions.pop(0)
1347 1346
1348 1347 if _isdirtywc(repo):
1349 1348 actobj.continuedirty()
1350 1349 if _isdirtywc(repo):
1351 1350 abortdirty()
1352 1351
1353 1352 parentctx, replacements = actobj.continueclean()
1354 1353
1355 1354 state.parentctxnode = parentctx.node()
1356 1355 state.replacements.extend(replacements)
1357 1356
1358 1357 return state
1359 1358
1360 1359 def between(repo, old, new, keep):
1361 1360 """select and validate the set of revision to edit
1362 1361
1363 1362 When keep is false, the specified set can't have children."""
1364 1363 revs = repo.revs('%n::%n', old, new)
1365 1364 if revs and not keep:
1366 1365 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1367 1366 repo.revs('(%ld::) - (%ld)', revs, revs)):
1368 1367 raise error.Abort(_('can only histedit a changeset together '
1369 1368 'with all its descendants'))
1370 1369 if repo.revs('(%ld) and merge()', revs):
1371 1370 raise error.Abort(_('cannot edit history that contains merges'))
1372 1371 root = repo[revs.first()] # list is already sorted by repo.revs()
1373 1372 if not root.mutable():
1374 1373 raise error.Abort(_('cannot edit public changeset: %s') % root,
1375 1374 hint=_("see 'hg help phases' for details"))
1376 1375 return pycompat.maplist(repo.changelog.node, revs)
1377 1376
1378 1377 def ruleeditor(repo, ui, actions, editcomment=""):
1379 1378 """open an editor to edit rules
1380 1379
1381 1380 rules are in the format [ [act, ctx], ...] like in state.rules
1382 1381 """
1383 1382 if repo.ui.configbool("experimental", "histedit.autoverb"):
1384 1383 newact = util.sortdict()
1385 1384 for act in actions:
1386 1385 ctx = repo[act.node]
1387 1386 summary = _getsummary(ctx)
1388 1387 fword = summary.split(' ', 1)[0].lower()
1389 1388 added = False
1390 1389
1391 1390 # if it doesn't end with the special character '!' just skip this
1392 1391 if fword.endswith('!'):
1393 1392 fword = fword[:-1]
1394 1393 if fword in primaryactions | secondaryactions | tertiaryactions:
1395 1394 act.verb = fword
1396 1395 # get the target summary
1397 1396 tsum = summary[len(fword) + 1:].lstrip()
1398 1397 # safe but slow: reverse iterate over the actions so we
1399 1398 # don't clash on two commits having the same summary
1400 1399 for na, l in reversed(list(newact.iteritems())):
1401 1400 actx = repo[na.node]
1402 1401 asum = _getsummary(actx)
1403 1402 if asum == tsum:
1404 1403 added = True
1405 1404 l.append(act)
1406 1405 break
1407 1406
1408 1407 if not added:
1409 1408 newact[act] = []
1410 1409
1411 1410 # copy over and flatten the new list
1412 1411 actions = []
1413 1412 for na, l in newact.iteritems():
1414 1413 actions.append(na)
1415 1414 actions += l
1416 1415
1417 1416 rules = '\n'.join([act.torule() for act in actions])
1418 1417 rules += '\n\n'
1419 1418 rules += editcomment
1420 1419 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
1421 1420 repopath=repo.path, action='histedit')
1422 1421
1423 1422 # Save edit rules in .hg/histedit-last-edit.txt in case
1424 1423 # the user needs to ask for help after something
1425 1424 # surprising happens.
1426 1425 with repo.vfs('histedit-last-edit.txt', 'wb') as f:
1427 1426 f.write(rules)
1428 1427
1429 1428 return rules
1430 1429
1431 1430 def parserules(rules, state):
1432 1431 """Read the histedit rules string and return list of action objects """
1433 1432 rules = [l for l in (r.strip() for r in rules.splitlines())
1434 1433 if l and not l.startswith('#')]
1435 1434 actions = []
1436 1435 for r in rules:
1437 1436 if ' ' not in r:
1438 1437 raise error.ParseError(_('malformed line "%s"') % r)
1439 1438 verb, rest = r.split(' ', 1)
1440 1439
1441 1440 if verb not in actiontable:
1442 1441 raise error.ParseError(_('unknown action "%s"') % verb)
1443 1442
1444 1443 action = actiontable[verb].fromrule(state, rest)
1445 1444 actions.append(action)
1446 1445 return actions
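
# Editor's sketch (hashes from the module docstring): feeding parserules()
# the text
#     pick c561b4e977df Add beta
#     fold 7c2fd3b9020c Add delta
# yields two action objects, a pick and a fold, each looked up in
# actiontable by its verb and bound to the named changeset.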
1447 1446
1448 1447 def warnverifyactions(ui, repo, actions, state, ctxs):
1449 1448 try:
1450 1449 verifyactions(actions, state, ctxs)
1451 1450 except error.ParseError:
1452 1451 if repo.vfs.exists('histedit-last-edit.txt'):
1453 1452 ui.warn(_('warning: histedit rules saved '
1454 1453 'to: .hg/histedit-last-edit.txt\n'))
1455 1454 raise
1456 1455
1457 1456 def verifyactions(actions, state, ctxs):
1458 1457 """Verify that there exists exactly one action per given changeset and
1459 1458 other constraints.
1460 1459
1461 1460 Will abort if there are too many or too few rules, a malformed rule,
1462 1461 or a rule on a changeset outside of the user-given range.
1463 1462 """
1464 1463 expected = set(c.node() for c in ctxs)
1465 1464 seen = set()
1466 1465 prev = None
1467 1466
1468 1467 if actions and actions[0].verb in ['roll', 'fold']:
1469 1468 raise error.ParseError(_('first changeset cannot use verb "%s"') %
1470 1469 actions[0].verb)
1471 1470
1472 1471 for action in actions:
1473 1472 action.verify(prev, expected, seen)
1474 1473 prev = action
1475 1474 if action.node is not None:
1476 1475 seen.add(action.node)
1477 1476 missing = sorted(expected - seen) # sort to stabilize output
1478 1477
1479 1478 if state.repo.ui.configbool('histedit', 'dropmissing'):
1480 1479 if len(actions) == 0:
1481 1480 raise error.ParseError(_('no rules provided'),
1482 1481 hint=_('use strip extension to remove commits'))
1483 1482
1484 1483 drops = [drop(state, n) for n in missing]
1485 1484 # put them at the beginning so they execute immediately and
1486 1485 # don't show in the edit-plan in the future
1487 1486 actions[:0] = drops
1488 1487 elif missing:
1489 1488 raise error.ParseError(_('missing rules for changeset %s') %
1490 1489 node.short(missing[0]),
1491 1490 hint=_('use "drop %s" to discard, see also: '
1492 1491 "'hg help -e histedit.config'")
1493 1492 % node.short(missing[0]))
1494 1493
1495 1494 def adjustreplacementsfrommarkers(repo, oldreplacements):
1496 1495 """Adjust replacements from obsolescence markers
1497 1496
1498 1497 Replacements structure is originally generated based on
1499 1498 histedit's state and does not account for changes that are
1500 1499 not recorded there. This function fixes that by adding
1501 1500 data read from obsolescence markers"""
1502 1501 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1503 1502 return oldreplacements
1504 1503
1505 1504 unfi = repo.unfiltered()
1506 1505 nm = unfi.changelog.nodemap
1507 1506 obsstore = repo.obsstore
1508 1507 newreplacements = list(oldreplacements)
1509 1508 oldsuccs = [r[1] for r in oldreplacements]
1510 1509 # successors that have already been added to succstocheck once
1511 1510 seensuccs = set().union(*oldsuccs) # create a set from an iterable of tuples
1512 1511 succstocheck = list(seensuccs)
1513 1512 while succstocheck:
1514 1513 n = succstocheck.pop()
1515 1514 missing = nm.get(n) is None
1516 1515 markers = obsstore.successors.get(n, ())
1517 1516 if missing and not markers:
1518 1517 # dead end, mark it as such
1519 1518 newreplacements.append((n, ()))
1520 1519 for marker in markers:
1521 1520 nsuccs = marker[1]
1522 1521 newreplacements.append((n, nsuccs))
1523 1522 for nsucc in nsuccs:
1524 1523 if nsucc not in seensuccs:
1525 1524 seensuccs.add(nsucc)
1526 1525 succstocheck.append(nsucc)
1527 1526
1528 1527 return newreplacements
1529 1528
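A simplified, self-contained sketch of the successor walk above, with the obsstore and the changelog nodemap replaced by plain containers; the helper name and its arguments are illustrative only::

    def extend_replacements(oldreplacements, marker_successors, known_nodes):
        # marker_successors: {node: [tuple_of_successor_nodes, ...]}
        # known_nodes: set of nodes present in the changelog
        newreplacements = list(oldreplacements)
        seen = set().union(*(s for _, s in oldreplacements))
        tocheck = list(seen)
        while tocheck:
            n = tocheck.pop()
            markers = marker_successors.get(n, ())
            if n not in known_nodes and not markers:
                newreplacements.append((n, ()))       # dead end
            for succs in markers:
                newreplacements.append((n, succs))
                for s in succs:
                    if s not in seen:                 # walk each successor once
                        seen.add(s)
                        tocheck.append(s)
        return newreplacements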
1530 1529 def processreplacement(state):
1531 1530 """process the list of replacements to return
1532 1531
1533 1532 1) the final mapping between original and created nodes
1534 1533 2) the list of temporary nodes created by histedit
1535 1534 3) the list of new commits created by histedit"""
1536 1535 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
1537 1536 allsuccs = set()
1538 1537 replaced = set()
1539 1538 fullmapping = {}
1540 1539 # initialize basic set
1541 1540 # fullmapping records all operations recorded in replacement
1542 1541 for rep in replacements:
1543 1542 allsuccs.update(rep[1])
1544 1543 replaced.add(rep[0])
1545 1544 fullmapping.setdefault(rep[0], set()).update(rep[1])
1546 1545 new = allsuccs - replaced
1547 1546 tmpnodes = allsuccs & replaced
1548 1547 # Reduce fullmapping into a direct relation between original nodes
1549 1548 # and the final nodes created during history editing
1550 1549 # Dropped changesets are replaced by an empty list
1551 1550 toproceed = set(fullmapping)
1552 1551 final = {}
1553 1552 while toproceed:
1554 1553 for x in list(toproceed):
1555 1554 succs = fullmapping[x]
1556 1555 for s in list(succs):
1557 1556 if s in toproceed:
1558 1557 # non final node with unknown closure
1559 1558 # We can't process this now
1560 1559 break
1561 1560 elif s in final:
1562 1561 # non final node, replace with closure
1563 1562 succs.remove(s)
1564 1563 succs.update(final[s])
1565 1564 else:
1566 1565 final[x] = succs
1567 1566 toproceed.remove(x)
1568 1567 # remove tmpnodes from final mapping
1569 1568 for n in tmpnodes:
1570 1569 del final[n]
1571 1570 # we expect all changes involved in final to exist in the repo
1572 1571 # turn `final` into list (topologically sorted)
1573 1572 nm = state.repo.changelog.nodemap
1574 1573 for prec, succs in final.items():
1575 1574 final[prec] = sorted(succs, key=nm.get)
1576 1575
1577 1576 # computed topmost element (necessary for bookmark)
1578 1577 if new:
1579 1578 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1580 1579 elif not final:
1581 1580 # Nothing rewritten at all. We won't need `newtopmost`
1582 1581 # It is the same as `oldtopmost` and `processreplacement` knows it
1583 1582 newtopmost = None
1584 1583 else:
1585 1584 # everybody died. The newtopmost is the parent of the root.
1586 1585 r = state.repo.changelog.rev
1587 1586 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1588 1587
1589 1588 return final, tmpnodes, new, newtopmost
1590 1589
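The closure loop above is easier to see on a toy input; a stripped-down sketch of the same reduction, ignoring the topological sort and the newtopmost computation (helper names invented for illustration)::

    def reduce_replacements(replacements):
        # replacements: iterable of (oldnode, tuple_of_successors)
        fullmapping = {}
        allsuccs, replaced = set(), set()
        for old, succs in replacements:
            allsuccs.update(succs)
            replaced.add(old)
            fullmapping.setdefault(old, set()).update(succs)
        tmpnodes = allsuccs & replaced        # rewritten again later: temporary
        newnodes = allsuccs - replaced        # survive the edit

        def closure(node, seen=frozenset()):
            # chase chains like A -> B -> C until only surviving nodes remain
            out = set()
            for s in fullmapping.get(node, ()):
                if s in fullmapping and s not in seen:
                    out |= closure(s, seen | {s})
                else:
                    out.add(s)
            return out

        final = {old: closure(old) for old in fullmapping if old not in tmpnodes}
        return final, tmpnodes, newnodes

    # reduce_replacements([('A', ('B',)), ('B', ('C',)), ('X', ())])
    # -> ({'A': {'C'}, 'X': set()}, {'B'}, {'C'})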
1591 1590 def movetopmostbookmarks(repo, oldtopmost, newtopmost):
1592 1591 """Move bookmark from oldtopmost to newly created topmost
1593 1592
1594 1593 This is arguably a feature and we may only want that for the active
1595 1594 bookmark. But the behavior is kept compatible with the old version for now.
1596 1595 """
1597 1596 if not oldtopmost or not newtopmost:
1598 1597 return
1599 1598 oldbmarks = repo.nodebookmarks(oldtopmost)
1600 1599 if oldbmarks:
1601 1600 with repo.lock(), repo.transaction('histedit') as tr:
1602 1601 marks = repo._bookmarks
1603 1602 changes = []
1604 1603 for name in oldbmarks:
1605 1604 changes.append((name, newtopmost))
1606 1605 marks.applychanges(repo, tr, changes)
1607 1606
1608 1607 def cleanupnode(ui, repo, nodes, nobackup=False):
1609 1608 """strip a group of nodes from the repository
1610 1609
1611 1610 The set of nodes to strip may contain unknown nodes."""
1612 1611 with repo.lock():
1613 1612 # do not let filtering get in the way of the cleanse
1614 1613 # we should probably get rid of obsolescence markers created during the
1615 1614 # histedit, but we currently do not have such information.
1616 1615 repo = repo.unfiltered()
1617 1616 # Find all nodes that need to be stripped
1618 1617 # (we use %lr instead of %ln to silently ignore unknown items)
1619 1618 nm = repo.changelog.nodemap
1620 1619 nodes = sorted(n for n in nodes if n in nm)
1621 1620 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1622 1621 if roots:
1623 1622 backup = not nobackup
1624 1623 repair.strip(ui, repo, roots, backup=backup)
1625 1624
1626 1625 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1627 1626 if isinstance(nodelist, str):
1628 1627 nodelist = [nodelist]
1629 1628 state = histeditstate(repo)
1630 1629 if state.inprogress():
1631 1630 state.read()
1632 1631 histedit_nodes = {action.node for action
1633 1632 in state.actions if action.node}
1634 1633 common_nodes = histedit_nodes & set(nodelist)
1635 1634 if common_nodes:
1636 1635 raise error.Abort(_("histedit in progress, can't strip %s")
1637 1636 % ', '.join(node.short(x) for x in common_nodes))
1638 1637 return orig(ui, repo, nodelist, *args, **kwargs)
1639 1638
1640 1639 extensions.wrapfunction(repair, 'strip', stripwrapper)
1641 1640
1642 1641 def summaryhook(ui, repo):
1643 1642 state = histeditstate(repo)
1644 1643 if not state.inprogress():
1645 1644 return
1646 1645 state.read()
1647 1646 if state.actions:
1648 1647 # i18n: column positioning for "hg summary"
1649 1648 ui.write(_('hist: %s (histedit --continue)\n') %
1650 1649 (ui.label(_('%d remaining'), 'histedit.remaining') %
1651 1650 len(state.actions)))
1652 1651
1653 1652 def extsetup(ui):
1654 1653 cmdutil.summaryhooks.add('histedit', summaryhook)
1655 1654 cmdutil.unfinishedstates.append(
1656 1655 ['histedit-state', False, True, _('histedit in progress'),
1657 1656 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1658 1657 cmdutil.afterresolvedstates.append(
1659 1658 ['histedit-state', _('hg histedit --continue')])
@@ -1,1947 +1,1948 b''
1 1 # rebase.py - rebasing feature for mercurial
2 2 #
3 3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to move sets of revisions to a different ancestor
9 9
10 10 This extension lets you rebase changesets in an existing Mercurial
11 11 repository.
12 12
13 13 For more information:
14 14 https://mercurial-scm.org/wiki/RebaseExtension
15 15 '''
16 16
17 17 from __future__ import absolute_import
18 18
19 19 import errno
20 20 import os
21 21
22 22 from mercurial.i18n import _
23 23 from mercurial.node import (
24 24 nullrev,
25 25 short,
26 26 )
27 27 from mercurial import (
28 28 bookmarks,
29 29 cmdutil,
30 30 commands,
31 31 copies,
32 32 destutil,
33 33 dirstateguard,
34 34 error,
35 35 extensions,
36 36 hg,
37 37 merge as mergemod,
38 38 mergeutil,
39 39 obsolete,
40 40 obsutil,
41 41 patch,
42 42 phases,
43 43 pycompat,
44 44 registrar,
45 45 repair,
46 46 revset,
47 47 revsetlang,
48 48 scmutil,
49 49 smartset,
50 50 state as statemod,
51 51 util,
52 52 )
53 53
54 54 # The following constants are used throughout the rebase module. The ordering of
55 55 # their values must be maintained.
56 56
57 57 # Indicates that a revision needs to be rebased
58 58 revtodo = -1
59 59 revtodostr = '-1'
60 60
61 61 # legacy revstates no longer needed in current code
62 62 # -2: nullmerge, -3: revignored, -4: revprecursor, -5: revpruned
63 63 legacystates = {'-2', '-3', '-4', '-5'}
64 64
65 65 cmdtable = {}
66 66 command = registrar.command(cmdtable)
67 67 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
68 68 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
69 69 # be specifying the version(s) of Mercurial they are tested with, or
70 70 # leave the attribute unspecified.
71 71 testedwith = 'ships-with-hg-core'
72 72
73 73 def _nothingtorebase():
74 74 return 1
75 75
76 76 def _savegraft(ctx, extra):
77 77 s = ctx.extra().get('source', None)
78 78 if s is not None:
79 79 extra['source'] = s
80 80 s = ctx.extra().get('intermediate-source', None)
81 81 if s is not None:
82 82 extra['intermediate-source'] = s
83 83
84 84 def _savebranch(ctx, extra):
85 85 extra['branch'] = ctx.branch()
86 86
87 87 def _destrebase(repo, sourceset, destspace=None):
88 88 """small wrapper around destmerge to pass the right extra args
89 89
90 90 Please wrap destutil.destmerge instead."""
91 91 return destutil.destmerge(repo, action='rebase', sourceset=sourceset,
92 92 onheadcheck=False, destspace=destspace)
93 93
94 94 revsetpredicate = registrar.revsetpredicate()
95 95
96 96 @revsetpredicate('_destrebase')
97 97 def _revsetdestrebase(repo, subset, x):
98 98 # ``_rebasedefaultdest()``
99 99
100 100 # default destination for rebase.
101 101 # # XXX: Currently private because I expect the signature to change.
102 102 # # XXX: - bailing out in case of ambiguity vs returning all data.
103 103 # i18n: "_rebasedefaultdest" is a keyword
104 104 sourceset = None
105 105 if x is not None:
106 106 sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
107 107 return subset & smartset.baseset([_destrebase(repo, sourceset)])
108 108
109 109 @revsetpredicate('_destautoorphanrebase')
110 110 def _revsetdestautoorphanrebase(repo, subset, x):
111 111 """automatic rebase destination for a single orphan revision"""
112 112 unfi = repo.unfiltered()
113 113 obsoleted = unfi.revs('obsolete()')
114 114
115 115 src = revset.getset(repo, subset, x).first()
116 116
117 117 # Empty src or already obsoleted - Do not return a destination
118 118 if not src or src in obsoleted:
119 119 return smartset.baseset()
120 120 dests = destutil.orphanpossibledestination(repo, src)
121 121 if len(dests) > 1:
122 122 raise error.Abort(
123 123 _("ambiguous automatic rebase: %r could end up on any of %r") % (
124 124 src, dests))
125 125 # We have zero or one destination, so we can just return here.
126 126 return smartset.baseset(dests)
127 127
128 128 def _ctxdesc(ctx):
129 129 """short description for a context"""
130 130 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
131 131 ctx.description().split('\n', 1)[0])
132 132 repo = ctx.repo()
133 133 names = []
134 134 for nsname, ns in repo.names.iteritems():
135 135 if nsname == 'branches':
136 136 continue
137 137 names.extend(ns.names(repo, ctx.node()))
138 138 if names:
139 139 desc += ' (%s)' % ' '.join(names)
140 140 return desc
141 141
142 142 class rebaseruntime(object):
143 143 """This class is a container for rebase runtime state"""
144 144 def __init__(self, repo, ui, inmemory=False, opts=None):
145 145 if opts is None:
146 146 opts = {}
147 147
148 148 # prepared: whether we have rebasestate prepared or not. Currently it
149 149 # decides whether "self.repo" is unfiltered or not.
150 150 # The rebasestate has explicit hash to hash instructions not depending
151 151 # on visibility. If rebasestate exists (in-memory or on-disk), use
152 152 # unfiltered repo to avoid visibility issues.
153 153 # Before knowing rebasestate (i.e. when starting a new rebase (not
154 154 # --continue or --abort)), the original repo should be used so
155 155 # visibility-dependent revsets are correct.
156 156 self.prepared = False
157 157 self._repo = repo
158 158
159 159 self.ui = ui
160 160 self.opts = opts
161 161 self.originalwd = None
162 162 self.external = nullrev
163 163 # Mapping between the old revision id and either the new rebased
164 164 # revision or what needs to be done with the old revision. The state
165 165 # dict contains most of the rebase progress state.
166 166 self.state = {}
167 167 self.activebookmark = None
168 168 self.destmap = {}
169 169 self.skipped = set()
170 170
171 171 self.collapsef = opts.get('collapse', False)
172 172 self.collapsemsg = cmdutil.logmessage(ui, opts)
173 173 self.date = opts.get('date', None)
174 174
175 175 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
176 176 self.extrafns = [_savegraft]
177 177 if e:
178 178 self.extrafns = [e]
179 179
180 180 self.backupf = ui.configbool('ui', 'history-editing-backup')
181 181 self.keepf = opts.get('keep', False)
182 182 self.keepbranchesf = opts.get('keepbranches', False)
183 183 self.obsoletenotrebased = {}
184 184 self.obsoletewithoutsuccessorindestination = set()
185 185 self.inmemory = inmemory
186 186 self.stateobj = statemod.cmdstate(repo, 'rebasestate')
187 187
188 188 @property
189 189 def repo(self):
190 190 if self.prepared:
191 191 return self._repo.unfiltered()
192 192 else:
193 193 return self._repo
194 194
195 195 def storestatus(self, tr=None):
196 196 """Store the current status to allow recovery"""
197 197 if tr:
198 198 tr.addfilegenerator('rebasestate', ('rebasestate',),
199 199 self._writestatus, location='plain')
200 200 else:
201 201 with self.repo.vfs("rebasestate", "w") as f:
202 202 self._writestatus(f)
203 203
204 204 def _writestatus(self, f):
205 205 repo = self.repo
206 206 assert repo.filtername is None
207 207 f.write(repo[self.originalwd].hex() + '\n')
208 208 # was "dest". we now write dest per src root below.
209 209 f.write('\n')
210 210 f.write(repo[self.external].hex() + '\n')
211 211 f.write('%d\n' % int(self.collapsef))
212 212 f.write('%d\n' % int(self.keepf))
213 213 f.write('%d\n' % int(self.keepbranchesf))
214 214 f.write('%s\n' % (self.activebookmark or ''))
215 215 destmap = self.destmap
216 216 for d, v in self.state.iteritems():
217 217 oldrev = repo[d].hex()
218 218 if v >= 0:
219 219 newrev = repo[v].hex()
220 220 else:
221 221 newrev = "%d" % v
222 222 destnode = repo[destmap[d]].hex()
223 223 f.write("%s:%s:%s\n" % (oldrev, newrev, destnode))
224 224 repo.ui.debug('rebase status stored\n')
225 225
226 226 def restorestatus(self):
227 227 """Restore a previously stored status"""
228 228 if not self.stateobj.exists():
229 229 cmdutil.wrongtooltocontinue(self.repo, _('rebase'))
230 230
231 231 data = self._read()
232 232 self.repo.ui.debug('rebase status resumed\n')
233 233
234 234 self.originalwd = data['originalwd']
235 235 self.destmap = data['destmap']
236 236 self.state = data['state']
237 237 self.skipped = data['skipped']
238 238 self.collapsef = data['collapse']
239 239 self.keepf = data['keep']
240 240 self.keepbranchesf = data['keepbranches']
241 241 self.external = data['external']
242 242 self.activebookmark = data['activebookmark']
243 243
244 244 def _read(self):
245 245 self.prepared = True
246 246 repo = self.repo
247 247 assert repo.filtername is None
248 248 data = {'keepbranches': None, 'collapse': None, 'activebookmark': None,
249 249 'external': nullrev, 'keep': None, 'originalwd': None}
250 250 legacydest = None
251 251 state = {}
252 252 destmap = {}
253 253
254 254 if True:
255 255 f = repo.vfs("rebasestate")
256 256 for i, l in enumerate(f.read().splitlines()):
257 257 if i == 0:
258 258 data['originalwd'] = repo[l].rev()
259 259 elif i == 1:
260 260 # this line should be empty in newer versions, but legacy
261 261 # clients may still use it
262 262 if l:
263 263 legacydest = repo[l].rev()
264 264 elif i == 2:
265 265 data['external'] = repo[l].rev()
266 266 elif i == 3:
267 267 data['collapse'] = bool(int(l))
268 268 elif i == 4:
269 269 data['keep'] = bool(int(l))
270 270 elif i == 5:
271 271 data['keepbranches'] = bool(int(l))
272 272 elif i == 6 and not (len(l) == 81 and ':' in l):
273 273 # line 6 is a recent addition, so for backwards
274 274 # compatibility check that the line doesn't look like the
275 275 # oldrev:newrev lines
276 276 data['activebookmark'] = l
277 277 else:
278 278 args = l.split(':')
279 279 oldrev = repo[args[0]].rev()
280 280 newrev = args[1]
281 281 if newrev in legacystates:
282 282 continue
283 283 if len(args) > 2:
284 284 destrev = repo[args[2]].rev()
285 285 else:
286 286 destrev = legacydest
287 287 destmap[oldrev] = destrev
288 288 if newrev == revtodostr:
289 289 state[oldrev] = revtodo
290 290 # Legacy compat special case
291 291 else:
292 292 state[oldrev] = repo[newrev].rev()
293 293
294 294 if data['keepbranches'] is None:
295 295 raise error.Abort(_('.hg/rebasestate is incomplete'))
296 296
297 297 data['destmap'] = destmap
298 298 data['state'] = state
299 299 skipped = set()
300 300 # recompute the set of skipped revs
301 301 if not data['collapse']:
302 302 seen = set(destmap.values())
303 303 for old, new in sorted(state.items()):
304 304 if new != revtodo and new in seen:
305 305 skipped.add(old)
306 306 seen.add(new)
307 307 data['skipped'] = skipped
308 308 repo.ui.debug('computed skipped revs: %s\n' %
309 309 (' '.join('%d' % r for r in sorted(skipped)) or ''))
310 310
311 311 return data
312 312
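For reference, the per-revision lines consumed in the final else branch above are written by the current _writestatus() as "oldnode:newnode:destnode" hex triples, with "-1" meaning "still to be rebased" and -2..-5 being legacy states. A hypothetical parser for a single such line, with repository lookups left out (names are illustrative)::

    REVTODO = '-1'                        # revision still needs rebasing
    LEGACYSTATES = {'-2', '-3', '-4', '-5'}

    def parse_state_line(line):
        oldnode, newnode, destnode = line.split(':')
        if newnode in LEGACYSTATES:
            return None                   # ignored by current versions
        return {'old': oldnode,
                'new': None if newnode == REVTODO else newnode,
                'dest': destnode,
                'todo': newnode == REVTODO}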
313 313 def _handleskippingobsolete(self, obsoleterevs, destmap):
314 314 """Compute structures necessary for skipping obsolete revisions
315 315
316 316 obsoleterevs: iterable of all obsolete revisions in rebaseset
317 317 destmap: {srcrev: destrev} destination revisions
318 318 """
319 319 self.obsoletenotrebased = {}
320 320 if not self.ui.configbool('experimental', 'rebaseskipobsolete'):
321 321 return
322 322 obsoleteset = set(obsoleterevs)
323 323 (self.obsoletenotrebased,
324 324 self.obsoletewithoutsuccessorindestination,
325 325 obsoleteextinctsuccessors) = _computeobsoletenotrebased(
326 326 self.repo, obsoleteset, destmap)
327 327 skippedset = set(self.obsoletenotrebased)
328 328 skippedset.update(self.obsoletewithoutsuccessorindestination)
329 329 skippedset.update(obsoleteextinctsuccessors)
330 330 _checkobsrebase(self.repo, self.ui, obsoleteset, skippedset)
331 331
332 332 def _prepareabortorcontinue(self, isabort, backup=True, suppwarns=False):
333 333 try:
334 334 self.restorestatus()
335 335 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
336 336 except error.RepoLookupError:
337 337 if isabort:
338 338 clearstatus(self.repo)
339 339 clearcollapsemsg(self.repo)
340 340 self.repo.ui.warn(_('rebase aborted (no revision is removed,'
341 341 ' only broken state is cleared)\n'))
342 342 return 0
343 343 else:
344 344 msg = _('cannot continue inconsistent rebase')
345 345 hint = _('use "hg rebase --abort" to clear broken state')
346 346 raise error.Abort(msg, hint=hint)
347 347
348 348 if isabort:
349 349 backup = backup and self.backupf
350 350 return abort(self.repo, self.originalwd, self.destmap, self.state,
351 351 activebookmark=self.activebookmark, backup=backup,
352 352 suppwarns=suppwarns)
353 353
354 354 def _preparenewrebase(self, destmap):
355 355 if not destmap:
356 356 return _nothingtorebase()
357 357
358 358 rebaseset = destmap.keys()
359 359 allowunstable = obsolete.isenabled(self.repo, obsolete.allowunstableopt)
360 360 if (not (self.keepf or allowunstable)
361 361 and self.repo.revs('first(children(%ld) - %ld)',
362 362 rebaseset, rebaseset)):
363 363 raise error.Abort(
364 364 _("can't remove original changesets with"
365 365 " unrebased descendants"),
366 366 hint=_('use --keep to keep original changesets'))
367 367
368 368 result = buildstate(self.repo, destmap, self.collapsef)
369 369
370 370 if not result:
371 371 # Empty state built, nothing to rebase
372 372 self.ui.status(_('nothing to rebase\n'))
373 373 return _nothingtorebase()
374 374
375 375 for root in self.repo.set('roots(%ld)', rebaseset):
376 376 if not self.keepf and not root.mutable():
377 377 raise error.Abort(_("can't rebase public changeset %s")
378 378 % root,
379 379 hint=_("see 'hg help phases' for details"))
380 380
381 381 (self.originalwd, self.destmap, self.state) = result
382 382 if self.collapsef:
383 383 dests = set(self.destmap.values())
384 384 if len(dests) != 1:
385 385 raise error.Abort(
386 386 _('--collapse does not work with multiple destinations'))
387 387 destrev = next(iter(dests))
388 388 destancestors = self.repo.changelog.ancestors([destrev],
389 389 inclusive=True)
390 390 self.external = externalparent(self.repo, self.state, destancestors)
391 391
392 392 for destrev in sorted(set(destmap.values())):
393 393 dest = self.repo[destrev]
394 394 if dest.closesbranch() and not self.keepbranchesf:
395 395 self.ui.status(_('reopening closed branch head %s\n') % dest)
396 396
397 397 self.prepared = True
398 398
399 399 def _assignworkingcopy(self):
400 400 if self.inmemory:
401 401 from mercurial.context import overlayworkingctx
402 402 self.wctx = overlayworkingctx(self.repo)
403 403 self.repo.ui.debug("rebasing in-memory\n")
404 404 else:
405 405 self.wctx = self.repo[None]
406 406 self.repo.ui.debug("rebasing on disk\n")
407 407 self.repo.ui.log("rebase", "", rebase_imm_used=self.inmemory)
408 408
409 409 def _performrebase(self, tr):
410 410 self._assignworkingcopy()
411 411 repo, ui = self.repo, self.ui
412 412 if self.keepbranchesf:
413 413 # insert _savebranch at the start of extrafns so if
414 414 # there's a user-provided extrafn it can clobber branch if
415 415 # desired
416 416 self.extrafns.insert(0, _savebranch)
417 417 if self.collapsef:
418 418 branches = set()
419 419 for rev in self.state:
420 420 branches.add(repo[rev].branch())
421 421 if len(branches) > 1:
422 422 raise error.Abort(_('cannot collapse multiple named '
423 423 'branches'))
424 424
425 425 # Calculate self.obsoletenotrebased
426 426 obsrevs = _filterobsoleterevs(self.repo, self.state)
427 427 self._handleskippingobsolete(obsrevs, self.destmap)
428 428
429 429 # Keep track of the active bookmarks in order to reset them later
430 430 self.activebookmark = self.activebookmark or repo._activebookmark
431 431 if self.activebookmark:
432 432 bookmarks.deactivate(repo)
433 433
434 434 # Store the state before we begin so users can run 'hg rebase --abort'
435 435 # if we fail before the transaction closes.
436 436 self.storestatus()
437 437 if tr:
438 438 # When using single transaction, store state when transaction
439 439 # commits.
440 440 self.storestatus(tr)
441 441
442 442 cands = [k for k, v in self.state.iteritems() if v == revtodo]
443 443 p = repo.ui.makeprogress(_("rebasing"), unit=_('changesets'),
444 444 total=len(cands))
445 445 def progress(ctx):
446 446 p.increment(item=("%d:%s" % (ctx.rev(), ctx)))
447 447 allowdivergence = self.ui.configbool(
448 448 'experimental', 'evolution.allowdivergence')
449 449 for subset in sortsource(self.destmap):
450 450 sortedrevs = self.repo.revs('sort(%ld, -topo)', subset)
451 451 if not allowdivergence:
452 452 sortedrevs -= self.repo.revs(
453 453 'descendants(%ld) and not %ld',
454 454 self.obsoletewithoutsuccessorindestination,
455 455 self.obsoletewithoutsuccessorindestination,
456 456 )
457 457 for rev in sortedrevs:
458 458 self._rebasenode(tr, rev, allowdivergence, progress)
459 459 p.complete()
460 460 ui.note(_('rebase merging completed\n'))
461 461
462 462 def _concludenode(self, rev, p1, p2, editor, commitmsg=None):
463 463 '''Commit the wd changes with parents p1 and p2.
464 464
465 465 Reuse commit info from rev but also store useful information in extra.
466 466 Return node of committed revision.'''
467 467 repo = self.repo
468 468 ctx = repo[rev]
469 469 if commitmsg is None:
470 470 commitmsg = ctx.description()
471 471 date = self.date
472 472 if date is None:
473 473 date = ctx.date()
474 474 extra = {'rebase_source': ctx.hex()}
475 475 for c in self.extrafns:
476 476 c(ctx, extra)
477 477 keepbranch = self.keepbranchesf and repo[p1].branch() != ctx.branch()
478 478 destphase = max(ctx.phase(), phases.draft)
479 479 overrides = {('phases', 'new-commit'): destphase}
480 480 if keepbranch:
481 481 overrides[('ui', 'allowemptycommit')] = True
482 482 with repo.ui.configoverride(overrides, 'rebase'):
483 483 if self.inmemory:
484 484 newnode = commitmemorynode(repo, p1, p2,
485 485 wctx=self.wctx,
486 486 extra=extra,
487 487 commitmsg=commitmsg,
488 488 editor=editor,
489 489 user=ctx.user(),
490 490 date=date)
491 491 mergemod.mergestate.clean(repo)
492 492 else:
493 493 newnode = commitnode(repo, p1, p2,
494 494 extra=extra,
495 495 commitmsg=commitmsg,
496 496 editor=editor,
497 497 user=ctx.user(),
498 498 date=date)
499 499
500 500 if newnode is None:
501 501 # If it ended up being a no-op commit, then the normal
502 502 # merge state clean-up path doesn't happen, so do it
503 503 # here. Fix issue5494
504 504 mergemod.mergestate.clean(repo)
505 505 return newnode
506 506
507 507 def _rebasenode(self, tr, rev, allowdivergence, progressfn):
508 508 repo, ui, opts = self.repo, self.ui, self.opts
509 509 dest = self.destmap[rev]
510 510 ctx = repo[rev]
511 511 desc = _ctxdesc(ctx)
512 512 if self.state[rev] == rev:
513 513 ui.status(_('already rebased %s\n') % desc)
514 514 elif (not allowdivergence
515 515 and rev in self.obsoletewithoutsuccessorindestination):
516 516 msg = _('note: not rebasing %s and its descendants as '
517 517 'this would cause divergence\n') % desc
518 518 repo.ui.status(msg)
519 519 self.skipped.add(rev)
520 520 elif rev in self.obsoletenotrebased:
521 521 succ = self.obsoletenotrebased[rev]
522 522 if succ is None:
523 523 msg = _('note: not rebasing %s, it has no '
524 524 'successor\n') % desc
525 525 else:
526 526 succdesc = _ctxdesc(repo[succ])
527 527 msg = (_('note: not rebasing %s, already in '
528 528 'destination as %s\n') % (desc, succdesc))
529 529 repo.ui.status(msg)
530 530 # Make clearrebased aware state[rev] is not a true successor
531 531 self.skipped.add(rev)
532 532 # Record rev as moved to its desired destination in self.state.
533 533 # This helps bookmark and working parent movement.
534 534 dest = max(adjustdest(repo, rev, self.destmap, self.state,
535 535 self.skipped))
536 536 self.state[rev] = dest
537 537 elif self.state[rev] == revtodo:
538 538 ui.status(_('rebasing %s\n') % desc)
539 539 progressfn(ctx)
540 540 p1, p2, base = defineparents(repo, rev, self.destmap,
541 541 self.state, self.skipped,
542 542 self.obsoletenotrebased)
543 543 if len(repo[None].parents()) == 2:
544 544 repo.ui.debug('resuming interrupted rebase\n')
545 545 else:
546 546 overrides = {('ui', 'forcemerge'): opts.get('tool', '')}
547 547 with ui.configoverride(overrides, 'rebase'):
548 548 stats = rebasenode(repo, rev, p1, base, self.collapsef,
549 549 dest, wctx=self.wctx)
550 550 if stats.unresolvedcount > 0:
551 551 if self.inmemory:
552 552 raise error.InMemoryMergeConflictsError()
553 553 else:
554 554 raise error.InterventionRequired(
555 555 _('unresolved conflicts (see hg '
556 556 'resolve, then hg rebase --continue)'))
557 557 if not self.collapsef:
558 558 merging = p2 != nullrev
559 559 editform = cmdutil.mergeeditform(merging, 'rebase')
560 560 editor = cmdutil.getcommiteditor(editform=editform,
561 561 **pycompat.strkwargs(opts))
562 562 newnode = self._concludenode(rev, p1, p2, editor)
563 563 else:
564 564 # Skip commit if we are collapsing
565 565 if self.inmemory:
566 566 self.wctx.setbase(repo[p1])
567 567 else:
568 568 repo.setparents(repo[p1].node())
569 569 newnode = None
570 570 # Update the state
571 571 if newnode is not None:
572 572 self.state[rev] = repo[newnode].rev()
573 573 ui.debug('rebased as %s\n' % short(newnode))
574 574 else:
575 575 if not self.collapsef:
576 576 ui.warn(_('note: rebase of %d:%s created no changes '
577 577 'to commit\n') % (rev, ctx))
578 578 self.skipped.add(rev)
579 579 self.state[rev] = p1
580 580 ui.debug('next revision set to %d\n' % p1)
581 581 else:
582 582 ui.status(_('already rebased %s as %s\n') %
583 583 (desc, repo[self.state[rev]]))
584 584 if not tr:
585 585 # When not using single transaction, store state after each
586 586 # commit is completely done. On InterventionRequired, we thus
587 587 # won't store the status. Instead, we'll hit the "len(parents) == 2"
588 588 # case and realize that the commit was in progress.
589 589 self.storestatus()
590 590
591 591 def _finishrebase(self):
592 592 repo, ui, opts = self.repo, self.ui, self.opts
593 593 fm = ui.formatter('rebase', opts)
594 594 fm.startitem()
595 595 if self.collapsef:
596 596 p1, p2, _base = defineparents(repo, min(self.state), self.destmap,
597 597 self.state, self.skipped,
598 598 self.obsoletenotrebased)
599 599 editopt = opts.get('edit')
600 600 editform = 'rebase.collapse'
601 601 if self.collapsemsg:
602 602 commitmsg = self.collapsemsg
603 603 else:
604 604 commitmsg = 'Collapsed revision'
605 605 for rebased in sorted(self.state):
606 606 if rebased not in self.skipped:
607 607 commitmsg += '\n* %s' % repo[rebased].description()
608 608 editopt = True
609 609 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
610 610 revtoreuse = max(self.state)
611 611
612 612 newnode = self._concludenode(revtoreuse, p1, self.external,
613 613 editor, commitmsg=commitmsg)
614 614
615 615 if newnode is not None:
616 616 newrev = repo[newnode].rev()
617 617 for oldrev in self.state:
618 618 self.state[oldrev] = newrev
619 619
620 620 if 'qtip' in repo.tags():
621 621 updatemq(repo, self.state, self.skipped,
622 622 **pycompat.strkwargs(opts))
623 623
624 624 # restore original working directory
625 625 # (we do this before stripping)
626 626 newwd = self.state.get(self.originalwd, self.originalwd)
627 627 if newwd < 0:
628 628 # original directory is a parent of rebase set root or ignored
629 629 newwd = self.originalwd
630 630 if newwd not in [c.rev() for c in repo[None].parents()]:
631 631 ui.note(_("update back to initial working directory parent\n"))
632 632 hg.updaterepo(repo, newwd, overwrite=False)
633 633
634 634 collapsedas = None
635 635 if self.collapsef and not self.keepf:
636 636 collapsedas = newnode
637 637 clearrebased(ui, repo, self.destmap, self.state, self.skipped,
638 638 collapsedas, self.keepf, fm=fm, backup=self.backupf)
639 639
640 640 clearstatus(repo)
641 641 clearcollapsemsg(repo)
642 642
643 643 ui.note(_("rebase completed\n"))
644 644 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
645 645 if self.skipped:
646 646 skippedlen = len(self.skipped)
647 647 ui.note(_("%d revisions have been skipped\n") % skippedlen)
648 648 fm.end()
649 649
650 650 if (self.activebookmark and self.activebookmark in repo._bookmarks and
651 651 repo['.'].node() == repo._bookmarks[self.activebookmark]):
652 652 bookmarks.activate(repo, self.activebookmark)
653 653
654 654 @command('rebase',
655 655 [('s', 'source', '',
656 656 _('rebase the specified changeset and descendants'), _('REV')),
657 657 ('b', 'base', '',
658 658 _('rebase everything from branching point of specified changeset'),
659 659 _('REV')),
660 660 ('r', 'rev', [],
661 661 _('rebase these revisions'),
662 662 _('REV')),
663 663 ('d', 'dest', '',
664 664 _('rebase onto the specified changeset'), _('REV')),
665 665 ('', 'collapse', False, _('collapse the rebased changesets')),
666 666 ('m', 'message', '',
667 667 _('use text as collapse commit message'), _('TEXT')),
668 668 ('e', 'edit', False, _('invoke editor on commit messages')),
669 669 ('l', 'logfile', '',
670 670 _('read collapse commit message from file'), _('FILE')),
671 671 ('k', 'keep', False, _('keep original changesets')),
672 672 ('', 'keepbranches', False, _('keep original branch names')),
673 673 ('D', 'detach', False, _('(DEPRECATED)')),
674 674 ('i', 'interactive', False, _('(DEPRECATED)')),
675 675 ('t', 'tool', '', _('specify merge tool')),
676 676 ('', 'stop', False, _('stop interrupted rebase')),
677 677 ('c', 'continue', False, _('continue an interrupted rebase')),
678 678 ('a', 'abort', False, _('abort an interrupted rebase')),
679 679 ('', 'auto-orphans', '', _('automatically rebase orphan revisions '
680 680 'in the specified revset (EXPERIMENTAL)')),
681 681 ] + cmdutil.dryrunopts + cmdutil.formatteropts + cmdutil.confirmopts,
682 682 _('[-s REV | -b REV] [-d REV] [OPTION]'),
683 683 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
684 684 def rebase(ui, repo, **opts):
685 685 """move changeset (and descendants) to a different branch
686 686
687 687 Rebase uses repeated merging to graft changesets from one part of
688 688 history (the source) onto another (the destination). This can be
689 689 useful for linearizing *local* changes relative to a master
690 690 development tree.
691 691
692 692 Published commits cannot be rebased (see :hg:`help phases`).
693 693 To copy commits, see :hg:`help graft`.
694 694
695 695 If you don't specify a destination changeset (``-d/--dest``), rebase
696 696 will use the same logic as :hg:`merge` to pick a destination. If
697 697 the current branch contains exactly one other head, the other head
698 698 is merged with by default. Otherwise, an explicit revision with
699 699 which to merge must be provided. (The destination changeset is not
700 700 modified by rebasing, but new changesets are added as its
701 701 descendants.)
702 702
703 703 Here are the ways to select changesets:
704 704
705 705 1. Explicitly select them using ``--rev``.
706 706
707 707 2. Use ``--source`` to select a root changeset and include all of its
708 708 descendants.
709 709
710 710 3. Use ``--base`` to select a changeset; rebase will find ancestors
711 711 and their descendants which are not also ancestors of the destination.
712 712
713 713 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
714 714 rebase will use ``--base .`` as above.
715 715
716 716 If ``--source`` or ``--rev`` is used, special names ``SRC`` and ``ALLSRC``
717 717 can be used in ``--dest``. Destination would be calculated per source
718 718 revision with ``SRC`` substituted by that single source revision and
719 719 ``ALLSRC`` substituted by all source revisions.
720 720
721 721 Rebase will destroy original changesets unless you use ``--keep``.
722 722 It will also move your bookmarks (even if you do).
723 723
724 724 Some changesets may be dropped if they do not contribute changes
725 725 (e.g. merges from the destination branch).
726 726
727 727 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
728 728 a named branch with two heads. You will need to explicitly specify source
729 729 and/or destination.
730 730
731 731 If you need to use a tool to automate merge/conflict decisions, you
732 732 can specify one with ``--tool``, see :hg:`help merge-tools`.
733 733 As a caveat: the tool will not be used to mediate when a file was
734 734 deleted; there is no hook presently available for this.
735 735
736 736 If a rebase is interrupted to manually resolve a conflict, it can be
737 737 continued with --continue/-c, aborted with --abort/-a, or stopped with
738 738 --stop.
739 739
740 740 .. container:: verbose
741 741
742 742 Examples:
743 743
744 744 - move "local changes" (current commit back to branching point)
745 745 to the current branch tip after a pull::
746 746
747 747 hg rebase
748 748
749 749 - move a single changeset to the stable branch::
750 750
751 751 hg rebase -r 5f493448 -d stable
752 752
753 753 - splice a commit and all its descendants onto another part of history::
754 754
755 755 hg rebase --source c0c3 --dest 4cf9
756 756
757 757 - rebase everything on a branch marked by a bookmark onto the
758 758 default branch::
759 759
760 760 hg rebase --base myfeature --dest default
761 761
762 762 - collapse a sequence of changes into a single commit::
763 763
764 764 hg rebase --collapse -r 1520:1525 -d .
765 765
766 766 - move a named branch while preserving its name::
767 767
768 768 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
769 769
770 770 - stabilize orphaned changesets so history looks linear::
771 771
772 772 hg rebase -r 'orphan()-obsolete()'\
773 773 -d 'first(max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::) +\
774 774 max(::((roots(ALLSRC) & ::SRC)^)-obsolete()))'
775 775
776 776 Configuration Options:
777 777
778 778 You can make rebase require a destination if you set the following config
779 779 option::
780 780
781 781 [commands]
782 782 rebase.requiredest = True
783 783
784 784 By default, rebase will close the transaction after each commit. For
785 785 performance purposes, you can configure rebase to use a single transaction
786 786 across the entire rebase. WARNING: This setting introduces a significant
787 787 risk of losing the work you've done in a rebase if the rebase aborts
788 788 unexpectedly::
789 789
790 790 [rebase]
791 791 singletransaction = True
792 792
793 793 By default, rebase writes to the working copy, but you can configure it to
794 794 run in-memory for better performance, and to allow it to run if the
795 795 working copy is dirty::
796 796
797 797 [rebase]
798 798 experimental.inmemory = True
799 799
800 800 Return Values:
801 801
802 802 Returns 0 on success, 1 if nothing to rebase or there are
803 803 unresolved conflicts.
804 804
805 805 """
806 806 opts = pycompat.byteskwargs(opts)
807 807 inmemory = ui.configbool('rebase', 'experimental.inmemory')
808 808 dryrun = opts.get('dry_run')
809 809 confirm = opts.get('confirm')
810 810 selactions = [k for k in ['abort', 'stop', 'continue'] if opts.get(k)]
811 811 if len(selactions) > 1:
812 812 raise error.Abort(_('cannot use --%s with --%s')
813 813 % tuple(selactions[:2]))
814 814 action = selactions[0] if selactions else None
815 815 if dryrun and action:
816 816 raise error.Abort(_('cannot specify both --dry-run and --%s') % action)
817 817 if confirm and action:
818 818 raise error.Abort(_('cannot specify both --confirm and --%s') % action)
819 819 if dryrun and confirm:
820 820 raise error.Abort(_('cannot specify both --confirm and --dry-run'))
821 821
822 822 if action or repo.currenttransaction() is not None:
823 823 # in-memory rebase is not compatible with resuming rebases.
824 824 # (Or if it is run within a transaction, since the restart logic can
825 825 # fail the entire transaction.)
826 826 inmemory = False
827 827
828 828 if opts.get('auto_orphans'):
829 829 for key in opts:
830 830 if key != 'auto_orphans' and opts.get(key):
831 831 raise error.Abort(_('--auto-orphans is incompatible with %s') %
832 832 ('--' + key))
833 833 userrevs = list(repo.revs(opts.get('auto_orphans')))
834 834 opts['rev'] = [revsetlang.formatspec('%ld and orphan()', userrevs)]
835 835 opts['dest'] = '_destautoorphanrebase(SRC)'
836 836
837 837 if dryrun or confirm:
838 838 return _dryrunrebase(ui, repo, action, opts)
839 839 elif action == 'stop':
840 840 rbsrt = rebaseruntime(repo, ui)
841 841 with repo.wlock(), repo.lock():
842 842 rbsrt.restorestatus()
843 843 if rbsrt.collapsef:
844 844 raise error.Abort(_("cannot stop in --collapse session"))
845 845 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
846 846 if not (rbsrt.keepf or allowunstable):
847 847 raise error.Abort(_("cannot remove original changesets with"
848 848 " unrebased descendants"),
849 849 hint=_('either enable obsmarkers to allow unstable '
850 850 'revisions or use --keep to keep original '
851 851 'changesets'))
852 852 if needupdate(repo, rbsrt.state):
853 853 # update to the current working revision
854 854 # to clear interrupted merge
855 855 hg.updaterepo(repo, rbsrt.originalwd, overwrite=True)
856 856 rbsrt._finishrebase()
857 857 return 0
858 858 elif inmemory:
859 859 try:
860 860 # in-memory merge doesn't support conflicts, so if we hit any, abort
861 861 # and re-run as an on-disk merge.
862 862 overrides = {('rebase', 'singletransaction'): True}
863 863 with ui.configoverride(overrides, 'rebase'):
864 864 return _dorebase(ui, repo, action, opts, inmemory=inmemory)
865 865 except error.InMemoryMergeConflictsError:
866 866 ui.warn(_('hit merge conflicts; re-running rebase without in-memory'
867 867 ' merge\n'))
868 868 _dorebase(ui, repo, action='abort', opts={})
869 869 return _dorebase(ui, repo, action, opts, inmemory=False)
870 870 else:
871 871 return _dorebase(ui, repo, action, opts)
872 872
873 873 def _dryrunrebase(ui, repo, action, opts):
874 874 rbsrt = rebaseruntime(repo, ui, inmemory=True, opts=opts)
875 875 confirm = opts.get('confirm')
876 876 if confirm:
877 877 ui.status(_('starting in-memory rebase\n'))
878 878 else:
879 879 ui.status(_('starting dry-run rebase; repository will not be '
880 880 'changed\n'))
881 881 with repo.wlock(), repo.lock():
882 882 needsabort = True
883 883 try:
884 884 overrides = {('rebase', 'singletransaction'): True}
885 885 with ui.configoverride(overrides, 'rebase'):
886 886 _origrebase(ui, repo, action, opts, rbsrt, inmemory=True,
887 887 leaveunfinished=True)
888 888 except error.InMemoryMergeConflictsError:
889 889 ui.status(_('hit a merge conflict\n'))
890 890 return 1
891 891 else:
892 892 if confirm:
893 893 ui.status(_('rebase completed successfully\n'))
894 894 if not ui.promptchoice(_(b'apply changes (yn)?'
895 895 b'$$ &Yes $$ &No')):
896 896 # finish unfinished rebase
897 897 rbsrt._finishrebase()
898 898 else:
899 899 rbsrt._prepareabortorcontinue(isabort=True, backup=False,
900 900 suppwarns=True)
901 901 needsabort = False
902 902 else:
903 903 ui.status(_('dry-run rebase completed successfully; run without'
904 904 ' -n/--dry-run to perform this rebase\n'))
905 905 return 0
906 906 finally:
907 907 if needsabort:
908 908 # no need to store backup in case of dryrun
909 909 rbsrt._prepareabortorcontinue(isabort=True, backup=False,
910 910 suppwarns=True)
911 911
912 912 def _dorebase(ui, repo, action, opts, inmemory=False):
913 913 rbsrt = rebaseruntime(repo, ui, inmemory, opts)
914 914 return _origrebase(ui, repo, action, opts, rbsrt, inmemory=inmemory)
915 915
916 916 def _origrebase(ui, repo, action, opts, rbsrt, inmemory=False,
917 917 leaveunfinished=False):
918 918 assert action != 'stop'
919 919 with repo.wlock(), repo.lock():
920 920 # Validate input and define rebasing points
921 921 destf = opts.get('dest', None)
922 922 srcf = opts.get('source', None)
923 923 basef = opts.get('base', None)
924 924 revf = opts.get('rev', [])
925 925 # search default destination in this space
926 926 # used in the 'hg pull --rebase' case, see issue 5214.
927 927 destspace = opts.get('_destspace')
928 928 if opts.get('interactive'):
929 929 try:
930 930 if extensions.find('histedit'):
931 931 enablehistedit = ''
932 932 except KeyError:
933 933 enablehistedit = " --config extensions.histedit="
934 934 help = "hg%s help -e histedit" % enablehistedit
935 935 msg = _("interactive history editing is supported by the "
936 936 "'histedit' extension (see \"%s\")") % help
937 937 raise error.Abort(msg)
938 938
939 939 if rbsrt.collapsemsg and not rbsrt.collapsef:
940 940 raise error.Abort(
941 941 _('message can only be specified with collapse'))
942 942
943 943 if action:
944 944 if rbsrt.collapsef:
945 945 raise error.Abort(
946 946 _('cannot use collapse with continue or abort'))
947 947 if srcf or basef or destf:
948 948 raise error.Abort(
949 949 _('abort and continue do not allow specifying revisions'))
950 950 if action == 'abort' and opts.get('tool', False):
951 951 ui.warn(_('tool option will be ignored\n'))
952 952 if action == 'continue':
953 953 ms = mergemod.mergestate.read(repo)
954 954 mergeutil.checkunresolved(ms)
955 955
956 956 retcode = rbsrt._prepareabortorcontinue(isabort=(action == 'abort'))
957 957 if retcode is not None:
958 958 return retcode
959 959 else:
960 960 destmap = _definedestmap(ui, repo, inmemory, destf, srcf, basef,
961 961 revf, destspace=destspace)
962 962 retcode = rbsrt._preparenewrebase(destmap)
963 963 if retcode is not None:
964 964 return retcode
965 965 storecollapsemsg(repo, rbsrt.collapsemsg)
966 966
967 967 tr = None
968 968
969 969 singletr = ui.configbool('rebase', 'singletransaction')
970 970 if singletr:
971 971 tr = repo.transaction('rebase')
972 972
973 973 # If `rebase.singletransaction` is enabled, wrap the entire operation in
974 974 # one transaction here. Otherwise, transactions are obtained when
975 975 # committing each node, which is slower but allows partial success.
976 976 with util.acceptintervention(tr):
977 977 # Same logic for the dirstate guard, except we don't create one when
978 978 # rebasing in-memory (it's not needed).
979 979 dsguard = None
980 980 if singletr and not inmemory:
981 981 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
982 982 with util.acceptintervention(dsguard):
983 983 rbsrt._performrebase(tr)
984 984 if not leaveunfinished:
985 985 rbsrt._finishrebase()
986 986
987 987 def _definedestmap(ui, repo, inmemory, destf=None, srcf=None, basef=None,
988 988 revf=None, destspace=None):
989 989 """use revisions argument to define destmap {srcrev: destrev}"""
990 990 if revf is None:
991 991 revf = []
992 992
993 993 # destspace is here to work around issues with `hg pull --rebase`; see
994 994 # issue5214 for details
995 995 if srcf and basef:
996 996 raise error.Abort(_('cannot specify both a source and a base'))
997 997 if revf and basef:
998 998 raise error.Abort(_('cannot specify both a revision and a base'))
999 999 if revf and srcf:
1000 1000 raise error.Abort(_('cannot specify both a revision and a source'))
1001 1001
1002 1002 if not inmemory:
1003 1003 cmdutil.checkunfinished(repo)
1004 1004 cmdutil.bailifchanged(repo)
1005 1005
1006 1006 if ui.configbool('commands', 'rebase.requiredest') and not destf:
1007 1007 raise error.Abort(_('you must specify a destination'),
1008 1008 hint=_('use: hg rebase -d REV'))
1009 1009
1010 1010 dest = None
1011 1011
1012 1012 if revf:
1013 1013 rebaseset = scmutil.revrange(repo, revf)
1014 1014 if not rebaseset:
1015 1015 ui.status(_('empty "rev" revision set - nothing to rebase\n'))
1016 1016 return None
1017 1017 elif srcf:
1018 1018 src = scmutil.revrange(repo, [srcf])
1019 1019 if not src:
1020 1020 ui.status(_('empty "source" revision set - nothing to rebase\n'))
1021 1021 return None
1022 1022 rebaseset = repo.revs('(%ld)::', src)
1023 1023 assert rebaseset
1024 1024 else:
1025 1025 base = scmutil.revrange(repo, [basef or '.'])
1026 1026 if not base:
1027 1027 ui.status(_('empty "base" revision set - '
1028 1028 "can't compute rebase set\n"))
1029 1029 return None
1030 1030 if destf:
1031 1031 # --base does not support multiple destinations
1032 1032 dest = scmutil.revsingle(repo, destf)
1033 1033 else:
1034 1034 dest = repo[_destrebase(repo, base, destspace=destspace)]
1035 1035 destf = bytes(dest)
1036 1036
1037 1037 roots = [] # selected children of branching points
1038 1038 bpbase = {} # {branchingpoint: [origbase]}
1039 1039 for b in base: # group bases by branching points
1040 1040 bp = repo.revs('ancestor(%d, %d)', b, dest.rev()).first()
1041 1041 bpbase[bp] = bpbase.get(bp, []) + [b]
1042 1042 if None in bpbase:
1043 1043 # emulate the old behavior, showing "nothing to rebase" (a better
1044 1044 # behavior may be to abort with a "cannot find branching point" error)
1045 1045 bpbase.clear()
1046 1046 for bp, bs in bpbase.iteritems(): # calculate roots
1047 1047 roots += list(repo.revs('children(%d) & ancestors(%ld)', bp, bs))
1048 1048
1049 1049 rebaseset = repo.revs('%ld::', roots)
1050 1050
1051 1051 if not rebaseset:
1052 1052 # transform to list because smartsets are not comparable to
1053 1053 # lists. This should be improved to honor laziness of
1054 1054 # smartset.
1055 1055 if list(base) == [dest.rev()]:
1056 1056 if basef:
1057 1057 ui.status(_('nothing to rebase - %s is both "base"'
1058 1058 ' and destination\n') % dest)
1059 1059 else:
1060 1060 ui.status(_('nothing to rebase - working directory '
1061 1061 'parent is also destination\n'))
1062 1062 elif not repo.revs('%ld - ::%d', base, dest.rev()):
1063 1063 if basef:
1064 1064 ui.status(_('nothing to rebase - "base" %s is '
1065 1065 'already an ancestor of destination '
1066 1066 '%s\n') %
1067 1067 ('+'.join(bytes(repo[r]) for r in base),
1068 1068 dest))
1069 1069 else:
1070 1070 ui.status(_('nothing to rebase - working '
1071 1071 'directory parent is already an '
1072 1072 'ancestor of destination %s\n') % dest)
1073 1073 else: # can it happen?
1074 1074 ui.status(_('nothing to rebase from %s to %s\n') %
1075 1075 ('+'.join(bytes(repo[r]) for r in base), dest))
1076 1076 return None
1077 1077
1078 1078 rebasingwcp = repo['.'].rev() in rebaseset
1079 1079 ui.log("rebase", "", rebase_rebasing_wcp=rebasingwcp)
1080 1080 if inmemory and rebasingwcp:
1081 1081 # Check these since we did not before.
1082 1082 cmdutil.checkunfinished(repo)
1083 1083 cmdutil.bailifchanged(repo)
1084 1084
1085 1085 if not destf:
1086 1086 dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
1087 1087 destf = bytes(dest)
1088 1088
1089 1089 allsrc = revsetlang.formatspec('%ld', rebaseset)
1090 1090 alias = {'ALLSRC': allsrc}
1091 1091
1092 1092 if dest is None:
1093 1093 try:
1094 1094 # fast path: try to resolve dest without SRC alias
1095 1095 dest = scmutil.revsingle(repo, destf, localalias=alias)
1096 1096 except error.RepoLookupError:
1097 1097 # multi-dest path: resolve dest for each SRC separately
1098 1098 destmap = {}
1099 1099 for r in rebaseset:
1100 1100 alias['SRC'] = revsetlang.formatspec('%d', r)
1101 1101 # use repo.anyrevs instead of scmutil.revsingle because we
1102 1102 # don't want to abort if destset is empty.
1103 1103 destset = repo.anyrevs([destf], user=True, localalias=alias)
1104 1104 size = len(destset)
1105 1105 if size == 1:
1106 1106 destmap[r] = destset.first()
1107 1107 elif size == 0:
1108 1108 ui.note(_('skipping %s - empty destination\n') % repo[r])
1109 1109 else:
1110 1110 raise error.Abort(_('rebase destination for %s is not '
1111 1111 'unique') % repo[r])
1112 1112
1113 1113 if dest is not None:
1114 1114 # single-dest case: assign dest to each rev in rebaseset
1115 1115 destrev = dest.rev()
1116 1116 destmap = {r: destrev for r in rebaseset} # {srcrev: destrev}
1117 1117
1118 1118 if not destmap:
1119 1119 ui.status(_('nothing to rebase - empty destination\n'))
1120 1120 return None
1121 1121
1122 1122 return destmap
1123 1123
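A condensed sketch of the multi-destination branch above, with revset evaluation replaced by a caller-supplied resolver; the helper and its arguments are invented for illustration::

    def build_destmap(rebaseset, resolve_dest):
        # resolve_dest(src) plays the role of evaluating --dest with the SRC
        # alias bound to that one source revision; it returns a set of revs.
        destmap = {}
        for src in rebaseset:
            dests = resolve_dest(src)
            if len(dests) == 1:
                destmap[src] = next(iter(dests))
            elif not dests:
                continue                  # skipped: empty destination
            else:
                raise ValueError('rebase destination for %r is not unique'
                                 % src)
        return destmap

    # build_destmap([4, 5], lambda src: {src + 10}) -> {4: 14, 5: 15}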
1124 1124 def externalparent(repo, state, destancestors):
1125 1125 """Return the revision that should be used as the second parent
1126 1126 when the revisions in state are collapsed on top of destancestors.
1127 1127 Abort if there is more than one parent.
1128 1128 """
1129 1129 parents = set()
1130 1130 source = min(state)
1131 1131 for rev in state:
1132 1132 if rev == source:
1133 1133 continue
1134 1134 for p in repo[rev].parents():
1135 1135 if (p.rev() not in state
1136 1136 and p.rev() not in destancestors):
1137 1137 parents.add(p.rev())
1138 1138 if not parents:
1139 1139 return nullrev
1140 1140 if len(parents) == 1:
1141 1141 return parents.pop()
1142 1142 raise error.Abort(_('unable to collapse on top of %d, there is more '
1143 1143 'than one external parent: %s') %
1144 1144 (max(destancestors),
1145 1145 ', '.join("%d" % p for p in sorted(parents))))
1146 1146
1147 1147 def commitmemorynode(repo, p1, p2, wctx, editor, extra, user, date, commitmsg):
1148 1148 '''Commit the memory changes with parents p1 and p2.
1149 1149 Return node of committed revision.'''
1150 1150 # Replicates the empty check in ``repo.commit``.
1151 1151 if wctx.isempty() and not repo.ui.configbool('ui', 'allowemptycommit'):
1152 1152 return None
1153 1153
1154 1154 # By convention, ``extra['branch']`` (set by extrafn) clobbers
1155 1155 # ``branch`` (used when passing ``--keepbranches``).
1156 1156 branch = repo[p1].branch()
1157 1157 if 'branch' in extra:
1158 1158 branch = extra['branch']
1159 1159
1160 1160 memctx = wctx.tomemctx(commitmsg, parents=(p1, p2), date=date,
1161 1161 extra=extra, user=user, branch=branch, editor=editor)
1162 1162 commitres = repo.commitctx(memctx)
1163 1163 wctx.clean() # Might be reused
1164 1164 return commitres
1165 1165
1166 1166 def commitnode(repo, p1, p2, editor, extra, user, date, commitmsg):
1167 1167 '''Commit the wd changes with parents p1 and p2.
1168 1168 Return node of committed revision.'''
1169 1169 dsguard = util.nullcontextmanager()
1170 1170 if not repo.ui.configbool('rebase', 'singletransaction'):
1171 1171 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
1172 1172 with dsguard:
1173 1173 repo.setparents(repo[p1].node(), repo[p2].node())
1174 1174
1175 1175 # Commit might fail if unresolved files exist
1176 1176 newnode = repo.commit(text=commitmsg, user=user, date=date,
1177 1177 extra=extra, editor=editor)
1178 1178
1179 1179 repo.dirstate.setbranch(repo[newnode].branch())
1180 1180 return newnode
1181 1181
1182 1182 def rebasenode(repo, rev, p1, base, collapse, dest, wctx):
1183 1183 'Rebase a single revision rev on top of p1 using base as merge ancestor'
1184 1184 # Merge phase
1185 1185 # Update to destination and merge it with local
1186 1186 if wctx.isinmemory():
1187 1187 wctx.setbase(repo[p1])
1188 1188 else:
1189 1189 if repo['.'].rev() != p1:
1190 1190 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
1191 mergemod.update(repo, p1, False, True)
1191 mergemod.update(repo, p1, branchmerge=False, force=True)
1192 1192 else:
1193 1193 repo.ui.debug(" already in destination\n")
1194 1194 # This is, alas, necessary to invalidate workingctx's manifest cache,
1195 1195 # as well as other data we litter on it in other places.
1196 1196 wctx = repo[None]
1197 1197 repo.dirstate.write(repo.currenttransaction())
1198 1198 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
1199 1199 if base is not None:
1200 1200 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
1201 1201 # When collapsing in-place, the parent is the common ancestor, so we
1202 1202 # have to allow merging with it.
1203 stats = mergemod.update(repo, rev, True, True, base, collapse,
1203 stats = mergemod.update(repo, rev, branchmerge=True, force=True,
1204 ancestor=base, mergeancestor=collapse,
1204 1205 labels=['dest', 'source'], wc=wctx)
1205 1206 if collapse:
1206 1207 copies.duplicatecopies(repo, wctx, rev, dest)
1207 1208 else:
1208 1209 # If we're not using --collapse, we need to
1209 1210 # duplicate copies between the revision we're
1210 1211 # rebasing and its first parent, but *not*
1211 1212 # duplicate any copies that have already been
1212 1213 # performed in the destination.
1213 1214 p1rev = repo[rev].p1().rev()
1214 1215 copies.duplicatecopies(repo, wctx, rev, p1rev, skiprev=dest)
1215 1216 return stats
1216 1217
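The two update() call sites changed above are behaviorally identical before and after; spelling out the keyword names only makes the boolean flags readable at the call site. A generic illustration with a stand-in function (not mergemod's real signature, just the same flag names as the calls above)::

    def update(repo, node, branchmerge, force, ancestor=None,
               mergeancestor=False, labels=None, wc=None):
        return branchmerge, force        # stand-in body for illustration

    # Positional booleans are easy to transpose or misread:
    update('repo', 'p1', False, True)
    # Naming them documents intent where the call is made:
    update('repo', 'p1', branchmerge=False, force=True)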
1217 1218 def adjustdest(repo, rev, destmap, state, skipped):
1218 1219 """adjust rebase destination given the current rebase state
1219 1220
1220 1221 rev is what is being rebased. Return a list of two revs, which are the
1221 1222 adjusted destinations for rev's p1 and p2, respectively. If a parent is
1222 1223 nullrev, return dest without adjustment for it.
1223 1224
1224 1225 For example, when rebasing B+E to F and C to G, rebase will first move B
1225 1226 to B1, and E's destination will be adjusted from F to B1.
1226 1227
1227 1228 B1 <- written during rebasing B
1228 1229 |
1229 1230 F <- original destination of B, E
1230 1231 |
1231 1232 | E <- rev, which is being rebased
1232 1233 | |
1233 1234 | D <- prev, one parent of rev being checked
1234 1235 | |
1235 1236 | x <- skipped, ex. no successor or successor in (::dest)
1236 1237 | |
1237 1238 | C <- rebased as C', different destination
1238 1239 | |
1239 1240 | B <- rebased as B1 C'
1240 1241 |/ |
1241 1242 A G <- destination of C, different
1242 1243
1243 1244 Another example involves a merge changeset: for rebase -r C+G+H -d K, rebase
1244 1245 will first move C to C1 and G to G1, and when it is checking H, the adjusted
1245 1246 destinations will be [C1, G1].
1246 1247
1247 1248 H C1 G1
1248 1249 /| | /
1249 1250 F G |/
1250 1251 K | | -> K
1251 1252 | C D |
1252 1253 | |/ |
1253 1254 | B | ...
1254 1255 |/ |/
1255 1256 A A
1256 1257
1257 1258 Besides, adjust dest according to existing rebase information. For example,
1258 1259
1259 1260 B C D B needs to be rebased on top of C, C needs to be rebased on top
1260 1261 \|/ of D. We will rebase C first.
1261 1262 A
1262 1263
1263 1264 C' After rebasing C, when considering B's destination, use C'
1264 1265 | instead of the original C.
1265 1266 B D
1266 1267 \ /
1267 1268 A
1268 1269 """
1269 1270 # pick already rebased revs with same dest from state as interesting source
1270 1271 dest = destmap[rev]
1271 1272 source = [s for s, d in state.items()
1272 1273 if d > 0 and destmap[s] == dest and s not in skipped]
1273 1274
1274 1275 result = []
1275 1276 for prev in repo.changelog.parentrevs(rev):
1276 1277 adjusted = dest
1277 1278 if prev != nullrev:
1278 1279 candidate = repo.revs('max(%ld and (::%d))', source, prev).first()
1279 1280 if candidate is not None:
1280 1281 adjusted = state[candidate]
1281 1282 if adjusted == dest and dest in state:
1282 1283 adjusted = state[dest]
1283 1284 if adjusted == revtodo:
1284 1285 # sortsource should produce an order that makes this impossible
1285 1286 raise error.ProgrammingError(
1286 1287 'rev %d should be rebased already at this time' % dest)
1287 1288 result.append(adjusted)
1288 1289 return result
1289 1290
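# Illustrative sketch (standalone toy, made-up revision numbers): the "B+E to
# F" example from the docstring above, with A=1, B=2, E=3, F=4 and B already
# rebased to B1=5. The real adjustdest additionally filters candidates by
# changelog ancestry and handles both parents; that part is omitted here.
revtodo = -1
destmap = {2: 4, 3: 4}        # B -> F, E -> F
state = {2: 5, 3: revtodo}    # B was already rebased to B1 (rev 5)

def adjusted_dest(rev):
    dest = destmap[rev]
    done = [s for s, d in state.items() if d > 0 and destmap[s] == dest]
    return state[max(done)] if done else dest

print(adjusted_dest(3))       # 5: E's destination becomes B1 instead of F
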
1290 1291 def _checkobsrebase(repo, ui, rebaseobsrevs, rebaseobsskipped):
1291 1292 """
1292 1293 Abort if the rebase will create divergence or is a no-op because of markers
1293 1294
1294 1295 `rebaseobsrevs`: set of obsolete revision in source
1295 1296 `rebaseobsskipped`: set of revisions from source skipped because they have
1296 1297 successors in destination or no non-obsolete successor.
1297 1298 """
1298 1299 # Obsolete node with successors not in dest leads to divergence
1299 1300 divergenceok = ui.configbool('experimental',
1300 1301 'evolution.allowdivergence')
1301 1302 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
1302 1303
1303 1304 if divergencebasecandidates and not divergenceok:
1304 1305 divhashes = (bytes(repo[r])
1305 1306 for r in divergencebasecandidates)
1306 1307 msg = _("this rebase will cause "
1307 1308 "divergences from: %s")
1308 1309 h = _("to force the rebase please set "
1309 1310 "experimental.evolution.allowdivergence=True")
1310 1311 raise error.Abort(msg % (",".join(divhashes),), hint=h)
1311 1312
1312 1313 def successorrevs(unfi, rev):
1313 1314 """yield revision numbers for successors of rev"""
1314 1315 assert unfi.filtername is None
1315 1316 nodemap = unfi.changelog.nodemap
1316 1317 for s in obsutil.allsuccessors(unfi.obsstore, [unfi[rev].node()]):
1317 1318 if s in nodemap:
1318 1319 yield nodemap[s]
1319 1320
1320 1321 def defineparents(repo, rev, destmap, state, skipped, obsskipped):
1321 1322 """Return new parents and optionally a merge base for rev being rebased
1322 1323
1323 1324 The destination specified by "dest" cannot always be used directly because
1324 1325 a previous rebase result could affect the destination. For example,
1325 1326
1326 1327 D E rebase -r C+D+E -d B
1327 1328 |/ C will be rebased to C'
1328 1329 B C D's new destination will be C' instead of B
1329 1330 |/ E's new destination will be C' instead of B
1330 1331 A
1331 1332
1332 1333 Computing the new parents of a merge changeset is slightly more complicated. See the comment
1333 1334 block below.
1334 1335 """
1335 1336 # use unfiltered changelog since successorrevs may return filtered nodes
1336 1337 assert repo.filtername is None
1337 1338 cl = repo.changelog
1338 1339 isancestor = cl.isancestorrev
1339 1340
1340 1341 dest = destmap[rev]
1341 1342 oldps = repo.changelog.parentrevs(rev) # old parents
1342 1343 newps = [nullrev, nullrev] # new parents
1343 1344 dests = adjustdest(repo, rev, destmap, state, skipped)
1344 1345 bases = list(oldps) # merge base candidates, initially just old parents
1345 1346
1346 1347 if all(r == nullrev for r in oldps[1:]):
1347 1348 # For non-merge changeset, just move p to adjusted dest as requested.
1348 1349 newps[0] = dests[0]
1349 1350 else:
1350 1351 # For merge changeset, if we move p to dests[i] unconditionally, both
1351 1352 # parents may change and the end result looks like "the merge loses a
1352 1353 # parent", which is a surprise. This is a limit because "--dest" only
1353 1354 # accepts one dest per src.
1354 1355 #
1355 1356 # Therefore, only move p with reasonable conditions (in this order):
1356 1357 # 1. use dest, if dest is a descendant of (p or one of p's successors)
1357 1358 # 2. use p's rebased result, if p is rebased (state[p] > 0)
1358 1359 #
1359 1360 # Comparing with adjustdest, the logic here does some additional work:
1360 1361 # 1. decide which parents will not be moved towards dest
1361 1362 # 2. if the above decision is "no", should a parent still be moved
1362 1363 # because it was rebased?
1363 1364 #
1364 1365 # For example:
1365 1366 #
1366 1367 # C # "rebase -r C -d D" is an error since none of the parents
1367 1368 # /| # can be moved. "rebase -r B+C -d D" will move C's parent
1368 1369 # A B D # B (using rule "2."), since B will be rebased.
1369 1370 #
1370 1371 # The loop tries not to rely on the fact that a Mercurial node has
1371 1372 # at most 2 parents.
1372 1373 for i, p in enumerate(oldps):
1373 1374 np = p # new parent
1374 1375 if any(isancestor(x, dests[i]) for x in successorrevs(repo, p)):
1375 1376 np = dests[i]
1376 1377 elif p in state and state[p] > 0:
1377 1378 np = state[p]
1378 1379
1379 1380 # "bases" only record "special" merge bases that cannot be
1380 1381 # calculated from changelog DAG (i.e. isancestor(p, np) is False).
1381 1382 # For example:
1382 1383 #
1383 1384 # B' # rebase -s B -d D, when B was rebased to B'. dest for C
1384 1385 # | C # is B', but merge base for C is B, instead of
1385 1386 # D | # changelog.ancestor(C, B') == A. If changelog DAG and
1386 1387 # | B # "state" edges are merged (so there will be an edge from
1387 1388 # |/ # B to B'), the merge base is still ancestor(C, B') in
1388 1389 # A # the merged graph.
1389 1390 #
1390 1391 # Also see https://bz.mercurial-scm.org/show_bug.cgi?id=1950#c8
1391 1392 # which uses "virtual null merge" to explain this situation.
1392 1393 if isancestor(p, np):
1393 1394 bases[i] = nullrev
1394 1395
1395 1396 # If one parent becomes an ancestor of the other, drop the ancestor
1396 1397 for j, x in enumerate(newps[:i]):
1397 1398 if x == nullrev:
1398 1399 continue
1399 1400 if isancestor(np, x): # CASE-1
1400 1401 np = nullrev
1401 1402 elif isancestor(x, np): # CASE-2
1402 1403 newps[j] = np
1403 1404 np = nullrev
1404 1405 # New parents forming an ancestor relationship does not
1405 1406 # mean the old parents have a similar relationship. Do not
1406 1407 # set bases[x] to nullrev.
1407 1408 bases[j], bases[i] = bases[i], bases[j]
1408 1409
1409 1410 newps[i] = np
1410 1411
1411 1412 # "rebasenode" updates to new p1, and the old p1 will be used as merge
1412 1413 # base. If only p2 changes, merging using unchanged p1 as merge base is
1413 1414 # suboptimal. Therefore swap parents to make the merge sane.
1414 1415 if newps[1] != nullrev and oldps[0] == newps[0]:
1415 1416 assert len(newps) == 2 and len(oldps) == 2
1416 1417 newps.reverse()
1417 1418 bases.reverse()
1418 1419
1419 1420 # No parent change might be an error because we fail to make rev a
1420 1421 # descendant of the requested dest. This can happen, for example:
1421 1422 #
1422 1423 # C # rebase -r C -d D
1423 1424 # /| # None of A and B will be changed to D and rebase fails.
1424 1425 # A B D
1425 1426 if set(newps) == set(oldps) and dest not in newps:
1426 1427 raise error.Abort(_('cannot rebase %d:%s without '
1427 1428 'moving at least one of its parents')
1428 1429 % (rev, repo[rev]))
1429 1430
1430 1431 # Source should not be ancestor of dest. The check here guarantees it's
1431 1432 # impossible. With multi-dest, the initial check does not cover complex
1432 1433 # cases since we don't have abstractions to dry-run rebase cheaply.
1433 1434 if any(p != nullrev and isancestor(rev, p) for p in newps):
1434 1435 raise error.Abort(_('source is ancestor of destination'))
1435 1436
1436 1437 # "rebasenode" updates to new p1, use the corresponding merge base.
1437 1438 if bases[0] != nullrev:
1438 1439 base = bases[0]
1439 1440 else:
1440 1441 base = None
1441 1442
1442 1443 # Check if the merge will contain unwanted changes. That may happen if
1443 1444 # there are multiple special (non-changelog ancestor) merge bases, which
1444 1445 # cannot be handled well by the 3-way merge algorithm. For example:
1445 1446 #
1446 1447 # F
1447 1448 # /|
1448 1449 # D E # "rebase -r D+E+F -d Z", when rebasing F, if "D" was chosen
1449 1450 # | | # as merge base, the difference between D and F will include
1450 1451 # B C # C, so the rebased F will contain C surprisingly. If "E" was
1451 1452 # |/ # chosen, the rebased F will contain B.
1452 1453 # A Z
1453 1454 #
1454 1455 # But our merge base candidates (D and E in above case) could still be
1455 1456 # better than the default (ancestor(F, Z) == null). Therefore still
1456 1457 # pick one (so choose p1 above).
1457 1458 if sum(1 for b in bases if b != nullrev) > 1:
1458 1459 unwanted = [None, None] # unwanted[i]: unwanted revs if choose bases[i]
1459 1460 for i, base in enumerate(bases):
1460 1461 if base == nullrev:
1461 1462 continue
1462 1463 # Revisions in the side (not chosen as merge base) branch that
1463 1464 # might contain "surprising" contents
1464 1465 siderevs = list(repo.revs('((%ld-%d) %% (%d+%d))',
1465 1466 bases, base, base, dest))
1466 1467
1467 1468 # If those revisions are covered by rebaseset, the result is good.
1468 1469 # A merge in rebaseset would be considered to cover its ancestors.
1469 1470 if siderevs:
1470 1471 rebaseset = [r for r, d in state.items()
1471 1472 if d > 0 and r not in obsskipped]
1472 1473 merges = [r for r in rebaseset
1473 1474 if cl.parentrevs(r)[1] != nullrev]
1474 1475 unwanted[i] = list(repo.revs('%ld - (::%ld) - %ld',
1475 1476 siderevs, merges, rebaseset))
1476 1477
1477 1478 # Choose a merge base that has a minimal number of unwanted revs.
1478 1479 l, i = min((len(revs), i)
1479 1480 for i, revs in enumerate(unwanted) if revs is not None)
1480 1481 base = bases[i]
1481 1482
1482 1483 # newps[0] should match merge base if possible. Currently, if newps[i]
1483 1484 # is nullrev, the only case is newps[i] and newps[j] (j < i), one is
1484 1485 # the other's ancestor. In that case, it's fine to not swap newps here.
1485 1486 # (see CASE-1 and CASE-2 above)
1486 1487 if i != 0 and newps[i] != nullrev:
1487 1488 newps[0], newps[i] = newps[i], newps[0]
1488 1489
1489 1490 # The merge will include unwanted revisions. Abort now. Revisit this if
1490 1491 # we have a more advanced merge algorithm that handles multiple bases.
1491 1492 if l > 0:
1492 1493 unwanteddesc = _(' or ').join(
1493 1494 (', '.join('%d:%s' % (r, repo[r]) for r in revs)
1494 1495 for revs in unwanted if revs is not None))
1495 1496 raise error.Abort(
1496 1497 _('rebasing %d:%s will include unwanted changes from %s')
1497 1498 % (rev, repo[rev], unwanteddesc))
1498 1499
1499 1500 repo.ui.debug(" future parents are %d and %d\n" % tuple(newps))
1500 1501
1501 1502 return newps[0], newps[1], base
1502 1503
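# Illustrative sketch (standalone, made-up revision numbers): the parent/base
# swap performed above. When only p2 of a merge moved, the moved parent is
# promoted to p1 (rebasenode updates to the new p1), so its old location can
# serve as the merge base.
nullrev = -1
oldps = [10, 20]           # original parents of the merge being rebased
newps = [10, 25]           # only the second parent moved (20 -> 25)
bases = list(oldps)        # merge base candidates, initially the old parents
if newps[1] != nullrev and oldps[0] == newps[0]:
    newps.reverse()
    bases.reverse()
print(newps, bases)        # [25, 10] [20, 10]
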
1503 1504 def isagitpatch(repo, patchname):
1504 1505 'Return true if the given patch is in git format'
1505 1506 mqpatch = os.path.join(repo.mq.path, patchname)
1506 1507 for line in patch.linereader(open(mqpatch, 'rb')):
1507 1508 if line.startswith('diff --git'):
1508 1509 return True
1509 1510 return False
1510 1511
1511 1512 def updatemq(repo, state, skipped, **opts):
1512 1513 'Update rebased mq patches - finalize and then import them'
1513 1514 mqrebase = {}
1514 1515 mq = repo.mq
1515 1516 original_series = mq.fullseries[:]
1516 1517 skippedpatches = set()
1517 1518
1518 1519 for p in mq.applied:
1519 1520 rev = repo[p.node].rev()
1520 1521 if rev in state:
1521 1522 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
1522 1523 (rev, p.name))
1523 1524 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
1524 1525 else:
1525 1526 # Applied but not rebased, not sure this should happen
1526 1527 skippedpatches.add(p.name)
1527 1528
1528 1529 if mqrebase:
1529 1530 mq.finish(repo, mqrebase.keys())
1530 1531
1531 1532 # We must start import from the newest revision
1532 1533 for rev in sorted(mqrebase, reverse=True):
1533 1534 if rev not in skipped:
1534 1535 name, isgit = mqrebase[rev]
1535 1536 repo.ui.note(_('updating mq patch %s to %d:%s\n') %
1536 1537 (name, state[rev], repo[state[rev]]))
1537 1538 mq.qimport(repo, (), patchname=name, git=isgit,
1538 1539 rev=["%d" % state[rev]])
1539 1540 else:
1540 1541 # Rebased and skipped
1541 1542 skippedpatches.add(mqrebase[rev][0])
1542 1543
1543 1544 # Patches were either applied and rebased and imported in
1544 1545 # order, applied and removed or unapplied. Discard the removed
1545 1546 # ones while preserving the original series order and guards.
1546 1547 newseries = [s for s in original_series
1547 1548 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
1548 1549 mq.fullseries[:] = newseries
1549 1550 mq.seriesdirty = True
1550 1551 mq.savedirty()
1551 1552
1552 1553 def storecollapsemsg(repo, collapsemsg):
1553 1554 'Store the collapse message to allow recovery'
1554 1555 collapsemsg = collapsemsg or ''
1555 1556 f = repo.vfs("last-message.txt", "w")
1556 1557 f.write("%s\n" % collapsemsg)
1557 1558 f.close()
1558 1559
1559 1560 def clearcollapsemsg(repo):
1560 1561 'Remove collapse message file'
1561 1562 repo.vfs.unlinkpath("last-message.txt", ignoremissing=True)
1562 1563
1563 1564 def restorecollapsemsg(repo, isabort):
1564 1565 'Restore previously stored collapse message'
1565 1566 try:
1566 1567 f = repo.vfs("last-message.txt")
1567 1568 collapsemsg = f.readline().strip()
1568 1569 f.close()
1569 1570 except IOError as err:
1570 1571 if err.errno != errno.ENOENT:
1571 1572 raise
1572 1573 if isabort:
1573 1574 # Oh well, just abort like normal
1574 1575 collapsemsg = ''
1575 1576 else:
1576 1577 raise error.Abort(_('missing .hg/last-message.txt for rebase'))
1577 1578 return collapsemsg
1578 1579
1579 1580 def clearstatus(repo):
1580 1581 'Remove the status files'
1581 1582 # Make sure the active transaction won't write the state file
1582 1583 tr = repo.currenttransaction()
1583 1584 if tr:
1584 1585 tr.removefilegenerator('rebasestate')
1585 1586 repo.vfs.unlinkpath("rebasestate", ignoremissing=True)
1586 1587
1587 1588 def needupdate(repo, state):
1588 1589 '''check whether we should `update --clean` away from a merge, or if
1589 1590 somehow the working dir got forcibly updated, e.g. by older hg'''
1590 1591 parents = [p.rev() for p in repo[None].parents()]
1591 1592
1592 1593 # Are we in a merge state at all?
1593 1594 if len(parents) < 2:
1594 1595 return False
1595 1596
1596 1597 # We should be standing on the first as-yet unrebased commit.
1597 1598 firstunrebased = min([old for old, new in state.iteritems()
1598 1599 if new == nullrev])
1599 1600 if firstunrebased in parents:
1600 1601 return True
1601 1602
1602 1603 return False
1603 1604
1604 1605 def abort(repo, originalwd, destmap, state, activebookmark=None, backup=True,
1605 1606 suppwarns=False):
1606 1607 '''Restore the repository to its original state. Additional args:
1607 1608
1608 1609 activebookmark: the name of the bookmark that should be active after the
1609 1610 restore'''
1610 1611
1611 1612 try:
1612 1613 # If the first commits in the rebased set get skipped during the rebase,
1613 1614 # their values within the state mapping will be the dest rev id. The
1614 1615 # rebased list must not contain the dest rev (issue4896)
1615 1616 rebased = [s for r, s in state.items()
1616 1617 if s >= 0 and s != r and s != destmap[r]]
1617 1618 immutable = [d for d in rebased if not repo[d].mutable()]
1618 1619 cleanup = True
1619 1620 if immutable:
1620 1621 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1621 1622 % ', '.join(bytes(repo[r]) for r in immutable),
1622 1623 hint=_("see 'hg help phases' for details"))
1623 1624 cleanup = False
1624 1625
1625 1626 descendants = set()
1626 1627 if rebased:
1627 1628 descendants = set(repo.changelog.descendants(rebased))
1628 1629 if descendants - set(rebased):
1629 1630 repo.ui.warn(_("warning: new changesets detected on destination "
1630 1631 "branch, can't strip\n"))
1631 1632 cleanup = False
1632 1633
1633 1634 if cleanup:
1634 1635 shouldupdate = False
1635 1636 if rebased:
1636 1637 strippoints = [
1637 1638 c.node() for c in repo.set('roots(%ld)', rebased)]
1638 1639
1639 1640 updateifonnodes = set(rebased)
1640 1641 updateifonnodes.update(destmap.values())
1641 1642 updateifonnodes.add(originalwd)
1642 1643 shouldupdate = repo['.'].rev() in updateifonnodes
1643 1644
1644 1645 # Update away from the rebase if necessary
1645 1646 if shouldupdate or needupdate(repo, state):
1646 mergemod.update(repo, originalwd, False, True)
1647 mergemod.update(repo, originalwd, branchmerge=False, force=True)
1647 1648
1648 1649 # Strip from the first rebased revision
1649 1650 if rebased:
1650 1651 repair.strip(repo.ui, repo, strippoints, backup=backup)
1651 1652
1652 1653 if activebookmark and activebookmark in repo._bookmarks:
1653 1654 bookmarks.activate(repo, activebookmark)
1654 1655
1655 1656 finally:
1656 1657 clearstatus(repo)
1657 1658 clearcollapsemsg(repo)
1658 1659 if not suppwarns:
1659 1660 repo.ui.warn(_('rebase aborted\n'))
1660 1661 return 0
1661 1662
1662 1663 def sortsource(destmap):
1663 1664 """yield source revisions in an order that we only rebase things once
1664 1665
1665 1666 If source and destination overlap, we should filter out revisions
1666 1667 depending on other revisions which haven't been rebased yet.
1667 1668
1668 1669 Yield a sorted list of revisions each time.
1669 1670
1670 1671 For example, when rebasing A to B and B to C, this function yields [B], then
1671 1672 [A], indicating B needs to be rebased first.
1672 1673
1673 1674 Raise if there is a cycle so the rebase is impossible.
1674 1675 """
1675 1676 srcset = set(destmap)
1676 1677 while srcset:
1677 1678 srclist = sorted(srcset)
1678 1679 result = []
1679 1680 for r in srclist:
1680 1681 if destmap[r] not in srcset:
1681 1682 result.append(r)
1682 1683 if not result:
1683 1684 raise error.Abort(_('source and destination form a cycle'))
1684 1685 srcset -= set(result)
1685 1686 yield result
1686 1687
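# Illustrative sketch (standalone restatement with made-up revisions): rebasing
# A onto B and B onto C yields the batch [B] first, then [A], exactly as the
# docstring above describes.
def toy_sortsource(destmap):
    srcset = set(destmap)
    while srcset:
        batch = [r for r in sorted(srcset) if destmap[r] not in srcset]
        if not batch:
            raise RuntimeError('source and destination form a cycle')
        srcset -= set(batch)
        yield batch

A, B, C = 1, 2, 3
print(list(toy_sortsource({A: B, B: C})))   # [[2], [1]], i.e. [B] then [A]
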
1687 1688 def buildstate(repo, destmap, collapse):
1688 1689 '''Define which revisions are going to be rebased and where
1689 1690
1690 1691 repo: repo
1691 1692 destmap: {srcrev: destrev}
1692 1693 '''
1693 1694 rebaseset = destmap.keys()
1694 1695 originalwd = repo['.'].rev()
1695 1696
1696 1697 # This check isn't strictly necessary, since mq detects commits over an
1697 1698 # applied patch. But it prevents messing up the working directory when
1698 1699 # a partially completed rebase is blocked by mq.
1699 1700 if 'qtip' in repo.tags():
1700 1701 mqapplied = set(repo[s.node].rev() for s in repo.mq.applied)
1701 1702 if set(destmap.values()) & mqapplied:
1702 1703 raise error.Abort(_('cannot rebase onto an applied mq patch'))
1703 1704
1704 1705 # Get "cycle" error early by exhausting the generator.
1705 1706 sortedsrc = list(sortsource(destmap)) # a list of sorted revs
1706 1707 if not sortedsrc:
1707 1708 raise error.Abort(_('no matching revisions'))
1708 1709
1709 1710 # Only check the first batch of revisions to rebase, which does not depend
1710 1711 # on the rest of the rebaseset. This means the "source is ancestor of
1711 1712 # destination" check for the second (and following) batches of revisions is
1712 1713 # not done here. We rely on "defineparents" to do that check.
1713 1714 roots = list(repo.set('roots(%ld)', sortedsrc[0]))
1714 1715 if not roots:
1715 1716 raise error.Abort(_('no matching revisions'))
1716 1717 def revof(r):
1717 1718 return r.rev()
1718 1719 roots = sorted(roots, key=revof)
1719 1720 state = dict.fromkeys(rebaseset, revtodo)
1720 1721 emptyrebase = (len(sortedsrc) == 1)
1721 1722 for root in roots:
1722 1723 dest = repo[destmap[root.rev()]]
1723 1724 commonbase = root.ancestor(dest)
1724 1725 if commonbase == root:
1725 1726 raise error.Abort(_('source is ancestor of destination'))
1726 1727 if commonbase == dest:
1727 1728 wctx = repo[None]
1728 1729 if dest == wctx.p1():
1729 1730 # when rebasing to '.', it will use the current wd branch name
1730 1731 samebranch = root.branch() == wctx.branch()
1731 1732 else:
1732 1733 samebranch = root.branch() == dest.branch()
1733 1734 if not collapse and samebranch and dest in root.parents():
1734 1735 # mark the revision as done by setting its new revision
1735 1736 # equal to its old (current) revision
1736 1737 state[root.rev()] = root.rev()
1737 1738 repo.ui.debug('source is a child of destination\n')
1738 1739 continue
1739 1740
1740 1741 emptyrebase = False
1741 1742 repo.ui.debug('rebase onto %s starting from %s\n' % (dest, root))
1742 1743 if emptyrebase:
1743 1744 return None
1744 1745 for rev in sorted(state):
1745 1746 parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
1746 1747 # if all parents of this revision are done, then so is this revision
1747 1748 if parents and all((state.get(p) == p for p in parents)):
1748 1749 state[rev] = rev
1749 1750 return originalwd, destmap, state
1750 1751
1751 1752 def clearrebased(ui, repo, destmap, state, skipped, collapsedas=None,
1752 1753 keepf=False, fm=None, backup=True):
1753 1754 """dispose of rebased revision at the end of the rebase
1754 1755
1755 1756 If `collapsedas` is not None, the rebase was a collapse whose result is the
1756 1757 `collapsedas` node.
1757 1758
1758 1759 If `keepf` is True, the rebase has --keep set and no nodes should be
1759 1760 removed (but bookmarks still need to be moved).
1760 1761
1761 1762 If `backup` is False, no backup will be stored when stripping rebased
1762 1763 revisions.
1763 1764 """
1764 1765 tonode = repo.changelog.node
1765 1766 replacements = {}
1766 1767 moves = {}
1767 1768 stripcleanup = not obsolete.isenabled(repo, obsolete.createmarkersopt)
1768 1769
1769 1770 collapsednodes = []
1770 1771 for rev, newrev in sorted(state.items()):
1771 1772 if newrev >= 0 and newrev != rev:
1772 1773 oldnode = tonode(rev)
1773 1774 newnode = collapsedas or tonode(newrev)
1774 1775 moves[oldnode] = newnode
1775 1776 if not keepf:
1776 1777 succs = None
1777 1778 if rev in skipped:
1778 1779 if stripcleanup or not repo[rev].obsolete():
1779 1780 succs = ()
1780 1781 elif collapsedas:
1781 1782 collapsednodes.append(oldnode)
1782 1783 else:
1783 1784 succs = (newnode,)
1784 1785 if succs is not None:
1785 1786 replacements[(oldnode,)] = succs
1786 1787 if collapsednodes:
1787 1788 replacements[tuple(collapsednodes)] = (collapsedas,)
1788 1789 scmutil.cleanupnodes(repo, replacements, 'rebase', moves, backup=backup)
1789 1790 if fm:
1790 1791 hf = fm.hexfunc
1791 1792 fl = fm.formatlist
1792 1793 fd = fm.formatdict
1793 1794 changes = {}
1794 1795 for oldns, newn in replacements.iteritems():
1795 1796 for oldn in oldns:
1796 1797 changes[hf(oldn)] = fl([hf(n) for n in newn], name='node')
1797 1798 nodechanges = fd(changes, key="oldnode", value="newnodes")
1798 1799 fm.data(nodechanges=nodechanges)
1799 1800
1800 1801 def pullrebase(orig, ui, repo, *args, **opts):
1801 1802 'Call rebase after pull if the latter has been invoked with --rebase'
1802 1803 ret = None
1803 1804 if opts.get(r'rebase'):
1804 1805 if ui.configbool('commands', 'rebase.requiredest'):
1805 1806 msg = _('rebase destination required by configuration')
1806 1807 hint = _('use hg pull followed by hg rebase -d DEST')
1807 1808 raise error.Abort(msg, hint=hint)
1808 1809
1809 1810 with repo.wlock(), repo.lock():
1810 1811 if opts.get(r'update'):
1811 1812 del opts[r'update']
1812 1813 ui.debug('--update and --rebase are not compatible, ignoring '
1813 1814 'the update flag\n')
1814 1815
1815 1816 cmdutil.checkunfinished(repo)
1816 1817 cmdutil.bailifchanged(repo, hint=_('cannot pull with rebase: '
1817 1818 'please commit or shelve your changes first'))
1818 1819
1819 1820 revsprepull = len(repo)
1820 1821 origpostincoming = commands.postincoming
1821 1822 def _dummy(*args, **kwargs):
1822 1823 pass
1823 1824 commands.postincoming = _dummy
1824 1825 try:
1825 1826 ret = orig(ui, repo, *args, **opts)
1826 1827 finally:
1827 1828 commands.postincoming = origpostincoming
1828 1829 revspostpull = len(repo)
1829 1830 if revspostpull > revsprepull:
1830 1831 # the --rev option from pull conflicts with rebase's own --rev,
1831 1832 # so drop it
1832 1833 if r'rev' in opts:
1833 1834 del opts[r'rev']
1834 1835 # positional argument from pull conflicts with rebase's own
1835 1836 # --source.
1836 1837 if r'source' in opts:
1837 1838 del opts[r'source']
1838 1839 # revsprepull is the len of the repo, not revnum of tip.
1839 1840 destspace = list(repo.changelog.revs(start=revsprepull))
1840 1841 opts[r'_destspace'] = destspace
1841 1842 try:
1842 1843 rebase(ui, repo, **opts)
1843 1844 except error.NoMergeDestAbort:
1844 1845 # we can maybe update instead
1845 1846 rev, _a, _b = destutil.destupdate(repo)
1846 1847 if rev == repo['.'].rev():
1847 1848 ui.status(_('nothing to rebase\n'))
1848 1849 else:
1849 1850 ui.status(_('nothing to rebase - updating instead\n'))
1850 1851 # not passing arguments to get the bare update behavior
1851 1852 # with warning and trumpets
1852 1853 commands.update(ui, repo)
1853 1854 else:
1854 1855 if opts.get(r'tool'):
1855 1856 raise error.Abort(_('--tool can only be used with --rebase'))
1856 1857 ret = orig(ui, repo, *args, **opts)
1857 1858
1858 1859 return ret
1859 1860
1860 1861 def _filterobsoleterevs(repo, revs):
1861 1862 """returns a set of the obsolete revisions in revs"""
1862 1863 return set(r for r in revs if repo[r].obsolete())
1863 1864
1864 1865 def _computeobsoletenotrebased(repo, rebaseobsrevs, destmap):
1865 1866 """Return (obsoletenotrebased, obsoletewithoutsuccessorindestination).
1866 1867
1867 1868 `obsoletenotrebased` is a mapping obsolete => successor for all
1868 1869 obsolete nodes to be rebased given in `rebaseobsrevs`.
1869 1870
1870 1871 `obsoletewithoutsuccessorindestination` is a set with obsolete revisions
1871 1872 without a successor in destination.
1872 1873
1873 1874 `obsoleteextinctsuccessors` is a set of obsolete revisions with only
1874 1875 obsolete successors.
1875 1876 """
1876 1877 obsoletenotrebased = {}
1877 1878 obsoletewithoutsuccessorindestination = set([])
1878 1879 obsoleteextinctsuccessors = set([])
1879 1880
1880 1881 assert repo.filtername is None
1881 1882 cl = repo.changelog
1882 1883 nodemap = cl.nodemap
1883 1884 extinctrevs = set(repo.revs('extinct()'))
1884 1885 for srcrev in rebaseobsrevs:
1885 1886 srcnode = cl.node(srcrev)
1886 1887 # XXX: more advanced APIs are required to handle split correctly
1887 1888 successors = set(obsutil.allsuccessors(repo.obsstore, [srcnode]))
1888 1889 # obsutil.allsuccessors includes node itself
1889 1890 successors.remove(srcnode)
1890 1891 succrevs = {nodemap[s] for s in successors if s in nodemap}
1891 1892 if succrevs.issubset(extinctrevs):
1892 1893 # all successors are extinct
1893 1894 obsoleteextinctsuccessors.add(srcrev)
1894 1895 if not successors:
1895 1896 # no successor
1896 1897 obsoletenotrebased[srcrev] = None
1897 1898 else:
1898 1899 dstrev = destmap[srcrev]
1899 1900 for succrev in succrevs:
1900 1901 if cl.isancestorrev(succrev, dstrev):
1901 1902 obsoletenotrebased[srcrev] = succrev
1902 1903 break
1903 1904 else:
1904 1905 # If 'srcrev' has a successor in rebase set but none in
1905 1906 # destination (which would be caught above), we shall skip it
1906 1907 # and its descendants to avoid divergence.
1907 1908 if srcrev in extinctrevs or any(s in destmap for s in succrevs):
1908 1909 obsoletewithoutsuccessorindestination.add(srcrev)
1909 1910
1910 1911 return (
1911 1912 obsoletenotrebased,
1912 1913 obsoletewithoutsuccessorindestination,
1913 1914 obsoleteextinctsuccessors,
1914 1915 )
1915 1916
1916 1917 def summaryhook(ui, repo):
1917 1918 if not repo.vfs.exists('rebasestate'):
1918 1919 return
1919 1920 try:
1920 1921 rbsrt = rebaseruntime(repo, ui, {})
1921 1922 rbsrt.restorestatus()
1922 1923 state = rbsrt.state
1923 1924 except error.RepoLookupError:
1924 1925 # i18n: column positioning for "hg summary"
1925 1926 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1926 1927 ui.write(msg)
1927 1928 return
1928 1929 numrebased = len([i for i in state.itervalues() if i >= 0])
1929 1930 # i18n: column positioning for "hg summary"
1930 1931 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1931 1932 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1932 1933 ui.label(_('%d remaining'), 'rebase.remaining') %
1933 1934 (len(state) - numrebased)))
1934 1935
1935 1936 def uisetup(ui):
1936 1937 # Replace pull with a decorator to provide --rebase option
1937 1938 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1938 1939 entry[1].append(('', 'rebase', None,
1939 1940 _("rebase working directory to branch head")))
1940 1941 entry[1].append(('t', 'tool', '',
1941 1942 _("specify merge tool for rebase")))
1942 1943 cmdutil.summaryhooks.add('rebase', summaryhook)
1943 1944 cmdutil.unfinishedstates.append(
1944 1945 ['rebasestate', False, False, _('rebase in progress'),
1945 1946 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1946 1947 cmdutil.afterresolvedstates.append(
1947 1948 ['rebasestate', _('hg rebase --continue')])
@@ -1,1152 +1,1152 b''
1 1 # shelve.py - save/restore working directory state
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """save and restore changes to the working directory
9 9
10 10 The "hg shelve" command saves changes made to the working directory
11 11 and reverts those changes, resetting the working directory to a clean
12 12 state.
13 13
14 14 Later on, the "hg unshelve" command restores the changes saved by "hg
15 15 shelve". Changes can be restored even after updating to a different
16 16 parent, in which case Mercurial's merge machinery will resolve any
17 17 conflicts if necessary.
18 18
19 19 You can have more than one shelved change outstanding at a time; each
20 20 shelved change has a distinct name. For details, see the help for "hg
21 21 shelve".
22 22 """
23 23 from __future__ import absolute_import
24 24
25 25 import collections
26 26 import errno
27 27 import itertools
28 28 import stat
29 29
30 30 from mercurial.i18n import _
31 31 from mercurial import (
32 32 bookmarks,
33 33 bundle2,
34 34 bundlerepo,
35 35 changegroup,
36 36 cmdutil,
37 37 discovery,
38 38 error,
39 39 exchange,
40 40 hg,
41 41 lock as lockmod,
42 42 mdiff,
43 43 merge,
44 44 narrowspec,
45 45 node as nodemod,
46 46 patch,
47 47 phases,
48 48 pycompat,
49 49 registrar,
50 50 repair,
51 51 scmutil,
52 52 templatefilters,
53 53 util,
54 54 vfs as vfsmod,
55 55 )
56 56
57 57 from . import (
58 58 rebase,
59 59 )
60 60 from mercurial.utils import (
61 61 dateutil,
62 62 stringutil,
63 63 )
64 64
65 65 cmdtable = {}
66 66 command = registrar.command(cmdtable)
67 67 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
68 68 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
69 69 # be specifying the version(s) of Mercurial they are tested with, or
70 70 # leave the attribute unspecified.
71 71 testedwith = 'ships-with-hg-core'
72 72
73 73 configtable = {}
74 74 configitem = registrar.configitem(configtable)
75 75
76 76 configitem('shelve', 'maxbackups',
77 77 default=10,
78 78 )
79 79
80 80 backupdir = 'shelve-backup'
81 81 shelvedir = 'shelved'
82 82 shelvefileextensions = ['hg', 'patch', 'shelve']
83 83 # universal extension is present in all types of shelves
84 84 patchextension = 'patch'
85 85
86 86 # we never need the user, so we use a
87 87 # generic user for all shelve operations
88 88 shelveuser = 'shelve@localhost'
89 89
90 90 class shelvedfile(object):
91 91 """Helper for the file storing a single shelve
92 92
93 93 Handles common functions on shelve files (.hg/.patch) using
94 94 the vfs layer"""
95 95 def __init__(self, repo, name, filetype=None):
96 96 self.repo = repo
97 97 self.name = name
98 98 self.vfs = vfsmod.vfs(repo.vfs.join(shelvedir))
99 99 self.backupvfs = vfsmod.vfs(repo.vfs.join(backupdir))
100 100 self.ui = self.repo.ui
101 101 if filetype:
102 102 self.fname = name + '.' + filetype
103 103 else:
104 104 self.fname = name
105 105
106 106 def exists(self):
107 107 return self.vfs.exists(self.fname)
108 108
109 109 def filename(self):
110 110 return self.vfs.join(self.fname)
111 111
112 112 def backupfilename(self):
113 113 def gennames(base):
114 114 yield base
115 115 base, ext = base.rsplit('.', 1)
116 116 for i in itertools.count(1):
117 117 yield '%s-%d.%s' % (base, i, ext)
118 118
119 119 name = self.backupvfs.join(self.fname)
120 120 for n in gennames(name):
121 121 if not self.backupvfs.exists(n):
122 122 return n
123 123
124 124 def movetobackup(self):
125 125 if not self.backupvfs.isdir():
126 126 self.backupvfs.makedir()
127 127 util.rename(self.filename(), self.backupfilename())
128 128
129 129 def stat(self):
130 130 return self.vfs.stat(self.fname)
131 131
132 132 def opener(self, mode='rb'):
133 133 try:
134 134 return self.vfs(self.fname, mode)
135 135 except IOError as err:
136 136 if err.errno != errno.ENOENT:
137 137 raise
138 138 raise error.Abort(_("shelved change '%s' not found") % self.name)
139 139
140 140 def applybundle(self):
141 141 fp = self.opener()
142 142 try:
143 143 targetphase = phases.internal
144 144 if not phases.supportinternal(self.repo):
145 145 targetphase = phases.secret
146 146 gen = exchange.readbundle(self.repo.ui, fp, self.fname, self.vfs)
147 147 pretip = self.repo['tip']
148 148 tr = self.repo.currenttransaction()
149 149 bundle2.applybundle(self.repo, gen, tr,
150 150 source='unshelve',
151 151 url='bundle:' + self.vfs.join(self.fname),
152 152 targetphase=targetphase)
153 153 shelvectx = self.repo['tip']
154 154 if pretip == shelvectx:
155 155 shelverev = tr.changes['revduplicates'][-1]
156 156 shelvectx = self.repo[shelverev]
157 157 return shelvectx
158 158 finally:
159 159 fp.close()
160 160
161 161 def bundlerepo(self):
162 162 path = self.vfs.join(self.fname)
163 163 return bundlerepo.instance(self.repo.baseui,
164 164 'bundle://%s+%s' % (self.repo.root, path))
165 165
166 166 def writebundle(self, bases, node):
167 167 cgversion = changegroup.safeversion(self.repo)
168 168 if cgversion == '01':
169 169 btype = 'HG10BZ'
170 170 compression = None
171 171 else:
172 172 btype = 'HG20'
173 173 compression = 'BZ'
174 174
175 175 repo = self.repo.unfiltered()
176 176
177 177 outgoing = discovery.outgoing(repo, missingroots=bases,
178 178 missingheads=[node])
179 179 cg = changegroup.makechangegroup(repo, outgoing, cgversion, 'shelve')
180 180
181 181 bundle2.writebundle(self.ui, cg, self.fname, btype, self.vfs,
182 182 compression=compression)
183 183
184 184 def writeinfo(self, info):
185 185 scmutil.simplekeyvaluefile(self.vfs, self.fname).write(info)
186 186
187 187 def readinfo(self):
188 188 return scmutil.simplekeyvaluefile(self.vfs, self.fname).read()
189 189
190 190 class shelvedstate(object):
191 191 """Handle persistence during unshelving operations.
192 192
193 193 Handles saving and restoring a shelved state. Ensures that different
194 194 versions of a shelved state are possible and handles them appropriately.
195 195 """
196 196 _version = 2
197 197 _filename = 'shelvedstate'
198 198 _keep = 'keep'
199 199 _nokeep = 'nokeep'
200 200 # colon is essential to differentiate from a real bookmark name
201 201 _noactivebook = ':no-active-bookmark'
202 202
203 203 @classmethod
204 204 def _verifyandtransform(cls, d):
205 205 """Some basic shelvestate syntactic verification and transformation"""
206 206 try:
207 207 d['originalwctx'] = nodemod.bin(d['originalwctx'])
208 208 d['pendingctx'] = nodemod.bin(d['pendingctx'])
209 209 d['parents'] = [nodemod.bin(h)
210 210 for h in d['parents'].split(' ')]
211 211 d['nodestoremove'] = [nodemod.bin(h)
212 212 for h in d['nodestoremove'].split(' ')]
213 213 except (ValueError, TypeError, KeyError) as err:
214 214 raise error.CorruptedState(pycompat.bytestr(err))
215 215
216 216 @classmethod
217 217 def _getversion(cls, repo):
218 218 """Read version information from shelvestate file"""
219 219 fp = repo.vfs(cls._filename)
220 220 try:
221 221 version = int(fp.readline().strip())
222 222 except ValueError as err:
223 223 raise error.CorruptedState(pycompat.bytestr(err))
224 224 finally:
225 225 fp.close()
226 226 return version
227 227
228 228 @classmethod
229 229 def _readold(cls, repo):
230 230 """Read the old position-based version of a shelvestate file"""
231 231 # Order is important, because old shelvestate file uses it
232 232 # to determine values of fields (e.g. name is on the second line,
233 233 # originalwctx is on the third and so forth). Please do not change.
234 234 keys = ['version', 'name', 'originalwctx', 'pendingctx', 'parents',
235 235 'nodestoremove', 'branchtorestore', 'keep', 'activebook']
236 236 # this is executed only rarely, so it is not a big deal
237 237 # that we open this file twice
238 238 fp = repo.vfs(cls._filename)
239 239 d = {}
240 240 try:
241 241 for key in keys:
242 242 d[key] = fp.readline().strip()
243 243 finally:
244 244 fp.close()
245 245 return d
246 246
247 247 @classmethod
248 248 def load(cls, repo):
249 249 version = cls._getversion(repo)
250 250 if version < cls._version:
251 251 d = cls._readold(repo)
252 252 elif version == cls._version:
253 253 d = scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
254 254 .read(firstlinenonkeyval=True)
255 255 else:
256 256 raise error.Abort(_('this version of shelve is incompatible '
257 257 'with the version used in this repo'))
258 258
259 259 cls._verifyandtransform(d)
260 260 try:
261 261 obj = cls()
262 262 obj.name = d['name']
263 263 obj.wctx = repo[d['originalwctx']]
264 264 obj.pendingctx = repo[d['pendingctx']]
265 265 obj.parents = d['parents']
266 266 obj.nodestoremove = d['nodestoremove']
267 267 obj.branchtorestore = d.get('branchtorestore', '')
268 268 obj.keep = d.get('keep') == cls._keep
269 269 obj.activebookmark = ''
270 270 if d.get('activebook', '') != cls._noactivebook:
271 271 obj.activebookmark = d.get('activebook', '')
272 272 except (error.RepoLookupError, KeyError) as err:
273 273 raise error.CorruptedState(pycompat.bytestr(err))
274 274
275 275 return obj
276 276
277 277 @classmethod
278 278 def save(cls, repo, name, originalwctx, pendingctx, nodestoremove,
279 279 branchtorestore, keep=False, activebook=''):
280 280 info = {
281 281 "name": name,
282 282 "originalwctx": nodemod.hex(originalwctx.node()),
283 283 "pendingctx": nodemod.hex(pendingctx.node()),
284 284 "parents": ' '.join([nodemod.hex(p)
285 285 for p in repo.dirstate.parents()]),
286 286 "nodestoremove": ' '.join([nodemod.hex(n)
287 287 for n in nodestoremove]),
288 288 "branchtorestore": branchtorestore,
289 289 "keep": cls._keep if keep else cls._nokeep,
290 290 "activebook": activebook or cls._noactivebook
291 291 }
292 292 scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
293 293 .write(info, firstline=("%d" % cls._version))
294 294
295 295 @classmethod
296 296 def clear(cls, repo):
297 297 repo.vfs.unlinkpath(cls._filename, ignoremissing=True)
298 298
299 299 def cleanupoldbackups(repo):
300 300 vfs = vfsmod.vfs(repo.vfs.join(backupdir))
301 301 maxbackups = repo.ui.configint('shelve', 'maxbackups')
302 302 hgfiles = [f for f in vfs.listdir()
303 303 if f.endswith('.' + patchextension)]
304 304 hgfiles = sorted([(vfs.stat(f)[stat.ST_MTIME], f) for f in hgfiles])
305 305 if maxbackups > 0 and maxbackups < len(hgfiles):
306 306 bordermtime = hgfiles[-maxbackups][0]
307 307 else:
308 308 bordermtime = None
309 309 for mtime, f in hgfiles[:len(hgfiles) - maxbackups]:
310 310 if mtime == bordermtime:
311 311 # keep it, because the timestamp can't decide the exact order of backups
312 312 continue
313 313 base = f[:-(1 + len(patchextension))]
314 314 for ext in shelvefileextensions:
315 315 vfs.tryunlink(base + '.' + ext)
316 316
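# Illustrative sketch (standalone, made-up timestamps): the pruning rule used
# above. With maxbackups=2 the oldest backups are removed, except that entries
# sharing the border timestamp are kept, since mtime alone cannot order them.
maxbackups = 2
backups = sorted([(100, 'a'), (200, 'b'), (200, 'c'), (300, 'd')])
bordermtime = backups[-maxbackups][0] if 0 < maxbackups < len(backups) else None
removed = [f for mtime, f in backups[:len(backups) - maxbackups]
           if mtime != bordermtime]
print(removed)   # ['a']; 'b' survives because it shares mtime 200 with 'c'
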
317 317 def _backupactivebookmark(repo):
318 318 activebookmark = repo._activebookmark
319 319 if activebookmark:
320 320 bookmarks.deactivate(repo)
321 321 return activebookmark
322 322
323 323 def _restoreactivebookmark(repo, mark):
324 324 if mark:
325 325 bookmarks.activate(repo, mark)
326 326
327 327 def _aborttransaction(repo):
328 328 '''Abort current transaction for shelve/unshelve, but keep dirstate
329 329 '''
330 330 tr = repo.currenttransaction()
331 331 dirstatebackupname = 'dirstate.shelve'
332 332 narrowspecbackupname = 'narrowspec.shelve'
333 333 repo.dirstate.savebackup(tr, dirstatebackupname)
334 334 narrowspec.savebackup(repo, narrowspecbackupname)
335 335 tr.abort()
336 336 narrowspec.restorebackup(repo, narrowspecbackupname)
337 337 repo.dirstate.restorebackup(None, dirstatebackupname)
338 338
339 339 def getshelvename(repo, parent, opts):
340 340 """Decide on the name this shelve is going to have"""
341 341 def gennames():
342 342 yield label
343 343 for i in itertools.count(1):
344 344 yield '%s-%02d' % (label, i)
345 345 name = opts.get('name')
346 346 label = repo._activebookmark or parent.branch() or 'default'
347 347 # slashes aren't allowed in filenames, therefore we rename it
348 348 label = label.replace('/', '_')
349 349 label = label.replace('\\', '_')
350 350 # filenames must not start with '.', as the shelve file should not be hidden
351 351 if label.startswith('.'):
352 352 label = label.replace('.', '_', 1)
353 353
354 354 if name:
355 355 if shelvedfile(repo, name, patchextension).exists():
356 356 e = _("a shelved change named '%s' already exists") % name
357 357 raise error.Abort(e)
358 358
359 359 # ensure we are not creating a subdirectory or a hidden file
360 360 if '/' in name or '\\' in name:
361 361 raise error.Abort(_('shelved change names can not contain slashes'))
362 362 if name.startswith('.'):
363 363 raise error.Abort(_("shelved change names can not start with '.'"))
364 364
365 365 else:
366 366 for n in gennames():
367 367 if not shelvedfile(repo, n, patchextension).exists():
368 368 name = n
369 369 break
370 370
371 371 return name
372 372
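# Illustrative sketch (standalone, made-up existing names): how a default
# shelve name is derived above from the active bookmark or branch label when
# no --name is given.
import itertools

def toy_gennames(label):
    yield label
    for i in itertools.count(1):
        yield '%s-%02d' % (label, i)

label = 'feature/fix'.replace('/', '_').replace('\\', '_')
if label.startswith('.'):
    label = label.replace('.', '_', 1)
existing = {'feature_fix', 'feature_fix-01'}
print(next(n for n in toy_gennames(label) if n not in existing))  # feature_fix-02
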
373 373 def mutableancestors(ctx):
374 374 """return all mutable ancestors for ctx (included)
375 375
376 376 Much faster than the revset ancestors(ctx) & draft()"""
377 377 seen = {nodemod.nullrev}
378 378 visit = collections.deque()
379 379 visit.append(ctx)
380 380 while visit:
381 381 ctx = visit.popleft()
382 382 yield ctx.node()
383 383 for parent in ctx.parents():
384 384 rev = parent.rev()
385 385 if rev not in seen:
386 386 seen.add(rev)
387 387 if parent.mutable():
388 388 visit.append(parent)
389 389
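# Illustrative sketch (standalone toy DAG): the traversal above walks parents
# breadth-first and stops descending at immutable (public) ancestors, so only
# the draft part of history ends up in the shelve bundle.
import collections

parentsmap = {'D': ['C'], 'C': ['B'], 'B': ['A'], 'A': []}   # made-up history
mutable = {'D', 'C'}                                         # A and B are public

def toy_mutableancestors(start):
    seen = {start}
    visit = collections.deque([start])
    while visit:
        node = visit.popleft()
        yield node
        for p in parentsmap[node]:
            if p not in seen:
                seen.add(p)
                if p in mutable:
                    visit.append(p)

print(list(toy_mutableancestors('D')))   # ['D', 'C']
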
390 390 def getcommitfunc(extra, interactive, editor=False):
391 391 def commitfunc(ui, repo, message, match, opts):
392 392 hasmq = util.safehasattr(repo, 'mq')
393 393 if hasmq:
394 394 saved, repo.mq.checkapplied = repo.mq.checkapplied, False
395 395
396 396 targetphase = phases.internal
397 397 if not phases.supportinternal(repo):
398 398 targetphase = phases.secret
399 399 overrides = {('phases', 'new-commit'): targetphase}
400 400 try:
401 401 editor_ = False
402 402 if editor:
403 403 editor_ = cmdutil.getcommiteditor(editform='shelve.shelve',
404 404 **pycompat.strkwargs(opts))
405 405 with repo.ui.configoverride(overrides):
406 406 return repo.commit(message, shelveuser, opts.get('date'),
407 407 match, editor=editor_, extra=extra)
408 408 finally:
409 409 if hasmq:
410 410 repo.mq.checkapplied = saved
411 411
412 412 def interactivecommitfunc(ui, repo, *pats, **opts):
413 413 opts = pycompat.byteskwargs(opts)
414 414 match = scmutil.match(repo['.'], pats, {})
415 415 message = opts['message']
416 416 return commitfunc(ui, repo, message, match, opts)
417 417
418 418 return interactivecommitfunc if interactive else commitfunc
419 419
420 420 def _nothingtoshelvemessaging(ui, repo, pats, opts):
421 421 stat = repo.status(match=scmutil.match(repo[None], pats, opts))
422 422 if stat.deleted:
423 423 ui.status(_("nothing changed (%d missing files, see "
424 424 "'hg status')\n") % len(stat.deleted))
425 425 else:
426 426 ui.status(_("nothing changed\n"))
427 427
428 428 def _shelvecreatedcommit(repo, node, name):
429 429 info = {'node': nodemod.hex(node)}
430 430 shelvedfile(repo, name, 'shelve').writeinfo(info)
431 431 bases = list(mutableancestors(repo[node]))
432 432 shelvedfile(repo, name, 'hg').writebundle(bases, node)
433 433 with shelvedfile(repo, name, patchextension).opener('wb') as fp:
434 434 cmdutil.exportfile(repo, [node], fp, opts=mdiff.diffopts(git=True))
435 435
436 436 def _includeunknownfiles(repo, pats, opts, extra):
437 437 s = repo.status(match=scmutil.match(repo[None], pats, opts),
438 438 unknown=True)
439 439 if s.unknown:
440 440 extra['shelve_unknown'] = '\0'.join(s.unknown)
441 441 repo[None].add(s.unknown)
442 442
443 443 def _finishshelve(repo):
444 444 if phases.supportinternal(repo):
445 445 backupname = 'dirstate.shelve'
446 446 tr = repo.currenttransaction()
447 447 repo.dirstate.savebackup(tr, backupname)
448 448 tr.close()
449 449 repo.dirstate.restorebackup(None, backupname)
450 450 else:
451 451 _aborttransaction(repo)
452 452
453 453 def createcmd(ui, repo, pats, opts):
454 454 """subcommand that creates a new shelve"""
455 455 with repo.wlock():
456 456 cmdutil.checkunfinished(repo)
457 457 return _docreatecmd(ui, repo, pats, opts)
458 458
459 459 def _docreatecmd(ui, repo, pats, opts):
460 460 wctx = repo[None]
461 461 parents = wctx.parents()
462 462 if len(parents) > 1:
463 463 raise error.Abort(_('cannot shelve while merging'))
464 464 parent = parents[0]
465 465 origbranch = wctx.branch()
466 466
467 467 if parent.node() != nodemod.nullid:
468 468 desc = "changes to: %s" % parent.description().split('\n', 1)[0]
469 469 else:
470 470 desc = '(changes in empty repository)'
471 471
472 472 if not opts.get('message'):
473 473 opts['message'] = desc
474 474
475 475 lock = tr = activebookmark = None
476 476 try:
477 477 lock = repo.lock()
478 478
479 479 # use an uncommitted transaction to generate the bundle to avoid
480 480 # pull races. ensure we don't print the abort message to stderr.
481 481 tr = repo.transaction('commit', report=lambda x: None)
482 482
483 483 interactive = opts.get('interactive', False)
484 484 includeunknown = (opts.get('unknown', False) and
485 485 not opts.get('addremove', False))
486 486
487 487 name = getshelvename(repo, parent, opts)
488 488 activebookmark = _backupactivebookmark(repo)
489 489 extra = {'internal': 'shelve'}
490 490 if includeunknown:
491 491 _includeunknownfiles(repo, pats, opts, extra)
492 492
493 493 if _iswctxonnewbranch(repo) and not _isbareshelve(pats, opts):
494 494 # In a non-bare shelve we don't store the newly created branch
495 495 # in the bundled commit
496 496 repo.dirstate.setbranch(repo['.'].branch())
497 497
498 498 commitfunc = getcommitfunc(extra, interactive, editor=True)
499 499 if not interactive:
500 500 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
501 501 else:
502 502 node = cmdutil.dorecord(ui, repo, commitfunc, None,
503 503 False, cmdutil.recordfilter, *pats,
504 504 **pycompat.strkwargs(opts))
505 505 if not node:
506 506 _nothingtoshelvemessaging(ui, repo, pats, opts)
507 507 return 1
508 508
509 509 _shelvecreatedcommit(repo, node, name)
510 510
511 511 if ui.formatted():
512 512 desc = stringutil.ellipsis(desc, ui.termwidth())
513 513 ui.status(_('shelved as %s\n') % name)
514 514 hg.update(repo, parent.node())
515 515 if origbranch != repo['.'].branch() and not _isbareshelve(pats, opts):
516 516 repo.dirstate.setbranch(origbranch)
517 517
518 518 _finishshelve(repo)
519 519 finally:
520 520 _restoreactivebookmark(repo, activebookmark)
521 521 lockmod.release(tr, lock)
522 522
523 523 def _isbareshelve(pats, opts):
524 524 return (not pats
525 525 and not opts.get('interactive', False)
526 526 and not opts.get('include', False)
527 527 and not opts.get('exclude', False))
528 528
529 529 def _iswctxonnewbranch(repo):
530 530 return repo[None].branch() != repo['.'].branch()
531 531
532 532 def cleanupcmd(ui, repo):
533 533 """subcommand that deletes all shelves"""
534 534
535 535 with repo.wlock():
536 536 for (name, _type) in repo.vfs.readdir(shelvedir):
537 537 suffix = name.rsplit('.', 1)[-1]
538 538 if suffix in shelvefileextensions:
539 539 shelvedfile(repo, name).movetobackup()
540 540 cleanupoldbackups(repo)
541 541
542 542 def deletecmd(ui, repo, pats):
543 543 """subcommand that deletes a specific shelve"""
544 544 if not pats:
545 545 raise error.Abort(_('no shelved changes specified!'))
546 546 with repo.wlock():
547 547 try:
548 548 for name in pats:
549 549 for suffix in shelvefileextensions:
550 550 shfile = shelvedfile(repo, name, suffix)
551 551 # patch file is necessary, as it should
552 552 # be present for any kind of shelve,
553 553 # but the .hg file is optional as in the future we
554 554 # will add an obsolete shelve which does not create a
555 555 # bundle
556 556 if shfile.exists() or suffix == patchextension:
557 557 shfile.movetobackup()
558 558 cleanupoldbackups(repo)
559 559 except OSError as err:
560 560 if err.errno != errno.ENOENT:
561 561 raise
562 562 raise error.Abort(_("shelved change '%s' not found") % name)
563 563
564 564 def listshelves(repo):
565 565 """return all shelves in repo as list of (time, filename)"""
566 566 try:
567 567 names = repo.vfs.readdir(shelvedir)
568 568 except OSError as err:
569 569 if err.errno != errno.ENOENT:
570 570 raise
571 571 return []
572 572 info = []
573 573 for (name, _type) in names:
574 574 pfx, sfx = name.rsplit('.', 1)
575 575 if not pfx or sfx != patchextension:
576 576 continue
577 577 st = shelvedfile(repo, name).stat()
578 578 info.append((st[stat.ST_MTIME], shelvedfile(repo, pfx).filename()))
579 579 return sorted(info, reverse=True)
580 580
581 581 def listcmd(ui, repo, pats, opts):
582 582 """subcommand that displays the list of shelves"""
583 583 pats = set(pats)
584 584 width = 80
585 585 if not ui.plain():
586 586 width = ui.termwidth()
587 587 namelabel = 'shelve.newest'
588 588 ui.pager('shelve')
589 589 for mtime, name in listshelves(repo):
590 590 sname = util.split(name)[1]
591 591 if pats and sname not in pats:
592 592 continue
593 593 ui.write(sname, label=namelabel)
594 594 namelabel = 'shelve.name'
595 595 if ui.quiet:
596 596 ui.write('\n')
597 597 continue
598 598 ui.write(' ' * (16 - len(sname)))
599 599 used = 16
600 600 date = dateutil.makedate(mtime)
601 601 age = '(%s)' % templatefilters.age(date, abbrev=True)
602 602 ui.write(age, label='shelve.age')
603 603 ui.write(' ' * (12 - len(age)))
604 604 used += 12
605 605 with open(name + '.' + patchextension, 'rb') as fp:
606 606 while True:
607 607 line = fp.readline()
608 608 if not line:
609 609 break
610 610 if not line.startswith('#'):
611 611 desc = line.rstrip()
612 612 if ui.formatted():
613 613 desc = stringutil.ellipsis(desc, width - used)
614 614 ui.write(desc)
615 615 break
616 616 ui.write('\n')
617 617 if not (opts['patch'] or opts['stat']):
618 618 continue
619 619 difflines = fp.readlines()
620 620 if opts['patch']:
621 621 for chunk, label in patch.difflabel(iter, difflines):
622 622 ui.write(chunk, label=label)
623 623 if opts['stat']:
624 624 for chunk, label in patch.diffstatui(difflines, width=width):
625 625 ui.write(chunk, label=label)
626 626
627 627 def patchcmds(ui, repo, pats, opts):
628 628 """subcommand that displays shelves"""
629 629 if len(pats) == 0:
630 630 shelves = listshelves(repo)
631 631 if not shelves:
632 632 raise error.Abort(_("there are no shelves to show"))
633 633 mtime, name = shelves[0]
634 634 sname = util.split(name)[1]
635 635 pats = [sname]
636 636
637 637 for shelfname in pats:
638 638 if not shelvedfile(repo, shelfname, patchextension).exists():
639 639 raise error.Abort(_("cannot find shelf %s") % shelfname)
640 640
641 641 listcmd(ui, repo, pats, opts)
642 642
643 643 def checkparents(repo, state):
644 644 """check parent while resuming an unshelve"""
645 645 if state.parents != repo.dirstate.parents():
646 646 raise error.Abort(_('working directory parents do not match unshelve '
647 647 'state'))
648 648
649 649 def pathtofiles(repo, files):
650 650 cwd = repo.getcwd()
651 651 return [repo.pathto(f, cwd) for f in files]
652 652
653 653 def unshelveabort(ui, repo, state, opts):
654 654 """subcommand that abort an in-progress unshelve"""
655 655 with repo.lock():
656 656 try:
657 657 checkparents(repo, state)
658 658
659 merge.update(repo, state.pendingctx, False, True)
659 merge.update(repo, state.pendingctx, branchmerge=False, force=True)
660 660 if (state.activebookmark
661 661 and state.activebookmark in repo._bookmarks):
662 662 bookmarks.activate(repo, state.activebookmark)
663 663
664 664 if repo.vfs.exists('unshelverebasestate'):
665 665 repo.vfs.rename('unshelverebasestate', 'rebasestate')
666 666 rebase.clearstatus(repo)
667 667
668 668 mergefiles(ui, repo, state.wctx, state.pendingctx)
669 669 if not phases.supportinternal(repo):
670 670 repair.strip(ui, repo, state.nodestoremove, backup=False,
671 671 topic='shelve')
672 672 finally:
673 673 shelvedstate.clear(repo)
674 674 ui.warn(_("unshelve of '%s' aborted\n") % state.name)
675 675
676 676 def mergefiles(ui, repo, wctx, shelvectx):
677 677 """updates to wctx and merges the changes from shelvectx into the
678 678 dirstate."""
679 679 with ui.configoverride({('ui', 'quiet'): True}):
680 680 hg.update(repo, wctx.node())
681 681 files = []
682 682 files.extend(shelvectx.files())
683 683 files.extend(shelvectx.parents()[0].files())
684 684
685 685 # revert will overwrite unknown files, so move them out of the way
686 686 for file in repo.status(unknown=True).unknown:
687 687 if file in files:
688 688 util.rename(file, scmutil.origpath(ui, repo, file))
689 689 ui.pushbuffer(True)
690 690 cmdutil.revert(ui, repo, shelvectx, repo.dirstate.parents(),
691 691 *pathtofiles(repo, files),
692 692 **{r'no_backup': True})
693 693 ui.popbuffer()
694 694
695 695 def restorebranch(ui, repo, branchtorestore):
696 696 if branchtorestore and branchtorestore != repo.dirstate.branch():
697 697 repo.dirstate.setbranch(branchtorestore)
698 698 ui.status(_('marked working directory as branch %s\n')
699 699 % branchtorestore)
700 700
701 701 def unshelvecleanup(ui, repo, name, opts):
702 702 """remove related files after an unshelve"""
703 703 if not opts.get('keep'):
704 704 for filetype in shelvefileextensions:
705 705 shfile = shelvedfile(repo, name, filetype)
706 706 if shfile.exists():
707 707 shfile.movetobackup()
708 708 cleanupoldbackups(repo)
709 709
710 710 def unshelvecontinue(ui, repo, state, opts):
711 711 """subcommand to continue an in-progress unshelve"""
712 712 # We're finishing off a merge. First parent is our original
713 713 # parent, second is the temporary "fake" commit we're unshelving.
714 714 with repo.lock():
715 715 checkparents(repo, state)
716 716 ms = merge.mergestate.read(repo)
717 717 if list(ms.unresolved()):
718 718 raise error.Abort(
719 719 _("unresolved conflicts, can't continue"),
720 720 hint=_("see 'hg resolve', then 'hg unshelve --continue'"))
721 721
722 722 shelvectx = repo[state.parents[1]]
723 723 pendingctx = state.pendingctx
724 724
725 725 with repo.dirstate.parentchange():
726 726 repo.setparents(state.pendingctx.node(), nodemod.nullid)
727 727 repo.dirstate.write(repo.currenttransaction())
728 728
729 729 targetphase = phases.internal
730 730 if not phases.supportinternal(repo):
731 731 targetphase = phases.secret
732 732 overrides = {('phases', 'new-commit'): targetphase}
733 733 with repo.ui.configoverride(overrides, 'unshelve'):
734 734 with repo.dirstate.parentchange():
735 735 repo.setparents(state.parents[0], nodemod.nullid)
736 736 newnode = repo.commit(text=shelvectx.description(),
737 737 extra=shelvectx.extra(),
738 738 user=shelvectx.user(),
739 739 date=shelvectx.date())
740 740
741 741 if newnode is None:
742 742 # If it ended up being a no-op commit, then the normal
743 743 # merge state clean-up path doesn't happen, so do it
744 744 # here. Fix issue5494
745 745 merge.mergestate.clean(repo)
746 746 shelvectx = state.pendingctx
747 747 msg = _('note: unshelved changes already existed '
748 748 'in the working copy\n')
749 749 ui.status(msg)
750 750 else:
751 751 # only strip the shelvectx if we produced one
752 752 state.nodestoremove.append(newnode)
753 753 shelvectx = repo[newnode]
754 754
755 755 hg.updaterepo(repo, pendingctx.node(), overwrite=False)
756 756
757 757 if repo.vfs.exists('unshelverebasestate'):
758 758 repo.vfs.rename('unshelverebasestate', 'rebasestate')
759 759 rebase.clearstatus(repo)
760 760
761 761 mergefiles(ui, repo, state.wctx, shelvectx)
762 762 restorebranch(ui, repo, state.branchtorestore)
763 763
764 764 if not phases.supportinternal(repo):
765 765 repair.strip(ui, repo, state.nodestoremove, backup=False,
766 766 topic='shelve')
767 767 _restoreactivebookmark(repo, state.activebookmark)
768 768 shelvedstate.clear(repo)
769 769 unshelvecleanup(ui, repo, state.name, opts)
770 770 ui.status(_("unshelve of '%s' complete\n") % state.name)
771 771
772 772 def _commitworkingcopychanges(ui, repo, opts, tmpwctx):
773 773 """Temporarily commit working copy changes before moving unshelve commit"""
774 774 # Store pending changes in a commit and remember added files in case a shelve
775 775 # contains unknown files that are part of the pending change
776 776 s = repo.status()
777 777 addedbefore = frozenset(s.added)
778 778 if not (s.modified or s.added or s.removed):
779 779 return tmpwctx, addedbefore
780 780 ui.status(_("temporarily committing pending changes "
781 781 "(restore with 'hg unshelve --abort')\n"))
782 782 extra = {'internal': 'shelve'}
783 783 commitfunc = getcommitfunc(extra=extra, interactive=False,
784 784 editor=False)
785 785 tempopts = {}
786 786 tempopts['message'] = "pending changes temporary commit"
787 787 tempopts['date'] = opts.get('date')
788 788 with ui.configoverride({('ui', 'quiet'): True}):
789 789 node = cmdutil.commit(ui, repo, commitfunc, [], tempopts)
790 790 tmpwctx = repo[node]
791 791 return tmpwctx, addedbefore
792 792
793 793 def _unshelverestorecommit(ui, repo, basename):
794 794 """Recreate commit in the repository during the unshelve"""
795 795 repo = repo.unfiltered()
796 796 node = None
797 797 if shelvedfile(repo, basename, 'shelve').exists():
798 798 node = shelvedfile(repo, basename, 'shelve').readinfo()['node']
799 799 if node is None or node not in repo:
800 800 with ui.configoverride({('ui', 'quiet'): True}):
801 801 shelvectx = shelvedfile(repo, basename, 'hg').applybundle()
802 802 # We might not strip the unbundled changeset, so we should keep track of
803 803 # the unshelve node in case we need to reuse it (eg: unshelve --keep)
804 804 if node is None:
805 805 info = {'node': nodemod.hex(shelvectx.node())}
806 806 shelvedfile(repo, basename, 'shelve').writeinfo(info)
807 807 else:
808 808 shelvectx = repo[node]
809 809
810 810 return repo, shelvectx
811 811
812 812 def _rebaserestoredcommit(ui, repo, opts, tr, oldtiprev, basename, pctx,
813 813 tmpwctx, shelvectx, branchtorestore,
814 814 activebookmark):
815 815 """Rebase restored commit from its original location to a destination"""
816 816 # If the shelve is not immediately on top of the commit
817 817 # we'll be merging with, rebase it to be on top.
818 818 if tmpwctx.node() == shelvectx.parents()[0].node():
819 819 return shelvectx
820 820
821 821 overrides = {
822 822 ('ui', 'forcemerge'): opts.get('tool', ''),
823 823 ('phases', 'new-commit'): phases.secret,
824 824 }
825 825 with repo.ui.configoverride(overrides, 'unshelve'):
826 826 ui.status(_('rebasing shelved changes\n'))
827 827 stats = merge.graft(repo, shelvectx, shelvectx.p1(),
828 828 labels=['shelve', 'working-copy'],
829 829 keepconflictparent=True)
830 830 if stats.unresolvedcount:
831 831 tr.close()
832 832
833 833 nodestoremove = [repo.changelog.node(rev)
834 834 for rev in pycompat.xrange(oldtiprev, len(repo))]
835 835 shelvedstate.save(repo, basename, pctx, tmpwctx, nodestoremove,
836 836 branchtorestore, opts.get('keep'), activebookmark)
837 837 raise error.InterventionRequired(
838 838 _("unresolved conflicts (see 'hg resolve', then "
839 839 "'hg unshelve --continue')"))
840 840
841 841 with repo.dirstate.parentchange():
842 842 repo.setparents(tmpwctx.node(), nodemod.nullid)
843 843 newnode = repo.commit(text=shelvectx.description(),
844 844 extra=shelvectx.extra(),
845 845 user=shelvectx.user(),
846 846 date=shelvectx.date())
847 847
848 848 if newnode is None:
849 849 # If it ended up being a no-op commit, then the normal
850 850 # merge state clean-up path doesn't happen, so do it
851 851 # here. Fix issue5494
852 852 merge.mergestate.clean(repo)
853 853 shelvectx = tmpwctx
854 854 msg = _('note: unshelved changes already existed '
855 855 'in the working copy\n')
856 856 ui.status(msg)
857 857 else:
858 858 shelvectx = repo[newnode]
859 859 hg.updaterepo(repo, tmpwctx.node(), False)
860 860
861 861 return shelvectx
862 862
863 863 def _forgetunknownfiles(repo, shelvectx, addedbefore):
864 864 # Forget any files that were unknown when the shelve was created, were
865 865 # not already added before unshelve started, but are now added.
866 866 shelveunknown = shelvectx.extra().get('shelve_unknown')
867 867 if not shelveunknown:
868 868 return
869 869 shelveunknown = frozenset(shelveunknown.split('\0'))
870 870 addedafter = frozenset(repo.status().added)
871 871 toforget = (addedafter & shelveunknown) - addedbefore
872 872 repo[None].forget(toforget)
873 873
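A tiny worked example (file names hypothetical) of the set arithmetic above: only files that the shelve itself recorded as unknown, and that were not already added before the unshelve began, end up being forgotten::

    shelveunknown = frozenset([b'notes.txt', b'scratch.py'])
    addedbefore = frozenset([b'scratch.py'])       # already added before unshelve
    addedafter = frozenset([b'notes.txt', b'scratch.py', b'feature.py'])
    toforget = (addedafter & shelveunknown) - addedbefore
    assert toforget == {b'notes.txt'}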
874 874 def _finishunshelve(repo, oldtiprev, tr, activebookmark):
875 875 _restoreactivebookmark(repo, activebookmark)
876 876 # Aborting the transaction will strip all the commits for us,
877 877 # but it doesn't update the inmemory structures, so addchangegroup
878 878 # hooks still fire and try to operate on the missing commits.
879 879 # Clean up manually to prevent this.
880 880 repo.unfiltered().changelog.strip(oldtiprev, tr)
881 881 _aborttransaction(repo)
882 882
883 883 def _checkunshelveuntrackedproblems(ui, repo, shelvectx):
884 884 """Check potential problems which may result from working
885 885 copy having untracked changes."""
886 886 wcdeleted = set(repo.status().deleted)
887 887 shelvetouched = set(shelvectx.files())
888 888 intersection = wcdeleted.intersection(shelvetouched)
889 889 if intersection:
890 890 m = _("shelved change touches missing files")
891 891 hint = _("run hg status to see which files are missing")
892 892 raise error.Abort(m, hint=hint)
893 893
894 894 @command('unshelve',
895 895 [('a', 'abort', None,
896 896 _('abort an incomplete unshelve operation')),
897 897 ('c', 'continue', None,
898 898 _('continue an incomplete unshelve operation')),
899 899 ('k', 'keep', None,
900 900 _('keep shelve after unshelving')),
901 901 ('n', 'name', '',
902 902 _('restore shelved change with given name'), _('NAME')),
903 903 ('t', 'tool', '', _('specify merge tool')),
904 904 ('', 'date', '',
905 905 _('set date for temporary commits (DEPRECATED)'), _('DATE'))],
906 906 _('hg unshelve [[-n] SHELVED]'),
907 907 helpcategory=command.CATEGORY_WORKING_DIRECTORY)
908 908 def unshelve(ui, repo, *shelved, **opts):
909 909 """restore a shelved change to the working directory
910 910
911 911 This command accepts an optional name of a shelved change to
912 912 restore. If none is given, the most recent shelved change is used.
913 913
914 914 If a shelved change is applied successfully, the bundle that
915 915 contains the shelved changes is moved to a backup location
916 916 (.hg/shelve-backup).
917 917
918 918 Since you can restore a shelved change on top of an arbitrary
919 919 commit, it is possible that unshelving will result in a conflict
920 920 between your changes and the commits you are unshelving onto. If
921 921 this occurs, you must resolve the conflict, then use
922 922 ``--continue`` to complete the unshelve operation. (The bundle
923 923 will not be moved until you successfully complete the unshelve.)
924 924
925 925 (Alternatively, you can use ``--abort`` to abandon an unshelve
926 926 that causes a conflict. This reverts the unshelved changes, and
927 927 leaves the bundle in place.)
928 928
929 929 If a bare shelved change (when no files are specified, without the
930 930 interactive, include and exclude options) was made on a newly created
931 931 branch, unshelving restores that branch information to the working directory.
932 932
933 933 After a successful unshelve, the shelved changes are stored in a
934 934 backup directory. Only the N most recent backups are kept. N
935 935 defaults to 10 but can be overridden using the ``shelve.maxbackups``
936 936 configuration option.
937 937
938 938 .. container:: verbose
939 939
940 940 The timestamp in seconds is used to decide the order of backups. For
941 941 safety, more than ``maxbackups`` backups may be kept when identical
942 942 timestamps make their exact order ambiguous.
943 943 """
944 944 with repo.wlock():
945 945 return _dounshelve(ui, repo, *shelved, **opts)
946 946
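Both unshelve() here and transplant() later in this diff follow the same wrap pattern, sketched below with hypothetical names: the public command takes the working-directory lock once and delegates to a private helper, so every code path in the helper runs under the lock and releases it on exit::

    def somecommand(ui, repo, *args, **opts):
        # take the wlock for the whole operation, then delegate
        with repo.wlock():
            return _dosomecommand(ui, repo, *args, **opts)

    def _dosomecommand(ui, repo, *args, **opts):
        ...  # actual command body, always executed under the wlock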
947 947 def _dounshelve(ui, repo, *shelved, **opts):
948 948 opts = pycompat.byteskwargs(opts)
949 949 abortf = opts.get('abort')
950 950 continuef = opts.get('continue')
951 951 if not abortf and not continuef:
952 952 cmdutil.checkunfinished(repo)
953 953 shelved = list(shelved)
954 954 if opts.get("name"):
955 955 shelved.append(opts["name"])
956 956
957 957 if abortf or continuef:
958 958 if abortf and continuef:
959 959 raise error.Abort(_('cannot use both abort and continue'))
960 960 if shelved:
961 961 raise error.Abort(_('cannot combine abort/continue with '
962 962 'naming a shelved change'))
963 963 if abortf and opts.get('tool', False):
964 964 ui.warn(_('tool option will be ignored\n'))
965 965
966 966 try:
967 967 state = shelvedstate.load(repo)
968 968 if opts.get('keep') is None:
969 969 opts['keep'] = state.keep
970 970 except IOError as err:
971 971 if err.errno != errno.ENOENT:
972 972 raise
973 973 cmdutil.wrongtooltocontinue(repo, _('unshelve'))
974 974 except error.CorruptedState as err:
975 975 ui.debug(pycompat.bytestr(err) + '\n')
976 976 if continuef:
977 977 msg = _('corrupted shelved state file')
978 978 hint = _('please run hg unshelve --abort to abort unshelve '
979 979 'operation')
980 980 raise error.Abort(msg, hint=hint)
981 981 elif abortf:
982 982 msg = _('could not read shelved state file, your working copy '
983 983 'may be in an unexpected state\nplease update to some '
984 984 'commit\n')
985 985 ui.warn(msg)
986 986 shelvedstate.clear(repo)
987 987 return
988 988
989 989 if abortf:
990 990 return unshelveabort(ui, repo, state, opts)
991 991 elif continuef:
992 992 return unshelvecontinue(ui, repo, state, opts)
993 993 elif len(shelved) > 1:
994 994 raise error.Abort(_('can only unshelve one change at a time'))
995 995 elif not shelved:
996 996 shelved = listshelves(repo)
997 997 if not shelved:
998 998 raise error.Abort(_('no shelved changes to apply!'))
999 999 basename = util.split(shelved[0][1])[1]
1000 1000 ui.status(_("unshelving change '%s'\n") % basename)
1001 1001 else:
1002 1002 basename = shelved[0]
1003 1003
1004 1004 if not shelvedfile(repo, basename, patchextension).exists():
1005 1005 raise error.Abort(_("shelved change '%s' not found") % basename)
1006 1006
1007 1007 repo = repo.unfiltered()
1008 1008 lock = tr = None
1009 1009 try:
1010 1010 lock = repo.lock()
1011 1011 tr = repo.transaction('unshelve', report=lambda x: None)
1012 1012 oldtiprev = len(repo)
1013 1013
1014 1014 pctx = repo['.']
1015 1015 tmpwctx = pctx
1016 1016 # The goal is to have a commit structure like so:
1017 1017 # ...-> pctx -> tmpwctx -> shelvectx
1018 1018 # where tmpwctx is an optional commit with the user's pending changes
1019 1019 # and shelvectx is the unshelved changes. Then we merge it all down
1020 1020 # to the original pctx.
1021 1021
1022 1022 activebookmark = _backupactivebookmark(repo)
1023 1023 tmpwctx, addedbefore = _commitworkingcopychanges(ui, repo, opts,
1024 1024 tmpwctx)
1025 1025 repo, shelvectx = _unshelverestorecommit(ui, repo, basename)
1026 1026 _checkunshelveuntrackedproblems(ui, repo, shelvectx)
1027 1027 branchtorestore = ''
1028 1028 if shelvectx.branch() != shelvectx.p1().branch():
1029 1029 branchtorestore = shelvectx.branch()
1030 1030
1031 1031 shelvectx = _rebaserestoredcommit(ui, repo, opts, tr, oldtiprev,
1032 1032 basename, pctx, tmpwctx,
1033 1033 shelvectx, branchtorestore,
1034 1034 activebookmark)
1035 1035 overrides = {('ui', 'forcemerge'): opts.get('tool', '')}
1036 1036 with ui.configoverride(overrides, 'unshelve'):
1037 1037 mergefiles(ui, repo, pctx, shelvectx)
1038 1038 restorebranch(ui, repo, branchtorestore)
1039 1039 _forgetunknownfiles(repo, shelvectx, addedbefore)
1040 1040
1041 1041 shelvedstate.clear(repo)
1042 1042 _finishunshelve(repo, oldtiprev, tr, activebookmark)
1043 1043 unshelvecleanup(ui, repo, basename, opts)
1044 1044 finally:
1045 1045 if tr:
1046 1046 tr.release()
1047 1047 lockmod.release(lock)
1048 1048
1049 1049 @command('shelve',
1050 1050 [('A', 'addremove', None,
1051 1051 _('mark new/missing files as added/removed before shelving')),
1052 1052 ('u', 'unknown', None,
1053 1053 _('store unknown files in the shelve')),
1054 1054 ('', 'cleanup', None,
1055 1055 _('delete all shelved changes')),
1056 1056 ('', 'date', '',
1057 1057 _('shelve with the specified commit date'), _('DATE')),
1058 1058 ('d', 'delete', None,
1059 1059 _('delete the named shelved change(s)')),
1060 1060 ('e', 'edit', False,
1061 1061 _('invoke editor on commit messages')),
1062 1062 ('l', 'list', None,
1063 1063 _('list current shelves')),
1064 1064 ('m', 'message', '',
1065 1065 _('use text as shelve message'), _('TEXT')),
1066 1066 ('n', 'name', '',
1067 1067 _('use the given name for the shelved commit'), _('NAME')),
1068 1068 ('p', 'patch', None,
1069 1069 _('output patches for changes (provide the names of the shelved '
1070 1070 'changes as positional arguments)')),
1071 1071 ('i', 'interactive', None,
1072 1072 _('interactive mode, only works while creating a shelve')),
1073 1073 ('', 'stat', None,
1074 1074 _('output diffstat-style summary of changes (provide the names of '
1075 1075 'the shelved changes as positional arguments)')
1076 1076 )] + cmdutil.walkopts,
1077 1077 _('hg shelve [OPTION]... [FILE]...'),
1078 1078 helpcategory=command.CATEGORY_WORKING_DIRECTORY)
1079 1079 def shelvecmd(ui, repo, *pats, **opts):
1080 1080 '''save and set aside changes from the working directory
1081 1081
1082 1082 Shelving takes files that "hg status" reports as not clean, saves
1083 1083 the modifications to a bundle (a shelved change), and reverts the
1084 1084 files so that their state in the working directory becomes clean.
1085 1085
1086 1086 To restore these changes to the working directory, use "hg
1087 1087 unshelve"; this will work even if you switch to a different
1088 1088 commit.
1089 1089
1090 1090 When no files are specified, "hg shelve" saves all not-clean
1091 1091 files. If specific files or directories are named, only changes to
1092 1092 those files are shelved.
1093 1093
1094 1094 In a bare shelve (when no files are specified, without the interactive,
1095 1095 include and exclude options), shelving remembers whether the working
1096 1096 directory was on a newly created branch, in other words on a different
1097 1097 branch than its first parent. In that situation, unshelving restores the
1098 1098 branch information to the working directory.
1099 1099
1100 1100 Each shelved change has a name that makes it easier to find later.
1101 1101 The name of a shelved change defaults to being based on the active
1102 1102 bookmark, or if there is no active bookmark, the current named
1103 1103 branch. To specify a different name, use ``--name``.
1104 1104
1105 1105 To see a list of existing shelved changes, use the ``--list``
1106 1106 option. For each shelved change, this will print its name, age,
1107 1107 and description; use ``--patch`` or ``--stat`` for more details.
1108 1108
1109 1109 To delete specific shelved changes, use ``--delete``. To delete
1110 1110 all shelved changes, use ``--cleanup``.
1111 1111 '''
1112 1112 opts = pycompat.byteskwargs(opts)
1113 1113 allowables = [
1114 1114 ('addremove', {'create'}), # 'create' is pseudo action
1115 1115 ('unknown', {'create'}),
1116 1116 ('cleanup', {'cleanup'}),
1117 1117 # ('date', {'create'}), # ignored for passing '--date "0 0"' in tests
1118 1118 ('delete', {'delete'}),
1119 1119 ('edit', {'create'}),
1120 1120 ('list', {'list'}),
1121 1121 ('message', {'create'}),
1122 1122 ('name', {'create'}),
1123 1123 ('patch', {'patch', 'list'}),
1124 1124 ('stat', {'stat', 'list'}),
1125 1125 ]
1126 1126 def checkopt(opt):
1127 1127 if opts.get(opt):
1128 1128 for i, allowable in allowables:
1129 1129 if opts[i] and opt not in allowable:
1130 1130 raise error.Abort(_("options '--%s' and '--%s' may not be "
1131 1131 "used together") % (opt, i))
1132 1132 return True
1133 1133 if checkopt('cleanup'):
1134 1134 if pats:
1135 1135 raise error.Abort(_("cannot specify names when using '--cleanup'"))
1136 1136 return cleanupcmd(ui, repo)
1137 1137 elif checkopt('delete'):
1138 1138 return deletecmd(ui, repo, pats)
1139 1139 elif checkopt('list'):
1140 1140 return listcmd(ui, repo, pats, opts)
1141 1141 elif checkopt('patch') or checkopt('stat'):
1142 1142 return patchcmds(ui, repo, pats, opts)
1143 1143 else:
1144 1144 return createcmd(ui, repo, pats, opts)
1145 1145
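The allowables/checkopt() pairing above can be illustrated with a self-contained sketch (option names trimmed, the error type swapped for ValueError): each flag lists the shelve actions it may accompany, and an action is rejected if any set flag does not list it::

    allowables = [('patch', {'patch', 'list'}),
                  ('stat', {'stat', 'list'}),
                  ('name', {'create'})]

    def checkopt(opts, opt):
        if opts.get(opt):
            for name, allowed in allowables:
                if opts.get(name) and opt not in allowed:
                    raise ValueError("options '--%s' and '--%s' may not be "
                                     "used together" % (opt, name))
            return True

    checkopt({'patch': True, 'list': True}, 'list')   # fine: --patch allows 'list'
    # checkopt({'name': 'x', 'list': True}, 'list') would raise, since --name
    # only allows the 'create' action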
1146 1146 def extsetup(ui):
1147 1147 cmdutil.unfinishedstates.append(
1148 1148 [shelvedstate._filename, False, False,
1149 1149 _('unshelve already in progress'),
1150 1150 _("use 'hg unshelve --continue' or 'hg unshelve --abort'")])
1151 1151 cmdutil.afterresolvedstates.append(
1152 1152 [shelvedstate._filename, _('hg unshelve --continue')])
@@ -1,768 +1,769 b''
1 1 # Patch transplanting extension for Mercurial
2 2 #
3 3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to transplant changesets from another branch
9 9
10 10 This extension allows you to transplant changes to another parent revision,
11 11 possibly in another repository. The transplant is done using 'diff' patches.
12 12
13 13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 14 map from a changeset hash to its hash in the source repository.
15 15 '''
16 16 from __future__ import absolute_import
17 17
18 18 import os
19 19
20 20 from mercurial.i18n import _
21 21 from mercurial import (
22 22 bundlerepo,
23 23 cmdutil,
24 24 error,
25 25 exchange,
26 26 hg,
27 27 logcmdutil,
28 28 match,
29 29 merge,
30 30 node as nodemod,
31 31 patch,
32 32 pycompat,
33 33 registrar,
34 34 revlog,
35 35 revset,
36 36 scmutil,
37 37 smartset,
38 38 util,
39 39 vfs as vfsmod,
40 40 )
41 41 from mercurial.utils import (
42 42 procutil,
43 43 stringutil,
44 44 )
45 45
46 46 class TransplantError(error.Abort):
47 47 pass
48 48
49 49 cmdtable = {}
50 50 command = registrar.command(cmdtable)
51 51 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
52 52 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
53 53 # be specifying the version(s) of Mercurial they are tested with, or
54 54 # leave the attribute unspecified.
55 55 testedwith = 'ships-with-hg-core'
56 56
57 57 configtable = {}
58 58 configitem = registrar.configitem(configtable)
59 59
60 60 configitem('transplant', 'filter',
61 61 default=None,
62 62 )
63 63 configitem('transplant', 'log',
64 64 default=None,
65 65 )
66 66
67 67 class transplantentry(object):
68 68 def __init__(self, lnode, rnode):
69 69 self.lnode = lnode
70 70 self.rnode = rnode
71 71
72 72 class transplants(object):
73 73 def __init__(self, path=None, transplantfile=None, opener=None):
74 74 self.path = path
75 75 self.transplantfile = transplantfile
76 76 self.opener = opener
77 77
78 78 if not opener:
79 79 self.opener = vfsmod.vfs(self.path)
80 80 self.transplants = {}
81 81 self.dirty = False
82 82 self.read()
83 83
84 84 def read(self):
85 85 abspath = os.path.join(self.path, self.transplantfile)
86 86 if self.transplantfile and os.path.exists(abspath):
87 87 for line in self.opener.read(self.transplantfile).splitlines():
88 88 lnode, rnode = map(revlog.bin, line.split(':'))
89 89 list = self.transplants.setdefault(rnode, [])
90 90 list.append(transplantentry(lnode, rnode))
91 91
92 92 def write(self):
93 93 if self.dirty and self.transplantfile:
94 94 if not os.path.isdir(self.path):
95 95 os.mkdir(self.path)
96 96 fp = self.opener(self.transplantfile, 'w')
97 97 for list in self.transplants.itervalues():
98 98 for t in list:
99 99 l, r = map(nodemod.hex, (t.lnode, t.rnode))
100 100 fp.write(l + ':' + r + '\n')
101 101 fp.close()
102 102 self.dirty = False
103 103
104 104 def get(self, rnode):
105 105 return self.transplants.get(rnode) or []
106 106
107 107 def set(self, lnode, rnode):
108 108 list = self.transplants.setdefault(rnode, [])
109 109 list.append(transplantentry(lnode, rnode))
110 110 self.dirty = True
111 111
112 112 def remove(self, transplant):
113 113 list = self.transplants.get(transplant.rnode)
114 114 if list:
115 115 del list[list.index(transplant)]
116 116 self.dirty = True
117 117
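The read()/write() pair above round-trips a very small text format: one line per transplant, with the local node and the source node as colon-separated hex hashes. A self-contained approximation (hash values hypothetical, binascii standing in for revlog.bin/nodemod.hex)::

    from binascii import hexlify, unhexlify

    line = b'1' * 40 + b':' + b'2' * 40     # "<local node>:<source node>"
    lnode, rnode = map(unhexlify, line.split(b':'))
    assert hexlify(lnode) + b':' + hexlify(rnode) == line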
118 118 class transplanter(object):
119 119 def __init__(self, ui, repo, opts):
120 120 self.ui = ui
121 121 self.path = repo.vfs.join('transplant')
122 122 self.opener = vfsmod.vfs(self.path)
123 123 self.transplants = transplants(self.path, 'transplants',
124 124 opener=self.opener)
125 125 def getcommiteditor():
126 126 editform = cmdutil.mergeeditform(repo[None], 'transplant')
127 127 return cmdutil.getcommiteditor(editform=editform,
128 128 **pycompat.strkwargs(opts))
129 129 self.getcommiteditor = getcommiteditor
130 130
131 131 def applied(self, repo, node, parent):
132 132 '''returns True if a node is already an ancestor of parent,
133 133 is parent itself, or has already been transplanted'''
134 134 if hasnode(repo, parent):
135 135 parentrev = repo.changelog.rev(parent)
136 136 if hasnode(repo, node):
137 137 rev = repo.changelog.rev(node)
138 138 reachable = repo.changelog.ancestors([parentrev], rev,
139 139 inclusive=True)
140 140 if rev in reachable:
141 141 return True
142 142 for t in self.transplants.get(node):
143 143 # it might have been stripped
144 144 if not hasnode(repo, t.lnode):
145 145 self.transplants.remove(t)
146 146 return False
147 147 lnoderev = repo.changelog.rev(t.lnode)
148 148 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
149 149 inclusive=True):
150 150 return True
151 151 return False
152 152
153 153 def apply(self, repo, source, revmap, merges, opts=None):
154 154 '''apply the revisions in revmap one by one in revision order'''
155 155 if opts is None:
156 156 opts = {}
157 157 revs = sorted(revmap)
158 158 p1, p2 = repo.dirstate.parents()
159 159 pulls = []
160 160 diffopts = patch.difffeatureopts(self.ui, opts)
161 161 diffopts.git = True
162 162
163 163 lock = tr = None
164 164 try:
165 165 lock = repo.lock()
166 166 tr = repo.transaction('transplant')
167 167 for rev in revs:
168 168 node = revmap[rev]
169 169 revstr = '%d:%s' % (rev, nodemod.short(node))
170 170
171 171 if self.applied(repo, node, p1):
172 172 self.ui.warn(_('skipping already applied revision %s\n') %
173 173 revstr)
174 174 continue
175 175
176 176 parents = source.changelog.parents(node)
177 177 if not (opts.get('filter') or opts.get('log')):
178 178 # If the changeset parent is the same as the
179 179 # wdir's parent, just pull it.
180 180 if parents[0] == p1:
181 181 pulls.append(node)
182 182 p1 = node
183 183 continue
184 184 if pulls:
185 185 if source != repo:
186 186 exchange.pull(repo, source.peer(), heads=pulls)
187 merge.update(repo, pulls[-1], False, False)
187 merge.update(repo, pulls[-1], branchmerge=False,
188 force=False)
188 189 p1, p2 = repo.dirstate.parents()
189 190 pulls = []
190 191
191 192 domerge = False
192 193 if node in merges:
193 194 # pulling all the merge revs at once would mean we
194 195 # couldn't transplant after the latest even if
195 196 # transplants before them fail.
196 197 domerge = True
197 198 if not hasnode(repo, node):
198 199 exchange.pull(repo, source.peer(), heads=[node])
199 200
200 201 skipmerge = False
201 202 if parents[1] != revlog.nullid:
202 203 if not opts.get('parent'):
203 204 self.ui.note(_('skipping merge changeset %d:%s\n')
204 205 % (rev, nodemod.short(node)))
205 206 skipmerge = True
206 207 else:
207 208 parent = source.lookup(opts['parent'])
208 209 if parent not in parents:
209 210 raise error.Abort(_('%s is not a parent of %s') %
210 211 (nodemod.short(parent),
211 212 nodemod.short(node)))
212 213 else:
213 214 parent = parents[0]
214 215
215 216 if skipmerge:
216 217 patchfile = None
217 218 else:
218 219 fd, patchfile = pycompat.mkstemp(prefix='hg-transplant-')
219 220 fp = os.fdopen(fd, r'wb')
220 221 gen = patch.diff(source, parent, node, opts=diffopts)
221 222 for chunk in gen:
222 223 fp.write(chunk)
223 224 fp.close()
224 225
225 226 del revmap[rev]
226 227 if patchfile or domerge:
227 228 try:
228 229 try:
229 230 n = self.applyone(repo, node,
230 231 source.changelog.read(node),
231 232 patchfile, merge=domerge,
232 233 log=opts.get('log'),
233 234 filter=opts.get('filter'))
234 235 except TransplantError:
235 236 # Do not rollback, it is up to the user to
236 237 # fix the merge or cancel everything
237 238 tr.close()
238 239 raise
239 240 if n and domerge:
240 241 self.ui.status(_('%s merged at %s\n') % (revstr,
241 242 nodemod.short(n)))
242 243 elif n:
243 244 self.ui.status(_('%s transplanted to %s\n')
244 245 % (nodemod.short(node),
245 246 nodemod.short(n)))
246 247 finally:
247 248 if patchfile:
248 249 os.unlink(patchfile)
249 250 tr.close()
250 251 if pulls:
251 252 exchange.pull(repo, source.peer(), heads=pulls)
252 merge.update(repo, pulls[-1], False, False)
253 merge.update(repo, pulls[-1], branchmerge=False, force=False)
253 254 finally:
254 255 self.saveseries(revmap, merges)
255 256 self.transplants.write()
256 257 if tr:
257 258 tr.release()
258 259 if lock:
259 260 lock.release()
260 261
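The two call sites changed in apply() above are the point of this patch: the positional booleans passed to merge.update() are replaced by keyword arguments so the meaning of each flag is visible where it is used. Reduced to a hedged, minimal sketch (assuming a loaded repo object, a target node, and this era's in-tree API)::

    from mercurial import merge

    def checkout(repo, node):
        # before: merge.update(repo, node, False, False)
        # after:  the same call, but self-documenting at the call site
        return merge.update(repo, node, branchmerge=False, force=False)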
261 262 def filter(self, filter, node, changelog, patchfile):
262 263 '''arbitrarily rewrite changeset before applying it'''
263 264
264 265 self.ui.status(_('filtering %s\n') % patchfile)
265 266 user, date, msg = (changelog[1], changelog[2], changelog[4])
266 267 fd, headerfile = pycompat.mkstemp(prefix='hg-transplant-')
267 268 fp = os.fdopen(fd, r'wb')
268 269 fp.write("# HG changeset patch\n")
269 270 fp.write("# User %s\n" % user)
270 271 fp.write("# Date %d %d\n" % date)
271 272 fp.write(msg + '\n')
272 273 fp.close()
273 274
274 275 try:
275 276 self.ui.system('%s %s %s' % (filter,
276 277 procutil.shellquote(headerfile),
277 278 procutil.shellquote(patchfile)),
278 279 environ={'HGUSER': changelog[1],
279 280 'HGREVISION': nodemod.hex(node),
280 281 },
281 282 onerr=error.Abort, errprefix=_('filter failed'),
282 283 blockedtag='transplant_filter')
283 284 user, date, msg = self.parselog(open(headerfile, 'rb'))[1:4]
284 285 finally:
285 286 os.unlink(headerfile)
286 287
287 288 return (user, date, msg)
288 289
289 290 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
290 291 filter=None):
291 292 '''apply the patch in patchfile to the repository as a transplant'''
292 293 (manifest, user, (time, timezone), files, message) = cl[:5]
293 294 date = "%d %d" % (time, timezone)
294 295 extra = {'transplant_source': node}
295 296 if filter:
296 297 (user, date, message) = self.filter(filter, node, cl, patchfile)
297 298
298 299 if log:
299 300 # we don't translate messages inserted into commits
300 301 message += '\n(transplanted from %s)' % nodemod.hex(node)
301 302
302 303 self.ui.status(_('applying %s\n') % nodemod.short(node))
303 304 self.ui.note('%s %s\n%s\n' % (user, date, message))
304 305
305 306 if not patchfile and not merge:
306 307 raise error.Abort(_('can only omit patchfile if merging'))
307 308 if patchfile:
308 309 try:
309 310 files = set()
310 311 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
311 312 files = list(files)
312 313 except Exception as inst:
313 314 seriespath = os.path.join(self.path, 'series')
314 315 if os.path.exists(seriespath):
315 316 os.unlink(seriespath)
316 317 p1 = repo.dirstate.p1()
317 318 p2 = node
318 319 self.log(user, date, message, p1, p2, merge=merge)
319 320 self.ui.write(stringutil.forcebytestr(inst) + '\n')
320 321 raise TransplantError(_('fix up the working directory and run '
321 322 'hg transplant --continue'))
322 323 else:
323 324 files = None
324 325 if merge:
325 326 p1, p2 = repo.dirstate.parents()
326 327 repo.setparents(p1, node)
327 328 m = match.always(repo.root, '')
328 329 else:
329 330 m = match.exact(repo.root, '', files)
330 331
331 332 n = repo.commit(message, user, date, extra=extra, match=m,
332 333 editor=self.getcommiteditor())
333 334 if not n:
334 335 self.ui.warn(_('skipping emptied changeset %s\n') %
335 336 nodemod.short(node))
336 337 return None
337 338 if not merge:
338 339 self.transplants.set(n, node)
339 340
340 341 return n
341 342
342 343 def canresume(self):
343 344 return os.path.exists(os.path.join(self.path, 'journal'))
344 345
345 346 def resume(self, repo, source, opts):
346 347 '''recover last transaction and apply remaining changesets'''
347 348 if os.path.exists(os.path.join(self.path, 'journal')):
348 349 n, node = self.recover(repo, source, opts)
349 350 if n:
350 351 self.ui.status(_('%s transplanted as %s\n') %
351 352 (nodemod.short(node),
352 353 nodemod.short(n)))
353 354 else:
354 355 self.ui.status(_('%s skipped due to empty diff\n')
355 356 % (nodemod.short(node),))
356 357 seriespath = os.path.join(self.path, 'series')
357 358 if not os.path.exists(seriespath):
358 359 self.transplants.write()
359 360 return
360 361 nodes, merges = self.readseries()
361 362 revmap = {}
362 363 for n in nodes:
363 364 revmap[source.changelog.rev(n)] = n
364 365 os.unlink(seriespath)
365 366
366 367 self.apply(repo, source, revmap, merges, opts)
367 368
368 369 def recover(self, repo, source, opts):
369 370 '''commit working directory using journal metadata'''
370 371 node, user, date, message, parents = self.readlog()
371 372 merge = False
372 373
373 374 if not user or not date or not message or not parents[0]:
374 375 raise error.Abort(_('transplant log file is corrupt'))
375 376
376 377 parent = parents[0]
377 378 if len(parents) > 1:
378 379 if opts.get('parent'):
379 380 parent = source.lookup(opts['parent'])
380 381 if parent not in parents:
381 382 raise error.Abort(_('%s is not a parent of %s') %
382 383 (nodemod.short(parent),
383 384 nodemod.short(node)))
384 385 else:
385 386 merge = True
386 387
387 388 extra = {'transplant_source': node}
388 389 try:
389 390 p1, p2 = repo.dirstate.parents()
390 391 if p1 != parent:
391 392 raise error.Abort(_('working directory not at transplant '
392 393 'parent %s') % nodemod.hex(parent))
393 394 if merge:
394 395 repo.setparents(p1, parents[1])
395 396 modified, added, removed, deleted = repo.status()[:4]
396 397 if merge or modified or added or removed or deleted:
397 398 n = repo.commit(message, user, date, extra=extra,
398 399 editor=self.getcommiteditor())
399 400 if not n:
400 401 raise error.Abort(_('commit failed'))
401 402 if not merge:
402 403 self.transplants.set(n, node)
403 404 else:
404 405 n = None
405 406 self.unlog()
406 407
407 408 return n, node
408 409 finally:
409 410 # TODO: get rid of this meaningless try/finally enclosing.
410 411 # this is kept only to reduce changes in a patch.
411 412 pass
412 413
413 414 def readseries(self):
414 415 nodes = []
415 416 merges = []
416 417 cur = nodes
417 418 for line in self.opener.read('series').splitlines():
418 419 if line.startswith('# Merges'):
419 420 cur = merges
420 421 continue
421 422 cur.append(revlog.bin(line))
422 423
423 424 return (nodes, merges)
424 425
425 426 def saveseries(self, revmap, merges):
426 427 if not revmap:
427 428 return
428 429
429 430 if not os.path.isdir(self.path):
430 431 os.mkdir(self.path)
431 432 series = self.opener('series', 'w')
432 433 for rev in sorted(revmap):
433 434 series.write(nodemod.hex(revmap[rev]) + '\n')
434 435 if merges:
435 436 series.write('# Merges\n')
436 437 for m in merges:
437 438 series.write(nodemod.hex(m) + '\n')
438 439 series.close()
439 440
440 441 def parselog(self, fp):
441 442 parents = []
442 443 message = []
443 444 node = revlog.nullid
444 445 inmsg = False
445 446 user = None
446 447 date = None
447 448 for line in fp.read().splitlines():
448 449 if inmsg:
449 450 message.append(line)
450 451 elif line.startswith('# User '):
451 452 user = line[7:]
452 453 elif line.startswith('# Date '):
453 454 date = line[7:]
454 455 elif line.startswith('# Node ID '):
455 456 node = revlog.bin(line[10:])
456 457 elif line.startswith('# Parent '):
457 458 parents.append(revlog.bin(line[9:]))
458 459 elif not line.startswith('# '):
459 460 inmsg = True
460 461 message.append(line)
461 462 if None in (user, date):
462 463 raise error.Abort(_("filter corrupted changeset (no user or date)"))
463 464 return (node, user, date, '\n'.join(message), parents)
464 465
465 466 def log(self, user, date, message, p1, p2, merge=False):
466 467 '''journal changelog metadata for later recover'''
467 468
468 469 if not os.path.isdir(self.path):
469 470 os.mkdir(self.path)
470 471 fp = self.opener('journal', 'w')
471 472 fp.write('# User %s\n' % user)
472 473 fp.write('# Date %s\n' % date)
473 474 fp.write('# Node ID %s\n' % nodemod.hex(p2))
474 475 fp.write('# Parent ' + nodemod.hex(p1) + '\n')
475 476 if merge:
476 477 fp.write('# Parent ' + nodemod.hex(p2) + '\n')
477 478 fp.write(message.rstrip() + '\n')
478 479 fp.close()
479 480
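log() and parselog() above agree on a small header-plus-message journal format. A hedged example of what such a journal might contain (user, date, and hashes hypothetical), matching the prefixes parselog() scans for::

    journal = (b"# User alice <alice@example.com>\n"
               b"# Date 0 0\n"
               b"# Node ID " + b"1" * 40 + b"\n"
               b"# Parent " + b"2" * 40 + b"\n"
               b"transplanted commit message\n")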
480 481 def readlog(self):
481 482 return self.parselog(self.opener('journal'))
482 483
483 484 def unlog(self):
484 485 '''remove changelog journal'''
485 486 absdst = os.path.join(self.path, 'journal')
486 487 if os.path.exists(absdst):
487 488 os.unlink(absdst)
488 489
489 490 def transplantfilter(self, repo, source, root):
490 491 def matchfn(node):
491 492 if self.applied(repo, node, root):
492 493 return False
493 494 if source.changelog.parents(node)[1] != revlog.nullid:
494 495 return False
495 496 extra = source.changelog.read(node)[5]
496 497 cnode = extra.get('transplant_source')
497 498 if cnode and self.applied(repo, cnode, root):
498 499 return False
499 500 return True
500 501
501 502 return matchfn
502 503
503 504 def hasnode(repo, node):
504 505 try:
505 506 return repo.changelog.rev(node) is not None
506 507 except error.StorageError:
507 508 return False
508 509
509 510 def browserevs(ui, repo, nodes, opts):
510 511 '''interactively transplant changesets'''
511 512 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
512 513 transplants = []
513 514 merges = []
514 515 prompt = _('apply changeset? [ynmpcq?]:'
515 516 '$$ &yes, transplant this changeset'
516 517 '$$ &no, skip this changeset'
517 518 '$$ &merge at this changeset'
518 519 '$$ show &patch'
519 520 '$$ &commit selected changesets'
520 521 '$$ &quit and cancel transplant'
521 522 '$$ &? (show this help)')
522 523 for node in nodes:
523 524 displayer.show(repo[node])
524 525 action = None
525 526 while not action:
526 527 choice = ui.promptchoice(prompt)
527 528 action = 'ynmpcq?'[choice:choice + 1]
528 529 if action == '?':
529 530 for c, t in ui.extractchoices(prompt)[1]:
530 531 ui.write('%s: %s\n' % (c, t))
531 532 action = None
532 533 elif action == 'p':
533 534 parent = repo.changelog.parents(node)[0]
534 535 for chunk in patch.diff(repo, parent, node):
535 536 ui.write(chunk)
536 537 action = None
537 538 if action == 'y':
538 539 transplants.append(node)
539 540 elif action == 'm':
540 541 merges.append(node)
541 542 elif action == 'c':
542 543 break
543 544 elif action == 'q':
544 545 transplants = ()
545 546 merges = ()
546 547 break
547 548 displayer.close()
548 549 return (transplants, merges)
549 550
550 551 @command('transplant',
551 552 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
552 553 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
553 554 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
554 555 ('p', 'prune', [], _('skip over REV'), _('REV')),
555 556 ('m', 'merge', [], _('merge at REV'), _('REV')),
556 557 ('', 'parent', '',
557 558 _('parent to choose when transplanting merge'), _('REV')),
558 559 ('e', 'edit', False, _('invoke editor on commit messages')),
559 560 ('', 'log', None, _('append transplant info to log message')),
560 561 ('c', 'continue', None, _('continue last transplant session '
561 562 'after fixing conflicts')),
562 563 ('', 'filter', '',
563 564 _('filter changesets through command'), _('CMD'))],
564 565 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
565 566 '[-m REV] [REV]...'),
566 567 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
567 568 def transplant(ui, repo, *revs, **opts):
568 569 '''transplant changesets from another branch
569 570
570 571 Selected changesets will be applied on top of the current working
571 572 directory with the log of the original changeset. The changesets
572 573 are copied and will thus appear twice in the history with different
573 574 identities.
574 575
575 576 Consider using the graft command if everything is inside the same
576 577 repository - it will use merges and will usually give a better result.
577 578 Use the rebase extension if the changesets are unpublished and you want
578 579 to move them instead of copying them.
579 580
580 581 If --log is specified, log messages will have a comment appended
581 582 of the form::
582 583
583 584 (transplanted from CHANGESETHASH)
584 585
585 586 You can rewrite the changelog message with the --filter option.
586 587 Its argument will be invoked with the current changelog message as
587 588 $1 and the patch as $2.
588 589
589 590 --source/-s specifies another repository to use for selecting changesets,
590 591 just as if it temporarily had been pulled.
591 592 If --branch/-b is specified, these revisions will be used as
592 593 heads when deciding which changesets to transplant, just as if only
593 594 these revisions had been pulled.
594 595 If --all/-a is specified, all the revisions up to the heads specified
595 596 with --branch will be transplanted.
596 597
597 598 Example:
598 599
599 600 - transplant all changes up to REV on top of your current revision::
600 601
601 602 hg transplant --branch REV --all
602 603
603 604 You can optionally mark selected transplanted changesets as merge
604 605 changesets. You will not be prompted to transplant any ancestors
605 606 of a merged transplant, and you can merge descendants of them
606 607 normally instead of transplanting them.
607 608
608 609 Merge changesets may be transplanted directly by specifying the
609 610 proper parent changeset by calling :hg:`transplant --parent`.
610 611
611 612 If no merges or revisions are provided, :hg:`transplant` will
612 613 start an interactive changeset browser.
613 614
614 615 If a changeset application fails, you can fix the merge by hand
615 616 and then resume where you left off by calling :hg:`transplant
616 617 --continue/-c`.
617 618 '''
618 619 with repo.wlock():
619 620 return _dotransplant(ui, repo, *revs, **opts)
620 621
621 622 def _dotransplant(ui, repo, *revs, **opts):
622 623 def incwalk(repo, csets, match=util.always):
623 624 for node in csets:
624 625 if match(node):
625 626 yield node
626 627
627 628 def transplantwalk(repo, dest, heads, match=util.always):
628 629 '''Yield all nodes that are ancestors of a head but not ancestors
629 630 of dest.
630 631 If no heads are specified, the heads of repo will be used.'''
631 632 if not heads:
632 633 heads = repo.heads()
633 634 ancestors = []
634 635 ctx = repo[dest]
635 636 for head in heads:
636 637 ancestors.append(ctx.ancestor(repo[head]).node())
637 638 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
638 639 if match(node):
639 640 yield node
640 641
641 642 def checkopts(opts, revs):
642 643 if opts.get('continue'):
643 644 if opts.get('branch') or opts.get('all') or opts.get('merge'):
644 645 raise error.Abort(_('--continue is incompatible with '
645 646 '--branch, --all and --merge'))
646 647 return
647 648 if not (opts.get('source') or revs or
648 649 opts.get('merge') or opts.get('branch')):
649 650 raise error.Abort(_('no source URL, branch revision, or revision '
650 651 'list provided'))
651 652 if opts.get('all'):
652 653 if not opts.get('branch'):
653 654 raise error.Abort(_('--all requires a branch revision'))
654 655 if revs:
655 656 raise error.Abort(_('--all is incompatible with a '
656 657 'revision list'))
657 658
658 659 opts = pycompat.byteskwargs(opts)
659 660 checkopts(opts, revs)
660 661
661 662 if not opts.get('log'):
662 663 # deprecated config: transplant.log
663 664 opts['log'] = ui.config('transplant', 'log')
664 665 if not opts.get('filter'):
665 666 # deprecated config: transplant.filter
666 667 opts['filter'] = ui.config('transplant', 'filter')
667 668
668 669 tp = transplanter(ui, repo, opts)
669 670
670 671 p1, p2 = repo.dirstate.parents()
671 672 if len(repo) > 0 and p1 == revlog.nullid:
672 673 raise error.Abort(_('no revision checked out'))
673 674 if opts.get('continue'):
674 675 if not tp.canresume():
675 676 raise error.Abort(_('no transplant to continue'))
676 677 else:
677 678 cmdutil.checkunfinished(repo)
678 679 if p2 != revlog.nullid:
679 680 raise error.Abort(_('outstanding uncommitted merges'))
680 681 m, a, r, d = repo.status()[:4]
681 682 if m or a or r or d:
682 683 raise error.Abort(_('outstanding local changes'))
683 684
684 685 sourcerepo = opts.get('source')
685 686 if sourcerepo:
686 687 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
687 688 heads = pycompat.maplist(peer.lookup, opts.get('branch', ()))
688 689 target = set(heads)
689 690 for r in revs:
690 691 try:
691 692 target.add(peer.lookup(r))
692 693 except error.RepoError:
693 694 pass
694 695 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
695 696 onlyheads=sorted(target), force=True)
696 697 else:
697 698 source = repo
698 699 heads = pycompat.maplist(source.lookup, opts.get('branch', ()))
699 700 cleanupfn = None
700 701
701 702 try:
702 703 if opts.get('continue'):
703 704 tp.resume(repo, source, opts)
704 705 return
705 706
706 707 tf = tp.transplantfilter(repo, source, p1)
707 708 if opts.get('prune'):
708 709 prune = set(source[r].node()
709 710 for r in scmutil.revrange(source, opts.get('prune')))
710 711 matchfn = lambda x: tf(x) and x not in prune
711 712 else:
712 713 matchfn = tf
713 714 merges = pycompat.maplist(source.lookup, opts.get('merge', ()))
714 715 revmap = {}
715 716 if revs:
716 717 for r in scmutil.revrange(source, revs):
717 718 revmap[int(r)] = source[r].node()
718 719 elif opts.get('all') or not merges:
719 720 if source != repo:
720 721 alltransplants = incwalk(source, csets, match=matchfn)
721 722 else:
722 723 alltransplants = transplantwalk(source, p1, heads,
723 724 match=matchfn)
724 725 if opts.get('all'):
725 726 revs = alltransplants
726 727 else:
727 728 revs, newmerges = browserevs(ui, source, alltransplants, opts)
728 729 merges.extend(newmerges)
729 730 for r in revs:
730 731 revmap[source.changelog.rev(r)] = r
731 732 for r in merges:
732 733 revmap[source.changelog.rev(r)] = r
733 734
734 735 tp.apply(repo, source, revmap, merges, opts)
735 736 finally:
736 737 if cleanupfn:
737 738 cleanupfn()
738 739
739 740 revsetpredicate = registrar.revsetpredicate()
740 741
741 742 @revsetpredicate('transplanted([set])')
742 743 def revsettransplanted(repo, subset, x):
743 744 """Transplanted changesets in set, or all transplanted changesets.
744 745 """
745 746 if x:
746 747 s = revset.getset(repo, subset, x)
747 748 else:
748 749 s = subset
749 750 return smartset.baseset([r for r in s if
750 751 repo[r].extra().get('transplant_source')])
751 752
752 753 templatekeyword = registrar.templatekeyword()
753 754
754 755 @templatekeyword('transplanted', requires={'ctx'})
755 756 def kwtransplanted(context, mapping):
756 757 """String. The node identifier of the transplanted
757 758 changeset if any."""
758 759 ctx = context.resource(mapping, 'ctx')
759 760 n = ctx.extra().get('transplant_source')
760 761 return n and nodemod.hex(n) or ''
761 762
762 763 def extsetup(ui):
763 764 cmdutil.unfinishedstates.append(
764 765 ['transplant/journal', True, False, _('transplant in progress'),
765 766 _("use 'hg transplant --continue' or 'hg update' to abort")])
766 767
767 768 # tell hggettext to extract docstrings from these functions:
768 769 i18nfunctions = [revsettransplanted, kwtransplanted]
@@ -1,3313 +1,3313 b''
1 1 # cmdutil.py - help for command processing in mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import os
12 12 import re
13 13
14 14 from .i18n import _
15 15 from .node import (
16 16 hex,
17 17 nullid,
18 18 nullrev,
19 19 short,
20 20 )
21 21
22 22 from . import (
23 23 bookmarks,
24 24 changelog,
25 25 copies,
26 26 crecord as crecordmod,
27 27 dirstateguard,
28 28 encoding,
29 29 error,
30 30 formatter,
31 31 logcmdutil,
32 32 match as matchmod,
33 33 merge as mergemod,
34 34 mergeutil,
35 35 obsolete,
36 36 patch,
37 37 pathutil,
38 38 phases,
39 39 pycompat,
40 40 revlog,
41 41 rewriteutil,
42 42 scmutil,
43 43 smartset,
44 44 subrepoutil,
45 45 templatekw,
46 46 templater,
47 47 util,
48 48 vfs as vfsmod,
49 49 )
50 50
51 51 from .utils import (
52 52 dateutil,
53 53 stringutil,
54 54 )
55 55
56 56 stringio = util.stringio
57 57
58 58 # templates of common command options
59 59
60 60 dryrunopts = [
61 61 ('n', 'dry-run', None,
62 62 _('do not perform actions, just print output')),
63 63 ]
64 64
65 65 confirmopts = [
66 66 ('', 'confirm', None,
67 67 _('ask before applying actions')),
68 68 ]
69 69
70 70 remoteopts = [
71 71 ('e', 'ssh', '',
72 72 _('specify ssh command to use'), _('CMD')),
73 73 ('', 'remotecmd', '',
74 74 _('specify hg command to run on the remote side'), _('CMD')),
75 75 ('', 'insecure', None,
76 76 _('do not verify server certificate (ignoring web.cacerts config)')),
77 77 ]
78 78
79 79 walkopts = [
80 80 ('I', 'include', [],
81 81 _('include names matching the given patterns'), _('PATTERN')),
82 82 ('X', 'exclude', [],
83 83 _('exclude names matching the given patterns'), _('PATTERN')),
84 84 ]
85 85
86 86 commitopts = [
87 87 ('m', 'message', '',
88 88 _('use text as commit message'), _('TEXT')),
89 89 ('l', 'logfile', '',
90 90 _('read commit message from file'), _('FILE')),
91 91 ]
92 92
93 93 commitopts2 = [
94 94 ('d', 'date', '',
95 95 _('record the specified date as commit date'), _('DATE')),
96 96 ('u', 'user', '',
97 97 _('record the specified user as committer'), _('USER')),
98 98 ]
99 99
100 100 formatteropts = [
101 101 ('T', 'template', '',
102 102 _('display with template'), _('TEMPLATE')),
103 103 ]
104 104
105 105 templateopts = [
106 106 ('', 'style', '',
107 107 _('display using template map file (DEPRECATED)'), _('STYLE')),
108 108 ('T', 'template', '',
109 109 _('display with template'), _('TEMPLATE')),
110 110 ]
111 111
112 112 logopts = [
113 113 ('p', 'patch', None, _('show patch')),
114 114 ('g', 'git', None, _('use git extended diff format')),
115 115 ('l', 'limit', '',
116 116 _('limit number of changes displayed'), _('NUM')),
117 117 ('M', 'no-merges', None, _('do not show merges')),
118 118 ('', 'stat', None, _('output diffstat-style summary of changes')),
119 119 ('G', 'graph', None, _("show the revision DAG")),
120 120 ] + templateopts
121 121
122 122 diffopts = [
123 123 ('a', 'text', None, _('treat all files as text')),
124 124 ('g', 'git', None, _('use git extended diff format')),
125 125 ('', 'binary', None, _('generate binary diffs in git mode (default)')),
126 126 ('', 'nodates', None, _('omit dates from diff headers'))
127 127 ]
128 128
129 129 diffwsopts = [
130 130 ('w', 'ignore-all-space', None,
131 131 _('ignore white space when comparing lines')),
132 132 ('b', 'ignore-space-change', None,
133 133 _('ignore changes in the amount of white space')),
134 134 ('B', 'ignore-blank-lines', None,
135 135 _('ignore changes whose lines are all blank')),
136 136 ('Z', 'ignore-space-at-eol', None,
137 137 _('ignore changes in whitespace at EOL')),
138 138 ]
139 139
140 140 diffopts2 = [
141 141 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
142 142 ('p', 'show-function', None, _('show which function each change is in')),
143 143 ('', 'reverse', None, _('produce a diff that undoes the changes')),
144 144 ] + diffwsopts + [
145 145 ('U', 'unified', '',
146 146 _('number of lines of context to show'), _('NUM')),
147 147 ('', 'stat', None, _('output diffstat-style summary of changes')),
148 148 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
149 149 ]
150 150
151 151 mergetoolopts = [
152 152 ('t', 'tool', '', _('specify merge tool'), _('TOOL')),
153 153 ]
154 154
155 155 similarityopts = [
156 156 ('s', 'similarity', '',
157 157 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
158 158 ]
159 159
160 160 subrepoopts = [
161 161 ('S', 'subrepos', None,
162 162 _('recurse into subrepositories'))
163 163 ]
164 164
165 165 debugrevlogopts = [
166 166 ('c', 'changelog', False, _('open changelog')),
167 167 ('m', 'manifest', False, _('open manifest')),
168 168 ('', 'dir', '', _('open directory manifest')),
169 169 ]
170 170
171 171 # special string such that everything below this line will be ignored in the
172 172 # editor text
173 173 _linebelow = "^HG: ------------------------ >8 ------------------------$"
174 174
175 175 def ishunk(x):
176 176 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
177 177 return isinstance(x, hunkclasses)
178 178
179 179 def newandmodified(chunks, originalchunks):
180 180 newlyaddedandmodifiedfiles = set()
181 181 for chunk in chunks:
182 182 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
183 183 originalchunks:
184 184 newlyaddedandmodifiedfiles.add(chunk.header.filename())
185 185 return newlyaddedandmodifiedfiles
186 186
187 187 def parsealiases(cmd):
188 188 return cmd.split("|")
189 189
190 190 def setupwrapcolorwrite(ui):
191 191 # wrap ui.write so diff output can be labeled/colorized
192 192 def wrapwrite(orig, *args, **kw):
193 193 label = kw.pop(r'label', '')
194 194 for chunk, l in patch.difflabel(lambda: args):
195 195 orig(chunk, label=label + l)
196 196
197 197 oldwrite = ui.write
198 198 def wrap(*args, **kwargs):
199 199 return wrapwrite(oldwrite, *args, **kwargs)
200 200 setattr(ui, 'write', wrap)
201 201 return oldwrite
202 202
203 203 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
204 204 try:
205 205 if usecurses:
206 206 if testfile:
207 207 recordfn = crecordmod.testdecorator(
208 208 testfile, crecordmod.testchunkselector)
209 209 else:
210 210 recordfn = crecordmod.chunkselector
211 211
212 212 return crecordmod.filterpatch(ui, originalhunks, recordfn,
213 213 operation)
214 214 except crecordmod.fallbackerror as e:
215 215 ui.warn('%s\n' % e.message)
216 216 ui.warn(_('falling back to text mode\n'))
217 217
218 218 return patch.filterpatch(ui, originalhunks, operation)
219 219
220 220 def recordfilter(ui, originalhunks, operation=None):
221 221 """ Prompts the user to filter the originalhunks and return a list of
222 222 selected hunks.
223 223     *operation* is used to build ui messages indicating to the user what
224 224     kind of filtering they are doing: reverting, committing, shelving, etc.
225 225     (see patch.filterpatch).
226 226 """
227 227 usecurses = crecordmod.checkcurses(ui)
228 228 testfile = ui.config('experimental', 'crecordtest')
229 229 oldwrite = setupwrapcolorwrite(ui)
230 230 try:
231 231 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
232 232 testfile, operation)
233 233 finally:
234 234 ui.write = oldwrite
235 235 return newchunks, newopts
236 236
237 237 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
238 238 filterfn, *pats, **opts):
239 239 opts = pycompat.byteskwargs(opts)
240 240 if not ui.interactive():
241 241 if cmdsuggest:
242 242 msg = _('running non-interactively, use %s instead') % cmdsuggest
243 243 else:
244 244 msg = _('running non-interactively')
245 245 raise error.Abort(msg)
246 246
247 247 # make sure username is set before going interactive
248 248 if not opts.get('user'):
249 249 ui.username() # raise exception, username not provided
250 250
251 251 def recordfunc(ui, repo, message, match, opts):
252 252         """This is the generic record driver.
253 253
254 254 Its job is to interactively filter local changes, and
255 255 accordingly prepare working directory into a state in which the
256 256 job can be delegated to a non-interactive commit command such as
257 257 'commit' or 'qrefresh'.
258 258
259 259 After the actual job is done by non-interactive command, the
260 260 working directory is restored to its original state.
261 261
262 262 In the end we'll record interesting changes, and everything else
263 263 will be left in place, so the user can continue working.
264 264 """
265 265
266 266 checkunfinished(repo, commit=True)
267 267 wctx = repo[None]
268 268 merge = len(wctx.parents()) > 1
269 269 if merge:
270 270 raise error.Abort(_('cannot partially commit a merge '
271 271 '(use "hg commit" instead)'))
272 272
273 273 def fail(f, msg):
274 274 raise error.Abort('%s: %s' % (f, msg))
275 275
276 276 force = opts.get('force')
277 277 if not force:
278 278 vdirs = []
279 279 match.explicitdir = vdirs.append
280 280 match.bad = fail
281 281
282 282 status = repo.status(match=match)
283 283 if not force:
284 284 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
285 285 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
286 286 diffopts.nodates = True
287 287 diffopts.git = True
288 288 diffopts.showfunc = True
289 289 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
290 290 originalchunks = patch.parsepatch(originaldiff)
291 291
292 292 # 1. filter patch, since we are intending to apply subset of it
293 293 try:
294 294 chunks, newopts = filterfn(ui, originalchunks)
295 295 except error.PatchError as err:
296 296 raise error.Abort(_('error parsing patch: %s') % err)
297 297 opts.update(newopts)
298 298
299 299 # We need to keep a backup of files that have been newly added and
300 300 # modified during the recording process because there is a previous
301 301 # version without the edit in the workdir
302 302 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
303 303 contenders = set()
304 304 for h in chunks:
305 305 try:
306 306 contenders.update(set(h.files()))
307 307 except AttributeError:
308 308 pass
309 309
310 310 changed = status.modified + status.added + status.removed
311 311 newfiles = [f for f in changed if f in contenders]
312 312 if not newfiles:
313 313 ui.status(_('no changes to record\n'))
314 314 return 0
315 315
316 316 modified = set(status.modified)
317 317
318 318 # 2. backup changed files, so we can restore them in the end
319 319
320 320 if backupall:
321 321 tobackup = changed
322 322 else:
323 323 tobackup = [f for f in newfiles if f in modified or f in \
324 324 newlyaddedandmodifiedfiles]
325 325 backups = {}
326 326 if tobackup:
327 327 backupdir = repo.vfs.join('record-backups')
328 328 try:
329 329 os.mkdir(backupdir)
330 330 except OSError as err:
331 331 if err.errno != errno.EEXIST:
332 332 raise
333 333 try:
334 334 # backup continues
335 335 for f in tobackup:
336 336 fd, tmpname = pycompat.mkstemp(prefix=f.replace('/', '_') + '.',
337 337 dir=backupdir)
338 338 os.close(fd)
339 339 ui.debug('backup %r as %r\n' % (f, tmpname))
340 340 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
341 341 backups[f] = tmpname
342 342
343 343 fp = stringio()
344 344 for c in chunks:
345 345 fname = c.filename()
346 346 if fname in backups:
347 347 c.write(fp)
348 348 dopatch = fp.tell()
349 349 fp.seek(0)
350 350
351 351 # 2.5 optionally review / modify patch in text editor
352 352 if opts.get('review', False):
353 353 patchtext = (crecordmod.diffhelptext
354 354 + crecordmod.patchhelptext
355 355 + fp.read())
356 356 reviewedpatch = ui.edit(patchtext, "",
357 357 action="diff",
358 358 repopath=repo.path)
359 359 fp.truncate(0)
360 360 fp.write(reviewedpatch)
361 361 fp.seek(0)
362 362
363 363 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
364 364 # 3a. apply filtered patch to clean repo (clean)
365 365 if backups:
366 366 # Equivalent to hg.revert
367 367 m = scmutil.matchfiles(repo, backups.keys())
368 mergemod.update(repo, repo.dirstate.p1(),
369 False, True, matcher=m)
368 mergemod.update(repo, repo.dirstate.p1(), branchmerge=False,
369 force=True, matcher=m)
370 370
371 371 # 3b. (apply)
372 372 if dopatch:
373 373 try:
374 374 ui.debug('applying patch\n')
375 375 ui.debug(fp.getvalue())
376 376 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
377 377 except error.PatchError as err:
378 378 raise error.Abort(pycompat.bytestr(err))
379 379 del fp
380 380
381 381 # 4. We prepared working directory according to filtered
382 382 # patch. Now is the time to delegate the job to
383 383 # commit/qrefresh or the like!
384 384
385 385 # Make all of the pathnames absolute.
386 386 newfiles = [repo.wjoin(nf) for nf in newfiles]
387 387 return commitfunc(ui, repo, *newfiles, **pycompat.strkwargs(opts))
388 388 finally:
389 389 # 5. finally restore backed-up files
390 390 try:
391 391 dirstate = repo.dirstate
392 392 for realname, tmpname in backups.iteritems():
393 393 ui.debug('restoring %r to %r\n' % (tmpname, realname))
394 394
395 395 if dirstate[realname] == 'n':
396 396 # without normallookup, restoring timestamp
397 397 # may cause partially committed files
398 398 # to be treated as unmodified
399 399 dirstate.normallookup(realname)
400 400
401 401 # copystat=True here and above are a hack to trick any
402 402 # editors that have f open that we haven't modified them.
403 403 #
404 404 # Also note that this is racy as an editor could notice the
405 405 # file's mtime before we've finished writing it.
406 406 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
407 407 os.unlink(tmpname)
408 408 if tobackup:
409 409 os.rmdir(backupdir)
410 410 except OSError:
411 411 pass
412 412
413 413 def recordinwlock(ui, repo, message, match, opts):
414 414 with repo.wlock():
415 415 return recordfunc(ui, repo, message, match, opts)
416 416
417 417 return commit(ui, repo, recordinwlock, pats, opts)
418 418
419 419 class dirnode(object):
420 420 """
421 421 Represent a directory in user working copy with information required for
422 422 the purpose of tersing its status.
423 423
424 424 path is the path to the directory, without a trailing '/'
425 425
426 426 statuses is a set of statuses of all files in this directory (this includes
427 427 all the files in all the subdirectories too)
428 428
429 429 files is a list of files which are direct child of this directory
430 430
431 431 subdirs is a dictionary with the sub-directory name as the key and its own
432 432 dirnode object as the value
433 433 """
434 434
435 435 def __init__(self, dirpath):
436 436 self.path = dirpath
437 437 self.statuses = set([])
438 438 self.files = []
439 439 self.subdirs = {}
440 440
441 441 def _addfileindir(self, filename, status):
442 442 """Add a file in this directory as a direct child."""
443 443 self.files.append((filename, status))
444 444
445 445 def addfile(self, filename, status):
446 446 """
447 447 Add a file to this directory or to its direct parent directory.
448 448
449 449 If the file is not a direct child of this directory, we traverse to the
450 450 directory of which this file is a direct child and add the file
451 451 there.
452 452 """
453 453
454 454 # if the filename contains a path separator, it is not a direct
455 455 # child of this directory
456 456 if '/' in filename:
457 457 subdir, filep = filename.split('/', 1)
458 458
459 459 # does the dirnode object for subdir exist?
460 460 if subdir not in self.subdirs:
461 461 subdirpath = pathutil.join(self.path, subdir)
462 462 self.subdirs[subdir] = dirnode(subdirpath)
463 463
464 464 # try adding the file in subdir
465 465 self.subdirs[subdir].addfile(filep, status)
466 466
467 467 else:
468 468 self._addfileindir(filename, status)
469 469
470 470 if status not in self.statuses:
471 471 self.statuses.add(status)
472 472
473 473 def iterfilepaths(self):
474 474 """Yield (status, path) for files directly under this directory."""
475 475 for f, st in self.files:
476 476 yield st, pathutil.join(self.path, f)
477 477
478 478 def tersewalk(self, terseargs):
479 479 """
480 480 Yield (status, path) obtained by processing the status of this
481 481 dirnode.
482 482
483 483 terseargs is the string of arguments passed by the user with `--terse`
484 484 flag.
485 485
486 486 Following are the cases which can happen:
487 487
488 488 1) All the files in the directory (including all the files in its
489 489 subdirectories) share the same status and the user has asked us to terse
490 490 that status. -> yield (status, dirpath). dirpath will end in '/'.
491 491
492 492 2) Otherwise, we do following:
493 493
494 494 a) Yield (status, filepath) for all the files which are in this
495 495 directory (only the ones in this directory, not the subdirs)
496 496
497 497 b) Recurse the function on all the subdirectories of this
498 498 directory
499 499 """
500 500
501 501 if len(self.statuses) == 1:
502 502 onlyst = self.statuses.pop()
503 503
504 504 # Making sure we terse only when the status abbreviation is
505 505 # passed as terse argument
506 506 if onlyst in terseargs:
507 507 yield onlyst, self.path + '/'
508 508 return
509 509
510 510 # add the files to status list
511 511 for st, fpath in self.iterfilepaths():
512 512 yield st, fpath
513 513
514 514 # recurse on the subdirs
515 515 for dirobj in self.subdirs.values():
516 516 for st, fpath in dirobj.tersewalk(terseargs):
517 517 yield st, fpath
518 518
519 519 def tersedir(statuslist, terseargs):
520 520 """
521 521 Terse the status if all the files in a directory share the same status.
522 522
523 523 statuslist is scmutil.status() object which contains a list of files for
524 524 each status.
525 525 terseargs is string which is passed by the user as the argument to `--terse`
526 526 flag.
527 527
528 528 The function makes a tree of objects of dirnode class, and at each node it
529 529 stores the information required to know whether we can terse a certain
530 530 directory or not.
531 531 """
532 532 # the order matters here as it is used to produce the final list
533 533 allst = ('m', 'a', 'r', 'd', 'u', 'i', 'c')
534 534
535 535 # checking the argument validity
536 536 for s in pycompat.bytestr(terseargs):
537 537 if s not in allst:
538 538 raise error.Abort(_("'%s' not recognized") % s)
539 539
540 540 # creating a dirnode object for the root of the repo
541 541 rootobj = dirnode('')
542 542 pstatus = ('modified', 'added', 'deleted', 'clean', 'unknown',
543 543 'ignored', 'removed')
544 544
545 545 tersedict = {}
546 546 for attrname in pstatus:
547 547 statuschar = attrname[0:1]
548 548 for f in getattr(statuslist, attrname):
549 549 rootobj.addfile(f, statuschar)
550 550 tersedict[statuschar] = []
551 551
552 552 # we won't be tersing the root dir, so add files in it
553 553 for st, fpath in rootobj.iterfilepaths():
554 554 tersedict[st].append(fpath)
555 555
556 556 # process each sub-directory and build tersedict
557 557 for subdir in rootobj.subdirs.values():
558 558 for st, f in subdir.tersewalk(terseargs):
559 559 tersedict[st].append(f)
560 560
561 561 tersedlist = []
562 562 for st in allst:
563 563 tersedict[st].sort()
564 564 tersedlist.append(tersedict[st])
565 565
566 566 return tersedlist
567 567
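# A minimal sketch of how the tersing machinery above behaves; the file
# names, statuses and the _tersedemo helper are invented for illustration
# and are not part of cmdutil.
def _tersedemo():
    root = dirnode('')
    for f in ('docs/a.txt', 'docs/api/b.txt', 'src/x.py'):
        root.addfile(f, 'u')          # everything under docs/ is unknown
    root.addfile('src/y.py', 'm')     # src/ mixes 'u' and 'm'
    terse = []
    for sub in root.subdirs.values():
        terse.extend(sub.tersewalk('u'))
    # docs/ collapses to a single ('u', 'docs/') entry because all of its
    # files (including docs/api/) share the tersed status; src/ is listed
    # file by file because its statuses differ.
    return terse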
568 568 def _commentlines(raw):
569 569 '''Surround lines with a comment char and a new line'''
570 570 lines = raw.splitlines()
571 571 commentedlines = ['# %s' % line for line in lines]
572 572 return '\n'.join(commentedlines) + '\n'
573 573
574 574 def _conflictsmsg(repo):
575 575 mergestate = mergemod.mergestate.read(repo)
576 576 if not mergestate.active():
577 577 return
578 578
579 579 m = scmutil.match(repo[None])
580 580 unresolvedlist = [f for f in mergestate.unresolved() if m(f)]
581 581 if unresolvedlist:
582 582 mergeliststr = '\n'.join(
583 583 [' %s' % util.pathto(repo.root, encoding.getcwd(), path)
584 584 for path in sorted(unresolvedlist)])
585 585 msg = _('''Unresolved merge conflicts:
586 586
587 587 %s
588 588
589 589 To mark files as resolved: hg resolve --mark FILE''') % mergeliststr
590 590 else:
591 591 msg = _('No unresolved merge conflicts.')
592 592
593 593 return _commentlines(msg)
594 594
595 595 def _helpmessage(continuecmd, abortcmd):
596 596 msg = _('To continue: %s\n'
597 597 'To abort: %s') % (continuecmd, abortcmd)
598 598 return _commentlines(msg)
599 599
600 600 def _rebasemsg():
601 601 return _helpmessage('hg rebase --continue', 'hg rebase --abort')
602 602
603 603 def _histeditmsg():
604 604 return _helpmessage('hg histedit --continue', 'hg histedit --abort')
605 605
606 606 def _unshelvemsg():
607 607 return _helpmessage('hg unshelve --continue', 'hg unshelve --abort')
608 608
609 609 def _graftmsg():
610 610 # tweakdefaults requires `update` to have a rev hence the `.`
611 611 return _helpmessage('hg graft --continue', 'hg graft --abort')
612 612
613 613 def _mergemsg():
614 614 # tweakdefaults requires `update` to have a rev hence the `.`
615 615 return _helpmessage('hg commit', 'hg merge --abort')
616 616
617 617 def _bisectmsg():
618 618 msg = _('To mark the changeset good: hg bisect --good\n'
619 619 'To mark the changeset bad: hg bisect --bad\n'
620 620 'To abort: hg bisect --reset\n')
621 621 return _commentlines(msg)
622 622
623 623 def fileexistspredicate(filename):
624 624 return lambda repo: repo.vfs.exists(filename)
625 625
626 626 def _mergepredicate(repo):
627 627 return len(repo[None].parents()) > 1
628 628
629 629 STATES = (
630 630 # (state, predicate to detect states, helpful message function)
631 631 ('histedit', fileexistspredicate('histedit-state'), _histeditmsg),
632 632 ('bisect', fileexistspredicate('bisect.state'), _bisectmsg),
633 633 ('graft', fileexistspredicate('graftstate'), _graftmsg),
634 634 ('unshelve', fileexistspredicate('shelvedstate'), _unshelvemsg),
635 635 ('rebase', fileexistspredicate('rebasestate'), _rebasemsg),
636 636 # The merge state is part of a list that will be iterated over.
637 637 # It needs to be last because some of the other unfinished states may also
638 638 # be in a merge or update state (eg. rebase, histedit, graft, etc).
639 639 # We want those to have priority.
640 640 ('merge', _mergepredicate, _mergemsg),
641 641 )
642 642
643 643 def _getrepostate(repo):
644 644 # experimental config: commands.status.skipstates
645 645 skip = set(repo.ui.configlist('commands', 'status.skipstates'))
646 646 for state, statedetectionpredicate, msgfn in STATES:
647 647 if state in skip:
648 648 continue
649 649 if statedetectionpredicate(repo):
650 650 return (state, statedetectionpredicate, msgfn)
651 651
652 652 def morestatus(repo, fm):
653 653 statetuple = _getrepostate(repo)
654 654 label = 'status.morestatus'
655 655 if statetuple:
656 656 state, statedetectionpredicate, helpfulmsg = statetuple
657 657 statemsg = _('The repository is in an unfinished *%s* state.') % state
658 658 fm.plain('%s\n' % _commentlines(statemsg), label=label)
659 659 conmsg = _conflictsmsg(repo)
660 660 if conmsg:
661 661 fm.plain('%s\n' % conmsg, label=label)
662 662 if helpfulmsg:
663 663 helpmsg = helpfulmsg()
664 664 fm.plain('%s\n' % helpmsg, label=label)
665 665
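# A minimal sketch of how the helpers above fit together; _statehintdemo
# is a hypothetical name used only for illustration.
def _statehintdemo(repo):
    # fileexistspredicate() builds the detector, _rebasemsg() renders the
    # already-commented hint, and morestatus() is what feeds such text to
    # the formatter for status output.
    if fileexistspredicate('rebasestate')(repo):
        return _rebasemsg()
    return _commentlines('no interrupted operation found')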
666 666 def findpossible(cmd, table, strict=False):
667 667 """
668 668 Return cmd -> (aliases, command table entry)
669 669 for each matching command.
670 670 Return debug commands (or their aliases) only if no normal command matches.
671 671 """
672 672 choice = {}
673 673 debugchoice = {}
674 674
675 675 if cmd in table:
676 676 # short-circuit exact matches, "log" alias beats "log|history"
677 677 keys = [cmd]
678 678 else:
679 679 keys = table.keys()
680 680
681 681 allcmds = []
682 682 for e in keys:
683 683 aliases = parsealiases(e)
684 684 allcmds.extend(aliases)
685 685 found = None
686 686 if cmd in aliases:
687 687 found = cmd
688 688 elif not strict:
689 689 for a in aliases:
690 690 if a.startswith(cmd):
691 691 found = a
692 692 break
693 693 if found is not None:
694 694 if aliases[0].startswith("debug") or found.startswith("debug"):
695 695 debugchoice[found] = (aliases, table[e])
696 696 else:
697 697 choice[found] = (aliases, table[e])
698 698
699 699 if not choice and debugchoice:
700 700 choice = debugchoice
701 701
702 702 return choice, allcmds
703 703
704 704 def findcmd(cmd, table, strict=True):
705 705 """Return (aliases, command table entry) for command string."""
706 706 choice, allcmds = findpossible(cmd, table, strict)
707 707
708 708 if cmd in choice:
709 709 return choice[cmd]
710 710
711 711 if len(choice) > 1:
712 712 clist = sorted(choice)
713 713 raise error.AmbiguousCommand(cmd, clist)
714 714
715 715 if choice:
716 716 return list(choice.values())[0]
717 717
718 718 raise error.UnknownCommand(cmd, allcmds)
719 719
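# A minimal sketch of the abbreviation handling above; the toy table and
# _findcmddemo are invented for illustration, and the entry values are
# None placeholders because findpossible() never inspects them.
def _findcmddemo():
    table = {'commit|ci': None, 'config|showconfig': None}
    # 'com' is an unambiguous prefix of 'commit', 'ci' would be an exact
    # alias match, and 'co' would raise error.AmbiguousCommand because it
    # prefixes both 'commit' and 'config'.
    aliases, entry = findcmd('com', table, strict=False)
    return aliases[0]                 # -> 'commit'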
720 720 def changebranch(ui, repo, revs, label):
721 721 """ Change the branch name of given revs to label """
722 722
723 723 with repo.wlock(), repo.lock(), repo.transaction('branches'):
724 724 # abort in case of uncommitted merge or dirty wdir
725 725 bailifchanged(repo)
726 726 revs = scmutil.revrange(repo, revs)
727 727 if not revs:
728 728 raise error.Abort("empty revision set")
729 729 roots = repo.revs('roots(%ld)', revs)
730 730 if len(roots) > 1:
731 731 raise error.Abort(_("cannot change branch of non-linear revisions"))
732 732 rewriteutil.precheck(repo, revs, 'change branch of')
733 733
734 734 root = repo[roots.first()]
735 735 if not root.p1().branch() == label and label in repo.branchmap():
736 736 raise error.Abort(_("a branch of the same name already exists"))
737 737
738 738 if repo.revs('merge() and %ld', revs):
739 739 raise error.Abort(_("cannot change branch of a merge commit"))
740 740 if repo.revs('obsolete() and %ld', revs):
741 741 raise error.Abort(_("cannot change branch of an obsolete changeset"))
742 742
743 743 # make sure only topological heads
744 744 if repo.revs('heads(%ld) - head()', revs):
745 745 raise error.Abort(_("cannot change branch in middle of a stack"))
746 746
747 747 replacements = {}
748 748 # avoid import cycle mercurial.cmdutil -> mercurial.context ->
749 749 # mercurial.subrepo -> mercurial.cmdutil
750 750 from . import context
751 751 for rev in revs:
752 752 ctx = repo[rev]
753 753 oldbranch = ctx.branch()
754 754 # check if ctx has same branch
755 755 if oldbranch == label:
756 756 continue
757 757
758 758 def filectxfn(repo, newctx, path):
759 759 try:
760 760 return ctx[path]
761 761 except error.ManifestLookupError:
762 762 return None
763 763
764 764 ui.debug("changing branch of '%s' from '%s' to '%s'\n"
765 765 % (hex(ctx.node()), oldbranch, label))
766 766 extra = ctx.extra()
767 767 extra['branch_change'] = hex(ctx.node())
768 768 # While changing the branch of a set of linear commits, make sure that
769 769 # we base our commits on the new parent rather than the old parent, which
770 770 # was obsoleted while changing the branch
771 771 p1 = ctx.p1().node()
772 772 p2 = ctx.p2().node()
773 773 if p1 in replacements:
774 774 p1 = replacements[p1][0]
775 775 if p2 in replacements:
776 776 p2 = replacements[p2][0]
777 777
778 778 mc = context.memctx(repo, (p1, p2),
779 779 ctx.description(),
780 780 ctx.files(),
781 781 filectxfn,
782 782 user=ctx.user(),
783 783 date=ctx.date(),
784 784 extra=extra,
785 785 branch=label)
786 786
787 787 newnode = repo.commitctx(mc)
788 788 replacements[ctx.node()] = (newnode,)
789 789 ui.debug('new node id is %s\n' % hex(newnode))
790 790
791 791 # create obsmarkers and move bookmarks
792 792 scmutil.cleanupnodes(repo, replacements, 'branch-change', fixphase=True)
793 793
794 794 # move the working copy too
795 795 wctx = repo[None]
796 796 # in-progress merge is a bit too complex for now.
797 797 if len(wctx.parents()) == 1:
798 798 newid = replacements.get(wctx.p1().node())
799 799 if newid is not None:
800 800 # avoid import cycle mercurial.cmdutil -> mercurial.hg ->
801 801 # mercurial.cmdutil
802 802 from . import hg
803 803 hg.update(repo, newid[0], quietempty=True)
804 804
805 805 ui.status(_("changed branch on %d changesets\n") % len(replacements))
806 806
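# A minimal sketch of driving changebranch() programmatically; the branch
# name, the revset and the _changebranchdemo name are arbitrary examples.
def _changebranchdemo(ui, repo):
    # rewrites '.' onto the 'stable' branch, obsoletes the original via
    # scmutil.cleanupnodes() and moves the working copy to the rewrite
    changebranch(ui, repo, ['.'], 'stable')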
807 807 def findrepo(p):
808 808 while not os.path.isdir(os.path.join(p, ".hg")):
809 809 oldp, p = p, os.path.dirname(p)
810 810 if p == oldp:
811 811 return None
812 812
813 813 return p
814 814
815 815 def bailifchanged(repo, merge=True, hint=None):
816 816 """ enforce the precondition that working directory must be clean.
817 817
818 818 'merge' can be set to false if a pending uncommitted merge should be
819 819 ignored (such as when 'update --check' runs).
820 820
821 821 'hint' is the usual hint given to Abort exception.
822 822 """
823 823
824 824 if merge and repo.dirstate.p2() != nullid:
825 825 raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
826 826 modified, added, removed, deleted = repo.status()[:4]
827 827 if modified or added or removed or deleted:
828 828 raise error.Abort(_('uncommitted changes'), hint=hint)
829 829 ctx = repo[None]
830 830 for s in sorted(ctx.substate):
831 831 ctx.sub(s).bailifchanged(hint=hint)
832 832
833 833 def logmessage(ui, opts):
834 834 """ get the log message according to -m and -l option """
835 835 message = opts.get('message')
836 836 logfile = opts.get('logfile')
837 837
838 838 if message and logfile:
839 839 raise error.Abort(_('options --message and --logfile are mutually '
840 840 'exclusive'))
841 841 if not message and logfile:
842 842 try:
843 843 if isstdiofilename(logfile):
844 844 message = ui.fin.read()
845 845 else:
846 846 message = '\n'.join(util.readfile(logfile).splitlines())
847 847 except IOError as inst:
848 848 raise error.Abort(_("can't read commit message '%s': %s") %
849 849 (logfile, encoding.strtolocal(inst.strerror)))
850 850 return message
851 851
852 852 def mergeeditform(ctxorbool, baseformname):
853 853 """return appropriate editform name (referencing a committemplate)
854 854
855 855 'ctxorbool' is either a ctx to be committed, or a bool indicating whether
856 856 merging is committed.
857 857
858 858 This returns baseformname with '.merge' appended if it is a merge,
859 859 otherwise '.normal' is appended.
860 860 """
861 861 if isinstance(ctxorbool, bool):
862 862 if ctxorbool:
863 863 return baseformname + ".merge"
864 864 elif len(ctxorbool.parents()) > 1:
865 865 return baseformname + ".merge"
866 866
867 867 return baseformname + ".normal"
868 868
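# A small sketch of the naming scheme above; the boolean form is used so
# no changectx is needed, and 'commit.amend' is just an example base name.
def _editformdemo():
    assert mergeeditform(True, 'commit.amend') == 'commit.amend.merge'
    assert mergeeditform(False, 'commit.amend') == 'commit.amend.normal'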
869 869 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
870 870 editform='', **opts):
871 871 """get appropriate commit message editor according to '--edit' option
872 872
873 873 'finishdesc' is a function to be called with edited commit message
874 874 (= 'description' of the new changeset) just after editing, but
875 875 before checking empty-ness. It should return actual text to be
876 876 stored into history. This allows to change description before
877 877 storing.
878 878
879 879 'extramsg' is an extra message to be shown in the editor instead of
880 880 the 'Leave message empty to abort commit' line. The 'HG: ' prefix and EOL
881 881 are added automatically.
882 882
883 883 'editform' is a dot-separated list of names, to distinguish
884 884 the purpose of commit text editing.
885 885
886 886 'getcommiteditor' returns 'commitforceeditor' regardless of
887 887 'edit', if one of 'finishdesc' or 'extramsg' is specified, because
888 888 they are specific to usage in MQ.
889 889 """
890 890 if edit or finishdesc or extramsg:
891 891 return lambda r, c, s: commitforceeditor(r, c, s,
892 892 finishdesc=finishdesc,
893 893 extramsg=extramsg,
894 894 editform=editform)
895 895 elif editform:
896 896 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
897 897 else:
898 898 return commiteditor
899 899
900 900 def _escapecommandtemplate(tmpl):
901 901 parts = []
902 902 for typ, start, end in templater.scantemplate(tmpl, raw=True):
903 903 if typ == b'string':
904 904 parts.append(stringutil.escapestr(tmpl[start:end]))
905 905 else:
906 906 parts.append(tmpl[start:end])
907 907 return b''.join(parts)
908 908
909 909 def rendercommandtemplate(ui, tmpl, props):
910 910 r"""Expand a literal template 'tmpl' in a way suitable for command line
911 911
912 912 '\' in outermost string is not taken as an escape character because it
913 913 is a directory separator on Windows.
914 914
915 915 >>> from . import ui as uimod
916 916 >>> ui = uimod.ui()
917 917 >>> rendercommandtemplate(ui, b'c:\\{path}', {b'path': b'foo'})
918 918 'c:\\foo'
919 919 >>> rendercommandtemplate(ui, b'{"c:\\{path}"}', {'path': b'foo'})
920 920 'c:{path}'
921 921 """
922 922 if not tmpl:
923 923 return tmpl
924 924 t = formatter.maketemplater(ui, _escapecommandtemplate(tmpl))
925 925 return t.renderdefault(props)
926 926
927 927 def rendertemplate(ctx, tmpl, props=None):
928 928 """Expand a literal template 'tmpl' byte-string against one changeset
929 929
930 930 Each props item must be a stringify-able value or a callable returning
931 931 such value, i.e. no bare list nor dict should be passed.
932 932 """
933 933 repo = ctx.repo()
934 934 tres = formatter.templateresources(repo.ui, repo)
935 935 t = formatter.maketemplater(repo.ui, tmpl, defaults=templatekw.keywords,
936 936 resources=tres)
937 937 mapping = {'ctx': ctx}
938 938 if props:
939 939 mapping.update(props)
940 940 return t.renderdefault(mapping)
941 941
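# A short sketch of rendertemplate(); the template string is an arbitrary
# example built from stock template keywords and filters.
def _rendertemplatedemo(ctx):
    # renders something like '<rev>:<short node> <first line of the description>'
    return rendertemplate(ctx, '{rev}:{node|short} {desc|firstline}')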
942 942 def _buildfntemplate(pat, total=None, seqno=None, revwidth=None, pathname=None):
943 943 r"""Convert old-style filename format string to template string
944 944
945 945 >>> _buildfntemplate(b'foo-%b-%n.patch', seqno=0)
946 946 'foo-{reporoot|basename}-{seqno}.patch'
947 947 >>> _buildfntemplate(b'%R{tags % "{tag}"}%H')
948 948 '{rev}{tags % "{tag}"}{node}'
949 949
950 950 '\' in outermost strings has to be escaped because it is a directory
951 951 separator on Windows:
952 952
953 953 >>> _buildfntemplate(b'c:\\tmp\\%R\\%n.patch', seqno=0)
954 954 'c:\\\\tmp\\\\{rev}\\\\{seqno}.patch'
955 955 >>> _buildfntemplate(b'\\\\foo\\bar.patch')
956 956 '\\\\\\\\foo\\\\bar.patch'
957 957 >>> _buildfntemplate(b'\\{tags % "{tag}"}')
958 958 '\\\\{tags % "{tag}"}'
959 959
960 960 but inner strings follow the template rules (i.e. '\' is taken as an
961 961 escape character):
962 962
963 963 >>> _buildfntemplate(br'{"c:\tmp"}', seqno=0)
964 964 '{"c:\\tmp"}'
965 965 """
966 966 expander = {
967 967 b'H': b'{node}',
968 968 b'R': b'{rev}',
969 969 b'h': b'{node|short}',
970 970 b'm': br'{sub(r"[^\w]", "_", desc|firstline)}',
971 971 b'r': b'{if(revwidth, pad(rev, revwidth, "0", left=True), rev)}',
972 972 b'%': b'%',
973 973 b'b': b'{reporoot|basename}',
974 974 }
975 975 if total is not None:
976 976 expander[b'N'] = b'{total}'
977 977 if seqno is not None:
978 978 expander[b'n'] = b'{seqno}'
979 979 if total is not None and seqno is not None:
980 980 expander[b'n'] = b'{pad(seqno, total|stringify|count, "0", left=True)}'
981 981 if pathname is not None:
982 982 expander[b's'] = b'{pathname|basename}'
983 983 expander[b'd'] = b'{if(pathname|dirname, pathname|dirname, ".")}'
984 984 expander[b'p'] = b'{pathname}'
985 985
986 986 newname = []
987 987 for typ, start, end in templater.scantemplate(pat, raw=True):
988 988 if typ != b'string':
989 989 newname.append(pat[start:end])
990 990 continue
991 991 i = start
992 992 while i < end:
993 993 n = pat.find(b'%', i, end)
994 994 if n < 0:
995 995 newname.append(stringutil.escapestr(pat[i:end]))
996 996 break
997 997 newname.append(stringutil.escapestr(pat[i:n]))
998 998 if n + 2 > end:
999 999 raise error.Abort(_("incomplete format spec in output "
1000 1000 "filename"))
1001 1001 c = pat[n + 1:n + 2]
1002 1002 i = n + 2
1003 1003 try:
1004 1004 newname.append(expander[c])
1005 1005 except KeyError:
1006 1006 raise error.Abort(_("invalid format spec '%%%s' in output "
1007 1007 "filename") % c)
1008 1008 return ''.join(newname)
1009 1009
1010 1010 def makefilename(ctx, pat, **props):
1011 1011 if not pat:
1012 1012 return pat
1013 1013 tmpl = _buildfntemplate(pat, **props)
1014 1014 # BUG: alias expansion shouldn't be made against template fragments
1015 1015 # rewritten from %-format strings, but we have no easy way to partially
1016 1016 # disable the expansion.
1017 1017 return rendertemplate(ctx, tmpl, pycompat.byteskwargs(props))
1018 1018
1019 1019 def isstdiofilename(pat):
1020 1020 """True if the given pat looks like a filename denoting stdin/stdout"""
1021 1021 return not pat or pat == '-'
1022 1022
1023 1023 class _unclosablefile(object):
1024 1024 def __init__(self, fp):
1025 1025 self._fp = fp
1026 1026
1027 1027 def close(self):
1028 1028 pass
1029 1029
1030 1030 def __iter__(self):
1031 1031 return iter(self._fp)
1032 1032
1033 1033 def __getattr__(self, attr):
1034 1034 return getattr(self._fp, attr)
1035 1035
1036 1036 def __enter__(self):
1037 1037 return self
1038 1038
1039 1039 def __exit__(self, exc_type, exc_value, exc_tb):
1040 1040 pass
1041 1041
1042 1042 def makefileobj(ctx, pat, mode='wb', **props):
1043 1043 writable = mode not in ('r', 'rb')
1044 1044
1045 1045 if isstdiofilename(pat):
1046 1046 repo = ctx.repo()
1047 1047 if writable:
1048 1048 fp = repo.ui.fout
1049 1049 else:
1050 1050 fp = repo.ui.fin
1051 1051 return _unclosablefile(fp)
1052 1052 fn = makefilename(ctx, pat, **props)
1053 1053 return open(fn, mode)
1054 1054
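# A small sketch of the stdio special-casing above; the file contents and
# the _makefileobjdemo name are invented for illustration.
def _makefileobjdemo(ctx):
    with makefileobj(ctx, '-') as fp:
        fp.write('goes to ui.fout; close() is a no-op\n')
    with makefileobj(ctx, 'hg-%h.patch') as fp:
        fp.write('goes to a real file named after the short node\n')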
1055 1055 def openstorage(repo, cmd, file_, opts, returnrevlog=False):
1056 1056 """opens the changelog, manifest, a filelog or a given revlog"""
1057 1057 cl = opts['changelog']
1058 1058 mf = opts['manifest']
1059 1059 dir = opts['dir']
1060 1060 msg = None
1061 1061 if cl and mf:
1062 1062 msg = _('cannot specify --changelog and --manifest at the same time')
1063 1063 elif cl and dir:
1064 1064 msg = _('cannot specify --changelog and --dir at the same time')
1065 1065 elif cl or mf or dir:
1066 1066 if file_:
1067 1067 msg = _('cannot specify filename with --changelog or --manifest')
1068 1068 elif not repo:
1069 1069 msg = _('cannot specify --changelog or --manifest or --dir '
1070 1070 'without a repository')
1071 1071 if msg:
1072 1072 raise error.Abort(msg)
1073 1073
1074 1074 r = None
1075 1075 if repo:
1076 1076 if cl:
1077 1077 r = repo.unfiltered().changelog
1078 1078 elif dir:
1079 1079 if 'treemanifest' not in repo.requirements:
1080 1080 raise error.Abort(_("--dir can only be used on repos with "
1081 1081 "treemanifest enabled"))
1082 1082 if not dir.endswith('/'):
1083 1083 dir = dir + '/'
1084 1084 dirlog = repo.manifestlog.getstorage(dir)
1085 1085 if len(dirlog):
1086 1086 r = dirlog
1087 1087 elif mf:
1088 1088 r = repo.manifestlog.getstorage(b'')
1089 1089 elif file_:
1090 1090 filelog = repo.file(file_)
1091 1091 if len(filelog):
1092 1092 r = filelog
1093 1093
1094 1094 # Not all storage may be revlogs. If requested, try to return an actual
1095 1095 # revlog instance.
1096 1096 if returnrevlog:
1097 1097 if isinstance(r, revlog.revlog):
1098 1098 pass
1099 1099 elif util.safehasattr(r, '_revlog'):
1100 1100 r = r._revlog
1101 1101 elif r is not None:
1102 1102 raise error.Abort(_('%r does not appear to be a revlog') % r)
1103 1103
1104 1104 if not r:
1105 1105 if not returnrevlog:
1106 1106 raise error.Abort(_('cannot give path to non-revlog'))
1107 1107
1108 1108 if not file_:
1109 1109 raise error.CommandError(cmd, _('invalid arguments'))
1110 1110 if not os.path.isfile(file_):
1111 1111 raise error.Abort(_("revlog '%s' not found") % file_)
1112 1112 r = revlog.revlog(vfsmod.vfs(encoding.getcwd(), audit=False),
1113 1113 file_[:-2] + ".i")
1114 1114 return r
1115 1115
1116 1116 def openrevlog(repo, cmd, file_, opts):
1117 1117 """Obtain a revlog backing storage of an item.
1118 1118
1119 1119 This is similar to ``openstorage()`` except it always returns a revlog.
1120 1120
1121 1121 In most cases, a caller cares about the main storage object - not the
1122 1122 revlog backing it. Therefore, this function should only be used by code
1123 1123 that needs to examine low-level revlog implementation details. e.g. debug
1124 1124 commands.
1125 1125 """
1126 1126 return openstorage(repo, cmd, file_, opts, returnrevlog=True)
1127 1127
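# A minimal sketch of grabbing the changelog revlog for inspection; the
# opts dict mirrors the flags such a debug command would pass, and
# _openrevlogdemo is a hypothetical name.
def _openrevlogdemo(repo):
    opts = {'changelog': True, 'manifest': False, 'dir': ''}
    rl = openrevlog(repo, 'debugdemo', None, opts)
    return len(rl)                    # number of revisions in the changelog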
1128 1128 def copy(ui, repo, pats, opts, rename=False):
1129 1129 # called with the repo lock held
1130 1130 #
1131 1131 # hgsep => pathname that uses "/" to separate directories
1132 1132 # ossep => pathname that uses os.sep to separate directories
1133 1133 cwd = repo.getcwd()
1134 1134 targets = {}
1135 1135 after = opts.get("after")
1136 1136 dryrun = opts.get("dry_run")
1137 1137 wctx = repo[None]
1138 1138
1139 1139 def walkpat(pat):
1140 1140 srcs = []
1141 1141 if after:
1142 1142 badstates = '?'
1143 1143 else:
1144 1144 badstates = '?r'
1145 1145 m = scmutil.match(wctx, [pat], opts, globbed=True)
1146 1146 for abs in wctx.walk(m):
1147 1147 state = repo.dirstate[abs]
1148 1148 rel = m.rel(abs)
1149 1149 exact = m.exact(abs)
1150 1150 if state in badstates:
1151 1151 if exact and state == '?':
1152 1152 ui.warn(_('%s: not copying - file is not managed\n') % rel)
1153 1153 if exact and state == 'r':
1154 1154 ui.warn(_('%s: not copying - file has been marked for'
1155 1155 ' remove\n') % rel)
1156 1156 continue
1157 1157 # abs: hgsep
1158 1158 # rel: ossep
1159 1159 srcs.append((abs, rel, exact))
1160 1160 return srcs
1161 1161
1162 1162 # abssrc: hgsep
1163 1163 # relsrc: ossep
1164 1164 # otarget: ossep
1165 1165 def copyfile(abssrc, relsrc, otarget, exact):
1166 1166 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
1167 1167 if '/' in abstarget:
1168 1168 # We cannot normalize abstarget itself, this would prevent
1169 1169 # case only renames, like a => A.
1170 1170 abspath, absname = abstarget.rsplit('/', 1)
1171 1171 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
1172 1172 reltarget = repo.pathto(abstarget, cwd)
1173 1173 target = repo.wjoin(abstarget)
1174 1174 src = repo.wjoin(abssrc)
1175 1175 state = repo.dirstate[abstarget]
1176 1176
1177 1177 scmutil.checkportable(ui, abstarget)
1178 1178
1179 1179 # check for collisions
1180 1180 prevsrc = targets.get(abstarget)
1181 1181 if prevsrc is not None:
1182 1182 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
1183 1183 (reltarget, repo.pathto(abssrc, cwd),
1184 1184 repo.pathto(prevsrc, cwd)))
1185 1185 return True # report a failure
1186 1186
1187 1187 # check for overwrites
1188 1188 exists = os.path.lexists(target)
1189 1189 samefile = False
1190 1190 if exists and abssrc != abstarget:
1191 1191 if (repo.dirstate.normalize(abssrc) ==
1192 1192 repo.dirstate.normalize(abstarget)):
1193 1193 if not rename:
1194 1194 ui.warn(_("%s: can't copy - same file\n") % reltarget)
1195 1195 return True # report a failure
1196 1196 exists = False
1197 1197 samefile = True
1198 1198
1199 1199 if not after and exists or after and state in 'mn':
1200 1200 if not opts['force']:
1201 1201 if state in 'mn':
1202 1202 msg = _('%s: not overwriting - file already committed\n')
1203 1203 if after:
1204 1204 flags = '--after --force'
1205 1205 else:
1206 1206 flags = '--force'
1207 1207 if rename:
1208 1208 hint = _("('hg rename %s' to replace the file by "
1209 1209 'recording a rename)\n') % flags
1210 1210 else:
1211 1211 hint = _("('hg copy %s' to replace the file by "
1212 1212 'recording a copy)\n') % flags
1213 1213 else:
1214 1214 msg = _('%s: not overwriting - file exists\n')
1215 1215 if rename:
1216 1216 hint = _("('hg rename --after' to record the rename)\n")
1217 1217 else:
1218 1218 hint = _("('hg copy --after' to record the copy)\n")
1219 1219 ui.warn(msg % reltarget)
1220 1220 ui.warn(hint)
1221 1221 return True # report a failure
1222 1222
1223 1223 if after:
1224 1224 if not exists:
1225 1225 if rename:
1226 1226 ui.warn(_('%s: not recording move - %s does not exist\n') %
1227 1227 (relsrc, reltarget))
1228 1228 else:
1229 1229 ui.warn(_('%s: not recording copy - %s does not exist\n') %
1230 1230 (relsrc, reltarget))
1231 1231 return True # report a failure
1232 1232 elif not dryrun:
1233 1233 try:
1234 1234 if exists:
1235 1235 os.unlink(target)
1236 1236 targetdir = os.path.dirname(target) or '.'
1237 1237 if not os.path.isdir(targetdir):
1238 1238 os.makedirs(targetdir)
1239 1239 if samefile:
1240 1240 tmp = target + "~hgrename"
1241 1241 os.rename(src, tmp)
1242 1242 os.rename(tmp, target)
1243 1243 else:
1244 1244 # Preserve stat info on renames, not on copies; this matches
1245 1245 # Linux CLI behavior.
1246 1246 util.copyfile(src, target, copystat=rename)
1247 1247 srcexists = True
1248 1248 except IOError as inst:
1249 1249 if inst.errno == errno.ENOENT:
1250 1250 ui.warn(_('%s: deleted in working directory\n') % relsrc)
1251 1251 srcexists = False
1252 1252 else:
1253 1253 ui.warn(_('%s: cannot copy - %s\n') %
1254 1254 (relsrc, encoding.strtolocal(inst.strerror)))
1255 1255 if rename:
1256 1256 hint = _("('hg rename --after' to record the rename)\n")
1257 1257 else:
1258 1258 hint = _("('hg copy --after' to record the copy)\n")
1259 1259 return True # report a failure
1260 1260
1261 1261 if ui.verbose or not exact:
1262 1262 if rename:
1263 1263 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
1264 1264 else:
1265 1265 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
1266 1266
1267 1267 targets[abstarget] = abssrc
1268 1268
1269 1269 # fix up dirstate
1270 1270 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
1271 1271 dryrun=dryrun, cwd=cwd)
1272 1272 if rename and not dryrun:
1273 1273 if not after and srcexists and not samefile:
1274 1274 rmdir = repo.ui.configbool('experimental', 'removeemptydirs')
1275 1275 repo.wvfs.unlinkpath(abssrc, rmdir=rmdir)
1276 1276 wctx.forget([abssrc])
1277 1277
1278 1278 # pat: ossep
1279 1279 # dest ossep
1280 1280 # srcs: list of (hgsep, hgsep, ossep, bool)
1281 1281 # return: function that takes hgsep and returns ossep
1282 1282 def targetpathfn(pat, dest, srcs):
1283 1283 if os.path.isdir(pat):
1284 1284 abspfx = pathutil.canonpath(repo.root, cwd, pat)
1285 1285 abspfx = util.localpath(abspfx)
1286 1286 if destdirexists:
1287 1287 striplen = len(os.path.split(abspfx)[0])
1288 1288 else:
1289 1289 striplen = len(abspfx)
1290 1290 if striplen:
1291 1291 striplen += len(pycompat.ossep)
1292 1292 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
1293 1293 elif destdirexists:
1294 1294 res = lambda p: os.path.join(dest,
1295 1295 os.path.basename(util.localpath(p)))
1296 1296 else:
1297 1297 res = lambda p: dest
1298 1298 return res
1299 1299
1300 1300 # pat: ossep
1301 1301 # dest ossep
1302 1302 # srcs: list of (hgsep, hgsep, ossep, bool)
1303 1303 # return: function that takes hgsep and returns ossep
1304 1304 def targetpathafterfn(pat, dest, srcs):
1305 1305 if matchmod.patkind(pat):
1306 1306 # a mercurial pattern
1307 1307 res = lambda p: os.path.join(dest,
1308 1308 os.path.basename(util.localpath(p)))
1309 1309 else:
1310 1310 abspfx = pathutil.canonpath(repo.root, cwd, pat)
1311 1311 if len(abspfx) < len(srcs[0][0]):
1312 1312 # A directory. Either the target path contains the last
1313 1313 # component of the source path or it does not.
1314 1314 def evalpath(striplen):
1315 1315 score = 0
1316 1316 for s in srcs:
1317 1317 t = os.path.join(dest, util.localpath(s[0])[striplen:])
1318 1318 if os.path.lexists(t):
1319 1319 score += 1
1320 1320 return score
1321 1321
1322 1322 abspfx = util.localpath(abspfx)
1323 1323 striplen = len(abspfx)
1324 1324 if striplen:
1325 1325 striplen += len(pycompat.ossep)
1326 1326 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
1327 1327 score = evalpath(striplen)
1328 1328 striplen1 = len(os.path.split(abspfx)[0])
1329 1329 if striplen1:
1330 1330 striplen1 += len(pycompat.ossep)
1331 1331 if evalpath(striplen1) > score:
1332 1332 striplen = striplen1
1333 1333 res = lambda p: os.path.join(dest,
1334 1334 util.localpath(p)[striplen:])
1335 1335 else:
1336 1336 # a file
1337 1337 if destdirexists:
1338 1338 res = lambda p: os.path.join(dest,
1339 1339 os.path.basename(util.localpath(p)))
1340 1340 else:
1341 1341 res = lambda p: dest
1342 1342 return res
1343 1343
1344 1344 pats = scmutil.expandpats(pats)
1345 1345 if not pats:
1346 1346 raise error.Abort(_('no source or destination specified'))
1347 1347 if len(pats) == 1:
1348 1348 raise error.Abort(_('no destination specified'))
1349 1349 dest = pats.pop()
1350 1350 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
1351 1351 if not destdirexists:
1352 1352 if len(pats) > 1 or matchmod.patkind(pats[0]):
1353 1353 raise error.Abort(_('with multiple sources, destination must be an '
1354 1354 'existing directory'))
1355 1355 if util.endswithsep(dest):
1356 1356 raise error.Abort(_('destination %s is not a directory') % dest)
1357 1357
1358 1358 tfn = targetpathfn
1359 1359 if after:
1360 1360 tfn = targetpathafterfn
1361 1361 copylist = []
1362 1362 for pat in pats:
1363 1363 srcs = walkpat(pat)
1364 1364 if not srcs:
1365 1365 continue
1366 1366 copylist.append((tfn(pat, dest, srcs), srcs))
1367 1367 if not copylist:
1368 1368 raise error.Abort(_('no files to copy'))
1369 1369
1370 1370 errors = 0
1371 1371 for targetpath, srcs in copylist:
1372 1372 for abssrc, relsrc, exact in srcs:
1373 1373 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
1374 1374 errors += 1
1375 1375
1376 1376 return errors != 0
1377 1377
1378 1378 ## facility to let extensions process additional data into an import patch
1379 1379 # list of identifiers to be executed in order
1380 1380 extrapreimport = [] # run before commit
1381 1381 extrapostimport = [] # run after commit
1382 1382 # mapping from identifier to actual import function
1383 1383 #
1384 1384 # 'preimport' are run before the commit is made and are provided the following
1385 1385 # arguments:
1386 1386 # - repo: the localrepository instance,
1387 1387 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
1388 1388 # - extra: the future extra dictionary of the changeset, please mutate it,
1389 1389 # - opts: the import options.
1390 1390 # XXX ideally, we would just pass a ctx ready to be computed, that would allow
1391 1391 # mutation of the in-memory commit and more. Feel free to rework the code to get
1392 1392 # there.
1393 1393 extrapreimportmap = {}
1394 1394 # 'postimport' are run after the commit is made and are provided the following
1395 1395 # argument:
1396 1396 # - ctx: the changectx created by import.
1397 1397 extrapostimportmap = {}
1398 1398
1399 1399 def tryimportone(ui, repo, patchdata, parents, opts, msgs, updatefunc):
1400 1400 """Utility function used by commands.import to import a single patch
1401 1401
1402 1402 This function is explicitly defined here to help the evolve extension to
1403 1403 wrap this part of the import logic.
1404 1404
1405 1405 The API is currently a bit ugly because it is a simple code translation from
1406 1406 the import command. Feel free to make it better.
1407 1407
1408 1408 :patchdata: a dictionary containing parsed patch data (such as from
1409 1409 ``patch.extract()``)
1410 1410 :parents: nodes that will be the parents of the created commit
1411 1411 :opts: the full dict of options passed to the import command
1412 1412 :msgs: list to save commit message to.
1413 1413 (used in case we need to save it when failing)
1414 1414 :updatefunc: a function that updates a repo to a given node
1415 1415 updatefunc(<repo>, <node>)
1416 1416 """
1417 1417 # avoid cycle context -> subrepo -> cmdutil
1418 1418 from . import context
1419 1419
1420 1420 tmpname = patchdata.get('filename')
1421 1421 message = patchdata.get('message')
1422 1422 user = opts.get('user') or patchdata.get('user')
1423 1423 date = opts.get('date') or patchdata.get('date')
1424 1424 branch = patchdata.get('branch')
1425 1425 nodeid = patchdata.get('nodeid')
1426 1426 p1 = patchdata.get('p1')
1427 1427 p2 = patchdata.get('p2')
1428 1428
1429 1429 nocommit = opts.get('no_commit')
1430 1430 importbranch = opts.get('import_branch')
1431 1431 update = not opts.get('bypass')
1432 1432 strip = opts["strip"]
1433 1433 prefix = opts["prefix"]
1434 1434 sim = float(opts.get('similarity') or 0)
1435 1435
1436 1436 if not tmpname:
1437 1437 return None, None, False
1438 1438
1439 1439 rejects = False
1440 1440
1441 1441 cmdline_message = logmessage(ui, opts)
1442 1442 if cmdline_message:
1443 1443 # pickup the cmdline msg
1444 1444 message = cmdline_message
1445 1445 elif message:
1446 1446 # pickup the patch msg
1447 1447 message = message.strip()
1448 1448 else:
1449 1449 # launch the editor
1450 1450 message = None
1451 1451 ui.debug('message:\n%s\n' % (message or ''))
1452 1452
1453 1453 if len(parents) == 1:
1454 1454 parents.append(repo[nullid])
1455 1455 if opts.get('exact'):
1456 1456 if not nodeid or not p1:
1457 1457 raise error.Abort(_('not a Mercurial patch'))
1458 1458 p1 = repo[p1]
1459 1459 p2 = repo[p2 or nullid]
1460 1460 elif p2:
1461 1461 try:
1462 1462 p1 = repo[p1]
1463 1463 p2 = repo[p2]
1464 1464 # Without any options, consider p2 only if the
1465 1465 # patch is being applied on top of the recorded
1466 1466 # first parent.
1467 1467 if p1 != parents[0]:
1468 1468 p1 = parents[0]
1469 1469 p2 = repo[nullid]
1470 1470 except error.RepoError:
1471 1471 p1, p2 = parents
1472 1472 if p2.node() == nullid:
1473 1473 ui.warn(_("warning: import the patch as a normal revision\n"
1474 1474 "(use --exact to import the patch as a merge)\n"))
1475 1475 else:
1476 1476 p1, p2 = parents
1477 1477
1478 1478 n = None
1479 1479 if update:
1480 1480 if p1 != parents[0]:
1481 1481 updatefunc(repo, p1.node())
1482 1482 if p2 != parents[1]:
1483 1483 repo.setparents(p1.node(), p2.node())
1484 1484
1485 1485 if opts.get('exact') or importbranch:
1486 1486 repo.dirstate.setbranch(branch or 'default')
1487 1487
1488 1488 partial = opts.get('partial', False)
1489 1489 files = set()
1490 1490 try:
1491 1491 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
1492 1492 files=files, eolmode=None, similarity=sim / 100.0)
1493 1493 except error.PatchError as e:
1494 1494 if not partial:
1495 1495 raise error.Abort(pycompat.bytestr(e))
1496 1496 if partial:
1497 1497 rejects = True
1498 1498
1499 1499 files = list(files)
1500 1500 if nocommit:
1501 1501 if message:
1502 1502 msgs.append(message)
1503 1503 else:
1504 1504 if opts.get('exact') or p2:
1505 1505 # If you got here, you either use --force and know what
1506 1506 # you are doing or used --exact or a merge patch while
1507 1507 # being updated to its first parent.
1508 1508 m = None
1509 1509 else:
1510 1510 m = scmutil.matchfiles(repo, files or [])
1511 1511 editform = mergeeditform(repo[None], 'import.normal')
1512 1512 if opts.get('exact'):
1513 1513 editor = None
1514 1514 else:
1515 1515 editor = getcommiteditor(editform=editform,
1516 1516 **pycompat.strkwargs(opts))
1517 1517 extra = {}
1518 1518 for idfunc in extrapreimport:
1519 1519 extrapreimportmap[idfunc](repo, patchdata, extra, opts)
1520 1520 overrides = {}
1521 1521 if partial:
1522 1522 overrides[('ui', 'allowemptycommit')] = True
1523 1523 with repo.ui.configoverride(overrides, 'import'):
1524 1524 n = repo.commit(message, user,
1525 1525 date, match=m,
1526 1526 editor=editor, extra=extra)
1527 1527 for idfunc in extrapostimport:
1528 1528 extrapostimportmap[idfunc](repo[n])
1529 1529 else:
1530 1530 if opts.get('exact') or importbranch:
1531 1531 branch = branch or 'default'
1532 1532 else:
1533 1533 branch = p1.branch()
1534 1534 store = patch.filestore()
1535 1535 try:
1536 1536 files = set()
1537 1537 try:
1538 1538 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
1539 1539 files, eolmode=None)
1540 1540 except error.PatchError as e:
1541 1541 raise error.Abort(stringutil.forcebytestr(e))
1542 1542 if opts.get('exact'):
1543 1543 editor = None
1544 1544 else:
1545 1545 editor = getcommiteditor(editform='import.bypass')
1546 1546 memctx = context.memctx(repo, (p1.node(), p2.node()),
1547 1547 message,
1548 1548 files=files,
1549 1549 filectxfn=store,
1550 1550 user=user,
1551 1551 date=date,
1552 1552 branch=branch,
1553 1553 editor=editor)
1554 1554 n = memctx.commit()
1555 1555 finally:
1556 1556 store.close()
1557 1557 if opts.get('exact') and nocommit:
1558 1558 # --exact with --no-commit is still useful in that it does merge
1559 1559 # and branch bits
1560 1560 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1561 1561 elif opts.get('exact') and (not n or hex(n) != nodeid):
1562 1562 raise error.Abort(_('patch is damaged or loses information'))
1563 1563 msg = _('applied to working directory')
1564 1564 if n:
1565 1565 # i18n: refers to a short changeset id
1566 1566 msg = _('created %s') % short(n)
1567 1567 return msg, n, rejects
1568 1568
1569 1569 # facility to let extensions include additional data in an exported patch
1570 1570 # list of identifiers to be executed in order
1571 1571 extraexport = []
1572 1572 # mapping from identifier to actual export function
1573 1573 # the function has to return a string to be added to the header, or None;
1574 1574 # it is given two arguments (sequencenumber, changectx)
1575 1575 extraexportmap = {}
1576 1576
1577 1577 def _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts):
1578 1578 node = scmutil.binnode(ctx)
1579 1579 parents = [p.node() for p in ctx.parents() if p]
1580 1580 branch = ctx.branch()
1581 1581 if switch_parent:
1582 1582 parents.reverse()
1583 1583
1584 1584 if parents:
1585 1585 prev = parents[0]
1586 1586 else:
1587 1587 prev = nullid
1588 1588
1589 1589 fm.context(ctx=ctx)
1590 1590 fm.plain('# HG changeset patch\n')
1591 1591 fm.write('user', '# User %s\n', ctx.user())
1592 1592 fm.plain('# Date %d %d\n' % ctx.date())
1593 1593 fm.write('date', '# %s\n', fm.formatdate(ctx.date()))
1594 1594 fm.condwrite(branch and branch != 'default',
1595 1595 'branch', '# Branch %s\n', branch)
1596 1596 fm.write('node', '# Node ID %s\n', hex(node))
1597 1597 fm.plain('# Parent %s\n' % hex(prev))
1598 1598 if len(parents) > 1:
1599 1599 fm.plain('# Parent %s\n' % hex(parents[1]))
1600 1600 fm.data(parents=fm.formatlist(pycompat.maplist(hex, parents), name='node'))
1601 1601
1602 1602 # TODO: redesign extraexportmap function to support formatter
1603 1603 for headerid in extraexport:
1604 1604 header = extraexportmap[headerid](seqno, ctx)
1605 1605 if header is not None:
1606 1606 fm.plain('# %s\n' % header)
1607 1607
1608 1608 fm.write('desc', '%s\n', ctx.description().rstrip())
1609 1609 fm.plain('\n')
1610 1610
1611 1611 if fm.isplain():
1612 1612 chunkiter = patch.diffui(repo, prev, node, match, opts=diffopts)
1613 1613 for chunk, label in chunkiter:
1614 1614 fm.plain(chunk, label=label)
1615 1615 else:
1616 1616 chunkiter = patch.diff(repo, prev, node, match, opts=diffopts)
1617 1617 # TODO: make it structured?
1618 1618 fm.data(diff=b''.join(chunkiter))
1619 1619
1620 1620 def _exportfile(repo, revs, fm, dest, switch_parent, diffopts, match):
1621 1621 """Export changesets to stdout or a single file"""
1622 1622 for seqno, rev in enumerate(revs, 1):
1623 1623 ctx = repo[rev]
1624 1624 if not dest.startswith('<'):
1625 1625 repo.ui.note("%s\n" % dest)
1626 1626 fm.startitem()
1627 1627 _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts)
1628 1628
1629 1629 def _exportfntemplate(repo, revs, basefm, fntemplate, switch_parent, diffopts,
1630 1630 match):
1631 1631 """Export changesets to possibly multiple files"""
1632 1632 total = len(revs)
1633 1633 revwidth = max(len(str(rev)) for rev in revs)
1634 1634 filemap = util.sortdict() # filename: [(seqno, rev), ...]
1635 1635
1636 1636 for seqno, rev in enumerate(revs, 1):
1637 1637 ctx = repo[rev]
1638 1638 dest = makefilename(ctx, fntemplate,
1639 1639 total=total, seqno=seqno, revwidth=revwidth)
1640 1640 filemap.setdefault(dest, []).append((seqno, rev))
1641 1641
1642 1642 for dest in filemap:
1643 1643 with formatter.maybereopen(basefm, dest) as fm:
1644 1644 repo.ui.note("%s\n" % dest)
1645 1645 for seqno, rev in filemap[dest]:
1646 1646 fm.startitem()
1647 1647 ctx = repo[rev]
1648 1648 _exportsingle(repo, ctx, fm, match, switch_parent, seqno,
1649 1649 diffopts)
1650 1650
1651 1651 def export(repo, revs, basefm, fntemplate='hg-%h.patch', switch_parent=False,
1652 1652 opts=None, match=None):
1653 1653 '''export changesets as hg patches
1654 1654
1655 1655 Args:
1656 1656 repo: The repository from which we're exporting revisions.
1657 1657 revs: A list of revisions to export as revision numbers.
1658 1658 basefm: A formatter to which patches should be written.
1659 1659 fntemplate: An optional string to use for generating patch file names.
1660 1660 switch_parent: If True, show diffs against second parent when not nullid.
1661 1661 Default is false, which always shows diff against p1.
1662 1662 opts: diff options to use for generating the patch.
1663 1663 match: If specified, only export changes to files matching this matcher.
1664 1664
1665 1665 Returns:
1666 1666 Nothing.
1667 1667
1668 1668 Side Effect:
1669 1669 "HG Changeset Patch" data is emitted to one of the following
1670 1670 destinations:
1671 1671 fntemplate specified: Each rev is written to a unique file named using
1672 1672 the given template.
1673 1673 Otherwise: All revs will be written to basefm.
1674 1674 '''
1675 1675 scmutil.prefetchfiles(repo, revs, match)
1676 1676
1677 1677 if not fntemplate:
1678 1678 _exportfile(repo, revs, basefm, '<unnamed>', switch_parent, opts, match)
1679 1679 else:
1680 1680 _exportfntemplate(repo, revs, basefm, fntemplate, switch_parent, opts,
1681 1681 match)
1682 1682
1683 1683 def exportfile(repo, revs, fp, switch_parent=False, opts=None, match=None):
1684 1684 """Export changesets to the given file stream"""
1685 1685 scmutil.prefetchfiles(repo, revs, match)
1686 1686
1687 1687 dest = getattr(fp, 'name', '<unnamed>')
1688 1688 with formatter.formatter(repo.ui, fp, 'export', {}) as fm:
1689 1689 _exportfile(repo, revs, fm, dest, switch_parent, opts, match)
1690 1690
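# A brief sketch of exportfile(); the output path is arbitrary and
# patch.diffallopts() merely supplies default diff options.
def _exportfiledemo(repo):
    with open('wd-parent.patch', 'wb') as fp:
        exportfile(repo, [repo['.'].rev()], fp,
                   opts=patch.diffallopts(repo.ui))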
1691 1691 def showmarker(fm, marker, index=None):
1692 1692 """utility function to display obsolescence marker in a readable way
1693 1693
1694 1694 To be used by debug function."""
1695 1695 if index is not None:
1696 1696 fm.write('index', '%i ', index)
1697 1697 fm.write('prednode', '%s ', hex(marker.prednode()))
1698 1698 succs = marker.succnodes()
1699 1699 fm.condwrite(succs, 'succnodes', '%s ',
1700 1700 fm.formatlist(map(hex, succs), name='node'))
1701 1701 fm.write('flag', '%X ', marker.flags())
1702 1702 parents = marker.parentnodes()
1703 1703 if parents is not None:
1704 1704 fm.write('parentnodes', '{%s} ',
1705 1705 fm.formatlist(map(hex, parents), name='node', sep=', '))
1706 1706 fm.write('date', '(%s) ', fm.formatdate(marker.date()))
1707 1707 meta = marker.metadata().copy()
1708 1708 meta.pop('date', None)
1709 1709 smeta = pycompat.rapply(pycompat.maybebytestr, meta)
1710 1710 fm.write('metadata', '{%s}', fm.formatdict(smeta, fmt='%r: %r', sep=', '))
1711 1711 fm.plain('\n')
1712 1712
1713 1713 def finddate(ui, repo, date):
1714 1714 """Find the tipmost changeset that matches the given date spec"""
1715 1715
1716 1716 df = dateutil.matchdate(date)
1717 1717 m = scmutil.matchall(repo)
1718 1718 results = {}
1719 1719
1720 1720 def prep(ctx, fns):
1721 1721 d = ctx.date()
1722 1722 if df(d[0]):
1723 1723 results[ctx.rev()] = d
1724 1724
1725 1725 for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
1726 1726 rev = ctx.rev()
1727 1727 if rev in results:
1728 1728 ui.status(_("found revision %s from %s\n") %
1729 1729 (rev, dateutil.datestr(results[rev])))
1730 1730 return '%d' % rev
1731 1731
1732 1732 raise error.Abort(_("revision matching date not found"))
1733 1733
1734 1734 def increasingwindows(windowsize=8, sizelimit=512):
1735 1735 while True:
1736 1736 yield windowsize
1737 1737 if windowsize < sizelimit:
1738 1738 windowsize *= 2
1739 1739
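# Note on the generator above: it yields 8, 16, 32, ..., 512 and then 512
# forever, so callers scan small windows first and settle on a bounded
# batch size once the history being walked is long.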
1740 1740 def _walkrevs(repo, opts):
1741 1741 # Default --rev value depends on --follow but --follow behavior
1742 1742 # depends on revisions resolved from --rev...
1743 1743 follow = opts.get('follow') or opts.get('follow_first')
1744 1744 if opts.get('rev'):
1745 1745 revs = scmutil.revrange(repo, opts['rev'])
1746 1746 elif follow and repo.dirstate.p1() == nullid:
1747 1747 revs = smartset.baseset()
1748 1748 elif follow:
1749 1749 revs = repo.revs('reverse(:.)')
1750 1750 else:
1751 1751 revs = smartset.spanset(repo)
1752 1752 revs.reverse()
1753 1753 return revs
1754 1754
1755 1755 class FileWalkError(Exception):
1756 1756 pass
1757 1757
1758 1758 def walkfilerevs(repo, match, follow, revs, fncache):
1759 1759 '''Walks the file history for the matched files.
1760 1760
1761 1761 Returns the changeset revs that are involved in the file history.
1762 1762
1763 1763 Throws FileWalkError if the file history can't be walked using
1764 1764 filelogs alone.
1765 1765 '''
1766 1766 wanted = set()
1767 1767 copies = []
1768 1768 minrev, maxrev = min(revs), max(revs)
1769 1769 def filerevgen(filelog, last):
1770 1770 """
1771 1771 Only files, no patterns. Check the history of each file.
1772 1772
1773 1773 Examines filelog entries within minrev, maxrev linkrev range
1774 1774 Returns an iterator yielding (linkrev, parentlinkrevs, copied)
1775 1775 tuples in backwards order
1776 1776 """
1777 1777 cl_count = len(repo)
1778 1778 revs = []
1779 1779 for j in pycompat.xrange(0, last + 1):
1780 1780 linkrev = filelog.linkrev(j)
1781 1781 if linkrev < minrev:
1782 1782 continue
1783 1783 # only yield rev for which we have the changelog, it can
1784 1784 # happen while doing "hg log" during a pull or commit
1785 1785 if linkrev >= cl_count:
1786 1786 break
1787 1787
1788 1788 parentlinkrevs = []
1789 1789 for p in filelog.parentrevs(j):
1790 1790 if p != nullrev:
1791 1791 parentlinkrevs.append(filelog.linkrev(p))
1792 1792 n = filelog.node(j)
1793 1793 revs.append((linkrev, parentlinkrevs,
1794 1794 follow and filelog.renamed(n)))
1795 1795
1796 1796 return reversed(revs)
1797 1797 def iterfiles():
1798 1798 pctx = repo['.']
1799 1799 for filename in match.files():
1800 1800 if follow:
1801 1801 if filename not in pctx:
1802 1802 raise error.Abort(_('cannot follow file not in parent '
1803 1803 'revision: "%s"') % filename)
1804 1804 yield filename, pctx[filename].filenode()
1805 1805 else:
1806 1806 yield filename, None
1807 1807 for filename_node in copies:
1808 1808 yield filename_node
1809 1809
1810 1810 for file_, node in iterfiles():
1811 1811 filelog = repo.file(file_)
1812 1812 if not len(filelog):
1813 1813 if node is None:
1814 1814 # A zero count may be a directory or deleted file, so
1815 1815 # try to find matching entries on the slow path.
1816 1816 if follow:
1817 1817 raise error.Abort(
1818 1818 _('cannot follow nonexistent file: "%s"') % file_)
1819 1819 raise FileWalkError("Cannot walk via filelog")
1820 1820 else:
1821 1821 continue
1822 1822
1823 1823 if node is None:
1824 1824 last = len(filelog) - 1
1825 1825 else:
1826 1826 last = filelog.rev(node)
1827 1827
1828 1828 # keep track of all ancestors of the file
1829 1829 ancestors = {filelog.linkrev(last)}
1830 1830
1831 1831 # iterate from latest to oldest revision
1832 1832 for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
1833 1833 if not follow:
1834 1834 if rev > maxrev:
1835 1835 continue
1836 1836 else:
1837 1837 # Note that last might not be the first interesting
1838 1838 # rev to us:
1839 1839 # if the file has been changed after maxrev, we'll
1840 1840 # have linkrev(last) > maxrev, and we still need
1841 1841 # to explore the file graph
1842 1842 if rev not in ancestors:
1843 1843 continue
1844 1844 # XXX insert 1327 fix here
1845 1845 if flparentlinkrevs:
1846 1846 ancestors.update(flparentlinkrevs)
1847 1847
1848 1848 fncache.setdefault(rev, []).append(file_)
1849 1849 wanted.add(rev)
1850 1850 if copied:
1851 1851 copies.append(copied)
1852 1852
1853 1853 return wanted
1854 1854
1855 1855 class _followfilter(object):
1856 1856 def __init__(self, repo, onlyfirst=False):
1857 1857 self.repo = repo
1858 1858 self.startrev = nullrev
1859 1859 self.roots = set()
1860 1860 self.onlyfirst = onlyfirst
1861 1861
1862 1862 def match(self, rev):
1863 1863 def realparents(rev):
1864 1864 if self.onlyfirst:
1865 1865 return self.repo.changelog.parentrevs(rev)[0:1]
1866 1866 else:
1867 1867 return filter(lambda x: x != nullrev,
1868 1868 self.repo.changelog.parentrevs(rev))
1869 1869
1870 1870 if self.startrev == nullrev:
1871 1871 self.startrev = rev
1872 1872 return True
1873 1873
1874 1874 if rev > self.startrev:
1875 1875 # forward: all descendants
1876 1876 if not self.roots:
1877 1877 self.roots.add(self.startrev)
1878 1878 for parent in realparents(rev):
1879 1879 if parent in self.roots:
1880 1880 self.roots.add(rev)
1881 1881 return True
1882 1882 else:
1883 1883 # backwards: all parents
1884 1884 if not self.roots:
1885 1885 self.roots.update(realparents(self.startrev))
1886 1886 if rev in self.roots:
1887 1887 self.roots.remove(rev)
1888 1888 self.roots.update(realparents(rev))
1889 1889 return True
1890 1890
1891 1891 return False
1892 1892
1893 1893 def walkchangerevs(repo, match, opts, prepare):
1894 1894 '''Iterate over files and the revs in which they changed.
1895 1895
1896 1896 Callers most commonly need to iterate backwards over the history
1897 1897 in which they are interested. Doing so has awful (quadratic-looking)
1898 1898 performance, so we use iterators in a "windowed" way.
1899 1899
1900 1900 We walk a window of revisions in the desired order. Within the
1901 1901 window, we first walk forwards to gather data, then in the desired
1902 1902 order (usually backwards) to display it.
1903 1903
1904 1904 This function returns an iterator yielding contexts. Before
1905 1905 yielding each context, the iterator will first call the prepare
1906 1906 function on each context in the window in forward order.'''
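    # A typical caller (compare finddate() above) builds a matcher and a
    # prepare(ctx, fns) callback, then simply iterates:
    #
    #     for ctx in walkchangerevs(repo, match, {'rev': None}, prepare):
    #         ... display or collect ctx ...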
1907 1907
1908 1908 allfiles = opts.get('all_files')
1909 1909 follow = opts.get('follow') or opts.get('follow_first')
1910 1910 revs = _walkrevs(repo, opts)
1911 1911 if not revs:
1912 1912 return []
1913 1913 wanted = set()
1914 1914 slowpath = match.anypats() or (not match.always() and opts.get('removed'))
1915 1915 fncache = {}
1916 1916 change = repo.__getitem__
1917 1917
1918 1918 # First step is to fill wanted, the set of revisions that we want to yield.
1919 1919 # When it does not induce extra cost, we also fill fncache for revisions in
1920 1920 # wanted: a cache of filenames that were changed (ctx.files()) and that
1921 1921 # match the file filtering conditions.
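    # (For illustration, with hypothetical revs and paths, fncache ends up
    # shaped like {14: ['src/a.c'], 20: ['src/a.c', 'doc/a.txt']}, i.e. a
    # mapping of rev -> list of matching filenames changed in that rev.)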
1922 1922
1923 1923 if match.always() or allfiles:
1924 1924 # No files, no patterns. Display all revs.
1925 1925 wanted = revs
1926 1926 elif not slowpath:
1927 1927 # We only have to read through the filelog to find wanted revisions
1928 1928
1929 1929 try:
1930 1930 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1931 1931 except FileWalkError:
1932 1932 slowpath = True
1933 1933
1934 1934 # We decided to fall back to the slowpath because at least one
1935 1935 # of the paths was not a file. Check to see if at least one of them
1936 1936 # existed in history, otherwise simply return
1937 1937 for path in match.files():
1938 1938 if path == '.' or path in repo.store:
1939 1939 break
1940 1940 else:
1941 1941 return []
1942 1942
1943 1943 if slowpath:
1944 1944 # We have to read the changelog to match filenames against
1945 1945 # changed files
1946 1946
1947 1947 if follow:
1948 1948 raise error.Abort(_('can only follow copies/renames for explicit '
1949 1949 'filenames'))
1950 1950
1951 1951 # The slow path checks files modified in every changeset.
1952 1952 # This is really slow on large repos, so compute the set lazily.
1953 1953 class lazywantedset(object):
1954 1954 def __init__(self):
1955 1955 self.set = set()
1956 1956 self.revs = set(revs)
1957 1957
1958 1958 # No need to worry about locality here because it will be accessed
1959 1959 # in the same order as the increasing window below.
1960 1960 def __contains__(self, value):
1961 1961 if value in self.set:
1962 1962 return True
1963 1963 elif not value in self.revs:
1964 1964 return False
1965 1965 else:
1966 1966 self.revs.discard(value)
1967 1967 ctx = change(value)
1968 1968 matches = [f for f in ctx.files() if match(f)]
1969 1969 if matches:
1970 1970 fncache[value] = matches
1971 1971 self.set.add(value)
1972 1972 return True
1973 1973 return False
1974 1974
1975 1975 def discard(self, value):
1976 1976 self.revs.discard(value)
1977 1977 self.set.discard(value)
1978 1978
1979 1979 wanted = lazywantedset()
1980 1980
1981 1981 # it might be worthwhile to do this in the iterator if the rev range
1982 1982 # is descending and the prune args are all within that range
1983 1983 for rev in opts.get('prune', ()):
1984 1984 rev = repo[rev].rev()
1985 1985 ff = _followfilter(repo)
1986 1986 stop = min(revs[0], revs[-1])
1987 1987 for x in pycompat.xrange(rev, stop - 1, -1):
1988 1988 if ff.match(x):
1989 1989 wanted = wanted - [x]
1990 1990
1991 1991 # Now that wanted is correctly initialized, we can iterate over the
1992 1992 # revision range, yielding only revisions in wanted.
1993 1993 def iterate():
1994 1994 if follow and match.always():
1995 1995 ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
1996 1996 def want(rev):
1997 1997 return ff.match(rev) and rev in wanted
1998 1998 else:
1999 1999 def want(rev):
2000 2000 return rev in wanted
2001 2001
2002 2002 it = iter(revs)
2003 2003 stopiteration = False
2004 2004 for windowsize in increasingwindows():
2005 2005 nrevs = []
2006 2006 for i in pycompat.xrange(windowsize):
2007 2007 rev = next(it, None)
2008 2008 if rev is None:
2009 2009 stopiteration = True
2010 2010 break
2011 2011 elif want(rev):
2012 2012 nrevs.append(rev)
2013 2013 for rev in sorted(nrevs):
2014 2014 fns = fncache.get(rev)
2015 2015 ctx = change(rev)
2016 2016 if not fns:
2017 2017 def fns_generator():
2018 2018 if allfiles:
2019 2019 fiter = iter(ctx)
2020 2020 else:
2021 2021 fiter = ctx.files()
2022 2022 for f in fiter:
2023 2023 if match(f):
2024 2024 yield f
2025 2025 fns = fns_generator()
2026 2026 prepare(ctx, fns)
2027 2027 for rev in nrevs:
2028 2028 yield change(rev)
2029 2029
2030 2030 if stopiteration:
2031 2031 break
2032 2032
2033 2033 return iterate()
2034 2034
2035 2035 def add(ui, repo, match, prefix, explicitonly, **opts):
2036 2036 join = lambda f: os.path.join(prefix, f)
2037 2037 bad = []
2038 2038
2039 2039 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2040 2040 names = []
2041 2041 wctx = repo[None]
2042 2042 cca = None
2043 2043 abort, warn = scmutil.checkportabilityalert(ui)
2044 2044 if abort or warn:
2045 2045 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2046 2046
2047 2047 match = repo.narrowmatch(match, includeexact=True)
2048 2048 badmatch = matchmod.badmatch(match, badfn)
2049 2049 dirstate = repo.dirstate
2050 2050 # We don't want to just call wctx.walk here, since it would return a lot of
2051 2051 # clean files, which we aren't interested in, and doing so takes time.
2052 2052 for f in sorted(dirstate.walk(badmatch, subrepos=sorted(wctx.substate),
2053 2053 unknown=True, ignored=False, full=False)):
2054 2054 exact = match.exact(f)
2055 2055 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2056 2056 if cca:
2057 2057 cca(f)
2058 2058 names.append(f)
2059 2059 if ui.verbose or not exact:
2060 2060 ui.status(_('adding %s\n') % match.rel(f),
2061 2061 label='addremove.added')
2062 2062
2063 2063 for subpath in sorted(wctx.substate):
2064 2064 sub = wctx.sub(subpath)
2065 2065 try:
2066 2066 submatch = matchmod.subdirmatcher(subpath, match)
2067 2067 if opts.get(r'subrepos'):
2068 2068 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2069 2069 else:
2070 2070 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2071 2071 except error.LookupError:
2072 2072 ui.status(_("skipping missing subrepository: %s\n")
2073 2073 % join(subpath))
2074 2074
2075 2075 if not opts.get(r'dry_run'):
2076 2076 rejected = wctx.add(names, prefix)
2077 2077 bad.extend(f for f in rejected if f in match.files())
2078 2078 return bad
2079 2079
2080 2080 def addwebdirpath(repo, serverpath, webconf):
2081 2081 webconf[serverpath] = repo.root
2082 2082 repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))
2083 2083
2084 2084 for r in repo.revs('filelog("path:.hgsub")'):
2085 2085 ctx = repo[r]
2086 2086 for subpath in ctx.substate:
2087 2087 ctx.sub(subpath).addwebdirpath(serverpath, webconf)
2088 2088
2089 2089 def forget(ui, repo, match, prefix, explicitonly, dryrun, interactive):
2090 2090 if dryrun and interactive:
2091 2091 raise error.Abort(_("cannot specify both --dry-run and --interactive"))
2092 2092 join = lambda f: os.path.join(prefix, f)
2093 2093 bad = []
2094 2094 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2095 2095 wctx = repo[None]
2096 2096 forgot = []
2097 2097
2098 2098 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2099 2099 forget = sorted(s.modified + s.added + s.deleted + s.clean)
2100 2100 if explicitonly:
2101 2101 forget = [f for f in forget if match.exact(f)]
2102 2102
2103 2103 for subpath in sorted(wctx.substate):
2104 2104 sub = wctx.sub(subpath)
2105 2105 try:
2106 2106 submatch = matchmod.subdirmatcher(subpath, match)
2107 2107 subbad, subforgot = sub.forget(submatch, prefix, dryrun=dryrun,
2108 2108 interactive=interactive)
2109 2109 bad.extend([subpath + '/' + f for f in subbad])
2110 2110 forgot.extend([subpath + '/' + f for f in subforgot])
2111 2111 except error.LookupError:
2112 2112 ui.status(_("skipping missing subrepository: %s\n")
2113 2113 % join(subpath))
2114 2114
2115 2115 if not explicitonly:
2116 2116 for f in match.files():
2117 2117 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2118 2118 if f not in forgot:
2119 2119 if repo.wvfs.exists(f):
2120 2120 # Don't complain if the exact case match wasn't given.
2121 2121 # But don't do this until after checking 'forgot', so
2122 2122 # that subrepo files aren't normalized, and this op is
2123 2123 # purely from data cached by the status walk above.
2124 2124 if repo.dirstate.normalize(f) in repo.dirstate:
2125 2125 continue
2126 2126 ui.warn(_('not removing %s: '
2127 2127 'file is already untracked\n')
2128 2128 % match.rel(f))
2129 2129 bad.append(f)
2130 2130
2131 2131 if interactive:
2132 2132 responses = _('[Ynsa?]'
2133 2133 '$$ &Yes, forget this file'
2134 2134 '$$ &No, skip this file'
2135 2135 '$$ &Skip remaining files'
2136 2136 '$$ Include &all remaining files'
2137 2137 '$$ &? (display help)')
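        # (ui.promptchoice() splits this string on '$$': the leading part is
        # the short prompt and the '&' in each remaining part marks the key
        # that selects that choice, as extractchoices() below relies on.)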
2138 2138 for filename in forget[:]:
2139 2139 r = ui.promptchoice(_('forget %s %s') % (filename, responses))
2140 2140 if r == 4: # ?
2141 2141 while r == 4:
2142 2142 for c, t in ui.extractchoices(responses)[1]:
2143 2143 ui.write('%s - %s\n' % (c, encoding.lower(t)))
2144 2144 r = ui.promptchoice(_('forget %s %s') % (filename,
2145 2145 responses))
2146 2146 if r == 0: # yes
2147 2147 continue
2148 2148 elif r == 1: # no
2149 2149 forget.remove(filename)
2150 2150 elif r == 2: # Skip
2151 2151 fnindex = forget.index(filename)
2152 2152 del forget[fnindex:]
2153 2153 break
2154 2154 elif r == 3: # All
2155 2155 break
2156 2156
2157 2157 for f in forget:
2158 2158 if ui.verbose or not match.exact(f) or interactive:
2159 2159 ui.status(_('removing %s\n') % match.rel(f),
2160 2160 label='addremove.removed')
2161 2161
2162 2162 if not dryrun:
2163 2163 rejected = wctx.forget(forget, prefix)
2164 2164 bad.extend(f for f in rejected if f in match.files())
2165 2165 forgot.extend(f for f in forget if f not in rejected)
2166 2166 return bad, forgot
2167 2167
2168 2168 def files(ui, ctx, m, fm, fmt, subrepos):
2169 2169 ret = 1
2170 2170
2171 2171 needsfctx = ui.verbose or {'size', 'flags'} & fm.datahint()
2172 2172 for f in ctx.matches(m):
2173 2173 fm.startitem()
2174 2174 fm.context(ctx=ctx)
2175 2175 if needsfctx:
2176 2176 fc = ctx[f]
2177 2177 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2178 2178 fm.data(path=f)
2179 2179 fm.plain(fmt % m.rel(f))
2180 2180 ret = 0
2181 2181
2182 2182 for subpath in sorted(ctx.substate):
2183 2183 submatch = matchmod.subdirmatcher(subpath, m)
2184 2184 if (subrepos or m.exact(subpath) or any(submatch.files())):
2185 2185 sub = ctx.sub(subpath)
2186 2186 try:
2187 2187 recurse = m.exact(subpath) or subrepos
2188 2188 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
2189 2189 ret = 0
2190 2190 except error.LookupError:
2191 2191 ui.status(_("skipping missing subrepository: %s\n")
2192 2192 % m.abs(subpath))
2193 2193
2194 2194 return ret
2195 2195
2196 2196 def remove(ui, repo, m, prefix, after, force, subrepos, dryrun, warnings=None):
2197 2197 join = lambda f: os.path.join(prefix, f)
2198 2198 ret = 0
2199 2199 s = repo.status(match=m, clean=True)
2200 2200 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2201 2201
2202 2202 wctx = repo[None]
2203 2203
2204 2204 if warnings is None:
2205 2205 warnings = []
2206 2206 warn = True
2207 2207 else:
2208 2208 warn = False
2209 2209
2210 2210 subs = sorted(wctx.substate)
2211 2211 progress = ui.makeprogress(_('searching'), total=len(subs),
2212 2212 unit=_('subrepos'))
2213 2213 for subpath in subs:
2214 2214 submatch = matchmod.subdirmatcher(subpath, m)
2215 2215 if subrepos or m.exact(subpath) or any(submatch.files()):
2216 2216 progress.increment()
2217 2217 sub = wctx.sub(subpath)
2218 2218 try:
2219 2219 if sub.removefiles(submatch, prefix, after, force, subrepos,
2220 2220 dryrun, warnings):
2221 2221 ret = 1
2222 2222 except error.LookupError:
2223 2223 warnings.append(_("skipping missing subrepository: %s\n")
2224 2224 % join(subpath))
2225 2225 progress.complete()
2226 2226
2227 2227 # warn about failure to delete explicit files/dirs
2228 2228 deleteddirs = util.dirs(deleted)
2229 2229 files = m.files()
2230 2230 progress = ui.makeprogress(_('deleting'), total=len(files),
2231 2231 unit=_('files'))
2232 2232 for f in files:
2233 2233 def insubrepo():
2234 2234 for subpath in wctx.substate:
2235 2235 if f.startswith(subpath + '/'):
2236 2236 return True
2237 2237 return False
2238 2238
2239 2239 progress.increment()
2240 2240 isdir = f in deleteddirs or wctx.hasdir(f)
2241 2241 if (f in repo.dirstate or isdir or f == '.'
2242 2242 or insubrepo() or f in subs):
2243 2243 continue
2244 2244
2245 2245 if repo.wvfs.exists(f):
2246 2246 if repo.wvfs.isdir(f):
2247 2247 warnings.append(_('not removing %s: no tracked files\n')
2248 2248 % m.rel(f))
2249 2249 else:
2250 2250 warnings.append(_('not removing %s: file is untracked\n')
2251 2251 % m.rel(f))
2252 2252 # missing files will generate a warning elsewhere
2253 2253 ret = 1
2254 2254 progress.complete()
2255 2255
2256 2256 if force:
2257 2257 list = modified + deleted + clean + added
2258 2258 elif after:
2259 2259 list = deleted
2260 2260 remaining = modified + added + clean
2261 2261 progress = ui.makeprogress(_('skipping'), total=len(remaining),
2262 2262 unit=_('files'))
2263 2263 for f in remaining:
2264 2264 progress.increment()
2265 2265 if ui.verbose or (f in files):
2266 2266 warnings.append(_('not removing %s: file still exists\n')
2267 2267 % m.rel(f))
2268 2268 ret = 1
2269 2269 progress.complete()
2270 2270 else:
2271 2271 list = deleted + clean
2272 2272 progress = ui.makeprogress(_('skipping'),
2273 2273 total=(len(modified) + len(added)),
2274 2274 unit=_('files'))
2275 2275 for f in modified:
2276 2276 progress.increment()
2277 2277 warnings.append(_('not removing %s: file is modified (use -f'
2278 2278 ' to force removal)\n') % m.rel(f))
2279 2279 ret = 1
2280 2280 for f in added:
2281 2281 progress.increment()
2282 2282 warnings.append(_("not removing %s: file has been marked for add"
2283 2283 " (use 'hg forget' to undo add)\n") % m.rel(f))
2284 2284 ret = 1
2285 2285 progress.complete()
2286 2286
2287 2287 list = sorted(list)
2288 2288 progress = ui.makeprogress(_('deleting'), total=len(list),
2289 2289 unit=_('files'))
2290 2290 for f in list:
2291 2291 if ui.verbose or not m.exact(f):
2292 2292 progress.increment()
2293 2293 ui.status(_('removing %s\n') % m.rel(f),
2294 2294 label='addremove.removed')
2295 2295 progress.complete()
2296 2296
2297 2297 if not dryrun:
2298 2298 with repo.wlock():
2299 2299 if not after:
2300 2300 for f in list:
2301 2301 if f in added:
2302 2302 continue # we never unlink added files on remove
2303 2303 rmdir = repo.ui.configbool('experimental',
2304 2304 'removeemptydirs')
2305 2305 repo.wvfs.unlinkpath(f, ignoremissing=True, rmdir=rmdir)
2306 2306 repo[None].forget(list)
2307 2307
2308 2308 if warn:
2309 2309 for warning in warnings:
2310 2310 ui.warn(warning)
2311 2311
2312 2312 return ret
2313 2313
2314 2314 def _updatecatformatter(fm, ctx, matcher, path, decode):
2315 2315 """Hook for adding data to the formatter used by ``hg cat``.
2316 2316
2317 2317 Extensions (e.g., lfs) can wrap this to inject keywords/data, but must call
2318 2318 this method first."""
2319 2319 data = ctx[path].data()
2320 2320 if decode:
2321 2321 data = ctx.repo().wwritedata(path, data)
2322 2322 fm.startitem()
2323 2323 fm.context(ctx=ctx)
2324 2324 fm.write('data', '%s', data)
2325 2325 fm.data(path=path)
2326 2326
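# A minimal sketch of how an extension could wrap the hook above, assuming
# the standard extensions.wrapfunction() helper; the wrapper name and the
# extra 'examplekeyword' field are hypothetical:
#
#     from mercurial import cmdutil, extensions
#
#     def _wrappedcatformatter(orig, fm, ctx, matcher, path, decode):
#         orig(fm, ctx, matcher, path, decode)  # the base hook must run first
#         fm.data(examplekeyword='examplevalue')
#
#     def uisetup(ui):
#         extensions.wrapfunction(cmdutil, '_updatecatformatter',
#                                 _wrappedcatformatter)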
2327 2327 def cat(ui, repo, ctx, matcher, basefm, fntemplate, prefix, **opts):
2328 2328 err = 1
2329 2329 opts = pycompat.byteskwargs(opts)
2330 2330
2331 2331 def write(path):
2332 2332 filename = None
2333 2333 if fntemplate:
2334 2334 filename = makefilename(ctx, fntemplate,
2335 2335 pathname=os.path.join(prefix, path))
2336 2336 # attempt to create the directory if it does not already exist
2337 2337 try:
2338 2338 os.makedirs(os.path.dirname(filename))
2339 2339 except OSError:
2340 2340 pass
2341 2341 with formatter.maybereopen(basefm, filename) as fm:
2342 2342 _updatecatformatter(fm, ctx, matcher, path, opts.get('decode'))
2343 2343
2344 2344 # Automation often uses hg cat on single files, so special case it
2345 2345 # for performance to avoid the cost of parsing the manifest.
2346 2346 if len(matcher.files()) == 1 and not matcher.anypats():
2347 2347 file = matcher.files()[0]
2348 2348 mfl = repo.manifestlog
2349 2349 mfnode = ctx.manifestnode()
2350 2350 try:
2351 2351 if mfnode and mfl[mfnode].find(file)[0]:
2352 2352 scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
2353 2353 write(file)
2354 2354 return 0
2355 2355 except KeyError:
2356 2356 pass
2357 2357
2358 2358 scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
2359 2359
2360 2360 for abs in ctx.walk(matcher):
2361 2361 write(abs)
2362 2362 err = 0
2363 2363
2364 2364 for subpath in sorted(ctx.substate):
2365 2365 sub = ctx.sub(subpath)
2366 2366 try:
2367 2367 submatch = matchmod.subdirmatcher(subpath, matcher)
2368 2368
2369 2369 if not sub.cat(submatch, basefm, fntemplate,
2370 2370 os.path.join(prefix, sub._path),
2371 2371 **pycompat.strkwargs(opts)):
2372 2372 err = 0
2373 2373 except error.RepoLookupError:
2374 2374 ui.status(_("skipping missing subrepository: %s\n")
2375 2375 % os.path.join(prefix, subpath))
2376 2376
2377 2377 return err
2378 2378
2379 2379 def commit(ui, repo, commitfunc, pats, opts):
2380 2380 '''commit the specified files or all outstanding changes'''
2381 2381 date = opts.get('date')
2382 2382 if date:
2383 2383 opts['date'] = dateutil.parsedate(date)
2384 2384 message = logmessage(ui, opts)
2385 2385 matcher = scmutil.match(repo[None], pats, opts)
2386 2386
2387 2387 dsguard = None
2388 2388 # extract addremove carefully -- this function can be called from a command
2389 2389 # that doesn't support addremove
2390 2390 if opts.get('addremove'):
2391 2391 dsguard = dirstateguard.dirstateguard(repo, 'commit')
2392 2392 with dsguard or util.nullcontextmanager():
2393 2393 if dsguard:
2394 2394 if scmutil.addremove(repo, matcher, "", opts) != 0:
2395 2395 raise error.Abort(
2396 2396 _("failed to mark all new/missing files as added/removed"))
2397 2397
2398 2398 return commitfunc(ui, repo, message, matcher, opts)
2399 2399
2400 2400 def samefile(f, ctx1, ctx2):
2401 2401 if f in ctx1.manifest():
2402 2402 a = ctx1.filectx(f)
2403 2403 if f in ctx2.manifest():
2404 2404 b = ctx2.filectx(f)
2405 2405 return (not a.cmp(b)
2406 2406 and a.flags() == b.flags())
2407 2407 else:
2408 2408 return False
2409 2409 else:
2410 2410 return f not in ctx2.manifest()
2411 2411
2412 2412 def amend(ui, repo, old, extra, pats, opts):
2413 2413 # avoid cycle context -> subrepo -> cmdutil
2414 2414 from . import context
2415 2415
2416 2416 # amend will reuse the existing user if not specified, but the obsolete
2417 2417 # marker creation requires that the current user's name is specified.
2418 2418 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2419 2419 ui.username() # raise exception if username not set
2420 2420
2421 2421 ui.note(_('amending changeset %s\n') % old)
2422 2422 base = old.p1()
2423 2423
2424 2424 with repo.wlock(), repo.lock(), repo.transaction('amend'):
2425 2425 # Participating changesets:
2426 2426 #
2427 2427 # wctx o - workingctx that contains changes from working copy
2428 2428 # | to go into amending commit
2429 2429 # |
2430 2430 # old o - changeset to amend
2431 2431 # |
2432 2432 # base o - first parent of the changeset to amend
2433 2433 wctx = repo[None]
2434 2434
2435 2435 # Copy to avoid mutating input
2436 2436 extra = extra.copy()
2437 2437 # Update extra dict from amended commit (e.g. to preserve graft
2438 2438 # source)
2439 2439 extra.update(old.extra())
2440 2440
2441 2441 # Also update it from the wctx
2442 2442 extra.update(wctx.extra())
2443 2443
2444 2444 user = opts.get('user') or old.user()
2445 2445 date = opts.get('date') or old.date()
2446 2446
2447 2447 # Parse the date to allow comparison between date and old.date()
2448 2448 date = dateutil.parsedate(date)
2449 2449
2450 2450 if len(old.parents()) > 1:
2451 2451 # ctx.files() isn't reliable for merges, so fall back to the
2452 2452 # slower repo.status() method
2453 2453 files = set([fn for st in base.status(old)[:3]
2454 2454 for fn in st])
2455 2455 else:
2456 2456 files = set(old.files())
2457 2457
2458 2458 # add/remove the files to the working copy if the "addremove" option
2459 2459 # was specified.
2460 2460 matcher = scmutil.match(wctx, pats, opts)
2461 2461 if (opts.get('addremove')
2462 2462 and scmutil.addremove(repo, matcher, "", opts)):
2463 2463 raise error.Abort(
2464 2464 _("failed to mark all new/missing files as added/removed"))
2465 2465
2466 2466 # Check subrepos. This depends on in-place wctx._status update in
2467 2467 # subrepo.precommit(). To minimize the risk of this hack, we do
2468 2468 # nothing if .hgsub does not exist.
2469 2469 if '.hgsub' in wctx or '.hgsub' in old:
2470 2470 subs, commitsubs, newsubstate = subrepoutil.precommit(
2471 2471 ui, wctx, wctx._status, matcher)
2472 2472 # amend should abort if commitsubrepos is enabled
2473 2473 assert not commitsubs
2474 2474 if subs:
2475 2475 subrepoutil.writestate(repo, newsubstate)
2476 2476
2477 2477 ms = mergemod.mergestate.read(repo)
2478 2478 mergeutil.checkunresolved(ms)
2479 2479
2480 2480 filestoamend = set(f for f in wctx.files() if matcher(f))
2481 2481
2482 2482 changes = (len(filestoamend) > 0)
2483 2483 if changes:
2484 2484 # Recompute copies (avoid recording a -> b -> a)
2485 2485 copied = copies.pathcopies(base, wctx, matcher)
2486 2486 if old.p2().node() != nullid:
2487 2487 copied.update(copies.pathcopies(old.p2(), wctx, matcher))
2488 2488
2489 2489 # Prune files which were reverted by the updates: if old
2490 2490 # introduced file X and the file was renamed in the working
2491 2491 # copy, then those two files are the same and
2492 2492 # we can discard X from our list of files. Likewise if X
2493 2493 # was removed, it's no longer relevant. If X is missing (aka
2494 2494 # deleted), old X must be preserved.
2495 2495 files.update(filestoamend)
2496 2496 files = [f for f in files if (not samefile(f, wctx, base)
2497 2497 or f in wctx.deleted())]
2498 2498
2499 2499 def filectxfn(repo, ctx_, path):
2500 2500 try:
2501 2501 # If the file being considered is not amongst the files
2502 2502 # to be amended, we should return the file context from the
2503 2503 # old changeset. This avoids issues when only some files in
2504 2504 # the working copy are being amended but there are also
2505 2505 # changes to other files from the old changeset.
2506 2506 if path not in filestoamend:
2507 2507 return old.filectx(path)
2508 2508
2509 2509 # Return None for removed files.
2510 2510 if path in wctx.removed():
2511 2511 return None
2512 2512
2513 2513 fctx = wctx[path]
2514 2514 flags = fctx.flags()
2515 2515 mctx = context.memfilectx(repo, ctx_,
2516 2516 fctx.path(), fctx.data(),
2517 2517 islink='l' in flags,
2518 2518 isexec='x' in flags,
2519 2519 copied=copied.get(path))
2520 2520 return mctx
2521 2521 except KeyError:
2522 2522 return None
2523 2523 else:
2524 2524 ui.note(_('copying changeset %s to %s\n') % (old, base))
2525 2525
2526 2526 # Use version of files as in the old cset
2527 2527 def filectxfn(repo, ctx_, path):
2528 2528 try:
2529 2529 return old.filectx(path)
2530 2530 except KeyError:
2531 2531 return None
2532 2532
2533 2533 # See if we got a message from -m or -l, if not, open the editor with
2534 2534 # the message of the changeset to amend.
2535 2535 message = logmessage(ui, opts)
2536 2536
2537 2537 editform = mergeeditform(old, 'commit.amend')
2538 2538 editor = getcommiteditor(editform=editform,
2539 2539 **pycompat.strkwargs(opts))
2540 2540
2541 2541 if not message:
2542 2542 editor = getcommiteditor(edit=True, editform=editform)
2543 2543 message = old.description()
2544 2544
2545 2545 pureextra = extra.copy()
2546 2546 extra['amend_source'] = old.hex()
2547 2547
2548 2548 new = context.memctx(repo,
2549 2549 parents=[base.node(), old.p2().node()],
2550 2550 text=message,
2551 2551 files=files,
2552 2552 filectxfn=filectxfn,
2553 2553 user=user,
2554 2554 date=date,
2555 2555 extra=extra,
2556 2556 editor=editor)
2557 2557
2558 2558 newdesc = changelog.stripdesc(new.description())
2559 2559 if ((not changes)
2560 2560 and newdesc == old.description()
2561 2561 and user == old.user()
2562 2562 and date == old.date()
2563 2563 and pureextra == old.extra()):
2564 2564 # nothing changed. continuing here would create a new node
2565 2565 # anyway because of the amend_source noise.
2566 2566 #
2567 2567 # This is not what we expect from amend.
2568 2568 return old.node()
2569 2569
2570 2570 commitphase = None
2571 2571 if opts.get('secret'):
2572 2572 commitphase = phases.secret
2573 2573 newid = repo.commitctx(new)
2574 2574
2575 2575 # Reroute the working copy parent to the new changeset
2576 2576 repo.setparents(newid, nullid)
2577 2577 mapping = {old.node(): (newid,)}
2578 2578 obsmetadata = None
2579 2579 if opts.get('note'):
2580 2580 obsmetadata = {'note': encoding.fromlocal(opts['note'])}
2581 2581 backup = ui.configbool('ui', 'history-editing-backup')
2582 2582 scmutil.cleanupnodes(repo, mapping, 'amend', metadata=obsmetadata,
2583 2583 fixphase=True, targetphase=commitphase,
2584 2584 backup=backup)
2585 2585
2586 2586 # Fixing the dirstate because localrepo.commitctx does not update
2587 2587 # it. This is rather convenient because we did not need to update
2588 2588 # the dirstate for all the files in the new commit which commitctx
2589 2589 # could have done if it updated the dirstate. Now, we can
2590 2590 # selectively update the dirstate only for the amended files.
2591 2591 dirstate = repo.dirstate
2592 2592
2593 2593 # Update the state of the files which were added and
2594 2594 # modified in the amend to "normal" in the dirstate.
2595 2595 normalfiles = set(wctx.modified() + wctx.added()) & filestoamend
2596 2596 for f in normalfiles:
2597 2597 dirstate.normal(f)
2598 2598
2599 2599 # Update the state of files which were removed in the amend
2600 2600 # to "removed" in the dirstate.
2601 2601 removedfiles = set(wctx.removed()) & filestoamend
2602 2602 for f in removedfiles:
2603 2603 dirstate.drop(f)
2604 2604
2605 2605 return newid
2606 2606
2607 2607 def commiteditor(repo, ctx, subs, editform=''):
2608 2608 if ctx.description():
2609 2609 return ctx.description()
2610 2610 return commitforceeditor(repo, ctx, subs, editform=editform,
2611 2611 unchangedmessagedetection=True)
2612 2612
2613 2613 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2614 2614 editform='', unchangedmessagedetection=False):
2615 2615 if not extramsg:
2616 2616 extramsg = _("Leave message empty to abort commit.")
2617 2617
2618 2618 forms = [e for e in editform.split('.') if e]
2619 2619 forms.insert(0, 'changeset')
2620 2620 templatetext = None
2621 2621 while forms:
2622 2622 ref = '.'.join(forms)
2623 2623 if repo.ui.config('committemplate', ref):
2624 2624 templatetext = committext = buildcommittemplate(
2625 2625 repo, ctx, subs, extramsg, ref)
2626 2626 break
2627 2627 forms.pop()
2628 2628 else:
2629 2629 committext = buildcommittext(repo, ctx, subs, extramsg)
2630 2630
2631 2631 # run editor in the repository root
2632 2632 olddir = encoding.getcwd()
2633 2633 os.chdir(repo.root)
2634 2634
2635 2635 # make in-memory changes visible to external process
2636 2636 tr = repo.currenttransaction()
2637 2637 repo.dirstate.write(tr)
2638 2638 pending = tr and tr.writepending() and repo.root
2639 2639
2640 2640 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2641 2641 editform=editform, pending=pending,
2642 2642 repopath=repo.path, action='commit')
2643 2643 text = editortext
2644 2644
2645 2645 # strip away anything below this special string (used for editors that want
2646 2646 # to display the diff)
2647 2647 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2648 2648 if stripbelow:
2649 2649 text = text[:stripbelow.start()]
2650 2650
2651 2651 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2652 2652 os.chdir(olddir)
2653 2653
2654 2654 if finishdesc:
2655 2655 text = finishdesc(text)
2656 2656 if not text.strip():
2657 2657 raise error.Abort(_("empty commit message"))
2658 2658 if unchangedmessagedetection and editortext == templatetext:
2659 2659 raise error.Abort(_("commit message unchanged"))
2660 2660
2661 2661 return text
2662 2662
2663 2663 def buildcommittemplate(repo, ctx, subs, extramsg, ref):
2664 2664 ui = repo.ui
2665 2665 spec = formatter.templatespec(ref, None, None)
2666 2666 t = logcmdutil.changesettemplater(ui, repo, spec)
2667 2667 t.t.cache.update((k, templater.unquotestring(v))
2668 2668 for k, v in repo.ui.configitems('committemplate'))
2669 2669
2670 2670 if not extramsg:
2671 2671 extramsg = '' # ensure that extramsg is string
2672 2672
2673 2673 ui.pushbuffer()
2674 2674 t.show(ctx, extramsg=extramsg)
2675 2675 return ui.popbuffer()
2676 2676
2677 2677 def hgprefix(msg):
2678 2678 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2679 2679
2680 2680 def buildcommittext(repo, ctx, subs, extramsg):
2681 2681 edittext = []
2682 2682 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2683 2683 if ctx.description():
2684 2684 edittext.append(ctx.description())
2685 2685 edittext.append("")
2686 2686 edittext.append("") # Empty line between message and comments.
2687 2687 edittext.append(hgprefix(_("Enter commit message."
2688 2688 " Lines beginning with 'HG:' are removed.")))
2689 2689 edittext.append(hgprefix(extramsg))
2690 2690 edittext.append("HG: --")
2691 2691 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2692 2692 if ctx.p2():
2693 2693 edittext.append(hgprefix(_("branch merge")))
2694 2694 if ctx.branch():
2695 2695 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2696 2696 if bookmarks.isactivewdirparent(repo):
2697 2697 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2698 2698 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2699 2699 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2700 2700 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2701 2701 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2702 2702 if not added and not modified and not removed:
2703 2703 edittext.append(hgprefix(_("no files changed")))
2704 2704 edittext.append("")
2705 2705
2706 2706 return "\n".join(edittext)
2707 2707
2708 2708 def commitstatus(repo, node, branch, bheads=None, opts=None):
2709 2709 if opts is None:
2710 2710 opts = {}
2711 2711 ctx = repo[node]
2712 2712 parents = ctx.parents()
2713 2713
2714 2714 if (not opts.get('amend') and bheads and node not in bheads and not
2715 2715 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2716 2716 repo.ui.status(_('created new head\n'))
2717 2717 # The message is not printed for initial roots. For the other
2718 2718 # changesets, it is printed in the following situations:
2719 2719 #
2720 2720 # Par column: for the 2 parents with ...
2721 2721 # N: null or no parent
2722 2722 # B: parent is on another named branch
2723 2723 # C: parent is a regular non head changeset
2724 2724 # H: parent was a branch head of the current branch
2725 2725 # Msg column: whether we print "created new head" message
2726 2726 # In the following, it is assumed that there already exist some
2727 2727 # initial branch heads of the current branch, otherwise nothing is
2728 2728 # printed anyway.
2729 2729 #
2730 2730 # Par Msg Comment
2731 2731 # N N y additional topo root
2732 2732 #
2733 2733 # B N y additional branch root
2734 2734 # C N y additional topo head
2735 2735 # H N n usual case
2736 2736 #
2737 2737 # B B y weird additional branch root
2738 2738 # C B y branch merge
2739 2739 # H B n merge with named branch
2740 2740 #
2741 2741 # C C y additional head from merge
2742 2742 # C H n merge with a head
2743 2743 #
2744 2744 # H H n head merge: head count decreases
2745 2745
2746 2746 if not opts.get('close_branch'):
2747 2747 for r in parents:
2748 2748 if r.closesbranch() and r.branch() == branch:
2749 2749 repo.ui.status(_('reopening closed branch head %d\n') % r.rev())
2750 2750
2751 2751 if repo.ui.debugflag:
2752 2752 repo.ui.write(_('committed changeset %d:%s\n') % (ctx.rev(), ctx.hex()))
2753 2753 elif repo.ui.verbose:
2754 2754 repo.ui.write(_('committed changeset %d:%s\n') % (ctx.rev(), ctx))
2755 2755
2756 2756 def postcommitstatus(repo, pats, opts):
2757 2757 return repo.status(match=scmutil.match(repo[None], pats, opts))
2758 2758
2759 2759 def revert(ui, repo, ctx, parents, *pats, **opts):
2760 2760 opts = pycompat.byteskwargs(opts)
2761 2761 parent, p2 = parents
2762 2762 node = ctx.node()
2763 2763
2764 2764 mf = ctx.manifest()
2765 2765 if node == p2:
2766 2766 parent = p2
2767 2767
2768 2768 # need all matching names in dirstate and manifest of target rev,
2769 2769 # so have to walk both. do not print errors if files exist in one
2770 2770 # but not the other. in both cases, filesets should be evaluated against
2771 2771 # workingctx to get consistent result (issue4497). this means 'set:**'
2772 2772 # cannot be used to select missing files from target rev.
2773 2773
2774 2774 # `names` is a mapping for all elements in working copy and target revision
2775 2775 # The mapping is in the form:
2776 2776 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
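    # (For example, with purely illustrative values:
    #   names['lib/util.py'] = ('../lib/util.py', True)
    # i.e. the file was named explicitly and is shown relative to cwd.)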
2777 2777 names = {}
2778 2778
2779 2779 with repo.wlock():
2780 2780 ## filling of the `names` mapping
2781 2781 # walk dirstate to fill `names`
2782 2782
2783 2783 interactive = opts.get('interactive', False)
2784 2784 wctx = repo[None]
2785 2785 m = scmutil.match(wctx, pats, opts)
2786 2786
2787 2787 # we'll need this later
2788 2788 targetsubs = sorted(s for s in wctx.substate if m(s))
2789 2789
2790 2790 if not m.always():
2791 2791 matcher = matchmod.badmatch(m, lambda x, y: False)
2792 2792 for abs in wctx.walk(matcher):
2793 2793 names[abs] = m.rel(abs), m.exact(abs)
2794 2794
2795 2795 # walk target manifest to fill `names`
2796 2796
2797 2797 def badfn(path, msg):
2798 2798 if path in names:
2799 2799 return
2800 2800 if path in ctx.substate:
2801 2801 return
2802 2802 path_ = path + '/'
2803 2803 for f in names:
2804 2804 if f.startswith(path_):
2805 2805 return
2806 2806 ui.warn("%s: %s\n" % (m.rel(path), msg))
2807 2807
2808 2808 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2809 2809 if abs not in names:
2810 2810 names[abs] = m.rel(abs), m.exact(abs)
2811 2811
2812 2812 # Find status of all file in `names`.
2813 2813 m = scmutil.matchfiles(repo, names)
2814 2814
2815 2815 changes = repo.status(node1=node, match=m,
2816 2816 unknown=True, ignored=True, clean=True)
2817 2817 else:
2818 2818 changes = repo.status(node1=node, match=m)
2819 2819 for kind in changes:
2820 2820 for abs in kind:
2821 2821 names[abs] = m.rel(abs), m.exact(abs)
2822 2822
2823 2823 m = scmutil.matchfiles(repo, names)
2824 2824
2825 2825 modified = set(changes.modified)
2826 2826 added = set(changes.added)
2827 2827 removed = set(changes.removed)
2828 2828 _deleted = set(changes.deleted)
2829 2829 unknown = set(changes.unknown)
2830 2830 unknown.update(changes.ignored)
2831 2831 clean = set(changes.clean)
2832 2832 modadded = set()
2833 2833
2834 2834 # We need to account for the state of the file in the dirstate,
2835 2835 # even when we revert against something other than the parent. This will
2836 2836 # slightly alter the behavior of revert (doing a backup or not, delete
2837 2837 # or just forget etc).
2838 2838 if parent == node:
2839 2839 dsmodified = modified
2840 2840 dsadded = added
2841 2841 dsremoved = removed
2842 2842 # store all local modifications, useful later for rename detection
2843 2843 localchanges = dsmodified | dsadded
2844 2844 modified, added, removed = set(), set(), set()
2845 2845 else:
2846 2846 changes = repo.status(node1=parent, match=m)
2847 2847 dsmodified = set(changes.modified)
2848 2848 dsadded = set(changes.added)
2849 2849 dsremoved = set(changes.removed)
2850 2850 # store all local modifications, useful later for rename detection
2851 2851 localchanges = dsmodified | dsadded
2852 2852
2853 2853 # only take removes between wc and target into account
2854 2854 clean |= dsremoved - removed
2855 2855 dsremoved &= removed
2856 2856 # distinguish between dirstate remove and other removals
2857 2857 removed -= dsremoved
2858 2858
2859 2859 modadded = added & dsmodified
2860 2860 added -= modadded
2861 2861
2862 2862 # tell newly modified files apart.
2863 2863 dsmodified &= modified
2864 2864 dsmodified |= modified & dsadded # dirstate added may need backup
2865 2865 modified -= dsmodified
2866 2866
2867 2867 # We need to wait for some post-processing to update this set
2868 2868 # before making the distinction. The dirstate will be used for
2869 2869 # that purpose.
2870 2870 dsadded = added
2871 2871
2872 2872 # in case of merge, files that are actually added can be reported as
2873 2873 # modified, we need to post process the result
2874 2874 if p2 != nullid:
2875 2875 mergeadd = set(dsmodified)
2876 2876 for path in dsmodified:
2877 2877 if path in mf:
2878 2878 mergeadd.remove(path)
2879 2879 dsadded |= mergeadd
2880 2880 dsmodified -= mergeadd
2881 2881
2882 2882 # if f is a rename, update `names` to also revert the source
2883 2883 cwd = repo.getcwd()
2884 2884 for f in localchanges:
2885 2885 src = repo.dirstate.copied(f)
2886 2886 # XXX should we check for rename down to target node?
2887 2887 if src and src not in names and repo.dirstate[src] == 'r':
2888 2888 dsremoved.add(src)
2889 2889 names[src] = (repo.pathto(src, cwd), True)
2890 2890
2891 2891 # determine the exact nature of the deleted files
2892 2892 deladded = set(_deleted)
2893 2893 for path in _deleted:
2894 2894 if path in mf:
2895 2895 deladded.remove(path)
2896 2896 deleted = _deleted - deladded
2897 2897
2898 2898 # distinguish between files to forget and the others
2899 2899 added = set()
2900 2900 for abs in dsadded:
2901 2901 if repo.dirstate[abs] != 'a':
2902 2902 added.add(abs)
2903 2903 dsadded -= added
2904 2904
2905 2905 for abs in deladded:
2906 2906 if repo.dirstate[abs] == 'a':
2907 2907 dsadded.add(abs)
2908 2908 deladded -= dsadded
2909 2909
2910 2910 # For files marked as removed, we check if an unknown file is present at
2911 2911 # the same path. If such a file exists it may need to be backed up.
2912 2912 # Making the distinction at this stage helps keep the backup
2913 2913 # logic simpler.
2914 2914 removunk = set()
2915 2915 for abs in removed:
2916 2916 target = repo.wjoin(abs)
2917 2917 if os.path.lexists(target):
2918 2918 removunk.add(abs)
2919 2919 removed -= removunk
2920 2920
2921 2921 dsremovunk = set()
2922 2922 for abs in dsremoved:
2923 2923 target = repo.wjoin(abs)
2924 2924 if os.path.lexists(target):
2925 2925 dsremovunk.add(abs)
2926 2926 dsremoved -= dsremovunk
2927 2927
2928 2928 # actions to be actually performed by revert
2929 2929 # (<list of file>, <message>) tuple
2930 2930 actions = {'revert': ([], _('reverting %s\n')),
2931 2931 'add': ([], _('adding %s\n')),
2932 2932 'remove': ([], _('removing %s\n')),
2933 2933 'drop': ([], _('removing %s\n')),
2934 2934 'forget': ([], _('forgetting %s\n')),
2935 2935 'undelete': ([], _('undeleting %s\n')),
2936 2936 'noop': (None, _('no changes needed to %s\n')),
2937 2937 'unknown': (None, _('file not managed: %s\n')),
2938 2938 }
2939 2939
2940 2940 # "constant" that convey the backup strategy.
2941 2941 # All set to `discard` if `no-backup` is set do avoid checking
2942 2942 # no_backup lower in the code.
2943 2943 # These values are ordered for comparison purposes
2944 2944 backupinteractive = 3 # do backup if interactively modified
2945 2945 backup = 2 # unconditionally do backup
2946 2946 check = 1 # check if the existing file differs from target
2947 2947 discard = 0 # never do backup
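        # (e.g. the test ``backup <= dobackup`` used below treats the
        # unconditional strategy as "always back up", while ``check`` only
        # backs a file up when its content really differs from the target)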
2948 2948 if opts.get('no_backup'):
2949 2949 backupinteractive = backup = check = discard
2950 2950 if interactive:
2951 2951 dsmodifiedbackup = backupinteractive
2952 2952 else:
2953 2953 dsmodifiedbackup = backup
2954 2954 tobackup = set()
2955 2955
2956 2956 backupanddel = actions['remove']
2957 2957 if not opts.get('no_backup'):
2958 2958 backupanddel = actions['drop']
2959 2959
2960 2960 disptable = (
2961 2961 # dispatch table:
2962 2962 # file state
2963 2963 # action
2964 2964 # make backup
2965 2965
2966 2966 ## Sets that result in files being changed on disk
2967 2967 # Modified compared to target, no local change
2968 2968 (modified, actions['revert'], discard),
2969 2969 # Modified compared to target, but local file is deleted
2970 2970 (deleted, actions['revert'], discard),
2971 2971 # Modified compared to target, local change
2972 2972 (dsmodified, actions['revert'], dsmodifiedbackup),
2973 2973 # Added since target
2974 2974 (added, actions['remove'], discard),
2975 2975 # Added in working directory
2976 2976 (dsadded, actions['forget'], discard),
2977 2977 # Added since target, have local modification
2978 2978 (modadded, backupanddel, backup),
2979 2979 # Added since target but file is missing in working directory
2980 2980 (deladded, actions['drop'], discard),
2981 2981 # Removed since target, before working copy parent
2982 2982 (removed, actions['add'], discard),
2983 2983 # Same as `removed` but an unknown file exists at the same path
2984 2984 (removunk, actions['add'], check),
2985 2985 # Removed since target, marked as such in working copy parent
2986 2986 (dsremoved, actions['undelete'], discard),
2987 2987 # Same as `dsremoved` but an unknown file exists at the same path
2988 2988 (dsremovunk, actions['undelete'], check),
2989 2989 ## the following sets do not result in any file changes
2990 2990 # File with no modification
2991 2991 (clean, actions['noop'], discard),
2992 2992 # Existing file, not tracked anywhere
2993 2993 (unknown, actions['unknown'], discard),
2994 2994 )
2995 2995
2996 2996 for abs, (rel, exact) in sorted(names.items()):
2997 2997 # target file to be touched on disk (relative to cwd)
2998 2998 target = repo.wjoin(abs)
2999 2999 # search the entry in the dispatch table.
3000 3000 # if the file is in any of these sets, it was touched in the working
3001 3001 # directory parent and we are sure it needs to be reverted.
3002 3002 for table, (xlist, msg), dobackup in disptable:
3003 3003 if abs not in table:
3004 3004 continue
3005 3005 if xlist is not None:
3006 3006 xlist.append(abs)
3007 3007 if dobackup:
3008 3008 # If in interactive mode, don't automatically create
3009 3009 # .orig files (issue4793)
3010 3010 if dobackup == backupinteractive:
3011 3011 tobackup.add(abs)
3012 3012 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
3013 3013 bakname = scmutil.origpath(ui, repo, rel)
3014 3014 ui.note(_('saving current version of %s as %s\n') %
3015 3015 (rel, bakname))
3016 3016 if not opts.get('dry_run'):
3017 3017 if interactive:
3018 3018 util.copyfile(target, bakname)
3019 3019 else:
3020 3020 util.rename(target, bakname)
3021 3021 if opts.get('dry_run'):
3022 3022 if ui.verbose or not exact:
3023 3023 ui.status(msg % rel)
3024 3024 elif exact:
3025 3025 ui.warn(msg % rel)
3026 3026 break
3027 3027
3028 3028 if not opts.get('dry_run'):
3029 3029 needdata = ('revert', 'add', 'undelete')
3030 3030 oplist = [actions[name][0] for name in needdata]
3031 3031 prefetch = scmutil.prefetchfiles
3032 3032 matchfiles = scmutil.matchfiles
3033 3033 prefetch(repo, [ctx.rev()],
3034 3034 matchfiles(repo,
3035 3035 [f for sublist in oplist for f in sublist]))
3036 3036 _performrevert(repo, parents, ctx, names, actions, interactive,
3037 3037 tobackup)
3038 3038
3039 3039 if targetsubs:
3040 3040 # Revert the subrepos on the revert list
3041 3041 for sub in targetsubs:
3042 3042 try:
3043 3043 wctx.sub(sub).revert(ctx.substate[sub], *pats,
3044 3044 **pycompat.strkwargs(opts))
3045 3045 except KeyError:
3046 3046 raise error.Abort("subrepository '%s' does not exist in %s!"
3047 3047 % (sub, short(ctx.node())))
3048 3048
3049 3049 def _performrevert(repo, parents, ctx, names, actions, interactive=False,
3050 3050 tobackup=None):
3051 3051 """function that actually perform all the actions computed for revert
3052 3052
3053 3053 This is an independent function to let extension to plug in and react to
3054 3054 the imminent revert.
3055 3055
3056 3056 Make sure you have the working directory locked when calling this function.
3057 3057 """
3058 3058 parent, p2 = parents
3059 3059 node = ctx.node()
3060 3060 excluded_files = []
3061 3061
3062 3062 def checkout(f):
3063 3063 fc = ctx[f]
3064 3064 repo.wwrite(f, fc.data(), fc.flags())
3065 3065
3066 3066 def doremove(f):
3067 3067 try:
3068 3068 rmdir = repo.ui.configbool('experimental', 'removeemptydirs')
3069 3069 repo.wvfs.unlinkpath(f, rmdir=rmdir)
3070 3070 except OSError:
3071 3071 pass
3072 3072 repo.dirstate.remove(f)
3073 3073
3074 3074 def prntstatusmsg(action, f):
3075 3075 rel, exact = names[f]
3076 3076 if repo.ui.verbose or not exact:
3077 3077 repo.ui.status(actions[action][1] % rel)
3078 3078
3079 3079 audit_path = pathutil.pathauditor(repo.root, cached=True)
3080 3080 for f in actions['forget'][0]:
3081 3081 if interactive:
3082 3082 choice = repo.ui.promptchoice(
3083 3083 _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
3084 3084 if choice == 0:
3085 3085 prntstatusmsg('forget', f)
3086 3086 repo.dirstate.drop(f)
3087 3087 else:
3088 3088 excluded_files.append(f)
3089 3089 else:
3090 3090 prntstatusmsg('forget', f)
3091 3091 repo.dirstate.drop(f)
3092 3092 for f in actions['remove'][0]:
3093 3093 audit_path(f)
3094 3094 if interactive:
3095 3095 choice = repo.ui.promptchoice(
3096 3096 _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
3097 3097 if choice == 0:
3098 3098 prntstatusmsg('remove', f)
3099 3099 doremove(f)
3100 3100 else:
3101 3101 excluded_files.append(f)
3102 3102 else:
3103 3103 prntstatusmsg('remove', f)
3104 3104 doremove(f)
3105 3105 for f in actions['drop'][0]:
3106 3106 audit_path(f)
3107 3107 prntstatusmsg('drop', f)
3108 3108 repo.dirstate.remove(f)
3109 3109
3110 3110 normal = None
3111 3111 if node == parent:
3112 3112 # We're reverting to our parent. If possible, we'd like status
3113 3113 # to report the file as clean. We have to use normallookup for
3114 3114 # merges to avoid losing information about merged/dirty files.
3115 3115 if p2 != nullid:
3116 3116 normal = repo.dirstate.normallookup
3117 3117 else:
3118 3118 normal = repo.dirstate.normal
3119 3119
3120 3120 newlyaddedandmodifiedfiles = set()
3121 3121 if interactive:
3122 3122 # Prompt the user for changes to revert
3123 3123 torevert = [f for f in actions['revert'][0] if f not in excluded_files]
3124 3124 m = scmutil.matchfiles(repo, torevert)
3125 3125 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
3126 3126 diffopts.nodates = True
3127 3127 diffopts.git = True
3128 3128 operation = 'discard'
3129 3129 reversehunks = True
3130 3130 if node != parent:
3131 3131 operation = 'apply'
3132 3132 reversehunks = False
3133 3133 if reversehunks:
3134 3134 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3135 3135 else:
3136 3136 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3137 3137 originalchunks = patch.parsepatch(diff)
3138 3138
3139 3139 try:
3140 3140
3141 3141 chunks, opts = recordfilter(repo.ui, originalchunks,
3142 3142 operation=operation)
3143 3143 if reversehunks:
3144 3144 chunks = patch.reversehunks(chunks)
3145 3145
3146 3146 except error.PatchError as err:
3147 3147 raise error.Abort(_('error parsing patch: %s') % err)
3148 3148
3149 3149 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3150 3150 if tobackup is None:
3151 3151 tobackup = set()
3152 3152 # Apply changes
3153 3153 fp = stringio()
3154 3154 # chunks are serialized per file, but files aren't sorted
3155 3155 for f in sorted(set(c.header.filename() for c in chunks if ishunk(c))):
3156 3156 prntstatusmsg('revert', f)
3157 3157 for c in chunks:
3158 3158 if ishunk(c):
3159 3159 abs = c.header.filename()
3160 3160 # Create a backup file only if this hunk should be backed up
3161 3161 if c.header.filename() in tobackup:
3162 3162 target = repo.wjoin(abs)
3163 3163 bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
3164 3164 util.copyfile(target, bakname)
3165 3165 tobackup.remove(abs)
3166 3166 c.write(fp)
3167 3167 dopatch = fp.tell()
3168 3168 fp.seek(0)
3169 3169 if dopatch:
3170 3170 try:
3171 3171 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3172 3172 except error.PatchError as err:
3173 3173 raise error.Abort(pycompat.bytestr(err))
3174 3174 del fp
3175 3175 else:
3176 3176 for f in actions['revert'][0]:
3177 3177 prntstatusmsg('revert', f)
3178 3178 checkout(f)
3179 3179 if normal:
3180 3180 normal(f)
3181 3181
3182 3182 for f in actions['add'][0]:
3183 3183 # Don't check out modified files; they are already created by the diff
3184 3184 if f not in newlyaddedandmodifiedfiles:
3185 3185 prntstatusmsg('add', f)
3186 3186 checkout(f)
3187 3187 repo.dirstate.add(f)
3188 3188
3189 3189 normal = repo.dirstate.normallookup
3190 3190 if node == parent and p2 == nullid:
3191 3191 normal = repo.dirstate.normal
3192 3192 for f in actions['undelete'][0]:
3193 3193 prntstatusmsg('undelete', f)
3194 3194 checkout(f)
3195 3195 normal(f)
3196 3196
3197 3197 copied = copies.pathcopies(repo[parent], ctx)
3198 3198
3199 3199 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3200 3200 if f in copied:
3201 3201 repo.dirstate.copy(copied[f], f)
3202 3202
3203 3203 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3204 3204 # commands.outgoing. "missing" is "missing" of the result of
3205 3205 # "findcommonoutgoing()"
3206 3206 outgoinghooks = util.hooks()
3207 3207
3208 3208 # a list of (ui, repo) functions called by commands.summary
3209 3209 summaryhooks = util.hooks()
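# For example, a hypothetical extension could register a summary hook like
# this (assuming util.hooks exposes an add(source, hook) registration
# method); the 'exampleext' name and the message are illustrative only:
def _examplesummaryhook(ui, repo):
    # called by commands.summary with just (ui, repo), per the note above
    ui.note('exampleext: nothing extra to report\n')
summaryhooks.add('exampleext', _examplesummaryhook)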
3210 3210
3211 3211 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3212 3212 #
3213 3213 # functions should return tuple of booleans below, if 'changes' is None:
3214 3214 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3215 3215 #
3216 3216 # otherwise, 'changes' is a tuple of tuples below:
3217 3217 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3218 3218 # - (desturl, destbranch, destpeer, outgoing)
3219 3219 summaryremotehooks = util.hooks()
3220 3220
3221 3221 # A list of state files kept by multistep operations like graft.
3222 3222 # Since graft cannot be aborted, it is considered 'clearable' by update.
3223 3223 # note: bisect is intentionally excluded
3224 3224 # (state file, clearable, allowcommit, error, hint)
3225 3225 unfinishedstates = [
3226 3226 ('graftstate', True, False, _('graft in progress'),
3227 3227 _("use 'hg graft --continue' or 'hg graft --stop' to stop")),
3228 3228 ('updatestate', True, False, _('last update was interrupted'),
3229 3229 _("use 'hg update' to get a consistent checkout"))
3230 3230 ]
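# A hypothetical extension with its own multistep operation could register a
# matching state file here; the 'examplestate' entry and its messages are
# illustrative, following the (state file, clearable, allowcommit, error,
# hint) layout documented above:
unfinishedstates.append(
    ('examplestate', False, False, _('example operation in progress'),
     _("use 'hg example --continue' or 'hg example --abort'")))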
3231 3231
3232 3232 def checkunfinished(repo, commit=False):
3233 3233 '''Look for an unfinished multistep operation, like graft, and abort
3234 3234 if found. It's probably good to check this right before
3235 3235 bailifchanged().
3236 3236 '''
3237 3237 # Check for non-clearable states first, so things like rebase will take
3238 3238 # precedence over update.
3239 3239 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3240 3240 if clearable or (commit and allowcommit):
3241 3241 continue
3242 3242 if repo.vfs.exists(f):
3243 3243 raise error.Abort(msg, hint=hint)
3244 3244
3245 3245 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3246 3246 if not clearable or (commit and allowcommit):
3247 3247 continue
3248 3248 if repo.vfs.exists(f):
3249 3249 raise error.Abort(msg, hint=hint)
3250 3250
3251 3251 def clearunfinished(repo):
3252 3252 '''Check for unfinished operations (as above), and clear the ones
3253 3253 that are clearable.
3254 3254 '''
3255 3255 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3256 3256 if not clearable and repo.vfs.exists(f):
3257 3257 raise error.Abort(msg, hint=hint)
3258 3258 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3259 3259 if clearable and repo.vfs.exists(f):
3260 3260 util.unlink(repo.vfs.join(f))
3261 3261
3262 3262 afterresolvedstates = [
3263 3263 ('graftstate',
3264 3264 _('hg graft --continue')),
3265 3265 ]
3266 3266
3267 3267 def howtocontinue(repo):
3268 3268 '''Check for an unfinished operation and return the command to finish
3269 3269 it.
3270 3270
3271 3271 afterresolvedstates tuples define a .hg/{file} and the corresponding
3272 3272 command needed to finish it.
3273 3273
3274 3274 Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
3275 3275 a boolean.
3276 3276 '''
3277 3277 contmsg = _("continue: %s")
3278 3278 for f, msg in afterresolvedstates:
3279 3279 if repo.vfs.exists(f):
3280 3280 return contmsg % msg, True
3281 3281 if repo[None].dirty(missing=True, merge=False, branch=False):
3282 3282 return contmsg % _("hg commit"), False
3283 3283 return None, None
3284 3284
3285 3285 def checkafterresolved(repo):
3286 3286 '''Inform the user about the next action after completing hg resolve
3287 3287
3288 3288 If there's a matching afterresolvedstates, howtocontinue will yield
3289 3289 repo.ui.warn as the reporter.
3290 3290
3291 3291 Otherwise, it will yield repo.ui.note.
3292 3292 '''
3293 3293 msg, warning = howtocontinue(repo)
3294 3294 if msg is not None:
3295 3295 if warning:
3296 3296 repo.ui.warn("%s\n" % msg)
3297 3297 else:
3298 3298 repo.ui.note("%s\n" % msg)
3299 3299
3300 3300 def wrongtooltocontinue(repo, task):
3301 3301 '''Raise an abort suggesting how to properly continue if there is an
3302 3302 active task.
3303 3303
3304 3304 Uses howtocontinue() to find the active task.
3305 3305
3306 3306 If there's no task (repo.ui.note for 'hg commit'), it does not offer
3307 3307 a hint.
3308 3308 '''
3309 3309 after = howtocontinue(repo)
3310 3310 hint = None
3311 3311 if after[1]:
3312 3312 hint = after[0]
3313 3313 raise error.Abort(_('no %s in progress') % task, hint=hint)