merge: have merge.update use a matcher instead of partial fn...
Augie Fackler
r27344:43c00ca8 default
@@ -1,1414 +1,1414 b''
1 1 # histedit.py - interactive history editing for mercurial
2 2 #
3 3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """interactive history editing
8 8
9 9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 10 is as follows, assuming the following history::
11 11
12 12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 13 | Add delta
14 14 |
15 15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 16 | Add gamma
17 17 |
18 18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 19 | Add beta
20 20 |
21 21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 22 Add alpha
23 23
24 24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 25 file open in your editor::
26 26
27 27 pick c561b4e977df Add beta
28 28 pick 030b686bedc4 Add gamma
29 29 pick 7c2fd3b9020c Add delta
30 30
31 31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 32 #
33 33 # Commits are listed from least to most recent
34 34 #
35 35 # Commands:
36 36 # p, pick = use commit
37 37 # e, edit = use commit, but stop for amending
38 38 # f, fold = use commit, but combine it with the one above
39 39 # r, roll = like fold, but discard this commit's description
40 40 # d, drop = remove commit from history
41 41 # m, mess = edit commit message without changing commit content
42 42 #
43 43
44 44 In this file, lines beginning with ``#`` are ignored. You must specify a rule
45 45 for each revision in your history. For example, if you had meant to add gamma
46 46 before beta, and then wanted to add delta in the same revision as beta, you
47 47 would reorganize the file to look like this::
48 48
49 49 pick 030b686bedc4 Add gamma
50 50 pick c561b4e977df Add beta
51 51 fold 7c2fd3b9020c Add delta
52 52
53 53 # Edit history between c561b4e977df and 7c2fd3b9020c
54 54 #
55 55 # Commits are listed from least to most recent
56 56 #
57 57 # Commands:
58 58 # p, pick = use commit
59 59 # e, edit = use commit, but stop for amending
60 60 # f, fold = use commit, but combine it with the one above
61 61 # r, roll = like fold, but discard this commit's description
62 62 # d, drop = remove commit from history
63 63 # m, mess = edit commit message without changing commit content
64 64 #
65 65
66 66 At which point you close the editor and ``histedit`` starts working. When you
67 67 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
68 68 those revisions together, offering you a chance to clean up the commit message::
69 69
70 70 Add beta
71 71 ***
72 72 Add delta
73 73
74 74 Edit the commit message to your liking, then close the editor. For
75 75 this example, let's assume that the commit message was changed to
76 76 ``Add beta and delta.`` After histedit has run and had a chance to
77 77 remove any old or temporary revisions it needed, the history looks
78 78 like this::
79 79
80 80 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
81 81 | Add beta and delta.
82 82 |
83 83 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
84 84 | Add gamma
85 85 |
86 86 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
87 87 Add alpha
88 88
89 89 Note that ``histedit`` does *not* remove any revisions (even its own temporary
90 90 ones) until after it has completed all the editing operations, so it will
91 91 probably perform several strip operations when it's done. For the above example,
92 92 it had to run strip twice. Strip can be slow depending on a variety of factors,
93 93 so you might need to be a little patient. You can choose to keep the original
94 94 revisions by passing the ``--keep`` flag.
95 95
96 96 The ``edit`` operation will drop you back to a command prompt,
97 97 allowing you to edit files freely, or even use ``hg record`` to commit
98 98 some changes as a separate commit. When you're done, any remaining
99 99 uncommitted changes will be committed as well. Then run ``hg
100 100 histedit --continue`` to finish this step. You'll be prompted for a
101 101 new commit message, but the default commit message will be the
102 102 original message for the ``edit``-ed revision.
103 103
104 104 The ``message`` operation will give you a chance to revise a commit
105 105 message without changing the contents. It's a shortcut for doing
106 106 ``edit`` immediately followed by ``hg histedit --continue``.
107 107
108 108 If ``histedit`` encounters a conflict when moving a revision (while
109 109 handling ``pick`` or ``fold``), it'll stop in a similar manner to
110 110 ``edit`` with the difference that it won't prompt you for a commit
111 111 message when done. If you decide at this point that you don't like how
112 112 much work it will be to rearrange history, or that you made a mistake,
113 113 you can use ``hg histedit --abort`` to abandon the new changes you
114 114 have made and return to the state before you attempted to edit your
115 115 history.
116 116
117 117 If we clone the histedit-ed example repository above and add four more
118 118 changes, such that we have the following history::
119 119
120 120 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
121 121 | Add theta
122 122 |
123 123 o 5 140988835471 2009-04-27 18:04 -0500 stefan
124 124 | Add eta
125 125 |
126 126 o 4 122930637314 2009-04-27 18:04 -0500 stefan
127 127 | Add zeta
128 128 |
129 129 o 3 836302820282 2009-04-27 18:04 -0500 stefan
130 130 | Add epsilon
131 131 |
132 132 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
133 133 | Add beta and delta.
134 134 |
135 135 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
136 136 | Add gamma
137 137 |
138 138 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
139 139 Add alpha
140 140
141 141 If you run ``hg histedit --outgoing`` on the clone then it is the same
142 142 as running ``hg histedit 836302820282``. If you plan to push to a
143 143 repository that Mercurial does not detect to be related to the source
144 144 repo, you can add a ``--force`` option.
145 145
146 146 Histedit rule lines are truncated to 80 characters by default. You
147 147 can customize this behavior by setting a different length in your
148 148 configuration file::
149 149
150 150 [histedit]
151 151 linelen = 120 # truncate rule lines at 120 characters
152 152
153 153 ``hg histedit`` attempts to automatically choose an appropriate base
154 154 revision to use. To change which base revision is used, define a
155 155 revset in your configuration file::
156 156
157 157 [histedit]
158 158 defaultrev = only(.) & draft()
159 159 """
160 160
161 161 try:
162 162 import cPickle as pickle
163 163 pickle.dump # import now
164 164 except ImportError:
165 165 import pickle
166 166 import errno
167 167 import os
168 168 import sys
169 169
170 170 from mercurial import bundle2
171 171 from mercurial import cmdutil
172 172 from mercurial import discovery
173 173 from mercurial import error
174 174 from mercurial import copies
175 175 from mercurial import context
176 176 from mercurial import destutil
177 177 from mercurial import exchange
178 178 from mercurial import extensions
179 179 from mercurial import hg
180 180 from mercurial import node
181 181 from mercurial import repair
182 182 from mercurial import scmutil
183 183 from mercurial import util
184 184 from mercurial import obsolete
185 185 from mercurial import merge as mergemod
186 186 from mercurial.lock import release
187 187 from mercurial.i18n import _
188 188
189 189 cmdtable = {}
190 190 command = cmdutil.command(cmdtable)
191 191
192 192 class _constraints(object):
193 193 # aborts if there are multiple rules for one node
194 194 noduplicates = 'noduplicates'
195 195 # abort if the node does belong to the edited stack
196 196 forceother = 'forceother'
197 197 # abort if the node doesn't belong to the edited stack
198 198 noother = 'noother'
199 199
200 200 @classmethod
201 201 def known(cls):
202 202 return set([v for k, v in cls.__dict__.items() if k[0] != '_'])
203 203
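The attribute-enumeration pattern used by ``known()`` above can be sketched standalone (a hypothetical class, not the Mercurial API); the ``isinstance`` filter is an addition over the original that keeps the ``known`` classmethod itself out of the result:

```python
# Minimal sketch of the _constraints.known() pattern: collect the values of
# every public class attribute into a set.
class Constraints(object):
    noduplicates = 'noduplicates'  # multiple rules for one node
    forceother = 'forceother'      # node must lie outside the edited stack
    noother = 'noother'            # node must lie inside the edited stack

    @classmethod
    def known(cls):
        # filter out dunder entries and the classmethod object itself
        return set(v for k, v in cls.__dict__.items()
                   if not k.startswith('_') and isinstance(v, str))
```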
204 204 # Note for extension authors: ONLY specify testedwith = 'internal' for
205 205 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
206 206 # be specifying the version(s) of Mercurial they are tested with, or
207 207 # leave the attribute unspecified.
208 208 testedwith = 'internal'
209 209
210 210 # i18n: command names and abbreviations must remain untranslated
211 211 editcomment = _("""# Edit history between %s and %s
212 212 #
213 213 # Commits are listed from least to most recent
214 214 #
215 215 # Commands:
216 216 # p, pick = use commit
217 217 # e, edit = use commit, but stop for amending
218 218 # f, fold = use commit, but combine it with the one above
219 219 # r, roll = like fold, but discard this commit's description
220 220 # d, drop = remove commit from history
221 221 # m, mess = edit commit message without changing commit content
222 222 #
223 223 """)
224 224
225 225 class histeditstate(object):
226 226 def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
227 227 topmost=None, replacements=None, lock=None, wlock=None):
228 228 self.repo = repo
229 229 self.actions = actions
230 230 self.keep = keep
231 231 self.topmost = topmost
232 232 self.parentctxnode = parentctxnode
233 233 self.lock = lock
234 234 self.wlock = wlock
235 235 self.backupfile = None
236 236 if replacements is None:
237 237 self.replacements = []
238 238 else:
239 239 self.replacements = replacements
240 240
241 241 def read(self):
242 242 """Load histedit state from disk and set fields appropriately."""
243 243 try:
244 244 fp = self.repo.vfs('histedit-state', 'r')
245 245 except IOError as err:
246 246 if err.errno != errno.ENOENT:
247 247 raise
248 248 raise error.Abort(_('no histedit in progress'))
249 249
250 250 try:
251 251 data = pickle.load(fp)
252 252 parentctxnode, rules, keep, topmost, replacements = data
253 253 backupfile = None
254 254 except pickle.UnpicklingError:
255 255 data = self._load()
256 256 parentctxnode, rules, keep, topmost, replacements, backupfile = data
257 257
258 258 self.parentctxnode = parentctxnode
259 259 rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
260 260 actions = parserules(rules, self)
261 261 self.actions = actions
262 262 self.keep = keep
263 263 self.topmost = topmost
264 264 self.replacements = replacements
265 265 self.backupfile = backupfile
266 266
267 267 def write(self):
268 268 fp = self.repo.vfs('histedit-state', 'w')
269 269 fp.write('v1\n')
270 270 fp.write('%s\n' % node.hex(self.parentctxnode))
271 271 fp.write('%s\n' % node.hex(self.topmost))
272 272 fp.write('%s\n' % self.keep)
273 273 fp.write('%d\n' % len(self.actions))
274 274 for action in self.actions:
275 275 fp.write('%s\n' % action.tostate())
276 276 fp.write('%d\n' % len(self.replacements))
277 277 for replacement in self.replacements:
278 278 fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
279 279 for r in replacement[1])))
280 280 backupfile = self.backupfile
281 281 if not backupfile:
282 282 backupfile = ''
283 283 fp.write('%s\n' % backupfile)
284 284 fp.close()
285 285
286 286 def _load(self):
287 287 fp = self.repo.vfs('histedit-state', 'r')
288 288 lines = [l[:-1] for l in fp.readlines()]
289 289
290 290 index = 0
291 291 lines[index] # version number
292 292 index += 1
293 293
294 294 parentctxnode = node.bin(lines[index])
295 295 index += 1
296 296
297 297 topmost = node.bin(lines[index])
298 298 index += 1
299 299
300 300 keep = lines[index] == 'True'
301 301 index += 1
302 302
303 303 # Rules
304 304 rules = []
305 305 rulelen = int(lines[index])
306 306 index += 1
307 307 for i in xrange(rulelen):
308 308 ruleaction = lines[index]
309 309 index += 1
310 310 rule = lines[index]
311 311 index += 1
312 312 rules.append((ruleaction, rule))
313 313
314 314 # Replacements
315 315 replacements = []
316 316 replacementlen = int(lines[index])
317 317 index += 1
318 318 for i in xrange(replacementlen):
319 319 replacement = lines[index]
320 320 original = node.bin(replacement[:40])
321 321 succ = [node.bin(replacement[i:i + 40]) for i in
322 322 range(40, len(replacement), 40)]
323 323 replacements.append((original, succ))
324 324 index += 1
325 325
326 326 backupfile = lines[index]
327 327 index += 1
328 328
329 329 fp.close()
330 330
331 331 return parentctxnode, rules, keep, topmost, replacements, backupfile
332 332
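The replacement-record layout that ``_load`` decodes above (and ``write`` emits) can be illustrated with a small stand-in parser; the helper name is hypothetical, and hex strings stand in for the binary nodes the real code produces with ``node.bin``:

```python
# Hypothetical helper mirroring one replacement line of the histedit state
# file: a 40-character hex original node followed by zero or more
# concatenated 40-character hex successor nodes.
def parsereplacement(line):
    original = line[:40]
    successors = [line[i:i + 40] for i in range(40, len(line), 40)]
    return original, successors
```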
333 333 def clear(self):
334 334 if self.inprogress():
335 335 self.repo.vfs.unlink('histedit-state')
336 336
337 337 def inprogress(self):
338 338 return self.repo.vfs.exists('histedit-state')
339 339
340 340
341 341 class histeditaction(object):
342 342 def __init__(self, state, node):
343 343 self.state = state
344 344 self.repo = state.repo
345 345 self.node = node
346 346
347 347 @classmethod
348 348 def fromrule(cls, state, rule):
349 349 """Parses the given rule, returning an instance of the histeditaction.
350 350 """
351 351 rulehash = rule.strip().split(' ', 1)[0]
352 352 return cls(state, node.bin(rulehash))
353 353
354 354 def verify(self):
355 355 """Verifies semantic correctness of the rule"""
356 356 repo = self.repo
357 357 ha = node.hex(self.node)
358 358 try:
359 359 self.node = repo[ha].node()
360 360 except error.RepoError:
361 361 raise error.Abort(_('unknown changeset %s listed')
362 362 % ha[:12])
363 363
364 364 def torule(self):
365 365 """build a histedit rule line for an action
366 366
367 367 by default lines are in the form:
368 368 <hash> <rev> <summary>
369 369 """
370 370 ctx = self.repo[self.node]
371 371 summary = ''
372 372 if ctx.description():
373 373 summary = ctx.description().splitlines()[0]
374 374 line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
375 375 # trim to 75 columns by default so it's not stupidly wide in my editor
376 376 # (the 5 more are left for verb)
377 377 maxlen = self.repo.ui.configint('histedit', 'linelen', default=80)
378 378 maxlen = max(maxlen, 22) # avoid truncating hash
379 379 return util.ellipsis(line, maxlen)
380 380
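The truncation done by ``torule`` can be sketched without Mercurial; ``util.ellipsis`` is internal, so a crude character-based stand-in is used here (the real one is smarter about word boundaries), and the function names are only illustrative:

```python
def ellipsis(text, maxlength):
    # crude stand-in for mercurial.util.ellipsis
    if len(text) <= maxlength:
        return text
    return text[:maxlength - 3] + '...'

def torule(verb, shorthash, rev, summary, linelen=80):
    # mirror histeditaction.torule: clamp the configured width to at least
    # 22 columns so the verb and hash are never truncated away
    line = '%s %s %d %s' % (verb, shorthash, rev, summary)
    maxlen = max(linelen, 22)
    return ellipsis(line, maxlen)
```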
381 381 def tostate(self):
382 382 """Print an action in format used by histedit state files
383 383 (the first line is the verb, the second line is the node hash)
384 384 """
385 385 return "%s\n%s" % (self.verb, node.hex(self.node))
386 386
387 387 def constraints(self):
388 388 """Return a set of constraints that this action should be verified for
389 389 """
390 390 return set([_constraints.noduplicates, _constraints.noother])
391 391
392 392 def nodetoverify(self):
393 393 """Returns a node associated with the action that will be used for
394 394 verification purposes.
395 395
396 396 If the action doesn't correspond to a node it should return None
397 397 """
398 398 return self.node
399 399
400 400 def run(self):
401 401 """Runs the action. The default behavior is simply to apply the action's
402 402 rulectx onto the current parentctx."""
403 403 self.applychange()
404 404 self.continuedirty()
405 405 return self.continueclean()
406 406
407 407 def applychange(self):
408 408 """Applies the changes from this action's rulectx onto the current
409 409 parentctx, but does not commit them."""
410 410 repo = self.repo
411 411 rulectx = repo[self.node]
412 412 hg.update(repo, self.state.parentctxnode)
413 413 stats = applychanges(repo.ui, repo, rulectx, {})
414 414 if stats and stats[3] > 0:
415 415 raise error.InterventionRequired(_('Fix up the change and run '
416 416 'hg histedit --continue'))
417 417
418 418 def continuedirty(self):
419 419 """Continues the action when changes have been applied to the working
420 420 copy. The default behavior is to commit the dirty changes."""
421 421 repo = self.repo
422 422 rulectx = repo[self.node]
423 423
424 424 editor = self.commiteditor()
425 425 commit = commitfuncfor(repo, rulectx)
426 426
427 427 commit(text=rulectx.description(), user=rulectx.user(),
428 428 date=rulectx.date(), extra=rulectx.extra(), editor=editor)
429 429
430 430 def commiteditor(self):
431 431 """The editor to be used to edit the commit message."""
432 432 return False
433 433
434 434 def continueclean(self):
435 435 """Continues the action when the working copy is clean. The default
436 436 behavior is to accept the current commit as the new version of the
437 437 rulectx."""
438 438 ctx = self.repo['.']
439 439 if ctx.node() == self.state.parentctxnode:
440 440 self.repo.ui.warn(_('%s: empty changeset\n') %
441 441 node.short(self.node))
442 442 return ctx, [(self.node, tuple())]
443 443 if ctx.node() == self.node:
444 444 # Nothing changed
445 445 return ctx, []
446 446 return ctx, [(self.node, (ctx.node(),))]
447 447
448 448 def commitfuncfor(repo, src):
449 449 """Build a commit function for the replacement of <src>
450 450
451 451 This function ensures we apply the same treatment to all changesets.
452 452
453 453 - Add a 'histedit_source' entry in extra.
454 454
455 455 Note that fold has its own separate logic because its handling is a bit
456 456 different and not easily factored out of the fold method.
457 457 """
458 458 phasemin = src.phase()
459 459 def commitfunc(**kwargs):
460 460 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
461 461 try:
462 462 repo.ui.setconfig('phases', 'new-commit', phasemin,
463 463 'histedit')
464 464 extra = kwargs.get('extra', {}).copy()
465 465 extra['histedit_source'] = src.hex()
466 466 kwargs['extra'] = extra
467 467 return repo.commit(**kwargs)
468 468 finally:
469 469 repo.ui.restoreconfig(phasebackup)
470 470 return commitfunc
471 471
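The backup/override/restore dance around ``phases.new-commit`` in ``commitfuncfor`` follows a general pattern that can be sketched with a plain dict; all names below are hypothetical stand-ins for ``ui.backupconfig`` / ``setconfig`` / ``restoreconfig``:

```python
# Generic sketch of the pattern: snapshot a setting, override it for the
# duration of the operation, and restore it in a finally block even if the
# commit raises.
class Config(dict):
    def backup(self, key):
        return (key, self.get(key))

    def restore(self, saved):
        key, value = saved
        if value is None:
            self.pop(key, None)
        else:
            self[key] = value

cfg = Config({'phases.new-commit': 'draft'})
saved = cfg.backup('phases.new-commit')
try:
    # keep the replacement at least as private as the source changeset
    cfg['phases.new-commit'] = 'secret'
finally:
    cfg.restore(saved)
```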
472 472 def applychanges(ui, repo, ctx, opts):
473 473 """Merge changeset from ctx (only) in the current working directory"""
474 474 wcpar = repo.dirstate.parents()[0]
475 475 if ctx.p1().node() == wcpar:
476 476 # edits are "in place"; we do not need to make any merge,
477 477 # just apply changes on the parent for editing
478 478 cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
479 479 stats = None
480 480 else:
481 481 try:
482 482 # ui.forcemerge is an internal variable, do not document
483 483 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
484 484 'histedit')
485 485 stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
486 486 finally:
487 487 repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
488 488 return stats
489 489
490 490 def collapse(repo, first, last, commitopts, skipprompt=False):
491 491 """collapse the set of revisions from first to last as new one.
492 492
493 493 Expected commit options are:
494 494 - message
495 495 - date
496 496 - username
497 497 Commit message is edited in all cases.
498 498
499 499 This function works in memory."""
500 500 ctxs = list(repo.set('%d::%d', first, last))
501 501 if not ctxs:
502 502 return None
503 503 for c in ctxs:
504 504 if not c.mutable():
505 505 raise error.Abort(
506 506 _("cannot fold into public change %s") % node.short(c.node()))
507 507 base = first.parents()[0]
508 508
509 509 # commit a new version of the old changeset, including the update
510 510 # collect all files which might be affected
511 511 files = set()
512 512 for ctx in ctxs:
513 513 files.update(ctx.files())
514 514
515 515 # Recompute copies (avoid recording a -> b -> a)
516 516 copied = copies.pathcopies(base, last)
517 517
518 518 # prune files which were reverted by the updates
519 519 def samefile(f):
520 520 if f in last.manifest():
521 521 a = last.filectx(f)
522 522 if f in base.manifest():
523 523 b = base.filectx(f)
524 524 return (a.data() == b.data()
525 525 and a.flags() == b.flags())
526 526 else:
527 527 return False
528 528 else:
529 529 return f not in base.manifest()
530 530 files = [f for f in files if not samefile(f)]
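The ``samefile`` pruning above keeps only files whose content actually changed between the fold base and the folded head. With plain dicts standing in for manifests (a hypothetical helper, not Mercurial code; the real version also compares file flags), it looks like:

```python
# Dict-based sketch of the samefile() pruning in collapse(): a file is "same"
# if its content in the folded head equals the fold base (or it is absent
# from both), and such files are dropped from the new commit.
def prunefiles(files, basemanifest, lastmanifest):
    def samefile(f):
        if f in lastmanifest:
            return lastmanifest[f] == basemanifest.get(f)
        return f not in basemanifest
    return [f for f in files if not samefile(f)]
```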
531 531 # commit version of these files as defined by head
532 532 headmf = last.manifest()
533 533 def filectxfn(repo, ctx, path):
534 534 if path in headmf:
535 535 fctx = last[path]
536 536 flags = fctx.flags()
537 537 mctx = context.memfilectx(repo,
538 538 fctx.path(), fctx.data(),
539 539 islink='l' in flags,
540 540 isexec='x' in flags,
541 541 copied=copied.get(path))
542 542 return mctx
543 543 return None
544 544
545 545 if commitopts.get('message'):
546 546 message = commitopts['message']
547 547 else:
548 548 message = first.description()
549 549 user = commitopts.get('user')
550 550 date = commitopts.get('date')
551 551 extra = commitopts.get('extra')
552 552
553 553 parents = (first.p1().node(), first.p2().node())
554 554 editor = None
555 555 if not skipprompt:
556 556 editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
557 557 new = context.memctx(repo,
558 558 parents=parents,
559 559 text=message,
560 560 files=files,
561 561 filectxfn=filectxfn,
562 562 user=user,
563 563 date=date,
564 564 extra=extra,
565 565 editor=editor)
566 566 return repo.commitctx(new)
567 567
568 568 def _isdirtywc(repo):
569 569 return repo[None].dirty(missing=True)
570 570
571 571 def abortdirty():
572 572 raise error.Abort(_('working copy has pending changes'),
573 573 hint=_('amend, commit, or revert them and run histedit '
574 574 '--continue, or abort with histedit --abort'))
575 575
576 576
577 577 actiontable = {}
578 578 actionlist = []
579 579
580 580 def addhisteditaction(verbs):
581 581 def wrap(cls):
582 582 cls.verb = verbs[0]
583 583 for verb in verbs:
584 584 actiontable[verb] = cls
585 585 actionlist.append(cls)
586 586 return cls
587 587 return wrap
588 588
589 589
590 590 @addhisteditaction(['pick', 'p'])
591 591 class pick(histeditaction):
592 592 def run(self):
593 593 rulectx = self.repo[self.node]
594 594 if rulectx.parents()[0].node() == self.state.parentctxnode:
595 595 self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
596 596 return rulectx, []
597 597
598 598 return super(pick, self).run()
599 599
600 600 @addhisteditaction(['edit', 'e'])
601 601 class edit(histeditaction):
602 602 def run(self):
603 603 repo = self.repo
604 604 rulectx = repo[self.node]
605 605 hg.update(repo, self.state.parentctxnode)
606 606 applychanges(repo.ui, repo, rulectx, {})
607 607 raise error.InterventionRequired(
608 608 _('Make changes as needed, you may commit or record as needed '
609 609 'now.\nWhen you are finished, run hg histedit --continue to '
610 610 'resume.'))
611 611
612 612 def commiteditor(self):
613 613 return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')
614 614
615 615 @addhisteditaction(['fold', 'f'])
616 616 class fold(histeditaction):
617 617 def continuedirty(self):
618 618 repo = self.repo
619 619 rulectx = repo[self.node]
620 620
621 621 commit = commitfuncfor(repo, rulectx)
622 622 commit(text='fold-temp-revision %s' % node.short(self.node),
623 623 user=rulectx.user(), date=rulectx.date(),
624 624 extra=rulectx.extra())
625 625
626 626 def continueclean(self):
627 627 repo = self.repo
628 628 ctx = repo['.']
629 629 rulectx = repo[self.node]
630 630 parentctxnode = self.state.parentctxnode
631 631 if ctx.node() == parentctxnode:
632 632 repo.ui.warn(_('%s: empty changeset\n') %
633 633 node.short(self.node))
634 634 return ctx, [(self.node, (parentctxnode,))]
635 635
636 636 parentctx = repo[parentctxnode]
637 637 newcommits = set(c.node() for c in repo.set('(%d::. - %d)', parentctx,
638 638 parentctx))
639 639 if not newcommits:
640 640 repo.ui.warn(_('%s: cannot fold - working copy is not a '
641 641 'descendant of previous commit %s\n') %
642 642 (node.short(self.node), node.short(parentctxnode)))
643 643 return ctx, [(self.node, (ctx.node(),))]
644 644
645 645 middlecommits = newcommits.copy()
646 646 middlecommits.discard(ctx.node())
647 647
648 648 return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
649 649 middlecommits)
650 650
651 651 def skipprompt(self):
652 652 """Returns true if the rule should skip the message editor.
653 653
654 654 For example, 'fold' wants to show an editor, but 'rollup'
655 655 doesn't want to.
656 656 """
657 657 return False
658 658
659 659 def mergedescs(self):
660 660 """Returns true if the rule should merge messages of multiple changes.
661 661
662 662 This exists mainly so that 'rollup' rules can be a subclass of
663 663 'fold'.
664 664 """
665 665 return True
666 666
667 667 def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
668 668 parent = ctx.parents()[0].node()
669 669 hg.update(repo, parent)
670 670 ### prepare new commit data
671 671 commitopts = {}
672 672 commitopts['user'] = ctx.user()
673 673 # commit message
674 674 if not self.mergedescs():
675 675 newmessage = ctx.description()
676 676 else:
677 677 newmessage = '\n***\n'.join(
678 678 [ctx.description()] +
679 679 [repo[r].description() for r in internalchanges] +
680 680 [oldctx.description()]) + '\n'
681 681 commitopts['message'] = newmessage
682 682 # date
683 683 commitopts['date'] = max(ctx.date(), oldctx.date())
684 684 extra = ctx.extra().copy()
685 685 # histedit_source
686 686 # note: ctx is likely a temporary commit but that's the best we can do
687 687 # here. This is sufficient to solve issue3681 anyway.
688 688 extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
689 689 commitopts['extra'] = extra
690 690 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
691 691 try:
692 692 phasemin = max(ctx.phase(), oldctx.phase())
693 693 repo.ui.setconfig('phases', 'new-commit', phasemin, 'histedit')
694 694 n = collapse(repo, ctx, repo[newnode], commitopts,
695 695 skipprompt=self.skipprompt())
696 696 finally:
697 697 repo.ui.restoreconfig(phasebackup)
698 698 if n is None:
699 699 return ctx, []
700 700 hg.update(repo, n)
701 701 replacements = [(oldctx.node(), (newnode,)),
702 702 (ctx.node(), (n,)),
703 703 (newnode, (n,)),
704 704 ]
705 705 for ich in internalchanges:
706 706 replacements.append((ich, (n,)))
707 707 return repo[n], replacements
708 708
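The ``***`` separator shown in the module docstring is produced by the message merge in ``finishfold``; reduced to a standalone sketch (the function name is hypothetical):

```python
# Sketch of the commit-message merge finishfold performs for 'fold': the
# first commit's message, any intermediate messages, and the folded commit's
# message, joined by the '***' separator shown in the docstring.
def joindescs(first, middle, last):
    return '\n***\n'.join([first] + list(middle) + [last]) + '\n'
```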
709 709 class base(histeditaction):
710 710 def constraints(self):
711 711 return set([_constraints.forceother])
712 712
713 713 def run(self):
714 714 if self.repo['.'].node() != self.node:
715 mergemod.update(self.repo, self.node, False, True, False)
716 # branchmerge, force, partial)
715 mergemod.update(self.repo, self.node, False, True)
716 # branchmerge, force)
717 717 return self.continueclean()
718 718
719 719 def continuedirty(self):
720 720 abortdirty()
721 721
722 722 def continueclean(self):
723 723 basectx = self.repo['.']
724 724 return basectx, []
725 725
726 726 @addhisteditaction(['_multifold'])
727 727 class _multifold(fold):
728 728 """fold subclass used for when multiple folds happen in a row
729 729
730 730 We only want to fire the editor for the folded message once when
731 731 (say) four changes are folded down into a single change. This is
732 732 similar to rollup, but we should preserve both messages so that
733 733 when the last fold operation runs we can show the user all the
734 734 commit messages in their editor.
735 735 """
736 736 def skipprompt(self):
737 737 return True
738 738
739 739 @addhisteditaction(["roll", "r"])
740 740 class rollup(fold):
741 741 def mergedescs(self):
742 742 return False
743 743
744 744 def skipprompt(self):
745 745 return True
746 746
747 747 @addhisteditaction(["drop", "d"])
748 748 class drop(histeditaction):
749 749 def run(self):
750 750 parentctx = self.repo[self.state.parentctxnode]
751 751 return parentctx, [(self.node, tuple())]
752 752
753 753 @addhisteditaction(["mess", "m"])
754 754 class message(histeditaction):
755 755 def commiteditor(self):
756 756 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
757 757
758 758 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
759 759 """utility function to find the first outgoing changeset
760 760
761 761 Used by initialization code"""
762 762 if opts is None:
763 763 opts = {}
764 764 dest = ui.expandpath(remote or 'default-push', remote or 'default')
765 765 dest, revs = hg.parseurl(dest, None)[:2]
766 766 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
767 767
768 768 revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
769 769 other = hg.peer(repo, opts, dest)
770 770
771 771 if revs:
772 772 revs = [repo.lookup(rev) for rev in revs]
773 773
774 774 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
775 775 if not outgoing.missing:
776 776 raise error.Abort(_('no outgoing ancestors'))
777 777 roots = list(repo.revs("roots(%ln)", outgoing.missing))
778 778 if 1 < len(roots):
779 779 msg = _('there are ambiguous outgoing revisions')
780 780 hint = _('see "hg help histedit" for more detail')
781 781 raise error.Abort(msg, hint=hint)
782 782 return repo.lookup(roots[0])
783 783
784 784
785 785 @command('histedit',
786 786 [('', 'commands', '',
787 787 _('read history edits from the specified file'), _('FILE')),
788 788 ('c', 'continue', False, _('continue an edit already in progress')),
789 789 ('', 'edit-plan', False, _('edit remaining actions list')),
790 790 ('k', 'keep', False,
791 791 _("don't strip old nodes after edit is complete")),
792 792 ('', 'abort', False, _('abort an edit in progress')),
793 793 ('o', 'outgoing', False, _('changesets not found in destination')),
794 794 ('f', 'force', False,
795 795 _('force outgoing even for unrelated repositories')),
796 796 ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
797 797 _("[ANCESTOR] | --outgoing [URL]"))
798 798 def histedit(ui, repo, *freeargs, **opts):
799 799 """interactively edit changeset history
800 800
801 801 This command edits changesets between an ANCESTOR and the parent of
802 802 the working directory.
803 803
804 804 The value from the "histedit.defaultrev" config option is used as a
805 805 revset to select the base revision when ANCESTOR is not specified.
806 806 The first revision returned by the revset is used. By default, this
807 807 selects the editable history that is unique to the ancestry of the
808 808 working directory.
809 809
810 810 With --outgoing, this edits changesets not found in the
811 811 destination repository. If URL of the destination is omitted, the
812 812 'default-push' (or 'default') path will be used.
813 813
814 814 For safety, this command is also aborted if there are ambiguous
815 815 outgoing revisions which may confuse users: for example, if there
816 816 are multiple branches containing outgoing revisions.
817 817
818 818 Use "min(outgoing() and ::.)" or similar revset specification
819 819 instead of --outgoing to specify edit target revision exactly in
820 820 such ambiguous situation. See :hg:`help revsets` for detail about
821 821 selecting revisions.
822 822
823 823 .. container:: verbose
824 824
825 825 Examples:
826 826
827 827 - A number of changes have been made.
828 828 Revision 3 is no longer needed.
829 829
830 830 Start history editing from revision 3::
831 831
832 832 hg histedit -r 3
833 833
834 834 An editor opens, containing the list of revisions,
835 835 with specific actions specified::
836 836
837 837 pick 5339bf82f0ca 3 Zworgle the foobar
838 838 pick 8ef592ce7cc4 4 Bedazzle the zerlog
839 839 pick 0a9639fcda9d 5 Morgify the cromulancy
840 840
841 841 Additional information about the possible actions
842 842 to take appears below the list of revisions.
843 843
844 844 To remove revision 3 from the history,
845 845 its action (at the beginning of the relevant line)
846 846 is changed to 'drop'::
847 847
848 848 drop 5339bf82f0ca 3 Zworgle the foobar
849 849 pick 8ef592ce7cc4 4 Bedazzle the zerlog
850 850 pick 0a9639fcda9d 5 Morgify the cromulancy
851 851
852 852 - A number of changes have been made.
853 853 Revision 2 and 4 need to be swapped.
854 854
855 855 Start history editing from revision 2::
856 856
857 857 hg histedit -r 2
858 858
859 859 An editor opens, containing the list of revisions,
860 860 with specific actions specified::
861 861
862 862 pick 252a1af424ad 2 Blorb a morgwazzle
863 863 pick 5339bf82f0ca 3 Zworgle the foobar
864 864 pick 8ef592ce7cc4 4 Bedazzle the zerlog
865 865
866 866 To swap revision 2 and 4, its lines are swapped
867 867 in the editor::
868 868
869 869 pick 8ef592ce7cc4 4 Bedazzle the zerlog
870 870 pick 5339bf82f0ca 3 Zworgle the foobar
871 871 pick 252a1af424ad 2 Blorb a morgwazzle
872 872
873 873 Returns 0 on success, 1 if user intervention is required (not only
874 874 for intentional "edit" command, but also for resolving unexpected
875 875 conflicts).
876 876 """
877 877 state = histeditstate(repo)
878 878 try:
879 879 state.wlock = repo.wlock()
880 880 state.lock = repo.lock()
881 881 _histedit(ui, repo, state, *freeargs, **opts)
882 882 except error.Abort:
883 883 if repo.vfs.exists('histedit-last-edit.txt'):
884 884 ui.warn(_('warning: histedit rules saved '
885 885 'to: .hg/histedit-last-edit.txt\n'))
886 886 raise
887 887 finally:
888 888 release(state.lock, state.wlock)
889 889
890 890 def _histedit(ui, repo, state, *freeargs, **opts):
891 891 # TODO only abort if we try to histedit mq patches, not just
892 892 # blanket if mq patches are applied somewhere
893 893 mq = getattr(repo, 'mq', None)
894 894 if mq and mq.applied:
895 895 raise error.Abort(_('source has mq patches applied'))
896 896
897 897 # basic argument incompatibility processing
898 898 outg = opts.get('outgoing')
899 899 cont = opts.get('continue')
900 900 editplan = opts.get('edit_plan')
901 901 abort = opts.get('abort')
902 902 force = opts.get('force')
903 903 rules = opts.get('commands', '')
904 904 revs = opts.get('rev', [])
905 905 goal = 'new' # the goal of this invocation: 'new', 'continue', or 'abort'
906 906 if force and not outg:
907 907 raise error.Abort(_('--force only allowed with --outgoing'))
908 908 if cont:
909 909 if any((outg, abort, revs, freeargs, rules, editplan)):
910 910 raise error.Abort(_('no arguments allowed with --continue'))
911 911 goal = 'continue'
912 912 elif abort:
913 913 if any((outg, revs, freeargs, rules, editplan)):
914 914 raise error.Abort(_('no arguments allowed with --abort'))
915 915 goal = 'abort'
916 916 elif editplan:
917 917 if any((outg, revs, freeargs)):
918 918 raise error.Abort(_('only --commands argument allowed with '
919 919 '--edit-plan'))
920 920 goal = 'edit-plan'
921 921 else:
922 922 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
923 923 raise error.Abort(_('history edit already in progress, try '
924 924 '--continue or --abort'))
925 925 if outg:
926 926 if revs:
927 927 raise error.Abort(_('no revisions allowed with --outgoing'))
928 928 if len(freeargs) > 1:
929 929 raise error.Abort(
930 930 _('only one repo argument allowed with --outgoing'))
931 931 else:
932 932 revs.extend(freeargs)
933 933 if len(revs) == 0:
934 934 defaultrev = destutil.desthistedit(ui, repo)
935 935 if defaultrev is not None:
936 936 revs.append(defaultrev)
937 937
938 938 if len(revs) != 1:
939 939 raise error.Abort(
940 940 _('histedit requires exactly one ancestor revision'))
941 941
942 942
943 943 replacements = []
944 944 state.keep = opts.get('keep', False)
945 945 supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
946 946
947 947 # rebuild state
948 948 if goal == 'continue':
949 949 state.read()
950 950 state = bootstrapcontinue(ui, state, opts)
951 951 elif goal == 'edit-plan':
952 952 state.read()
953 953 if not rules:
954 954 comment = editcomment % (node.short(state.parentctxnode),
955 955 node.short(state.topmost))
956 956 rules = ruleeditor(repo, ui, state.actions, comment)
957 957 else:
958 958 if rules == '-':
959 959 f = sys.stdin
960 960 else:
961 961 f = open(rules)
962 962 rules = f.read()
963 963 f.close()
964 964 actions = parserules(rules, state)
965 965 ctxs = [repo[act.nodetoverify()]
966 966 for act in state.actions if act.nodetoverify()]
967 967 verifyactions(actions, state, ctxs)
968 968 state.actions = actions
969 969 state.write()
970 970 return
971 971 elif goal == 'abort':
972 972 try:
973 973 state.read()
974 974 tmpnodes, leafs = newnodestoabort(state)
975 975 ui.debug('restore wc to old parent %s\n'
976 976 % node.short(state.topmost))
977 977
978 978 # Recover our old commits if necessary
979 979 if state.topmost not in repo and state.backupfile:
980 980 backupfile = repo.join(state.backupfile)
981 981 f = hg.openpath(ui, backupfile)
982 982 gen = exchange.readbundle(ui, f, backupfile)
983 983 tr = repo.transaction('histedit.abort')
984 984 try:
985 985 if not isinstance(gen, bundle2.unbundle20):
986 986 gen.apply(repo, 'histedit', 'bundle:' + backupfile)
987 987 if isinstance(gen, bundle2.unbundle20):
988 988 bundle2.applybundle(repo, gen, tr,
989 989 source='histedit',
990 990 url='bundle:' + backupfile)
991 991 tr.close()
992 992 finally:
993 993 tr.release()
994 994
995 995 os.remove(backupfile)
996 996
997 997 # check whether we should update away
998 998 if repo.unfiltered().revs('parents() and (%n or %ln::)',
999 999 state.parentctxnode, leafs | tmpnodes):
1000 1000 hg.clean(repo, state.topmost)
1001 1001 cleanupnode(ui, repo, 'created', tmpnodes)
1002 1002 cleanupnode(ui, repo, 'temp', leafs)
1003 1003 except Exception:
1004 1004 if state.inprogress():
1005 1005 ui.warn(_('warning: encountered an exception during histedit '
1006 1006 '--abort; the repository may not have been completely '
1007 1007 'cleaned up\n'))
1008 1008 raise
1009 1009 finally:
1010 1010 state.clear()
1011 1011 return
1012 1012 else:
1013 1013 cmdutil.checkunfinished(repo)
1014 1014 cmdutil.bailifchanged(repo)
1015 1015
1016 1016 if repo.vfs.exists('histedit-last-edit.txt'):
1017 1017 repo.vfs.unlink('histedit-last-edit.txt')
1018 1018 topmost, empty = repo.dirstate.parents()
1019 1019 if outg:
1020 1020 if freeargs:
1021 1021 remote = freeargs[0]
1022 1022 else:
1023 1023 remote = None
1024 1024 root = findoutgoing(ui, repo, remote, force, opts)
1025 1025 else:
1026 1026 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1027 1027 if len(rr) != 1:
1028 1028 raise error.Abort(_('The specified revisions must have '
1029 1029 'exactly one common root'))
1030 1030 root = rr[0].node()
1031 1031
1032 1032 revs = between(repo, root, topmost, state.keep)
1033 1033 if not revs:
1034 1034 raise error.Abort(_('%s is not an ancestor of working directory') %
1035 1035 node.short(root))
1036 1036
1037 1037 ctxs = [repo[r] for r in revs]
1038 1038 if not rules:
1039 1039 comment = editcomment % (node.short(root), node.short(topmost))
1040 1040 actions = [pick(state, r) for r in revs]
1041 1041 rules = ruleeditor(repo, ui, actions, comment)
1042 1042 else:
1043 1043 if rules == '-':
1044 1044 f = sys.stdin
1045 1045 else:
1046 1046 f = open(rules)
1047 1047 rules = f.read()
1048 1048 f.close()
1049 1049 actions = parserules(rules, state)
1050 1050 verifyactions(actions, state, ctxs)
1051 1051
1052 1052 parentctxnode = repo[root].parents()[0].node()
1053 1053
1054 1054 state.parentctxnode = parentctxnode
1055 1055 state.actions = actions
1056 1056 state.topmost = topmost
1057 1057 state.replacements = replacements
1058 1058
1059 1059 # Create a backup so we can always abort completely.
1060 1060 backupfile = None
1061 1061 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1062 1062 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1063 1063 'histedit')
1064 1064 state.backupfile = backupfile
1065 1065
1066 1066 # preprocess rules so that we can hide inner folds from the user
1067 1067 # and only show one editor
1068 1068 actions = state.actions[:]
1069 1069 for idx, (action, nextact) in enumerate(
1070 1070 zip(actions, actions[1:] + [None])):
1071 1071 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1072 1072 state.actions[idx].__class__ = _multifold
1073 1073
1074 1074 while state.actions:
1075 1075 state.write()
1076 1076 actobj = state.actions.pop(0)
1077 1077 ui.debug('histedit: processing %s %s\n' % (actobj.verb,
1078 1078 actobj.torule()))
1079 1079 parentctx, replacement_ = actobj.run()
1080 1080 state.parentctxnode = parentctx.node()
1081 1081 state.replacements.extend(replacement_)
1082 1082 state.write()
1083 1083
1084 1084 hg.update(repo, state.parentctxnode)
1085 1085
1086 1086 mapping, tmpnodes, created, ntm = processreplacement(state)
1087 1087 if mapping:
1088 1088 for prec, succs in mapping.iteritems():
1089 1089 if not succs:
1090 1090 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1091 1091 else:
1092 1092 ui.debug('histedit: %s is replaced by %s\n' % (
1093 1093 node.short(prec), node.short(succs[0])))
1094 1094 if len(succs) > 1:
1095 1095 m = 'histedit: %s'
1096 1096 for n in succs[1:]:
1097 1097 ui.debug(m % node.short(n))
1098 1098
1099 1099 if supportsmarkers:
1100 1100 # Only create markers if the temp nodes weren't already removed.
1101 1101 obsolete.createmarkers(repo, ((repo[t],()) for t in sorted(tmpnodes)
1102 1102 if t in repo))
1103 1103 else:
1104 1104 cleanupnode(ui, repo, 'temp', tmpnodes)
1105 1105
1106 1106 if not state.keep:
1107 1107 if mapping:
1108 1108 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1109 1109 # TODO update mq state
1110 1110 if supportsmarkers:
1111 1111 markers = []
1112 1112 # sort by revision number because it sounds "right"
1113 1113 for prec in sorted(mapping, key=repo.changelog.rev):
1114 1114 succs = mapping[prec]
1115 1115 markers.append((repo[prec],
1116 1116 tuple(repo[s] for s in succs)))
1117 1117 if markers:
1118 1118 obsolete.createmarkers(repo, markers)
1119 1119 else:
1120 1120 cleanupnode(ui, repo, 'replaced', mapping)
1121 1121
1122 1122 state.clear()
1123 1123 if os.path.exists(repo.sjoin('undo')):
1124 1124 os.unlink(repo.sjoin('undo'))
1125 1125
1126 1126 def bootstrapcontinue(ui, state, opts):
1127 1127 repo = state.repo
1128 1128 if state.actions:
1129 1129 actobj = state.actions.pop(0)
1130 1130
1131 1131 if _isdirtywc(repo):
1132 1132 actobj.continuedirty()
1133 1133 if _isdirtywc(repo):
1134 1134 abortdirty()
1135 1135
1136 1136 parentctx, replacements = actobj.continueclean()
1137 1137
1138 1138 state.parentctxnode = parentctx.node()
1139 1139 state.replacements.extend(replacements)
1140 1140
1141 1141 return state
1142 1142
1143 1143 def between(repo, old, new, keep):
1144 1144 """select and validate the set of revisions to edit
1145 1145
1146 1146 When keep is false, the specified set can't have children."""
1147 1147 ctxs = list(repo.set('%n::%n', old, new))
1148 1148 if ctxs and not keep:
1149 1149 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1150 1150 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1151 1151 raise error.Abort(_('cannot edit history that would orphan nodes'))
1152 1152 if repo.revs('(%ld) and merge()', ctxs):
1153 1153 raise error.Abort(_('cannot edit history that contains merges'))
1154 1154 root = ctxs[0] # list is already sorted by repo.set
1155 1155 if not root.mutable():
1156 1156 raise error.Abort(_('cannot edit public changeset: %s') % root,
1157 1157 hint=_('see "hg help phases" for details'))
1158 1158 return [c.node() for c in ctxs]
1159 1159
1160 1160 def ruleeditor(repo, ui, actions, editcomment=""):
1161 1161 """open an editor to edit rules
1162 1162
1163 1163 rules are in the format [ [act, ctx], ...] like in state.rules
1164 1164 """
1165 1165 rules = '\n'.join([act.torule() for act in actions])
1166 1166 rules += '\n\n'
1167 1167 rules += editcomment
1168 1168 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'})
1169 1169
1170 1170 # Save edit rules in .hg/histedit-last-edit.txt in case
1171 1171 # the user needs to ask for help after something
1172 1172 # surprising happens.
1173 1173 f = open(repo.join('histedit-last-edit.txt'), 'w')
1174 1174 f.write(rules)
1175 1175 f.close()
1176 1176
1177 1177 return rules
1178 1178
1179 1179 def parserules(rules, state):
1180 1180 """Read the histedit rules string and return list of action objects """
1181 1181 rules = [l for l in (r.strip() for r in rules.splitlines())
1182 1182 if l and not l.startswith('#')]
1183 1183 actions = []
1184 1184 for r in rules:
1185 1185 if ' ' not in r:
1186 1186 raise error.Abort(_('malformed line "%s"') % r)
1187 1187 verb, rest = r.split(' ', 1)
1188 1188
1189 1189 if verb not in actiontable:
1190 1190 raise error.Abort(_('unknown action "%s"') % verb)
1191 1191
1192 1192 action = actiontable[verb].fromrule(state, rest)
1193 1193 actions.append(action)
1194 1194 return actions
1195 1195
1196 1196 def verifyactions(actions, state, ctxs):
1197 1197 """Verify that there is exactly one action per given changeset, and
1198 1198 check other constraints.
1199 1199
1200 1200 Will abort if there are too many or too few rules, a malformed rule,
1201 1201 or a rule on a changeset outside of the user-given range.
1202 1202 """
1203 1203 expected = set(c.hex() for c in ctxs)
1204 1204 seen = set()
1205 1205 for action in actions:
1206 1206 action.verify()
1207 1207 constraints = action.constraints()
1208 1208 for constraint in constraints:
1209 1209 if constraint not in _constraints.known():
1210 1210 raise error.Abort(_('unknown constraint "%s"') % constraint)
1211 1211
1212 1212 nodetoverify = action.nodetoverify()
1213 1213 if nodetoverify is not None:
1214 1214 ha = node.hex(nodetoverify)
1215 1215 if _constraints.noother in constraints and ha not in expected:
1216 1216 raise error.Abort(
1217 1217 _('may not use "%s" with changesets '
1218 1218 'other than the ones listed') % action.verb)
1219 1219 if _constraints.forceother in constraints and ha in expected:
1220 1220 raise error.Abort(
1221 1221 _('may not use "%s" with changesets '
1222 1222 'within the edited list') % action.verb)
1223 1223 if _constraints.noduplicates in constraints and ha in seen:
1224 1224 raise error.Abort(_('duplicated command for changeset %s') %
1225 1225 ha[:12])
1226 1226 seen.add(ha)
1227 1227 missing = sorted(expected - seen) # sort to stabilize output
1228 1228 if missing:
1229 1229 raise error.Abort(_('missing rules for changeset %s') %
1230 1230 missing[0][:12],
1231 1231 hint=_('use "drop %s" to discard the change') % missing[0][:12])
1232 1232
1233 1233 def newnodestoabort(state):
1234 1234 """process the list of replacements to return
1235 1235
1236 1236 1) the list of final nodes
1237 1237 2) the list of temporary nodes
1238 1238
1239 1239 This is meant to be used on abort, as less data is required in this case.
1240 1240 """
1241 1241 replacements = state.replacements
1242 1242 allsuccs = set()
1243 1243 replaced = set()
1244 1244 for rep in replacements:
1245 1245 allsuccs.update(rep[1])
1246 1246 replaced.add(rep[0])
1247 1247 newnodes = allsuccs - replaced
1248 1248 tmpnodes = allsuccs & replaced
1249 1249 return newnodes, tmpnodes
1250 1250
1251 1251
1252 1252 def processreplacement(state):
1253 1253 """process the list of replacements to return
1254 1254
1255 1255 1) the final mapping between original and created nodes
1256 1256 2) the list of temporary nodes created by histedit
1257 1257 3) the list of new commits created by histedit"""
1258 1258 replacements = state.replacements
1259 1259 allsuccs = set()
1260 1260 replaced = set()
1261 1261 fullmapping = {}
1262 1262 # initialize basic set
1263 1263 # fullmapping records all operations recorded in replacement
1264 1264 for rep in replacements:
1265 1265 allsuccs.update(rep[1])
1266 1266 replaced.add(rep[0])
1267 1267 fullmapping.setdefault(rep[0], set()).update(rep[1])
1268 1268 new = allsuccs - replaced
1269 1269 tmpnodes = allsuccs & replaced
1270 1270 # Reduce fullmapping into a direct relation between original nodes
1271 1271 # and the final nodes created during the history edit.
1272 1272 # Dropped changesets are replaced by an empty list
1273 1273 toproceed = set(fullmapping)
1274 1274 final = {}
1275 1275 while toproceed:
1276 1276 for x in list(toproceed):
1277 1277 succs = fullmapping[x]
1278 1278 for s in list(succs):
1279 1279 if s in toproceed:
1280 1280 # non final node with unknown closure
1281 1281 # We can't process this now
1282 1282 break
1283 1283 elif s in final:
1284 1284 # non final node, replace with closure
1285 1285 succs.remove(s)
1286 1286 succs.update(final[s])
1287 1287 else:
1288 1288 final[x] = succs
1289 1289 toproceed.remove(x)
1290 1290 # remove tmpnodes from final mapping
1291 1291 for n in tmpnodes:
1292 1292 del final[n]
1293 1293 # we expect all changes involved in final to exist in the repo
1294 1294 # turn `final` into a list (topologically sorted)
1295 1295 nm = state.repo.changelog.nodemap
1296 1296 for prec, succs in final.items():
1297 1297 final[prec] = sorted(succs, key=nm.get)
1298 1298
1299 1299 # compute the topmost element (necessary for bookmarks)
1300 1300 if new:
1301 1301 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1302 1302 elif not final:
1303 1303 # Nothing was rewritten at all; we won't need `newtopmost`.
1304 1304 # It is the same as `oldtopmost` and `processreplacement` knows it.
1305 1305 newtopmost = None
1306 1306 else:
1307 1307 # everybody died. The newtopmost is the parent of the root.
1308 1308 r = state.repo.changelog.rev
1309 1309 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1310 1310
1311 1311 return final, tmpnodes, new, newtopmost
1312 1312
1313 1313 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1314 1314 """Move bookmark from old to newly created node"""
1315 1315 if not mapping:
1316 1316 # if nothing got rewritten there is no purpose for this function
1317 1317 return
1318 1318 moves = []
1319 1319 for bk, old in sorted(repo._bookmarks.iteritems()):
1320 1320 if old == oldtopmost:
1321 1321 # special case: ensure the bookmark stays on tip.
1322 1322 #
1323 1323 # This is arguably a feature and we may only want that for the
1324 1324 # active bookmark. But the behavior is kept compatible with the old
1325 1325 # version for now.
1326 1326 moves.append((bk, newtopmost))
1327 1327 continue
1328 1328 base = old
1329 1329 new = mapping.get(base, None)
1330 1330 if new is None:
1331 1331 continue
1332 1332 while not new:
1333 1333 # base is killed, trying with parent
1334 1334 base = repo[base].p1().node()
1335 1335 new = mapping.get(base, (base,))
1336 1336 # nothing to move
1337 1337 moves.append((bk, new[-1]))
1338 1338 if moves:
1339 1339 lock = tr = None
1340 1340 try:
1341 1341 lock = repo.lock()
1342 1342 tr = repo.transaction('histedit')
1343 1343 marks = repo._bookmarks
1344 1344 for mark, new in moves:
1345 1345 old = marks[mark]
1346 1346 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1347 1347 % (mark, node.short(old), node.short(new)))
1348 1348 marks[mark] = new
1349 1349 marks.recordchange(tr)
1350 1350 tr.close()
1351 1351 finally:
1352 1352 release(tr, lock)
1353 1353
1354 1354 def cleanupnode(ui, repo, name, nodes):
1355 1355 """strip a group of nodes from the repository
1356 1356
1357 1357 The set of nodes to strip may contain unknown nodes."""
1358 1358 ui.debug('should strip %s nodes %s\n' %
1359 1359 (name, ', '.join([node.short(n) for n in nodes])))
1360 1360 lock = None
1361 1361 try:
1362 1362 lock = repo.lock()
1363 1363 # do not let filtering get in the way of the cleanse
1364 1364 # we should probably get rid of obsolescence markers created during the
1365 1365 # histedit, but we currently do not have such information.
1366 1366 repo = repo.unfiltered()
1367 1367 # Find all nodes that need to be stripped
1368 1368 # (we use %lr instead of %ln to silently ignore unknown items)
1369 1369 nm = repo.changelog.nodemap
1370 1370 nodes = sorted(n for n in nodes if n in nm)
1371 1371 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1372 1372 for c in roots:
1373 1373 # We should process nodes in reverse order to strip the tipmost first,
1374 1374 # but this triggers a bug in the changegroup hook.
1375 1375 # Doing so would reduce bundle overhead.
1376 1376 repair.strip(ui, repo, c)
1377 1377 finally:
1378 1378 release(lock)
1379 1379
1380 1380 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1381 1381 if isinstance(nodelist, str):
1382 1382 nodelist = [nodelist]
1383 1383 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1384 1384 state = histeditstate(repo)
1385 1385 state.read()
1386 1386 histedit_nodes = set([action.nodetoverify() for action
1387 1387 in state.actions if action.nodetoverify()])
1388 1388 strip_nodes = set([repo[n].node() for n in nodelist])
1389 1389 common_nodes = histedit_nodes & strip_nodes
1390 1390 if common_nodes:
1391 1391 raise error.Abort(_("histedit in progress, can't strip %s")
1392 1392 % ', '.join(node.short(x) for x in common_nodes))
1393 1393 return orig(ui, repo, nodelist, *args, **kwargs)
1394 1394
1395 1395 extensions.wrapfunction(repair, 'strip', stripwrapper)
1396 1396
1397 1397 def summaryhook(ui, repo):
1398 1398 if not os.path.exists(repo.join('histedit-state')):
1399 1399 return
1400 1400 state = histeditstate(repo)
1401 1401 state.read()
1402 1402 if state.actions:
1403 1403 # i18n: column positioning for "hg summary"
1404 1404 ui.write(_('hist: %s (histedit --continue)\n') %
1405 1405 (ui.label(_('%d remaining'), 'histedit.remaining') %
1406 1406 len(state.actions)))
1407 1407
1408 1408 def extsetup(ui):
1409 1409 cmdutil.summaryhooks.add('histedit', summaryhook)
1410 1410 cmdutil.unfinishedstates.append(
1411 1411 ['histedit-state', False, True, _('histedit in progress'),
1412 1412 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1413 1413 if ui.configbool("experimental", "histeditng"):
1414 1414 globals()['base'] = addhisteditaction(['base', 'b'])(base)
@@ -1,1430 +1,1433 b''
1 1 # Copyright 2009-2010 Gregory P. Ward
2 2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 3 # Copyright 2010-2011 Fog Creek Software
4 4 # Copyright 2010-2011 Unity Technologies
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''Overridden Mercurial commands and functions for the largefiles extension'''
10 10
11 11 import os
12 12 import copy
13 13
14 14 from mercurial import hg, util, cmdutil, scmutil, match as match_, \
15 15 archival, pathutil, revset, error
16 16 from mercurial.i18n import _
17 17
18 18 import lfutil
19 19 import lfcommands
20 20 import basestore
21 21
22 22 # -- Utility functions: commonly/repeatedly needed functionality ---------------
23 23
24 24 def composelargefilematcher(match, manifest):
25 25 '''create a matcher that matches only the largefiles in the original
26 26 matcher'''
27 27 m = copy.copy(match)
28 28 lfile = lambda f: lfutil.standin(f) in manifest
29 29 m._files = filter(lfile, m._files)
30 30 m._fileroots = set(m._files)
31 31 m._always = False
32 32 origmatchfn = m.matchfn
33 33 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
34 34 return m
35 35
36 36 def composenormalfilematcher(match, manifest, exclude=None):
37 37 excluded = set()
38 38 if exclude is not None:
39 39 excluded.update(exclude)
40 40
41 41 m = copy.copy(match)
42 42 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
43 43 manifest or f in excluded)
44 44 m._files = filter(notlfile, m._files)
45 45 m._fileroots = set(m._files)
46 46 m._always = False
47 47 origmatchfn = m.matchfn
48 48 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
49 49 return m
50 50
51 51 def installnormalfilesmatchfn(manifest):
52 52 '''installmatchfn with a matchfn that ignores all largefiles'''
53 53 def overridematch(ctx, pats=(), opts=None, globbed=False,
54 54 default='relpath', badfn=None):
55 55 if opts is None:
56 56 opts = {}
57 57 match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
58 58 return composenormalfilematcher(match, manifest)
59 59 oldmatch = installmatchfn(overridematch)
60 60
61 61 def installmatchfn(f):
62 62 '''monkey patch the scmutil module with a custom match function.
63 63 Warning: it is monkey patching the _module_ at runtime! Not thread safe!'''
64 64 oldmatch = scmutil.match
65 65 setattr(f, 'oldmatch', oldmatch)
66 66 scmutil.match = f
67 67 return oldmatch
68 68
69 69 def restorematchfn():
70 70 '''restores scmutil.match to what it was before installmatchfn
71 71 was called. No-op if scmutil.match is its original function.
72 72
73 73 Note that n calls to installmatchfn will require n calls to
74 74 restore the original matchfn.'''
75 75 scmutil.match = getattr(scmutil.match, 'oldmatch')
76 76
77 77 def installmatchandpatsfn(f):
78 78 oldmatchandpats = scmutil.matchandpats
79 79 setattr(f, 'oldmatchandpats', oldmatchandpats)
80 80 scmutil.matchandpats = f
81 81 return oldmatchandpats
82 82
83 83 def restorematchandpatsfn():
84 84 '''restores scmutil.matchandpats to what it was before
85 85 installmatchandpatsfn was called. No-op if scmutil.matchandpats
86 86 is its original function.
87 87
88 88 Note that n calls to installmatchandpatsfn will require n calls
89 89 to restore the original matchfn.'''
90 90 scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
91 91 scmutil.matchandpats)
92 92
93 93 def addlargefiles(ui, repo, isaddremove, matcher, **opts):
94 94 large = opts.get('large')
95 95 lfsize = lfutil.getminsize(
96 96 ui, lfutil.islfilesrepo(repo), opts.get('lfsize'))
97 97
98 98 lfmatcher = None
99 99 if lfutil.islfilesrepo(repo):
100 100 lfpats = ui.configlist(lfutil.longname, 'patterns', default=[])
101 101 if lfpats:
102 102 lfmatcher = match_.match(repo.root, '', list(lfpats))
103 103
104 104 lfnames = []
105 105 m = matcher
106 106
107 107 wctx = repo[None]
108 108 for f in repo.walk(match_.badmatch(m, lambda x, y: None)):
109 109 exact = m.exact(f)
110 110 lfile = lfutil.standin(f) in wctx
111 111 nfile = f in wctx
112 112 exists = lfile or nfile
113 113
114 114 # addremove in core gets fancy with the name, add doesn't
115 115 if isaddremove:
116 116 name = m.uipath(f)
117 117 else:
118 118 name = m.rel(f)
119 119
120 120 # Don't warn the user when they attempt to add a normal tracked file.
121 121 # The normal add code will do that for us.
122 122 if exact and exists:
123 123 if lfile:
124 124 ui.warn(_('%s already a largefile\n') % name)
125 125 continue
126 126
127 127 if (exact or not exists) and not lfutil.isstandin(f):
128 128 # In case the file was removed previously, but not committed
129 129 # (issue3507)
130 130 if not repo.wvfs.exists(f):
131 131 continue
132 132
133 133 abovemin = (lfsize and
134 134 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
135 135 if large or abovemin or (lfmatcher and lfmatcher(f)):
136 136 lfnames.append(f)
137 137 if ui.verbose or not exact:
138 138 ui.status(_('adding %s as a largefile\n') % name)
139 139
140 140 bad = []
141 141
142 142 # Need to lock, otherwise there could be a race condition between
143 143 # when standins are created and added to the repo.
144 144 wlock = repo.wlock()
145 145 try:
146 146 if not opts.get('dry_run'):
147 147 standins = []
148 148 lfdirstate = lfutil.openlfdirstate(ui, repo)
149 149 for f in lfnames:
150 150 standinname = lfutil.standin(f)
151 151 lfutil.writestandin(repo, standinname, hash='',
152 152 executable=lfutil.getexecutable(repo.wjoin(f)))
153 153 standins.append(standinname)
154 154 if lfdirstate[f] == 'r':
155 155 lfdirstate.normallookup(f)
156 156 else:
157 157 lfdirstate.add(f)
158 158 lfdirstate.write()
159 159 bad += [lfutil.splitstandin(f)
160 160 for f in repo[None].add(standins)
161 161 if f in m.files()]
162 162
163 163 added = [f for f in lfnames if f not in bad]
164 164 finally:
165 165 wlock.release()
166 166 return added, bad
167 167
168 168 def removelargefiles(ui, repo, isaddremove, matcher, **opts):
169 169 after = opts.get('after')
170 170 m = composelargefilematcher(matcher, repo[None].manifest())
171 171 try:
172 172 repo.lfstatus = True
173 173 s = repo.status(match=m, clean=not isaddremove)
174 174 finally:
175 175 repo.lfstatus = False
176 176 manifest = repo[None].manifest()
177 177 modified, added, deleted, clean = [[f for f in list
178 178 if lfutil.standin(f) in manifest]
179 179 for list in (s.modified, s.added,
180 180 s.deleted, s.clean)]
181 181
182 182 def warn(files, msg):
183 183 for f in files:
184 184 ui.warn(msg % m.rel(f))
185 185 return int(len(files) > 0)
186 186
187 187 result = 0
188 188
189 189 if after:
190 190 remove = deleted
191 191 result = warn(modified + added + clean,
192 192 _('not removing %s: file still exists\n'))
193 193 else:
194 194 remove = deleted + clean
195 195 result = warn(modified, _('not removing %s: file is modified (use -f'
196 196 ' to force removal)\n'))
197 197 result = warn(added, _('not removing %s: file has been marked for add'
198 198 ' (use forget to undo)\n')) or result
199 199
200 200 # Need to lock because standin files are deleted then removed from the
201 201 # repository and we could race in-between.
202 202 wlock = repo.wlock()
203 203 try:
204 204 lfdirstate = lfutil.openlfdirstate(ui, repo)
205 205 for f in sorted(remove):
206 206 if ui.verbose or not m.exact(f):
207 207 # addremove in core gets fancy with the name, remove doesn't
208 208 if isaddremove:
209 209 name = m.uipath(f)
210 210 else:
211 211 name = m.rel(f)
212 212 ui.status(_('removing %s\n') % name)
213 213
214 214 if not opts.get('dry_run'):
215 215 if not after:
216 216 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
217 217
218 218 if opts.get('dry_run'):
219 219 return result
220 220
221 221 remove = [lfutil.standin(f) for f in remove]
222 222 # If this is being called by addremove, let the original addremove
223 223 # function handle this.
224 224 if not isaddremove:
225 225 for f in remove:
226 226 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
227 227 repo[None].forget(remove)
228 228
229 229 for f in remove:
230 230 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
231 231 False)
232 232
233 233 lfdirstate.write()
234 234 finally:
235 235 wlock.release()
236 236
237 237 return result
238 238
239 239 # For overriding mercurial.hgweb.webcommands so that largefiles will
240 240 # appear at their right place in the manifests.
241 241 def decodepath(orig, path):
242 242 return lfutil.splitstandin(path) or path
243 243
244 244 # -- Wrappers: modify existing commands --------------------------------
245 245
246 246 def overrideadd(orig, ui, repo, *pats, **opts):
247 247 if opts.get('normal') and opts.get('large'):
248 248 raise error.Abort(_('--normal cannot be used with --large'))
249 249 return orig(ui, repo, *pats, **opts)
250 250
251 251 def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
252 252 # The --normal flag short circuits this override
253 253 if opts.get('normal'):
254 254 return orig(ui, repo, matcher, prefix, explicitonly, **opts)
255 255
256 256 ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
257 257 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
258 258 ladded)
259 259 bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
260 260
261 261 bad.extend(f for f in lbad)
262 262 return bad
263 263
264 264 def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos):
265 265 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
266 266 result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos)
267 267 return removelargefiles(ui, repo, False, matcher, after=after,
268 268 force=force) or result
269 269
270 270 def overridestatusfn(orig, repo, rev2, **opts):
271 271 try:
272 272 repo._repo.lfstatus = True
273 273 return orig(repo, rev2, **opts)
274 274 finally:
275 275 repo._repo.lfstatus = False
276 276
277 277 def overridestatus(orig, ui, repo, *pats, **opts):
278 278 try:
279 279 repo.lfstatus = True
280 280 return orig(ui, repo, *pats, **opts)
281 281 finally:
282 282 repo.lfstatus = False
283 283
284 284 def overridedirty(orig, repo, ignoreupdate=False):
285 285 try:
286 286 repo._repo.lfstatus = True
287 287 return orig(repo, ignoreupdate)
288 288 finally:
289 289 repo._repo.lfstatus = False
290 290
291 291 def overridelog(orig, ui, repo, *pats, **opts):
292 292 def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
293 293 default='relpath', badfn=None):
294 294 """Matcher that merges root directory with .hglf, suitable for log.
295 295 It is still possible to match .hglf directly.
296 296 For any listed file, run log on the standin too.
297 297 matchfn tries both the given filename and the name with .hglf stripped.
298 298 """
299 299 if opts is None:
300 300 opts = {}
301 301 matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
302 302 badfn=badfn)
303 303 m, p = copy.copy(matchandpats)
304 304
305 305 if m.always():
306 306 # We want to match everything anyway, so there's no benefit trying
307 307 # to add standins.
308 308 return matchandpats
309 309
310 310 pats = set(p)
311 311
312 312 def fixpats(pat, tostandin=lfutil.standin):
313 313 if pat.startswith('set:'):
314 314 return pat
315 315
316 316 kindpat = match_._patsplit(pat, None)
317 317
318 318 if kindpat[0] is not None:
319 319 return kindpat[0] + ':' + tostandin(kindpat[1])
320 320 return tostandin(kindpat[1])
321 321
322 322 if m._cwd:
323 323 hglf = lfutil.shortname
324 324 back = util.pconvert(m.rel(hglf)[:-len(hglf)])
325 325
326 326 def tostandin(f):
327 327 # The file may already be a standin, so truncate the back
328 328 # prefix and test before mangling it. This avoids turning
329 329 # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
330 330 if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
331 331 return f
332 332
333 333 # An absolute path is from outside the repo, so truncate the
334 334 # path to the root before building the standin. Otherwise cwd
335 335 # is somewhere in the repo, relative to root, and needs to be
336 336 # prepended before building the standin.
337 337 if os.path.isabs(m._cwd):
338 338 f = f[len(back):]
339 339 else:
340 340 f = m._cwd + '/' + f
341 341 return back + lfutil.standin(f)
342 342
343 343 pats.update(fixpats(f, tostandin) for f in p)
344 344 else:
345 345 def tostandin(f):
346 346 if lfutil.splitstandin(f):
347 347 return f
348 348 return lfutil.standin(f)
349 349 pats.update(fixpats(f, tostandin) for f in p)
350 350
351 351 for i in range(0, len(m._files)):
352 352 # Don't add '.hglf' to m.files, since that is already covered by '.'
353 353 if m._files[i] == '.':
354 354 continue
355 355 standin = lfutil.standin(m._files[i])
356 356 # If the "standin" is a directory, append instead of replace to
357 357 # support naming a directory on the command line with only
358 358 # largefiles. The original directory is kept to support normal
359 359 # files.
360 360 if standin in repo[ctx.node()]:
361 361 m._files[i] = standin
362 362 elif m._files[i] not in repo[ctx.node()] \
363 363 and repo.wvfs.isdir(standin):
364 364 m._files.append(standin)
365 365
366 366 m._fileroots = set(m._files)
367 367 m._always = False
368 368 origmatchfn = m.matchfn
369 369 def lfmatchfn(f):
370 370 lf = lfutil.splitstandin(f)
371 371 if lf is not None and origmatchfn(lf):
372 372 return True
373 373 r = origmatchfn(f)
374 374 return r
375 375 m.matchfn = lfmatchfn
376 376
377 377 ui.debug('updated patterns: %s\n' % sorted(pats))
378 378 return m, pats
379 379
380 380 # For hg log --patch, the match object is used in two different senses:
381 381 # (1) to determine what revisions should be printed out, and
382 382 # (2) to determine what files to print out diffs for.
383 383 # The magic matchandpats override should be used for case (1) but not for
384 384 # case (2).
385 385 def overridemakelogfilematcher(repo, pats, opts, badfn=None):
386 386 wctx = repo[None]
387 387 match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
388 388 return lambda rev: match
389 389
390 390 oldmatchandpats = installmatchandpatsfn(overridematchandpats)
391 391 oldmakelogfilematcher = cmdutil._makenofollowlogfilematcher
392 392 setattr(cmdutil, '_makenofollowlogfilematcher', overridemakelogfilematcher)
393 393
394 394 try:
395 395 return orig(ui, repo, *pats, **opts)
396 396 finally:
397 397 restorematchandpatsfn()
398 398 setattr(cmdutil, '_makenofollowlogfilematcher', oldmakelogfilematcher)
399 399
400 400 def overrideverify(orig, ui, repo, *pats, **opts):
401 401 large = opts.pop('large', False)
402 402 all = opts.pop('lfa', False)
403 403 contents = opts.pop('lfc', False)
404 404
405 405 result = orig(ui, repo, *pats, **opts)
406 406 if large or all or contents:
407 407 result = result or lfcommands.verifylfiles(ui, repo, all, contents)
408 408 return result
409 409
410 410 def overridedebugstate(orig, ui, repo, *pats, **opts):
411 411 large = opts.pop('large', False)
412 412 if large:
413 413 class fakerepo(object):
414 414 dirstate = lfutil.openlfdirstate(ui, repo)
415 415 orig(ui, fakerepo, *pats, **opts)
416 416 else:
417 417 orig(ui, repo, *pats, **opts)
418 418
419 419 # Before starting the manifest merge, merge.update will call
420 420 # _checkunknownfile to check if there are any files in the merged-in
421 421 # changeset that collide with unknown files in the working copy.
422 422 #
423 423 # The largefiles are seen as unknown, so this prevents us from merging
424 424 # in a file 'foo' if we already have a largefile with the same name.
425 425 #
426 426 # The overridden function filters the unknown files by removing any
427 427 # largefiles. This makes the merge proceed and we can then handle this
428 428 # case further in the overridden calculateupdates function below.
429 429 def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
430 430 if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
431 431 return False
432 432 return origfn(repo, wctx, mctx, f, f2)
433 433
434 434 # The manifest merge handles conflicts on the manifest level. We want
435 435 # to handle changes in largefile-ness of files at this level too.
436 436 #
437 437 # The strategy is to run the original calculateupdates and then process
438 438 # the action list it outputs. There are two cases we need to deal with:
439 439 #
440 440 # 1. Normal file in p1, largefile in p2. Here the largefile is
441 441 # detected via its standin file, which will enter the working copy
442 442 # with a "get" action. It is not "merge" since the standin is all
443 443 # Mercurial is concerned with at this level -- the link to the
444 444 # existing normal file is not relevant here.
445 445 #
446 446 # 2. Largefile in p1, normal file in p2. Here we get a "merge" action
447 447 # since the largefile will be present in the working copy and
448 448 # different from the normal file in p2. Mercurial therefore
449 449 # triggers a merge action.
450 450 #
451 451 # In both cases, we prompt the user and emit new actions to either
452 452 # remove the standin (if the normal file was kept) or to remove the
453 453 # normal file and get the standin (if the largefile was kept). The
454 454 # default prompt answer is to use the largefile version since it was
455 455 # presumably changed on purpose.
456 456 #
457 457 # Finally, the merge.applyupdates function will then take care of
458 458 # writing the files into the working copy and lfcommands.updatelfiles
459 459 # will update the largefiles.
460 460 def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
461 461 partial, acceptremote, followcopies):
462 462 overwrite = force and not branchmerge
463 463 actions, diverge, renamedelete = origfn(
464 464 repo, p1, p2, pas, branchmerge, force, partial, acceptremote,
465 465 followcopies)
466 466
467 467 if overwrite:
468 468 return actions, diverge, renamedelete
469 469
470 470 # Convert to dictionary with filename as key and action as value.
471 471 lfiles = set()
472 472 for f in actions:
473 473 splitstandin = f and lfutil.splitstandin(f)
474 474 if splitstandin in p1:
475 475 lfiles.add(splitstandin)
476 476 elif lfutil.standin(f) in p1:
477 477 lfiles.add(f)
478 478
479 479 for lfile in lfiles:
480 480 standin = lfutil.standin(lfile)
481 481 (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
482 482 (sm, sargs, smsg) = actions.get(standin, (None, None, None))
483 483 if sm in ('g', 'dc') and lm != 'r':
484 484 if sm == 'dc':
485 485 f1, f2, fa, move, anc = sargs
486 486 sargs = (p2[f2].flags(),)
487 487 # Case 1: normal file in the working copy, largefile in
488 488 # the second parent
489 489 usermsg = _('remote turned local normal file %s into a largefile\n'
490 490 'use (l)argefile or keep (n)ormal file?'
491 491 '$$ &Largefile $$ &Normal file') % lfile
492 492 if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
493 493 actions[lfile] = ('r', None, 'replaced by standin')
494 494 actions[standin] = ('g', sargs, 'replaces standin')
495 495 else: # keep local normal file
496 496 actions[lfile] = ('k', None, 'replaces standin')
497 497 if branchmerge:
498 498 actions[standin] = ('k', None, 'replaced by non-standin')
499 499 else:
500 500 actions[standin] = ('r', None, 'replaced by non-standin')
501 501 elif lm in ('g', 'dc') and sm != 'r':
502 502 if lm == 'dc':
503 503 f1, f2, fa, move, anc = largs
504 504 largs = (p2[f2].flags(),)
505 505 # Case 2: largefile in the working copy, normal file in
506 506 # the second parent
507 507 usermsg = _('remote turned local largefile %s into a normal file\n'
508 508 'keep (l)argefile or use (n)ormal file?'
509 509 '$$ &Largefile $$ &Normal file') % lfile
510 510 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
511 511 if branchmerge:
512 512 # largefile can be restored from standin safely
513 513 actions[lfile] = ('k', None, 'replaced by standin')
514 514 actions[standin] = ('k', None, 'replaces standin')
515 515 else:
516 516 # "lfile" should be marked as "removed" without
517 517 # actually removing the file itself
518 518 actions[lfile] = ('lfmr', None,
519 519 'forget non-standin largefile')
520 520
521 521 # linear-merge should treat this largefile as 're-added'
522 522 actions[standin] = ('a', None, 'keep standin')
523 523 else: # pick remote normal file
524 524 actions[lfile] = ('g', largs, 'replaces standin')
525 525 actions[standin] = ('r', None, 'replaced by non-standin')
526 526
527 527 return actions, diverge, renamedelete
528 528
529 529 def mergerecordupdates(orig, repo, actions, branchmerge):
530 530 if 'lfmr' in actions:
531 531 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
532 532 for lfile, args, msg in actions['lfmr']:
533 533 # this should be executed before 'orig', to execute 'remove'
534 534 # before all other actions
535 535 repo.dirstate.remove(lfile)
536 536 # make sure lfile doesn't get synclfdirstate'd as normal
537 537 lfdirstate.add(lfile)
538 538 lfdirstate.write()
539 539
540 540 return orig(repo, actions, branchmerge)
541 541
542 542
543 543 # Override filemerge to prompt the user about how they wish to merge
544 544 # largefiles. This will handle identical edits without prompting the user.
545 545 def overridefilemerge(origfn, premerge, repo, mynode, orig, fcd, fco, fca,
546 546 labels=None):
547 547 if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
548 548 return origfn(premerge, repo, mynode, orig, fcd, fco, fca,
549 549 labels=labels)
550 550
551 551 ahash = fca.data().strip().lower()
552 552 dhash = fcd.data().strip().lower()
553 553 ohash = fco.data().strip().lower()
554 554 if (ohash != ahash and
555 555 ohash != dhash and
556 556 (dhash == ahash or
557 557 repo.ui.promptchoice(
558 558 _('largefile %s has a merge conflict\nancestor was %s\n'
559 559 'keep (l)ocal %s or\ntake (o)ther %s?'
560 560 '$$ &Local $$ &Other') %
561 561 (lfutil.splitstandin(orig), ahash, dhash, ohash),
562 562 0) == 1)):
563 563 repo.wwrite(fcd.path(), fco.data(), fco.flags())
564 564 return True, 0, False
565 565
566 566 def copiespathcopies(orig, ctx1, ctx2, match=None):
567 567 copies = orig(ctx1, ctx2, match=match)
568 568 updated = {}
569 569
570 570 for k, v in copies.iteritems():
571 571 updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v
572 572
573 573 return updated
574 574
575 575 # Copy first changes the matchers to match standins instead of
576 576 # largefiles. Then it overrides util.copyfile so that the override
577 577 # checks whether the destination largefile already exists. It also
578 578 # keeps a list of copied files so that the largefiles can be copied
579 579 # and the dirstate updated.
580 580 def overridecopy(orig, ui, repo, pats, opts, rename=False):
581 581 # doesn't remove largefile on rename
582 582 if len(pats) < 2:
583 583 # this isn't legal, let the original function deal with it
584 584 return orig(ui, repo, pats, opts, rename)
585 585
586 586 # This could copy both lfiles and normal files in one command,
587 587 # but we don't want to do that. First replace their matcher to
588 588 # only match normal files and run it, then replace it to just
589 589 # match largefiles and run it again.
590 590 nonormalfiles = False
591 591 nolfiles = False
592 592 installnormalfilesmatchfn(repo[None].manifest())
593 593 try:
594 594 result = orig(ui, repo, pats, opts, rename)
595 595 except error.Abort as e:
596 596 if str(e) != _('no files to copy'):
597 597 raise e
598 598 else:
599 599 nonormalfiles = True
600 600 result = 0
601 601 finally:
602 602 restorematchfn()
603 603
604 604 # The first rename can cause our current working directory to be removed.
605 605 # In that case there is nothing left to copy/rename so just quit.
606 606 try:
607 607 repo.getcwd()
608 608 except OSError:
609 609 return result
610 610
611 611 def makestandin(relpath):
612 612 path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
613 613 return repo.wjoin(lfutil.standin(path))
614 614
615 615 fullpats = scmutil.expandpats(pats)
616 616 dest = fullpats[-1]
617 617
618 618 if os.path.isdir(dest):
619 619 if not os.path.isdir(makestandin(dest)):
620 620 os.makedirs(makestandin(dest))
621 621
622 622 try:
623 623 # When we call orig below it creates the standins but we don't add
624 624 # them to the dirstate until later, so lock during that time.
625 625 wlock = repo.wlock()
626 626
627 627 manifest = repo[None].manifest()
628 628 def overridematch(ctx, pats=(), opts=None, globbed=False,
629 629 default='relpath', badfn=None):
630 630 if opts is None:
631 631 opts = {}
632 632 newpats = []
633 633 # The patterns were previously mangled to add the standin
634 634 # directory; we need to remove that now
635 635 for pat in pats:
636 636 if match_.patkind(pat) is None and lfutil.shortname in pat:
637 637 newpats.append(pat.replace(lfutil.shortname, ''))
638 638 else:
639 639 newpats.append(pat)
640 640 match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
641 641 m = copy.copy(match)
642 642 lfile = lambda f: lfutil.standin(f) in manifest
643 643 m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
644 644 m._fileroots = set(m._files)
645 645 origmatchfn = m.matchfn
646 646 m.matchfn = lambda f: (lfutil.isstandin(f) and
647 647 (f in manifest) and
648 648 origmatchfn(lfutil.splitstandin(f)) or
649 649 None)
650 650 return m
651 651 oldmatch = installmatchfn(overridematch)
652 652 listpats = []
653 653 for pat in pats:
654 654 if match_.patkind(pat) is not None:
655 655 listpats.append(pat)
656 656 else:
657 657 listpats.append(makestandin(pat))
658 658
659 659 try:
660 660 origcopyfile = util.copyfile
661 661 copiedfiles = []
662 662 def overridecopyfile(src, dest):
663 663 if (lfutil.shortname in src and
664 664 dest.startswith(repo.wjoin(lfutil.shortname))):
665 665 destlfile = dest.replace(lfutil.shortname, '')
666 666 if not opts['force'] and os.path.exists(destlfile):
667 667 raise IOError('',
668 668 _('destination largefile already exists'))
669 669 copiedfiles.append((src, dest))
670 670 origcopyfile(src, dest)
671 671
672 672 util.copyfile = overridecopyfile
673 673 result += orig(ui, repo, listpats, opts, rename)
674 674 finally:
675 675 util.copyfile = origcopyfile
676 676
677 677 lfdirstate = lfutil.openlfdirstate(ui, repo)
678 678 for (src, dest) in copiedfiles:
679 679 if (lfutil.shortname in src and
680 680 dest.startswith(repo.wjoin(lfutil.shortname))):
681 681 srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
682 682 destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
683 683 destlfiledir = os.path.dirname(repo.wjoin(destlfile)) or '.'
684 684 if not os.path.isdir(destlfiledir):
685 685 os.makedirs(destlfiledir)
686 686 if rename:
687 687 os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))
688 688
689 689 # The file is gone, but this deletes any empty parent
690 690 # directories as a side-effect.
691 691 util.unlinkpath(repo.wjoin(srclfile), True)
692 692 lfdirstate.remove(srclfile)
693 693 else:
694 694 util.copyfile(repo.wjoin(srclfile),
695 695 repo.wjoin(destlfile))
696 696
697 697 lfdirstate.add(destlfile)
698 698 lfdirstate.write()
699 699 except error.Abort as e:
700 700 if str(e) != _('no files to copy'):
701 701 raise e
702 702 else:
703 703 nolfiles = True
704 704 finally:
705 705 restorematchfn()
706 706 wlock.release()
707 707
708 708 if nolfiles and nonormalfiles:
709 709 raise error.Abort(_('no files to copy'))
710 710
711 711 return result
712 712
713 713 # When the user calls revert, we have to be careful to not revert any
714 714 # changes to other largefiles accidentally. This means we have to keep
715 715 # track of the largefiles that are being reverted so we only pull down
716 716 # the necessary largefiles.
717 717 #
718 718 # Standins are only updated (to match the hash of largefiles) before
719 719 # commits. Update the standins then run the original revert, changing
720 720 # the matcher to hit standins instead of largefiles. Based on the
721 721 # resulting standins, update the largefiles.
722 722 def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
723 723 # Because we put the standins in a bad state (by updating them)
724 724 # and then return them to a correct state we need to lock to
725 725 # prevent others from changing them in their incorrect state.
726 726 wlock = repo.wlock()
727 727 try:
728 728 lfdirstate = lfutil.openlfdirstate(ui, repo)
729 729 s = lfutil.lfdirstatestatus(lfdirstate, repo)
730 730 lfdirstate.write()
731 731 for lfile in s.modified:
732 732 lfutil.updatestandin(repo, lfutil.standin(lfile))
733 733 for lfile in s.deleted:
734 734 if (os.path.exists(repo.wjoin(lfutil.standin(lfile)))):
735 735 os.unlink(repo.wjoin(lfutil.standin(lfile)))
736 736
737 737 oldstandins = lfutil.getstandinsstate(repo)
738 738
739 739 def overridematch(mctx, pats=(), opts=None, globbed=False,
740 740 default='relpath', badfn=None):
741 741 if opts is None:
742 742 opts = {}
743 743 match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
744 744 m = copy.copy(match)
745 745
746 746 # revert supports recursing into subrepos, and though largefiles
747 747 # currently doesn't work correctly in that case, this match is
748 748 # called, so the lfdirstate above may not be the correct one for
749 749 # this invocation of match.
750 750 lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
751 751 False)
752 752
753 753 def tostandin(f):
754 754 standin = lfutil.standin(f)
755 755 if standin in ctx or standin in mctx:
756 756 return standin
757 757 elif standin in repo[None] or lfdirstate[f] == 'r':
758 758 return None
759 759 return f
760 760 m._files = [tostandin(f) for f in m._files]
761 761 m._files = [f for f in m._files if f is not None]
762 762 m._fileroots = set(m._files)
763 763 origmatchfn = m.matchfn
764 764 def matchfn(f):
765 765 if lfutil.isstandin(f):
766 766 return (origmatchfn(lfutil.splitstandin(f)) and
767 767 (f in ctx or f in mctx))
768 768 return origmatchfn(f)
769 769 m.matchfn = matchfn
770 770 return m
771 771 oldmatch = installmatchfn(overridematch)
772 772 try:
773 773 orig(ui, repo, ctx, parents, *pats, **opts)
774 774 finally:
775 775 restorematchfn()
776 776
777 777 newstandins = lfutil.getstandinsstate(repo)
778 778 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
779 779 # lfdirstate should be 'normallookup'-ed for updated files,
780 780 # because reverting doesn't touch dirstate for 'normal' files
781 781 # when target revision is explicitly specified: in such case,
782 782 # 'n' and valid timestamp in dirstate doesn't ensure 'clean'
783 783 # of target (standin) file.
784 784 lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
785 785 normallookup=True)
786 786
787 787 finally:
788 788 wlock.release()
789 789
790 790 # after pulling changesets, we need to take some extra care to get
791 791 # largefiles updated remotely
792 792 def overridepull(orig, ui, repo, source=None, **opts):
793 793 revsprepull = len(repo)
794 794 if not source:
795 795 source = 'default'
796 796 repo.lfpullsource = source
797 797 result = orig(ui, repo, source, **opts)
798 798 revspostpull = len(repo)
799 799 lfrevs = opts.get('lfrev', [])
800 800 if opts.get('all_largefiles'):
801 801 lfrevs.append('pulled()')
802 802 if lfrevs and revspostpull > revsprepull:
803 803 numcached = 0
804 804 repo.firstpulled = revsprepull # for pulled() revset expression
805 805 try:
806 806 for rev in scmutil.revrange(repo, lfrevs):
807 807 ui.note(_('pulling largefiles for revision %s\n') % rev)
808 808 (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
809 809 numcached += len(cached)
810 810 finally:
811 811 del repo.firstpulled
812 812 ui.status(_("%d largefiles cached\n") % numcached)
813 813 return result
814 814
815 815 def pulledrevsetsymbol(repo, subset, x):
816 816 """``pulled()``
817 817 Changesets that have just been pulled.
818 818
819 819 Only available with largefiles from pull --lfrev expressions.
820 820
821 821 .. container:: verbose
822 822
823 823 Some examples:
824 824
825 825 - pull largefiles for all new changesets::
826 826
827 827 hg pull --lfrev "pulled()"
828 828
829 829 - pull largefiles for all new branch heads::
830 830
831 831 hg pull --lfrev "head(pulled()) and not closed()"
832 832
833 833 """
834 834
835 835 try:
836 836 firstpulled = repo.firstpulled
837 837 except AttributeError:
838 838 raise error.Abort(_("pulled() only available in --lfrev"))
839 839 return revset.baseset([r for r in subset if r >= firstpulled])
840 840
841 841 def overrideclone(orig, ui, source, dest=None, **opts):
842 842 d = dest
843 843 if d is None:
844 844 d = hg.defaultdest(source)
845 845 if opts.get('all_largefiles') and not hg.islocal(d):
846 846 raise error.Abort(_(
847 847 '--all-largefiles is incompatible with non-local destination %s') %
848 848 d)
849 849
850 850 return orig(ui, source, dest, **opts)
851 851
852 852 def hgclone(orig, ui, opts, *args, **kwargs):
853 853 result = orig(ui, opts, *args, **kwargs)
854 854
855 855 if result is not None:
856 856 sourcerepo, destrepo = result
857 857 repo = destrepo.local()
858 858
859 859 # When cloning to a remote repo (like through SSH), no repo is available
860 860 # from the peer. Therefore the largefiles can't be downloaded and the
861 861 # hgrc can't be updated.
862 862 if not repo:
863 863 return result
864 864
865 865 # If largefiles is required for this repo, permanently enable it locally
866 866 if 'largefiles' in repo.requirements:
867 867 fp = repo.vfs('hgrc', 'a', text=True)
868 868 try:
869 869 fp.write('\n[extensions]\nlargefiles=\n')
870 870 finally:
871 871 fp.close()
872 872
873 873 # Caching is implicitly limited to the 'rev' option, since the dest repo
874 874 # was truncated at that point. The user may expect a download count with
875 875 # this option, so attempt it whether or not this is a largefile repo.
876 876 if opts.get('all_largefiles'):
877 877 success, missing = lfcommands.downloadlfiles(ui, repo, None)
878 878
879 879 if missing != 0:
880 880 return None
881 881
882 882 return result
883 883
884 884 def overriderebase(orig, ui, repo, **opts):
885 885 if not util.safehasattr(repo, '_largefilesenabled'):
886 886 return orig(ui, repo, **opts)
887 887
888 888 resuming = opts.get('continue')
889 889 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
890 890 repo._lfstatuswriters.append(lambda *msg, **opts: None)
891 891 try:
892 892 return orig(ui, repo, **opts)
893 893 finally:
894 894 repo._lfstatuswriters.pop()
895 895 repo._lfcommithooks.pop()
896 896
897 897 def overridearchivecmd(orig, ui, repo, dest, **opts):
898 898 repo.unfiltered().lfstatus = True
899 899
900 900 try:
901 901 return orig(ui, repo.unfiltered(), dest, **opts)
902 902 finally:
903 903 repo.unfiltered().lfstatus = False
904 904
905 905 def hgwebarchive(orig, web, req, tmpl):
906 906 web.repo.lfstatus = True
907 907
908 908 try:
909 909 return orig(web, req, tmpl)
910 910 finally:
911 911 web.repo.lfstatus = False
912 912
913 913 def overridearchive(orig, repo, dest, node, kind, decode=True, matchfn=None,
914 914 prefix='', mtime=None, subrepos=None):
915 915 # For some reason setting repo.lfstatus in hgwebarchive only changes the
916 916 # unfiltered repo's attribute, so check that as well.
917 917 if not repo.lfstatus and not repo.unfiltered().lfstatus:
918 918 return orig(repo, dest, node, kind, decode, matchfn, prefix, mtime,
919 919 subrepos)
920 920
921 921 # No need to lock because we are only reading history and
922 922 # largefile caches, neither of which are modified.
923 923 if node is not None:
924 924 lfcommands.cachelfiles(repo.ui, repo, node)
925 925
926 926 if kind not in archival.archivers:
927 927 raise error.Abort(_("unknown archive type '%s'") % kind)
928 928
929 929 ctx = repo[node]
930 930
931 931 if kind == 'files':
932 932 if prefix:
933 933 raise error.Abort(
934 934 _('cannot give prefix when archiving to files'))
935 935 else:
936 936 prefix = archival.tidyprefix(dest, kind, prefix)
937 937
938 938 def write(name, mode, islink, getdata):
939 939 if matchfn and not matchfn(name):
940 940 return
941 941 data = getdata()
942 942 if decode:
943 943 data = repo.wwritedata(name, data)
944 944 archiver.addfile(prefix + name, mode, islink, data)
945 945
946 946 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
947 947
948 948 if repo.ui.configbool("ui", "archivemeta", True):
949 949 write('.hg_archival.txt', 0o644, False,
950 950 lambda: archival.buildmetadata(ctx))
951 951
952 952 for f in ctx:
953 953 ff = ctx.flags(f)
954 954 getdata = ctx[f].data
955 955 if lfutil.isstandin(f):
956 956 if node is not None:
957 957 path = lfutil.findfile(repo, getdata().strip())
958 958
959 959 if path is None:
960 960 raise error.Abort(
961 961 _('largefile %s not found in repo store or system cache')
962 962 % lfutil.splitstandin(f))
963 963 else:
964 964 path = lfutil.splitstandin(f)
965 965
966 966 f = lfutil.splitstandin(f)
967 967
968 968 def getdatafn():
969 969 fd = None
970 970 try:
971 971 fd = open(path, 'rb')
972 972 return fd.read()
973 973 finally:
974 974 if fd:
975 975 fd.close()
976 976
977 977 getdata = getdatafn
978 978 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
979 979
980 980 if subrepos:
981 981 for subpath in sorted(ctx.substate):
982 982 sub = ctx.workingsub(subpath)
983 983 submatch = match_.narrowmatcher(subpath, matchfn)
984 984 sub._repo.lfstatus = True
985 985 sub.archive(archiver, prefix, submatch)
986 986
987 987 archiver.done()
988 988
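The archive loop above derives each entry's permission bits from Mercurial's flag string with the expression `'x' in ff and 0o755 or 0o644`. That logic, written out as a small helper for clarity (a sketch, not part of the extension):

```python
def filemode(flags):
    # Map a Mercurial file flag string to archive permission bits:
    # executable files ('x') get 0o755, everything else, including
    # symlinks ('l'), gets 0o644. Symlink-ness is passed to the
    # archiver separately via the islink argument.
    return 0o755 if 'x' in flags else 0o644
```

The `and/or` chain in the original is the pre-conditional-expression Python idiom for the same ternary; it is safe here because `0o755` is truthy.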
989 989 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None):
990 990 if not repo._repo.lfstatus:
991 991 return orig(repo, archiver, prefix, match)
992 992
993 993 repo._get(repo._state + ('hg',))
994 994 rev = repo._state[1]
995 995 ctx = repo._repo[rev]
996 996
997 997 if ctx.node() is not None:
998 998 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
999 999
1000 1000 def write(name, mode, islink, getdata):
1001 1001 # At this point, the standin has been replaced with the largefile name,
1002 1002 # so the normal matcher works here without the lfutil variants.
1003 1003 if match and not match(name):
1004 1004 return
1005 1005 data = getdata()
1006 1006
1007 1007 archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)
1008 1008
1009 1009 for f in ctx:
1010 1010 ff = ctx.flags(f)
1011 1011 getdata = ctx[f].data
1012 1012 if lfutil.isstandin(f):
1013 1013 if ctx.node() is not None:
1014 1014 path = lfutil.findfile(repo._repo, getdata().strip())
1015 1015
1016 1016 if path is None:
1017 1017 raise error.Abort(
1018 1018 _('largefile %s not found in repo store or system cache')
1019 1019 % lfutil.splitstandin(f))
1020 1020 else:
1021 1021 path = lfutil.splitstandin(f)
1022 1022
1023 1023 f = lfutil.splitstandin(f)
1024 1024
1025 1025 def getdatafn():
1026 1026 fd = None
1027 1027 try:
1028 1028 fd = open(os.path.join(prefix, path), 'rb')
1029 1029 return fd.read()
1030 1030 finally:
1031 1031 if fd:
1032 1032 fd.close()
1033 1033
1034 1034 getdata = getdatafn
1035 1035
1036 1036 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1037 1037
1038 1038 for subpath in sorted(ctx.substate):
1039 1039 sub = ctx.workingsub(subpath)
1040 1040 submatch = match_.narrowmatcher(subpath, match)
1041 1041 sub._repo.lfstatus = True
1042 1042 sub.archive(archiver, prefix + repo._path + '/', submatch)
1043 1043
1044 1044 # If a largefile is modified, the change is not reflected in its
1045 1045 # standin until a commit. cmdutil.bailifchanged() raises an exception
1046 1046 # if the repo has uncommitted changes. Wrap it to also check if
1047 1047 # largefiles were changed. This is used by bisect, backout and fetch.
1048 1048 def overridebailifchanged(orig, repo, *args, **kwargs):
1049 1049 orig(repo, *args, **kwargs)
1050 1050 repo.lfstatus = True
1051 1051 s = repo.status()
1052 1052 repo.lfstatus = False
1053 1053 if s.modified or s.added or s.removed or s.deleted:
1054 1054 raise error.Abort(_('uncommitted changes'))
1055 1055
1056 1056 def cmdutilforget(orig, ui, repo, match, prefix, explicitonly):
1057 1057 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1058 1058 bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly)
1059 1059 m = composelargefilematcher(match, repo[None].manifest())
1060 1060
1061 1061 try:
1062 1062 repo.lfstatus = True
1063 1063 s = repo.status(match=m, clean=True)
1064 1064 finally:
1065 1065 repo.lfstatus = False
1066 1066 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1067 1067 forget = [f for f in forget if lfutil.standin(f) in repo[None].manifest()]
1068 1068
1069 1069 for f in forget:
1070 1070 if lfutil.standin(f) not in repo.dirstate and not \
1071 1071 repo.wvfs.isdir(lfutil.standin(f)):
1072 1072 ui.warn(_('not removing %s: file is already untracked\n')
1073 1073 % m.rel(f))
1074 1074 bad.append(f)
1075 1075
1076 1076 for f in forget:
1077 1077 if ui.verbose or not m.exact(f):
1078 1078 ui.status(_('removing %s\n') % m.rel(f))
1079 1079
1080 1080 # Need to lock because standin files are deleted then removed from the
1081 1081 # repository and we could race in-between.
1082 1082 wlock = repo.wlock()
1083 1083 try:
1084 1084 lfdirstate = lfutil.openlfdirstate(ui, repo)
1085 1085 for f in forget:
1086 1086 if lfdirstate[f] == 'a':
1087 1087 lfdirstate.drop(f)
1088 1088 else:
1089 1089 lfdirstate.remove(f)
1090 1090 lfdirstate.write()
1091 1091 standins = [lfutil.standin(f) for f in forget]
1092 1092 for f in standins:
1093 1093 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
1094 1094 rejected = repo[None].forget(standins)
1095 1095 finally:
1096 1096 wlock.release()
1097 1097
1098 1098 bad.extend(f for f in rejected if f in m.files())
1099 1099 forgot.extend(f for f in forget if f not in rejected)
1100 1100 return bad, forgot
1101 1101
1102 1102 def _getoutgoings(repo, other, missing, addfunc):
1103 1103 """get pairs of filename and largefile hash in outgoing revisions
1104 1104 in 'missing'.
1105 1105
1106 1106 largefiles already existing on 'other' repository are ignored.
1107 1107
1108 1108 'addfunc' is invoked with each unique pairs of filename and
1109 1109 largefile hash value.
1110 1110 """
1111 1111 knowns = set()
1112 1112 lfhashes = set()
1113 1113 def dedup(fn, lfhash):
1114 1114 k = (fn, lfhash)
1115 1115 if k not in knowns:
1116 1116 knowns.add(k)
1117 1117 lfhashes.add(lfhash)
1118 1118 lfutil.getlfilestoupload(repo, missing, dedup)
1119 1119 if lfhashes:
1120 1120 lfexists = basestore._openstore(repo, other).exists(lfhashes)
1121 1121 for fn, lfhash in knowns:
1122 1122 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1123 1123 addfunc(fn, lfhash)
1124 1124
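The `dedup` wrapper above is a common callback-filtering pattern: collect unique (filename, hash) pairs first, then issue one batched existence query against the remote store instead of one query per pair. A minimal standalone sketch (the names and the `remote_has` callable are illustrative, not Mercurial APIs):

```python
def get_outgoing_pairs(pairs, remote_has):
    """Collect unique (filename, hash) pairs, then report only those
    whose hash the remote store does not already have."""
    knowns = set()
    lfhashes = set()
    for fn, lfhash in pairs:
        k = (fn, lfhash)
        if k not in knowns:      # drop exact duplicates
            knowns.add(k)
            lfhashes.add(lfhash)
    # one batched existence query instead of one per pair
    exists = remote_has(lfhashes)
    return sorted((fn, h) for fn, h in knowns if not exists[h])

# hypothetical remote that already stores hash "aaa"
remote = lambda hashes: {h: h == "aaa" for h in hashes}
print(get_outgoing_pairs(
    [("f1", "aaa"), ("f2", "bbb"), ("f2", "bbb")], remote))
# only ("f2", "bbb") still needs uploading
```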
1125 1125 def outgoinghook(ui, repo, other, opts, missing):
1126 1126 if opts.pop('large', None):
1127 1127 lfhashes = set()
1128 1128 if ui.debugflag:
1129 1129 toupload = {}
1130 1130 def addfunc(fn, lfhash):
1131 1131 if fn not in toupload:
1132 1132 toupload[fn] = []
1133 1133 toupload[fn].append(lfhash)
1134 1134 lfhashes.add(lfhash)
1135 1135 def showhashes(fn):
1136 1136 for lfhash in sorted(toupload[fn]):
1137 1137 ui.debug(' %s\n' % (lfhash))
1138 1138 else:
1139 1139 toupload = set()
1140 1140 def addfunc(fn, lfhash):
1141 1141 toupload.add(fn)
1142 1142 lfhashes.add(lfhash)
1143 1143 def showhashes(fn):
1144 1144 pass
1145 1145 _getoutgoings(repo, other, missing, addfunc)
1146 1146
1147 1147 if not toupload:
1148 1148 ui.status(_('largefiles: no files to upload\n'))
1149 1149 else:
1150 1150 ui.status(_('largefiles to upload (%d entities):\n')
1151 1151 % (len(lfhashes)))
1152 1152 for file in sorted(toupload):
1153 1153 ui.status(lfutil.splitstandin(file) + '\n')
1154 1154 showhashes(file)
1155 1155 ui.status('\n')
1156 1156
1157 1157 def summaryremotehook(ui, repo, opts, changes):
1158 1158 largeopt = opts.get('large', False)
1159 1159 if changes is None:
1160 1160 if largeopt:
1161 1161 return (False, True) # only outgoing check is needed
1162 1162 else:
1163 1163 return (False, False)
1164 1164 elif largeopt:
1165 1165 url, branch, peer, outgoing = changes[1]
1166 1166 if peer is None:
1167 1167 # i18n: column positioning for "hg summary"
1168 1168 ui.status(_('largefiles: (no remote repo)\n'))
1169 1169 return
1170 1170
1171 1171 toupload = set()
1172 1172 lfhashes = set()
1173 1173 def addfunc(fn, lfhash):
1174 1174 toupload.add(fn)
1175 1175 lfhashes.add(lfhash)
1176 1176 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1177 1177
1178 1178 if not toupload:
1179 1179 # i18n: column positioning for "hg summary"
1180 1180 ui.status(_('largefiles: (no files to upload)\n'))
1181 1181 else:
1182 1182 # i18n: column positioning for "hg summary"
1183 1183 ui.status(_('largefiles: %d entities for %d files to upload\n')
1184 1184 % (len(lfhashes), len(toupload)))
1185 1185
1186 1186 def overridesummary(orig, ui, repo, *pats, **opts):
1187 1187 try:
1188 1188 repo.lfstatus = True
1189 1189 orig(ui, repo, *pats, **opts)
1190 1190 finally:
1191 1191 repo.lfstatus = False
1192 1192
1193 1193 def scmutiladdremove(orig, repo, matcher, prefix, opts=None, dry_run=None,
1194 1194 similarity=None):
1195 1195 if opts is None:
1196 1196 opts = {}
1197 1197 if not lfutil.islfilesrepo(repo):
1198 1198 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1199 1199 # Get the list of missing largefiles so we can remove them
1200 1200 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1201 1201 unsure, s = lfdirstate.status(match_.always(repo.root, repo.getcwd()), [],
1202 1202 False, False, False)
1203 1203
1204 1204 # Call into the normal remove code, but leave the removal of the standin
1205 1205 # to the original addremove. Monkey patching here makes sure we don't
1206 1206 # remove the standin in the largefiles code, preventing a very confused
1207 1207 # state later.
1208 1208 if s.deleted:
1209 1209 m = copy.copy(matcher)
1210 1210
1211 1211 # The m._files and m._map attributes are not changed to the deleted list
1212 1212 # because that affects the m.exact() test, which in turn governs whether
1213 1213 # or not the file name is printed, and how. Simply limit the original
1214 1214 # matches to those in the deleted status list.
1215 1215 matchfn = m.matchfn
1216 1216 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1217 1217
1218 1218 removelargefiles(repo.ui, repo, True, m, **opts)
1219 1219 # Call into the normal add code, and any files that *should* be added as
1220 1220 # largefiles will be
1221 1221 added, bad = addlargefiles(repo.ui, repo, True, matcher, **opts)
1222 1222 # Now that we've handled largefiles, hand off to the original addremove
1223 1223 # function to take care of the rest. Make sure it doesn't do anything with
1224 1224 # largefiles by passing a matcher that will ignore them.
1225 1225 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1226 1226 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1227 1227
1228 1228 # Calling purge with --all will cause the largefiles to be deleted.
1229 1229 # Override repo.status to prevent this from happening.
1230 1230 def overridepurge(orig, ui, repo, *dirs, **opts):
1231 1231 # XXX Monkey patching a repoview will not work. The assigned attribute will
1232 1232 # be set on the unfiltered repo, but we will only lookup attributes in the
1233 1233 # unfiltered repo if the lookup in the repoview object itself fails. As the
1234 1234 # monkey patched method exists on the repoview class the lookup will not
1235 1235 # fail. As a result, the original version will shadow the monkey patched
1236 1236 # one, defeating the monkey patch.
1237 1237 #
1238 1238 # As a workaround we use an unfiltered repo here. We should do something
1239 1239 # cleaner instead.
1240 1240 repo = repo.unfiltered()
1241 1241 oldstatus = repo.status
1242 1242 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1243 1243 clean=False, unknown=False, listsubrepos=False):
1244 1244 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1245 1245 listsubrepos)
1246 1246 lfdirstate = lfutil.openlfdirstate(ui, repo)
1247 1247 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1248 1248 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1249 1249 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1250 1250 unknown, ignored, r.clean)
1251 1251 repo.status = overridestatus
1252 1252 orig(ui, repo, *dirs, **opts)
1253 1253 repo.status = oldstatus
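The repoview caveat documented above can be reproduced with a toy proxy. The point is a plain Python attribute-lookup rule: `__getattr__` on the view is only consulted when normal lookup fails, so a method defined on the view class always shadows an attribute that an assignment forwarded onto the wrapped object. (The classes below are purely illustrative, not Mercurial's real `repoview`.)

```python
class Repo:
    def status(self):
        return "unfiltered status"

class RepoView:
    """Toy filtered view: reads fall through to the wrapped repo,
    writes are forwarded onto it."""
    def __init__(self, repo):
        object.__setattr__(self, "_repo", repo)
    def status(self):                      # exists on the view class
        return "view status"
    def __getattr__(self, name):           # only used when lookup fails
        return getattr(self._repo, name)
    def __setattr__(self, name, value):    # writes land on the wrapped repo
        setattr(self._repo, name, value)

view = RepoView(Repo())
view.status = lambda: "patched status"     # stored on the inner Repo
print(view.status())        # the class method still wins over the patch
print(view._repo.status())  # the patch only took effect here
```

This is why the override above switches to `repo.unfiltered()` before swapping `repo.status`.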
1254 1254 def overriderollback(orig, ui, repo, **opts):
1255 1255 wlock = repo.wlock()
1256 1256 try:
1257 1257 before = repo.dirstate.parents()
1258 1258 orphans = set(f for f in repo.dirstate
1259 1259 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1260 1260 result = orig(ui, repo, **opts)
1261 1261 after = repo.dirstate.parents()
1262 1262 if before == after:
1263 1263 return result # no need to restore standins
1264 1264
1265 1265 pctx = repo['.']
1266 1266 for f in repo.dirstate:
1267 1267 if lfutil.isstandin(f):
1268 1268 orphans.discard(f)
1269 1269 if repo.dirstate[f] == 'r':
1270 1270 repo.wvfs.unlinkpath(f, ignoremissing=True)
1271 1271 elif f in pctx:
1272 1272 fctx = pctx[f]
1273 1273 repo.wwrite(f, fctx.data(), fctx.flags())
1274 1274 else:
1275 1275 # the content of the standin is not important in the
1276 1276 # 'a', 'm' or 'n' (coming from the 2nd parent) cases
1277 1277 lfutil.writestandin(repo, f, '', False)
1278 1278 for standin in orphans:
1279 1279 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1280 1280
1281 1281 lfdirstate = lfutil.openlfdirstate(ui, repo)
1282 1282 orphans = set(lfdirstate)
1283 1283 lfiles = lfutil.listlfiles(repo)
1284 1284 for file in lfiles:
1285 1285 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1286 1286 orphans.discard(file)
1287 1287 for lfile in orphans:
1288 1288 lfdirstate.drop(lfile)
1289 1289 lfdirstate.write()
1290 1290 finally:
1291 1291 wlock.release()
1292 1292 return result
1293 1293
1294 1294 def overridetransplant(orig, ui, repo, *revs, **opts):
1295 1295 resuming = opts.get('continue')
1296 1296 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1297 1297 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1298 1298 try:
1299 1299 result = orig(ui, repo, *revs, **opts)
1300 1300 finally:
1301 1301 repo._lfstatuswriters.pop()
1302 1302 repo._lfcommithooks.pop()
1303 1303 return result
1304 1304
1305 1305 def overridecat(orig, ui, repo, file1, *pats, **opts):
1306 1306 ctx = scmutil.revsingle(repo, opts.get('rev'))
1307 1307 err = 1
1308 1308 notbad = set()
1309 1309 m = scmutil.match(ctx, (file1,) + pats, opts)
1310 1310 origmatchfn = m.matchfn
1311 1311 def lfmatchfn(f):
1312 1312 if origmatchfn(f):
1313 1313 return True
1314 1314 lf = lfutil.splitstandin(f)
1315 1315 if lf is None:
1316 1316 return False
1317 1317 notbad.add(lf)
1318 1318 return origmatchfn(lf)
1319 1319 m.matchfn = lfmatchfn
1320 1320 origbadfn = m.bad
1321 1321 def lfbadfn(f, msg):
1322 1322 if f not in notbad:
1323 1323 origbadfn(f, msg)
1324 1324 m.bad = lfbadfn
1325 1325
1326 1326 origvisitdirfn = m.visitdir
1327 1327 def lfvisitdirfn(dir):
1328 1328 if dir == lfutil.shortname:
1329 1329 return True
1330 1330 ret = origvisitdirfn(dir)
1331 1331 if ret:
1332 1332 return ret
1333 1333 lf = lfutil.splitstandin(dir)
1334 1334 if lf is None:
1335 1335 return False
1336 1336 return origvisitdirfn(lf)
1337 1337 m.visitdir = lfvisitdirfn
1338 1338
1339 1339 for f in ctx.walk(m):
1340 1340 fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
1341 1341 pathname=f)
1342 1342 lf = lfutil.splitstandin(f)
1343 1343 if lf is None or origmatchfn(f):
1344 1344 # duplicating unreachable code from commands.cat
1345 1345 data = ctx[f].data()
1346 1346 if opts.get('decode'):
1347 1347 data = repo.wwritedata(f, data)
1348 1348 fp.write(data)
1349 1349 else:
1350 1350 hash = lfutil.readstandin(repo, lf, ctx.rev())
1351 1351 if not lfutil.inusercache(repo.ui, hash):
1352 1352 store = basestore._openstore(repo)
1353 1353 success, missing = store.get([(lf, hash)])
1354 1354 if len(success) != 1:
1355 1355 raise error.Abort(
1356 1356 _('largefile %s is not in cache and could not be '
1357 1357 'downloaded') % lf)
1358 1358 path = lfutil.usercachepath(repo.ui, hash)
1359 1359 fpin = open(path, "rb")
1360 1360 for chunk in util.filechunkiter(fpin, 128 * 1024):
1361 1361 fp.write(chunk)
1362 1362 fpin.close()
1363 1363 fp.close()
1364 1364 err = 0
1365 1365 return err
1366 1366
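The `lfmatchfn` wrapping in `overridecat` follows a reusable pattern: widen an existing match function so a standin path also matches whenever its corresponding largefile would have, while recording which names should be excused from "bad file" warnings. A self-contained sketch (the `.hglf/` prefix is an assumption standing in for the real standin layout):

```python
STANDIN_PREFIX = ".hglf/"   # illustrative standin directory

def splitstandin(path):
    """Return the largefile name for a standin path, else None."""
    if path.startswith(STANDIN_PREFIX):
        return path[len(STANDIN_PREFIX):]
    return None

def widen_matchfn(origmatchfn, notbad):
    """Wrap a match function so that a standin path matches whenever
    its corresponding largefile would have matched."""
    def matchfn(f):
        if origmatchfn(f):
            return True
        lf = splitstandin(f)
        if lf is None:
            return False
        notbad.add(lf)          # suppress later "no such file" warnings
        return origmatchfn(lf)
    return matchfn

notbad = set()
m = widen_matchfn(lambda f: f == "big.bin", notbad)
print(m(".hglf/big.bin"))  # True: matched through the largefile name
print(m("other.txt"))      # False: neither a match nor a standin match
```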
1367 def mergeupdate(orig, repo, node, branchmerge, force, partial,
1367 def mergeupdate(orig, repo, node, branchmerge, force,
1368 1368 *args, **kwargs):
1369 matcher = kwargs.get('matcher', None)
1370 # note if this is a partial update
1371 partial = matcher and not matcher.always()
1369 1372 wlock = repo.wlock()
1370 1373 try:
1371 1374 # branch | | |
1372 1375 # merge | force | partial | action
1373 1376 # -------+-------+---------+--------------
1374 1377 # x | x | x | linear-merge
1375 1378 # o | x | x | branch-merge
1376 1379 # x | o | x | overwrite (as clean update)
1377 1380 # o | o | x | force-branch-merge (*1)
1378 1381 # x | x | o | (*)
1379 1382 # o | x | o | (*)
1380 1383 # x | o | o | overwrite (as revert)
1381 1384 # o | o | o | (*)
1382 1385 #
1383 1386 # (*) don't care
1384 1387 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1385 1388
1386 1389 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1387 1390 unsure, s = lfdirstate.status(match_.always(repo.root,
1388 1391 repo.getcwd()),
1389 1392 [], False, False, False)
1390 1393 pctx = repo['.']
1391 1394 for lfile in unsure + s.modified:
1392 1395 lfileabs = repo.wvfs.join(lfile)
1393 1396 if not os.path.exists(lfileabs):
1394 1397 continue
1395 1398 lfhash = lfutil.hashrepofile(repo, lfile)
1396 1399 standin = lfutil.standin(lfile)
1397 1400 lfutil.writestandin(repo, standin, lfhash,
1398 1401 lfutil.getexecutable(lfileabs))
1399 1402 if (standin in pctx and
1400 1403 lfhash == lfutil.readstandin(repo, lfile, '.')):
1401 1404 lfdirstate.normal(lfile)
1402 1405 for lfile in s.added:
1403 1406 lfutil.updatestandin(repo, lfutil.standin(lfile))
1404 1407 lfdirstate.write()
1405 1408
1406 1409 oldstandins = lfutil.getstandinsstate(repo)
1407 1410
1408 result = orig(repo, node, branchmerge, force, partial, *args, **kwargs)
1411 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1409 1412
1410 1413 newstandins = lfutil.getstandinsstate(repo)
1411 1414 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1412 1415 if branchmerge or force or partial:
1413 1416 filelist.extend(s.deleted + s.removed)
1414 1417
1415 1418 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1416 1419 normallookup=partial)
1417 1420
1418 1421 return result
1419 1422 finally:
1420 1423 wlock.release()
1421 1424
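The diff to `mergeupdate` above replaces the explicit `partial` argument with a matcher pulled from the keyword arguments, deriving partial-ness as `matcher and not matcher.always()`. A minimal sketch of that derivation (the `Matcher` class here is a toy, not Mercurial's matcher):

```python
class Matcher:
    """Toy matcher: with no patterns it matches everything."""
    def __init__(self, patterns=None):
        self.patterns = patterns or []
    def always(self):
        return not self.patterns

def is_partial(matcher):
    """A missing matcher or an always-matcher means a full update."""
    return bool(matcher and not matcher.always())

print(is_partial(None))               # False: no matcher at all
print(is_partial(Matcher()))          # False: matches everything
print(is_partial(Matcher(["*.py"])))  # True: restricted update
```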
1422 1425 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1423 1426 result = orig(repo, files, *args, **kwargs)
1424 1427
1425 1428 filelist = [lfutil.splitstandin(f) for f in files if lfutil.isstandin(f)]
1426 1429 if filelist:
1427 1430 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1428 1431 printmessage=False, normallookup=True)
1429 1432
1430 1433 return result
@@ -1,1249 +1,1249 b''
1 1 # rebase.py - rebasing feature for mercurial
2 2 #
3 3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to move sets of revisions to a different ancestor
9 9
10 10 This extension lets you rebase changesets in an existing Mercurial
11 11 repository.
12 12
13 13 For more information:
14 14 https://mercurial-scm.org/wiki/RebaseExtension
15 15 '''
16 16
17 17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
18 18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
19 19 from mercurial import copies, repoview, revset
20 20 from mercurial.commands import templateopts
21 21 from mercurial.node import nullrev, nullid, hex, short
22 22 from mercurial.lock import release
23 23 from mercurial.i18n import _
24 24 import os, errno
25 25
26 26 # The following constants are used throughout the rebase module. The ordering of
27 27 # their values must be maintained.
28 28
29 29 # Indicates that a revision needs to be rebased
30 30 revtodo = -1
31 31 nullmerge = -2
32 32 revignored = -3
33 33 # successor in rebase destination
34 34 revprecursor = -4
35 35 # plain prune (no successor)
36 36 revpruned = -5
37 37 revskipped = (revignored, revprecursor, revpruned)
38 38
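The ordering constraint on these sentinel values matters because the rebase code filters state entries with comparisons such as `state[rebased] > nullmerge` to pick out "real" revisions (either already rebased, so >= 0, or still `revtodo`). A toy illustration of that convention (the state dict contents are made up for the example):

```python
# sentinel values mirroring the constants above; ordering is significant
revtodo = -1       # needs rebasing
nullmerge = -2     # dropped from history
revignored = -3
revprecursor = -4
revpruned = -5

# hypothetical state: old rev -> new rev or sentinel
state = {10: 12, 11: revtodo, 12: nullmerge, 13: revpruned}

# entries strictly greater than nullmerge are either a rebased
# revision number (>= 0) or a revision still to do (revtodo)
real = {old for old, new in state.items() if new > nullmerge}
print(sorted(real))  # [10, 11]
```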
39 39 cmdtable = {}
40 40 command = cmdutil.command(cmdtable)
41 41 # Note for extension authors: ONLY specify testedwith = 'internal' for
42 42 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
43 43 # be specifying the version(s) of Mercurial they are tested with, or
44 44 # leave the attribute unspecified.
45 45 testedwith = 'internal'
46 46
47 47 def _nothingtorebase():
48 48 return 1
49 49
50 50 def _makeextrafn(copiers):
51 51 """make an extrafn out of the given copy-functions.
52 52
53 53 A copy function takes a context and an extra dict, and mutates the
54 54 extra dict as needed based on the given context.
55 55 """
56 56 def extrafn(ctx, extra):
57 57 for c in copiers:
58 58 c(ctx, extra)
59 59 return extrafn
60 60
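`_makeextrafn` above is a small function-composition helper: each copier mutates the shared `extra` dict in turn. A standalone sketch of the same shape (the two copiers are hypothetical, and the plain-dict "context" stands in for a real changectx):

```python
def makeextrafn(copiers):
    """Compose copy functions into one extrafn; each copier mutates
    the extra dict based on the given context."""
    def extrafn(ctx, extra):
        for c in copiers:
            c(ctx, extra)
    return extrafn

# two hypothetical copiers
stamp = lambda ctx, extra: extra.update(source=ctx["node"])
brand = lambda ctx, extra: extra.update(tool="rebase")

fn = makeextrafn([stamp, brand])
extra = {}
fn({"node": "abc123"}, extra)
print(extra)  # {'source': 'abc123', 'tool': 'rebase'}
```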
61 61 def _destrebase(repo):
62 62 # Destination defaults to the latest revision in the
63 63 # current branch
64 64 branch = repo[None].branch()
65 65 return repo[branch].rev()
66 66
67 67 def _revsetdestrebase(repo, subset, x):
68 68 # ``_rebasedefaultdest()``
69 69
70 70 # default destination for rebase.
71 71 # # XXX: Currently private because I expect the signature to change.
72 72 # # XXX: - taking rev as arguments,
73 73 # # XXX: - bailing out in case of ambiguity vs returning all data.
74 74 # # XXX: - probably merging with the merge destination.
75 75 # i18n: "_rebasedefaultdest" is a keyword
76 76 revset.getargs(x, 0, 0, _("_rebasedefaultdest takes no arguments"))
77 77 return subset & revset.baseset([_destrebase(repo)])
78 78
79 79 @command('rebase',
80 80 [('s', 'source', '',
81 81 _('rebase the specified changeset and descendants'), _('REV')),
82 82 ('b', 'base', '',
83 83 _('rebase everything from branching point of specified changeset'),
84 84 _('REV')),
85 85 ('r', 'rev', [],
86 86 _('rebase these revisions'),
87 87 _('REV')),
88 88 ('d', 'dest', '',
89 89 _('rebase onto the specified changeset'), _('REV')),
90 90 ('', 'collapse', False, _('collapse the rebased changesets')),
91 91 ('m', 'message', '',
92 92 _('use text as collapse commit message'), _('TEXT')),
93 93 ('e', 'edit', False, _('invoke editor on commit messages')),
94 94 ('l', 'logfile', '',
95 95 _('read collapse commit message from file'), _('FILE')),
96 96 ('k', 'keep', False, _('keep original changesets')),
97 97 ('', 'keepbranches', False, _('keep original branch names')),
98 98 ('D', 'detach', False, _('(DEPRECATED)')),
99 99 ('i', 'interactive', False, _('(DEPRECATED)')),
100 100 ('t', 'tool', '', _('specify merge tool')),
101 101 ('c', 'continue', False, _('continue an interrupted rebase')),
102 102 ('a', 'abort', False, _('abort an interrupted rebase'))] +
103 103 templateopts,
104 104 _('[-s REV | -b REV] [-d REV] [OPTION]'))
105 105 def rebase(ui, repo, **opts):
106 106 """move changeset (and descendants) to a different branch
107 107
108 108 Rebase uses repeated merging to graft changesets from one part of
109 109 history (the source) onto another (the destination). This can be
110 110 useful for linearizing *local* changes relative to a master
111 111 development tree.
112 112
113 113 You should not rebase changesets that have already been shared
114 114 with others. Doing so will force everybody else to perform the
115 115 same rebase or they will end up with duplicated changesets after
116 116 pulling in your rebased changesets.
117 117
118 118 In its default configuration, Mercurial will prevent you from
119 119 rebasing published changes. See :hg:`help phases` for details.
120 120
121 121 If you don't specify a destination changeset (``-d/--dest``),
122 122 rebase uses the current branch tip as the destination. (The
123 123 destination changeset is not modified by rebasing, but new
124 124 changesets are added as its descendants.)
125 125
126 126 You can specify which changesets to rebase in two ways: as a
127 127 "source" changeset or as a "base" changeset. Both are shorthand
128 128 for a topologically related set of changesets (the "source
129 129 branch"). If you specify source (``-s/--source``), rebase will
130 130 rebase that changeset and all of its descendants onto dest. If you
131 131 specify base (``-b/--base``), rebase will select ancestors of base
132 132 back to but not including the common ancestor with dest. Thus,
133 133 ``-b`` is less precise but more convenient than ``-s``: you can
134 134 specify any changeset in the source branch, and rebase will select
135 135 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
136 136 uses the parent of the working directory as the base.
137 137
138 138 For advanced usage, a third way is available through the ``--rev``
139 139 option. It allows you to specify an arbitrary set of changesets to
140 140 rebase. Descendants of revs you specify with this option are not
141 141 automatically included in the rebase.
142 142
143 143 By default, rebase recreates the changesets in the source branch
144 144 as descendants of dest and then destroys the originals. Use
145 145 ``--keep`` to preserve the original source changesets. Some
146 146 changesets in the source branch (e.g. merges from the destination
147 147 branch) may be dropped if they no longer contribute any change.
148 148
149 149 One result of the rules for selecting the destination changeset
150 150 and source branch is that, unlike ``merge``, rebase will do
151 151 nothing if you are at the branch tip of a named branch
152 152 with two heads. You need to explicitly specify source and/or
153 153 destination (or ``update`` to the other head, if it's the head of
154 154 the intended source branch).
155 155
156 156 If a rebase is interrupted to manually resolve a merge, it can be
157 157 continued with --continue/-c or aborted with --abort/-a.
158 158
159 159 .. container:: verbose
160 160
161 161 Examples:
162 162
163 163 - move "local changes" (current commit back to branching point)
164 164 to the current branch tip after a pull::
165 165
166 166 hg rebase
167 167
168 168 - move a single changeset to the stable branch::
169 169
170 170 hg rebase -r 5f493448 -d stable
171 171
172 172 - splice a commit and all its descendants onto another part of history::
173 173
174 174 hg rebase --source c0c3 --dest 4cf9
175 175
176 176 - rebase everything on a branch marked by a bookmark onto the
177 177 default branch::
178 178
179 179 hg rebase --base myfeature --dest default
180 180
181 181 - collapse a sequence of changes into a single commit::
182 182
183 183 hg rebase --collapse -r 1520:1525 -d .
184 184
185 185 - move a named branch while preserving its name::
186 186
187 187 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
188 188
189 189 Returns 0 on success, 1 if nothing to rebase or there are
190 190 unresolved conflicts.
191 191
192 192 """
193 193 originalwd = target = None
194 194 activebookmark = None
195 195 external = nullrev
196 196 # Mapping from each old revision id to either its new rebased
197 197 # revision or what needs to be done with the old revision. This
198 198 # state dict holds most of the rebase progress state.
199 199 state = {}
200 200 skipped = set()
201 201 targetancestors = set()
202 202
203 203
204 204 lock = wlock = None
205 205 try:
206 206 wlock = repo.wlock()
207 207 lock = repo.lock()
208 208
209 209 # Validate input and define rebasing points
210 210 destf = opts.get('dest', None)
211 211 srcf = opts.get('source', None)
212 212 basef = opts.get('base', None)
213 213 revf = opts.get('rev', [])
214 214 contf = opts.get('continue')
215 215 abortf = opts.get('abort')
216 216 collapsef = opts.get('collapse', False)
217 217 collapsemsg = cmdutil.logmessage(ui, opts)
218 218 date = opts.get('date', None)
219 219 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
220 220 extrafns = []
221 221 if e:
222 222 extrafns = [e]
223 223 keepf = opts.get('keep', False)
224 224 keepbranchesf = opts.get('keepbranches', False)
225 225 # keepopen is not meant for use on the command line, but by
226 226 # other extensions
227 227 keepopen = opts.get('keepopen', False)
228 228
229 229 if opts.get('interactive'):
230 230 try:
231 231 if extensions.find('histedit'):
232 232 enablehistedit = ''
233 233 except KeyError:
234 234 enablehistedit = " --config extensions.histedit="
235 235 help = "hg%s help -e histedit" % enablehistedit
236 236 msg = _("interactive history editing is supported by the "
237 237 "'histedit' extension (see \"%s\")") % help
238 238 raise error.Abort(msg)
239 239
240 240 if collapsemsg and not collapsef:
241 241 raise error.Abort(
242 242 _('message can only be specified with collapse'))
243 243
244 244 if contf or abortf:
245 245 if contf and abortf:
246 246 raise error.Abort(_('cannot use both abort and continue'))
247 247 if collapsef:
248 248 raise error.Abort(
249 249 _('cannot use collapse with continue or abort'))
250 250 if srcf or basef or destf:
251 251 raise error.Abort(
252 252 _('abort and continue do not allow specifying revisions'))
253 253 if abortf and opts.get('tool', False):
254 254 ui.warn(_('tool option will be ignored\n'))
255 255
256 256 try:
257 257 (originalwd, target, state, skipped, collapsef, keepf,
258 258 keepbranchesf, external, activebookmark) = restorestatus(repo)
259 259 except error.RepoLookupError:
260 260 if abortf:
261 261 clearstatus(repo)
262 262 repo.ui.warn(_('rebase aborted (no revision is removed,'
263 263 ' only broken state is cleared)\n'))
264 264 return 0
265 265 else:
266 266 msg = _('cannot continue inconsistent rebase')
267 267 hint = _('use "hg rebase --abort" to clear broken state')
268 268 raise error.Abort(msg, hint=hint)
269 269 if abortf:
270 270 return abort(repo, originalwd, target, state,
271 271 activebookmark=activebookmark)
272 272 else:
273 273 if srcf and basef:
274 274 raise error.Abort(_('cannot specify both a '
275 275 'source and a base'))
276 276 if revf and basef:
277 277 raise error.Abort(_('cannot specify both a '
278 278 'revision and a base'))
279 279 if revf and srcf:
280 280 raise error.Abort(_('cannot specify both a '
281 281 'revision and a source'))
282 282
283 283 cmdutil.checkunfinished(repo)
284 284 cmdutil.bailifchanged(repo)
285 285
286 286 if destf:
287 287 dest = scmutil.revsingle(repo, destf)
288 288 else:
289 289 dest = repo[_destrebase(repo)]
290 290 destf = str(dest)
291 291
292 292 if revf:
293 293 rebaseset = scmutil.revrange(repo, revf)
294 294 if not rebaseset:
295 295 ui.status(_('empty "rev" revision set - '
296 296 'nothing to rebase\n'))
297 297 return _nothingtorebase()
298 298 elif srcf:
299 299 src = scmutil.revrange(repo, [srcf])
300 300 if not src:
301 301 ui.status(_('empty "source" revision set - '
302 302 'nothing to rebase\n'))
303 303 return _nothingtorebase()
304 304 rebaseset = repo.revs('(%ld)::', src)
305 305 assert rebaseset
306 306 else:
307 307 base = scmutil.revrange(repo, [basef or '.'])
308 308 if not base:
309 309 ui.status(_('empty "base" revision set - '
310 310 "can't compute rebase set\n"))
311 311 return _nothingtorebase()
312 312 commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
313 313 if commonanc is not None:
314 314 rebaseset = repo.revs('(%d::(%ld) - %d)::',
315 315 commonanc, base, commonanc)
316 316 else:
317 317 rebaseset = []
318 318
319 319 if not rebaseset:
320 320 # transform to list because smartsets are not comparable to
321 321 # lists. This should be improved to honor laziness of
322 322 # smartset.
323 323 if list(base) == [dest.rev()]:
324 324 if basef:
325 325 ui.status(_('nothing to rebase - %s is both "base"'
326 326 ' and destination\n') % dest)
327 327 else:
328 328 ui.status(_('nothing to rebase - working directory '
329 329 'parent is also destination\n'))
330 330 elif not repo.revs('%ld - ::%d', base, dest):
331 331 if basef:
332 332 ui.status(_('nothing to rebase - "base" %s is '
333 333 'already an ancestor of destination '
334 334 '%s\n') %
335 335 ('+'.join(str(repo[r]) for r in base),
336 336 dest))
337 337 else:
338 338 ui.status(_('nothing to rebase - working '
339 339 'directory parent is already an '
340 340 'ancestor of destination %s\n') % dest)
341 341 else: # can it happen?
342 342 ui.status(_('nothing to rebase from %s to %s\n') %
343 343 ('+'.join(str(repo[r]) for r in base), dest))
344 344 return _nothingtorebase()
345 345
346 346 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
347 347 if (not (keepf or allowunstable)
348 348 and repo.revs('first(children(%ld) - %ld)',
349 349 rebaseset, rebaseset)):
350 350 raise error.Abort(
351 351 _("can't remove original changesets with"
352 352 " unrebased descendants"),
353 353 hint=_('use --keep to keep original changesets'))
354 354
355 355 obsoletenotrebased = {}
356 356 if ui.configbool('experimental', 'rebaseskipobsolete'):
357 357 rebasesetrevs = set(rebaseset)
358 358 obsoletenotrebased = _computeobsoletenotrebased(repo,
359 359 rebasesetrevs,
360 360 dest)
361 361
362 362 # - plain prune (no successor) changesets are rebased
363 363 # - split changesets are not rebased if at least one of the
364 364 # changeset resulting from the split is an ancestor of dest
365 365 rebaseset = rebasesetrevs - set(obsoletenotrebased)
366 366 result = buildstate(repo, dest, rebaseset, collapsef,
367 367 obsoletenotrebased)
368 368
369 369 if not result:
370 370 # Empty state built, nothing to rebase
371 371 ui.status(_('nothing to rebase\n'))
372 372 return _nothingtorebase()
373 373
374 374 root = min(rebaseset)
375 375 if not keepf and not repo[root].mutable():
376 376 raise error.Abort(_("can't rebase public changeset %s")
377 377 % repo[root],
378 378 hint=_('see "hg help phases" for details'))
379 379
380 380 originalwd, target, state = result
381 381 if collapsef:
382 382 targetancestors = repo.changelog.ancestors([target],
383 383 inclusive=True)
384 384 external = externalparent(repo, state, targetancestors)
385 385
386 386 if dest.closesbranch() and not keepbranchesf:
387 387 ui.status(_('reopening closed branch head %s\n') % dest)
388 388
389 389 if keepbranchesf and collapsef:
390 390 branches = set()
391 391 for rev in state:
392 392 branches.add(repo[rev].branch())
393 393 if len(branches) > 1:
394 394 raise error.Abort(_('cannot collapse multiple named '
395 395 'branches'))
396 396
397 397 # Rebase
398 398 if not targetancestors:
399 399 targetancestors = repo.changelog.ancestors([target], inclusive=True)
400 400
401 401 # Keep track of the current bookmarks in order to reset them later
402 402 currentbookmarks = repo._bookmarks.copy()
403 403 activebookmark = activebookmark or repo._activebookmark
404 404 if activebookmark:
405 405 bookmarks.deactivate(repo)
406 406
407 407 extrafn = _makeextrafn(extrafns)
408 408
409 409 sortedstate = sorted(state)
410 410 total = len(sortedstate)
411 411 pos = 0
412 412 for rev in sortedstate:
413 413 ctx = repo[rev]
414 414 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
415 415 ctx.description().split('\n', 1)[0])
416 416 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
417 417 if names:
418 418 desc += ' (%s)' % ' '.join(names)
419 419 pos += 1
420 420 if state[rev] == revtodo:
421 421 ui.status(_('rebasing %s\n') % desc)
422 422 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
423 423 _('changesets'), total)
424 424 p1, p2, base = defineparents(repo, rev, target, state,
425 425 targetancestors)
426 426 storestatus(repo, originalwd, target, state, collapsef, keepf,
427 427 keepbranchesf, external, activebookmark)
428 428 if len(repo[None].parents()) == 2:
429 429 repo.ui.debug('resuming interrupted rebase\n')
430 430 else:
431 431 try:
432 432 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
433 433 'rebase')
434 434 stats = rebasenode(repo, rev, p1, base, state,
435 435 collapsef, target)
436 436 if stats and stats[3] > 0:
437 437 raise error.InterventionRequired(
438 438 _('unresolved conflicts (see hg '
439 439 'resolve, then hg rebase --continue)'))
440 440 finally:
441 441 ui.setconfig('ui', 'forcemerge', '', 'rebase')
442 442 if not collapsef:
443 443 merging = p2 != nullrev
444 444 editform = cmdutil.mergeeditform(merging, 'rebase')
445 445 editor = cmdutil.getcommiteditor(editform=editform, **opts)
446 446 newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
447 447 editor=editor,
448 448 keepbranches=keepbranchesf,
449 449 date=date)
450 450 else:
451 451 # Skip commit if we are collapsing
452 452 repo.dirstate.beginparentchange()
453 453 repo.setparents(repo[p1].node())
454 454 repo.dirstate.endparentchange()
455 455 newnode = None
456 456 # Update the state
457 457 if newnode is not None:
458 458 state[rev] = repo[newnode].rev()
459 459 ui.debug('rebased as %s\n' % short(newnode))
460 460 else:
461 461 if not collapsef:
462 462 ui.warn(_('note: rebase of %d:%s created no changes '
463 463 'to commit\n') % (rev, ctx))
464 464 skipped.add(rev)
465 465 state[rev] = p1
466 466 ui.debug('next revision set to %s\n' % p1)
467 467 elif state[rev] == nullmerge:
468 468 ui.debug('ignoring null merge rebase of %s\n' % rev)
469 469 elif state[rev] == revignored:
470 470 ui.status(_('not rebasing ignored %s\n') % desc)
471 471 elif state[rev] == revprecursor:
472 472 targetctx = repo[obsoletenotrebased[rev]]
473 473 desctarget = '%d:%s "%s"' % (targetctx.rev(), targetctx,
474 474 targetctx.description().split('\n', 1)[0])
475 475 msg = _('note: not rebasing %s, already in destination as %s\n')
476 476 ui.status(msg % (desc, desctarget))
477 477 elif state[rev] == revpruned:
478 478 msg = _('note: not rebasing %s, it has no successor\n')
479 479 ui.status(msg % desc)
480 480 else:
481 481 ui.status(_('already rebased %s as %s\n') %
482 482 (desc, repo[state[rev]]))
483 483
484 484 ui.progress(_('rebasing'), None)
485 485 ui.note(_('rebase merging completed\n'))
486 486
487 487 if collapsef and not keepopen:
488 488 p1, p2, _base = defineparents(repo, min(state), target,
489 489 state, targetancestors)
490 490 editopt = opts.get('edit')
491 491 editform = 'rebase.collapse'
492 492 if collapsemsg:
493 493 commitmsg = collapsemsg
494 494 else:
495 495 commitmsg = 'Collapsed revision'
496 496 for rebased in state:
497 497 if rebased not in skipped and state[rebased] > nullmerge:
498 498 commitmsg += '\n* %s' % repo[rebased].description()
499 499 editopt = True
500 500 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
501 501 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
502 502 extrafn=extrafn, editor=editor,
503 503 keepbranches=keepbranchesf,
504 504 date=date)
505 505 if newnode is None:
506 506 newrev = target
507 507 else:
508 508 newrev = repo[newnode].rev()
509 509 for oldrev in state.iterkeys():
510 510 if state[oldrev] > nullmerge:
511 511 state[oldrev] = newrev
512 512
513 513 if 'qtip' in repo.tags():
514 514 updatemq(repo, state, skipped, **opts)
515 515
516 516 if currentbookmarks:
517 517 # Nodeids are needed to reset bookmarks
518 518 nstate = {}
519 519 for k, v in state.iteritems():
520 520 if v > nullmerge:
521 521 nstate[repo[k].node()] = repo[v].node()
522 522 # XXX this is the same as dest.node() for the non-continue path --
523 523 # this should probably be cleaned up
524 524 targetnode = repo[target].node()
525 525
526 526 # restore original working directory
527 527 # (we do this before stripping)
528 528 newwd = state.get(originalwd, originalwd)
529 529 if newwd < 0:
530 530 # original directory is a parent of rebase set root or ignored
531 531 newwd = originalwd
532 532 if newwd not in [c.rev() for c in repo[None].parents()]:
533 533 ui.note(_("update back to initial working directory parent\n"))
534 534 hg.updaterepo(repo, newwd, False)
535 535
536 536 if not keepf:
537 537 collapsedas = None
538 538 if collapsef:
539 539 collapsedas = newnode
540 540 clearrebased(ui, repo, state, skipped, collapsedas)
541 541
542 542 tr = None
543 543 try:
544 544 tr = repo.transaction('bookmark')
545 545 if currentbookmarks:
546 546 updatebookmarks(repo, targetnode, nstate, currentbookmarks, tr)
547 547 if activebookmark not in repo._bookmarks:
548 548 # active bookmark was divergent one and has been deleted
549 549 activebookmark = None
550 550 tr.close()
551 551 finally:
552 552 release(tr)
553 553 clearstatus(repo)
554 554
555 555 ui.note(_("rebase completed\n"))
556 556 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
557 557 if skipped:
558 558 ui.note(_("%d revisions have been skipped\n") % len(skipped))
559 559
560 560 if (activebookmark and
561 561 repo['.'].node() == repo._bookmarks[activebookmark]):
562 562 bookmarks.activate(repo, activebookmark)
563 563
564 564 finally:
565 565 release(lock, wlock)
566 566
567 567 def externalparent(repo, state, targetancestors):
568 568 """Return the revision that should be used as the second parent
569 569 when the revisions in state are collapsed on top of targetancestors.
570 570 Abort if there is more than one parent.
571 571 """
572 572 parents = set()
573 573 source = min(state)
574 574 for rev in state:
575 575 if rev == source:
576 576 continue
577 577 for p in repo[rev].parents():
578 578 if (p.rev() not in state
579 579 and p.rev() not in targetancestors):
580 580 parents.add(p.rev())
581 581 if not parents:
582 582 return nullrev
583 583 if len(parents) == 1:
584 584 return parents.pop()
585 585 raise error.Abort(_('unable to collapse on top of %s, there is more '
586 586 'than one external parent: %s') %
587 587 (max(targetancestors),
588 588 ', '.join(str(p) for p in sorted(parents))))
589 589
590 590 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
591 591 keepbranches=False, date=None):
592 592 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
593 593 but also store useful information in extra.
594 594 Return node of committed revision.'''
595 595 dsguard = cmdutil.dirstateguard(repo, 'rebase')
596 596 try:
597 597 repo.setparents(repo[p1].node(), repo[p2].node())
598 598 ctx = repo[rev]
599 599 if commitmsg is None:
600 600 commitmsg = ctx.description()
601 601 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
602 602 extra = ctx.extra().copy()
603 603 if not keepbranches:
604 604 del extra['branch']
605 605 extra['rebase_source'] = ctx.hex()
606 606 if extrafn:
607 607 extrafn(ctx, extra)
608 608
609 609 backup = repo.ui.backupconfig('phases', 'new-commit')
610 610 try:
611 611 targetphase = max(ctx.phase(), phases.draft)
612 612 repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
613 613 if keepbranch:
614 614 repo.ui.setconfig('ui', 'allowemptycommit', True)
615 615 # Commit might fail if unresolved files exist
616 616 if date is None:
617 617 date = ctx.date()
618 618 newnode = repo.commit(text=commitmsg, user=ctx.user(),
619 619 date=date, extra=extra, editor=editor)
620 620 finally:
621 621 repo.ui.restoreconfig(backup)
622 622
623 623 repo.dirstate.setbranch(repo[newnode].branch())
624 624 dsguard.close()
625 625 return newnode
626 626 finally:
627 627 release(dsguard)
628 628
629 629 def rebasenode(repo, rev, p1, base, state, collapse, target):
630 630 'Rebase a single revision rev on top of p1 using base as merge ancestor'
631 631 # Merge phase
632 632 # Update to target and merge it with local
633 633 if repo['.'].rev() != p1:
634 634 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
635 merge.update(repo, p1, False, True, False)
635 merge.update(repo, p1, False, True)
636 636 else:
637 637 repo.ui.debug(" already in target\n")
638 638 repo.dirstate.write(repo.currenttransaction())
639 639 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
640 640 if base is not None:
641 641 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
642 642 # When collapsing in-place, the parent is the common ancestor, we
643 643 # have to allow merging with it.
644 stats = merge.update(repo, rev, True, True, False, base, collapse,
644 stats = merge.update(repo, rev, True, True, base, collapse,
645 645 labels=['dest', 'source'])
646 646 if collapse:
647 647 copies.duplicatecopies(repo, rev, target)
648 648 else:
649 649 # If we're not using --collapse, we need to
650 650 # duplicate copies between the revision we're
651 651 # rebasing and its first parent, but *not*
652 652 # duplicate any copies that have already been
653 653 # performed in the destination.
654 654 p1rev = repo[rev].p1().rev()
655 655 copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
656 656 return stats
657 657
658 658 def nearestrebased(repo, rev, state):
659 659 """return the nearest ancestors of rev in the rebase result"""
660 660 rebased = [r for r in state if state[r] > nullmerge]
661 661 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
662 662 if candidates:
663 663 return state[candidates.first()]
664 664 else:
665 665 return None
666 666
667 667 def defineparents(repo, rev, target, state, targetancestors):
668 668 'Return the new parent relationship of the revision that will be rebased'
669 669 parents = repo[rev].parents()
670 670 p1 = p2 = nullrev
671 671
672 672 p1n = parents[0].rev()
673 673 if p1n in targetancestors:
674 674 p1 = target
675 675 elif p1n in state:
676 676 if state[p1n] == nullmerge:
677 677 p1 = target
678 678 elif state[p1n] in revskipped:
679 679 p1 = nearestrebased(repo, p1n, state)
680 680 if p1 is None:
681 681 p1 = target
682 682 else:
683 683 p1 = state[p1n]
684 684 else: # p1n external
685 685 p1 = target
686 686 p2 = p1n
687 687
688 688 if len(parents) == 2 and parents[1].rev() not in targetancestors:
689 689 p2n = parents[1].rev()
690 690 # interesting second parent
691 691 if p2n in state:
692 692 if p1 == target: # p1n in targetancestors or external
693 693 p1 = state[p2n]
694 694 elif state[p2n] in revskipped:
695 695 p2 = nearestrebased(repo, p2n, state)
696 696 if p2 is None:
697 697 # no ancestors rebased yet, detach
698 698 p2 = target
699 699 else:
700 700 p2 = state[p2n]
701 701 else: # p2n external
702 702 if p2 != nullrev: # p1n external too => rev is a merged revision
703 703 raise error.Abort(_('cannot use revision %d as base, result '
704 704 'would have 3 parents') % rev)
705 705 p2 = p2n
706 706 repo.ui.debug(" future parents are %d and %d\n" %
707 707 (repo[p1].rev(), repo[p2].rev()))
708 708
709 709 if rev == min(state):
710 710 # Case (1) initial changeset of a non-detaching rebase.
711 711 # Let the merge mechanism find the base itself.
712 712 base = None
713 713 elif not repo[rev].p2():
714 714 # Case (2) detaching the node with a single parent, use this parent
715 715 base = repo[rev].p1().rev()
716 716 else:
717 717 # Assuming there is a p1, this is the case where there also is a p2.
718 718 # We are thus rebasing a merge and need to pick the right merge base.
719 719 #
720 720 # Imagine we have:
721 721 # - M: current rebase revision in this step
722 722 # - A: one parent of M
723 723 # - B: other parent of M
724 724 # - D: destination of this merge step (p1 var)
725 725 #
726 726 # Consider the case where D is a descendant of A or B and the other is
727 727 # 'outside'. In this case, the right merge base is the D ancestor.
728 728 #
729 729 # An informal proof, assuming A is 'outside' and B is the D ancestor:
730 730 #
731 731 # If we pick B as the base, the merge involves:
732 732 # - changes from B to M (actual changeset payload)
733 733 # - changes from B to D (induced by rebase) as D is a rebased
734 734 # version of B
735 735 # Which exactly represent the rebase operation.
736 736 #
737 737 # If we pick A as the base, the merge involves:
738 738 # - changes from A to M (actual changeset payload)
739 739 # - changes from A to D (which include changes between unrelated A and B
740 740 # plus changes induced by rebase)
741 741 # Which does not represent anything sensible and creates a lot of
742 742 # conflicts. A is thus not the right choice - B is.
743 743 #
744 744 # Note: The base found in this 'proof' is only correct in the specified
745 745 # case. This base does not make sense if D is not a descendant of A or B
746 746 # or if the other parent is not 'outside' (especially not if the other
747 747 # parent has been rebased). The current implementation does not
748 748 # make it feasible to consider different cases separately. In these
749 749 # other cases we currently just leave it to the user to correctly
750 750 # resolve an impossible merge using a wrong ancestor.
751 751 for p in repo[rev].parents():
752 752 if state.get(p.rev()) == p1:
753 753 base = p.rev()
754 754 break
755 755 else: # fallback when base not found
756 756 base = None
757 757
758 758 # Raise because this function is called wrong (see issue 4106)
759 759 raise AssertionError('no base found to rebase on '
760 760 '(defineparents called wrong)')
761 761 return p1, p2, base
762 762
763 763 def isagitpatch(repo, patchname):
764 764 'Return true if the given patch is in git format'
765 765 mqpatch = os.path.join(repo.mq.path, patchname)
766 766 for line in patch.linereader(file(mqpatch, 'rb')):
767 767 if line.startswith('diff --git'):
768 768 return True
769 769 return False
770 770
771 771 def updatemq(repo, state, skipped, **opts):
772 772 'Update rebased mq patches - finalize and then import them'
773 773 mqrebase = {}
774 774 mq = repo.mq
775 775 original_series = mq.fullseries[:]
776 776 skippedpatches = set()
777 777
778 778 for p in mq.applied:
779 779 rev = repo[p.node].rev()
780 780 if rev in state:
781 781 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
782 782 (rev, p.name))
783 783 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
784 784 else:
785 785 # Applied but not rebased, not sure this should happen
786 786 skippedpatches.add(p.name)
787 787
788 788 if mqrebase:
789 789 mq.finish(repo, mqrebase.keys())
790 790
791 791 # We must start import from the newest revision
792 792 for rev in sorted(mqrebase, reverse=True):
793 793 if rev not in skipped:
794 794 name, isgit = mqrebase[rev]
795 795 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
796 796 (name, state[rev], repo[state[rev]]))
797 797 mq.qimport(repo, (), patchname=name, git=isgit,
798 798 rev=[str(state[rev])])
799 799 else:
800 800 # Rebased and skipped
801 801 skippedpatches.add(mqrebase[rev][0])
802 802
803 803 # Patches were either applied and rebased and imported in
804 804 # order, applied and removed or unapplied. Discard the removed
805 805 # ones while preserving the original series order and guards.
806 806 newseries = [s for s in original_series
807 807 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
808 808 mq.fullseries[:] = newseries
809 809 mq.seriesdirty = True
810 810 mq.savedirty()
811 811
812 812 def updatebookmarks(repo, targetnode, nstate, originalbookmarks, tr):
813 813 'Move bookmarks to their correct changesets, and delete divergent ones'
814 814 marks = repo._bookmarks
815 815 for k, v in originalbookmarks.iteritems():
816 816 if v in nstate:
817 817 # update the bookmarks for revs that have moved
818 818 marks[k] = nstate[v]
819 819 bookmarks.deletedivergent(repo, [targetnode], k)
820 820 marks.recordchange(tr)
821 821
822 822 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
823 823 external, activebookmark):
824 824 'Store the current status to allow recovery'
825 825 f = repo.vfs("rebasestate", "w")
826 826 f.write(repo[originalwd].hex() + '\n')
827 827 f.write(repo[target].hex() + '\n')
828 828 f.write(repo[external].hex() + '\n')
829 829 f.write('%d\n' % int(collapse))
830 830 f.write('%d\n' % int(keep))
831 831 f.write('%d\n' % int(keepbranches))
832 832 f.write('%s\n' % (activebookmark or ''))
833 833 for d, v in state.iteritems():
834 834 oldrev = repo[d].hex()
835 835 if v >= 0:
836 836 newrev = repo[v].hex()
837 837 elif v == revtodo:
838 838 # To maintain format compatibility, we have to use nullid.
839 839 # Please do remove this special case when upgrading the format.
840 840 newrev = hex(nullid)
841 841 else:
842 842 newrev = v
843 843 f.write("%s:%s\n" % (oldrev, newrev))
844 844 f.close()
845 845 repo.ui.debug('rebase status stored\n')
846 846
847 847 def clearstatus(repo):
848 848 'Remove the status files'
849 849 _clearrebasesetvisibiliy(repo)
850 850 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
851 851
852 852 def restorestatus(repo):
853 853 'Restore a previously stored status'
854 854 keepbranches = None
855 855 target = None
856 856 collapse = False
857 857 external = nullrev
858 858 activebookmark = None
859 859 state = {}
860 860
861 861 try:
862 862 f = repo.vfs("rebasestate")
863 863 for i, l in enumerate(f.read().splitlines()):
864 864 if i == 0:
865 865 originalwd = repo[l].rev()
866 866 elif i == 1:
867 867 target = repo[l].rev()
868 868 elif i == 2:
869 869 external = repo[l].rev()
870 870 elif i == 3:
871 871 collapse = bool(int(l))
872 872 elif i == 4:
873 873 keep = bool(int(l))
874 874 elif i == 5:
875 875 keepbranches = bool(int(l))
876 876 elif i == 6 and not (len(l) == 81 and ':' in l):
877 877 # line 6 is a recent addition, so for backwards compatibility
878 878 # check that the line doesn't look like the oldrev:newrev lines
879 879 activebookmark = l
880 880 else:
881 881 oldrev, newrev = l.split(':')
882 882 if newrev in (str(nullmerge), str(revignored),
883 883 str(revprecursor), str(revpruned)):
884 884 state[repo[oldrev].rev()] = int(newrev)
885 885 elif newrev == nullid:
886 886 state[repo[oldrev].rev()] = revtodo
887 887 # Legacy compat special case
888 888 else:
889 889 state[repo[oldrev].rev()] = repo[newrev].rev()
890 890
891 891 except IOError as err:
892 892 if err.errno != errno.ENOENT:
893 893 raise
894 894 raise error.Abort(_('no rebase in progress'))
895 895
896 896 if keepbranches is None:
897 897 raise error.Abort(_('.hg/rebasestate is incomplete'))
898 898
899 899 skipped = set()
900 900 # recompute the set of skipped revs
901 901 if not collapse:
902 902 seen = set([target])
903 903 for old, new in sorted(state.items()):
904 904 if new != revtodo and new in seen:
905 905 skipped.add(old)
906 906 seen.add(new)
907 907 repo.ui.debug('computed skipped revs: %s\n' %
908 908 (' '.join(str(r) for r in sorted(skipped)) or None))
909 909 repo.ui.debug('rebase status resumed\n')
910 910 _setrebasesetvisibility(repo, state.keys())
911 911 return (originalwd, target, state, skipped,
912 912 collapse, keep, keepbranches, external, activebookmark)
913 913
914 914 def needupdate(repo, state):
915 915 '''check whether we should `update --clean` away from a merge, or if
916 916 somehow the working dir got forcibly updated, e.g. by older hg'''
917 917 parents = [p.rev() for p in repo[None].parents()]
918 918
919 919 # Are we in a merge state at all?
920 920 if len(parents) < 2:
921 921 return False
922 922
923 923 # We should be standing on the first as-of-yet unrebased commit.
924 924 firstunrebased = min([old for old, new in state.iteritems()
925 925 if new == nullrev])
926 926 if firstunrebased in parents:
927 927 return True
928 928
929 929 return False
930 930
931 931 def abort(repo, originalwd, target, state, activebookmark=None):
932 932 '''Restore the repository to its original state. Additional args:
933 933
934 934 activebookmark: the name of the bookmark that should be active after the
935 935 restore'''
936 936
937 937 try:
938 938 # If the first commits in the rebased set get skipped during the rebase,
939 939 # their values within the state mapping will be the target rev id. The
940 940 # dstates list must not contain the target rev (issue4896)
941 941 dstates = [s for s in state.values() if s >= 0 and s != target]
942 942 immutable = [d for d in dstates if not repo[d].mutable()]
943 943 cleanup = True
944 944 if immutable:
945 945 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
946 946 % ', '.join(str(repo[r]) for r in immutable),
947 947 hint=_('see "hg help phases" for details'))
948 948 cleanup = False
949 949
950 950 descendants = set()
951 951 if dstates:
952 952 descendants = set(repo.changelog.descendants(dstates))
953 953 if descendants - set(dstates):
954 954 repo.ui.warn(_("warning: new changesets detected on target branch, "
955 955 "can't strip\n"))
956 956 cleanup = False
957 957
958 958 if cleanup:
959 959 # Update away from the rebase if necessary
960 960 if needupdate(repo, state):
961 merge.update(repo, originalwd, False, True, False)
961 merge.update(repo, originalwd, False, True)
962 962
963 963 # Strip from the first rebased revision
964 964 rebased = filter(lambda x: x >= 0 and x != target, state.values())
965 965 if rebased:
966 966 strippoints = [
967 967 c.node() for c in repo.set('roots(%ld)', rebased)]
968 968 # no backup of rebased cset versions needed
969 969 repair.strip(repo.ui, repo, strippoints)
970 970
971 971 if activebookmark and activebookmark in repo._bookmarks:
972 972 bookmarks.activate(repo, activebookmark)
973 973
974 974 finally:
975 975 clearstatus(repo)
976 976 repo.ui.warn(_('rebase aborted\n'))
977 977 return 0
978 978
979 979 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
980 980 '''Define which revisions are going to be rebased and where
981 981
982 982 repo: repo
983 983 dest: context
984 984 rebaseset: set of rev
985 985 '''
986 986 _setrebasesetvisibility(repo, rebaseset)
987 987
988 988 # This check isn't strictly necessary, since mq detects commits over an
989 989 # applied patch. But it prevents messing up the working directory when
990 990 # a partially completed rebase is blocked by mq.
991 991 if 'qtip' in repo.tags() and (dest.node() in
992 992 [s.node for s in repo.mq.applied]):
993 993 raise error.Abort(_('cannot rebase onto an applied mq patch'))
994 994
995 995 roots = list(repo.set('roots(%ld)', rebaseset))
996 996 if not roots:
997 997 raise error.Abort(_('no matching revisions'))
998 998 roots.sort()
999 999 state = {}
1000 1000 detachset = set()
1001 1001 for root in roots:
1002 1002 commonbase = root.ancestor(dest)
1003 1003 if commonbase == root:
1004 1004 raise error.Abort(_('source is ancestor of destination'))
1005 1005 if commonbase == dest:
1006 1006 samebranch = root.branch() == dest.branch()
1007 1007 if not collapse and samebranch and root in dest.children():
1008 1008 repo.ui.debug('source is a child of destination\n')
1009 1009 return None
1010 1010
1011 1011 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
1012 1012 state.update(dict.fromkeys(rebaseset, revtodo))
1013 1013 # Rebase tries to turn <dest> into a parent of <root> while
1014 1014 # preserving the number of parents of rebased changesets:
1015 1015 #
1016 1016 # - A changeset with a single parent will always be rebased as a
1017 1017 # changeset with a single parent.
1018 1018 #
1019 1019 # - A merge will be rebased as merge unless its parents are both
1020 1020 # ancestors of <dest> or are themselves in the rebased set and
1021 1021 # pruned while rebased.
1022 1022 #
1023 1023 # If one parent of <root> is an ancestor of <dest>, the rebased
1024 1024 # version of this parent will be <dest>. This is always true with
1025 1025 # --base option.
1026 1026 #
1027 1027 # Otherwise, we need to *replace* the original parents with
1028 1028 # <dest>. This "detaches" the rebased set from its former location
1029 1029 # and rebases it onto <dest>. Changes introduced by ancestors of
1030 1030 # <root> not common with <dest> (the detachset, marked as
1031 1031 # nullmerge) are "removed" from the rebased changesets.
1032 1032 #
1033 1033 # - If <root> has a single parent, set it to <dest>.
1034 1034 #
1035 1035 # - If <root> is a merge, we cannot decide which parent to
1036 1036 # replace, the rebase operation is not clearly defined.
1037 1037 #
1038 1038 # The table below sums up this behavior:
1039 1039 #
1040 1040 # +------------------+----------------------+-------------------------+
1041 1041 # | | one parent | merge |
1042 1042 # +------------------+----------------------+-------------------------+
1043 1043 # | parent in | new parent is <dest> | parents in ::<dest> are |
1044 1044 # | ::<dest> | | remapped to <dest> |
1045 1045 # +------------------+----------------------+-------------------------+
1046 1046 # | unrelated source | new parent is <dest> | ambiguous, abort |
1047 1047 # +------------------+----------------------+-------------------------+
1048 1048 #
1049 1049 # The actual abort is handled by `defineparents`
1050 1050 if len(root.parents()) <= 1:
1051 1051 # ancestors of <root> not ancestors of <dest>
1052 1052 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1053 1053 [root.rev()]))
1054 1054 for r in detachset:
1055 1055 if r not in state:
1056 1056 state[r] = nullmerge
1057 1057 if len(roots) > 1:
1058 1058 # If we have multiple roots, we may have "holes" in the rebase set.
1059 1059 # Rebase roots that descend from those "holes" should not be detached as
1060 1060 # other roots are. We use the special `revignored` to inform rebase that
1061 1061 # the revision should be ignored but that `defineparents` should search
1062 1062 # for a rebase destination that makes sense regarding the rebased topology.
1063 1063 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1064 1064 for ignored in set(rebasedomain) - set(rebaseset):
1065 1065 state[ignored] = revignored
1066 1066 for r in obsoletenotrebased:
1067 1067 if obsoletenotrebased[r] is None:
1068 1068 state[r] = revpruned
1069 1069 else:
1070 1070 state[r] = revprecursor
1071 1071 return repo['.'].rev(), dest.rev(), state
1072 1072
1073 1073 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1074 1074 """dispose of rebased revision at the end of the rebase
1075 1075
1076 1076 If `collapsedas` is not None, the rebase was a collapse whose result is the
1077 1077 `collapsedas` node."""
1078 1078 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1079 1079 markers = []
1080 1080 for rev, newrev in sorted(state.items()):
1081 1081 if newrev >= 0:
1082 1082 if rev in skipped:
1083 1083 succs = ()
1084 1084 elif collapsedas is not None:
1085 1085 succs = (repo[collapsedas],)
1086 1086 else:
1087 1087 succs = (repo[newrev],)
1088 1088 markers.append((repo[rev], succs))
1089 1089 if markers:
1090 1090 obsolete.createmarkers(repo, markers)
1091 1091 else:
1092 1092 rebased = [rev for rev in state if state[rev] > nullmerge]
1093 1093 if rebased:
1094 1094 stripped = []
1095 1095 for root in repo.set('roots(%ld)', rebased):
1096 1096 if set(repo.changelog.descendants([root.rev()])) - set(state):
1097 1097 ui.warn(_("warning: new changesets detected "
1098 1098 "on source branch, not stripping\n"))
1099 1099 else:
1100 1100 stripped.append(root.node())
1101 1101 if stripped:
1102 1102 # backup the old csets by default
1103 1103 repair.strip(ui, repo, stripped, "all")
1104 1104
1105 1105
1106 1106 def pullrebase(orig, ui, repo, *args, **opts):
1107 1107 'Call rebase after pull if the latter has been invoked with --rebase'
1108 1108 ret = None
1109 1109 if opts.get('rebase'):
1110 1110 wlock = lock = None
1111 1111 try:
1112 1112 wlock = repo.wlock()
1113 1113 lock = repo.lock()
1114 1114 if opts.get('update'):
1115 1115 del opts['update']
1116 1116 ui.debug('--update and --rebase are not compatible, ignoring '
1117 1117 'the update flag\n')
1118 1118
1119 1119 movemarkfrom = repo['.'].node()
1120 1120 revsprepull = len(repo)
1121 1121 origpostincoming = commands.postincoming
1122 1122 def _dummy(*args, **kwargs):
1123 1123 pass
1124 1124 commands.postincoming = _dummy
1125 1125 try:
1126 1126 ret = orig(ui, repo, *args, **opts)
1127 1127 finally:
1128 1128 commands.postincoming = origpostincoming
1129 1129 revspostpull = len(repo)
1130 1130 if revspostpull > revsprepull:
1131 1131 # --rev option from pull conflict with rebase own --rev
1132 1132 # dropping it
1133 1133 if 'rev' in opts:
1134 1134 del opts['rev']
1135 1135 # positional argument from pull conflicts with rebase's own
1136 1136 # --source.
1137 1137 if 'source' in opts:
1138 1138 del opts['source']
1139 1139 rebase(ui, repo, **opts)
1140 1140 branch = repo[None].branch()
1141 1141 dest = repo[branch].rev()
1142 1142 if dest != repo['.'].rev():
1143 1143 # there was nothing to rebase, so we force an update
1144 1144 hg.update(repo, dest)
1145 1145 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
1146 1146 ui.status(_("updating bookmark %s\n")
1147 1147 % repo._activebookmark)
1148 1148 finally:
1149 1149 release(lock, wlock)
1150 1150 else:
1151 1151 if opts.get('tool'):
1152 1152 raise error.Abort(_('--tool can only be used with --rebase'))
1153 1153 ret = orig(ui, repo, *args, **opts)
1154 1154
1155 1155 return ret
1156 1156
1157 1157 def _setrebasesetvisibility(repo, revs):
1158 1158 """store the currently rebased set on the repo object
1159 1159
1160 1160 This is used by another function to prevent rebased revisions from becoming
1161 1161 hidden (see issue4505)"""
1162 1162 repo = repo.unfiltered()
1163 1163 revs = set(revs)
1164 1164 repo._rebaseset = revs
1165 1165 # invalidate cache if visibility changes
1166 1166 hiddens = repo.filteredrevcache.get('visible', set())
1167 1167 if revs & hiddens:
1168 1168 repo.invalidatevolatilesets()
1169 1169
1170 1170 def _clearrebasesetvisibiliy(repo):
1171 1171 """remove rebaseset data from the repo"""
1172 1172 repo = repo.unfiltered()
1173 1173 if '_rebaseset' in vars(repo):
1174 1174 del repo._rebaseset
1175 1175
1176 1176 def _rebasedvisible(orig, repo):
1177 1177 """ensure rebased revs stay visible (see issue4505)"""
1178 1178 blockers = orig(repo)
1179 1179 blockers.update(getattr(repo, '_rebaseset', ()))
1180 1180 return blockers
1181 1181
1182 1182 def _computeobsoletenotrebased(repo, rebasesetrevs, dest):
1183 1183 """return a mapping obsolete => successor for all obsolete nodes to be
1184 1184 rebased that have a successors in the destination
1185 1185
1186 1186 obsolete => None entries in the mapping indicate nodes with no successor
1187 1187 obsoletenotrebased = {}
1188 1188
1189 1189 # Build a mapping successor => obsolete nodes for the obsolete
1190 1190 # nodes to be rebased
1191 1191 allsuccessors = {}
1192 1192 cl = repo.changelog
1193 1193 for r in rebasesetrevs:
1194 1194 n = repo[r]
1195 1195 if n.obsolete():
1196 1196 node = cl.node(r)
1197 1197 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1198 1198 try:
1199 1199 allsuccessors[cl.rev(s)] = cl.rev(node)
1200 1200 except LookupError:
1201 1201 pass
1202 1202
1203 1203 if allsuccessors:
1204 1204 # Look for successors of obsolete nodes to be rebased among
1205 1205 # the ancestors of dest
1206 1206 ancs = cl.ancestors([repo[dest].rev()],
1207 1207 stoprev=min(allsuccessors),
1208 1208 inclusive=True)
1209 1209 for s in allsuccessors:
1210 1210 if s in ancs:
1211 1211 obsoletenotrebased[allsuccessors[s]] = s
1212 1212 elif (s == allsuccessors[s] and
1213 1213 allsuccessors.values().count(s) == 1):
1214 1214 # plain prune
1215 1215 obsoletenotrebased[s] = None
1216 1216
1217 1217 return obsoletenotrebased
1218 1218
1219 1219 def summaryhook(ui, repo):
1220 1220 if not os.path.exists(repo.join('rebasestate')):
1221 1221 return
1222 1222 try:
1223 1223 state = restorestatus(repo)[2]
1224 1224 except error.RepoLookupError:
1225 1225 # i18n: column positioning for "hg summary"
1226 1226 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1227 1227 ui.write(msg)
1228 1228 return
1229 1229 numrebased = len([i for i in state.itervalues() if i >= 0])
1230 1230 # i18n: column positioning for "hg summary"
1231 1231 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1232 1232 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1233 1233 ui.label(_('%d remaining'), 'rebase.remaining') %
1234 1234 (len(state) - numrebased)))
1235 1235
1236 1236 def uisetup(ui):
1237 1237 # Replace pull with a decorator to provide the --rebase option
1238 1238 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1239 1239 entry[1].append(('', 'rebase', None,
1240 1240 _("rebase working directory to branch head")))
1241 1241 entry[1].append(('t', 'tool', '',
1242 1242 _("specify merge tool for rebase")))
1243 1243 cmdutil.summaryhooks.add('rebase', summaryhook)
1244 1244 cmdutil.unfinishedstates.append(
1245 1245 ['rebasestate', False, False, _('rebase in progress'),
1246 1246 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1247 1247 # ensure rebased rev are not hidden
1248 1248 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1249 1249 revset.symbols['_destrebase'] = _revsetdestrebase
@@ -1,721 +1,721 b''
1 1 # Patch transplanting extension for Mercurial
2 2 #
3 3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to transplant changesets from another branch
9 9
10 10 This extension allows you to transplant changes to another parent revision,
11 11 possibly in another repository. The transplant is done using 'diff' patches.
12 12
13 13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 14 map from a changeset hash to its hash in the source repository.
15 15 '''
16 16
17 17 from mercurial.i18n import _
18 18 import os, tempfile
19 19 from mercurial.node import short
20 20 from mercurial import bundlerepo, hg, merge, match
21 21 from mercurial import patch, revlog, scmutil, util, error, cmdutil
22 22 from mercurial import revset, templatekw, exchange
23 23 from mercurial import lock as lockmod
24 24
25 25 class TransplantError(error.Abort):
26 26 pass
27 27
28 28 cmdtable = {}
29 29 command = cmdutil.command(cmdtable)
30 30 # Note for extension authors: ONLY specify testedwith = 'internal' for
31 31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 32 # be specifying the version(s) of Mercurial they are tested with, or
33 33 # leave the attribute unspecified.
34 34 testedwith = 'internal'
35 35
36 36 class transplantentry(object):
37 37 def __init__(self, lnode, rnode):
38 38 self.lnode = lnode
39 39 self.rnode = rnode
40 40
41 41 class transplants(object):
42 42 def __init__(self, path=None, transplantfile=None, opener=None):
43 43 self.path = path
44 44 self.transplantfile = transplantfile
45 45 self.opener = opener
46 46
47 47 if not opener:
48 48 self.opener = scmutil.opener(self.path)
49 49 self.transplants = {}
50 50 self.dirty = False
51 51 self.read()
52 52
53 53 def read(self):
54 54 abspath = os.path.join(self.path, self.transplantfile)
55 55 if self.transplantfile and os.path.exists(abspath):
56 56 for line in self.opener.read(self.transplantfile).splitlines():
57 57 lnode, rnode = map(revlog.bin, line.split(':'))
58 58 list = self.transplants.setdefault(rnode, [])
59 59 list.append(transplantentry(lnode, rnode))
60 60
61 61 def write(self):
62 62 if self.dirty and self.transplantfile:
63 63 if not os.path.isdir(self.path):
64 64 os.mkdir(self.path)
65 65 fp = self.opener(self.transplantfile, 'w')
66 66 for list in self.transplants.itervalues():
67 67 for t in list:
68 68 l, r = map(revlog.hex, (t.lnode, t.rnode))
69 69 fp.write(l + ':' + r + '\n')
70 70 fp.close()
71 71 self.dirty = False
72 72
73 73 def get(self, rnode):
74 74 return self.transplants.get(rnode) or []
75 75
76 76 def set(self, lnode, rnode):
77 77 list = self.transplants.setdefault(rnode, [])
78 78 list.append(transplantentry(lnode, rnode))
79 79 self.dirty = True
80 80
81 81 def remove(self, transplant):
82 82 list = self.transplants.get(transplant.rnode)
83 83 if list:
84 84 del list[list.index(transplant)]
85 85 self.dirty = True
86 86
87 87 class transplanter(object):
88 88 def __init__(self, ui, repo, opts):
89 89 self.ui = ui
90 90 self.path = repo.join('transplant')
91 91 self.opener = scmutil.opener(self.path)
92 92 self.transplants = transplants(self.path, 'transplants',
93 93 opener=self.opener)
94 94 def getcommiteditor():
95 95 editform = cmdutil.mergeeditform(repo[None], 'transplant')
96 96 return cmdutil.getcommiteditor(editform=editform, **opts)
97 97 self.getcommiteditor = getcommiteditor
98 98
99 99 def applied(self, repo, node, parent):
100 100 '''returns True if a node is already an ancestor of parent,
101 101 is parent itself, or has already been transplanted'''
102 102 if hasnode(repo, parent):
103 103 parentrev = repo.changelog.rev(parent)
104 104 if hasnode(repo, node):
105 105 rev = repo.changelog.rev(node)
106 106 reachable = repo.changelog.ancestors([parentrev], rev,
107 107 inclusive=True)
108 108 if rev in reachable:
109 109 return True
110 110 for t in self.transplants.get(node):
111 111 # it might have been stripped
112 112 if not hasnode(repo, t.lnode):
113 113 self.transplants.remove(t)
114 114 return False
115 115 lnoderev = repo.changelog.rev(t.lnode)
116 116 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
117 117 inclusive=True):
118 118 return True
119 119 return False
120 120
121 121 def apply(self, repo, source, revmap, merges, opts=None):
122 122 '''apply the revisions in revmap one by one in revision order'''
123 123 if opts is None:
124 124 opts = {}
125 125 revs = sorted(revmap)
126 126 p1, p2 = repo.dirstate.parents()
127 127 pulls = []
128 128 diffopts = patch.difffeatureopts(self.ui, opts)
129 129 diffopts.git = True
130 130
131 131 lock = tr = None
132 132 try:
133 133 lock = repo.lock()
134 134 tr = repo.transaction('transplant')
135 135 for rev in revs:
136 136 node = revmap[rev]
137 137 revstr = '%s:%s' % (rev, short(node))
138 138
139 139 if self.applied(repo, node, p1):
140 140 self.ui.warn(_('skipping already applied revision %s\n') %
141 141 revstr)
142 142 continue
143 143
144 144 parents = source.changelog.parents(node)
145 145 if not (opts.get('filter') or opts.get('log')):
146 146 # If the changeset parent is the same as the
147 147 # wdir's parent, just pull it.
148 148 if parents[0] == p1:
149 149 pulls.append(node)
150 150 p1 = node
151 151 continue
152 152 if pulls:
153 153 if source != repo:
154 154 exchange.pull(repo, source.peer(), heads=pulls)
155 merge.update(repo, pulls[-1], False, False, None)
155 merge.update(repo, pulls[-1], False, False)
156 156 p1, p2 = repo.dirstate.parents()
157 157 pulls = []
158 158
159 159 domerge = False
160 160 if node in merges:
161 161 # pulling all the merge revs at once would mean we
162 162 # couldn't transplant after the latest even if
163 163 # transplants before them fail.
164 164 domerge = True
165 165 if not hasnode(repo, node):
166 166 exchange.pull(repo, source.peer(), heads=[node])
167 167
168 168 skipmerge = False
169 169 if parents[1] != revlog.nullid:
170 170 if not opts.get('parent'):
171 171 self.ui.note(_('skipping merge changeset %s:%s\n')
172 172 % (rev, short(node)))
173 173 skipmerge = True
174 174 else:
175 175 parent = source.lookup(opts['parent'])
176 176 if parent not in parents:
177 177 raise error.Abort(_('%s is not a parent of %s') %
178 178 (short(parent), short(node)))
179 179 else:
180 180 parent = parents[0]
181 181
182 182 if skipmerge:
183 183 patchfile = None
184 184 else:
185 185 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
186 186 fp = os.fdopen(fd, 'w')
187 187 gen = patch.diff(source, parent, node, opts=diffopts)
188 188 for chunk in gen:
189 189 fp.write(chunk)
190 190 fp.close()
191 191
192 192 del revmap[rev]
193 193 if patchfile or domerge:
194 194 try:
195 195 try:
196 196 n = self.applyone(repo, node,
197 197 source.changelog.read(node),
198 198 patchfile, merge=domerge,
199 199 log=opts.get('log'),
200 200 filter=opts.get('filter'))
201 201 except TransplantError:
202 202 # Do not rollback, it is up to the user to
203 203 # fix the merge or cancel everything
204 204 tr.close()
205 205 raise
206 206 if n and domerge:
207 207 self.ui.status(_('%s merged at %s\n') % (revstr,
208 208 short(n)))
209 209 elif n:
210 210 self.ui.status(_('%s transplanted to %s\n')
211 211 % (short(node),
212 212 short(n)))
213 213 finally:
214 214 if patchfile:
215 215 os.unlink(patchfile)
216 216 tr.close()
217 217 if pulls:
218 218 exchange.pull(repo, source.peer(), heads=pulls)
219 merge.update(repo, pulls[-1], False, False, None)
219 merge.update(repo, pulls[-1], False, False)
220 220 finally:
221 221 self.saveseries(revmap, merges)
222 222 self.transplants.write()
223 223 if tr:
224 224 tr.release()
225 225 if lock:
226 226 lock.release()
227 227
228 228 def filter(self, filter, node, changelog, patchfile):
229 229 '''arbitrarily rewrite changeset before applying it'''
230 230
231 231 self.ui.status(_('filtering %s\n') % patchfile)
232 232 user, date, msg = (changelog[1], changelog[2], changelog[4])
233 233 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
234 234 fp = os.fdopen(fd, 'w')
235 235 fp.write("# HG changeset patch\n")
236 236 fp.write("# User %s\n" % user)
237 237 fp.write("# Date %d %d\n" % date)
238 238 fp.write(msg + '\n')
239 239 fp.close()
240 240
241 241 try:
242 242 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
243 243 util.shellquote(patchfile)),
244 244 environ={'HGUSER': changelog[1],
245 245 'HGREVISION': revlog.hex(node),
246 246 },
247 247 onerr=error.Abort, errprefix=_('filter failed'))
248 248 user, date, msg = self.parselog(file(headerfile))[1:4]
249 249 finally:
250 250 os.unlink(headerfile)
251 251
252 252 return (user, date, msg)
253 253
254 254 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
255 255 filter=None):
256 256 '''apply the patch in patchfile to the repository as a transplant'''
257 257 (manifest, user, (time, timezone), files, message) = cl[:5]
258 258 date = "%d %d" % (time, timezone)
259 259 extra = {'transplant_source': node}
260 260 if filter:
261 261 (user, date, message) = self.filter(filter, node, cl, patchfile)
262 262
263 263 if log:
264 264 # we don't translate messages inserted into commits
265 265 message += '\n(transplanted from %s)' % revlog.hex(node)
266 266
267 267 self.ui.status(_('applying %s\n') % short(node))
268 268 self.ui.note('%s %s\n%s\n' % (user, date, message))
269 269
270 270 if not patchfile and not merge:
271 271 raise error.Abort(_('can only omit patchfile if merging'))
272 272 if patchfile:
273 273 try:
274 274 files = set()
275 275 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
276 276 files = list(files)
277 277 except Exception as inst:
278 278 seriespath = os.path.join(self.path, 'series')
279 279 if os.path.exists(seriespath):
280 280 os.unlink(seriespath)
281 281 p1 = repo.dirstate.p1()
282 282 p2 = node
283 283 self.log(user, date, message, p1, p2, merge=merge)
284 284 self.ui.write(str(inst) + '\n')
285 285 raise TransplantError(_('fix up the merge and run '
286 286 'hg transplant --continue'))
287 287 else:
288 288 files = None
289 289 if merge:
290 290 p1, p2 = repo.dirstate.parents()
291 291 repo.setparents(p1, node)
292 292 m = match.always(repo.root, '')
293 293 else:
294 294 m = match.exact(repo.root, '', files)
295 295
296 296 n = repo.commit(message, user, date, extra=extra, match=m,
297 297 editor=self.getcommiteditor())
298 298 if not n:
299 299 self.ui.warn(_('skipping emptied changeset %s\n') % short(node))
300 300 return None
301 301 if not merge:
302 302 self.transplants.set(n, node)
303 303
304 304 return n
305 305
306 306 def resume(self, repo, source, opts):
307 307 '''recover last transaction and apply remaining changesets'''
308 308 if os.path.exists(os.path.join(self.path, 'journal')):
309 309 n, node = self.recover(repo, source, opts)
310 310 if n:
311 311 self.ui.status(_('%s transplanted as %s\n') % (short(node),
312 312 short(n)))
313 313 else:
314 314 self.ui.status(_('%s skipped due to empty diff\n')
315 315 % (short(node),))
316 316 seriespath = os.path.join(self.path, 'series')
317 317 if not os.path.exists(seriespath):
318 318 self.transplants.write()
319 319 return
320 320 nodes, merges = self.readseries()
321 321 revmap = {}
322 322 for n in nodes:
323 323 revmap[source.changelog.rev(n)] = n
324 324 os.unlink(seriespath)
325 325
326 326 self.apply(repo, source, revmap, merges, opts)
327 327
328 328 def recover(self, repo, source, opts):
329 329 '''commit working directory using journal metadata'''
330 330 node, user, date, message, parents = self.readlog()
331 331 merge = False
332 332
333 333 if not user or not date or not message or not parents[0]:
334 334 raise error.Abort(_('transplant log file is corrupt'))
335 335
336 336 parent = parents[0]
337 337 if len(parents) > 1:
338 338 if opts.get('parent'):
339 339 parent = source.lookup(opts['parent'])
340 340 if parent not in parents:
341 341 raise error.Abort(_('%s is not a parent of %s') %
342 342 (short(parent), short(node)))
343 343 else:
344 344 merge = True
345 345
346 346 extra = {'transplant_source': node}
347 347 try:
348 348 p1, p2 = repo.dirstate.parents()
349 349 if p1 != parent:
350 350 raise error.Abort(_('working directory not at transplant '
351 351 'parent %s') % revlog.hex(parent))
352 352 if merge:
353 353 repo.setparents(p1, parents[1])
354 354 modified, added, removed, deleted = repo.status()[:4]
355 355 if merge or modified or added or removed or deleted:
356 356 n = repo.commit(message, user, date, extra=extra,
357 357 editor=self.getcommiteditor())
358 358 if not n:
359 359 raise error.Abort(_('commit failed'))
360 360 if not merge:
361 361 self.transplants.set(n, node)
362 362 else:
363 363 n = None
364 364 self.unlog()
365 365
366 366 return n, node
367 367 finally:
368 368 # TODO: get rid of this meaningless try/finally enclosing.
369 369 # this is kept only to reduce changes in a patch.
370 370 pass
371 371
372 372 def readseries(self):
373 373 nodes = []
374 374 merges = []
375 375 cur = nodes
376 376 for line in self.opener.read('series').splitlines():
377 377 if line.startswith('# Merges'):
378 378 cur = merges
379 379 continue
380 380 cur.append(revlog.bin(line))
381 381
382 382 return (nodes, merges)
383 383
384 384 def saveseries(self, revmap, merges):
385 385 if not revmap:
386 386 return
387 387
388 388 if not os.path.isdir(self.path):
389 389 os.mkdir(self.path)
390 390 series = self.opener('series', 'w')
391 391 for rev in sorted(revmap):
392 392 series.write(revlog.hex(revmap[rev]) + '\n')
393 393 if merges:
394 394 series.write('# Merges\n')
395 395 for m in merges:
396 396 series.write(revlog.hex(m) + '\n')
397 397 series.close()
398 398
399 399 def parselog(self, fp):
400 400 parents = []
401 401 message = []
402 402 node = revlog.nullid
403 403 inmsg = False
404 404 user = None
405 405 date = None
406 406 for line in fp.read().splitlines():
407 407 if inmsg:
408 408 message.append(line)
409 409 elif line.startswith('# User '):
410 410 user = line[7:]
411 411 elif line.startswith('# Date '):
412 412 date = line[7:]
413 413 elif line.startswith('# Node ID '):
414 414 node = revlog.bin(line[10:])
415 415 elif line.startswith('# Parent '):
416 416 parents.append(revlog.bin(line[9:]))
417 417 elif not line.startswith('# '):
418 418 inmsg = True
419 419 message.append(line)
420 420 if None in (user, date):
421 421 raise error.Abort(_("filter corrupted changeset (no user or date)"))
422 422 return (node, user, date, '\n'.join(message), parents)
423 423
424 424 def log(self, user, date, message, p1, p2, merge=False):
425 425 '''journal changelog metadata for later recover'''
426 426
427 427 if not os.path.isdir(self.path):
428 428 os.mkdir(self.path)
429 429 fp = self.opener('journal', 'w')
430 430 fp.write('# User %s\n' % user)
431 431 fp.write('# Date %s\n' % date)
432 432 fp.write('# Node ID %s\n' % revlog.hex(p2))
433 433 fp.write('# Parent ' + revlog.hex(p1) + '\n')
434 434 if merge:
435 435 fp.write('# Parent ' + revlog.hex(p2) + '\n')
436 436 fp.write(message.rstrip() + '\n')
437 437 fp.close()
438 438
439 439 def readlog(self):
440 440 return self.parselog(self.opener('journal'))
441 441
442 442 def unlog(self):
443 443 '''remove changelog journal'''
444 444 absdst = os.path.join(self.path, 'journal')
445 445 if os.path.exists(absdst):
446 446 os.unlink(absdst)
447 447
448 448 def transplantfilter(self, repo, source, root):
449 449 def matchfn(node):
450 450 if self.applied(repo, node, root):
451 451 return False
452 452 if source.changelog.parents(node)[1] != revlog.nullid:
453 453 return False
454 454 extra = source.changelog.read(node)[5]
455 455 cnode = extra.get('transplant_source')
456 456 if cnode and self.applied(repo, cnode, root):
457 457 return False
458 458 return True
459 459
460 460 return matchfn
461 461
462 462 def hasnode(repo, node):
463 463 try:
464 464 return repo.changelog.rev(node) is not None
465 465 except error.RevlogError:
466 466 return False
467 467
468 468 def browserevs(ui, repo, nodes, opts):
469 469 '''interactively transplant changesets'''
470 470 displayer = cmdutil.show_changeset(ui, repo, opts)
471 471 transplants = []
472 472 merges = []
473 473 prompt = _('apply changeset? [ynmpcq?]:'
474 474 '$$ &yes, transplant this changeset'
475 475 '$$ &no, skip this changeset'
476 476 '$$ &merge at this changeset'
477 477 '$$ show &patch'
478 478 '$$ &commit selected changesets'
479 479 '$$ &quit and cancel transplant'
480 480 '$$ &? (show this help)')
481 481 for node in nodes:
482 482 displayer.show(repo[node])
483 483 action = None
484 484 while not action:
485 485 action = 'ynmpcq?'[ui.promptchoice(prompt)]
486 486 if action == '?':
487 487 for c, t in ui.extractchoices(prompt)[1]:
488 488 ui.write('%s: %s\n' % (c, t))
489 489 action = None
490 490 elif action == 'p':
491 491 parent = repo.changelog.parents(node)[0]
492 492 for chunk in patch.diff(repo, parent, node):
493 493 ui.write(chunk)
494 494 action = None
495 495 if action == 'y':
496 496 transplants.append(node)
497 497 elif action == 'm':
498 498 merges.append(node)
499 499 elif action == 'c':
500 500 break
501 501 elif action == 'q':
502 502 transplants = ()
503 503 merges = ()
504 504 break
505 505 displayer.close()
506 506 return (transplants, merges)
507 507
508 508 @command('transplant',
509 509 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
510 510 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
511 511 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
512 512 ('p', 'prune', [], _('skip over REV'), _('REV')),
513 513 ('m', 'merge', [], _('merge at REV'), _('REV')),
514 514 ('', 'parent', '',
515 515 _('parent to choose when transplanting merge'), _('REV')),
516 516 ('e', 'edit', False, _('invoke editor on commit messages')),
517 517 ('', 'log', None, _('append transplant info to log message')),
518 518 ('c', 'continue', None, _('continue last transplant session '
519 519 'after fixing conflicts')),
520 520 ('', 'filter', '',
521 521 _('filter changesets through command'), _('CMD'))],
522 522 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
523 523 '[-m REV] [REV]...'))
524 524 def transplant(ui, repo, *revs, **opts):
525 525 '''transplant changesets from another branch
526 526
527 527 Selected changesets will be applied on top of the current working
528 528 directory with the log of the original changeset. The changesets
529 529 are copied and will thus appear twice in the history with different
530 530 identities.
531 531
532 532 Consider using the graft command if everything is inside the same
533 533 repository - it will use merges and will usually give a better result.
534 534 Use the rebase extension if the changesets are unpublished and you want
535 535 to move them instead of copying them.
536 536
537 537 If --log is specified, log messages will have a comment appended
538 538 of the form::
539 539
540 540 (transplanted from CHANGESETHASH)
541 541
542 542 You can rewrite the changelog message with the --filter option.
543 543 Its argument will be invoked with the current changelog message as
544 544 $1 and the patch as $2.
545 545
546 546 --source/-s specifies another repository to use for selecting changesets,
547 547 just as if it temporarily had been pulled.
548 548 If --branch/-b is specified, these revisions will be used as
549 549 heads when deciding which changesets to transplant, just as if only
550 550 these revisions had been pulled.
551 551 If --all/-a is specified, all the revisions up to the heads specified
552 552 with --branch will be transplanted.
553 553
554 554 Example:
555 555
556 556 - transplant all changes up to REV on top of your current revision::
557 557
558 558 hg transplant --branch REV --all
559 559
560 560 You can optionally mark selected transplanted changesets as merge
561 561 changesets. You will not be prompted to transplant any ancestors
562 562 of a merged transplant, and you can merge descendants of them
563 563 normally instead of transplanting them.
564 564
565 565 Merge changesets may be transplanted directly by specifying the
566 566 proper parent changeset by calling :hg:`transplant --parent`.
567 567
568 568 If no merges or revisions are provided, :hg:`transplant` will
569 569 start an interactive changeset browser.
570 570
571 571 If a changeset application fails, you can fix the merge by hand
572 572 and then resume where you left off by calling :hg:`transplant
573 573 --continue/-c`.
574 574 '''
575 575 wlock = None
576 576 try:
577 577 wlock = repo.wlock()
578 578 return _dotransplant(ui, repo, *revs, **opts)
579 579 finally:
580 580 lockmod.release(wlock)
581 581
582 582 def _dotransplant(ui, repo, *revs, **opts):
583 583 def incwalk(repo, csets, match=util.always):
584 584 for node in csets:
585 585 if match(node):
586 586 yield node
587 587
588 588 def transplantwalk(repo, dest, heads, match=util.always):
589 589 '''Yield all nodes that are ancestors of a head but not ancestors
590 590 of dest.
591 591 If no heads are specified, the heads of repo will be used.'''
592 592 if not heads:
593 593 heads = repo.heads()
594 594 ancestors = []
595 595 ctx = repo[dest]
596 596 for head in heads:
597 597 ancestors.append(ctx.ancestor(repo[head]).node())
598 598 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
599 599 if match(node):
600 600 yield node
601 601
602 602 def checkopts(opts, revs):
603 603 if opts.get('continue'):
604 604 if opts.get('branch') or opts.get('all') or opts.get('merge'):
605 605 raise error.Abort(_('--continue is incompatible with '
606 606 '--branch, --all and --merge'))
607 607 return
608 608 if not (opts.get('source') or revs or
609 609 opts.get('merge') or opts.get('branch')):
610 610 raise error.Abort(_('no source URL, branch revision, or revision '
611 611 'list provided'))
612 612 if opts.get('all'):
613 613 if not opts.get('branch'):
614 614 raise error.Abort(_('--all requires a branch revision'))
615 615 if revs:
616 616 raise error.Abort(_('--all is incompatible with a '
617 617 'revision list'))
618 618
619 619 checkopts(opts, revs)
620 620
621 621 if not opts.get('log'):
622 622 # deprecated config: transplant.log
623 623 opts['log'] = ui.config('transplant', 'log')
624 624 if not opts.get('filter'):
625 625 # deprecated config: transplant.filter
626 626 opts['filter'] = ui.config('transplant', 'filter')
627 627
628 628 tp = transplanter(ui, repo, opts)
629 629
630 630 cmdutil.checkunfinished(repo)
631 631 p1, p2 = repo.dirstate.parents()
632 632 if len(repo) > 0 and p1 == revlog.nullid:
633 633 raise error.Abort(_('no revision checked out'))
634 634 if not opts.get('continue'):
635 635 if p2 != revlog.nullid:
636 636 raise error.Abort(_('outstanding uncommitted merges'))
637 637 m, a, r, d = repo.status()[:4]
638 638 if m or a or r or d:
639 639 raise error.Abort(_('outstanding local changes'))
640 640
641 641 sourcerepo = opts.get('source')
642 642 if sourcerepo:
643 643 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
644 644 heads = map(peer.lookup, opts.get('branch', ()))
645 645 target = set(heads)
646 646 for r in revs:
647 647 try:
648 648 target.add(peer.lookup(r))
649 649 except error.RepoError:
650 650 pass
651 651 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
652 652 onlyheads=sorted(target), force=True)
653 653 else:
654 654 source = repo
655 655 heads = map(source.lookup, opts.get('branch', ()))
656 656 cleanupfn = None
657 657
658 658 try:
659 659 if opts.get('continue'):
660 660 tp.resume(repo, source, opts)
661 661 return
662 662
663 663 tf = tp.transplantfilter(repo, source, p1)
664 664 if opts.get('prune'):
665 665 prune = set(source.lookup(r)
666 666 for r in scmutil.revrange(source, opts.get('prune')))
667 667 matchfn = lambda x: tf(x) and x not in prune
668 668 else:
669 669 matchfn = tf
670 670 merges = map(source.lookup, opts.get('merge', ()))
671 671 revmap = {}
672 672 if revs:
673 673 for r in scmutil.revrange(source, revs):
674 674 revmap[int(r)] = source.lookup(r)
675 675 elif opts.get('all') or not merges:
676 676 if source != repo:
677 677 alltransplants = incwalk(source, csets, match=matchfn)
678 678 else:
679 679 alltransplants = transplantwalk(source, p1, heads,
680 680 match=matchfn)
681 681 if opts.get('all'):
682 682 revs = alltransplants
683 683 else:
684 684 revs, newmerges = browserevs(ui, source, alltransplants, opts)
685 685 merges.extend(newmerges)
686 686 for r in revs:
687 687 revmap[source.changelog.rev(r)] = r
688 688 for r in merges:
689 689 revmap[source.changelog.rev(r)] = r
690 690
691 691 tp.apply(repo, source, revmap, merges, opts)
692 692 finally:
693 693 if cleanupfn:
694 694 cleanupfn()
695 695
696 696 def revsettransplanted(repo, subset, x):
697 697 """``transplanted([set])``
698 698 Transplanted changesets in set, or all transplanted changesets.
699 699 """
700 700 if x:
701 701 s = revset.getset(repo, subset, x)
702 702 else:
703 703 s = subset
704 704 return revset.baseset([r for r in s if
705 705 repo[r].extra().get('transplant_source')])
706 706
707 707 def kwtransplanted(repo, ctx, **args):
708 708 """:transplanted: String. The node identifier of the transplanted
709 709 changeset if any."""
710 710 n = ctx.extra().get('transplant_source')
711 711 return n and revlog.hex(n) or ''
712 712
713 713 def extsetup(ui):
714 714 revset.symbols['transplanted'] = revsettransplanted
715 715 templatekw.keywords['transplanted'] = kwtransplanted
716 716 cmdutil.unfinishedstates.append(
717 717 ['series', True, False, _('transplant in progress'),
718 718 _("use 'hg transplant --continue' or 'hg update' to abort")])
719 719
720 720 # tell hggettext to extract docstrings from these functions:
721 721 i18nfunctions = [revsettransplanted, kwtransplanted]
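The changeset above replaces `merge.update`'s partial-function argument (e.g. `choices = lambda key: key in backups`) with a matcher object built via `scmutil.matchfiles`. A minimal, hypothetical sketch of that pattern follows; none of these names are Mercurial's real API, they only illustrate swapping an ad-hoc predicate for a uniform matcher interface:

```python
class ExactMatcher:
    """Illustrative matcher: accepts exactly the given file names."""
    def __init__(self, files):
        self._files = set(files)

    def __call__(self, path):
        # A matcher is just a callable predicate with a stable interface,
        # so every caller of update() passes the same kind of object.
        return path in self._files

def update(paths, matcher=None):
    """Toy stand-in for merge.update: touch only paths the matcher accepts."""
    if matcher is None:
        return list(paths)  # no matcher means "everything"
    return [p for p in paths if matcher(p)]

# Usage: restrict the update to backed-up files, as dorecord does
# with scmutil.matchfiles(repo, backups.keys()).
backups = {'a.txt': '/tmp/x', 'b.txt': '/tmp/y'}
m = ExactMatcher(backups.keys())
print(update(['a.txt', 'b.txt', 'c.txt'], matcher=m))
```

The advantage mirrored here is that callers no longer each invent their own partial function; they hand `update` one well-known object type.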
@@ -1,3422 +1,3422 @@
1 1 # cmdutil.py - help for command processing in mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from node import hex, bin, nullid, nullrev, short
9 9 from i18n import _
10 10 import os, sys, errno, re, tempfile, cStringIO, shutil
11 11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
12 12 import match as matchmod
13 13 import repair, graphmod, revset, phases, obsolete, pathutil
14 14 import changelog
15 15 import bookmarks
16 16 import encoding
17 17 import formatter
18 18 import crecord as crecordmod
19 19 import lock as lockmod
20 20
21 21 def ishunk(x):
22 22 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
23 23 return isinstance(x, hunkclasses)
24 24
25 25 def newandmodified(chunks, originalchunks):
26 26 newlyaddedandmodifiedfiles = set()
27 27 for chunk in chunks:
28 28 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
29 29 originalchunks:
30 30 newlyaddedandmodifiedfiles.add(chunk.header.filename())
31 31 return newlyaddedandmodifiedfiles
32 32
33 33 def parsealiases(cmd):
34 34 return cmd.lstrip("^").split("|")
35 35
36 36 def setupwrapcolorwrite(ui):
37 37 # wrap ui.write so diff output can be labeled/colorized
38 38 def wrapwrite(orig, *args, **kw):
39 39 label = kw.pop('label', '')
40 40 for chunk, l in patch.difflabel(lambda: args):
41 41 orig(chunk, label=label + l)
42 42
43 43 oldwrite = ui.write
44 44 def wrap(*args, **kwargs):
45 45 return wrapwrite(oldwrite, *args, **kwargs)
46 46 setattr(ui, 'write', wrap)
47 47 return oldwrite
48 48
49 49 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
50 50 if usecurses:
51 51 if testfile:
52 52 recordfn = crecordmod.testdecorator(testfile,
53 53 crecordmod.testchunkselector)
54 54 else:
55 55 recordfn = crecordmod.chunkselector
56 56
57 57 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
58 58
59 59 else:
60 60 return patch.filterpatch(ui, originalhunks, operation)
61 61
62 62 def recordfilter(ui, originalhunks, operation=None):
63 63 """ Prompts the user to filter the originalhunks and return a list of
64 64 selected hunks.
65 65 *operation* is used for ui purposes to indicate to the user
66 66 what kind of filtering they are doing: reverting, committing, shelving, etc.
67 67 *operation* has to be a translated string.
68 68 """
69 69 usecurses = ui.configbool('experimental', 'crecord', False)
70 70 testfile = ui.config('experimental', 'crecordtest', None)
71 71 oldwrite = setupwrapcolorwrite(ui)
72 72 try:
73 73 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
74 74 testfile, operation)
75 75 finally:
76 76 ui.write = oldwrite
77 77 return newchunks, newopts
78 78
79 79 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
80 80 filterfn, *pats, **opts):
81 81 import merge as mergemod
82 82
83 83 if not ui.interactive():
84 84 if cmdsuggest:
85 85 msg = _('running non-interactively, use %s instead') % cmdsuggest
86 86 else:
87 87 msg = _('running non-interactively')
88 88 raise error.Abort(msg)
89 89
90 90 # make sure username is set before going interactive
91 91 if not opts.get('user'):
92 92 ui.username() # raise exception, username not provided
93 93
94 94 def recordfunc(ui, repo, message, match, opts):
95 95 """This is generic record driver.
96 96
97 97 Its job is to interactively filter local changes, and
98 98 accordingly prepare working directory into a state in which the
99 99 job can be delegated to a non-interactive commit command such as
100 100 'commit' or 'qrefresh'.
101 101
102 102 After the actual job is done by non-interactive command, the
103 103 working directory is restored to its original state.
104 104
105 105 In the end we'll record interesting changes, and everything else
106 106 will be left in place, so the user can continue working.
107 107 """
108 108
109 109 checkunfinished(repo, commit=True)
110 110 merge = len(repo[None].parents()) > 1
111 111 if merge:
112 112 raise error.Abort(_('cannot partially commit a merge '
113 113 '(use "hg commit" instead)'))
114 114
115 115 status = repo.status(match=match)
116 116 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
117 117 diffopts.nodates = True
118 118 diffopts.git = True
119 119 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
120 120 originalchunks = patch.parsepatch(originaldiff)
121 121
122 122 # 1. filter the patch, so we have the intending-to-apply subset of it
123 123 try:
124 124 chunks, newopts = filterfn(ui, originalchunks)
125 125 except patch.PatchError as err:
126 126 raise error.Abort(_('error parsing patch: %s') % err)
127 127 opts.update(newopts)
128 128
129 129 # We need to keep a backup of files that have been newly added and
130 130 # modified during the recording process because there is a previous
131 131 # version without the edit in the workdir
132 132 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
133 133 contenders = set()
134 134 for h in chunks:
135 135 try:
136 136 contenders.update(set(h.files()))
137 137 except AttributeError:
138 138 pass
139 139
140 140 changed = status.modified + status.added + status.removed
141 141 newfiles = [f for f in changed if f in contenders]
142 142 if not newfiles:
143 143 ui.status(_('no changes to record\n'))
144 144 return 0
145 145
146 146 modified = set(status.modified)
147 147
148 148 # 2. backup changed files, so we can restore them in the end
149 149
150 150 if backupall:
151 151 tobackup = changed
152 152 else:
153 153 tobackup = [f for f in newfiles if f in modified or f in \
154 154 newlyaddedandmodifiedfiles]
155 155 backups = {}
156 156 if tobackup:
157 157 backupdir = repo.join('record-backups')
158 158 try:
159 159 os.mkdir(backupdir)
160 160 except OSError as err:
161 161 if err.errno != errno.EEXIST:
162 162 raise
163 163 try:
164 164 # backup continues
165 165 for f in tobackup:
166 166 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
167 167 dir=backupdir)
168 168 os.close(fd)
169 169 ui.debug('backup %r as %r\n' % (f, tmpname))
170 170 util.copyfile(repo.wjoin(f), tmpname)
171 171 shutil.copystat(repo.wjoin(f), tmpname)
172 172 backups[f] = tmpname
173 173
174 174 fp = cStringIO.StringIO()
175 175 for c in chunks:
176 176 fname = c.filename()
177 177 if fname in backups:
178 178 c.write(fp)
179 179 dopatch = fp.tell()
180 180 fp.seek(0)
181 181
182 182 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
183 183 # 3a. apply filtered patch to clean repo (clean)
184 184 if backups:
185 185 # Equivalent to hg.revert
186 choices = lambda key: key in backups
186 m = scmutil.matchfiles(repo, backups.keys())
187 187 mergemod.update(repo, repo.dirstate.p1(),
188 False, True, choices)
188 False, True, matcher=m)
189 189
190 190 # 3b. (apply)
191 191 if dopatch:
192 192 try:
193 193 ui.debug('applying patch\n')
194 194 ui.debug(fp.getvalue())
195 195 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
196 196 except patch.PatchError as err:
197 197 raise error.Abort(str(err))
198 198 del fp
199 199
200 200 # 4. We prepared working directory according to filtered
201 201 # patch. Now is the time to delegate the job to
202 202 # commit/qrefresh or the like!
203 203
204 204 # Make all of the pathnames absolute.
205 205 newfiles = [repo.wjoin(nf) for nf in newfiles]
206 206 return commitfunc(ui, repo, *newfiles, **opts)
207 207 finally:
208 208 # 5. finally restore backed-up files
209 209 try:
210 210 dirstate = repo.dirstate
211 211 for realname, tmpname in backups.iteritems():
212 212 ui.debug('restoring %r to %r\n' % (tmpname, realname))
213 213
214 214 if dirstate[realname] == 'n':
215 215 # without normallookup, restoring timestamp
216 216 # may cause partially committed files
217 217 # to be treated as unmodified
218 218 dirstate.normallookup(realname)
219 219
220 220 util.copyfile(tmpname, repo.wjoin(realname))
221 221 # Our calls to copystat() here and above are a
222 222 # hack to trick any editors that have f open into
223 223 # thinking that we haven't modified it.
224 224 #
225 225 # Also note that this is racy, as an editor could
226 226 # notice the file's mtime before we've finished
227 227 # writing it.
228 228 shutil.copystat(tmpname, repo.wjoin(realname))
229 229 os.unlink(tmpname)
230 230 if tobackup:
231 231 os.rmdir(backupdir)
232 232 except OSError:
233 233 pass
234 234
235 235 def recordinwlock(ui, repo, message, match, opts):
236 236 wlock = repo.wlock()
237 237 try:
238 238 return recordfunc(ui, repo, message, match, opts)
239 239 finally:
240 240 wlock.release()
241 241
242 242 return commit(ui, repo, recordinwlock, pats, opts)
243 243
244 244 def findpossible(cmd, table, strict=False):
245 245 """
246 246 Return cmd -> (aliases, command table entry)
247 247 for each matching command.
248 248 Return debug commands (or their aliases) only if no normal command matches.
249 249 """
250 250 choice = {}
251 251 debugchoice = {}
252 252
253 253 if cmd in table:
254 254 # short-circuit exact matches, "log" alias beats "^log|history"
255 255 keys = [cmd]
256 256 else:
257 257 keys = table.keys()
258 258
259 259 allcmds = []
260 260 for e in keys:
261 261 aliases = parsealiases(e)
262 262 allcmds.extend(aliases)
263 263 found = None
264 264 if cmd in aliases:
265 265 found = cmd
266 266 elif not strict:
267 267 for a in aliases:
268 268 if a.startswith(cmd):
269 269 found = a
270 270 break
271 271 if found is not None:
272 272 if aliases[0].startswith("debug") or found.startswith("debug"):
273 273 debugchoice[found] = (aliases, table[e])
274 274 else:
275 275 choice[found] = (aliases, table[e])
276 276
277 277 if not choice and debugchoice:
278 278 choice = debugchoice
279 279
280 280 return choice, allcmds
281 281
282 282 def findcmd(cmd, table, strict=True):
283 283 """Return (aliases, command table entry) for command string."""
284 284 choice, allcmds = findpossible(cmd, table, strict)
285 285
286 286 if cmd in choice:
287 287 return choice[cmd]
288 288
289 289 if len(choice) > 1:
290 290 clist = choice.keys()
291 291 clist.sort()
292 292 raise error.AmbiguousCommand(cmd, clist)
293 293
294 294 if choice:
295 295 return choice.values()[0]
296 296
297 297 raise error.UnknownCommand(cmd, allcmds)
298 298
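# Illustrative note (not part of the original source): given a command table
# with entries for 'commit' and 'config', findcmd('co', table, strict=False)
# would raise AmbiguousCommand since both names match the prefix, while
# findcmd('com', table, strict=False) resolves to the 'commit' entry.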
299 299 def findrepo(p):
300 300 while not os.path.isdir(os.path.join(p, ".hg")):
301 301 oldp, p = p, os.path.dirname(p)
302 302 if p == oldp:
303 303 return None
304 304
305 305 return p
306 306
307 307 def bailifchanged(repo, merge=True):
308 308 if merge and repo.dirstate.p2() != nullid:
309 309 raise error.Abort(_('outstanding uncommitted merge'))
310 310 modified, added, removed, deleted = repo.status()[:4]
311 311 if modified or added or removed or deleted:
312 312 raise error.Abort(_('uncommitted changes'))
313 313 ctx = repo[None]
314 314 for s in sorted(ctx.substate):
315 315 ctx.sub(s).bailifchanged()
316 316
317 317 def logmessage(ui, opts):
318 318 """get the log message according to the -m and -l options"""
319 319 message = opts.get('message')
320 320 logfile = opts.get('logfile')
321 321
322 322 if message and logfile:
323 323 raise error.Abort(_('options --message and --logfile are mutually '
324 324 'exclusive'))
325 325 if not message and logfile:
326 326 try:
327 327 if logfile == '-':
328 328 message = ui.fin.read()
329 329 else:
330 330 message = '\n'.join(util.readfile(logfile).splitlines())
331 331 except IOError as inst:
332 332 raise error.Abort(_("can't read commit message '%s': %s") %
333 333 (logfile, inst.strerror))
334 334 return message
335 335
336 336 def mergeeditform(ctxorbool, baseformname):
337 337 """return appropriate editform name (referencing a committemplate)
338 338
339 339 'ctxorbool' is either a ctx to be committed or a bool indicating whether
340 340 a merge is being committed.
341 341
342 342 This returns baseformname with '.merge' appended if it is a merge,
343 343 otherwise '.normal' is appended.
344 344 """
345 345 if isinstance(ctxorbool, bool):
346 346 if ctxorbool:
347 347 return baseformname + ".merge"
348 348 elif 1 < len(ctxorbool.parents()):
349 349 return baseformname + ".merge"
350 350
351 351 return baseformname + ".normal"
352 352
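# For illustration (this comment is not part of the original source): with
# the rules above, mergeeditform(True, 'import.normal') yields
# 'import.normal.merge', while mergeeditform(False, 'import.normal') yields
# 'import.normal.normal'.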
353 353 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
354 354 editform='', **opts):
355 355 """get appropriate commit message editor according to '--edit' option
356 356
357 357 'finishdesc' is a function to be called with edited commit message
358 358 (= 'description' of the new changeset) just after editing, but
359 359 before checking emptiness. It should return the actual text to be
360 360 stored into history. This allows changing the description before
361 361 storing it.
362 362
363 363 'extramsg' is an extra message to be shown in the editor instead of
364 364 the 'Leave message empty to abort commit' line. The 'HG: ' prefix and
365 365 EOL are added automatically.
366 366
367 367 'editform' is a dot-separated list of names, to distinguish
368 368 the purpose of commit text editing.
369 369
370 370 'getcommiteditor' returns 'commitforceeditor' regardless of
371 371 'edit' if either 'finishdesc' or 'extramsg' is specified, because
372 372 they are specific to usage in MQ.
373 373 """
374 374 if edit or finishdesc or extramsg:
375 375 return lambda r, c, s: commitforceeditor(r, c, s,
376 376 finishdesc=finishdesc,
377 377 extramsg=extramsg,
378 378 editform=editform)
379 379 elif editform:
380 380 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
381 381 else:
382 382 return commiteditor
383 383
384 384 def loglimit(opts):
385 385 """get the log limit according to option -l/--limit"""
386 386 limit = opts.get('limit')
387 387 if limit:
388 388 try:
389 389 limit = int(limit)
390 390 except ValueError:
391 391 raise error.Abort(_('limit must be a positive integer'))
392 392 if limit <= 0:
393 393 raise error.Abort(_('limit must be positive'))
394 394 else:
395 395 limit = None
396 396 return limit
397 397
398 398 def makefilename(repo, pat, node, desc=None,
399 399 total=None, seqno=None, revwidth=None, pathname=None):
400 400 node_expander = {
401 401 'H': lambda: hex(node),
402 402 'R': lambda: str(repo.changelog.rev(node)),
403 403 'h': lambda: short(node),
404 404 'm': lambda: re.sub('[^\w]', '_', str(desc))
405 405 }
406 406 expander = {
407 407 '%': lambda: '%',
408 408 'b': lambda: os.path.basename(repo.root),
409 409 }
410 410
411 411 try:
412 412 if node:
413 413 expander.update(node_expander)
414 414 if node:
415 415 expander['r'] = (lambda:
416 416 str(repo.changelog.rev(node)).zfill(revwidth or 0))
417 417 if total is not None:
418 418 expander['N'] = lambda: str(total)
419 419 if seqno is not None:
420 420 expander['n'] = lambda: str(seqno)
421 421 if total is not None and seqno is not None:
422 422 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
423 423 if pathname is not None:
424 424 expander['s'] = lambda: os.path.basename(pathname)
425 425 expander['d'] = lambda: os.path.dirname(pathname) or '.'
426 426 expander['p'] = lambda: pathname
427 427
428 428 newname = []
429 429 patlen = len(pat)
430 430 i = 0
431 431 while i < patlen:
432 432 c = pat[i]
433 433 if c == '%':
434 434 i += 1
435 435 c = pat[i]
436 436 c = expander[c]()
437 437 newname.append(c)
438 438 i += 1
439 439 return ''.join(newname)
440 440 except KeyError as inst:
441 441 raise error.Abort(_("invalid format spec '%%%s' in output filename") %
442 442 inst.args[0])
443 443
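# Illustrative sketch (not part of the original source), assuming a repo and
# a valid node:
#
#     makefilename(repo, 'export-%h-%n-of-%N.patch', node, seqno=2, total=10)
#
# expands '%h' to the short node hash, '%n' to the sequence number
# zero-padded to the width of the total ('02'), and '%N' to the total ('10').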
444 444 def makefileobj(repo, pat, node=None, desc=None, total=None,
445 445 seqno=None, revwidth=None, mode='wb', modemap=None,
446 446 pathname=None):
447 447
448 448 writable = mode not in ('r', 'rb')
449 449
450 450 if not pat or pat == '-':
451 451 if writable:
452 452 fp = repo.ui.fout
453 453 else:
454 454 fp = repo.ui.fin
455 455 if util.safehasattr(fp, 'fileno'):
456 456 return os.fdopen(os.dup(fp.fileno()), mode)
457 457 else:
458 458 # if this fp can't be duped properly, return
459 459 # a dummy object that can be closed
460 460 class wrappedfileobj(object):
461 461 noop = lambda x: None
462 462 def __init__(self, f):
463 463 self.f = f
464 464 def __getattr__(self, attr):
465 465 if attr == 'close':
466 466 return self.noop
467 467 else:
468 468 return getattr(self.f, attr)
469 469
470 470 return wrappedfileobj(fp)
471 471 if util.safehasattr(pat, 'write') and writable:
472 472 return pat
473 473 if util.safehasattr(pat, 'read') and 'r' in mode:
474 474 return pat
475 475 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
476 476 if modemap is not None:
477 477 mode = modemap.get(fn, mode)
478 478 if mode == 'wb':
479 479 modemap[fn] = 'ab'
480 480 return open(fn, mode)
481 481
482 482 def openrevlog(repo, cmd, file_, opts):
483 483 """opens the changelog, manifest, a filelog or a given revlog"""
484 484 cl = opts['changelog']
485 485 mf = opts['manifest']
486 486 dir = opts['dir']
487 487 msg = None
488 488 if cl and mf:
489 489 msg = _('cannot specify --changelog and --manifest at the same time')
490 490 elif cl and dir:
491 491 msg = _('cannot specify --changelog and --dir at the same time')
492 492 elif cl or mf:
493 493 if file_:
494 494 msg = _('cannot specify filename with --changelog or --manifest')
495 495 elif not repo:
496 496 msg = _('cannot specify --changelog or --manifest or --dir '
497 497 'without a repository')
498 498 if msg:
499 499 raise error.Abort(msg)
500 500
501 501 r = None
502 502 if repo:
503 503 if cl:
504 504 r = repo.unfiltered().changelog
505 505 elif dir:
506 506 if 'treemanifest' not in repo.requirements:
507 507 raise error.Abort(_("--dir can only be used on repos with "
508 508 "treemanifest enabled"))
509 509 dirlog = repo.dirlog(file_)
510 510 if len(dirlog):
511 511 r = dirlog
512 512 elif mf:
513 513 r = repo.manifest
514 514 elif file_:
515 515 filelog = repo.file(file_)
516 516 if len(filelog):
517 517 r = filelog
518 518 if not r:
519 519 if not file_:
520 520 raise error.CommandError(cmd, _('invalid arguments'))
521 521 if not os.path.isfile(file_):
522 522 raise error.Abort(_("revlog '%s' not found") % file_)
523 523 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
524 524 file_[:-2] + ".i")
525 525 return r
526 526
527 527 def copy(ui, repo, pats, opts, rename=False):
528 528 # called with the repo lock held
529 529 #
530 530 # hgsep => pathname that uses "/" to separate directories
531 531 # ossep => pathname that uses os.sep to separate directories
532 532 cwd = repo.getcwd()
533 533 targets = {}
534 534 after = opts.get("after")
535 535 dryrun = opts.get("dry_run")
536 536 wctx = repo[None]
537 537
538 538 def walkpat(pat):
539 539 srcs = []
540 540 if after:
541 541 badstates = '?'
542 542 else:
543 543 badstates = '?r'
544 544 m = scmutil.match(repo[None], [pat], opts, globbed=True)
545 545 for abs in repo.walk(m):
546 546 state = repo.dirstate[abs]
547 547 rel = m.rel(abs)
548 548 exact = m.exact(abs)
549 549 if state in badstates:
550 550 if exact and state == '?':
551 551 ui.warn(_('%s: not copying - file is not managed\n') % rel)
552 552 if exact and state == 'r':
553 553 ui.warn(_('%s: not copying - file has been marked for'
554 554 ' remove\n') % rel)
555 555 continue
556 556 # abs: hgsep
557 557 # rel: ossep
558 558 srcs.append((abs, rel, exact))
559 559 return srcs
560 560
561 561 # abssrc: hgsep
562 562 # relsrc: ossep
563 563 # otarget: ossep
564 564 def copyfile(abssrc, relsrc, otarget, exact):
565 565 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
566 566 if '/' in abstarget:
567 567 # We cannot normalize abstarget itself, this would prevent
568 568 # case only renames, like a => A.
569 569 abspath, absname = abstarget.rsplit('/', 1)
570 570 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
571 571 reltarget = repo.pathto(abstarget, cwd)
572 572 target = repo.wjoin(abstarget)
573 573 src = repo.wjoin(abssrc)
574 574 state = repo.dirstate[abstarget]
575 575
576 576 scmutil.checkportable(ui, abstarget)
577 577
578 578 # check for collisions
579 579 prevsrc = targets.get(abstarget)
580 580 if prevsrc is not None:
581 581 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
582 582 (reltarget, repo.pathto(abssrc, cwd),
583 583 repo.pathto(prevsrc, cwd)))
584 584 return
585 585
586 586 # check for overwrites
587 587 exists = os.path.lexists(target)
588 588 samefile = False
589 589 if exists and abssrc != abstarget:
590 590 if (repo.dirstate.normalize(abssrc) ==
591 591 repo.dirstate.normalize(abstarget)):
592 592 if not rename:
593 593 ui.warn(_("%s: can't copy - same file\n") % reltarget)
594 594 return
595 595 exists = False
596 596 samefile = True
597 597
598 598 if not after and exists or after and state in 'mn':
599 599 if not opts['force']:
600 600 ui.warn(_('%s: not overwriting - file exists\n') %
601 601 reltarget)
602 602 return
603 603
604 604 if after:
605 605 if not exists:
606 606 if rename:
607 607 ui.warn(_('%s: not recording move - %s does not exist\n') %
608 608 (relsrc, reltarget))
609 609 else:
610 610 ui.warn(_('%s: not recording copy - %s does not exist\n') %
611 611 (relsrc, reltarget))
612 612 return
613 613 elif not dryrun:
614 614 try:
615 615 if exists:
616 616 os.unlink(target)
617 617 targetdir = os.path.dirname(target) or '.'
618 618 if not os.path.isdir(targetdir):
619 619 os.makedirs(targetdir)
620 620 if samefile:
621 621 tmp = target + "~hgrename"
622 622 os.rename(src, tmp)
623 623 os.rename(tmp, target)
624 624 else:
625 625 util.copyfile(src, target)
626 626 srcexists = True
627 627 except IOError as inst:
628 628 if inst.errno == errno.ENOENT:
629 629 ui.warn(_('%s: deleted in working directory\n') % relsrc)
630 630 srcexists = False
631 631 else:
632 632 ui.warn(_('%s: cannot copy - %s\n') %
633 633 (relsrc, inst.strerror))
634 634 return True # report a failure
635 635
636 636 if ui.verbose or not exact:
637 637 if rename:
638 638 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
639 639 else:
640 640 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
641 641
642 642 targets[abstarget] = abssrc
643 643
644 644 # fix up dirstate
645 645 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
646 646 dryrun=dryrun, cwd=cwd)
647 647 if rename and not dryrun:
648 648 if not after and srcexists and not samefile:
649 649 util.unlinkpath(repo.wjoin(abssrc))
650 650 wctx.forget([abssrc])
651 651
652 652 # pat: ossep
653 653 # dest ossep
654 654 # srcs: list of (hgsep, hgsep, ossep, bool)
655 655 # return: function that takes hgsep and returns ossep
656 656 def targetpathfn(pat, dest, srcs):
657 657 if os.path.isdir(pat):
658 658 abspfx = pathutil.canonpath(repo.root, cwd, pat)
659 659 abspfx = util.localpath(abspfx)
660 660 if destdirexists:
661 661 striplen = len(os.path.split(abspfx)[0])
662 662 else:
663 663 striplen = len(abspfx)
664 664 if striplen:
665 665 striplen += len(os.sep)
666 666 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
667 667 elif destdirexists:
668 668 res = lambda p: os.path.join(dest,
669 669 os.path.basename(util.localpath(p)))
670 670 else:
671 671 res = lambda p: dest
672 672 return res
673 673
674 674 # pat: ossep
675 675 # dest ossep
676 676 # srcs: list of (hgsep, hgsep, ossep, bool)
677 677 # return: function that takes hgsep and returns ossep
678 678 def targetpathafterfn(pat, dest, srcs):
679 679 if matchmod.patkind(pat):
680 680 # a mercurial pattern
681 681 res = lambda p: os.path.join(dest,
682 682 os.path.basename(util.localpath(p)))
683 683 else:
684 684 abspfx = pathutil.canonpath(repo.root, cwd, pat)
685 685 if len(abspfx) < len(srcs[0][0]):
686 686 # A directory. Either the target path contains the last
687 687 # component of the source path or it does not.
688 688 def evalpath(striplen):
689 689 score = 0
690 690 for s in srcs:
691 691 t = os.path.join(dest, util.localpath(s[0])[striplen:])
692 692 if os.path.lexists(t):
693 693 score += 1
694 694 return score
695 695
696 696 abspfx = util.localpath(abspfx)
697 697 striplen = len(abspfx)
698 698 if striplen:
699 699 striplen += len(os.sep)
700 700 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
701 701 score = evalpath(striplen)
702 702 striplen1 = len(os.path.split(abspfx)[0])
703 703 if striplen1:
704 704 striplen1 += len(os.sep)
705 705 if evalpath(striplen1) > score:
706 706 striplen = striplen1
707 707 res = lambda p: os.path.join(dest,
708 708 util.localpath(p)[striplen:])
709 709 else:
710 710 # a file
711 711 if destdirexists:
712 712 res = lambda p: os.path.join(dest,
713 713 os.path.basename(util.localpath(p)))
714 714 else:
715 715 res = lambda p: dest
716 716 return res
717 717
718 718 pats = scmutil.expandpats(pats)
719 719 if not pats:
720 720 raise error.Abort(_('no source or destination specified'))
721 721 if len(pats) == 1:
722 722 raise error.Abort(_('no destination specified'))
723 723 dest = pats.pop()
724 724 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
725 725 if not destdirexists:
726 726 if len(pats) > 1 or matchmod.patkind(pats[0]):
727 727 raise error.Abort(_('with multiple sources, destination must be an '
728 728 'existing directory'))
729 729 if util.endswithsep(dest):
730 730 raise error.Abort(_('destination %s is not a directory') % dest)
731 731
732 732 tfn = targetpathfn
733 733 if after:
734 734 tfn = targetpathafterfn
735 735 copylist = []
736 736 for pat in pats:
737 737 srcs = walkpat(pat)
738 738 if not srcs:
739 739 continue
740 740 copylist.append((tfn(pat, dest, srcs), srcs))
741 741 if not copylist:
742 742 raise error.Abort(_('no files to copy'))
743 743
744 744 errors = 0
745 745 for targetpath, srcs in copylist:
746 746 for abssrc, relsrc, exact in srcs:
747 747 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
748 748 errors += 1
749 749
750 750 if errors:
751 751 ui.warn(_('(consider using --after)\n'))
752 752
753 753 return errors != 0
754 754
755 755 def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
756 756 runargs=None, appendpid=False):
757 757 '''Run a command as a service.'''
758 758
759 759 def writepid(pid):
760 760 if opts['pid_file']:
761 761 if appendpid:
762 762 mode = 'a'
763 763 else:
764 764 mode = 'w'
765 765 fp = open(opts['pid_file'], mode)
766 766 fp.write(str(pid) + '\n')
767 767 fp.close()
768 768
769 769 if opts['daemon'] and not opts['daemon_pipefds']:
770 770 # Signal child process startup with file removal
771 771 lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
772 772 os.close(lockfd)
773 773 try:
774 774 if not runargs:
775 775 runargs = util.hgcmd() + sys.argv[1:]
776 776 runargs.append('--daemon-pipefds=%s' % lockpath)
777 777 # Don't pass --cwd to the child process, because we've already
778 778 # changed directory.
779 779 for i in xrange(1, len(runargs)):
780 780 if runargs[i].startswith('--cwd='):
781 781 del runargs[i]
782 782 break
783 783 elif runargs[i].startswith('--cwd'):
784 784 del runargs[i:i + 2]
785 785 break
786 786 def condfn():
787 787 return not os.path.exists(lockpath)
788 788 pid = util.rundetached(runargs, condfn)
789 789 if pid < 0:
790 790 raise error.Abort(_('child process failed to start'))
791 791 writepid(pid)
792 792 finally:
793 793 try:
794 794 os.unlink(lockpath)
795 795 except OSError as e:
796 796 if e.errno != errno.ENOENT:
797 797 raise
798 798 if parentfn:
799 799 return parentfn(pid)
800 800 else:
801 801 return
802 802
803 803 if initfn:
804 804 initfn()
805 805
806 806 if not opts['daemon']:
807 807 writepid(os.getpid())
808 808
809 809 if opts['daemon_pipefds']:
810 810 lockpath = opts['daemon_pipefds']
811 811 try:
812 812 os.setsid()
813 813 except AttributeError:
814 814 pass
815 815 os.unlink(lockpath)
816 816 util.hidewindow()
817 817 sys.stdout.flush()
818 818 sys.stderr.flush()
819 819
820 820 nullfd = os.open(os.devnull, os.O_RDWR)
821 821 logfilefd = nullfd
822 822 if logfile:
823 823 logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
824 824 os.dup2(nullfd, 0)
825 825 os.dup2(logfilefd, 1)
826 826 os.dup2(logfilefd, 2)
827 827 if nullfd not in (0, 1, 2):
828 828 os.close(nullfd)
829 829 if logfile and logfilefd not in (0, 1, 2):
830 830 os.close(logfilefd)
831 831
832 832 if runfn:
833 833 return runfn()
834 834
835 835 ## facility to let extensions process additional data into an import patch
836 836 # list of identifiers to be executed in order
837 837 extrapreimport = [] # run before commit
838 838 extrapostimport = [] # run after commit
839 839 # mapping from identifier to actual import function
840 840 #
841 841 # 'preimport' functions are run before the commit is made and are provided
842 842 # the following arguments:
843 843 # - repo: the localrepository instance,
844 844 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
845 845 # - extra: the future extra dictionary of the changeset, please mutate it,
846 846 # - opts: the import options.
847 847 # XXX ideally, we would just pass a ctx ready to be computed; that would allow
848 848 # mutation of the in-memory commit and more. Feel free to rework the code to
849 849 # get there.
850 850 extrapreimportmap = {}
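# Hypothetical usage sketch (names invented for illustration, not part of
# this module): an extension could register a preimport hook matching the
# call signature described above like so:
#
#     def recordsource(repo, patchdata, extra, opts):
#         # stash the original node id in the changeset's extra dict
#         extra['source'] = patchdata.get('nodeid', '')
#
#     extrapreimport.append('recordsource')
#     extrapreimportmap['recordsource'] = recordsource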
851 851 # 'postimport' functions are run after the commit is made and are provided
852 852 # the following argument:
854 854 extrapostimportmap = {}
855 855
856 856 def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
857 857 """Utility function used by commands.import to import a single patch
858 858
859 859 This function is explicitly defined here to help the evolve extension to
860 860 wrap this part of the import logic.
861 861
862 862 The API is currently a bit ugly because it is a simple code translation of
863 863 the import command. Feel free to make it better.
865 865 :hunk: a patch (as a binary string)
866 866 :parents: nodes that will be parent of the created commit
867 867 :opts: the full dict of options passed to the import command
868 868 :msgs: list to save the commit message to
869 869 (used in case we need to save it when failing)
870 870 :updatefunc: a function that updates a repo to a given node
871 871 updatefunc(<repo>, <node>)
872 872 """
873 873 # avoid cycle context -> subrepo -> cmdutil
874 874 import context
875 875 extractdata = patch.extract(ui, hunk)
876 876 tmpname = extractdata.get('filename')
877 877 message = extractdata.get('message')
878 878 user = extractdata.get('user')
879 879 date = extractdata.get('date')
880 880 branch = extractdata.get('branch')
881 881 nodeid = extractdata.get('nodeid')
882 882 p1 = extractdata.get('p1')
883 883 p2 = extractdata.get('p2')
884 884
885 885 update = not opts.get('bypass')
886 886 strip = opts["strip"]
887 887 prefix = opts["prefix"]
888 888 sim = float(opts.get('similarity') or 0)
889 889 if not tmpname:
890 890 return (None, None, False)
891 891 msg = _('applied to working directory')
892 892
893 893 rejects = False
894 894
895 895 try:
896 896 cmdline_message = logmessage(ui, opts)
897 897 if cmdline_message:
898 898 # pickup the cmdline msg
899 899 message = cmdline_message
900 900 elif message:
901 901 # pickup the patch msg
902 902 message = message.strip()
903 903 else:
904 904 # launch the editor
905 905 message = None
906 906 ui.debug('message:\n%s\n' % message)
907 907
908 908 if len(parents) == 1:
909 909 parents.append(repo[nullid])
910 910 if opts.get('exact'):
911 911 if not nodeid or not p1:
912 912 raise error.Abort(_('not a Mercurial patch'))
913 913 p1 = repo[p1]
914 914 p2 = repo[p2 or nullid]
915 915 elif p2:
916 916 try:
917 917 p1 = repo[p1]
918 918 p2 = repo[p2]
919 919 # Without any options, consider p2 only if the
920 920 # patch is being applied on top of the recorded
921 921 # first parent.
922 922 if p1 != parents[0]:
923 923 p1 = parents[0]
924 924 p2 = repo[nullid]
925 925 except error.RepoError:
926 926 p1, p2 = parents
927 927 if p2.node() == nullid:
928 928 ui.warn(_("warning: import the patch as a normal revision\n"
929 929 "(use --exact to import the patch as a merge)\n"))
930 930 else:
931 931 p1, p2 = parents
932 932
933 933 n = None
934 934 if update:
935 935 if p1 != parents[0]:
936 936 updatefunc(repo, p1.node())
937 937 if p2 != parents[1]:
938 938 repo.setparents(p1.node(), p2.node())
939 939
940 940 if opts.get('exact') or opts.get('import_branch'):
941 941 repo.dirstate.setbranch(branch or 'default')
942 942
943 943 partial = opts.get('partial', False)
944 944 files = set()
945 945 try:
946 946 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
947 947 files=files, eolmode=None, similarity=sim / 100.0)
948 948 except patch.PatchError as e:
949 949 if not partial:
950 950 raise error.Abort(str(e))
951 951 if partial:
952 952 rejects = True
953 953
954 954 files = list(files)
955 955 if opts.get('no_commit'):
956 956 if message:
957 957 msgs.append(message)
958 958 else:
959 959 if opts.get('exact') or p2:
960 960 # If you got here, you either use --force and know what
961 961 # you are doing or used --exact or a merge patch while
962 962 # being updated to its first parent.
963 963 m = None
964 964 else:
965 965 m = scmutil.matchfiles(repo, files or [])
966 966 editform = mergeeditform(repo[None], 'import.normal')
967 967 if opts.get('exact'):
968 968 editor = None
969 969 else:
970 970 editor = getcommiteditor(editform=editform, **opts)
971 971 allowemptyback = repo.ui.backupconfig('ui', 'allowemptycommit')
972 972 extra = {}
973 973 for idfunc in extrapreimport:
974 974 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
975 975 try:
976 976 if partial:
977 977 repo.ui.setconfig('ui', 'allowemptycommit', True)
978 978 n = repo.commit(message, opts.get('user') or user,
979 979 opts.get('date') or date, match=m,
980 980 editor=editor, extra=extra)
981 981 for idfunc in extrapostimport:
982 982 extrapostimportmap[idfunc](repo[n])
983 983 finally:
984 984 repo.ui.restoreconfig(allowemptyback)
985 985 else:
986 986 if opts.get('exact') or opts.get('import_branch'):
987 987 branch = branch or 'default'
988 988 else:
989 989 branch = p1.branch()
990 990 store = patch.filestore()
991 991 try:
992 992 files = set()
993 993 try:
994 994 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
995 995 files, eolmode=None)
996 996 except patch.PatchError as e:
997 997 raise error.Abort(str(e))
998 998 if opts.get('exact'):
999 999 editor = None
1000 1000 else:
1001 1001 editor = getcommiteditor(editform='import.bypass')
1002 1002 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1003 1003 message,
1004 1004 opts.get('user') or user,
1005 1005 opts.get('date') or date,
1006 1006 branch, files, store,
1007 1007 editor=editor)
1008 1008 n = memctx.commit()
1009 1009 finally:
1010 1010 store.close()
1011 1011 if opts.get('exact') and opts.get('no_commit'):
1012 1012 # --exact with --no-commit is still useful in that it does merge
1013 1013 # and branch bits
1014 1014 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1015 1015 elif opts.get('exact') and hex(n) != nodeid:
1016 1016 raise error.Abort(_('patch is damaged or loses information'))
1017 1017 if n:
1018 1018 # i18n: refers to a short changeset id
1019 1019 msg = _('created %s') % short(n)
1020 1020 return (msg, n, rejects)
1021 1021 finally:
1022 1022 os.unlink(tmpname)
1023 1023
1024 1024 # facility to let extensions include additional data in an exported patch
1025 1025 # list of identifiers to be executed in order
1026 1026 extraexport = []
1027 1027 # mapping from identifier to actual export function
1028 1028 # each function has to return a string to be added to the header, or None
1029 1029 # it is given two arguments (sequencenumber, changectx)
1030 1030 extraexportmap = {}
1031 1031
1032 1032 def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
1033 1033 opts=None, match=None):
1034 1034 '''export changesets as hg patches.'''
1035 1035
1036 1036 total = len(revs)
1037 1037 revwidth = max([len(str(rev)) for rev in revs])
1038 1038 filemode = {}
1039 1039
1040 1040 def single(rev, seqno, fp):
1041 1041 ctx = repo[rev]
1042 1042 node = ctx.node()
1043 1043 parents = [p.node() for p in ctx.parents() if p]
1044 1044 branch = ctx.branch()
1045 1045 if switch_parent:
1046 1046 parents.reverse()
1047 1047
1048 1048 if parents:
1049 1049 prev = parents[0]
1050 1050 else:
1051 1051 prev = nullid
1052 1052
1053 1053 shouldclose = False
1054 1054 if not fp and len(template) > 0:
1055 1055 desc_lines = ctx.description().rstrip().split('\n')
1056 1056 desc = desc_lines[0] # Commit always has a first line.
1057 1057 fp = makefileobj(repo, template, node, desc=desc, total=total,
1058 1058 seqno=seqno, revwidth=revwidth, mode='wb',
1059 1059 modemap=filemode)
1060 1060 if fp != template:
1061 1061 shouldclose = True
1062 1062 if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
1063 1063 repo.ui.note("%s\n" % fp.name)
1064 1064
1065 1065 if not fp:
1066 1066 write = repo.ui.write
1067 1067 else:
1068 1068 def write(s, **kw):
1069 1069 fp.write(s)
1070 1070
1071 1071 write("# HG changeset patch\n")
1072 1072 write("# User %s\n" % ctx.user())
1073 1073 write("# Date %d %d\n" % ctx.date())
1074 1074 write("# %s\n" % util.datestr(ctx.date()))
1075 1075 if branch and branch != 'default':
1076 1076 write("# Branch %s\n" % branch)
1077 1077 write("# Node ID %s\n" % hex(node))
1078 1078 write("# Parent %s\n" % hex(prev))
1079 1079 if len(parents) > 1:
1080 1080 write("# Parent %s\n" % hex(parents[1]))
1081 1081
1082 1082 for headerid in extraexport:
1083 1083 header = extraexportmap[headerid](seqno, ctx)
1084 1084 if header is not None:
1085 1085 write('# %s\n' % header)
1086 1086 write(ctx.description().rstrip())
1087 1087 write("\n\n")
1088 1088
1089 1089 for chunk, label in patch.diffui(repo, prev, node, match, opts=opts):
1090 1090 write(chunk, label=label)
1091 1091
1092 1092 if shouldclose:
1093 1093 fp.close()
1094 1094
1095 1095 for seqno, rev in enumerate(revs):
1096 1096 single(rev, seqno + 1, fp)
1097 1097
def diffordiffstat(ui, repo, diffopts, node1, node2, match,
                   changes=None, stat=False, fp=None, prefix='',
                   root='', listsubrepos=False):
    '''show diff or diffstat.'''
    if fp is None:
        write = ui.write
    else:
        def write(s, **kw):
            fp.write(s)

    if root:
        relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
    else:
        relroot = ''
    if relroot != '':
        # XXX relative roots currently don't work if the root is within a
        # subrepo
        uirelroot = match.uipath(relroot)
        relroot += '/'
        for matchroot in match.files():
            if not matchroot.startswith(relroot):
                ui.warn(_('warning: %s not inside relative root %s\n') % (
                    match.uipath(matchroot), uirelroot))

    if stat:
        diffopts = diffopts.copy(context=0)
        width = 80
        if not ui.plain():
            width = ui.termwidth()
        chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
                            prefix=prefix, relroot=relroot)
        for chunk, label in patch.diffstatui(util.iterlines(chunks),
                                             width=width,
                                             git=diffopts.git):
            write(chunk, label=label)
    else:
        for chunk, label in patch.diffui(repo, node1, node2, match,
                                         changes, diffopts, prefix=prefix,
                                         relroot=relroot):
            write(chunk, label=label)

    if listsubrepos:
        ctx1 = repo[node1]
        ctx2 = repo[node2]
        for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
            tempnode2 = node2
            try:
                if node2 is not None:
                    tempnode2 = ctx2.substate[subpath][1]
            except KeyError:
                # A subrepo that existed in node1 was deleted between node1 and
                # node2 (inclusive). Thus, ctx2's substate won't contain that
                # subpath. The best we can do is to ignore it.
                tempnode2 = None
            submatch = matchmod.narrowmatcher(subpath, match)
            sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
                     stat=stat, fp=fp, prefix=prefix)

class changeset_printer(object):
    '''show changeset information when templating not requested.'''

    def __init__(self, ui, repo, matchfn, diffopts, buffered):
        self.ui = ui
        self.repo = repo
        self.buffered = buffered
        self.matchfn = matchfn
        self.diffopts = diffopts
        self.header = {}
        self.hunk = {}
        self.lastheader = None
        self.footer = None

    def flush(self, ctx):
        rev = ctx.rev()
        if rev in self.header:
            h = self.header[rev]
            if h != self.lastheader:
                self.lastheader = h
                self.ui.write(h)
            del self.header[rev]
        if rev in self.hunk:
            self.ui.write(self.hunk[rev])
            del self.hunk[rev]
            return 1
        return 0

    def close(self):
        if self.footer:
            self.ui.write(self.footer)

    def show(self, ctx, copies=None, matchfn=None, **props):
        if self.buffered:
            self.ui.pushbuffer(labeled=True)
            self._show(ctx, copies, matchfn, props)
            self.hunk[ctx.rev()] = self.ui.popbuffer()
        else:
            self._show(ctx, copies, matchfn, props)

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        changenode = ctx.node()
        rev = ctx.rev()
        if self.ui.debugflag:
            hexfunc = hex
        else:
            hexfunc = short
        # as of now, wctx.node() and wctx.rev() return None, but we want to
        # show the same values as {node} and {rev} templatekw
        revnode = (scmutil.intrev(rev), hexfunc(bin(ctx.hex())))

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % revnode, label='log.node')
            return

        date = util.datestr(ctx.date())

        # i18n: column positioning for "hg log"
        self.ui.write(_("changeset:   %d:%s\n") % revnode,
                      label='log.changeset changeset.%s' % ctx.phasestr())

        # branches are shown first before any other names due to backwards
        # compatibility
        branch = ctx.branch()
        # don't show the default branch name
        if branch != 'default':
            # i18n: column positioning for "hg log"
            self.ui.write(_("branch:      %s\n") % branch,
                          label='log.branch')

        for name, ns in self.repo.names.iteritems():
            # branches has special logic already handled above, so here we just
            # skip it
            if name == 'branches':
                continue
            # we will use the templatename as the color name since those two
            # should be the same
            for name in ns.names(self.repo, changenode):
                self.ui.write(ns.logfmt % name,
                              label='log.%s' % ns.colorname)
        if self.ui.debugflag:
            # i18n: column positioning for "hg log"
            self.ui.write(_("phase:       %s\n") % ctx.phasestr(),
                          label='log.phase')
        for pctx in scmutil.meaningfulparents(self.repo, ctx):
            label = 'log.parent changeset.%s' % pctx.phasestr()
            # i18n: column positioning for "hg log"
            self.ui.write(_("parent:      %d:%s\n")
                          % (pctx.rev(), hexfunc(pctx.node())),
                          label=label)

        if self.ui.debugflag and rev is not None:
            mnode = ctx.manifestnode()
            # i18n: column positioning for "hg log"
            self.ui.write(_("manifest:    %d:%s\n") %
                          (self.repo.manifest.rev(mnode), hex(mnode)),
                          label='ui.debug log.manifest')
        # i18n: column positioning for "hg log"
        self.ui.write(_("user:        %s\n") % ctx.user(),
                      label='log.user')
        # i18n: column positioning for "hg log"
        self.ui.write(_("date:        %s\n") % date,
                      label='log.date')

        if self.ui.debugflag:
            files = ctx.p1().status(ctx)[:3]
            for key, value in zip([# i18n: column positioning for "hg log"
                                   _("files:"),
                                   # i18n: column positioning for "hg log"
                                   _("files+:"),
                                   # i18n: column positioning for "hg log"
                                   _("files-:")], files):
                if value:
                    self.ui.write("%-12s %s\n" % (key, " ".join(value)),
                                  label='ui.debug log.files')
        elif ctx.files() and self.ui.verbose:
            # i18n: column positioning for "hg log"
            self.ui.write(_("files:       %s\n") % " ".join(ctx.files()),
                          label='ui.note log.files')
        if copies and self.ui.verbose:
            copies = ['%s (%s)' % c for c in copies]
            # i18n: column positioning for "hg log"
            self.ui.write(_("copies:      %s\n") % ' '.join(copies),
                          label='ui.note log.copies')

        extra = ctx.extra()
        if extra and self.ui.debugflag:
            for key, value in sorted(extra.items()):
                # i18n: column positioning for "hg log"
                self.ui.write(_("extra:       %s=%s\n")
                              % (key, value.encode('string_escape')),
                              label='ui.debug log.extra')

        description = ctx.description().strip()
        if description:
            if self.ui.verbose:
                self.ui.write(_("description:\n"),
                              label='ui.note log.description')
                self.ui.write(description,
                              label='ui.note log.description')
                self.ui.write("\n\n")
            else:
                # i18n: column positioning for "hg log"
                self.ui.write(_("summary:     %s\n") %
                              description.splitlines()[0],
                              label='log.summary')
        self.ui.write("\n")

        self.showpatch(ctx, matchfn)

    def showpatch(self, ctx, matchfn):
        if not matchfn:
            matchfn = self.matchfn
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.diffallopts(self.ui, self.diffopts)
            node = ctx.node()
            prev = ctx.p1()
            if stat:
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
            if diff:
                if stat:
                    self.ui.write("\n")
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
            self.ui.write("\n")

class jsonchangeset(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, matchfn, diffopts, buffered):
        changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
        self.cache = {}
        self._first = True

    def close(self):
        if not self._first:
            self.ui.write("\n]\n")
        else:
            self.ui.write("[]\n")

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        rev = ctx.rev()
        if rev is None:
            jrev = jnode = 'null'
        else:
            jrev = str(rev)
            jnode = '"%s"' % hex(ctx.node())
        j = encoding.jsonescape

        if self._first:
            self.ui.write("[\n {")
            self._first = False
        else:
            self.ui.write(",\n {")

        if self.ui.quiet:
            self.ui.write('\n  "rev": %s' % jrev)
            self.ui.write(',\n  "node": %s' % jnode)
            self.ui.write('\n }')
            return

        self.ui.write('\n  "rev": %s' % jrev)
        self.ui.write(',\n  "node": %s' % jnode)
        self.ui.write(',\n  "branch": "%s"' % j(ctx.branch()))
        self.ui.write(',\n  "phase": "%s"' % ctx.phasestr())
        self.ui.write(',\n  "user": "%s"' % j(ctx.user()))
        self.ui.write(',\n  "date": [%d, %d]' % ctx.date())
        self.ui.write(',\n  "desc": "%s"' % j(ctx.description()))

        self.ui.write(',\n  "bookmarks": [%s]' %
                      ", ".join('"%s"' % j(b) for b in ctx.bookmarks()))
        self.ui.write(',\n  "tags": [%s]' %
                      ", ".join('"%s"' % j(t) for t in ctx.tags()))
        self.ui.write(',\n  "parents": [%s]' %
                      ", ".join('"%s"' % c.hex() for c in ctx.parents()))

        if self.ui.debugflag:
            if rev is None:
                jmanifestnode = 'null'
            else:
                jmanifestnode = '"%s"' % hex(ctx.manifestnode())
            self.ui.write(',\n  "manifest": %s' % jmanifestnode)

            self.ui.write(',\n  "extra": {%s}' %
                          ", ".join('"%s": "%s"' % (j(k), j(v))
                                    for k, v in ctx.extra().items()))

            files = ctx.p1().status(ctx)
            self.ui.write(',\n  "modified": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[0]))
            self.ui.write(',\n  "added": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[1]))
            self.ui.write(',\n  "removed": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[2]))

        elif self.ui.verbose:
            self.ui.write(',\n  "files": [%s]' %
                          ", ".join('"%s"' % j(f) for f in ctx.files()))

            if copies:
                self.ui.write(',\n  "copies": {%s}' %
                              ", ".join('"%s": "%s"' % (j(k), j(v))
                                        for k, v in copies))

        matchfn = self.matchfn
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.difffeatureopts(self.ui, self.diffopts, git=True)
            node, prev = ctx.node(), ctx.p1().node()
            if stat:
                self.ui.pushbuffer()
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
                self.ui.write(',\n  "diffstat": "%s"' % j(self.ui.popbuffer()))
            if diff:
                self.ui.pushbuffer()
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
                self.ui.write(',\n  "diff": "%s"' % j(self.ui.popbuffer()))

        self.ui.write("\n }")

class changeset_templater(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
        changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
        formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
        defaulttempl = {
            'parent': '{rev}:{node|formatnode} ',
            'manifest': '{rev}:{node|formatnode}',
            'file_copy': '{name} ({source})',
            'extra': '{key}={value|stringescape}'
            }
        # filecopy is preserved for compatibility reasons
        defaulttempl['filecopy'] = defaulttempl['file_copy']
        self.t = templater.templater(mapfile, {'formatnode': formatnode},
                                     cache=defaulttempl)
        if tmpl:
            self.t.cache['changeset'] = tmpl

        self.cache = {}

        # find correct templates for current mode
        tmplmodes = [
            (True, None),
            (self.ui.verbose, 'verbose'),
            (self.ui.quiet, 'quiet'),
            (self.ui.debugflag, 'debug'),
        ]

        self._parts = {'header': '', 'footer': '', 'changeset': 'changeset',
                       'docheader': '', 'docfooter': ''}
        for mode, postfix in tmplmodes:
            for t in self._parts:
                cur = t
                if postfix:
                    cur += "_" + postfix
                if mode and cur in self.t:
                    self._parts[t] = cur

        if self._parts['docheader']:
            self.ui.write(templater.stringify(self.t(self._parts['docheader'])))

    def close(self):
        if self._parts['docfooter']:
            if not self.footer:
                self.footer = ""
            self.footer += templater.stringify(self.t(self._parts['docfooter']))
        return super(changeset_templater, self).close()

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        props = props.copy()
        props.update(templatekw.keywords)
        props['templ'] = self.t
        props['ctx'] = ctx
        props['repo'] = self.repo
        props['revcache'] = {'copies': copies}
        props['cache'] = self.cache

        try:
            # write header
            if self._parts['header']:
                h = templater.stringify(self.t(self._parts['header'], **props))
                if self.buffered:
                    self.header[ctx.rev()] = h
                else:
                    if self.lastheader != h:
                        self.lastheader = h
                        self.ui.write(h)

            # write changeset metadata, then patch if requested
            key = self._parts['changeset']
            self.ui.write(templater.stringify(self.t(key, **props)))
            self.showpatch(ctx, matchfn)

            if self._parts['footer']:
                if not self.footer:
                    self.footer = templater.stringify(
                        self.t(self._parts['footer'], **props))
        except KeyError as inst:
            msg = _("%s: no key named '%s'")
            raise error.Abort(msg % (self.t.mapfile, inst.args[0]))
        except SyntaxError as inst:
            raise error.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))

def gettemplate(ui, tmpl, style):
    """
    Find the template matching the given template spec or style.
    """

    # ui settings
    if not tmpl and not style: # templates take precedence over styles
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl:
            try:
                tmpl = templater.unquotestring(tmpl)
            except SyntaxError:
                pass
            return tmpl, None
        else:
            style = util.expandpath(ui.config('ui', 'style', ''))

    if not tmpl and style:
        mapfile = style
        if not os.path.split(mapfile)[0]:
            mapname = (templater.templatepath('map-cmdline.' + mapfile)
                       or templater.templatepath(mapfile))
            if mapname:
                mapfile = mapname
        return None, mapfile

    if not tmpl:
        return None, None

    return formatter.lookuptemplate(ui, 'changeset', tmpl)
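# A standalone sketch (not part of this module, stub names only) of the
# precedence order gettemplate() implements: an explicit template wins,
# then an explicit style, then ui.logtemplate, then ui.style.

```python
def pick(template, style, uitemplate, uistyle):
    """Illustrative only: mirror gettemplate's precedence decisions."""
    if not template and not style:
        # no explicit spec: fall back to ui configuration
        if uitemplate:
            return ('template', uitemplate)
        style = uistyle
    if not template and style:
        return ('style', style)
    if not template:
        return (None, None)
    return ('template', template)

print(pick('', 'compact', '{rev}\n', ''))  # ('style', 'compact')
```

Note the asymmetry: an explicit `--style` suppresses `ui.logtemplate`, but `ui.style` is only consulted when `ui.logtemplate` is empty.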

def show_changeset(ui, repo, opts, buffered=False):
    """show one changeset using template or regular display.

    Display format will be the first non-empty hit of:
    1. option 'template'
    2. option 'style'
    3. [ui] setting 'logtemplate'
    4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changeset_printer() is done.
    """
    # options
    matchfn = None
    if opts.get('patch') or opts.get('stat'):
        matchfn = scmutil.matchall(repo)

    if opts.get('template') == 'json':
        return jsonchangeset(ui, repo, matchfn, opts, buffered)

    tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))

    if not tmpl and not mapfile:
        return changeset_printer(ui, repo, matchfn, opts, buffered)

    try:
        t = changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile,
                                buffered)
    except SyntaxError as inst:
        raise error.Abort(inst.args[0])
    return t

def showmarker(ui, marker):
    """utility function to display an obsolescence marker in a readable way

    To be used by debug functions."""
    ui.write(hex(marker.precnode()))
    for repl in marker.succnodes():
        ui.write(' ')
        ui.write(hex(repl))
    ui.write(' %X ' % marker.flags())
    parents = marker.parentnodes()
    if parents is not None:
        ui.write('{%s} ' % ', '.join(hex(p) for p in parents))
    ui.write('(%s) ' % util.datestr(marker.date()))
    ui.write('{%s}' % (', '.join('%r: %r' % t for t in
                                 sorted(marker.metadata().items())
                                 if t[0] != 'date')))
    ui.write('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = util.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, util.datestr(results[rev])))
            return str(rev)

    raise error.Abort(_("revision matching date not found"))

def increasingwindows(windowsize=8, sizelimit=512):
    """Yield increasing window sizes: 8, 16, 32, ..., capped at sizelimit."""
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

class FileWalkError(Exception):
    pass
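# A standalone sketch (not part of this module) of how the doubling
# windows from increasingwindows() pace consumption of a revision
# iterator: small batches first for fast initial output, growing
# batches later to amortize per-window overhead.

```python
def increasingwindows(windowsize=8, sizelimit=512):
    # same shape as the generator above: double until the cap
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

def windows(n, start=8, cap=512):
    """Return the batch sizes used to consume n items."""
    out = []
    gen = increasingwindows(start, cap)
    while n > 0:
        w = next(gen)
        out.append(min(w, n))
        n -= w
    return out

print(windows(30))  # [8, 16, 6]: two full windows, then the leftover revs
```

The early small windows are what lets `hg log` start printing almost immediately even on huge repositories.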

def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)
    def filerevgen(filelog, last):
        """
        Only files, no patterns.  Check the history of each file.

        Examines filelog entries within minrev, maxrev linkrev range
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order
        """
        cl_count = len(repo)
        revs = []
        for j in xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield rev for which we have the changelog, it can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)
    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise error.Abort(_('cannot follow file not in parent '
                                        'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise error.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = set([filelog.linkrev(last)])

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

class _followfilter(object):
    def __init__(self, repo, onlyfirst=False):
        self.repo = repo
        self.startrev = nullrev
        self.roots = set()
        self.onlyfirst = onlyfirst

    def match(self, rev):
        def realparents(rev):
            if self.onlyfirst:
                return self.repo.changelog.parentrevs(rev)[0:1]
            else:
                return filter(lambda x: x != nullrev,
                              self.repo.changelog.parentrevs(rev))

        if self.startrev == nullrev:
            self.startrev = rev
            return True

        if rev > self.startrev:
            # forward: all descendants
            if not self.roots:
                self.roots.add(self.startrev)
            for parent in realparents(rev):
                if parent in self.roots:
                    self.roots.add(rev)
                    return True
        else:
            # backwards: all parents
            if not self.roots:
                self.roots.update(realparents(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.update(realparents(rev))
                return True

        return False

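# A standalone sketch (stub objects, not Mercurial's API) of the
# backwards branch of _followfilter.match: track a frontier of "roots"
# and keep a revision only if it is in the frontier, then replace it
# with its parents. The function name and parentrevs callable are
# illustrative, not part of cmdutil.

```python
NULLREV = -1  # same sentinel role as mercurial.node.nullrev

def backwards_ancestors(parentrevs, startrev, revs):
    """Return the subset of revs (scanned in decreasing order) that are
    ancestors of startrev, given parentrevs(rev) -> tuple of parents."""
    roots = set(p for p in parentrevs(startrev) if p != NULLREV)
    kept = []
    for rev in revs:
        if rev in roots:
            roots.discard(rev)
            # replace the matched root by its own parents
            roots.update(p for p in parentrevs(rev) if p != NULLREV)
            kept.append(rev)
    return kept

# linear history 0 <- 1 <- 2, plus a side branch 3 off of 0
parents = {0: (NULLREV,), 1: (0,), 2: (1,), 3: (0,)}
print(backwards_ancestors(parents.get, 2, [3, 1, 0]))  # [1, 0]; rev 3 is skipped
```

This is why `--follow` stays cheap: membership in a small, moving frontier replaces a full ancestry computation per revision.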

def walkchangerevs(repo, match, opts, prepare):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an iterator yielding contexts. Before
    yielding each context, the iterator will first call the prepare
    function on each context in the window in forward order.'''

    follow = opts.get('follow') or opts.get('follow_first')
    revs = _logrevs(repo, opts)
    if not revs:
        return []
    wanted = set()
    slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
                                   opts.get('removed'))
    fncache = {}
    change = repo.changectx

    # First step is to fill wanted, the set of revisions that we want to yield.
    # When it does not induce extra cost, we also fill fncache for revisions in
    # wanted: a cache of filenames that were changed (ctx.files()) and that
    # match the file filtering conditions.

    if match.always():
        # No files, no patterns.  Display all revs.
        wanted = revs
    elif not slowpath:
        # We only have to read through the filelog to find wanted revisions

        try:
            wanted = walkfilerevs(repo, match, follow, revs, fncache)
        except FileWalkError:
            slowpath = True

            # We decided to fall back to the slowpath because at least one
            # of the paths was not a file. Check to see if at least one of them
            # existed in history, otherwise simply return
            for path in match.files():
                if path == '.' or path in repo.store:
                    break
            else:
                return []

    if slowpath:
        # We have to read the changelog to match filenames against
        # changed files

        if follow:
            raise error.Abort(_('can only follow copies/renames for explicit '
                                'filenames'))

        # The slow path checks files modified in every changeset.
        # This is really slow on large repos, so compute the set lazily.
        class lazywantedset(object):
            def __init__(self):
                self.set = set()
                self.revs = set(revs)

            # No need to worry about locality here because it will be accessed
            # in the same order as the increasing window below.
            def __contains__(self, value):
                if value in self.set:
                    return True
                elif not value in self.revs:
                    return False
                else:
                    self.revs.discard(value)
                    ctx = change(value)
                    matches = filter(match, ctx.files())
                    if matches:
                        fncache[value] = matches
                        self.set.add(value)
                        return True
                    return False

            def discard(self, value):
                self.revs.discard(value)
                self.set.discard(value)

        wanted = lazywantedset()

    # it might be worthwhile to do this in the iterator if the rev range
    # is descending and the prune args are all within that range
    for rev in opts.get('prune', ()):
        rev = repo[rev].rev()
        ff = _followfilter(repo)
        stop = min(revs[0], revs[-1])
        for x in xrange(rev, stop - 1, -1):
            if ff.match(x):
                wanted = wanted - [x]

    # Now that wanted is correctly initialized, we can iterate over the
    # revision range, yielding only revisions in wanted.
    def iterate():
        if follow and match.always():
            ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
            def want(rev):
                return ff.match(rev) and rev in wanted
        else:
            def want(rev):
                return rev in wanted

        it = iter(revs)
        stopiteration = False
        for windowsize in increasingwindows():
            nrevs = []
            for i in xrange(windowsize):
                rev = next(it, None)
                if rev is None:
                    stopiteration = True
                    break
                elif want(rev):
                    nrevs.append(rev)
            for rev in sorted(nrevs):
                fns = fncache.get(rev)
                ctx = change(rev)
                if not fns:
                    def fns_generator():
                        for f in ctx.files():
                            if match(f):
                                yield f
                    fns = fns_generator()
                prepare(ctx, fns)
            for rev in nrevs:
                yield change(rev)

            if stopiteration:
                break

    return iterate()

def _makefollowlogfilematcher(repo, files, followfirst):
    # When displaying a revision with --patch --follow FILE, we have
    # to know which file of the revision must be diffed. With
    # --follow, we want the names of the ancestors of FILE in the
    # revision, stored in "fcache". "fcache" is populated by
    # reproducing the graph traversal already done by --follow revset
    # and relating linkrevs to file names (which is not "correct" but
    # good enough).
    fcache = {}
    fcacheready = [False]
    pctx = repo['.']

    def populate():
        for fn in files:
            for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
                for c in i:
                    fcache.setdefault(c.linkrev(), set()).add(c.path())

    def filematcher(rev):
        if not fcacheready[0]:
            # Lazy initialization
            fcacheready[0] = True
            populate()
        return scmutil.matchfiles(repo, fcache.get(rev, []))

    return filematcher

def _makenofollowlogfilematcher(repo, pats, opts):
    '''hook for extensions to override the filematcher for non-follow cases'''
    return None
1922 1922
1923 1923 def _makelogrevset(repo, pats, opts, revs):
1924 1924 """Return (expr, filematcher) where expr is a revset string built
1925 1925 from log options and file patterns or None. If --stat or --patch
1926 1926 are not passed filematcher is None. Otherwise it is a callable
1927 1927 taking a revision number and returning a match objects filtering
1928 1928 the files to be detailed when displaying the revision.
1929 1929 """
1930 1930 opt2revset = {
1931 1931 'no_merges': ('not merge()', None),
1932 1932 'only_merges': ('merge()', None),
1933 1933 '_ancestors': ('ancestors(%(val)s)', None),
1934 1934 '_fancestors': ('_firstancestors(%(val)s)', None),
1935 1935 '_descendants': ('descendants(%(val)s)', None),
1936 1936 '_fdescendants': ('_firstdescendants(%(val)s)', None),
1937 1937 '_matchfiles': ('_matchfiles(%(val)s)', None),
1938 1938 'date': ('date(%(val)r)', None),
1939 1939 'branch': ('branch(%(val)r)', ' or '),
1940 1940 '_patslog': ('filelog(%(val)r)', ' or '),
1941 1941 '_patsfollow': ('follow(%(val)r)', ' or '),
1942 1942 '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
1943 1943 'keyword': ('keyword(%(val)r)', ' or '),
1944 1944 'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
1945 1945 'user': ('user(%(val)r)', ' or '),
1946 1946 }
1947 1947
1948 1948 opts = dict(opts)
1949 1949 # follow or not follow?
1950 1950 follow = opts.get('follow') or opts.get('follow_first')
1951 1951 if opts.get('follow_first'):
1952 1952 followfirst = 1
1953 1953 else:
1954 1954 followfirst = 0
1955 1955 # --follow with FILE behavior depends on revs...
1956 1956 it = iter(revs)
1957 1957 startrev = it.next()
1958 1958 followdescendants = startrev < next(it, startrev)
1959 1959
1960 1960 # branch and only_branch are really aliases and must be handled at
1961 1961 # the same time
1962 1962 opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
1963 1963 opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
1964 1964 # pats/include/exclude are passed to match.match() directly in
1965 1965 # _matchfiles() revset but walkchangerevs() builds its matcher with
1966 1966 # scmutil.match(). The difference is input pats are globbed on
1967 1967 # platforms without shell expansion (windows).
1968 1968 wctx = repo[None]
1969 1969 match, pats = scmutil.matchandpats(wctx, pats, opts)
1970 1970 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1971 1971 opts.get('removed'))
1972 1972 if not slowpath:
1973 1973 for f in match.files():
1974 1974 if follow and f not in wctx:
1975 1975 # If the file exists, it may be a directory, so let it
1976 1976 # take the slow path.
1977 1977 if os.path.exists(repo.wjoin(f)):
1978 1978 slowpath = True
1979 1979 continue
1980 1980 else:
1981 1981 raise error.Abort(_('cannot follow file not in parent '
1982 1982 'revision: "%s"') % f)
1983 1983 filelog = repo.file(f)
1984 1984 if not filelog:
1985 1985 # A zero count may be a directory or deleted file, so
1986 1986 # try to find matching entries on the slow path.
1987 1987 if follow:
1988 1988 raise error.Abort(
1989 1989 _('cannot follow nonexistent file: "%s"') % f)
1990 1990 slowpath = True
1991 1991
1992 1992 # We decided to fall back to the slowpath because at least one
1993 1993 # of the paths was not a file. Check to see if at least one of them
1994 1994 # existed in history - in that case, we'll continue down the
1995 1995 # slowpath; otherwise, we can turn off the slowpath
1996 1996 if slowpath:
1997 1997 for path in match.files():
1998 1998 if path == '.' or path in repo.store:
1999 1999 break
2000 2000 else:
2001 2001 slowpath = False
2002 2002
2003 2003 fpats = ('_patsfollow', '_patsfollowfirst')
2004 2004 fnopats = (('_ancestors', '_fancestors'),
2005 2005 ('_descendants', '_fdescendants'))
2006 2006 if slowpath:
2007 2007 # See walkchangerevs() slow path.
2008 2008 #
2009 2009 # pats/include/exclude cannot be represented as separate
2010 2010 # revset expressions as their filtering logic applies at file
2011 2011 # level. For instance "-I a -X a" matches a revision touching
2012 2012 # "a" and "b" while "file(a) and not file(b)" does
2013 2013 # not. Besides, filesets are evaluated against the working
2014 2014 # directory.
2015 2015 matchargs = ['r:', 'd:relpath']
2016 2016 for p in pats:
2017 2017 matchargs.append('p:' + p)
2018 2018 for p in opts.get('include', []):
2019 2019 matchargs.append('i:' + p)
2020 2020 for p in opts.get('exclude', []):
2021 2021 matchargs.append('x:' + p)
2022 2022 matchargs = ','.join(('%r' % p) for p in matchargs)
2023 2023 opts['_matchfiles'] = matchargs
2024 2024 if follow:
2025 2025 opts[fnopats[0][followfirst]] = '.'
2026 2026 else:
2027 2027 if follow:
2028 2028 if pats:
2029 2029 # follow() revset interprets its file argument as a
2030 2030 # manifest entry, so use match.files(), not pats.
2031 2031 opts[fpats[followfirst]] = list(match.files())
2032 2032 else:
2033 2033 op = fnopats[followdescendants][followfirst]
2034 2034 opts[op] = 'rev(%d)' % startrev
2035 2035 else:
2036 2036 opts['_patslog'] = list(pats)
2037 2037
2038 2038 filematcher = None
2039 2039 if opts.get('patch') or opts.get('stat'):
2040 2040 # When following files, track renames via a special matcher.
2041 2041 # If we're forced to take the slowpath it means we're following
2042 2042 # at least one pattern/directory, so don't bother with rename tracking.
2043 2043 if follow and not match.always() and not slowpath:
2044 2044 # _makefollowlogfilematcher expects its files argument to be
2045 2045 # relative to the repo root, so use match.files(), not pats.
2046 2046 filematcher = _makefollowlogfilematcher(repo, match.files(),
2047 2047 followfirst)
2048 2048 else:
2049 2049 filematcher = _makenofollowlogfilematcher(repo, pats, opts)
2050 2050 if filematcher is None:
2051 2051 filematcher = lambda rev: match
2052 2052
2053 2053 expr = []
2054 2054 for op, val in sorted(opts.iteritems()):
2055 2055 if not val:
2056 2056 continue
2057 2057 if op not in opt2revset:
2058 2058 continue
2059 2059 revop, andor = opt2revset[op]
2060 2060 if '%(val)' not in revop:
2061 2061 expr.append(revop)
2062 2062 else:
2063 2063 if not isinstance(val, list):
2064 2064 e = revop % {'val': val}
2065 2065 else:
2066 2066 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
2067 2067 expr.append(e)
2068 2068
2069 2069 if expr:
2070 2070 expr = '(' + ' and '.join(expr) + ')'
2071 2071 else:
2072 2072 expr = None
2073 2073 return expr, filematcher
2074 2074
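The final expression assembly in _makelogrevset joins each option's revset fragment with ' and ', using the per-option join operator for list-valued options. A self-contained sketch of just that joining step (the opt2revset table here is a reduced stand-in for the full mapping, not the real one):

```python
# Reduced stand-in for the opt2revset translation table: each CLI option
# maps to a revset template plus a join operator for list-valued options.
opt2revset = {
    'keyword': ('keyword(%(val)r)', ' or '),
    'user': ('user(%(val)r)', ' or '),
    'no_merges': ('not merge()', None),
}

def buildrevset(opts):
    """Join per-option revset fragments with ' and ', mirroring the
    expression assembly at the end of _makelogrevset."""
    expr = []
    for op, val in sorted(opts.items()):
        if not val or op not in opt2revset:
            continue
        revop, andor = opt2revset[op]
        if '%(val)' not in revop:
            # Fixed fragment, e.g. 'not merge()' for --no-merges.
            expr.append(revop)
        elif isinstance(val, list):
            # List-valued option: join each value with the option's
            # operator, e.g. user(...) or user(...).
            expr.append('(' + andor.join(revop % {'val': v}
                                         for v in val) + ')')
        else:
            expr.append(revop % {'val': val})
    return '(' + ' and '.join(expr) + ')' if expr else None

# e.g. buildrevset({'user': ['alice', 'bob'], 'no_merges': True})
```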
2075 2075 def _logrevs(repo, opts):
2076 2076 # Default --rev value depends on --follow but --follow behavior
2077 2077 # depends on revisions resolved from --rev...
2078 2078 follow = opts.get('follow') or opts.get('follow_first')
2079 2079 if opts.get('rev'):
2080 2080 revs = scmutil.revrange(repo, opts['rev'])
2081 2081 elif follow and repo.dirstate.p1() == nullid:
2082 2082 revs = revset.baseset()
2083 2083 elif follow:
2084 2084 revs = repo.revs('reverse(:.)')
2085 2085 else:
2086 2086 revs = revset.spanset(repo)
2087 2087 revs.reverse()
2088 2088 return revs
2089 2089
2090 2090 def getgraphlogrevs(repo, pats, opts):
2091 2091 """Return (revs, expr, filematcher) where revs is an iterable of
2092 2092 revision numbers, expr is a revset string built from log options
2093 2093 and file patterns or None, and used to filter 'revs'. If --stat or
2094 2094 --patch are not passed filematcher is None. Otherwise it is a
2095 2095 callable taking a revision number and returning a match object
2096 2096 filtering the files to be detailed when displaying the revision.
2097 2097 """
2098 2098 limit = loglimit(opts)
2099 2099 revs = _logrevs(repo, opts)
2100 2100 if not revs:
2101 2101 return revset.baseset(), None, None
2102 2102 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2103 2103 if opts.get('rev'):
2104 2104 # User-specified revs might be unsorted, but don't sort before
2105 2105 # _makelogrevset because it might depend on the order of revs
2106 2106 revs.sort(reverse=True)
2107 2107 if expr:
2108 2108 # Revset matchers often operate faster on revisions in changelog
2109 2109 # order, because most filters deal with the changelog.
2110 2110 revs.reverse()
2111 2111 matcher = revset.match(repo.ui, expr)
2112 2112 # Revset matches can reorder revisions. "A or B" typically
2113 2113 # returns the revision matching A then the revision matching B. Sort
2114 2114 # again to fix that.
2115 2115 revs = matcher(repo, revs)
2116 2116 revs.sort(reverse=True)
2117 2117 if limit is not None:
2118 2118 limitedrevs = []
2119 2119 for idx, rev in enumerate(revs):
2120 2120 if idx >= limit:
2121 2121 break
2122 2122 limitedrevs.append(rev)
2123 2123 revs = revset.baseset(limitedrevs)
2124 2124
2125 2125 return revs, expr, filematcher
2126 2126
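Both getgraphlogrevs and getlogrevs apply --limit with a manual enumerate/break loop; that loop is equivalent to itertools.islice. A minimal sketch of the same truncation (limitrevs is an illustrative name, not a Mercurial API):

```python
from itertools import islice

def limitrevs(revs, limit):
    """Take at most `limit` revisions in order, matching the
    enumerate/break loop used by getgraphlogrevs/getlogrevs.
    limit=None means no limit."""
    if limit is None:
        return list(revs)
    return list(islice(revs, limit))
```

islice stops consuming the underlying iterable as soon as the limit is reached, which matters when `revs` is a lazily evaluated revset.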
2127 2127 def getlogrevs(repo, pats, opts):
2128 2128 """Return (revs, expr, filematcher) where revs is an iterable of
2129 2129 revision numbers, expr is a revset string built from log options
2130 2130 and file patterns or None, and used to filter 'revs'. If --stat or
2131 2131 --patch are not passed filematcher is None. Otherwise it is a
2132 2132 callable taking a revision number and returning a match object
2133 2133 filtering the files to be detailed when displaying the revision.
2134 2134 """
2135 2135 limit = loglimit(opts)
2136 2136 revs = _logrevs(repo, opts)
2137 2137 if not revs:
2138 2138 return revset.baseset([]), None, None
2139 2139 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2140 2140 if expr:
2141 2141 # Revset matchers often operate faster on revisions in changelog
2142 2142 # order, because most filters deal with the changelog.
2143 2143 if not opts.get('rev'):
2144 2144 revs.reverse()
2145 2145 matcher = revset.match(repo.ui, expr)
2146 2146 # Revset matches can reorder revisions. "A or B" typically
2147 2147 # returns the revision matching A then the revision matching B. Sort
2148 2148 # again to fix that.
2149 2149 revs = matcher(repo, revs)
2150 2150 if not opts.get('rev'):
2151 2151 revs.sort(reverse=True)
2152 2152 if limit is not None:
2153 2153 limitedrevs = []
2154 2154 for idx, r in enumerate(revs):
2155 2155 if limit <= idx:
2156 2156 break
2157 2157 limitedrevs.append(r)
2158 2158 revs = revset.baseset(limitedrevs)
2159 2159
2160 2160 return revs, expr, filematcher
2161 2161
2162 2162 def _graphnodeformatter(ui, displayer):
2163 2163 spec = ui.config('ui', 'graphnodetemplate')
2164 2164 if not spec:
2165 2165 return templatekw.showgraphnode # fast path for "{graphnode}"
2166 2166
2167 2167 templ = formatter.gettemplater(ui, 'graphnode', spec)
2168 2168 cache = {}
2169 2169 if isinstance(displayer, changeset_templater):
2170 2170 cache = displayer.cache # reuse cache of slow templates
2171 2171 props = templatekw.keywords.copy()
2172 2172 props['templ'] = templ
2173 2173 props['cache'] = cache
2174 2174 def formatnode(repo, ctx):
2175 2175 props['ctx'] = ctx
2176 2176 props['repo'] = repo
2177 2177 props['revcache'] = {}
2178 2178 return templater.stringify(templ('graphnode', **props))
2179 2179 return formatnode
2180 2180
2181 2181 def displaygraph(ui, repo, dag, displayer, edgefn, getrenamed=None,
2182 2182 filematcher=None):
2183 2183 formatnode = _graphnodeformatter(ui, displayer)
2184 2184 seen, state = [], graphmod.asciistate()
2185 2185 for rev, type, ctx, parents in dag:
2186 2186 char = formatnode(repo, ctx)
2187 2187 copies = None
2188 2188 if getrenamed and ctx.rev():
2189 2189 copies = []
2190 2190 for fn in ctx.files():
2191 2191 rename = getrenamed(fn, ctx.rev())
2192 2192 if rename:
2193 2193 copies.append((fn, rename[0]))
2194 2194 revmatchfn = None
2195 2195 if filematcher is not None:
2196 2196 revmatchfn = filematcher(ctx.rev())
2197 2197 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
2198 2198 lines = displayer.hunk.pop(rev).split('\n')
2199 2199 if not lines[-1]:
2200 2200 del lines[-1]
2201 2201 displayer.flush(ctx)
2202 2202 edges = edgefn(type, char, lines, seen, rev, parents)
2203 2203 for type, char, lines, coldata in edges:
2204 2204 graphmod.ascii(ui, state, type, char, lines, coldata)
2205 2205 displayer.close()
2206 2206
2207 2207 def graphlog(ui, repo, *pats, **opts):
2208 2208 # Parameters are identical to log command ones
2209 2209 revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
2210 2210 revdag = graphmod.dagwalker(repo, revs)
2211 2211
2212 2212 getrenamed = None
2213 2213 if opts.get('copies'):
2214 2214 endrev = None
2215 2215 if opts.get('rev'):
2216 2216 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
2217 2217 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
2218 2218 displayer = show_changeset(ui, repo, opts, buffered=True)
2219 2219 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed,
2220 2220 filematcher)
2221 2221
2222 2222 def checkunsupportedgraphflags(pats, opts):
2223 2223 for op in ["newest_first"]:
2224 2224 if op in opts and opts[op]:
2225 2225 raise error.Abort(_("-G/--graph option is incompatible with --%s")
2226 2226 % op.replace("_", "-"))
2227 2227
2228 2228 def graphrevs(repo, nodes, opts):
2229 2229 limit = loglimit(opts)
2230 2230 nodes.reverse()
2231 2231 if limit is not None:
2232 2232 nodes = nodes[:limit]
2233 2233 return graphmod.nodes(repo, nodes)
2234 2234
2235 2235 def add(ui, repo, match, prefix, explicitonly, **opts):
2236 2236 join = lambda f: os.path.join(prefix, f)
2237 2237 bad = []
2238 2238
2239 2239 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2240 2240 names = []
2241 2241 wctx = repo[None]
2242 2242 cca = None
2243 2243 abort, warn = scmutil.checkportabilityalert(ui)
2244 2244 if abort or warn:
2245 2245 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2246 2246
2247 2247 badmatch = matchmod.badmatch(match, badfn)
2248 2248 dirstate = repo.dirstate
2249 2249 # We don't want to just call wctx.walk here, since it would return a lot of
2250 2250 # clean files, which we aren't interested in and takes time.
2251 2251 for f in sorted(dirstate.walk(badmatch, sorted(wctx.substate),
2252 2252 True, False, full=False)):
2253 2253 exact = match.exact(f)
2254 2254 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2255 2255 if cca:
2256 2256 cca(f)
2257 2257 names.append(f)
2258 2258 if ui.verbose or not exact:
2259 2259 ui.status(_('adding %s\n') % match.rel(f))
2260 2260
2261 2261 for subpath in sorted(wctx.substate):
2262 2262 sub = wctx.sub(subpath)
2263 2263 try:
2264 2264 submatch = matchmod.narrowmatcher(subpath, match)
2265 2265 if opts.get('subrepos'):
2266 2266 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2267 2267 else:
2268 2268 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2269 2269 except error.LookupError:
2270 2270 ui.status(_("skipping missing subrepository: %s\n")
2271 2271 % join(subpath))
2272 2272
2273 2273 if not opts.get('dry_run'):
2274 2274 rejected = wctx.add(names, prefix)
2275 2275 bad.extend(f for f in rejected if f in match.files())
2276 2276 return bad
2277 2277
2278 2278 def forget(ui, repo, match, prefix, explicitonly):
2279 2279 join = lambda f: os.path.join(prefix, f)
2280 2280 bad = []
2281 2281 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2282 2282 wctx = repo[None]
2283 2283 forgot = []
2284 2284
2285 2285 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2286 2286 forget = sorted(s[0] + s[1] + s[3] + s[6])
2287 2287 if explicitonly:
2288 2288 forget = [f for f in forget if match.exact(f)]
2289 2289
2290 2290 for subpath in sorted(wctx.substate):
2291 2291 sub = wctx.sub(subpath)
2292 2292 try:
2293 2293 submatch = matchmod.narrowmatcher(subpath, match)
2294 2294 subbad, subforgot = sub.forget(submatch, prefix)
2295 2295 bad.extend([subpath + '/' + f for f in subbad])
2296 2296 forgot.extend([subpath + '/' + f for f in subforgot])
2297 2297 except error.LookupError:
2298 2298 ui.status(_("skipping missing subrepository: %s\n")
2299 2299 % join(subpath))
2300 2300
2301 2301 if not explicitonly:
2302 2302 for f in match.files():
2303 2303 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2304 2304 if f not in forgot:
2305 2305 if repo.wvfs.exists(f):
2306 2306 # Don't complain if the exact case match wasn't given.
2307 2307 # But don't do this until after checking 'forgot', so
2308 2308 # that subrepo files aren't normalized, and this op is
2309 2309 # purely from data cached by the status walk above.
2310 2310 if repo.dirstate.normalize(f) in repo.dirstate:
2311 2311 continue
2312 2312 ui.warn(_('not removing %s: '
2313 2313 'file is already untracked\n')
2314 2314 % match.rel(f))
2315 2315 bad.append(f)
2316 2316
2317 2317 for f in forget:
2318 2318 if ui.verbose or not match.exact(f):
2319 2319 ui.status(_('removing %s\n') % match.rel(f))
2320 2320
2321 2321 rejected = wctx.forget(forget, prefix)
2322 2322 bad.extend(f for f in rejected if f in match.files())
2323 2323 forgot.extend(f for f in forget if f not in rejected)
2324 2324 return bad, forgot
2325 2325
2326 2326 def files(ui, ctx, m, fm, fmt, subrepos):
2327 2327 rev = ctx.rev()
2328 2328 ret = 1
2329 2329 ds = ctx.repo().dirstate
2330 2330
2331 2331 for f in ctx.matches(m):
2332 2332 if rev is None and ds[f] == 'r':
2333 2333 continue
2334 2334 fm.startitem()
2335 2335 if ui.verbose:
2336 2336 fc = ctx[f]
2337 2337 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2338 2338 fm.data(abspath=f)
2339 2339 fm.write('path', fmt, m.rel(f))
2340 2340 ret = 0
2341 2341
2342 2342 for subpath in sorted(ctx.substate):
2343 2343 def matchessubrepo(subpath):
2344 2344 return (m.always() or m.exact(subpath)
2345 2345 or any(f.startswith(subpath + '/') for f in m.files()))
2346 2346
2347 2347 if subrepos or matchessubrepo(subpath):
2348 2348 sub = ctx.sub(subpath)
2349 2349 try:
2350 2350 submatch = matchmod.narrowmatcher(subpath, m)
2351 2351 if sub.printfiles(ui, submatch, fm, fmt, subrepos) == 0:
2352 2352 ret = 0
2353 2353 except error.LookupError:
2354 2354 ui.status(_("skipping missing subrepository: %s\n")
2355 2355 % m.abs(subpath))
2356 2356
2357 2357 return ret
2358 2358
2359 2359 def remove(ui, repo, m, prefix, after, force, subrepos):
2360 2360 join = lambda f: os.path.join(prefix, f)
2361 2361 ret = 0
2362 2362 s = repo.status(match=m, clean=True)
2363 2363 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2364 2364
2365 2365 wctx = repo[None]
2366 2366
2367 2367 for subpath in sorted(wctx.substate):
2368 2368 def matchessubrepo(matcher, subpath):
2369 2369 if matcher.exact(subpath):
2370 2370 return True
2371 2371 for f in matcher.files():
2372 2372 if f.startswith(subpath):
2373 2373 return True
2374 2374 return False
2375 2375
2376 2376 if subrepos or matchessubrepo(m, subpath):
2377 2377 sub = wctx.sub(subpath)
2378 2378 try:
2379 2379 submatch = matchmod.narrowmatcher(subpath, m)
2380 2380 if sub.removefiles(submatch, prefix, after, force, subrepos):
2381 2381 ret = 1
2382 2382 except error.LookupError:
2383 2383 ui.status(_("skipping missing subrepository: %s\n")
2384 2384 % join(subpath))
2385 2385
2386 2386 # warn about failure to delete explicit files/dirs
2387 2387 deleteddirs = util.dirs(deleted)
2388 2388 for f in m.files():
2389 2389 def insubrepo():
2390 2390 for subpath in wctx.substate:
2391 2391 if f.startswith(subpath):
2392 2392 return True
2393 2393 return False
2394 2394
2395 2395 isdir = f in deleteddirs or wctx.hasdir(f)
2396 2396 if f in repo.dirstate or isdir or f == '.' or insubrepo():
2397 2397 continue
2398 2398
2399 2399 if repo.wvfs.exists(f):
2400 2400 if repo.wvfs.isdir(f):
2401 2401 ui.warn(_('not removing %s: no tracked files\n')
2402 2402 % m.rel(f))
2403 2403 else:
2404 2404 ui.warn(_('not removing %s: file is untracked\n')
2405 2405 % m.rel(f))
2406 2406 # missing files will generate a warning elsewhere
2407 2407 ret = 1
2408 2408
2409 2409 if force:
2410 2410 list = modified + deleted + clean + added
2411 2411 elif after:
2412 2412 list = deleted
2413 2413 for f in modified + added + clean:
2414 2414 ui.warn(_('not removing %s: file still exists\n') % m.rel(f))
2415 2415 ret = 1
2416 2416 else:
2417 2417 list = deleted + clean
2418 2418 for f in modified:
2419 2419 ui.warn(_('not removing %s: file is modified (use -f'
2420 2420 ' to force removal)\n') % m.rel(f))
2421 2421 ret = 1
2422 2422 for f in added:
2423 2423 ui.warn(_('not removing %s: file has been marked for add'
2424 2424 ' (use forget to undo)\n') % m.rel(f))
2425 2425 ret = 1
2426 2426
2427 2427 for f in sorted(list):
2428 2428 if ui.verbose or not m.exact(f):
2429 2429 ui.status(_('removing %s\n') % m.rel(f))
2430 2430
2431 2431 wlock = repo.wlock()
2432 2432 try:
2433 2433 if not after:
2434 2434 for f in list:
2435 2435 if f in added:
2436 2436 continue # we never unlink added files on remove
2437 2437 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
2438 2438 repo[None].forget(list)
2439 2439 finally:
2440 2440 wlock.release()
2441 2441
2442 2442 return ret
2443 2443
2444 2444 def cat(ui, repo, ctx, matcher, prefix, **opts):
2445 2445 err = 1
2446 2446
2447 2447 def write(path):
2448 2448 fp = makefileobj(repo, opts.get('output'), ctx.node(),
2449 2449 pathname=os.path.join(prefix, path))
2450 2450 data = ctx[path].data()
2451 2451 if opts.get('decode'):
2452 2452 data = repo.wwritedata(path, data)
2453 2453 fp.write(data)
2454 2454 fp.close()
2455 2455
2456 2456 # Automation often uses hg cat on single files, so special case it
2457 2457 # for performance to avoid the cost of parsing the manifest.
2458 2458 if len(matcher.files()) == 1 and not matcher.anypats():
2459 2459 file = matcher.files()[0]
2460 2460 mf = repo.manifest
2461 2461 mfnode = ctx.manifestnode()
2462 2462 if mfnode and mf.find(mfnode, file)[0]:
2463 2463 write(file)
2464 2464 return 0
2465 2465
2466 2466 # Don't warn about "missing" files that are really in subrepos
2467 2467 def badfn(path, msg):
2468 2468 for subpath in ctx.substate:
2469 2469 if path.startswith(subpath):
2470 2470 return
2471 2471 matcher.bad(path, msg)
2472 2472
2473 2473 for abs in ctx.walk(matchmod.badmatch(matcher, badfn)):
2474 2474 write(abs)
2475 2475 err = 0
2476 2476
2477 2477 for subpath in sorted(ctx.substate):
2478 2478 sub = ctx.sub(subpath)
2479 2479 try:
2480 2480 submatch = matchmod.narrowmatcher(subpath, matcher)
2481 2481
2482 2482 if not sub.cat(submatch, os.path.join(prefix, sub._path),
2483 2483 **opts):
2484 2484 err = 0
2485 2485 except error.RepoLookupError:
2486 2486 ui.status(_("skipping missing subrepository: %s\n")
2487 2487 % os.path.join(prefix, subpath))
2488 2488
2489 2489 return err
2490 2490
2491 2491 def commit(ui, repo, commitfunc, pats, opts):
2492 2492 '''commit the specified files or all outstanding changes'''
2493 2493 date = opts.get('date')
2494 2494 if date:
2495 2495 opts['date'] = util.parsedate(date)
2496 2496 message = logmessage(ui, opts)
2497 2497 matcher = scmutil.match(repo[None], pats, opts)
2498 2498
2499 2499 # extract addremove carefully -- this function can be called from a command
2500 2500 # that doesn't support addremove
2501 2501 if opts.get('addremove'):
2502 2502 if scmutil.addremove(repo, matcher, "", opts) != 0:
2503 2503 raise error.Abort(
2504 2504 _("failed to mark all new/missing files as added/removed"))
2505 2505
2506 2506 return commitfunc(ui, repo, message, matcher, opts)
2507 2507
2508 2508 def amend(ui, repo, commitfunc, old, extra, pats, opts):
2509 2509 # avoid cycle context -> subrepo -> cmdutil
2510 2510 import context
2511 2511
2512 2512 # amend will reuse the existing user if not specified, but the obsolete
2513 2513 # marker creation requires that the current user's name is specified.
2514 2514 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2515 2515 ui.username() # raise exception if username not set
2516 2516
2517 2517 ui.note(_('amending changeset %s\n') % old)
2518 2518 base = old.p1()
2519 2519 createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
2520 2520
2521 2521 wlock = lock = newid = None
2522 2522 try:
2523 2523 wlock = repo.wlock()
2524 2524 lock = repo.lock()
2525 2525 tr = repo.transaction('amend')
2526 2526 try:
2527 2527 # See if we got a message from -m or -l, if not, open the editor
2528 2528 # with the message of the changeset to amend
2529 2529 message = logmessage(ui, opts)
2530 2530 # ensure logfile does not conflict with later enforcement of the
2531 2531 # message. potential logfile content has been processed by
2532 2532 # `logmessage` anyway.
2533 2533 opts.pop('logfile')
2534 2534 # First, do a regular commit to record all changes in the working
2535 2535 # directory (if there are any)
2536 2536 ui.callhooks = False
2537 2537 activebookmark = repo._activebookmark
2538 2538 try:
2539 2539 repo._activebookmark = None
2540 2540 opts['message'] = 'temporary amend commit for %s' % old
2541 2541 node = commit(ui, repo, commitfunc, pats, opts)
2542 2542 finally:
2543 2543 repo._activebookmark = activebookmark
2544 2544 ui.callhooks = True
2545 2545 ctx = repo[node]
2546 2546
2547 2547 # Participating changesets:
2548 2548 #
2549 2549 # node/ctx o - new (intermediate) commit that contains changes
2550 2550 # | from working dir to go into amending commit
2551 2551 # | (or a workingctx if there were no changes)
2552 2552 # |
2553 2553 # old o - changeset to amend
2554 2554 # |
2555 2555 # base o - parent of amending changeset
2556 2556
2557 2557 # Update extra dict from amended commit (e.g. to preserve graft
2558 2558 # source)
2559 2559 extra.update(old.extra())
2560 2560
2561 2561 # Also update it from the intermediate commit or from the wctx
2562 2562 extra.update(ctx.extra())
2563 2563
2564 2564 if len(old.parents()) > 1:
2565 2565 # ctx.files() isn't reliable for merges, so fall back to the
2566 2566 # slower repo.status() method
2567 2567 files = set([fn for st in repo.status(base, old)[:3]
2568 2568 for fn in st])
2569 2569 else:
2570 2570 files = set(old.files())
2571 2571
2572 2572 # Second, we use either the commit we just did, or if there were no
2573 2573 # changes the parent of the working directory as the version of the
2574 2574 # files in the final amend commit
2575 2575 if node:
2576 2576 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
2577 2577
2578 2578 user = ctx.user()
2579 2579 date = ctx.date()
2580 2580 # Recompute copies (avoid recording a -> b -> a)
2581 2581 copied = copies.pathcopies(base, ctx)
2582 2582 if old.p2():
2583 2583 copied.update(copies.pathcopies(old.p2(), ctx))
2584 2584
2585 2585 # Prune files which were reverted by the updates: if old
2586 2586 # introduced file X and our intermediate commit, node,
2587 2587 # renamed that file, then those two files are the same and
2588 2588 # we can discard X from our list of files. Likewise if X
2589 2589 # was deleted, it's no longer relevant
2590 2590 files.update(ctx.files())
2591 2591
2592 2592 def samefile(f):
2593 2593 if f in ctx.manifest():
2594 2594 a = ctx.filectx(f)
2595 2595 if f in base.manifest():
2596 2596 b = base.filectx(f)
2597 2597 return (not a.cmp(b)
2598 2598 and a.flags() == b.flags())
2599 2599 else:
2600 2600 return False
2601 2601 else:
2602 2602 return f not in base.manifest()
2603 2603 files = [f for f in files if not samefile(f)]
2604 2604
2605 2605 def filectxfn(repo, ctx_, path):
2606 2606 try:
2607 2607 fctx = ctx[path]
2608 2608 flags = fctx.flags()
2609 2609 mctx = context.memfilectx(repo,
2610 2610 fctx.path(), fctx.data(),
2611 2611 islink='l' in flags,
2612 2612 isexec='x' in flags,
2613 2613 copied=copied.get(path))
2614 2614 return mctx
2615 2615 except KeyError:
2616 2616 return None
2617 2617 else:
2618 2618 ui.note(_('copying changeset %s to %s\n') % (old, base))
2619 2619
2620 2620 # Use version of files as in the old cset
2621 2621 def filectxfn(repo, ctx_, path):
2622 2622 try:
2623 2623 return old.filectx(path)
2624 2624 except KeyError:
2625 2625 return None
2626 2626
2627 2627 user = opts.get('user') or old.user()
2628 2628 date = opts.get('date') or old.date()
2629 2629 editform = mergeeditform(old, 'commit.amend')
2630 2630 editor = getcommiteditor(editform=editform, **opts)
2631 2631 if not message:
2632 2632 editor = getcommiteditor(edit=True, editform=editform)
2633 2633 message = old.description()
2634 2634
2635 2635 pureextra = extra.copy()
2636 2636 if 'amend_source' in pureextra:
2637 2637 del pureextra['amend_source']
2638 2638 pureoldextra = old.extra()
2639 2639 if 'amend_source' in pureoldextra:
2640 2640 del pureoldextra['amend_source']
2641 2641 extra['amend_source'] = old.hex()
2642 2642
2643 2643 new = context.memctx(repo,
2644 2644 parents=[base.node(), old.p2().node()],
2645 2645 text=message,
2646 2646 files=files,
2647 2647 filectxfn=filectxfn,
2648 2648 user=user,
2649 2649 date=date,
2650 2650 extra=extra,
2651 2651 editor=editor)
2652 2652
2653 2653 newdesc = changelog.stripdesc(new.description())
2654 2654 if ((not node)
2655 2655 and newdesc == old.description()
2656 2656 and user == old.user()
2657 2657 and date == old.date()
2658 2658 and pureextra == pureoldextra):
2659 2659 # nothing changed. continuing here would create a new node
2660 2660 # anyway because of the amend_source noise.
2661 2661 #
2662 2662 # This is not what we expect from amend.
2663 2663 return old.node()
2664 2664
2665 2665 ph = repo.ui.config('phases', 'new-commit', phases.draft)
2666 2666 try:
2667 2667 if opts.get('secret'):
2668 2668 commitphase = 'secret'
2669 2669 else:
2670 2670 commitphase = old.phase()
2671 2671 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
2672 2672 newid = repo.commitctx(new)
2673 2673 finally:
2674 2674 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
2675 2675 if newid != old.node():
2676 2676 # Reroute the working copy parent to the new changeset
2677 2677 repo.setparents(newid, nullid)
2678 2678
2679 2679 # Move bookmarks from old parent to amend commit
2680 2680 bms = repo.nodebookmarks(old.node())
2681 2681 if bms:
2682 2682 marks = repo._bookmarks
2683 2683 for bm in bms:
2684 2684 ui.debug('moving bookmark %r from %s to %s\n' %
2685 2685 (bm, old.hex(), hex(newid)))
2686 2686 marks[bm] = newid
2687 2687 marks.recordchange(tr)
2688 2688 # commit the whole amend process
2689 2689 if createmarkers:
2690 2690 # mark the new changeset as successor of the rewritten one
2691 2691 new = repo[newid]
2692 2692 obs = [(old, (new,))]
2693 2693 if node:
2694 2694 obs.append((ctx, ()))
2695 2695
2696 2696 obsolete.createmarkers(repo, obs)
2697 2697 tr.close()
2698 2698 finally:
2699 2699 tr.release()
2700 2700 if not createmarkers and newid != old.node():
2701 2701 # Strip the intermediate commit (if there was one) and the amended
2702 2702 # commit
2703 2703 if node:
2704 2704 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2705 2705 ui.note(_('stripping amended changeset %s\n') % old)
2706 2706 repair.strip(ui, repo, old.node(), topic='amend-backup')
2707 2707 finally:
2708 2708 lockmod.release(lock, wlock)
2709 2709 return newid
2710 2710
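The samefile() filter inside amend prunes files that are byte- and flag-identical between the intermediate commit and the base, or absent from both. The same filter over plain dicts, with manifests modeled as {path: (data, flags)} purely for illustration (prunesamefiles is a hypothetical helper, not Mercurial API):

```python
def prunesamefiles(files, ctxmanifest, basemanifest):
    """Drop entries identical in both manifests, or absent from both,
    mirroring amend's samefile() filter. Manifests are modeled as
    {path: (data, flags)} dicts in this sketch."""
    def samefile(f):
        if f in ctxmanifest:
            # Present in ctx: same only if data and flags match base.
            return ctxmanifest[f] == basemanifest.get(f)
        # Absent from ctx: same only if also absent from base.
        return f not in basemanifest
    return [f for f in files if not samefile(f)]
```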
2711 2711 def commiteditor(repo, ctx, subs, editform=''):
2712 2712 if ctx.description():
2713 2713 return ctx.description()
2714 2714 return commitforceeditor(repo, ctx, subs, editform=editform,
2715 2715 unchangedmessagedetection=True)
2716 2716
2717 2717 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2718 2718 editform='', unchangedmessagedetection=False):
2719 2719 if not extramsg:
2720 2720 extramsg = _("Leave message empty to abort commit.")
2721 2721
2722 2722 forms = [e for e in editform.split('.') if e]
2723 2723 forms.insert(0, 'changeset')
2724 2724 templatetext = None
2725 2725 while forms:
2726 2726 tmpl = repo.ui.config('committemplate', '.'.join(forms))
2727 2727 if tmpl:
2728 2728 templatetext = committext = buildcommittemplate(
2729 2729 repo, ctx, subs, extramsg, tmpl)
2730 2730 break
2731 2731 forms.pop()
2732 2732 else:
2733 2733 committext = buildcommittext(repo, ctx, subs, extramsg)
2734 2734
2735 2735 # run editor in the repository root
2736 2736 olddir = os.getcwd()
2737 2737 os.chdir(repo.root)
2738 2738
2739 2739 # make in-memory changes visible to external process
2740 2740 tr = repo.currenttransaction()
2741 2741 repo.dirstate.write(tr)
2742 2742 pending = tr and tr.writepending() and repo.root
2743 2743
2744 2744 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2745 2745 editform=editform, pending=pending)
2746 2746 text = re.sub("(?m)^HG:.*(\n|$)", "", editortext)
2747 2747 os.chdir(olddir)
2748 2748
2749 2749 if finishdesc:
2750 2750 text = finishdesc(text)
2751 2751 if not text.strip():
2752 2752 raise error.Abort(_("empty commit message"))
2753 2753 if unchangedmessagedetection and editortext == templatetext:
2754 2754 raise error.Abort(_("commit message unchanged"))
2755 2755
2756 2756 return text
2757 2757
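After the editor returns, commitforceeditor strips every helper line starting with "HG:" using a multiline regex. A standalone check of that exact pattern:

```python
import re

def striphgcomments(text):
    """Remove 'HG:'-prefixed helper lines, as commitforceeditor does
    with the editor's output: (?m) makes ^ match at each line start,
    and the trailing (\n|$) removes the line's newline too."""
    return re.sub("(?m)^HG:.*(\n|$)", "", text)
```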
2758 2758 def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
2759 2759 ui = repo.ui
2760 2760 tmpl, mapfile = gettemplate(ui, tmpl, None)
2761 2761
2762 2762 try:
2763 2763 t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
2764 2764 except SyntaxError as inst:
2765 2765 raise error.Abort(inst.args[0])
2766 2766
2767 2767 for k, v in repo.ui.configitems('committemplate'):
2768 2768 if k != 'changeset':
2769 2769 t.t.cache[k] = v
2770 2770
2771 2771 if not extramsg:
2772 2772 extramsg = '' # ensure that extramsg is string
2773 2773
2774 2774 ui.pushbuffer()
2775 2775 t.show(ctx, extramsg=extramsg)
2776 2776 return ui.popbuffer()
2777 2777
2778 2778 def hgprefix(msg):
2779 2779 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2780 2780
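hgprefix turns a possibly multi-line message into "HG: "-prefixed comment lines, silently dropping empty lines. The helper is reproduced here verbatim so the example is self-contained:

```python
def hgprefix(msg):
    # Same one-liner as above: prefix each non-empty line with 'HG: '.
    return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
```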
2781 2781 def buildcommittext(repo, ctx, subs, extramsg):
2782 2782 edittext = []
2783 2783 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2784 2784 if ctx.description():
2785 2785 edittext.append(ctx.description())
2786 2786 edittext.append("")
2787 2787 edittext.append("") # Empty line between message and comments.
2788 2788 edittext.append(hgprefix(_("Enter commit message."
2789 2789 " Lines beginning with 'HG:' are removed.")))
2790 2790 edittext.append(hgprefix(extramsg))
2791 2791 edittext.append("HG: --")
2792 2792 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2793 2793 if ctx.p2():
2794 2794 edittext.append(hgprefix(_("branch merge")))
2795 2795 if ctx.branch():
2796 2796 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2797 2797 if bookmarks.isactivewdirparent(repo):
2798 2798 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2799 2799 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2800 2800 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2801 2801 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2802 2802 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2803 2803 if not added and not modified and not removed:
2804 2804 edittext.append(hgprefix(_("no files changed")))
2805 2805 edittext.append("")
2806 2806
2807 2807 return "\n".join(edittext)
2808 2808
2809 2809 def commitstatus(repo, node, branch, bheads=None, opts=None):
2810 2810 if opts is None:
2811 2811 opts = {}
2812 2812 ctx = repo[node]
2813 2813 parents = ctx.parents()
2814 2814
2815 2815 if (not opts.get('amend') and bheads and node not in bheads and not
2816 2816 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2817 2817 repo.ui.status(_('created new head\n'))
2818 2818 # The message is not printed for initial roots. For the other
2819 2819 # changesets, it is printed in the following situations:
2820 2820 #
2821 2821 # Par column: for the 2 parents with ...
2822 2822 # N: null or no parent
2823 2823 # B: parent is on another named branch
2824 2824 # C: parent is a regular non head changeset
2825 2825 # H: parent was a branch head of the current branch
2826 2826 # Msg column: whether we print "created new head" message
2827 2827 # In the following, it is assumed that there already exists some
2828 2828 # initial branch heads of the current branch, otherwise nothing is
2829 2829 # printed anyway.
2830 2830 #
2831 2831 # Par Msg Comment
2832 2832 # N N y additional topo root
2833 2833 #
2834 2834 # B N y additional branch root
2835 2835 # C N y additional topo head
2836 2836 # H N n usual case
2837 2837 #
2838 2838 # B B y weird additional branch root
2839 2839 # C B y branch merge
2840 2840 # H B n merge with named branch
2841 2841 #
2842 2842 # C C y additional head from merge
2843 2843 # C H n merge with a head
2844 2844 #
2845 2845 # H H n head merge: head count decreases
2846 2846
2847 2847 if not opts.get('close_branch'):
2848 2848 for r in parents:
2849 2849 if r.closesbranch() and r.branch() == branch:
2850 2850 repo.ui.status(_('reopening closed branch head %d\n') % r)
2851 2851
2852 2852 if repo.ui.debugflag:
2853 2853 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2854 2854 elif repo.ui.verbose:
2855 2855 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2856 2856
2857 2857 def revert(ui, repo, ctx, parents, *pats, **opts):
2858 2858 parent, p2 = parents
2859 2859 node = ctx.node()
2860 2860
2861 2861 mf = ctx.manifest()
2862 2862 if node == p2:
2863 2863 parent = p2
2864 2864 if node == parent:
2865 2865 pmf = mf
2866 2866 else:
2867 2867 pmf = None
2868 2868
2869 2869 # need all matching names in dirstate and manifest of target rev,
2870 2870 # so have to walk both. do not print errors if files exist in one
2871 2871 # but not the other. in both cases, filesets should be evaluated against
2872 2872 # workingctx to get consistent result (issue4497). this means 'set:**'
2873 2873 # cannot be used to select missing files from target rev.
2874 2874
2875 2875 # `names` is a mapping for all elements in working copy and target revision
2876 2876 # The mapping is in the form:
2877 2877 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2878 2878 names = {}
2879 2879
2880 2880 wlock = repo.wlock()
2881 2881 try:
2882 2882 ## filling of the `names` mapping
2883 2883 # walk dirstate to fill `names`
2884 2884
2885 2885 interactive = opts.get('interactive', False)
2886 2886 wctx = repo[None]
2887 2887 m = scmutil.match(wctx, pats, opts)
2888 2888
2889 2889 # we'll need this later
2890 2890 targetsubs = sorted(s for s in wctx.substate if m(s))
2891 2891
2892 2892 if not m.always():
2893 2893 for abs in repo.walk(matchmod.badmatch(m, lambda x, y: False)):
2894 2894 names[abs] = m.rel(abs), m.exact(abs)
2895 2895
2896 2896 # walk target manifest to fill `names`
2897 2897
2898 2898 def badfn(path, msg):
2899 2899 if path in names:
2900 2900 return
2901 2901 if path in ctx.substate:
2902 2902 return
2903 2903 path_ = path + '/'
2904 2904 for f in names:
2905 2905 if f.startswith(path_):
2906 2906 return
2907 2907 ui.warn("%s: %s\n" % (m.rel(path), msg))
2908 2908
2909 2909 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2910 2910 if abs not in names:
2911 2911 names[abs] = m.rel(abs), m.exact(abs)
2912 2912
2913 2913 # Find status of all files in `names`.
2914 2914 m = scmutil.matchfiles(repo, names)
2915 2915
2916 2916 changes = repo.status(node1=node, match=m,
2917 2917 unknown=True, ignored=True, clean=True)
2918 2918 else:
2919 2919 changes = repo.status(node1=node, match=m)
2920 2920 for kind in changes:
2921 2921 for abs in kind:
2922 2922 names[abs] = m.rel(abs), m.exact(abs)
2923 2923
2924 2924 m = scmutil.matchfiles(repo, names)
2925 2925
2926 2926 modified = set(changes.modified)
2927 2927 added = set(changes.added)
2928 2928 removed = set(changes.removed)
2929 2929 _deleted = set(changes.deleted)
2930 2930 unknown = set(changes.unknown)
2931 2931 unknown.update(changes.ignored)
2932 2932 clean = set(changes.clean)
2933 2933 modadded = set()
2934 2934
2935 2935 # split between files known in target manifest and the others
2936 2936 smf = set(mf)
2937 2937
2938 2938 # determine the exact nature of the deleted files
2939 2939 deladded = _deleted - smf
2940 2940 deleted = _deleted - deladded
2941 2941
2942 2942 # We need to account for the state of the file in the dirstate,
2943 2943 # even when we revert against something else than parent. This will
2944 2944 # slightly alter the behavior of revert (doing back up or not, delete
2945 2945 # or just forget etc).
2946 2946 if parent == node:
2947 2947 dsmodified = modified
2948 2948 dsadded = added
2949 2949 dsremoved = removed
2950 2950 # store all local modifications, useful later for rename detection
2951 2951 localchanges = dsmodified | dsadded
2952 2952 modified, added, removed = set(), set(), set()
2953 2953 else:
2954 2954 changes = repo.status(node1=parent, match=m)
2955 2955 dsmodified = set(changes.modified)
2956 2956 dsadded = set(changes.added)
2957 2957 dsremoved = set(changes.removed)
2958 2958 # store all local modifications, useful later for rename detection
2959 2959 localchanges = dsmodified | dsadded
2960 2960
2961 2961 # only take into account removes between wc and target
2962 2962 clean |= dsremoved - removed
2963 2963 dsremoved &= removed
2964 2964 # distinguish between dirstate removes and others
2965 2965 removed -= dsremoved
2966 2966
2967 2967 modadded = added & dsmodified
2968 2968 added -= modadded
2969 2969
2970 2970 # tell newly modified files apart.
2971 2971 dsmodified &= modified
2972 2972 dsmodified |= modified & dsadded # dirstate added may need backup
2973 2973 modified -= dsmodified
2974 2974
2975 2975 # We need to wait for some post-processing to update this set
2976 2976 # before making the distinction. The dirstate will be used for
2977 2977 # that purpose.
2978 2978 dsadded = added
2979 2979
2980 2980 # in case of merge, files that are actually added can be reported as
2981 2981 # modified; we need to post-process the result
2982 2982 if p2 != nullid:
2983 2983 if pmf is None:
2984 2984 # only need parent manifest in the merge case,
2985 2985 # so do not read by default
2986 2986 pmf = repo[parent].manifest()
2987 2987 mergeadd = dsmodified - set(pmf)
2988 2988 dsadded |= mergeadd
2989 2989 dsmodified -= mergeadd
2990 2990
2991 2991 # if f is a rename, update `names` to also revert the source
2992 2992 cwd = repo.getcwd()
2993 2993 for f in localchanges:
2994 2994 src = repo.dirstate.copied(f)
2995 2995 # XXX should we check for rename down to target node?
2996 2996 if src and src not in names and repo.dirstate[src] == 'r':
2997 2997 dsremoved.add(src)
2998 2998 names[src] = (repo.pathto(src, cwd), True)
2999 2999
3000 3000 # distinguish between files to forget and the others
3001 3001 added = set()
3002 3002 for abs in dsadded:
3003 3003 if repo.dirstate[abs] != 'a':
3004 3004 added.add(abs)
3005 3005 dsadded -= added
3006 3006
3007 3007 for abs in deladded:
3008 3008 if repo.dirstate[abs] == 'a':
3009 3009 dsadded.add(abs)
3010 3010 deladded -= dsadded
3011 3011
3012 3012 # For files marked as removed, we check if an unknown file is present at
3013 3013 # the same path. If such a file exists it may need to be backed up.
3014 3014 # Making the distinction at this stage helps keep the backup
3015 3015 # logic simpler.
3016 3016 removunk = set()
3017 3017 for abs in removed:
3018 3018 target = repo.wjoin(abs)
3019 3019 if os.path.lexists(target):
3020 3020 removunk.add(abs)
3021 3021 removed -= removunk
3022 3022
3023 3023 dsremovunk = set()
3024 3024 for abs in dsremoved:
3025 3025 target = repo.wjoin(abs)
3026 3026 if os.path.lexists(target):
3027 3027 dsremovunk.add(abs)
3028 3028 dsremoved -= dsremovunk
3029 3029
3030 3030 # actions actually performed by revert
3031 3031 # (<list of files>, <message>) tuple
3032 3032 actions = {'revert': ([], _('reverting %s\n')),
3033 3033 'add': ([], _('adding %s\n')),
3034 3034 'remove': ([], _('removing %s\n')),
3035 3035 'drop': ([], _('removing %s\n')),
3036 3036 'forget': ([], _('forgetting %s\n')),
3037 3037 'undelete': ([], _('undeleting %s\n')),
3038 3038 'noop': (None, _('no changes needed to %s\n')),
3039 3039 'unknown': (None, _('file not managed: %s\n')),
3040 3040 }
3041 3041
3042 3042 # "constant" that convey the backup strategy.
3043 3043 # All set to `discard` if `no-backup` is set do avoid checking
3044 3044 # no_backup lower in the code.
3045 3045 # These values are ordered for comparison purposes
3046 3046 backup = 2 # unconditionally do backup
3047 3047 check = 1 # check if the existing file differs from target
3048 3048 discard = 0 # never do backup
3049 3049 if opts.get('no_backup'):
3050 3050 backup = check = discard
3051 3051
3052 3052 backupanddel = actions['remove']
3053 3053 if not opts.get('no_backup'):
3054 3054 backupanddel = actions['drop']
3055 3055
3056 3056 disptable = (
3057 3057 # dispatch table:
3058 3058 # file state
3059 3059 # action
3060 3060 # make backup
3061 3061
3062 3062 ## Sets whose results will change files on disk
3063 3063 # Modified compared to target, no local change
3064 3064 (modified, actions['revert'], discard),
3065 3065 # Modified compared to target, but local file is deleted
3066 3066 (deleted, actions['revert'], discard),
3067 3067 # Modified compared to target, local change
3068 3068 (dsmodified, actions['revert'], backup),
3069 3069 # Added since target
3070 3070 (added, actions['remove'], discard),
3071 3071 # Added in working directory
3072 3072 (dsadded, actions['forget'], discard),
3073 3073 # Added since target, have local modification
3074 3074 (modadded, backupanddel, backup),
3075 3075 # Added since target but file is missing in working directory
3076 3076 (deladded, actions['drop'], discard),
3077 3077 # Removed since target, before working copy parent
3078 3078 (removed, actions['add'], discard),
3079 3079 # Same as `removed` but an unknown file exists at the same path
3080 3080 (removunk, actions['add'], check),
3081 3081 # Removed since target, marked as such in working copy parent
3082 3082 (dsremoved, actions['undelete'], discard),
3083 3083 # Same as `dsremoved` but an unknown file exists at the same path
3084 3084 (dsremovunk, actions['undelete'], check),
3085 3085 ## the following sets do not result in any file changes
3086 3086 # File with no modification
3087 3087 (clean, actions['noop'], discard),
3088 3088 # Existing file, not tracked anywhere
3089 3089 (unknown, actions['unknown'], discard),
3090 3090 )
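The `disptable` above pairs ordered file-state sets with an action and a backup policy; the first set containing a file decides what revert does with it. A minimal self-contained sketch of that lookup pattern (hypothetical sets and action names, not Mercurial's real ones):

```python
def dispatch(path, disptable):
    """Return (action, dobackup) from the first state set containing path."""
    for table, action, dobackup in disptable:
        if path in table:
            return action, dobackup
    return None, False

# Hypothetical file-state sets standing in for modified/added/etc.
modified = {'a.txt'}
added = {'b.txt'}
disptable = (
    (modified, 'revert', True),   # modified vs. target: revert, with backup
    (added, 'remove', False),     # added since target: remove, no backup
)

assert dispatch('a.txt', disptable) == ('revert', True)
assert dispatch('b.txt', disptable) == ('remove', False)
assert dispatch('c.txt', disptable) == (None, False)
```

As in the real table, ordering matters: a file in several sets gets the action of the first matching entry.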
3091 3091
3092 3092 for abs, (rel, exact) in sorted(names.items()):
3093 3093 # target file to be touched on disk (relative to cwd)
3094 3094 target = repo.wjoin(abs)
3095 3095 # search the entry in the dispatch table.
3096 3096 # if the file is in any of these sets, it was touched in the working
3097 3097 # directory parent and we are sure it needs to be reverted.
3098 3098 for table, (xlist, msg), dobackup in disptable:
3099 3099 if abs not in table:
3100 3100 continue
3101 3101 if xlist is not None:
3102 3102 xlist.append(abs)
3103 3103 if dobackup and (backup <= dobackup
3104 3104 or wctx[abs].cmp(ctx[abs])):
3105 3105 bakname = origpath(ui, repo, rel)
3106 3106 ui.note(_('saving current version of %s as %s\n') %
3107 3107 (rel, bakname))
3108 3108 if not opts.get('dry_run'):
3109 3109 if interactive:
3110 3110 util.copyfile(target, bakname)
3111 3111 else:
3112 3112 util.rename(target, bakname)
3113 3113 if ui.verbose or not exact:
3114 3114 if not isinstance(msg, basestring):
3115 3115 msg = msg(abs)
3116 3116 ui.status(msg % rel)
3117 3117 elif exact:
3118 3118 ui.warn(msg % rel)
3119 3119 break
3120 3120
3121 3121 if not opts.get('dry_run'):
3122 3122 needdata = ('revert', 'add', 'undelete')
3123 3123 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
3124 3124 _performrevert(repo, parents, ctx, actions, interactive)
3125 3125
3126 3126 if targetsubs:
3127 3127 # Revert the subrepos on the revert list
3128 3128 for sub in targetsubs:
3129 3129 try:
3130 3130 wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
3131 3131 except KeyError:
3132 3132 raise error.Abort("subrepository '%s' does not exist in %s!"
3133 3133 % (sub, short(ctx.node())))
3134 3134 finally:
3135 3135 wlock.release()
3136 3136
3137 3137 def origpath(ui, repo, filepath):
3138 3138 '''customize where .orig files are created
3139 3139
3140 3140 Fetch user defined path from config file: [ui] origbackuppath = <path>
3141 3141 Fall back to default (filepath) if not specified
3142 3142 '''
3143 3143 origbackuppath = ui.config('ui', 'origbackuppath', None)
3144 3144 if origbackuppath is None:
3145 3145 return filepath + ".orig"
3146 3146
3147 3147 filepathfromroot = os.path.relpath(filepath, start=repo.root)
3148 3148 fullorigpath = repo.wjoin(origbackuppath, filepathfromroot)
3149 3149
3150 3150 origbackupdir = repo.vfs.dirname(fullorigpath)
3151 3151 if not repo.vfs.exists(origbackupdir):
3152 3152 ui.note(_('creating directory: %s\n') % origbackupdir)
3153 3153 util.makedirs(origbackupdir)
3154 3154
3155 3155 return fullorigpath + ".orig"
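The backup-path logic of `origpath` can be sketched without the repo/vfs machinery: with no `ui.origbackuppath` configured, the `.orig` file sits next to the original; otherwise the repo-relative path is mirrored under the configured backup directory. A simplified, hypothetical stand-in (no directory creation, POSIX paths assumed):

```python
import os

def origpath_sketch(root, filepath, origbackuppath=None):
    # No [ui] origbackuppath configured: back up alongside the file.
    if origbackuppath is None:
        return filepath + ".orig"
    # Otherwise mirror the repo-relative path under the backup dir.
    rel = os.path.relpath(filepath, start=root)
    return os.path.join(root, origbackuppath, rel) + ".orig"

assert origpath_sketch('/repo', '/repo/src/a.c') == '/repo/src/a.c.orig'
assert (origpath_sketch('/repo', '/repo/src/a.c', '.hg/origbackups')
        == '/repo/.hg/origbackups/src/a.c.orig')
```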
3156 3156
3157 3157 def _revertprefetch(repo, ctx, *files):
3158 3158 """Let extension changing the storage layer prefetch content"""
3159 3159 pass
3160 3160
3161 3161 def _performrevert(repo, parents, ctx, actions, interactive=False):
3162 3162 """function that actually perform all the actions computed for revert
3163 3163
3164 3164 This is an independent function to let extension to plug in and react to
3165 3165 the imminent revert.
3166 3166
3167 3167 Make sure you have the working directory locked when calling this function.
3168 3168 """
3169 3169 parent, p2 = parents
3170 3170 node = ctx.node()
3171 3171 def checkout(f):
3172 3172 fc = ctx[f]
3173 3173 repo.wwrite(f, fc.data(), fc.flags())
3174 3174
3175 3175 audit_path = pathutil.pathauditor(repo.root)
3176 3176 for f in actions['forget'][0]:
3177 3177 repo.dirstate.drop(f)
3178 3178 for f in actions['remove'][0]:
3179 3179 audit_path(f)
3180 3180 try:
3181 3181 util.unlinkpath(repo.wjoin(f))
3182 3182 except OSError:
3183 3183 pass
3184 3184 repo.dirstate.remove(f)
3185 3185 for f in actions['drop'][0]:
3186 3186 audit_path(f)
3187 3187 repo.dirstate.remove(f)
3188 3188
3189 3189 normal = None
3190 3190 if node == parent:
3191 3191 # We're reverting to our parent. If possible, we'd like status
3192 3192 # to report the file as clean. We have to use normallookup for
3193 3193 # merges to avoid losing information about merged/dirty files.
3194 3194 if p2 != nullid:
3195 3195 normal = repo.dirstate.normallookup
3196 3196 else:
3197 3197 normal = repo.dirstate.normal
3198 3198
3199 3199 newlyaddedandmodifiedfiles = set()
3200 3200 if interactive:
3201 3201 # Prompt the user for changes to revert
3202 3202 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
3203 3203 m = scmutil.match(ctx, torevert, {})
3204 3204 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
3205 3205 diffopts.nodates = True
3206 3206 diffopts.git = True
3207 3207 reversehunks = repo.ui.configbool('experimental',
3208 3208 'revertalternateinteractivemode',
3209 3209 True)
3210 3210 if reversehunks:
3211 3211 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3212 3212 else:
3213 3213 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3214 3214 originalchunks = patch.parsepatch(diff)
3215 3215
3216 3216 try:
3217 3217
3218 3218 chunks, opts = recordfilter(repo.ui, originalchunks)
3219 3219 if reversehunks:
3220 3220 chunks = patch.reversehunks(chunks)
3221 3221
3222 3222 except patch.PatchError as err:
3223 3223 raise error.Abort(_('error parsing patch: %s') % err)
3224 3224
3225 3225 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3226 3226 # Apply changes
3227 3227 fp = cStringIO.StringIO()
3228 3228 for c in chunks:
3229 3229 c.write(fp)
3230 3230 dopatch = fp.tell()
3231 3231 fp.seek(0)
3232 3232 if dopatch:
3233 3233 try:
3234 3234 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3235 3235 except patch.PatchError as err:
3236 3236 raise error.Abort(str(err))
3237 3237 del fp
3238 3238 else:
3239 3239 for f in actions['revert'][0]:
3240 3240 checkout(f)
3241 3241 if normal:
3242 3242 normal(f)
3243 3243
3244 3244 for f in actions['add'][0]:
3245 3245 # Don't checkout modified files, they are already created by the diff
3246 3246 if f not in newlyaddedandmodifiedfiles:
3247 3247 checkout(f)
3248 3248 repo.dirstate.add(f)
3249 3249
3250 3250 normal = repo.dirstate.normallookup
3251 3251 if node == parent and p2 == nullid:
3252 3252 normal = repo.dirstate.normal
3253 3253 for f in actions['undelete'][0]:
3254 3254 checkout(f)
3255 3255 normal(f)
3256 3256
3257 3257 copied = copies.pathcopies(repo[parent], ctx)
3258 3258
3259 3259 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3260 3260 if f in copied:
3261 3261 repo.dirstate.copy(copied[f], f)
3262 3262
3263 3263 def command(table):
3264 3264 """Returns a function object to be used as a decorator for making commands.
3265 3265
3266 3266 This function receives a command table as its argument. The table should
3267 3267 be a dict.
3268 3268
3269 3269 The returned function can be used as a decorator for adding commands
3270 3270 to that command table. This function accepts multiple arguments to define
3271 3271 a command.
3272 3272
3273 3273 The first argument is the command name.
3274 3274
3275 3275 The options argument is an iterable of tuples defining command arguments.
3276 3276 See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.
3277 3277
3278 3278 The synopsis argument defines a short, one line summary of how to use the
3279 3279 command. This shows up in the help output.
3280 3280
3281 3281 The norepo argument defines whether the command does not require a
3282 3282 local repository. Most commands operate against a repository, thus the
3283 3283 default is False.
3284 3284
3285 3285 The optionalrepo argument defines whether the command optionally requires
3286 3286 a local repository.
3287 3287
3288 3288 The inferrepo argument defines whether to try to find a repository from the
3289 3289 command line arguments. If True, arguments will be examined for potential
3290 3290 repository locations. See ``findrepo()``. If a repository is found, it
3291 3291 will be used.
3292 3292 """
3293 3293 def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
3294 3294 inferrepo=False):
3295 3295 def decorator(func):
3296 3296 if synopsis:
3297 3297 table[name] = func, list(options), synopsis
3298 3298 else:
3299 3299 table[name] = func, list(options)
3300 3300
3301 3301 if norepo:
3302 3302 # Avoid import cycle.
3303 3303 import commands
3304 3304 commands.norepo += ' %s' % ' '.join(parsealiases(name))
3305 3305
3306 3306 if optionalrepo:
3307 3307 import commands
3308 3308 commands.optionalrepo += ' %s' % ' '.join(parsealiases(name))
3309 3309
3310 3310 if inferrepo:
3311 3311 import commands
3312 3312 commands.inferrepo += ' %s' % ' '.join(parsealiases(name))
3313 3313
3314 3314 return func
3315 3315 return decorator
3316 3316
3317 3317 return cmd
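Stripped of the `norepo`/`optionalrepo`/`inferrepo` bookkeeping, the factory above is a closure over a command table that returns a registering decorator. A self-contained sketch of that core pattern (the `^hello` command below is hypothetical):

```python
def command(table):
    """Return a decorator factory that registers commands in `table`."""
    def cmd(name, options=(), synopsis=None):
        def decorator(func):
            if synopsis:
                table[name] = func, list(options), synopsis
            else:
                table[name] = func, list(options)
            return func
        return decorator
    return cmd

table = {}
cmd = command(table)

@cmd('^hello', [('g', 'greeting', 'hi', 'greeting to use')], 'hg hello')
def hello(ui, repo):
    return 0

assert table['^hello'][0] is hello
assert table['^hello'][2] == 'hg hello'
```

The leading `^` in the name is Mercurial's convention for marking a command as shown in short help listings.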
3318 3318
3319 3319 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3320 3320 # commands.outgoing. "missing" is "missing" of the result of
3321 3321 # "findcommonoutgoing()"
3322 3322 outgoinghooks = util.hooks()
3323 3323
3324 3324 # a list of (ui, repo) functions called by commands.summary
3325 3325 summaryhooks = util.hooks()
3326 3326
3327 3327 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3328 3328 #
3329 3329 # functions should return tuple of booleans below, if 'changes' is None:
3330 3330 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3331 3331 #
3332 3332 # otherwise, 'changes' is a tuple of tuples below:
3333 3333 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3334 3334 # - (desturl, destbranch, destpeer, outgoing)
3335 3335 summaryremotehooks = util.hooks()
3336 3336
3337 3337 # A list of state files kept by multistep operations like graft.
3338 3338 # Since graft cannot be aborted, it is considered 'clearable' by update.
3339 3339 # note: bisect is intentionally excluded
3340 3340 # (state file, clearable, allowcommit, error, hint)
3341 3341 unfinishedstates = [
3342 3342 ('graftstate', True, False, _('graft in progress'),
3343 3343 _("use 'hg graft --continue' or 'hg update' to abort")),
3344 3344 ('updatestate', True, False, _('last update was interrupted'),
3345 3345 _("use 'hg update' to get a consistent checkout"))
3346 3346 ]
3347 3347
3348 3348 def checkunfinished(repo, commit=False):
3349 3349 '''Look for an unfinished multistep operation, like graft, and abort
3350 3350 if found. It's probably good to check this right before
3351 3351 bailifchanged().
3352 3352 '''
3353 3353 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3354 3354 if commit and allowcommit:
3355 3355 continue
3356 3356 if repo.vfs.exists(f):
3357 3357 raise error.Abort(msg, hint=hint)
3358 3358
3359 3359 def clearunfinished(repo):
3360 3360 '''Check for unfinished operations (as above), and clear the ones
3361 3361 that are clearable.
3362 3362 '''
3363 3363 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3364 3364 if not clearable and repo.vfs.exists(f):
3365 3365 raise error.Abort(msg, hint=hint)
3366 3366 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3367 3367 if clearable and repo.vfs.exists(f):
3368 3368 util.unlink(repo.join(f))
3369 3369
3370 3370 class dirstateguard(object):
3371 3371 '''Restore dirstate at unexpected failure.
3372 3372
3373 3373 At construction, this class:
3374 3374
3375 3375 - writes the current ``repo.dirstate`` out, and
3376 3376 - saves ``.hg/dirstate`` into the backup file
3377 3377
3378 3378 This restores ``.hg/dirstate`` from the backup file, if ``release()``
3379 3379 is invoked before ``close()``.
3380 3380
3381 3381 This just removes the backup file at ``close()`` before ``release()``.
3382 3382 '''
3383 3383
3384 3384 def __init__(self, repo, name):
3385 3385 self._repo = repo
3386 3386 self._suffix = '.backup.%s.%d' % (name, id(self))
3387 3387 repo.dirstate._savebackup(repo.currenttransaction(), self._suffix)
3388 3388 self._active = True
3389 3389 self._closed = False
3390 3390
3391 3391 def __del__(self):
3392 3392 if self._active: # still active
3393 3393 # this may occur, even if this class is used correctly:
3394 3394 # for example, releasing other resources like transaction
3395 3395 # may raise exception before ``dirstateguard.release`` in
3396 3396 # ``release(tr, ....)``.
3397 3397 self._abort()
3398 3398
3399 3399 def close(self):
3400 3400 if not self._active: # already inactivated
3401 3401 msg = (_("can't close already inactivated backup: dirstate%s")
3402 3402 % self._suffix)
3403 3403 raise error.Abort(msg)
3404 3404
3405 3405 self._repo.dirstate._clearbackup(self._repo.currenttransaction(),
3406 3406 self._suffix)
3407 3407 self._active = False
3408 3408 self._closed = True
3409 3409
3410 3410 def _abort(self):
3411 3411 self._repo.dirstate._restorebackup(self._repo.currenttransaction(),
3412 3412 self._suffix)
3413 3413 self._active = False
3414 3414
3415 3415 def release(self):
3416 3416 if not self._closed:
3417 3417 if not self._active: # already inactivated
3418 3418 msg = (_("can't release already inactivated backup:"
3419 3419 " dirstate%s")
3420 3420 % self._suffix)
3421 3421 raise error.Abort(msg)
3422 3422 self._abort()
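The guard's lifecycle — snapshot up front, discard on `close()`, restore on `release()` without a prior `close()` — can be illustrated with a hypothetical in-memory miniature (a dict stands in for the dirstate backup file):

```python
class guard(object):
    """Toy stand-in for dirstateguard: restore state on failure."""
    def __init__(self, state):
        self._backup = dict(state)   # snapshot, like the dirstate backup file
        self._state = state
        self._active = True
        self._closed = False

    def close(self):
        # Success path: mark done so release() discards the backup.
        self._active = False
        self._closed = True

    def release(self):
        # Failure path: close() never ran, so restore the snapshot.
        if not self._closed and self._active:
            self._state.clear()
            self._state.update(self._backup)
            self._active = False

state = {'f': 'clean'}
g = guard(state)
state['f'] = 'dirty'      # mutation during the guarded operation
g.release()               # no close() happened: state is rolled back
assert state == {'f': 'clean'}
```

On the success path, calling `close()` before `release()` leaves the mutated state in place, mirroring how the real class only restores `.hg/dirstate` when the transaction-style pairing ends without `close()`.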
@@ -1,6974 +1,6973 b''
1 1 # commands.py - command processing for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from node import hex, bin, nullhex, nullid, nullrev, short
9 9 from lock import release
10 10 from i18n import _
11 11 import os, re, difflib, time, tempfile, errno, shlex
12 12 import sys, socket
13 13 import hg, scmutil, util, revlog, copies, error, bookmarks
14 14 import patch, help, encoding, templatekw, discovery
15 15 import archival, changegroup, cmdutil, hbisect
16 16 import sshserver, hgweb
17 17 import extensions
18 18 import merge as mergemod
19 19 import minirst, revset, fileset
20 20 import dagparser, context, simplemerge, graphmod, copies
21 21 import random, operator
22 22 import setdiscovery, treediscovery, dagutil, pvec, localrepo, destutil
23 23 import phases, obsolete, exchange, bundle2, repair, lock as lockmod
24 24 import ui as uimod
25 25 import streamclone
26 26
27 27 table = {}
28 28
29 29 command = cmdutil.command(table)
30 30
31 31 # Space delimited list of commands that don't require local repositories.
32 32 # This should be populated by passing norepo=True into the @command decorator.
33 33 norepo = ''
34 34 # Space delimited list of commands that optionally require local repositories.
35 35 # This should be populated by passing optionalrepo=True into the @command
36 36 # decorator.
37 37 optionalrepo = ''
38 38 # Space delimited list of commands that will examine arguments looking for
39 39 # a repository. This should be populated by passing inferrepo=True into the
40 40 # @command decorator.
41 41 inferrepo = ''
42 42
43 43 # label constants
44 44 # until 3.5, bookmarks.current was the advertised name, not
45 45 # bookmarks.active, so we must use both to avoid breaking old
46 46 # custom styles
47 47 activebookmarklabel = 'bookmarks.active bookmarks.current'
48 48
49 49 # common command options
50 50
51 51 globalopts = [
52 52 ('R', 'repository', '',
53 53 _('repository root directory or name of overlay bundle file'),
54 54 _('REPO')),
55 55 ('', 'cwd', '',
56 56 _('change working directory'), _('DIR')),
57 57 ('y', 'noninteractive', None,
58 58 _('do not prompt, automatically pick the first choice for all prompts')),
59 59 ('q', 'quiet', None, _('suppress output')),
60 60 ('v', 'verbose', None, _('enable additional output')),
61 61 ('', 'config', [],
62 62 _('set/override config option (use \'section.name=value\')'),
63 63 _('CONFIG')),
64 64 ('', 'debug', None, _('enable debugging output')),
65 65 ('', 'debugger', None, _('start debugger')),
66 66 ('', 'encoding', encoding.encoding, _('set the charset encoding'),
67 67 _('ENCODE')),
68 68 ('', 'encodingmode', encoding.encodingmode,
69 69 _('set the charset encoding mode'), _('MODE')),
70 70 ('', 'traceback', None, _('always print a traceback on exception')),
71 71 ('', 'time', None, _('time how long the command takes')),
72 72 ('', 'profile', None, _('print command execution profile')),
73 73 ('', 'version', None, _('output version information and exit')),
74 74 ('h', 'help', None, _('display help and exit')),
75 75 ('', 'hidden', False, _('consider hidden changesets')),
76 76 ]
77 77
78 78 dryrunopts = [('n', 'dry-run', None,
79 79 _('do not perform actions, just print output'))]
80 80
81 81 remoteopts = [
82 82 ('e', 'ssh', '',
83 83 _('specify ssh command to use'), _('CMD')),
84 84 ('', 'remotecmd', '',
85 85 _('specify hg command to run on the remote side'), _('CMD')),
86 86 ('', 'insecure', None,
87 87 _('do not verify server certificate (ignoring web.cacerts config)')),
88 88 ]
89 89
90 90 walkopts = [
91 91 ('I', 'include', [],
92 92 _('include names matching the given patterns'), _('PATTERN')),
93 93 ('X', 'exclude', [],
94 94 _('exclude names matching the given patterns'), _('PATTERN')),
95 95 ]
96 96
97 97 commitopts = [
98 98 ('m', 'message', '',
99 99 _('use text as commit message'), _('TEXT')),
100 100 ('l', 'logfile', '',
101 101 _('read commit message from file'), _('FILE')),
102 102 ]
103 103
104 104 commitopts2 = [
105 105 ('d', 'date', '',
106 106 _('record the specified date as commit date'), _('DATE')),
107 107 ('u', 'user', '',
108 108 _('record the specified user as committer'), _('USER')),
109 109 ]
110 110
111 111 # hidden for now
112 112 formatteropts = [
113 113 ('T', 'template', '',
114 114 _('display with template (EXPERIMENTAL)'), _('TEMPLATE')),
115 115 ]
116 116
117 117 templateopts = [
118 118 ('', 'style', '',
119 119 _('display using template map file (DEPRECATED)'), _('STYLE')),
120 120 ('T', 'template', '',
121 121 _('display with template'), _('TEMPLATE')),
122 122 ]
123 123
124 124 logopts = [
125 125 ('p', 'patch', None, _('show patch')),
126 126 ('g', 'git', None, _('use git extended diff format')),
127 127 ('l', 'limit', '',
128 128 _('limit number of changes displayed'), _('NUM')),
129 129 ('M', 'no-merges', None, _('do not show merges')),
130 130 ('', 'stat', None, _('output diffstat-style summary of changes')),
131 131 ('G', 'graph', None, _("show the revision DAG")),
132 132 ] + templateopts
133 133
134 134 diffopts = [
135 135 ('a', 'text', None, _('treat all files as text')),
136 136 ('g', 'git', None, _('use git extended diff format')),
137 137 ('', 'nodates', None, _('omit dates from diff headers'))
138 138 ]
139 139
140 140 diffwsopts = [
141 141 ('w', 'ignore-all-space', None,
142 142 _('ignore white space when comparing lines')),
143 143 ('b', 'ignore-space-change', None,
144 144 _('ignore changes in the amount of white space')),
145 145 ('B', 'ignore-blank-lines', None,
146 146 _('ignore changes whose lines are all blank')),
147 147 ]
148 148
149 149 diffopts2 = [
150 150 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
151 151 ('p', 'show-function', None, _('show which function each change is in')),
152 152 ('', 'reverse', None, _('produce a diff that undoes the changes')),
153 153 ] + diffwsopts + [
154 154 ('U', 'unified', '',
155 155 _('number of lines of context to show'), _('NUM')),
156 156 ('', 'stat', None, _('output diffstat-style summary of changes')),
157 157 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
158 158 ]
159 159
160 160 mergetoolopts = [
161 161 ('t', 'tool', '', _('specify merge tool')),
162 162 ]
163 163
164 164 similarityopts = [
165 165 ('s', 'similarity', '',
166 166 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
167 167 ]
168 168
169 169 subrepoopts = [
170 170 ('S', 'subrepos', None,
171 171 _('recurse into subrepositories'))
172 172 ]
173 173
174 174 debugrevlogopts = [
175 175 ('c', 'changelog', False, _('open changelog')),
176 176 ('m', 'manifest', False, _('open manifest')),
177 177 ('', 'dir', False, _('open directory manifest')),
178 178 ]
179 179
180 180 # Commands start here, listed alphabetically
181 181
182 182 @command('^add',
183 183 walkopts + subrepoopts + dryrunopts,
184 184 _('[OPTION]... [FILE]...'),
185 185 inferrepo=True)
186 186 def add(ui, repo, *pats, **opts):
187 187 """add the specified files on the next commit
188 188
189 189 Schedule files to be version controlled and added to the
190 190 repository.
191 191
192 192 The files will be added to the repository at the next commit. To
193 193 undo an add before that, see :hg:`forget`.
194 194
195 195 If no names are given, add all files to the repository.
196 196
197 197 .. container:: verbose
198 198
199 199 Examples:
200 200
201 201 - New (unknown) files are added
202 202 automatically by :hg:`add`::
203 203
204 204 $ ls
205 205 foo.c
206 206 $ hg status
207 207 ? foo.c
208 208 $ hg add
209 209 adding foo.c
210 210 $ hg status
211 211 A foo.c
212 212
213 213 - Specific files to be added can be specified::
214 214
215 215 $ ls
216 216 bar.c foo.c
217 217 $ hg status
218 218 ? bar.c
219 219 ? foo.c
220 220 $ hg add bar.c
221 221 $ hg status
222 222 A bar.c
223 223 ? foo.c
224 224
225 225 Returns 0 if all files are successfully added.
226 226 """
227 227
228 228 m = scmutil.match(repo[None], pats, opts)
229 229 rejected = cmdutil.add(ui, repo, m, "", False, **opts)
230 230 return rejected and 1 or 0
231 231
232 232 @command('addremove',
233 233 similarityopts + subrepoopts + walkopts + dryrunopts,
234 234 _('[OPTION]... [FILE]...'),
235 235 inferrepo=True)
236 236 def addremove(ui, repo, *pats, **opts):
237 237 """add all new files, delete all missing files
238 238
239 239 Add all new files and remove all missing files from the
240 240 repository.
241 241
242 242 New files are ignored if they match any of the patterns in
243 243 ``.hgignore``. As with add, these changes take effect at the next
244 244 commit.
245 245
246 246 Use the -s/--similarity option to detect renamed files. This
247 247 option takes a percentage between 0 (disabled) and 100 (files must
248 248 be identical) as its parameter. With a parameter greater than 0,
249 249 this compares every removed file with every added file and records
250 250 those similar enough as renames. Detecting renamed files this way
251 251 can be expensive. After using this option, :hg:`status -C` can be
252 252 used to check which files were identified as moved or renamed. If
253 253 not specified, -s/--similarity defaults to 100 and only renames of
254 254 identical files are detected.
255 255
256 256 .. container:: verbose
257 257
258 258 Examples:
259 259
260 260 - A number of files (bar.c and foo.c) are new,
261 261 while foobar.c has been removed (without using :hg:`remove`)
262 262 from the repository::
263 263
264 264 $ ls
265 265 bar.c foo.c
266 266 $ hg status
267 267 ! foobar.c
268 268 ? bar.c
269 269 ? foo.c
270 270 $ hg addremove
271 271 adding bar.c
272 272 adding foo.c
273 273 removing foobar.c
274 274 $ hg status
275 275 A bar.c
276 276 A foo.c
277 277 R foobar.c
278 278
279 279 - A file foobar.c was moved to foo.c without using :hg:`rename`.
280 280 Afterwards, it was edited slightly::
281 281
282 282 $ ls
283 283 foo.c
284 284 $ hg status
285 285 ! foobar.c
286 286 ? foo.c
287 287 $ hg addremove --similarity 90
288 288 removing foobar.c
289 289 adding foo.c
290 290 recording removal of foobar.c as rename to foo.c (94% similar)
291 291 $ hg status -C
292 292 A foo.c
293 293 foobar.c
294 294 R foobar.c
295 295
296 296 Returns 0 if all files are successfully added.
297 297 """
298 298 try:
299 299 sim = float(opts.get('similarity') or 100)
300 300 except ValueError:
301 301 raise error.Abort(_('similarity must be a number'))
302 302 if sim < 0 or sim > 100:
303 303 raise error.Abort(_('similarity must be between 0 and 100'))
304 304 matcher = scmutil.match(repo[None], pats, opts)
305 305 return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
306 306
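The similarity matching described in the `addremove` docstring above (comparing every removed file against every added file and recording pairs that score above the threshold as renames) can be sketched independently of Mercurial's internals. This is an illustrative stand-in, not the actual `similar.findrenames` implementation; the scoring here uses `difflib` rather than hg's bdiff-based measure.

```python
import difflib

def similarity(old_data, new_data):
    # Ratio of matching content, 0.0 (disjoint) .. 1.0 (identical),
    # roughly analogous to the percentage addremove works with.
    return difflib.SequenceMatcher(None, old_data, new_data).ratio()

def detect_renames(removed, added, threshold):
    # removed/added: dicts mapping filename -> file contents.
    # Returns {added_name: removed_name} for the best-scoring pair
    # at or above the threshold, mirroring how -s/--similarity
    # pairs each added file with at most one removed file.
    renames = {}
    for new_name, new_data in added.items():
        best, best_score = None, 0.0
        for old_name, old_data in removed.items():
            score = similarity(old_data, new_data)
            if score >= threshold and score > best_score:
                best, best_score = old_name, score
        if best is not None:
            renames[new_name] = best
    return renames

removed = {'foobar.c': 'int main() { return 0; }\n'}
added = {'foo.c': 'int main() { return 1; }\n'}
print(detect_renames(removed, added, 0.9))
```

As in the docstring's `--similarity 90` example, a one-character edit still scores well above 0.9, so the removal/addition pair is recorded as a rename.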
307 307 @command('^annotate|blame',
308 308 [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
309 309 ('', 'follow', None,
310 310 _('follow copies/renames and list the filename (DEPRECATED)')),
311 311 ('', 'no-follow', None, _("don't follow copies and renames")),
312 312 ('a', 'text', None, _('treat all files as text')),
313 313 ('u', 'user', None, _('list the author (long with -v)')),
314 314 ('f', 'file', None, _('list the filename')),
315 315 ('d', 'date', None, _('list the date (short with -q)')),
316 316 ('n', 'number', None, _('list the revision number (default)')),
317 317 ('c', 'changeset', None, _('list the changeset')),
318 318 ('l', 'line-number', None, _('show line number at the first appearance'))
319 319 ] + diffwsopts + walkopts + formatteropts,
320 320 _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
321 321 inferrepo=True)
322 322 def annotate(ui, repo, *pats, **opts):
323 323 """show changeset information by line for each file
324 324
325 325 List changes in files, showing the revision id responsible for
326 326 each line.
327 327
328 328 This command is useful for discovering when a change was made and
329 329 by whom.
330 330
331 331 Without the -a/--text option, annotate will avoid processing files
332 332 it detects as binary. With -a, annotate will annotate the file
333 333 anyway, although the results will probably be neither useful
334 334 nor desirable.
335 335
336 336 Returns 0 on success.
337 337 """
338 338 if not pats:
339 339 raise error.Abort(_('at least one filename or pattern is required'))
340 340
341 341 if opts.get('follow'):
342 342 # --follow is deprecated and now just an alias for -f/--file
343 343 # to mimic the behavior of Mercurial before version 1.5
344 344 opts['file'] = True
345 345
346 346 ctx = scmutil.revsingle(repo, opts.get('rev'))
347 347
348 348 fm = ui.formatter('annotate', opts)
349 349 if ui.quiet:
350 350 datefunc = util.shortdate
351 351 else:
352 352 datefunc = util.datestr
353 353 if ctx.rev() is None:
354 354 def hexfn(node):
355 355 if node is None:
356 356 return None
357 357 else:
358 358 return fm.hexfunc(node)
359 359 if opts.get('changeset'):
360 360 # omit "+" suffix which is appended to node hex
361 361 def formatrev(rev):
362 362 if rev is None:
363 363 return '%d' % ctx.p1().rev()
364 364 else:
365 365 return '%d' % rev
366 366 else:
367 367 def formatrev(rev):
368 368 if rev is None:
369 369 return '%d+' % ctx.p1().rev()
370 370 else:
371 371 return '%d ' % rev
372 372 def formathex(hex):
373 373 if hex is None:
374 374 return '%s+' % fm.hexfunc(ctx.p1().node())
375 375 else:
376 376 return '%s ' % hex
377 377 else:
378 378 hexfn = fm.hexfunc
379 379 formatrev = formathex = str
380 380
381 381 opmap = [('user', ' ', lambda x: x[0].user(), ui.shortuser),
382 382 ('number', ' ', lambda x: x[0].rev(), formatrev),
383 383 ('changeset', ' ', lambda x: hexfn(x[0].node()), formathex),
384 384 ('date', ' ', lambda x: x[0].date(), util.cachefunc(datefunc)),
385 385 ('file', ' ', lambda x: x[0].path(), str),
386 386 ('line_number', ':', lambda x: x[1], str),
387 387 ]
388 388 fieldnamemap = {'number': 'rev', 'changeset': 'node'}
389 389
390 390 if (not opts.get('user') and not opts.get('changeset')
391 391 and not opts.get('date') and not opts.get('file')):
392 392 opts['number'] = True
393 393
394 394 linenumber = opts.get('line_number') is not None
395 395 if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
396 396 raise error.Abort(_('at least one of -n/-c is required for -l'))
397 397
398 398 if fm:
399 399 def makefunc(get, fmt):
400 400 return get
401 401 else:
402 402 def makefunc(get, fmt):
403 403 return lambda x: fmt(get(x))
404 404 funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
405 405 if opts.get(op)]
406 406 funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
407 407 fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
408 408 if opts.get(op))
409 409
410 410 def bad(x, y):
411 411 raise error.Abort("%s: %s" % (x, y))
412 412
413 413 m = scmutil.match(ctx, pats, opts, badfn=bad)
414 414
415 415 follow = not opts.get('no_follow')
416 416 diffopts = patch.difffeatureopts(ui, opts, section='annotate',
417 417 whitespace=True)
418 418 for abs in ctx.walk(m):
419 419 fctx = ctx[abs]
420 420 if not opts.get('text') and util.binary(fctx.data()):
421 421 fm.plain(_("%s: binary file\n") % ((pats and m.rel(abs)) or abs))
422 422 continue
423 423
424 424 lines = fctx.annotate(follow=follow, linenumber=linenumber,
425 425 diffopts=diffopts)
426 426 formats = []
427 427 pieces = []
428 428
429 429 for f, sep in funcmap:
430 430 l = [f(n) for n, dummy in lines]
431 431 if l:
432 432 if fm:
433 433 formats.append(['%s' for x in l])
434 434 else:
435 435 sizes = [encoding.colwidth(x) for x in l]
436 436 ml = max(sizes)
437 437 formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
438 438 pieces.append(l)
439 439
440 440 for f, p, l in zip(zip(*formats), zip(*pieces), lines):
441 441 fm.startitem()
442 442 fm.write(fields, "".join(f), *p)
443 443 fm.write('line', ": %s", l[1])
444 444
445 445 if lines and not lines[-1][1].endswith('\n'):
446 446 fm.plain('\n')
447 447
448 448 fm.end()
449 449
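In plain-text mode, annotate right-aligns each output column by padding every value to the width of the widest entry in that column — the `sep + ' ' * (ml - w) + '%s'` format strings built in the loop above. A standalone sketch of that padding scheme (using `len()` as a stand-in for `encoding.colwidth`, which additionally handles wide characters):

```python
def pad_column(values, sep=' '):
    # Mimics annotate's per-column format strings: each entry is
    # right-aligned to the width (ml) of the widest value, with the
    # column separator prepended.
    widths = [len(v) for v in values]
    ml = max(widths)
    return [sep + ' ' * (ml - w) + v for v, w in zip(values, widths)]

print(pad_column(['12', '7', '100']))
```

This is why revision numbers line up in `hg annotate` output even when they have different digit counts; for the first column the separator is replaced with the empty string, as done above with `funcmap[0]`.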
450 450 @command('archive',
451 451 [('', 'no-decode', None, _('do not pass files through decoders')),
452 452 ('p', 'prefix', '', _('directory prefix for files in archive'),
453 453 _('PREFIX')),
454 454 ('r', 'rev', '', _('revision to distribute'), _('REV')),
455 455 ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
456 456 ] + subrepoopts + walkopts,
457 457 _('[OPTION]... DEST'))
458 458 def archive(ui, repo, dest, **opts):
459 459 '''create an unversioned archive of a repository revision
460 460
461 461 By default, the revision used is the parent of the working
462 462 directory; use -r/--rev to specify a different revision.
463 463
464 464 The archive type is automatically detected based on file
465 465 extension (or override using -t/--type).
466 466
467 467 .. container:: verbose
468 468
469 469 Examples:
470 470
471 471 - create a zip file containing the 1.0 release::
472 472
473 473 hg archive -r 1.0 project-1.0.zip
474 474
475 475 - create a tarball excluding .hg files::
476 476
477 477 hg archive project.tar.gz -X ".hg*"
478 478
479 479 Valid types are:
480 480
481 481 :``files``: a directory full of files (default)
482 482 :``tar``: tar archive, uncompressed
483 483 :``tbz2``: tar archive, compressed using bzip2
484 484 :``tgz``: tar archive, compressed using gzip
485 485 :``uzip``: zip archive, uncompressed
486 486 :``zip``: zip archive, compressed using deflate
487 487
488 488 The exact name of the destination archive or directory is given
489 489 using a format string; see :hg:`help export` for details.
490 490
491 491 Each member added to an archive file has a directory prefix
492 492 prepended. Use -p/--prefix to specify a format string for the
493 493 prefix. The default is the basename of the archive, with suffixes
494 494 removed.
495 495
496 496 Returns 0 on success.
497 497 '''
498 498
499 499 ctx = scmutil.revsingle(repo, opts.get('rev'))
500 500 if not ctx:
501 501 raise error.Abort(_('no working directory: please specify a revision'))
502 502 node = ctx.node()
503 503 dest = cmdutil.makefilename(repo, dest, node)
504 504 if os.path.realpath(dest) == repo.root:
505 505 raise error.Abort(_('repository root cannot be destination'))
506 506
507 507 kind = opts.get('type') or archival.guesskind(dest) or 'files'
508 508 prefix = opts.get('prefix')
509 509
510 510 if dest == '-':
511 511 if kind == 'files':
512 512 raise error.Abort(_('cannot archive plain files to stdout'))
513 513 dest = cmdutil.makefileobj(repo, dest)
514 514 if not prefix:
515 515 prefix = os.path.basename(repo.root) + '-%h'
516 516
517 517 prefix = cmdutil.makefilename(repo, prefix, node)
518 518 matchfn = scmutil.match(ctx, [], opts)
519 519 archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
520 520 matchfn, prefix, subrepos=opts.get('subrepos'))
521 521
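The archive type table in the docstring above corresponds to an extension lookup when `-t/--type` is omitted; hg's actual logic lives in `archival.guesskind`. This sketch uses a hypothetical extension table mirroring the documented types, not the real mapping:

```python
# Hypothetical extension table mirroring the documented archive types.
EXTENSIONS = {
    '.tar': 'tar',
    '.tbz2': 'tbz2', '.tar.bz2': 'tbz2',
    '.tgz': 'tgz', '.tar.gz': 'tgz',
    '.uzip': 'uzip',
    '.zip': 'zip',
}

def guesskind(dest):
    # Longest-suffix match first, so 'project.tar.gz' resolves to
    # tgz rather than matching the shorter '.tar' suffix.
    for ext in sorted(EXTENSIONS, key=len, reverse=True):
        if dest.endswith(ext):
            return EXTENSIONS[ext]
    return None  # caller falls back to the 'files' default

print(guesskind('project-1.0.zip'), guesskind('project.tar.gz'), guesskind('out'))
```

When no extension matches, `kind` falls through to `'files'` via the `opts.get('type') or archival.guesskind(dest) or 'files'` chain in the function body above.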
522 522 @command('backout',
523 523 [('', 'merge', None, _('merge with old dirstate parent after backout')),
524 524 ('', 'commit', None, _('commit if no conflicts were encountered')),
525 525 ('', 'parent', '',
526 526 _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
527 527 ('r', 'rev', '', _('revision to backout'), _('REV')),
528 528 ('e', 'edit', False, _('invoke editor on commit messages')),
529 529 ] + mergetoolopts + walkopts + commitopts + commitopts2,
530 530 _('[OPTION]... [-r] REV'))
531 531 def backout(ui, repo, node=None, rev=None, commit=False, **opts):
532 532 '''reverse effect of earlier changeset
533 533
534 534 Prepare a new changeset with the effect of REV undone in the
535 535 current working directory.
536 536
537 537 If REV is the parent of the working directory, then this new changeset
538 538 is committed automatically. Otherwise, hg needs to merge the
539 539 changes and the merged result is left uncommitted.
540 540
541 541 .. note::
542 542
543 543 backout cannot be used to fix either an unwanted or
544 544 incorrect merge.
545 545
546 546 .. container:: verbose
547 547
548 548 Examples:
549 549
550 550 - Reverse the effect of the parent of the working directory.
551 551 This backout will be committed immediately::
552 552
553 553 hg backout -r .
554 554
555 555 - Reverse the effect of previous bad revision 23::
556 556
557 557 hg backout -r 23
558 558 hg commit -m "Backout revision 23"
559 559
560 560 - Reverse the effect of previous bad revision 23 and
561 561 commit the backout immediately::
562 562
563 563 hg backout -r 23 --commit
564 564
565 565 By default, the pending changeset will have one parent,
566 566 maintaining a linear history. With --merge, the pending
567 567 changeset will instead have two parents: the old parent of the
568 568 working directory and a new child of REV that simply undoes REV.
569 569
570 570 Before version 1.7, the behavior without --merge was equivalent
571 571 to specifying --merge followed by :hg:`update --clean .` to
572 572 cancel the merge and leave the child of REV as a head to be
573 573 merged separately.
574 574
575 575 See :hg:`help dates` for a list of formats valid for -d/--date.
576 576
577 577 See :hg:`help revert` for a way to restore files to the state
578 578 of another revision.
579 579
580 580 Returns 0 on success, 1 if nothing to backout or there are unresolved
581 581 files.
582 582 '''
583 583 wlock = lock = None
584 584 try:
585 585 wlock = repo.wlock()
586 586 lock = repo.lock()
587 587 return _dobackout(ui, repo, node, rev, commit, **opts)
588 588 finally:
589 589 release(lock, wlock)
590 590
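The wrapper above follows Mercurial's standard locking discipline: acquire the working-directory lock before the store lock, and release them in the opposite order via `release(lock, wlock)`. A minimal sketch of that acquire/release ordering, with toy locks standing in for `repo.wlock()`/`repo.lock()`:

```python
events = []

class ToyLock:
    # Stand-in for Mercurial's lock objects: records acquire/release
    # order so the discipline can be observed.
    def __init__(self, name, log):
        self.name, self.log = name, log
        log.append(('acquire', name))
    def release(self):
        self.log.append(('release', self.name))

def release(*locks):
    # Mirrors mercurial.lock.release: release in the order given,
    # skipping locks that were never acquired (still None).
    for l in locks:
        if l is not None:
            l.release()

wlock = lock = None
try:
    wlock = ToyLock('wlock', events)   # working-directory lock first
    lock = ToyLock('lock', events)     # then the store lock
finally:
    release(lock, wlock)               # reverse order on the way out

print(events)
```

Initializing both names to `None` before the `try` is what lets the `finally` clause release safely even if the second acquisition raises.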
591 591 def _dobackout(ui, repo, node=None, rev=None, commit=False, **opts):
592 592 if rev and node:
593 593 raise error.Abort(_("please specify just one revision"))
594 594
595 595 if not rev:
596 596 rev = node
597 597
598 598 if not rev:
599 599 raise error.Abort(_("please specify a revision to backout"))
600 600
601 601 date = opts.get('date')
602 602 if date:
603 603 opts['date'] = util.parsedate(date)
604 604
605 605 cmdutil.checkunfinished(repo)
606 606 cmdutil.bailifchanged(repo)
607 607 node = scmutil.revsingle(repo, rev).node()
608 608
609 609 op1, op2 = repo.dirstate.parents()
610 610 if not repo.changelog.isancestor(node, op1):
611 611 raise error.Abort(_('cannot backout change that is not an ancestor'))
612 612
613 613 p1, p2 = repo.changelog.parents(node)
614 614 if p1 == nullid:
615 615 raise error.Abort(_('cannot backout a change with no parents'))
616 616 if p2 != nullid:
617 617 if not opts.get('parent'):
618 618 raise error.Abort(_('cannot backout a merge changeset'))
619 619 p = repo.lookup(opts['parent'])
620 620 if p not in (p1, p2):
621 621 raise error.Abort(_('%s is not a parent of %s') %
622 622 (short(p), short(node)))
623 623 parent = p
624 624 else:
625 625 if opts.get('parent'):
626 626 raise error.Abort(_('cannot use --parent on non-merge changeset'))
627 627 parent = p1
628 628
629 629 # the backout should appear on the same branch
630 630 try:
631 631 branch = repo.dirstate.branch()
632 632 bheads = repo.branchheads(branch)
633 633 rctx = scmutil.revsingle(repo, hex(parent))
634 634 if not opts.get('merge') and op1 != node:
635 635 dsguard = cmdutil.dirstateguard(repo, 'backout')
636 636 try:
637 637 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
638 638 'backout')
639 stats = mergemod.update(repo, parent, True, True, False,
640 node, False)
639 stats = mergemod.update(repo, parent, True, True, node, False)
641 640 repo.setparents(op1, op2)
642 641 dsguard.close()
643 642 hg._showstats(repo, stats)
644 643 if stats[3]:
645 644 repo.ui.status(_("use 'hg resolve' to retry unresolved "
646 645 "file merges\n"))
647 646 return 1
648 647 elif not commit:
649 648 msg = _("changeset %s backed out, "
650 649 "don't forget to commit.\n")
651 650 ui.status(msg % short(node))
652 651 return 0
653 652 finally:
654 653 ui.setconfig('ui', 'forcemerge', '', '')
655 654 lockmod.release(dsguard)
656 655 else:
657 656 hg.clean(repo, node, show_stats=False)
658 657 repo.dirstate.setbranch(branch)
659 658 cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())
660 659
661 660
662 661 def commitfunc(ui, repo, message, match, opts):
663 662 editform = 'backout'
664 663 e = cmdutil.getcommiteditor(editform=editform, **opts)
665 664 if not message:
666 665 # we don't translate commit messages
667 666 message = "Backed out changeset %s" % short(node)
668 667 e = cmdutil.getcommiteditor(edit=True, editform=editform)
669 668 return repo.commit(message, opts.get('user'), opts.get('date'),
670 669 match, editor=e)
671 670 newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
672 671 if not newnode:
673 672 ui.status(_("nothing changed\n"))
674 673 return 1
675 674 cmdutil.commitstatus(repo, newnode, branch, bheads)
676 675
677 676 def nice(node):
678 677 return '%d:%s' % (repo.changelog.rev(node), short(node))
679 678 ui.status(_('changeset %s backs out changeset %s\n') %
680 679 (nice(repo.changelog.tip()), nice(node)))
681 680 if opts.get('merge') and op1 != node:
682 681 hg.clean(repo, op1, show_stats=False)
683 682 ui.status(_('merging with changeset %s\n')
684 683 % nice(repo.changelog.tip()))
685 684 try:
686 685 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
687 686 'backout')
688 687 return hg.merge(repo, hex(repo.changelog.tip()))
689 688 finally:
690 689 ui.setconfig('ui', 'forcemerge', '', '')
691 690 finally:
692 691 # TODO: get rid of this meaningless try/finally enclosing.
693 692 # this is kept only to reduce changes in a patch.
694 693 pass
695 694 return 0
696 695
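When the changeset being backed out is itself a merge, `_dobackout` above refuses unless `--parent` names one of its two actual parents; conversely `--parent` is rejected for non-merges. Those checks can be sketched in isolation (`choose_backout_parent` is an illustrative helper, with `specified` standing in for the resolved `opts['parent']` node):

```python
NULLID = b'\0' * 20  # hg's nullid: twenty zero bytes

def choose_backout_parent(p1, p2, specified=None):
    # Mirrors the parent checks in _dobackout.
    if p1 == NULLID:
        raise ValueError('cannot backout a change with no parents')
    if p2 != NULLID:                       # merge changeset
        if specified is None:
            raise ValueError('cannot backout a merge changeset')
        if specified not in (p1, p2):
            raise ValueError('not a parent of the backed-out changeset')
        return specified
    if specified is not None:              # non-merge, --parent given
        raise ValueError('cannot use --parent on non-merge changeset')
    return p1

# Non-merge: the sole parent is chosen automatically.
print(choose_backout_parent(b'\x01' * 20, NULLID) == b'\x01' * 20)
```

The chosen parent is then the revision the working directory is merged against, which is why backing out a merge only makes sense relative to one named side.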
697 696 @command('bisect',
698 697 [('r', 'reset', False, _('reset bisect state')),
699 698 ('g', 'good', False, _('mark changeset good')),
700 699 ('b', 'bad', False, _('mark changeset bad')),
701 700 ('s', 'skip', False, _('skip testing changeset')),
702 701 ('e', 'extend', False, _('extend the bisect range')),
703 702 ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
704 703 ('U', 'noupdate', False, _('do not update to target'))],
705 704 _("[-gbsr] [-U] [-c CMD] [REV]"))
706 705 def bisect(ui, repo, rev=None, extra=None, command=None,
707 706 reset=None, good=None, bad=None, skip=None, extend=None,
708 707 noupdate=None):
709 708 """subdivision search of changesets
710 709
711 710 This command helps to find changesets which introduce problems. To
712 711 use, mark the earliest changeset you know exhibits the problem as
713 712 bad, then mark the latest changeset which is free from the problem
714 713 as good. Bisect will update your working directory to a revision
715 714 for testing (unless the -U/--noupdate option is specified). Once
716 715 you have performed tests, mark the working directory as good or
717 716 bad, and bisect will either update to another candidate changeset
718 717 or announce that it has found the bad revision.
719 718
720 719 As a shortcut, you can also use the revision argument to mark a
721 720 revision as good or bad without checking it out first.
722 721
723 722 If you supply a command, it will be used for automatic bisection.
724 723 The environment variable HG_NODE will contain the ID of the
725 724 changeset being tested. The exit status of the command will be
726 725 used to mark revisions as good or bad: status 0 means good, 125
727 726 means to skip the revision, 127 (command not found) will abort the
728 727 bisection, and any other non-zero exit status means the revision
729 728 is bad.
730 729
731 730 .. container:: verbose
732 731
733 732 Some examples:
734 733
735 734 - start a bisection with known bad revision 34, and good revision 12::
736 735
737 736 hg bisect --bad 34
738 737 hg bisect --good 12
739 738
740 739 - advance the current bisection by marking current revision as good or
741 740 bad::
742 741
743 742 hg bisect --good
744 743 hg bisect --bad
745 744
746 745 - mark the current revision, or a known revision, to be skipped (e.g. if
747 746 that revision is not usable because of another issue)::
748 747
749 748 hg bisect --skip
750 749 hg bisect --skip 23
751 750
752 751 - skip all revisions that do not touch directories ``foo`` or ``bar``::
753 752
754 753 hg bisect --skip "!( file('path:foo') & file('path:bar') )"
755 754
756 755 - forget the current bisection::
757 756
758 757 hg bisect --reset
759 758
760 759 - use 'make && make tests' to automatically find the first broken
761 760 revision::
762 761
763 762 hg bisect --reset
764 763 hg bisect --bad 34
765 764 hg bisect --good 12
766 765 hg bisect --command "make && make tests"
767 766
768 767 - see all changesets whose states are already known in the current
769 768 bisection::
770 769
771 770 hg log -r "bisect(pruned)"
772 771
773 772 - see the changeset currently being bisected (especially useful
774 773 if running with -U/--noupdate)::
775 774
776 775 hg log -r "bisect(current)"
777 776
778 777 - see all changesets that took part in the current bisection::
779 778
780 779 hg log -r "bisect(range)"
781 780
782 781 - you can even get a nice graph::
783 782
784 783 hg log --graph -r "bisect(range)"
785 784
786 785 See :hg:`help revsets` for more about the `bisect()` keyword.
787 786
788 787 Returns 0 on success.
789 788 """
790 789 def extendbisectrange(nodes, good):
791 790 # bisect is incomplete when it ends on a merge node and
792 791 # one of the parents was not checked.
793 792 parents = repo[nodes[0]].parents()
794 793 if len(parents) > 1:
795 794 if good:
796 795 side = state['bad']
797 796 else:
798 797 side = state['good']
799 798 num = len(set(i.node() for i in parents) & set(side))
800 799 if num == 1:
801 800 return parents[0].ancestor(parents[1])
802 801 return None
803 802
804 803 def print_result(nodes, good):
805 804 displayer = cmdutil.show_changeset(ui, repo, {})
806 805 if len(nodes) == 1:
807 806 # narrowed it down to a single revision
808 807 if good:
809 808 ui.write(_("The first good revision is:\n"))
810 809 else:
811 810 ui.write(_("The first bad revision is:\n"))
812 811 displayer.show(repo[nodes[0]])
813 812 extendnode = extendbisectrange(nodes, good)
814 813 if extendnode is not None:
815 814 ui.write(_('Not all ancestors of this changeset have been'
816 815 ' checked.\nUse bisect --extend to continue the '
817 816 'bisection from\nthe common ancestor, %s.\n')
818 817 % extendnode)
819 818 else:
820 819 # multiple possible revisions
821 820 if good:
822 821 ui.write(_("Due to skipped revisions, the first "
823 822 "good revision could be any of:\n"))
824 823 else:
825 824 ui.write(_("Due to skipped revisions, the first "
826 825 "bad revision could be any of:\n"))
827 826 for n in nodes:
828 827 displayer.show(repo[n])
829 828 displayer.close()
830 829
831 830 def check_state(state, interactive=True):
832 831 if not state['good'] or not state['bad']:
833 832 if (good or bad or skip or reset) and interactive:
834 833 return
835 834 if not state['good']:
836 835 raise error.Abort(_('cannot bisect (no known good revisions)'))
837 836 else:
838 837 raise error.Abort(_('cannot bisect (no known bad revisions)'))
839 838 return True
840 839
841 840 # backward compatibility
842 841 if rev in "good bad reset init".split():
843 842 ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
844 843 cmd, rev, extra = rev, extra, None
845 844 if cmd == "good":
846 845 good = True
847 846 elif cmd == "bad":
848 847 bad = True
849 848 else:
850 849 reset = True
851 850 elif extra or good + bad + skip + reset + extend + bool(command) > 1:
852 851 raise error.Abort(_('incompatible arguments'))
853 852
854 853 cmdutil.checkunfinished(repo)
855 854
856 855 if reset:
857 856 p = repo.join("bisect.state")
858 857 if os.path.exists(p):
859 858 os.unlink(p)
860 859 return
861 860
862 861 state = hbisect.load_state(repo)
863 862
864 863 if command:
865 864 changesets = 1
866 865 if noupdate:
867 866 try:
868 867 node = state['current'][0]
869 868 except LookupError:
870 869 raise error.Abort(_('current bisect revision is unknown - '
871 870 'start a new bisect to fix'))
872 871 else:
873 872 node, p2 = repo.dirstate.parents()
874 873 if p2 != nullid:
875 874 raise error.Abort(_('current bisect revision is a merge'))
876 875 try:
877 876 while changesets:
878 877 # update state
879 878 state['current'] = [node]
880 879 hbisect.save_state(repo, state)
881 880 status = ui.system(command, environ={'HG_NODE': hex(node)})
882 881 if status == 125:
883 882 transition = "skip"
884 883 elif status == 0:
885 884 transition = "good"
886 885 # status < 0 means process was killed
887 886 elif status == 127:
888 887 raise error.Abort(_("failed to execute %s") % command)
889 888 elif status < 0:
890 889 raise error.Abort(_("%s killed") % command)
891 890 else:
892 891 transition = "bad"
893 892 ctx = scmutil.revsingle(repo, rev, node)
894 893 rev = None # clear for future iterations
895 894 state[transition].append(ctx.node())
896 895 ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
897 896 check_state(state, interactive=False)
898 897 # bisect
899 898 nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
900 899 # update to next check
901 900 node = nodes[0]
902 901 if not noupdate:
903 902 cmdutil.bailifchanged(repo)
904 903 hg.clean(repo, node, show_stats=False)
905 904 finally:
906 905 state['current'] = [node]
907 906 hbisect.save_state(repo, state)
908 907 print_result(nodes, bgood)
909 908 return
910 909
911 910 # update state
912 911
913 912 if rev:
914 913 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
915 914 else:
916 915 nodes = [repo.lookup('.')]
917 916
918 917 if good or bad or skip:
919 918 if good:
920 919 state['good'] += nodes
921 920 elif bad:
922 921 state['bad'] += nodes
923 922 elif skip:
924 923 state['skip'] += nodes
925 924 hbisect.save_state(repo, state)
926 925
927 926 if not check_state(state):
928 927 return
929 928
930 929 # actually bisect
931 930 nodes, changesets, good = hbisect.bisect(repo.changelog, state)
932 931 if extend:
933 932 if not changesets:
934 933 extendnode = extendbisectrange(nodes, good)
935 934 if extendnode is not None:
936 935 ui.write(_("Extending search to changeset %d:%s\n")
937 936 % (extendnode.rev(), extendnode))
938 937 state['current'] = [extendnode.node()]
939 938 hbisect.save_state(repo, state)
940 939 if noupdate:
941 940 return
942 941 cmdutil.bailifchanged(repo)
943 942 return hg.clean(repo, extendnode.node())
944 943 raise error.Abort(_("nothing to extend"))
945 944
946 945 if changesets == 0:
947 946 print_result(nodes, good)
948 947 else:
949 948 assert len(nodes) == 1 # only a single node can be tested next
950 949 node = nodes[0]
951 950 # compute the approximate number of remaining tests
952 951 tests, size = 0, 2
953 952 while size <= changesets:
954 953 tests, size = tests + 1, size * 2
955 954 rev = repo.changelog.rev(node)
956 955 ui.write(_("Testing changeset %d:%s "
957 956 "(%d changesets remaining, ~%d tests)\n")
958 957 % (rev, short(node), changesets, tests))
959 958 state['current'] = [node]
960 959 hbisect.save_state(repo, state)
961 960 if not noupdate:
962 961 cmdutil.bailifchanged(repo)
963 962 return hg.clean(repo, node)
964 963
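The "~%d tests" figure that bisect prints above is the number of halvings left, i.e. floor(log2) of the remaining candidate count, computed by the repeated-doubling loop. That loop can be lifted out verbatim:

```python
def remaining_tests(changesets):
    # Mirrors the doubling loop in bisect: count how many times the
    # candidate set can be halved before it fits in a single test.
    tests, size = 0, 2
    while size <= changesets:
        tests, size = tests + 1, size * 2
    return tests

for n in (1, 2, 3, 8, 1000):
    print(n, remaining_tests(n))
```

So a range of 1000 untested changesets needs roughly 9 more tests, which is why bisecting even long histories converges quickly.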
965 964 @command('bookmarks|bookmark',
966 965 [('f', 'force', False, _('force')),
967 966 ('r', 'rev', '', _('revision'), _('REV')),
968 967 ('d', 'delete', False, _('delete a given bookmark')),
969 968 ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
970 969 ('i', 'inactive', False, _('mark a bookmark inactive')),
971 970 ] + formatteropts,
972 971 _('hg bookmarks [OPTIONS]... [NAME]...'))
973 972 def bookmark(ui, repo, *names, **opts):
974 973 '''create a new bookmark or list existing bookmarks
975 974
976 975 Bookmarks are labels on changesets to help track lines of development.
977 976 Bookmarks are unversioned and can be moved, renamed and deleted.
978 977 Deleting or moving a bookmark has no effect on the associated changesets.
979 978
980 979 Creating or updating to a bookmark causes it to be marked as 'active'.
981 980 The active bookmark is indicated with a '*'.
982 981 When a commit is made, the active bookmark will advance to the new commit.
983 982 A plain :hg:`update` will also advance an active bookmark, if possible.
984 983 Updating away from a bookmark will cause it to be deactivated.
985 984
986 985 Bookmarks can be pushed and pulled between repositories (see
987 986 :hg:`help push` and :hg:`help pull`). If a shared bookmark has
988 987 diverged, a new 'divergent bookmark' of the form 'name@path' will
989 988 be created. Using :hg:`merge` will resolve the divergence.
990 989
991 990 A bookmark named '@' has the special property that :hg:`clone` will
992 991 check it out by default if it exists.
993 992
994 993 .. container:: verbose
995 994
996 995 Examples:
997 996
998 997 - create an active bookmark for a new line of development::
999 998
1000 999 hg book new-feature
1001 1000
1002 1001 - create an inactive bookmark as a place marker::
1003 1002
1004 1003 hg book -i reviewed
1005 1004
1006 1005 - create an inactive bookmark on another changeset::
1007 1006
1008 1007 hg book -r .^ tested
1009 1008
1010 1009 - rename bookmark turkey to dinner::
1011 1010
1012 1011 hg book -m turkey dinner
1013 1012
1014 1013 - move the '@' bookmark from another branch::
1015 1014
1016 1015 hg book -f @
1017 1016 '''
1018 1017 force = opts.get('force')
1019 1018 rev = opts.get('rev')
1020 1019 delete = opts.get('delete')
1021 1020 rename = opts.get('rename')
1022 1021 inactive = opts.get('inactive')
1023 1022
1024 1023 def checkformat(mark):
1025 1024 mark = mark.strip()
1026 1025 if not mark:
1027 1026 raise error.Abort(_("bookmark names cannot consist entirely of "
1028 1027 "whitespace"))
1029 1028 scmutil.checknewlabel(repo, mark, 'bookmark')
1030 1029 return mark
1031 1030
1032 1031 def checkconflict(repo, mark, cur, force=False, target=None):
1033 1032 if mark in marks and not force:
1034 1033 if target:
1035 1034 if marks[mark] == target and target == cur:
1036 1035 # re-activating a bookmark
1037 1036 return
1038 1037 anc = repo.changelog.ancestors([repo[target].rev()])
1039 1038 bmctx = repo[marks[mark]]
                divs = [repo[b].node() for b in marks
                        if b.split('@', 1)[0] == mark.split('@', 1)[0]]

                # allow resolving a single divergent bookmark even if moving
                # the bookmark across branches when a revision is specified
                # that contains a divergent bookmark
                if bmctx.rev() not in anc and target in divs:
                    bookmarks.deletedivergent(repo, [target], mark)
                    return

                deletefrom = [b for b in divs
                              if repo[b].rev() in anc or b == target]
                bookmarks.deletedivergent(repo, deletefrom, mark)
                if bookmarks.validdest(repo, bmctx, repo[target]):
                    ui.status(_("moving bookmark '%s' forward from %s\n") %
                              (mark, short(bmctx.node())))
                    return
            raise error.Abort(_("bookmark '%s' already exists "
                                "(use -f to force)") % mark)
        if ((mark in repo.branchmap() or mark == repo.dirstate.branch())
            and not force):
            raise error.Abort(
                _("a bookmark cannot have the name of an existing branch"))

    if delete and rename:
        raise error.Abort(_("--delete and --rename are incompatible"))
    if delete and rev:
        raise error.Abort(_("--rev is incompatible with --delete"))
    if rename and rev:
        raise error.Abort(_("--rev is incompatible with --rename"))
    if not names and (delete or rev):
        raise error.Abort(_("bookmark name required"))

    if delete or rename or names or inactive:
        wlock = lock = tr = None
        try:
            wlock = repo.wlock()
            lock = repo.lock()
            cur = repo.changectx('.').node()
            marks = repo._bookmarks
            if delete:
                tr = repo.transaction('bookmark')
                for mark in names:
                    if mark not in marks:
                        raise error.Abort(_("bookmark '%s' does not exist") %
                                          mark)
                    if mark == repo._activebookmark:
                        bookmarks.deactivate(repo)
                    del marks[mark]

            elif rename:
                tr = repo.transaction('bookmark')
                if not names:
                    raise error.Abort(_("new bookmark name required"))
                elif len(names) > 1:
                    raise error.Abort(_("only one new bookmark name allowed"))
                mark = checkformat(names[0])
                if rename not in marks:
                    raise error.Abort(_("bookmark '%s' does not exist")
                                      % rename)
                checkconflict(repo, mark, cur, force)
                marks[mark] = marks[rename]
                if repo._activebookmark == rename and not inactive:
                    bookmarks.activate(repo, mark)
                del marks[rename]
            elif names:
                tr = repo.transaction('bookmark')
                newact = None
                for mark in names:
                    mark = checkformat(mark)
                    if newact is None:
                        newact = mark
                    if inactive and mark == repo._activebookmark:
                        bookmarks.deactivate(repo)
                        return
                    tgt = cur
                    if rev:
                        tgt = scmutil.revsingle(repo, rev).node()
                    checkconflict(repo, mark, cur, force, tgt)
                    marks[mark] = tgt
                if not inactive and cur == marks[newact] and not rev:
                    bookmarks.activate(repo, newact)
                elif cur != tgt and newact == repo._activebookmark:
                    bookmarks.deactivate(repo)
            elif inactive:
                if len(marks) == 0:
                    ui.status(_("no bookmarks set\n"))
                elif not repo._activebookmark:
                    ui.status(_("no active bookmark\n"))
                else:
                    bookmarks.deactivate(repo)
            if tr is not None:
                marks.recordchange(tr)
                tr.close()
        finally:
            lockmod.release(tr, lock, wlock)
    else: # show bookmarks
        fm = ui.formatter('bookmarks', opts)
        hexfn = fm.hexfunc
        marks = repo._bookmarks
        if len(marks) == 0 and not fm:
            ui.status(_("no bookmarks set\n"))
        for bmark, n in sorted(marks.iteritems()):
            active = repo._activebookmark
            if bmark == active:
                prefix, label = '*', activebookmarklabel
            else:
                prefix, label = ' ', ''

            fm.startitem()
            if not ui.quiet:
                fm.plain(' %s ' % prefix, label=label)
            fm.write('bookmark', '%s', bmark, label=label)
            pad = " " * (25 - encoding.colwidth(bmark))
            fm.condwrite(not ui.quiet, 'rev node', pad + ' %d:%s',
                         repo.changelog.rev(n), hexfn(n), label=label)
            fm.data(active=(bmark == active))
            fm.plain('\n')
        fm.end()

@command('branch',
    [('f', 'force', None,
      _('set branch name even if it shadows an existing branch')),
     ('C', 'clean', None, _('reset branch name to parent branch name'))],
    _('[-fC] [NAME]'))
def branch(ui, repo, label=None, **opts):
    """set or show the current branch name

    .. note::

       Branch names are permanent and global. Use :hg:`bookmark` to create a
       light-weight bookmark instead. See :hg:`help glossary` for more
       information about named branches and bookmarks.

    With no argument, show the current branch name. With one argument,
    set the working directory branch name (the branch will not exist
    in the repository until the next commit). Standard practice
    recommends that primary development take place on the 'default'
    branch.

    Unless -f/--force is specified, branch will not let you set a
    branch name that already exists.

    Use -C/--clean to reset the working directory branch to that of
    the parent of the working directory, negating a previous branch
    change.

    Use the command :hg:`update` to switch to an existing branch. Use
    :hg:`commit --close-branch` to mark this branch head as closed.
    When all heads of the branch are closed, the branch will be
    considered closed.

    Returns 0 on success.
    """
    if label:
        label = label.strip()

    if not opts.get('clean') and not label:
        ui.write("%s\n" % repo.dirstate.branch())
        return

    wlock = repo.wlock()
    try:
        if opts.get('clean'):
            label = repo[None].p1().branch()
            repo.dirstate.setbranch(label)
            ui.status(_('reset working directory to branch %s\n') % label)
        elif label:
            if not opts.get('force') and label in repo.branchmap():
                if label not in [p.branch() for p in repo[None].parents()]:
                    raise error.Abort(_('a branch of the same name already'
                                        ' exists'),
                                      # i18n: "it" refers to an existing branch
                                      hint=_("use 'hg update' to switch to it"))
            scmutil.checknewlabel(repo, label, 'branch')
            repo.dirstate.setbranch(label)
            ui.status(_('marked working directory as branch %s\n') % label)

            # find any open named branches aside from default
            others = [n for n, h, t, c in repo.branchmap().iterbranches()
                      if n != "default" and not c]
            if not others:
                ui.status(_('(branches are permanent and global, '
                            'did you want a bookmark?)\n'))
    finally:
        wlock.release()

@command('branches',
    [('a', 'active', False,
      _('show only branches that have unmerged heads (DEPRECATED)')),
     ('c', 'closed', False, _('show normal and closed branches')),
    ] + formatteropts,
    _('[-ac]'))
def branches(ui, repo, active=False, closed=False, **opts):
    """list repository named branches

    List the repository's named branches, indicating which ones are
    inactive. If -c/--closed is specified, also list branches which have
    been marked closed (see :hg:`commit --close-branch`).

    Use the command :hg:`update` to switch to an existing branch.

    Returns 0.
    """

    fm = ui.formatter('branches', opts)
    hexfunc = fm.hexfunc

    allheads = set(repo.heads())
    branches = []
    for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
        isactive = not isclosed and bool(set(heads) & allheads)
        branches.append((tag, repo[tip], isactive, not isclosed))
    branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
                  reverse=True)

    for tag, ctx, isactive, isopen in branches:
        if active and not isactive:
            continue
        if isactive:
            label = 'branches.active'
            notice = ''
        elif not isopen:
            if not closed:
                continue
            label = 'branches.closed'
            notice = _(' (closed)')
        else:
            label = 'branches.inactive'
            notice = _(' (inactive)')
        current = (tag == repo.dirstate.branch())
        if current:
            label = 'branches.current'

        fm.startitem()
        fm.write('branch', '%s', tag, label=label)
        rev = ctx.rev()
        padsize = max(31 - len(str(rev)) - encoding.colwidth(tag), 0)
        fmt = ' ' * padsize + ' %d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
                     label='log.changeset changeset.%s' % ctx.phasestr())
        fm.data(active=isactive, closed=not isopen, current=current)
        if not ui.quiet:
            fm.plain(notice)
        fm.plain('\n')
    fm.end()

@command('bundle',
    [('f', 'force', None, _('run even when the destination is unrelated')),
     ('r', 'rev', [], _('a changeset intended to be added to the destination'),
      _('REV')),
     ('b', 'branch', [], _('a specific branch you would like to bundle'),
      _('BRANCH')),
     ('', 'base', [],
      _('a base changeset assumed to be available at the destination'),
      _('REV')),
     ('a', 'all', None, _('bundle all changesets in the repository')),
     ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
    ] + remoteopts,
    _('[-f] [-t TYPE] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
def bundle(ui, repo, fname, dest=None, **opts):
    """create a changegroup file

    Generate a compressed changegroup file collecting changesets not
    known to be in another repository.

    If you omit the destination repository, then hg assumes the
    destination will have all the nodes you specify with --base
    parameters. To create a bundle containing all changesets, use
    -a/--all (or --base null).

    You can change bundle format with the -t/--type option. You can
    specify a compression, a bundle version or both using a dash
    (comp-version). The available compression methods are: none, bzip2,
    and gzip (by default, bundles are compressed using bzip2). The
    available formats are: v1, v2 (default to most suitable).

    The bundle file can then be transferred using conventional means
    and applied to another repository with the unbundle or pull
    command. This is useful when direct push and pull are not
    available or when exporting an entire repository is undesirable.

    Applying bundles preserves all changeset contents including
    permissions, copy/rename information, and revision history.

    Returns 0 on success, 1 if no changes found.
    """
    revs = None
    if 'rev' in opts:
        revs = scmutil.revrange(repo, opts['rev'])

    bundletype = opts.get('type', 'bzip2').lower()
    try:
        bcompression, cgversion, params = exchange.parsebundlespec(
                repo, bundletype, strict=False)
    except error.UnsupportedBundleSpecification as e:
        raise error.Abort(str(e),
                          hint=_('see "hg help bundle" for supported '
                                 'values for --type'))

    # Packed bundles are a pseudo bundle format for now.
    if cgversion == 's1':
        raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
                          hint=_('use "hg debugcreatestreamclonebundle"'))

    if opts.get('all'):
        base = ['null']
    else:
        base = scmutil.revrange(repo, opts.get('base'))
    # TODO: get desired bundlecaps from command line.
    bundlecaps = None
    if base:
        if dest:
            raise error.Abort(_("--base is incompatible with specifying "
                                "a destination"))
        common = [repo.lookup(rev) for rev in base]
        heads = revs and map(repo.lookup, revs) or revs
        cg = changegroup.getchangegroup(repo, 'bundle', heads=heads,
                                        common=common, bundlecaps=bundlecaps,
                                        version=cgversion)
        outgoing = None
    else:
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
        heads = revs and map(repo.lookup, revs) or revs
        outgoing = discovery.findcommonoutgoing(repo, other,
                                                onlyheads=heads,
                                                force=opts.get('force'),
                                                portable=True)
        cg = changegroup.getlocalchangegroup(repo, 'bundle', outgoing,
                                             bundlecaps, version=cgversion)
    if not cg:
        scmutil.nochangesfound(ui, repo, outgoing and outgoing.excluded)
        return 1

    if cgversion == '01': #bundle1
        if bcompression is None:
            bcompression = 'UN'
        bversion = 'HG10' + bcompression
        bcompression = None
    else:
        assert cgversion == '02'
        bversion = 'HG20'

    changegroup.writebundle(ui, cg, fname, bversion, compression=bcompression)

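# A few illustrative invocations of the -t/--type bundlespec syntax handled
# above (compression and format version joined by a dash); the file names
# here are hypothetical examples, not part of the command's interface:
#
#   hg bundle -t bzip2 ../changes.hg      # bzip2-compressed, default version
#   hg bundle -t gzip-v2 ../changes.hg    # gzip-compressed HG20 bundle
#   hg bundle -t none-v1 ../changes.hg    # uncompressed HG10 bundle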
@command('cat',
    [('o', 'output', '',
      _('print output to file with formatted name'), _('FORMAT')),
     ('r', 'rev', '', _('print the given revision'), _('REV')),
     ('', 'decode', None, _('apply any matching decode filter')),
    ] + walkopts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def cat(ui, repo, file1, *pats, **opts):
    """output the current or given revision of files

    Print the specified files as they were at the given revision. If
    no revision is given, the parent of the working directory is used.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%s``: basename of file being printed
    :``%d``: dirname of file being printed, or '.' if in repository root
    :``%p``: root-relative path name of file being printed
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%R``: changeset revision number
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%r``: zero-padded changeset revision number
    :``%b``: basename of the exporting repository

    Returns 0 on success.
    """
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)

    return cmdutil.cat(ui, repo, ctx, m, '', **opts)

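# Example expansions of the -o/--output format keys documented above, for a
# hypothetical file src/main.c at revision 12 (file and revision chosen only
# for illustration):
#
#   hg cat -o '%s.r%R' -r 12 src/main.c       # writes to main.c.r12
#   hg cat -o '%d/%s.orig' -r 12 src/main.c   # writes to src/main.c.orig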
@command('^clone',
    [('U', 'noupdate', None, _('the clone will include an empty working '
                               'directory (only a repository)')),
     ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
      _('REV')),
     ('r', 'rev', [], _('include the specified changeset'), _('REV')),
     ('b', 'branch', [], _('clone only the specified branch'), _('BRANCH')),
     ('', 'pull', None, _('use pull protocol to copy metadata')),
     ('', 'uncompressed', None, _('use uncompressed transfer (fast over LAN)')),
    ] + remoteopts,
    _('[OPTION]... SOURCE [DEST]'),
    norepo=True)
def clone(ui, source, dest=None, **opts):
    """make a copy of an existing repository

    Create a copy of an existing repository in a new directory.

    If no destination directory name is specified, it defaults to the
    basename of the source.

    The location of the source is added to the new repository's
    ``.hg/hgrc`` file, as the default to be used for future pulls.

    Only local paths and ``ssh://`` URLs are supported as
    destinations. For ``ssh://`` destinations, no working directory or
    ``.hg/hgrc`` will be created on the remote side.

    To pull only a subset of changesets, specify one or more revision
    identifiers with -r/--rev or branches with -b/--branch. The
    resulting clone will contain only the specified changesets and
    their ancestors. These options (or 'clone src#rev dest') imply
    --pull, even for local source repositories. Note that specifying a
    tag will include the tagged changeset but not the changeset
    containing the tag.

    If the source repository has a bookmark called '@' set, that
    revision will be checked out in the new repository by default.

    To check out a particular version, use -u/--update, or
    -U/--noupdate to create a clone with no working directory.

    .. container:: verbose

      For efficiency, hardlinks are used for cloning whenever the
      source and destination are on the same filesystem (note this
      applies only to the repository data, not to the working
      directory). Some filesystems, such as AFS, implement hardlinking
      incorrectly, but do not report errors. In these cases, use the
      --pull option to avoid hardlinking.

      In some cases, you can clone repositories and the working
      directory using full hardlinks with ::

        $ cp -al REPO REPOCLONE

      This is the fastest way to clone, but it is not always safe. The
      operation is not atomic (making sure REPO is not modified during
      the operation is up to you) and you have to make sure your
      editor breaks hardlinks (Emacs and most Linux Kernel tools do
      so). Also, this is not compatible with certain extensions that
      place their metadata under the .hg directory, such as mq.

      Mercurial will update the working directory to the first applicable
      revision from this list:

      a) null if -U or the source repository has no changesets
      b) if -u . and the source repository is local, the first parent of
         the source repository's working directory
      c) the changeset specified with -u (if a branch name, this means the
         latest head of that branch)
      d) the changeset specified with -r
      e) the tipmost head specified with -b
      f) the tipmost head specified with the url#branch source syntax
      g) the revision marked with the '@' bookmark, if present
      h) the tipmost head of the default branch
      i) tip

      Examples:

      - clone a remote repository to a new directory named hg/::

          hg clone http://selenic.com/hg

      - create a lightweight local clone::

          hg clone project/ project-feature/

      - clone from an absolute path on an ssh server (note double-slash)::

          hg clone ssh://user@server//home/projects/alpha/

      - do a high-speed clone over a LAN while checking out a
        specified version::

          hg clone --uncompressed http://server/repo -u 1.5

      - create a repository without changesets after a particular revision::

          hg clone -r 04e544 experimental/ good/

      - clone (and track) a particular named branch::

          hg clone http://selenic.com/hg#stable

    See :hg:`help urls` for details on specifying URLs.

    Returns 0 on success.
    """
    if opts.get('noupdate') and opts.get('updaterev'):
        raise error.Abort(_("cannot specify both --noupdate and --updaterev"))

    r = hg.clone(ui, opts, source, dest,
                 pull=opts.get('pull'),
                 stream=opts.get('uncompressed'),
                 rev=opts.get('rev'),
                 update=opts.get('updaterev') or not opts.get('noupdate'),
                 branch=opts.get('branch'),
                 shareopts=opts.get('shareopts'))

    return r is None

@command('^commit|ci',
    [('A', 'addremove', None,
      _('mark new/missing files as added/removed before committing')),
     ('', 'close-branch', None,
      _('mark a branch head as closed')),
     ('', 'amend', None, _('amend the parent of the working directory')),
     ('s', 'secret', None, _('use the secret phase for committing')),
     ('e', 'edit', None, _('invoke editor on commit messages')),
     ('i', 'interactive', None, _('use interactive mode')),
    ] + walkopts + commitopts + commitopts2 + subrepoopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def commit(ui, repo, *pats, **opts):
    """commit the specified files or all outstanding changes

    Commit changes to the given files into the repository. Unlike a
    centralized SCM, this operation is a local operation. See
    :hg:`push` for a way to actively distribute your changes.

    If a list of files is omitted, all changes reported by :hg:`status`
    will be committed.

    If you are committing the result of a merge, do not provide any
    filenames or -I/-X filters.

    If no commit message is specified, Mercurial starts your
    configured editor where you can enter a message. In case your
    commit fails, you will find a backup of your message in
    ``.hg/last-message.txt``.

    The --close-branch flag can be used to mark the current branch
    head closed. When all heads of a branch are closed, the branch
    will be considered closed and no longer listed.

    The --amend flag can be used to amend the parent of the
    working directory with a new commit that contains the changes
    in the parent in addition to those currently reported by :hg:`status`,
    if there are any. The old commit is stored in a backup bundle in
    ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
    on how to restore it).

    Message, user and date are taken from the amended commit unless
    specified. When a message isn't specified on the command line,
    the editor will open with the message of the amended commit.

    It is not possible to amend public changesets (see :hg:`help phases`)
    or changesets that have children.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if nothing changed.

    .. container:: verbose

      Examples:

      - commit all files ending in .py::

          hg commit --include "set:**.py"

      - commit all non-binary files::

          hg commit --exclude "set:binary()"

      - amend the current commit and set the date to now::

          hg commit --amend --date now
    """
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        return _docommit(ui, repo, *pats, **opts)
    finally:
        release(lock, wlock)

def _docommit(ui, repo, *pats, **opts):
    if opts.get('interactive'):
        opts.pop('interactive')
        cmdutil.dorecord(ui, repo, commit, None, False,
                         cmdutil.recordfilter, *pats, **opts)
        return

    if opts.get('subrepos'):
        if opts.get('amend'):
            raise error.Abort(_('cannot amend with --subrepos'))
        # Let --subrepos on the command line override config setting.
        ui.setconfig('ui', 'commitsubrepos', True, 'commit')

    cmdutil.checkunfinished(repo, commit=True)

    branch = repo[None].branch()
    bheads = repo.branchheads(branch)

    extra = {}
    if opts.get('close_branch'):
        extra['close'] = 1

        if not bheads:
            raise error.Abort(_('can only close branch heads'))
        elif opts.get('amend'):
            if repo[None].parents()[0].p1().branch() != branch and \
                    repo[None].parents()[0].p2().branch() != branch:
                raise error.Abort(_('can only close branch heads'))

    if opts.get('amend'):
        if ui.configbool('ui', 'commitsubrepos'):
            raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))

        old = repo['.']
        if not old.mutable():
            raise error.Abort(_('cannot amend public changesets'))
        if len(repo[None].parents()) > 1:
            raise error.Abort(_('cannot amend while merging'))
        allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
        if not allowunstable and old.children():
            raise error.Abort(_('cannot amend changeset with children'))

        newextra = extra.copy()
        newextra['branch'] = branch
        extra = newextra
        # commitfunc is used only for temporary amend commit by cmdutil.amend
        def commitfunc(ui, repo, message, match, opts):
            return repo.commit(message,
                               opts.get('user') or old.user(),
                               opts.get('date') or old.date(),
                               match,
                               extra=extra)

        node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
        if node == old.node():
            ui.status(_("nothing changed\n"))
            return 1
    else:
        def commitfunc(ui, repo, message, match, opts):
            backup = ui.backupconfig('phases', 'new-commit')
            baseui = repo.baseui
            basebackup = baseui.backupconfig('phases', 'new-commit')
            try:
                if opts.get('secret'):
                    ui.setconfig('phases', 'new-commit', 'secret', 'commit')
                    # Propagate to subrepos
                    baseui.setconfig('phases', 'new-commit', 'secret', 'commit')

                editform = cmdutil.mergeeditform(repo[None], 'commit.normal')
                editor = cmdutil.getcommiteditor(editform=editform, **opts)
                return repo.commit(message, opts.get('user'), opts.get('date'),
                                   match,
                                   editor=editor,
                                   extra=extra)
            finally:
                ui.restoreconfig(backup)
                repo.baseui.restoreconfig(basebackup)

        node = cmdutil.commit(ui, repo, commitfunc, pats, opts)

        if not node:
            stat = repo.status(match=scmutil.match(repo[None], pats, opts))
            if stat[3]:
                ui.status(_("nothing changed (%d missing files, see "
                            "'hg status')\n") % len(stat[3]))
            else:
                ui.status(_("nothing changed\n"))
            return 1

    cmdutil.commitstatus(repo, node, branch, bheads, opts)

@command('config|showconfig|debugconfig',
    [('u', 'untrusted', None, _('show untrusted configuration options')),
     ('e', 'edit', None, _('edit user config')),
     ('l', 'local', None, _('edit repository config')),
     ('g', 'global', None, _('edit global config'))],
    _('[-u] [NAME]...'),
    optionalrepo=True)
def config(ui, repo, *values, **opts):
    """show combined config settings from all hgrc files

    With no arguments, print names and values of all config items.

    With one argument of the form section.name, print just the value
    of that config item.

    With multiple arguments, print names and values of all config
    items with matching section names.

    With --edit, start an editor on the user-level config file. With
    --global, edit the system-wide config file. With --local, edit the
    repository-level config file.

    With --debug, the source (filename and line number) is printed
    for each config item.

    See :hg:`help config` for more information about config files.

    Returns 0 on success, 1 if NAME does not exist.

    """

    if opts.get('edit') or opts.get('local') or opts.get('global'):
        if opts.get('local') and opts.get('global'):
            raise error.Abort(_("can't use --local and --global together"))

        if opts.get('local'):
            if not repo:
                raise error.Abort(_("can't use --local outside a repository"))
            paths = [repo.join('hgrc')]
        elif opts.get('global'):
            paths = scmutil.systemrcpath()
        else:
            paths = scmutil.userrcpath()

        for f in paths:
            if os.path.exists(f):
                break
        else:
            if opts.get('global'):
                samplehgrc = uimod.samplehgrcs['global']
            elif opts.get('local'):
                samplehgrc = uimod.samplehgrcs['local']
            else:
                samplehgrc = uimod.samplehgrcs['user']

            f = paths[0]
            fp = open(f, "w")
            fp.write(samplehgrc)
            fp.close()

        editor = ui.geteditor()
        ui.system("%s \"%s\"" % (editor, f),
                  onerr=error.Abort, errprefix=_("edit failed"))
        return

    for f in scmutil.rcpath():
        ui.debug('read config from: %s\n' % f)
    untrusted = bool(opts.get('untrusted'))
    if values:
        sections = [v for v in values if '.' not in v]
        items = [v for v in values if '.' in v]
        if len(items) > 1 or items and sections:
            raise error.Abort(_('only one config item permitted'))
    matched = False
    for section, name, value in ui.walkconfig(untrusted=untrusted):
        value = str(value).replace('\n', '\\n')
        sectname = section + '.' + name
        if values:
            for v in values:
                if v == section:
                    ui.debug('%s: ' %
                             ui.configsource(section, name, untrusted))
                    ui.write('%s=%s\n' % (sectname, value))
                    matched = True
                elif v == sectname:
                    ui.debug('%s: ' %
                             ui.configsource(section, name, untrusted))
                    ui.write(value, '\n')
                    matched = True
        else:
            ui.debug('%s: ' %
                     ui.configsource(section, name, untrusted))
            ui.write('%s=%s\n' % (sectname, value))
            matched = True
    if matched:
        return 0
    return 1

@command('copy|cp',
    [('A', 'after', None, _('record a copy that has already occurred')),
     ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [SOURCE]... DEST'))
def copy(ui, repo, *pats, **opts):
    """mark files as copied for the next commit

    Mark dest as having copies of source files. If dest is a
    directory, copies are put in that directory. If dest is a file,
    the source must be a single file.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect with the next commit. To undo a copy
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    wlock = repo.wlock(False)
    try:
        return cmdutil.copy(ui, repo, pats, opts)
    finally:
        wlock.release()

@command('debugancestor', [], _('[INDEX] REV1 REV2'), optionalrepo=True)
def debugancestor(ui, repo, *args):
    """find the ancestor revision of two revisions in a given index"""
    if len(args) == 3:
        index, rev1, rev2 = args
        r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False), index)
        lookup = r.lookup
    elif len(args) == 2:
        if not repo:
            raise error.Abort(_("there is no Mercurial repository here "
                                "(.hg not found)"))
        rev1, rev2 = args
        r = repo.changelog
        lookup = repo.lookup
    else:
        raise error.Abort(_('either two or three arguments required'))
    a = r.ancestor(lookup(rev1), lookup(rev2))
    ui.write("%d:%s\n" % (r.rev(a), hex(a)))

1856 1855 @command('debugbuilddag',
1857 1856 [('m', 'mergeable-file', None, _('add single file mergeable changes')),
1858 1857 ('o', 'overwritten-file', None, _('add single file all revs overwrite')),
1859 1858 ('n', 'new-file', None, _('add new file at each rev'))],
1860 1859 _('[OPTION]... [TEXT]'))
1861 1860 def debugbuilddag(ui, repo, text=None,
1862 1861 mergeable_file=False,
1863 1862 overwritten_file=False,
1864 1863 new_file=False):
1865 1864 """builds a repo with a given DAG from scratch in the current empty repo
1866 1865
1867 1866 The description of the DAG is read from stdin if not given on the
1868 1867 command line.
1869 1868
1870 1869 Elements:
1871 1870
1872 1871 - "+n" is a linear run of n nodes based on the current default parent
1873 1872 - "." is a single node based on the current default parent
1874 1873 - "$" resets the default parent to null (implied at the start);
1875 1874 otherwise the default parent is always the last node created
1876 1875 - "<p" sets the default parent to the backref p
1877 1876 - "*p" is a fork at parent p, which is a backref
1878 1877 - "*p1/p2" is a merge of parents p1 and p2, which are backrefs
1879 1878 - "/p2" is a merge of the preceding node and p2
1880 1879 - ":tag" defines a local tag for the preceding node
1881 1880 - "@branch" sets the named branch for subsequent nodes
1882 1881 - "#...\\n" is a comment up to the end of the line
1883 1882
1884 1883 Whitespace between the above elements is ignored.
1885 1884
1886 1885 A backref is either
1887 1886
1888 1887 - a number n, which references the node curr-n, where curr is the current
1889 1888 node, or
1890 1889 - the name of a local tag you placed earlier using ":tag", or
1891 1890 - empty to denote the default parent.
1892 1891
1893 1892 All string-valued elements must be either strictly alphanumeric or
1894 1893 enclosed in double quotes ("..."), with "\\" as the escape character.

1895 1894 """
1896 1895
1897 1896 if text is None:
1898 1897 ui.status(_("reading DAG from stdin\n"))
1899 1898 text = ui.fin.read()
1900 1899
1901 1900 cl = repo.changelog
1902 1901 if len(cl) > 0:
1903 1902 raise error.Abort(_('repository is not empty'))
1904 1903
1905 1904 # determine number of revs in DAG
1906 1905 total = 0
1907 1906 for type, data in dagparser.parsedag(text):
1908 1907 if type == 'n':
1909 1908 total += 1
1910 1909
1911 1910 if mergeable_file:
1912 1911 linesperrev = 2
1913 1912 # make a file with k lines per rev
1914 1913 initialmergedlines = [str(i) for i in xrange(0, total * linesperrev)]
1915 1914 initialmergedlines.append("")
1916 1915
1917 1916 tags = []
1918 1917
1919 1918 lock = tr = None
1920 1919 try:
1921 1920 lock = repo.lock()
1922 1921 tr = repo.transaction("builddag")
1923 1922
1924 1923 at = -1
1925 1924 atbranch = 'default'
1926 1925 nodeids = []
1927 1926 id = 0
1928 1927 ui.progress(_('building'), id, unit=_('revisions'), total=total)
1929 1928 for type, data in dagparser.parsedag(text):
1930 1929 if type == 'n':
1931 1930 ui.note(('node %s\n' % str(data)))
1932 1931 id, ps = data
1933 1932
1934 1933 files = []
1935 1934 fctxs = {}
1936 1935
1937 1936 p2 = None
1938 1937 if mergeable_file:
1939 1938 fn = "mf"
1940 1939 p1 = repo[ps[0]]
1941 1940 if len(ps) > 1:
1942 1941 p2 = repo[ps[1]]
1943 1942 pa = p1.ancestor(p2)
1944 1943 base, local, other = [x[fn].data() for x in (pa, p1,
1945 1944 p2)]
1946 1945 m3 = simplemerge.Merge3Text(base, local, other)
1947 1946 ml = [l.strip() for l in m3.merge_lines()]
1948 1947 ml.append("")
1949 1948 elif at > 0:
1950 1949 ml = p1[fn].data().split("\n")
1951 1950 else:
1952 1951 ml = initialmergedlines
1953 1952 ml[id * linesperrev] += " r%i" % id
1954 1953 mergedtext = "\n".join(ml)
1955 1954 files.append(fn)
1956 1955 fctxs[fn] = context.memfilectx(repo, fn, mergedtext)
1957 1956
1958 1957 if overwritten_file:
1959 1958 fn = "of"
1960 1959 files.append(fn)
1961 1960 fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
1962 1961
1963 1962 if new_file:
1964 1963 fn = "nf%i" % id
1965 1964 files.append(fn)
1966 1965 fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
1967 1966 if len(ps) > 1:
1968 1967 if not p2:
1969 1968 p2 = repo[ps[1]]
1970 1969 for fn in p2:
1971 1970 if fn.startswith("nf"):
1972 1971 files.append(fn)
1973 1972 fctxs[fn] = p2[fn]
1974 1973
1975 1974 def fctxfn(repo, cx, path):
1976 1975 return fctxs.get(path)
1977 1976
1978 1977 if len(ps) == 0 or ps[0] < 0:
1979 1978 pars = [None, None]
1980 1979 elif len(ps) == 1:
1981 1980 pars = [nodeids[ps[0]], None]
1982 1981 else:
1983 1982 pars = [nodeids[p] for p in ps]
1984 1983 cx = context.memctx(repo, pars, "r%i" % id, files, fctxfn,
1985 1984 date=(id, 0),
1986 1985 user="debugbuilddag",
1987 1986 extra={'branch': atbranch})
1988 1987 nodeid = repo.commitctx(cx)
1989 1988 nodeids.append(nodeid)
1990 1989 at = id
1991 1990 elif type == 'l':
1992 1991 id, name = data
1993 1992 ui.note(('tag %s\n' % name))
1994 1993 tags.append("%s %s\n" % (hex(repo.changelog.node(id)), name))
1995 1994 elif type == 'a':
1996 1995 ui.note(('branch %s\n' % data))
1997 1996 atbranch = data
1998 1997 ui.progress(_('building'), id, unit=_('revisions'), total=total)
1999 1998 tr.close()
2000 1999
2001 2000 if tags:
2002 2001 repo.vfs.write("localtags", "".join(tags))
2003 2002 finally:
2004 2003 ui.progress(_('building'), None)
2005 2004 release(tr, lock)
2006 2005
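A subset of the DAG grammar documented in the `debugbuilddag` docstring ("+n", ".", and ":tag") can be illustrated with a minimal standalone parser. This is a sketch for explanation only, not Mercurial's `dagparser` module, and it ignores backrefs, merges, branches, and comments:

```python
import re

# Minimal sketch: turn "+n", ".", and ":tag" tokens into (id, parent)
# node pairs plus a tag -> node mapping. Parent -1 stands for nullrev.
def parsedag_subset(text):
    nodes = []   # (node id, parent id), in creation order
    tags = {}    # local tag name -> id of the node it labels
    curr = -1    # last node created; the current default parent
    for tok in re.findall(r'\+\d+|\.|:[A-Za-z0-9]+', text):
        if tok.startswith('+'):       # "+n": linear run of n nodes
            for _ in range(int(tok[1:])):
                curr += 1
                nodes.append((curr, curr - 1))
        elif tok == '.':              # ".": one node on the default parent
            curr += 1
            nodes.append((curr, curr - 1))
        else:                         # ":tag" labels the preceding node
            tags[tok[1:]] = curr
    return nodes, tags
```

For example, `"+2 :stable ."` creates three linear nodes and tags the second one `stable`.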
2007 2006 @command('debugbundle',
2008 2007 [('a', 'all', None, _('show all details'))],
2009 2008 _('FILE'),
2010 2009 norepo=True)
2011 2010 def debugbundle(ui, bundlepath, all=None, **opts):
2012 2011 """lists the contents of a bundle"""
2013 2012 f = hg.openpath(ui, bundlepath)
2014 2013 try:
2015 2014 gen = exchange.readbundle(ui, f, bundlepath)
2016 2015 if isinstance(gen, bundle2.unbundle20):
2017 2016 return _debugbundle2(ui, gen, all=all, **opts)
2018 2017 if all:
2019 2018 ui.write(("format: id, p1, p2, cset, delta base, len(delta)\n"))
2020 2019
2021 2020 def showchunks(named):
2022 2021 ui.write("\n%s\n" % named)
2023 2022 chain = None
2024 2023 while True:
2025 2024 chunkdata = gen.deltachunk(chain)
2026 2025 if not chunkdata:
2027 2026 break
2028 2027 node = chunkdata['node']
2029 2028 p1 = chunkdata['p1']
2030 2029 p2 = chunkdata['p2']
2031 2030 cs = chunkdata['cs']
2032 2031 deltabase = chunkdata['deltabase']
2033 2032 delta = chunkdata['delta']
2034 2033 ui.write("%s %s %s %s %s %s\n" %
2035 2034 (hex(node), hex(p1), hex(p2),
2036 2035 hex(cs), hex(deltabase), len(delta)))
2037 2036 chain = node
2038 2037
2039 2038 chunkdata = gen.changelogheader()
2040 2039 showchunks("changelog")
2041 2040 chunkdata = gen.manifestheader()
2042 2041 showchunks("manifest")
2043 2042 while True:
2044 2043 chunkdata = gen.filelogheader()
2045 2044 if not chunkdata:
2046 2045 break
2047 2046 fname = chunkdata['filename']
2048 2047 showchunks(fname)
2049 2048 else:
2050 2049 if isinstance(gen, bundle2.unbundle20):
2051 2050 raise error.Abort(_('use debugbundle2 for this file'))
2052 2051 chunkdata = gen.changelogheader()
2053 2052 chain = None
2054 2053 while True:
2055 2054 chunkdata = gen.deltachunk(chain)
2056 2055 if not chunkdata:
2057 2056 break
2058 2057 node = chunkdata['node']
2059 2058 ui.write("%s\n" % hex(node))
2060 2059 chain = node
2061 2060 finally:
2062 2061 f.close()
2063 2062
2064 2063 def _debugbundle2(ui, gen, **opts):
2065 2064 """lists the contents of a bundle2"""
2066 2065 if not isinstance(gen, bundle2.unbundle20):
2067 2066 raise error.Abort(_('not a bundle2 file'))
2068 2067 ui.write(('Stream params: %s\n' % repr(gen.params)))
2069 2068 for part in gen.iterparts():
2070 2069 ui.write('%s -- %r\n' % (part.type, repr(part.params)))
2071 2070 if part.type == 'changegroup':
2072 2071 version = part.params.get('version', '01')
2073 2072 cg = changegroup.packermap[version][1](part, 'UN')
2074 2073 chunkdata = cg.changelogheader()
2075 2074 chain = None
2076 2075 while True:
2077 2076 chunkdata = cg.deltachunk(chain)
2078 2077 if not chunkdata:
2079 2078 break
2080 2079 node = chunkdata['node']
2081 2080 ui.write(" %s\n" % hex(node))
2082 2081 chain = node
2083 2082
2084 2083 @command('debugcreatestreamclonebundle', [], 'FILE')
2085 2084 def debugcreatestreamclonebundle(ui, repo, fname):
2086 2085 """create a stream clone bundle file
2087 2086
2088 2087 Stream bundles are special bundles that are essentially archives of
2089 2088 revlog files. They are commonly used for cloning very quickly.
2090 2089 """
2091 2090 requirements, gen = streamclone.generatebundlev1(repo)
2092 2091 changegroup.writechunks(ui, gen, fname)
2093 2092
2094 2093 ui.write(_('bundle requirements: %s\n') % ', '.join(sorted(requirements)))
2095 2094
2096 2095 @command('debugapplystreamclonebundle', [], 'FILE')
2097 2096 def debugapplystreamclonebundle(ui, repo, fname):
2098 2097 """apply a stream clone bundle file"""
2099 2098 f = hg.openpath(ui, fname)
2100 2099 gen = exchange.readbundle(ui, f, fname)
2101 2100 gen.apply(repo)
2102 2101
2103 2102 @command('debugcheckstate', [], '')
2104 2103 def debugcheckstate(ui, repo):
2105 2104 """validate the correctness of the current dirstate"""
2106 2105 parent1, parent2 = repo.dirstate.parents()
2107 2106 m1 = repo[parent1].manifest()
2108 2107 m2 = repo[parent2].manifest()
2109 2108 errors = 0
2110 2109 for f in repo.dirstate:
2111 2110 state = repo.dirstate[f]
2112 2111 if state in "nr" and f not in m1:
2113 2112 ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
2114 2113 errors += 1
2115 2114 if state in "a" and f in m1:
2116 2115 ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
2117 2116 errors += 1
2118 2117 if state in "m" and f not in m1 and f not in m2:
2119 2118 ui.warn(_("%s in state %s, but not in either manifest\n") %
2120 2119 (f, state))
2121 2120 errors += 1
2122 2121 for f in m1:
2123 2122 state = repo.dirstate[f]
2124 2123 if state not in "nrm":
2125 2124 ui.warn(_("%s in manifest1, but listed as state %s\n") % (f, state))
2126 2125 errors += 1
2127 2126 if errors:
2128 2127 errstr = _(".hg/dirstate inconsistent with current parent's manifest")
2129 2128 raise error.Abort(errstr)
2130 2129
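The consistency rules `debugcheckstate` applies can be restated as a pure function over a file's dirstate letter ("n"ormal, "a"dded, "r"emoved, "m"erged) and its membership in the two parent manifests. A small sketch, mirroring the checks above:

```python
# Return the list of complaints for one file; empty means consistent.
def checkstate(state, inm1, inm2):
    errors = []
    # normal/removed files must exist in the first parent's manifest
    if state in "nr" and not inm1:
        errors.append("in state %s, but not in manifest1" % state)
    # added files must NOT already exist in the first parent's manifest
    if state == "a" and inm1:
        errors.append("in state a, but also in manifest1")
    # merged files must come from at least one parent
    if state == "m" and not (inm1 or inm2):
        errors.append("in state m, but not in either manifest")
    return errors
```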
2131 2130 @command('debugcommands', [], _('[COMMAND]'), norepo=True)
2132 2131 def debugcommands(ui, cmd='', *args):
2133 2132 """list all available commands and options"""
2134 2133 for cmd, vals in sorted(table.iteritems()):
2135 2134 cmd = cmd.split('|')[0].strip('^')
2136 2135 opts = ', '.join([i[1] for i in vals[1]])
2137 2136 ui.write('%s: %s\n' % (cmd, opts))
2138 2137
2139 2138 @command('debugcomplete',
2140 2139 [('o', 'options', None, _('show the command options'))],
2141 2140 _('[-o] CMD'),
2142 2141 norepo=True)
2143 2142 def debugcomplete(ui, cmd='', **opts):
2144 2143 """returns the completion list associated with the given command"""
2145 2144
2146 2145 if opts.get('options'):
2147 2146 options = []
2148 2147 otables = [globalopts]
2149 2148 if cmd:
2150 2149 aliases, entry = cmdutil.findcmd(cmd, table, False)
2151 2150 otables.append(entry[1])
2152 2151 for t in otables:
2153 2152 for o in t:
2154 2153 if "(DEPRECATED)" in o[3]:
2155 2154 continue
2156 2155 if o[0]:
2157 2156 options.append('-%s' % o[0])
2158 2157 options.append('--%s' % o[1])
2159 2158 ui.write("%s\n" % "\n".join(options))
2160 2159 return
2161 2160
2162 2161 cmdlist, unused_allcmds = cmdutil.findpossible(cmd, table)
2163 2162 if ui.verbose:
2164 2163 cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
2165 2164 ui.write("%s\n" % "\n".join(sorted(cmdlist)))
2166 2165
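The completion that `debugcomplete` delegates to `cmdutil.findpossible` can be sketched against an hg-style command table, where keys look like `"^commit|ci"` (`^` marks commonly used commands, `|` separates aliases). A simplified stand-in, not the real `findpossible`:

```python
# Return every command name or alias in the table matching the prefix.
def complete(prefix, table):
    names = []
    for key in table:
        for alias in key.lstrip('^').split('|'):
            if alias.startswith(prefix):
                names.append(alias)
    return sorted(names)
```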
2167 2166 @command('debugdag',
2168 2167 [('t', 'tags', None, _('use tags as labels')),
2169 2168 ('b', 'branches', None, _('annotate with branch names')),
2170 2169 ('', 'dots', None, _('use dots for runs')),
2171 2170 ('s', 'spaces', None, _('separate elements by spaces'))],
2172 2171 _('[OPTION]... [FILE [REV]...]'),
2173 2172 optionalrepo=True)
2174 2173 def debugdag(ui, repo, file_=None, *revs, **opts):
2175 2174 """format the changelog or an index DAG as a concise textual description
2176 2175
2177 2176 If you pass a revlog index, the revlog's DAG is emitted. If you list
2178 2177 revision numbers, they get labeled in the output as rN.
2179 2178
2180 2179 Otherwise, the changelog DAG of the current repo is emitted.
2181 2180 """
2182 2181 spaces = opts.get('spaces')
2183 2182 dots = opts.get('dots')
2184 2183 if file_:
2185 2184 rlog = revlog.revlog(scmutil.opener(os.getcwd(), audit=False), file_)
2186 2185 revs = set((int(r) for r in revs))
2187 2186 def events():
2188 2187 for r in rlog:
2189 2188 yield 'n', (r, list(p for p in rlog.parentrevs(r)
2190 2189 if p != -1))
2191 2190 if r in revs:
2192 2191 yield 'l', (r, "r%i" % r)
2193 2192 elif repo:
2194 2193 cl = repo.changelog
2195 2194 tags = opts.get('tags')
2196 2195 branches = opts.get('branches')
2197 2196 if tags:
2198 2197 labels = {}
2199 2198 for l, n in repo.tags().items():
2200 2199 labels.setdefault(cl.rev(n), []).append(l)
2201 2200 def events():
2202 2201 b = "default"
2203 2202 for r in cl:
2204 2203 if branches:
2205 2204 newb = cl.read(cl.node(r))[5]['branch']
2206 2205 if newb != b:
2207 2206 yield 'a', newb
2208 2207 b = newb
2209 2208 yield 'n', (r, list(p for p in cl.parentrevs(r)
2210 2209 if p != -1))
2211 2210 if tags:
2212 2211 ls = labels.get(r)
2213 2212 if ls:
2214 2213 for l in ls:
2215 2214 yield 'l', (r, l)
2216 2215 else:
2217 2216 raise error.Abort(_('need repo for changelog dag'))
2218 2217
2219 2218 for line in dagparser.dagtextlines(events(),
2220 2219 addspaces=spaces,
2221 2220 wraplabels=True,
2222 2221 wrapannotations=True,
2223 2222 wrapnonlinear=dots,
2224 2223 usedots=dots,
2225 2224 maxlinewidth=70):
2226 2225 ui.write(line)
2227 2226 ui.write("\n")
2228 2227
2229 2228 @command('debugdata', debugrevlogopts, _('-c|-m|FILE REV'))
2230 2229 def debugdata(ui, repo, file_, rev=None, **opts):
2231 2230 """dump the contents of a data file revision"""
2232 2231 if opts.get('changelog') or opts.get('manifest'):
2233 2232 file_, rev = None, file_
2234 2233 elif rev is None:
2235 2234 raise error.CommandError('debugdata', _('invalid arguments'))
2236 2235 r = cmdutil.openrevlog(repo, 'debugdata', file_, opts)
2237 2236 try:
2238 2237 ui.write(r.revision(r.lookup(rev)))
2239 2238 except KeyError:
2240 2239 raise error.Abort(_('invalid revision identifier %s') % rev)
2241 2240
2242 2241 @command('debugdate',
2243 2242 [('e', 'extended', None, _('try extended date formats'))],
2244 2243 _('[-e] DATE [RANGE]'),
2245 2244 norepo=True, optionalrepo=True)
2246 2245 def debugdate(ui, date, range=None, **opts):
2247 2246 """parse and display a date"""
2248 2247 if opts["extended"]:
2249 2248 d = util.parsedate(date, util.extendeddateformats)
2250 2249 else:
2251 2250 d = util.parsedate(date)
2252 2251 ui.write(("internal: %s %s\n") % d)
2253 2252 ui.write(("standard: %s\n") % util.datestr(d))
2254 2253 if range:
2255 2254 m = util.matchdate(range)
2256 2255 ui.write(("match: %s\n") % m(d[0]))
2257 2256
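The "internal" pair that `debugdate` prints is Mercurial's date representation: `(unixtime, offset)`, where `offset` is the local timezone's displacement west of UTC in seconds (so UTC+1 is -3600). A minimal sketch of rendering that pair into a standard-looking date string:

```python
import time

# Render Mercurial's (unixtime, offset) date pair; offset is seconds
# west of UTC, so local time is UTC minus the offset.
def render(d):
    unixtime, offset = d
    local = time.gmtime(unixtime - offset)
    sign = '-' if offset > 0 else '+'
    hh, mm = divmod(abs(offset) // 60, 60)
    return (time.strftime('%Y-%m-%d %H:%M:%S', local)
            + ' %s%02d%02d' % (sign, hh, mm))
```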
2258 2257 @command('debugdiscovery',
2259 2258 [('', 'old', None, _('use old-style discovery')),
2260 2259 ('', 'nonheads', None,
2261 2260 _('use old-style discovery with non-heads included')),
2262 2261 ] + remoteopts,
2263 2262 _('[-l REV] [-r REV] [-b BRANCH]... [OTHER]'))
2264 2263 def debugdiscovery(ui, repo, remoteurl="default", **opts):
2265 2264 """runs the changeset discovery protocol in isolation"""
2266 2265 remoteurl, branches = hg.parseurl(ui.expandpath(remoteurl),
2267 2266 opts.get('branch'))
2268 2267 remote = hg.peer(repo, opts, remoteurl)
2269 2268 ui.status(_('comparing with %s\n') % util.hidepassword(remoteurl))
2270 2269
2271 2270 # make sure tests are repeatable
2272 2271 random.seed(12323)
2273 2272
2274 2273 def doit(localheads, remoteheads, remote=remote):
2275 2274 if opts.get('old'):
2276 2275 if localheads:
2277 2276 raise error.Abort('cannot use localheads with old style '
2278 2277 'discovery')
2279 2278 if not util.safehasattr(remote, 'branches'):
2280 2279 # enable in-client legacy support
2281 2280 remote = localrepo.locallegacypeer(remote.local())
2282 2281 common, _in, hds = treediscovery.findcommonincoming(repo, remote,
2283 2282 force=True)
2284 2283 common = set(common)
2285 2284 if not opts.get('nonheads'):
2286 2285 ui.write(("unpruned common: %s\n") %
2287 2286 " ".join(sorted(short(n) for n in common)))
2288 2287 dag = dagutil.revlogdag(repo.changelog)
2289 2288 all = dag.ancestorset(dag.internalizeall(common))
2290 2289 common = dag.externalizeall(dag.headsetofconnecteds(all))
2291 2290 else:
2292 2291 common, any, hds = setdiscovery.findcommonheads(ui, repo, remote)
2293 2292 common = set(common)
2294 2293 rheads = set(hds)
2295 2294 lheads = set(repo.heads())
2296 2295 ui.write(("common heads: %s\n") %
2297 2296 " ".join(sorted(short(n) for n in common)))
2298 2297 if lheads <= common:
2299 2298 ui.write(("local is subset\n"))
2300 2299 elif rheads <= common:
2301 2300 ui.write(("remote is subset\n"))
2302 2301
2303 2302 serverlogs = opts.get('serverlog')
2304 2303 if serverlogs:
2305 2304 for filename in serverlogs:
2306 2305 logfile = open(filename, 'r')
2307 2306 try:
2308 2307 line = logfile.readline()
2309 2308 while line:
2310 2309 parts = line.strip().split(';')
2311 2310 op = parts[1]
2312 2311 if op == 'cg':
2313 2312 pass
2314 2313 elif op == 'cgss':
2315 2314 doit(parts[2].split(' '), parts[3].split(' '))
2316 2315 elif op == 'unb':
2317 2316 doit(parts[3].split(' '), parts[2].split(' '))
2318 2317 line = logfile.readline()
2319 2318 finally:
2320 2319 logfile.close()
2321 2320
2322 2321 else:
2323 2322 remoterevs, _checkout = hg.addbranchrevs(repo, remote, branches,
2324 2323 opts.get('remote_head'))
2325 2324 localrevs = opts.get('local_head')
2326 2325 doit(localrevs, remoterevs)
2327 2326
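The subset checks at the end of `doit()` classify how the two repositories relate once the common head set is known. Extracted as a standalone sketch:

```python
# Classify local vs remote given the discovered common heads.
def classify(lheads, rheads, common):
    if set(lheads) <= set(common):
        return 'local is subset'
    if set(rheads) <= set(common):
        return 'remote is subset'
    return 'repos have diverged'
```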
2328 2327 @command('debugextensions', formatteropts, [], norepo=True)
2329 2328 def debugextensions(ui, **opts):
2330 2329 '''show information about active extensions'''
2331 2330 exts = extensions.extensions(ui)
2332 2331 fm = ui.formatter('debugextensions', opts)
2333 2332 for extname, extmod in sorted(exts, key=operator.itemgetter(0)):
2334 2333 extsource = extmod.__file__
2335 2334 exttestedwith = getattr(extmod, 'testedwith', None)
2336 2335 if exttestedwith is not None:
2337 2336 exttestedwith = exttestedwith.split()
2338 2337 extbuglink = getattr(extmod, 'buglink', None)
2339 2338
2340 2339 fm.startitem()
2341 2340
2342 2341 if ui.quiet or ui.verbose:
2343 2342 fm.write('name', '%s\n', extname)
2344 2343 else:
2345 2344 fm.write('name', '%s', extname)
2346 2345 if not exttestedwith:
2347 2346 fm.plain(_(' (untested!)\n'))
2348 2347 else:
2349 2348 if exttestedwith == ['internal'] or \
2350 2349 util.version() in exttestedwith:
2351 2350 fm.plain('\n')
2352 2351 else:
2353 2352 lasttestedversion = exttestedwith[-1]
2354 2353 fm.plain(' (%s!)\n' % lasttestedversion)
2355 2354
2356 2355 fm.condwrite(ui.verbose and extsource, 'source',
2357 2356 _(' location: %s\n'), extsource or "")
2358 2357
2359 2358 fm.condwrite(ui.verbose and exttestedwith, 'testedwith',
2360 2359 _(' tested with: %s\n'), ' '.join(exttestedwith or []))
2361 2360
2362 2361 fm.condwrite(ui.verbose and extbuglink, 'buglink',
2363 2362 _(' bug reporting: %s\n'), extbuglink or "")
2364 2363
2365 2364 fm.end()
2366 2365
2367 2366 @command('debugfileset',
2368 2367 [('r', 'rev', '', _('apply the filespec on this revision'), _('REV'))],
2369 2368 _('[-r REV] FILESPEC'))
2370 2369 def debugfileset(ui, repo, expr, **opts):
2371 2370 '''parse and apply a fileset specification'''
2372 2371 ctx = scmutil.revsingle(repo, opts.get('rev'), None)
2373 2372 if ui.verbose:
2374 2373 tree = fileset.parse(expr)
2375 2374 ui.note(fileset.prettyformat(tree), "\n")
2376 2375
2377 2376 for f in ctx.getfileset(expr):
2378 2377 ui.write("%s\n" % f)
2379 2378
2380 2379 @command('debugfsinfo', [], _('[PATH]'), norepo=True)
2381 2380 def debugfsinfo(ui, path="."):
2382 2381 """show information detected about current filesystem"""
2383 2382 util.writefile('.debugfsinfo', '')
2384 2383 ui.write(('exec: %s\n') % (util.checkexec(path) and 'yes' or 'no'))
2385 2384 ui.write(('symlink: %s\n') % (util.checklink(path) and 'yes' or 'no'))
2386 2385 ui.write(('hardlink: %s\n') % (util.checknlink(path) and 'yes' or 'no'))
2387 2386 ui.write(('case-sensitive: %s\n') % (util.checkcase('.debugfsinfo')
2388 2387 and 'yes' or 'no'))
2389 2388 os.unlink('.debugfsinfo')
2390 2389
2391 2390 @command('debuggetbundle',
2392 2391 [('H', 'head', [], _('id of head node'), _('ID')),
2393 2392 ('C', 'common', [], _('id of common node'), _('ID')),
2394 2393 ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE'))],
2395 2394 _('REPO FILE [-H|-C ID]...'),
2396 2395 norepo=True)
2397 2396 def debuggetbundle(ui, repopath, bundlepath, head=None, common=None, **opts):
2398 2397 """retrieves a bundle from a repo
2399 2398
2400 2399 Every ID must be a full-length hex node id string. Saves the bundle to the
2401 2400 given file.
2402 2401 """
2403 2402 repo = hg.peer(ui, opts, repopath)
2404 2403 if not repo.capable('getbundle'):
2405 2404 raise error.Abort("getbundle() not supported by target repository")
2406 2405 args = {}
2407 2406 if common:
2408 2407 args['common'] = [bin(s) for s in common]
2409 2408 if head:
2410 2409 args['heads'] = [bin(s) for s in head]
2411 2410 # TODO: get desired bundlecaps from command line.
2412 2411 args['bundlecaps'] = None
2413 2412 bundle = repo.getbundle('debug', **args)
2414 2413
2415 2414 bundletype = opts.get('type', 'bzip2').lower()
2416 2415 btypes = {'none': 'HG10UN',
2417 2416 'bzip2': 'HG10BZ',
2418 2417 'gzip': 'HG10GZ',
2419 2418 'bundle2': 'HG20'}
2420 2419 bundletype = btypes.get(bundletype)
2421 2420 if bundletype not in changegroup.bundletypes:
2422 2421 raise error.Abort(_('unknown bundle type specified with --type'))
2423 2422 changegroup.writebundle(ui, bundle, bundlepath, bundletype)
2424 2423
2425 2424 @command('debugignore', [], '')
2426 2425 def debugignore(ui, repo, *values, **opts):
2427 2426 """display the combined ignore pattern"""
2428 2427 ignore = repo.dirstate._ignore
2429 2428 includepat = getattr(ignore, 'includepat', None)
2430 2429 if includepat is not None:
2431 2430 ui.write("%s\n" % includepat)
2432 2431 else:
2433 2432 raise error.Abort(_("no ignore patterns found"))
2434 2433
2435 2434 @command('debugindex', debugrevlogopts +
2436 2435 [('f', 'format', 0, _('revlog format'), _('FORMAT'))],
2437 2436 _('[-f FORMAT] -c|-m|FILE'),
2438 2437 optionalrepo=True)
2439 2438 def debugindex(ui, repo, file_=None, **opts):
2440 2439 """dump the contents of an index file"""
2441 2440 r = cmdutil.openrevlog(repo, 'debugindex', file_, opts)
2442 2441 format = opts.get('format', 0)
2443 2442 if format not in (0, 1):
2444 2443 raise error.Abort(_("unknown format %d") % format)
2445 2444
2446 2445 generaldelta = r.version & revlog.REVLOGGENERALDELTA
2447 2446 if generaldelta:
2448 2447 basehdr = ' delta'
2449 2448 else:
2450 2449 basehdr = ' base'
2451 2450
2452 2451 if ui.debugflag:
2453 2452 shortfn = hex
2454 2453 else:
2455 2454 shortfn = short
2456 2455
2457 2456 # There might not be anything in r, so have a sane default
2458 2457 idlen = 12
2459 2458 for i in r:
2460 2459 idlen = len(shortfn(r.node(i)))
2461 2460 break
2462 2461
2463 2462 if format == 0:
2464 2463 ui.write(" rev offset length " + basehdr + " linkrev"
2465 2464 " %s %s p2\n" % ("nodeid".ljust(idlen), "p1".ljust(idlen)))
2466 2465 elif format == 1:
2467 2466 ui.write(" rev flag offset length"
2468 2467 " size " + basehdr + " link p1 p2"
2469 2468 " %s\n" % "nodeid".rjust(idlen))
2470 2469
2471 2470 for i in r:
2472 2471 node = r.node(i)
2473 2472 if generaldelta:
2474 2473 base = r.deltaparent(i)
2475 2474 else:
2476 2475 base = r.chainbase(i)
2477 2476 if format == 0:
2478 2477 try:
2479 2478 pp = r.parents(node)
2480 2479 except Exception:
2481 2480 pp = [nullid, nullid]
2482 2481 ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
2483 2482 i, r.start(i), r.length(i), base, r.linkrev(i),
2484 2483 shortfn(node), shortfn(pp[0]), shortfn(pp[1])))
2485 2484 elif format == 1:
2486 2485 pr = r.parentrevs(i)
2487 2486 ui.write("% 6d %04x % 8d % 8d % 8d % 6d % 6d % 6d % 6d %s\n" % (
2488 2487 i, r.flags(i), r.start(i), r.length(i), r.rawsize(i),
2489 2488 base, r.linkrev(i), pr[0], pr[1], shortfn(node)))
2490 2489
2491 2490 @command('debugindexdot', debugrevlogopts,
2492 2491 _('-c|-m|FILE'), optionalrepo=True)
2493 2492 def debugindexdot(ui, repo, file_=None, **opts):
2494 2493 """dump an index DAG as a graphviz dot file"""
2495 2494 r = cmdutil.openrevlog(repo, 'debugindexdot', file_, opts)
2496 2495 ui.write(("digraph G {\n"))
2497 2496 for i in r:
2498 2497 node = r.node(i)
2499 2498 pp = r.parents(node)
2500 2499 ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
2501 2500 if pp[1] != nullid:
2502 2501 ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
2503 2502 ui.write("}\n")
2504 2503
2505 2504 @command('debugdeltachain',
2506 2505 debugrevlogopts + formatteropts,
2507 2506 _('-c|-m|FILE'),
2508 2507 optionalrepo=True)
2509 2508 def debugdeltachain(ui, repo, file_=None, **opts):
2510 2509 """dump information about delta chains in a revlog
2511 2510
2512 2511 Output can be templatized. Available template keywords are:
2513 2512
2514 2513 rev revision number
2515 2514 chainid delta chain identifier (numbered by unique base)
2516 2515 chainlen delta chain length to this revision
2517 2516 prevrev previous revision in delta chain
2518 2517 deltatype role of delta / how it was computed
2519 2518 compsize compressed size of revision
2520 2519 uncompsize uncompressed size of revision
2521 2520 chainsize total size of compressed revisions in chain
2522 2521 chainratio total chain size divided by uncompressed revision size
2523 2522 (new delta chains typically start at ratio 2.00)
2524 2523 lindist linear distance from base revision in delta chain to end
2525 2524 of this revision
2526 2525 extradist total size of revisions not part of this delta chain from
2527 2526 base of delta chain to end of this revision; a measurement
2528 2527 of how much extra data we need to read/seek across to read
2529 2528 the delta chain for this revision
2530 2529 extraratio extradist divided by chainsize; another representation of
2531 2530 how much unrelated data is needed to load this delta chain
2532 2531 """
2533 2532 r = cmdutil.openrevlog(repo, 'debugdeltachain', file_, opts)
2534 2533 index = r.index
2535 2534 generaldelta = r.version & revlog.REVLOGGENERALDELTA
2536 2535
2537 2536 def revinfo(rev):
2538 2537 iterrev = rev
2539 2538 e = index[iterrev]
2540 2539 chain = []
2541 2540 compsize = e[1]
2542 2541 uncompsize = e[2]
2543 2542 chainsize = 0
2544 2543
2545 2544 if generaldelta:
2546 2545 if e[3] == e[5]:
2547 2546 deltatype = 'p1'
2548 2547 elif e[3] == e[6]:
2549 2548 deltatype = 'p2'
2550 2549 elif e[3] == rev - 1:
2551 2550 deltatype = 'prev'
2552 2551 elif e[3] == rev:
2553 2552 deltatype = 'base'
2554 2553 else:
2555 2554 deltatype = 'other'
2556 2555 else:
2557 2556 if e[3] == rev:
2558 2557 deltatype = 'base'
2559 2558 else:
2560 2559 deltatype = 'prev'
2561 2560
2562 2561 while iterrev != e[3]:
2563 2562 chain.append(iterrev)
2564 2563 chainsize += e[1]
2565 2564 if generaldelta:
2566 2565 iterrev = e[3]
2567 2566 else:
2568 2567 iterrev -= 1
2569 2568 e = index[iterrev]
2570 2569 else:
2571 2570 chainsize += e[1]
2572 2571 chain.append(iterrev)
2573 2572
2574 2573 chain.reverse()
2575 2574 return compsize, uncompsize, deltatype, chain, chainsize
2576 2575
2577 2576 fm = ui.formatter('debugdeltachain', opts)
2578 2577
2579 2578 fm.plain(' rev chain# chainlen prev delta '
2580 2579 'size rawsize chainsize ratio lindist extradist '
2581 2580 'extraratio\n')
2582 2581
2583 2582 chainbases = {}
2584 2583 for rev in r:
2585 2584 comp, uncomp, deltatype, chain, chainsize = revinfo(rev)
2586 2585 chainbase = chain[0]
2587 2586 chainid = chainbases.setdefault(chainbase, len(chainbases) + 1)
2588 2587 basestart = r.start(chainbase)
2589 2588 revstart = r.start(rev)
2590 2589 lineardist = revstart + comp - basestart
2591 2590 extradist = lineardist - chainsize
2592 2591 try:
2593 2592 prevrev = chain[-2]
2594 2593 except IndexError:
2595 2594 prevrev = -1
2596 2595
2597 2596 chainratio = float(chainsize) / float(uncomp)
2598 2597 extraratio = float(extradist) / float(chainsize)
2599 2598
2600 2599 fm.startitem()
2601 2600 fm.write('rev chainid chainlen prevrev deltatype compsize '
2602 2601 'uncompsize chainsize chainratio lindist extradist '
2603 2602 'extraratio',
2604 2603 '%7d %7d %8d %8d %7s %10d %10d %10d %9.5f %9d %9d %10.5f\n',
2605 2604 rev, chainid, len(chain), prevrev, deltatype, comp,
2606 2605 uncomp, chainsize, chainratio, lineardist, extradist,
2607 2606 extraratio,
2608 2607 rev=rev, chainid=chainid, chainlen=len(chain),
2609 2608 prevrev=prevrev, deltatype=deltatype, compsize=comp,
2610 2609 uncompsize=uncomp, chainsize=chainsize,
2611 2610 chainratio=chainratio, lindist=lineardist,
2612 2611 extradist=extradist, extraratio=extraratio)
2613 2612
2614 2613 fm.end()
2615 2614
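The chain walk inside `revinfo()` above can be shown on a toy generaldelta index where each entry is `(compressed_size, deltabase_rev)` and `deltabase == rev` marks a full snapshot. A sketch of the same loop, returning the chain and its total compressed size:

```python
# Walk a delta chain back to its base snapshot, like revinfo() does.
def deltachain(index, rev):
    chain = []
    chainsize = 0
    iterrev = rev
    size, base = index[iterrev]
    while iterrev != base:          # follow deltas until the snapshot
        chain.append(iterrev)
        chainsize += size
        iterrev = base
        size, base = index[iterrev]
    chain.append(iterrev)           # the base revision itself
    chainsize += size
    chain.reverse()                 # oldest (base) first
    return chain, chainsize
```

With `chainsize` and the revision's uncompressed size in hand, `chainratio` is simply their quotient, which is why fresh chains typically start near 2.00.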
2616 2615 @command('debuginstall', [], '', norepo=True)
2617 2616 def debuginstall(ui):
2618 2617 '''test Mercurial installation
2619 2618
2620 2619 Returns 0 on success.
2621 2620 '''
2622 2621
2623 2622 def writetemp(contents):
2624 2623 (fd, name) = tempfile.mkstemp(prefix="hg-debuginstall-")
2625 2624 f = os.fdopen(fd, "wb")
2626 2625 f.write(contents)
2627 2626 f.close()
2628 2627 return name
2629 2628
2630 2629 problems = 0
2631 2630
2632 2631 # encoding
2633 2632 ui.status(_("checking encoding (%s)...\n") % encoding.encoding)
2634 2633 try:
2635 2634 encoding.fromlocal("test")
2636 2635 except error.Abort as inst:
2637 2636 ui.write(" %s\n" % inst)
2638 2637 ui.write(_(" (check that your locale is properly set)\n"))
2639 2638 problems += 1
2640 2639
2641 2640 # Python
2642 2641 ui.status(_("checking Python executable (%s)\n") % sys.executable)
2643 2642 ui.status(_("checking Python version (%s)\n")
2644 2643 % ("%s.%s.%s" % sys.version_info[:3]))
2645 2644 ui.status(_("checking Python lib (%s)...\n")
2646 2645 % os.path.dirname(os.__file__))
2647 2646
2648 2647 # compiled modules
    ui.status(_("checking installed modules (%s)...\n")
              % os.path.dirname(__file__))
    try:
        import bdiff, mpatch, base85, osutil
        dir(bdiff), dir(mpatch), dir(base85), dir(osutil) # quiet pyflakes
    except Exception as inst:
        ui.write(" %s\n" % inst)
        ui.write(_(" One or more extensions could not be found"))
        ui.write(_(" (check that you compiled the extensions)\n"))
        problems += 1

    # templates
    import templater
    p = templater.templatepaths()
    ui.status(_("checking templates (%s)...\n") % ' '.join(p))
    if p:
        m = templater.templatepath("map-cmdline.default")
        if m:
            # template found, check if it is working
            try:
                templater.templater(m)
            except Exception as inst:
                ui.write(" %s\n" % inst)
                p = None
        else:
            ui.write(_(" template 'default' not found\n"))
            p = None
    else:
        ui.write(_(" no template directories found\n"))
    if not p:
        ui.write(_(" (templates seem to have been installed incorrectly)\n"))
        problems += 1

    # editor
    ui.status(_("checking commit editor...\n"))
    editor = ui.geteditor()
    editor = util.expandpath(editor)
    cmdpath = util.findexe(shlex.split(editor)[0])
    if not cmdpath:
        if editor == 'vi':
            ui.write(_(" No commit editor set and can't find vi in PATH\n"))
            ui.write(_(" (specify a commit editor in your configuration"
                       " file)\n"))
        else:
            ui.write(_(" Can't find editor '%s' in PATH\n") % editor)
            ui.write(_(" (specify a commit editor in your configuration"
                       " file)\n"))
        problems += 1

    # check username
    ui.status(_("checking username...\n"))
    try:
        ui.username()
    except error.Abort as e:
        ui.write(" %s\n" % e)
        ui.write(_(" (specify a username in your configuration file)\n"))
        problems += 1

    if not problems:
        ui.status(_("no problems detected\n"))
    else:
        ui.write(_("%s problems detected,"
                   " please check your install!\n") % problems)

    return problems

@command('debugknown', [], _('REPO ID...'), norepo=True)
def debugknown(ui, repopath, *ids, **opts):
    """test whether node ids are known to a repo

    Every ID must be a full-length hex node id string. Returns a list of 0s
    and 1s indicating unknown/known.
    """
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable('known'):
        raise error.Abort("known() not supported by target repository")
    flags = repo.known([bin(s) for s in ids])
    ui.write("%s\n" % ("".join([f and "1" or "0" for f in flags])))

@command('debuglabelcomplete', [], _('LABEL...'))
def debuglabelcomplete(ui, repo, *args):
    '''backwards compatibility with old bash completion scripts (DEPRECATED)'''
    debugnamecomplete(ui, repo, *args)

@command('debugmergestate', [], '')
def debugmergestate(ui, repo, *args):
    """print merge state

    Use --verbose to print out information about whether v1 or v2 merge state
    was chosen."""
    def _hashornull(h):
        if h == nullhex:
            return 'null'
        else:
            return h

    def printrecords(version):
        ui.write(('* version %s records\n') % version)
        if version == 1:
            records = v1records
        else:
            records = v2records

        for rtype, record in records:
            # pretty print some record types
            if rtype == 'L':
                ui.write(('local: %s\n') % record)
            elif rtype == 'O':
                ui.write(('other: %s\n') % record)
            elif rtype == 'm':
                driver, mdstate = record.split('\0', 1)
                ui.write(('merge driver: %s (state "%s")\n')
                         % (driver, mdstate))
            elif rtype in 'FDC':
                r = record.split('\0')
                f, state, hash, lfile, afile, anode, ofile = r[0:7]
                if version == 1:
                    onode = 'not stored in v1 format'
                    flags = r[7]
                else:
                    onode, flags = r[7:9]
                ui.write(('file: %s (record type "%s", state "%s", hash %s)\n')
                         % (f, rtype, state, _hashornull(hash)))
                ui.write(('  local path: %s (flags "%s")\n') % (lfile, flags))
                ui.write(('  ancestor path: %s (node %s)\n')
                         % (afile, _hashornull(anode)))
                ui.write(('  other path: %s (node %s)\n')
                         % (ofile, _hashornull(onode)))
            else:
                ui.write(('unrecognized entry: %s\t%s\n')
                         % (rtype, record.replace('\0', '\t')))

    # Avoid mergestate.read() since it may raise an exception for unsupported
    # merge state records. We shouldn't be doing this, but this is OK since this
    # command is pretty low-level.
    ms = mergemod.mergestate(repo)

    # sort so that reasonable information is on top
    v1records = ms._readrecordsv1()
    v2records = ms._readrecordsv2()
    order = 'LOm'
    def key(r):
        idx = order.find(r[0])
        if idx == -1:
            return (1, r[1])
        else:
            return (0, idx)
    v1records.sort(key=key)
    v2records.sort(key=key)

    if not v1records and not v2records:
        ui.write(('no merge state found\n'))
    elif not v2records:
        ui.note(('no version 2 merge state\n'))
        printrecords(1)
    elif ms._v1v2match(v1records, v2records):
        ui.note(('v1 and v2 states match: using v2\n'))
        printrecords(2)
    else:
        ui.note(('v1 and v2 states mismatch: using v1\n'))
        printrecords(1)
        if ui.verbose:
            printrecords(2)

@command('debugnamecomplete', [], _('NAME...'))
def debugnamecomplete(ui, repo, *args):
    '''complete "names" - tags, open branch names, bookmark names'''

    names = set()
    # since we previously only listed open branches, we will handle that
    # specially (after this for loop)
    for name, ns in repo.names.iteritems():
        if name != 'branches':
            names.update(ns.listnames(repo))
    names.update(tag for (tag, heads, tip, closed)
                 in repo.branchmap().iterbranches() if not closed)
    completions = set()
    if not args:
        args = ['']
    for a in args:
        completions.update(n for n in names if n.startswith(a))
    ui.write('\n'.join(sorted(completions)))
    ui.write('\n')

@command('debuglocks',
         [('L', 'force-lock', None, _('free the store lock (DANGEROUS)')),
          ('W', 'force-wlock', None,
           _('free the working state lock (DANGEROUS)'))],
         _('[OPTION]...'))
def debuglocks(ui, repo, **opts):
    """show or modify state of locks

    By default, this command will show which locks are held. This
    includes the user and process holding the lock, the amount of time
    the lock has been held, and the machine name where the process is
    running if it's not local.

    Locks protect the integrity of Mercurial's data, so should be
    treated with care. System crashes or other interruptions may cause
    locks to not be properly released, though Mercurial will usually
    detect and remove such stale locks automatically.

    However, detecting stale locks may not always be possible (for
    instance, on a shared filesystem). Removing locks may also be
    blocked by filesystem permissions.

    Returns 0 if no locks are held.

    """

    if opts.get('force_lock'):
        repo.svfs.unlink('lock')
    if opts.get('force_wlock'):
        repo.vfs.unlink('wlock')
    if opts.get('force_lock') or opts.get('force_wlock'):
        return 0

    now = time.time()
    held = 0

    def report(vfs, name, method):
        # this causes stale locks to get reaped for more accurate reporting
        try:
            l = method(False)
        except error.LockHeld:
            l = None

        if l:
            l.release()
        else:
            try:
                stat = vfs.lstat(name)
                age = now - stat.st_mtime
                user = util.username(stat.st_uid)
                locker = vfs.readlock(name)
                if ":" in locker:
                    host, pid = locker.split(':')
                    if host == socket.gethostname():
                        locker = 'user %s, process %s' % (user, pid)
                    else:
                        locker = 'user %s, process %s, host %s' \
                                 % (user, pid, host)
                ui.write("%-6s %s (%ds)\n" % (name + ":", locker, age))
                return 1
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise

        ui.write("%-6s free\n" % (name + ":"))
        return 0

    held += report(repo.svfs, "lock", repo.lock)
    held += report(repo.vfs, "wlock", repo.wlock)

    return held

@command('debugobsolete',
         [('', 'flags', 0, _('markers flag')),
          ('', 'record-parents', False,
           _('record parent information for the precursor')),
          ('r', 'rev', [], _('display markers relevant to REV')),
         ] + commitopts2,
         _('[OBSOLETED [REPLACEMENT ...]]'))
def debugobsolete(ui, repo, precursor=None, *successors, **opts):
    """create arbitrary obsolete marker

    With no arguments, displays the list of obsolescence markers."""

    def parsenodeid(s):
        try:
            # We do not use revsingle/revrange functions here to accept
            # arbitrary node identifiers, possibly not present in the
            # local repository.
            n = bin(s)
            if len(n) != len(nullid):
                raise TypeError()
            return n
        except TypeError:
            raise error.Abort('changeset references must be full hexadecimal '
                              'node identifiers')

    if precursor is not None:
        if opts['rev']:
            raise error.Abort('cannot select revision when creating marker')
        metadata = {}
        metadata['user'] = opts['user'] or ui.username()
        succs = tuple(parsenodeid(succ) for succ in successors)
        l = repo.lock()
        try:
            tr = repo.transaction('debugobsolete')
            try:
                date = opts.get('date')
                if date:
                    date = util.parsedate(date)
                else:
                    date = None
                prec = parsenodeid(precursor)
                parents = None
                if opts['record_parents']:
                    if prec not in repo.unfiltered():
                        raise error.Abort('cannot use --record-parents on '
                                          'unknown changesets')
                    parents = repo.unfiltered()[prec].parents()
                    parents = tuple(p.node() for p in parents)
                repo.obsstore.create(tr, prec, succs, opts['flags'],
                                     parents=parents, date=date,
                                     metadata=metadata)
                tr.close()
            except ValueError as exc:
                raise error.Abort(_('bad obsmarker input: %s') % exc)
            finally:
                tr.release()
        finally:
            l.release()
    else:
        if opts['rev']:
            revs = scmutil.revrange(repo, opts['rev'])
            nodes = [repo[r].node() for r in revs]
            markers = list(obsolete.getmarkers(repo, nodes=nodes))
            markers.sort(key=lambda x: x._data)
        else:
            markers = obsolete.getmarkers(repo)

        for m in markers:
            cmdutil.showmarker(ui, m)

@command('debugpathcomplete',
         [('f', 'full', None, _('complete an entire path')),
          ('n', 'normal', None, _('show only normal files')),
          ('a', 'added', None, _('show only added files')),
          ('r', 'removed', None, _('show only removed files'))],
         _('FILESPEC...'))
def debugpathcomplete(ui, repo, *specs, **opts):
    '''complete part or all of a tracked path

    This command supports shells that offer path name completion. It
    currently completes only files already known to the dirstate.

    Completion extends only to the next path segment unless
    --full is specified, in which case entire paths are used.'''

    def complete(path, acceptable):
        dirstate = repo.dirstate
        spec = os.path.normpath(os.path.join(os.getcwd(), path))
        rootdir = repo.root + os.sep
        if spec != repo.root and not spec.startswith(rootdir):
            return [], []
        if os.path.isdir(spec):
            spec += '/'
        spec = spec[len(rootdir):]
        fixpaths = os.sep != '/'
        if fixpaths:
            spec = spec.replace(os.sep, '/')
        speclen = len(spec)
        fullpaths = opts['full']
        files, dirs = set(), set()
        adddir, addfile = dirs.add, files.add
        for f, st in dirstate.iteritems():
            if f.startswith(spec) and st[0] in acceptable:
                if fixpaths:
                    f = f.replace('/', os.sep)
                if fullpaths:
                    addfile(f)
                    continue
                s = f.find(os.sep, speclen)
                if s >= 0:
                    adddir(f[:s])
                else:
                    addfile(f)
        return files, dirs

    acceptable = ''
    if opts['normal']:
        acceptable += 'nm'
    if opts['added']:
        acceptable += 'a'
    if opts['removed']:
        acceptable += 'r'
    cwd = repo.getcwd()
    if not specs:
        specs = ['.']

    files, dirs = set(), set()
    for spec in specs:
        f, d = complete(spec, acceptable or 'nmar')
        files.update(f)
        dirs.update(d)
    files.update(dirs)
    ui.write('\n'.join(repo.pathto(p, cwd) for p in sorted(files)))
    ui.write('\n')

@command('debugpushkey', [], _('REPO NAMESPACE [KEY OLD NEW]'), norepo=True)
def debugpushkey(ui, repopath, namespace, *keyinfo, **opts):
    '''access the pushkey key/value protocol

    With two args, list the keys in the given namespace.

    With five args, set a key to new if it currently is set to old.
    Reports success or failure.
    '''

    target = hg.peer(ui, {}, repopath)
    if keyinfo:
        key, old, new = keyinfo
        r = target.pushkey(namespace, key, old, new)
        ui.status(str(r) + '\n')
        return not r
    else:
        for k, v in sorted(target.listkeys(namespace).iteritems()):
            ui.write("%s\t%s\n" % (k.encode('string-escape'),
                                   v.encode('string-escape')))

@command('debugpvec', [], _('A B'))
def debugpvec(ui, repo, a, b=None):
    ca = scmutil.revsingle(repo, a)
    cb = scmutil.revsingle(repo, b)
    pa = pvec.ctxpvec(ca)
    pb = pvec.ctxpvec(cb)
    if pa == pb:
        rel = "="
    elif pa > pb:
        rel = ">"
    elif pa < pb:
        rel = "<"
    elif pa | pb:
        rel = "|"
    ui.write(_("a: %s\n") % pa)
    ui.write(_("b: %s\n") % pb)
    ui.write(_("depth(a): %d depth(b): %d\n") % (pa._depth, pb._depth))
    ui.write(_("delta: %d hdist: %d distance: %d relation: %s\n") %
             (abs(pa._depth - pb._depth), pvec._hamming(pa._vec, pb._vec),
              pa.distance(pb), rel))

@command('debugrebuilddirstate|debugrebuildstate',
         [('r', 'rev', '', _('revision to rebuild to'), _('REV')),
          ('', 'minimal', None, _('only rebuild files that are inconsistent with '
                                  'the working copy parent')),
         ],
         _('[-r REV]'))
def debugrebuilddirstate(ui, repo, rev, **opts):
    """rebuild the dirstate as it would look like for the given revision

    If no revision is specified the first current parent will be used.

    The dirstate will be set to the files of the given revision.
    The actual working directory content or existing dirstate
    information such as adds or removes is not considered.

    ``minimal`` will only rebuild the dirstate status for files that claim to be
    tracked but are not in the parent manifest, or that exist in the parent
    manifest but are not in the dirstate. It will not change adds, removes, or
    modified files that are in the working copy parent.

    One use of this command is to make the next :hg:`status` invocation
    check the actual file content.
    """
    ctx = scmutil.revsingle(repo, rev)
    wlock = repo.wlock()
    try:
        dirstate = repo.dirstate
        changedfiles = None
        # See command doc for what minimal does.
        if opts.get('minimal'):
            manifestfiles = set(ctx.manifest().keys())
            dirstatefiles = set(dirstate)
            manifestonly = manifestfiles - dirstatefiles
            dsonly = dirstatefiles - manifestfiles
            dsnotadded = set(f for f in dsonly if dirstate[f] != 'a')
            changedfiles = manifestonly | dsnotadded

        dirstate.rebuild(ctx.node(), ctx.manifest(), changedfiles)
    finally:
        wlock.release()

@command('debugrebuildfncache', [], '')
def debugrebuildfncache(ui, repo):
    """rebuild the fncache file"""
    repair.rebuildfncache(ui, repo)

@command('debugrename',
         [('r', 'rev', '', _('revision to debug'), _('REV'))],
         _('[-r REV] FILE'))
def debugrename(ui, repo, file1, *pats, **opts):
    """dump rename information"""

    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)
    for abs in ctx.walk(m):
        fctx = ctx[abs]
        o = fctx.filelog().renamed(fctx.filenode())
        rel = m.rel(abs)
        if o:
            ui.write(_("%s renamed from %s:%s\n") % (rel, o[0], hex(o[1])))
        else:
            ui.write(_("%s not renamed\n") % rel)

@command('debugrevlog', debugrevlogopts +
    [('d', 'dump', False, _('dump index data'))],
    _('-c|-m|FILE'),
    optionalrepo=True)
def debugrevlog(ui, repo, file_=None, **opts):
    """show data and statistics about a revlog"""
    r = cmdutil.openrevlog(repo, 'debugrevlog', file_, opts)

    if opts.get("dump"):
        numrevs = len(r)
        ui.write("# rev p1rev p2rev start   end deltastart base   p1   p2"
                 " rawsize totalsize compression heads chainlen\n")
        ts = 0
        heads = set()

        for rev in xrange(numrevs):
            dbase = r.deltaparent(rev)
            if dbase == -1:
                dbase = rev
            cbase = r.chainbase(rev)
            clen = r.chainlen(rev)
            p1, p2 = r.parentrevs(rev)
            rs = r.rawsize(rev)
            ts = ts + rs
            heads -= set(r.parentrevs(rev))
            heads.add(rev)
            ui.write("%5d %5d %5d %5d %5d %10d %4d %4d %4d %7d %9d "
                     "%11d %5d %8d\n" %
                     (rev, p1, p2, r.start(rev), r.end(rev),
                      r.start(dbase), r.start(cbase),
                      r.start(p1), r.start(p2),
                      rs, ts, ts / r.end(rev), len(heads), clen))
        return 0

    v = r.version
    format = v & 0xFFFF
    flags = []
    gdelta = False
    if v & revlog.REVLOGNGINLINEDATA:
        flags.append('inline')
    if v & revlog.REVLOGGENERALDELTA:
        gdelta = True
        flags.append('generaldelta')
    if not flags:
        flags = ['(none)']

    nummerges = 0
    numfull = 0
    numprev = 0
    nump1 = 0
    nump2 = 0
    numother = 0
    nump1prev = 0
    nump2prev = 0
    chainlengths = []

    datasize = [None, 0, 0L]
    fullsize = [None, 0, 0L]
    deltasize = [None, 0, 0L]

    def addsize(size, l):
        if l[0] is None or size < l[0]:
            l[0] = size
        if size > l[1]:
            l[1] = size
        l[2] += size

    numrevs = len(r)
    for rev in xrange(numrevs):
        p1, p2 = r.parentrevs(rev)
        delta = r.deltaparent(rev)
        if format > 0:
            addsize(r.rawsize(rev), datasize)
        if p2 != nullrev:
            nummerges += 1
        size = r.length(rev)
        if delta == nullrev:
            chainlengths.append(0)
            numfull += 1
            addsize(size, fullsize)
        else:
            chainlengths.append(chainlengths[delta] + 1)
            addsize(size, deltasize)
            if delta == rev - 1:
                numprev += 1
                if delta == p1:
                    nump1prev += 1
                elif delta == p2:
                    nump2prev += 1
            elif delta == p1:
                nump1 += 1
            elif delta == p2:
                nump2 += 1
            elif delta != nullrev:
                numother += 1

    # Adjust size min value for empty cases
    for size in (datasize, fullsize, deltasize):
        if size[0] is None:
            size[0] = 0

    numdeltas = numrevs - numfull
    numoprev = numprev - nump1prev - nump2prev
    totalrawsize = datasize[2]
    datasize[2] /= numrevs
    fulltotal = fullsize[2]
    fullsize[2] /= numfull
    deltatotal = deltasize[2]
    if numrevs - numfull > 0:
        deltasize[2] /= numrevs - numfull
    totalsize = fulltotal + deltatotal
    avgchainlen = sum(chainlengths) / numrevs
    maxchainlen = max(chainlengths)
    compratio = 1
    if totalsize:
        compratio = totalrawsize / totalsize

    basedfmtstr = '%%%dd\n'
    basepcfmtstr = '%%%dd %s(%%5.2f%%%%)\n'

    def dfmtstr(max):
        return basedfmtstr % len(str(max))
    def pcfmtstr(max, padding=0):
        return basepcfmtstr % (len(str(max)), ' ' * padding)

    def pcfmt(value, total):
        if total:
            return (value, 100 * float(value) / total)
        else:
            return value, 100.0

    ui.write(('format : %d\n') % format)
    ui.write(('flags  : %s\n') % ', '.join(flags))

    ui.write('\n')
    fmt = pcfmtstr(totalsize)
    fmt2 = dfmtstr(totalsize)
    ui.write(('revisions     : ') + fmt2 % numrevs)
    ui.write(('    merges    : ') + fmt % pcfmt(nummerges, numrevs))
    ui.write(('    normal    : ') + fmt % pcfmt(numrevs - nummerges, numrevs))
    ui.write(('revisions     : ') + fmt2 % numrevs)
    ui.write(('    full      : ') + fmt % pcfmt(numfull, numrevs))
    ui.write(('    deltas    : ') + fmt % pcfmt(numdeltas, numrevs))
    ui.write(('revision size : ') + fmt2 % totalsize)
    ui.write(('    full      : ') + fmt % pcfmt(fulltotal, totalsize))
    ui.write(('    deltas    : ') + fmt % pcfmt(deltatotal, totalsize))

    ui.write('\n')
    fmt = dfmtstr(max(avgchainlen, compratio))
    ui.write(('avg chain length  : ') + fmt % avgchainlen)
    ui.write(('max chain length  : ') + fmt % maxchainlen)
    ui.write(('compression ratio : ') + fmt % compratio)

    if format > 0:
        ui.write('\n')
        ui.write(('uncompressed data size (min/max/avg) : %d / %d / %d\n')
                 % tuple(datasize))
        ui.write(('full revision size (min/max/avg)     : %d / %d / %d\n')
                 % tuple(fullsize))
        ui.write(('delta size (min/max/avg)             : %d / %d / %d\n')
                 % tuple(deltasize))

    if numdeltas > 0:
        ui.write('\n')
        fmt = pcfmtstr(numdeltas)
        fmt2 = pcfmtstr(numdeltas, 4)
        ui.write(('deltas against prev  : ') + fmt % pcfmt(numprev, numdeltas))
        if numprev > 0:
            ui.write(('    where prev = p1  : ') + fmt2 % pcfmt(nump1prev,
                                                                numprev))
            ui.write(('    where prev = p2  : ') + fmt2 % pcfmt(nump2prev,
                                                                numprev))
            ui.write(('    other            : ') + fmt2 % pcfmt(numoprev,
                                                                numprev))
        if gdelta:
            ui.write(('deltas against p1    : ')
                     + fmt % pcfmt(nump1, numdeltas))
            ui.write(('deltas against p2    : ')
                     + fmt % pcfmt(nump2, numdeltas))
            ui.write(('deltas against other : ') + fmt % pcfmt(numother,
                                                               numdeltas))

@command('debugrevspec',
    [('', 'optimize', None, _('print parsed tree after optimizing'))],
    ('REVSPEC'))
def debugrevspec(ui, repo, expr, **opts):
    """parse and apply a revision specification

    Use --verbose to print the parsed tree before and after aliases
    expansion.
    """
    if ui.verbose:
        tree = revset.parse(expr, lookup=repo.__contains__)
        ui.note(revset.prettyformat(tree), "\n")
        newtree = revset.findaliases(ui, tree)
        if newtree != tree:
            ui.note(revset.prettyformat(newtree), "\n")
        tree = newtree
        newtree = revset.foldconcat(tree)
        if newtree != tree:
            ui.note(revset.prettyformat(newtree), "\n")
        if opts["optimize"]:
            weight, optimizedtree = revset.optimize(newtree, True)
            ui.note("* optimized:\n", revset.prettyformat(optimizedtree), "\n")
    func = revset.match(ui, expr, repo)
    revs = func(repo)
    if ui.verbose:
        ui.note("* set:\n", revset.prettyformatset(revs), "\n")
    for c in revs:
        ui.write("%s\n" % c)

@command('debugsetparents', [], _('REV1 [REV2]'))
def debugsetparents(ui, repo, rev1, rev2=None):
    """manually set the parents of the current working directory

    This is useful for writing repository conversion tools, but should
    be used with care. For example, neither the working directory nor the
    dirstate is updated, so file status may be incorrect after running this
    command.

    Returns 0 on success.
    """

    r1 = scmutil.revsingle(repo, rev1).node()
    r2 = scmutil.revsingle(repo, rev2, 'null').node()

    wlock = repo.wlock()
    try:
        repo.dirstate.beginparentchange()
        repo.setparents(r1, r2)
        repo.dirstate.endparentchange()
    finally:
        wlock.release()

@command('debugdirstate|debugstate',
    [('', 'nodates', None, _('do not display the saved mtime')),
     ('', 'datesort', None, _('sort by saved mtime'))],
    _('[OPTION]...'))
def debugstate(ui, repo, **opts):
    """show the contents of the current dirstate"""

    nodates = opts.get('nodates')
    datesort = opts.get('datesort')

    timestr = ""
    if datesort:
        keyfunc = lambda x: (x[1][3], x[0]) # sort by mtime, then by filename
    else:
        keyfunc = None # sort by filename
    for file_, ent in sorted(repo.dirstate._map.iteritems(), key=keyfunc):
        if ent[3] == -1:
            timestr = 'unset               '
        elif nodates:
            timestr = 'set                 '
        else:
            timestr = time.strftime("%Y-%m-%d %H:%M:%S ",
                                    time.localtime(ent[3]))
        if ent[1] & 0o20000:
            mode = 'lnk'
        else:
            mode = '%3o' % (ent[1] & 0o777 & ~util.umask)
        ui.write("%c %s %10d %s%s\n" % (ent[0], mode, ent[2], timestr, file_))
    for f in repo.dirstate.copies():
        ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copied(f), f))

@command('debugsub',
    [('r', 'rev', '',
      _('revision to check'), _('REV'))],
    _('[-r REV] [REV]'))
def debugsub(ui, repo, rev=None):
    ctx = scmutil.revsingle(repo, rev, None)
    for k, v in sorted(ctx.substate.items()):
        ui.write(('path %s\n') % k)
        ui.write((' source   %s\n') % v[0])
        ui.write((' revision %s\n') % v[1])

@command('debugsuccessorssets',
    [],
    _('[REV]'))
def debugsuccessorssets(ui, repo, *revs):
    """show set of successors for revision

    A successors set of changeset A is a consistent group of revisions that
    succeed A. It contains non-obsolete changesets only.

    In most cases a changeset A has a single successors set containing a single
    successor (changeset A replaced by A').

    A changeset that is made obsolete with no successors is called "pruned".
    Such changesets have no successors sets at all.

    A changeset that has been "split" will have a successors set containing
    more than one successor.

    A changeset that has been rewritten in multiple different ways is called
    "divergent". Such changesets have multiple successor sets (each of which
    may also be split, i.e. have multiple successors).

    Results are displayed as follows::

        <rev1>
            <successors-1A>
        <rev2>
            <successors-2A>
            <successors-2B1> <successors-2B2> <successors-2B3>

    Here rev2 has two possible (i.e. divergent) successors sets. The first
    holds one element, whereas the second holds three (i.e. the changeset has
    been split).
    """
    # passed to successorssets caching computation from one call to another
    cache = {}
    ctx2str = str
    node2str = short
    if ui.debug():
        def ctx2str(ctx):
            return ctx.hex()
        node2str = hex
    for rev in scmutil.revrange(repo, revs):
        ctx = repo[rev]
        ui.write('%s\n' % ctx2str(ctx))
        for succsset in obsolete.successorssets(repo, ctx.node(), cache):
            if succsset:
                ui.write('    ')
                ui.write(node2str(succsset[0]))
                for node in succsset[1:]:
                    ui.write(' ')
                    ui.write(node2str(node))
            ui.write('\n')

@command('debugwalk', walkopts, _('[OPTION]... [FILE]...'), inferrepo=True)
def debugwalk(ui, repo, *pats, **opts):
    """show how files match on given patterns"""
    m = scmutil.match(repo[None], pats, opts)
    items = list(repo.walk(m))
    if not items:
        return
    f = lambda fn: fn
    if ui.configbool('ui', 'slash') and os.sep != '/':
        f = lambda fn: util.normpath(fn)
    fmt = 'f  %%-%ds  %%-%ds  %%s' % (
        max([len(abs) for abs in items]),
        max([len(m.rel(abs)) for abs in items]))
    for abs in items:
        line = fmt % (abs, f(m.rel(abs)), m.exact(abs) and 'exact' or '')
        ui.write("%s\n" % line.rstrip())

@command('debugwireargs',
    [('', 'three', '', 'three'),
     ('', 'four', '', 'four'),
     ('', 'five', '', 'five'),
    ] + remoteopts,
    _('REPO [OPTIONS]... [ONE [TWO]]'),
    norepo=True)
def debugwireargs(ui, repopath, *vals, **opts):
    repo = hg.peer(ui, opts, repopath)
    for opt in remoteopts:
        del opts[opt[1]]
    args = {}
    for k, v in opts.iteritems():
        if v:
            args[k] = v
    # run twice to check that we don't mess up the stream for the next command
    res1 = repo.debugwireargs(*vals, **args)
    res2 = repo.debugwireargs(*vals, **args)
    ui.write("%s\n" % res1)
    if res1 != res2:
        ui.warn("%s\n" % res2)

@command('^diff',
    [('r', 'rev', [], _('revision'), _('REV')),
     ('c', 'change', '', _('change made by revision'), _('REV'))
    ] + diffopts + diffopts2 + walkopts + subrepoopts,
    _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
    inferrepo=True)
def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    .. note::

       diff may generate unexpected results for merges, as it will
       default to comparing against the working directory's first
       parent changeset if no revisions are specified.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its parent.

    Alternatively you can specify -c/--change with a revision to see
    the changes in that changeset relative to its first parent.

    Without the -a/--text option, diff will avoid generating diffs of
    files it detects as binary. With -a, diff will generate a diff
    anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. For more information, read :hg:`help diffs`.

    .. container:: verbose

      Examples:

      - compare a file in the current working directory to its parent::

          hg diff foo.c

      - compare two historical versions of a directory, with rename info::

          hg diff --git -r 1.0:1.2 lib/

      - get change stats relative to the last change on some date::

          hg diff --stat -r "date('may 2')"

      - diff all newly-added files that contain a keyword::

          hg diff "set:added() and grep(GNU)"

      - compare a revision and its parents::

          hg diff -c 9353         # compare against first parent
          hg diff -r 9353^:9353   # same using revset syntax
          hg diff -r 9353^2:9353  # compare against the second parent

    Returns 0 on success.
    """

    revs = opts.get('rev')
    change = opts.get('change')
    stat = opts.get('stat')
    reverse = opts.get('reverse')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if reverse:
        node1, node2 = node2, node1

    diffopts = patch.diffallopts(ui, opts)
    m = scmutil.match(repo[node2], pats, opts)
    cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
                           listsubrepos=opts.get('subrepos'),
                           root=opts.get('root'))

3601 3600 @command('^export',
3602 3601 [('o', 'output', '',
3603 3602 _('print output to file with formatted name'), _('FORMAT')),
3604 3603 ('', 'switch-parent', None, _('diff against the second parent')),
3605 3604 ('r', 'rev', [], _('revisions to export'), _('REV')),
3606 3605 ] + diffopts,
3607 3606 _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
3608 3607 def export(ui, repo, *changesets, **opts):
3609 3608 """dump the header and diffs for one or more changesets
3610 3609
3611 3610 Print the changeset header and diffs for one or more revisions.
3612 3611 If no revision is given, the parent of the working directory is used.
3613 3612
3614 3613 The information shown in the changeset header is: author, date,
3615 3614 branch name (if non-default), changeset hash, parent(s) and commit
3616 3615 comment.
3617 3616
3618 3617 .. note::
3619 3618
3620 3619 export may generate unexpected diff output for merge
3621 3620 changesets, as it will compare the merge changeset against its
3622 3621 first parent only.
3623 3622
3624 3623 Output may be to a file, in which case the name of the file is
3625 3624 given using a format string. The formatting rules are as follows:
3626 3625
3627 3626 :``%%``: literal "%" character
3628 3627 :``%H``: changeset hash (40 hexadecimal digits)
3629 3628 :``%N``: number of patches being generated
3630 3629 :``%R``: changeset revision number
3631 3630 :``%b``: basename of the exporting repository
3632 3631 :``%h``: short-form changeset hash (12 hexadecimal digits)
3633 3632 :``%m``: first line of the commit message (only alphanumeric characters)
3634 3633 :``%n``: zero-padded sequence number, starting at 1
3635 3634 :``%r``: zero-padded changeset revision number
3636 3635
3637 3636 Without the -a/--text option, export will avoid generating diffs
3638 3637 of files it detects as binary. With -a, export will generate a
3639 3638 diff anyway, probably with undesirable results.
3640 3639
3641 3640 Use the -g/--git option to generate diffs in the git extended diff
3642 3641 format. See :hg:`help diffs` for more information.
3643 3642
3644 3643 With the --switch-parent option, the diff will be against the
3646 3645 second parent. This can be useful for reviewing a merge.
3646 3645
3647 3646 .. container:: verbose
3648 3647
3649 3648 Examples:
3650 3649
3651 3650 - use export and import to transplant a bugfix to the current
3652 3651 branch::
3653 3652
3654 3653 hg export -r 9353 | hg import -
3655 3654
3656 3655 - export all the changesets between two revisions to a file with
3657 3656 rename information::
3658 3657
3659 3658 hg export --git -r 123:150 > changes.txt
3660 3659
3661 3660 - split outgoing changes into a series of patches with
3662 3661 descriptive names::
3663 3662
3664 3663 hg export -r "outgoing()" -o "%n-%m.patch"
3665 3664
3666 3665 Returns 0 on success.
3667 3666 """
3668 3667 changesets += tuple(opts.get('rev', []))
3669 3668 if not changesets:
3670 3669 changesets = ['.']
3671 3670 revs = scmutil.revrange(repo, changesets)
3672 3671 if not revs:
3673 3672 raise error.Abort(_("export requires at least one changeset"))
3674 3673 if len(revs) > 1:
3675 3674 ui.note(_('exporting patches:\n'))
3676 3675 else:
3677 3676 ui.note(_('exporting patch:\n'))
3678 3677 cmdutil.export(repo, revs, template=opts.get('output'),
3679 3678 switch_parent=opts.get('switch_parent'),
3680 3679 opts=patch.diffallopts(ui, opts))
3681 3680
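The %-codes documented in export's help can be illustrated with a minimal, self-contained sketch. This is not Mercurial's implementation (the real expansion lives elsewhere in the codebase); `expandpattern` and the plain-dict changeset stand-in are hypothetical, and only a subset of the codes is covered.

```python
import re

def expandpattern(fmt, ctx):
    """Expand a subset of export's %-codes for an output filename.

    `ctx` is a plain dict standing in for a changeset (hypothetical
    keys: 'hex', 'rev', 'seqno', 'total', 'desc').
    """
    subs = {
        '%': '%',                          # literal "%" character
        'H': ctx['hex'],                   # changeset hash (40 hex digits)
        'h': ctx['hex'][:12],              # short-form hash (12 hex digits)
        'R': str(ctx['rev']),              # changeset revision number
        'N': str(ctx['total']),            # number of patches being generated
        # zero-padded sequence number, starting at 1
        'n': str(ctx['seqno']).zfill(len(str(ctx['total']))),
        # first line of the commit message, non-alphanumerics replaced
        'm': re.sub(r'\W', '_', ctx['desc'].splitlines()[0]),
    }
    out, i = [], 0
    while i < len(fmt):
        if fmt[i] == '%' and i + 1 < len(fmt) and fmt[i + 1] in subs:
            out.append(subs[fmt[i + 1]])
            i += 2
        else:
            out.append(fmt[i])
            i += 1
    return ''.join(out)
```

With `total=12`, a format like `'%n-%m.patch'` yields names such as `01-Fix_bug__crash.patch`, matching the `hg export -r "outgoing()" -o "%n-%m.patch"` example above.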
3682 3681 @command('files',
3683 3682 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
3684 3683 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
3685 3684 ] + walkopts + formatteropts + subrepoopts,
3686 3685 _('[OPTION]... [PATTERN]...'))
3687 3686 def files(ui, repo, *pats, **opts):
3688 3687 """list tracked files
3689 3688
3690 3689 Print files under Mercurial control in the working directory or
3691 3690 specified revision whose names match the given patterns (excluding
3692 3691 removed files).
3693 3692
3694 3693 If no patterns are given to match, this command prints the names
3695 3694 of all files under Mercurial control in the working directory.
3696 3695
3697 3696 .. container:: verbose
3698 3697
3699 3698 Examples:
3700 3699
3701 3700 - list all files under the current directory::
3702 3701
3703 3702 hg files .
3704 3703
3705 3704 - show sizes and flags for the current revision::
3706 3705
3707 3706 hg files -vr .
3708 3707
3709 3708 - list all files named README::
3710 3709
3711 3710 hg files -I "**/README"
3712 3711
3713 3712 - list all binary files::
3714 3713
3715 3714 hg files "set:binary()"
3716 3715
3717 3716 - find files containing a regular expression::
3718 3717
3719 3718 hg files "set:grep('bob')"
3720 3719
3721 3720 - search tracked file contents with xargs and grep::
3722 3721
3723 3722 hg files -0 | xargs -0 grep foo
3724 3723
3725 3724 See :hg:`help patterns` and :hg:`help filesets` for more information
3726 3725 on specifying file patterns.
3727 3726
3728 3727 Returns 0 if a match is found, 1 otherwise.
3729 3728
3730 3729 """
3731 3730 ctx = scmutil.revsingle(repo, opts.get('rev'), None)
3732 3731
3733 3732 end = '\n'
3734 3733 if opts.get('print0'):
3735 3734 end = '\0'
3736 3735 fm = ui.formatter('files', opts)
3737 3736 fmt = '%s' + end
3738 3737
3739 3738 m = scmutil.match(ctx, pats, opts)
3740 3739 ret = cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
3741 3740
3742 3741 fm.end()
3743 3742
3744 3743 return ret
3745 3744
3746 3745 @command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
3747 3746 def forget(ui, repo, *pats, **opts):
3748 3747 """forget the specified files on the next commit
3749 3748
3750 3749 Mark the specified files so they will no longer be tracked
3751 3750 after the next commit.
3752 3751
3753 3752 This only removes files from the current branch, not from the
3754 3753 entire project history, and it does not delete them from the
3755 3754 working directory.
3756 3755
3757 3756 To delete the file from the working directory, see :hg:`remove`.
3758 3757
3759 3758 To undo a forget before the next commit, see :hg:`add`.
3760 3759
3761 3760 .. container:: verbose
3762 3761
3763 3762 Examples:
3764 3763
3765 3764 - forget newly-added binary files::
3766 3765
3767 3766 hg forget "set:added() and binary()"
3768 3767
3769 3768 - forget files that would be excluded by .hgignore::
3770 3769
3771 3770 hg forget "set:hgignore()"
3772 3771
3773 3772 Returns 0 on success.
3774 3773 """
3775 3774
3776 3775 if not pats:
3777 3776 raise error.Abort(_('no files specified'))
3778 3777
3779 3778 m = scmutil.match(repo[None], pats, opts)
3780 3779 rejected = cmdutil.forget(ui, repo, m, prefix="", explicitonly=False)[0]
3781 3780 return rejected and 1 or 0
3782 3781
3783 3782 @command(
3784 3783 'graft',
3785 3784 [('r', 'rev', [], _('revisions to graft'), _('REV')),
3786 3785 ('c', 'continue', False, _('resume interrupted graft')),
3787 3786 ('e', 'edit', False, _('invoke editor on commit messages')),
3788 3787 ('', 'log', None, _('append graft info to log message')),
3789 3788 ('f', 'force', False, _('force graft')),
3790 3789 ('D', 'currentdate', False,
3791 3790 _('record the current date as commit date')),
3792 3791 ('U', 'currentuser', False,
3793 3792 _('record the current user as committer'))]
3794 3793 + commitopts2 + mergetoolopts + dryrunopts,
3795 3794 _('[OPTION]... [-r] REV...'))
3796 3795 def graft(ui, repo, *revs, **opts):
3797 3796 '''copy changes from other branches onto the current branch
3798 3797
3799 3798 This command uses Mercurial's merge logic to copy individual
3800 3799 changes from other branches without merging branches in the
3801 3800 history graph. This is sometimes known as 'backporting' or
3802 3801 'cherry-picking'. By default, graft will copy user, date, and
3803 3802 description from the source changesets.
3804 3803
3805 3804 Changesets that are ancestors of the current revision, that have
3806 3805 already been grafted, or that are merges will be skipped.
3807 3806
3808 3807 If --log is specified, log messages will have a comment appended
3809 3808 of the form::
3810 3809
3811 3810 (grafted from CHANGESETHASH)
3812 3811
3813 3812 If --force is specified, revisions will be grafted even if they
3814 3813 are already ancestors of or have been grafted to the destination.
3815 3814 This is useful when the revisions have since been backed out.
3816 3815
3817 3816 If a graft merge results in conflicts, the graft process is
3818 3817 interrupted so that the current merge can be manually resolved.
3819 3818 Once all conflicts are addressed, the graft process can be
3820 3819 continued with the -c/--continue option.
3821 3820
3822 3821 .. note::
3823 3822
3824 3823 The -c/--continue option does not reapply earlier options, except
3825 3824 for --force.
3826 3825
3827 3826 .. container:: verbose
3828 3827
3829 3828 Examples:
3830 3829
3831 3830 - copy a single change to the stable branch and edit its description::
3832 3831
3833 3832 hg update stable
3834 3833 hg graft --edit 9393
3835 3834
3836 3835 - graft a range of changesets with one exception, updating dates::
3837 3836
3838 3837 hg graft -D "2085::2093 and not 2091"
3839 3838
3840 3839 - continue a graft after resolving conflicts::
3841 3840
3842 3841 hg graft -c
3843 3842
3844 3843 - show the source of a grafted changeset::
3845 3844
3846 3845 hg log --debug -r .
3847 3846
3848 3847 See :hg:`help revisions` and :hg:`help revsets` for more about
3849 3848 specifying revisions.
3850 3849
3851 3850 Returns 0 on successful completion.
3852 3851 '''
3853 3852 wlock = None
3854 3853 try:
3855 3854 wlock = repo.wlock()
3856 3855 return _dograft(ui, repo, *revs, **opts)
3857 3856 finally:
3858 3857 release(wlock)
3859 3858
3860 3859 def _dograft(ui, repo, *revs, **opts):
3861 3860 revs = list(revs)
3862 3861 revs.extend(opts['rev'])
3863 3862
3864 3863 if not opts.get('user') and opts.get('currentuser'):
3865 3864 opts['user'] = ui.username()
3866 3865 if not opts.get('date') and opts.get('currentdate'):
3867 3866 opts['date'] = "%d %d" % util.makedate()
3868 3867
3869 3868 editor = cmdutil.getcommiteditor(editform='graft', **opts)
3870 3869
3871 3870 cont = False
3872 3871 if opts['continue']:
3873 3872 cont = True
3874 3873 if revs:
3875 3874 raise error.Abort(_("can't specify --continue and revisions"))
3876 3875 # read in unfinished revisions
3877 3876 try:
3878 3877 nodes = repo.vfs.read('graftstate').splitlines()
3879 3878 revs = [repo[node].rev() for node in nodes]
3880 3879 except IOError as inst:
3881 3880 if inst.errno != errno.ENOENT:
3882 3881 raise
3883 3882 raise error.Abort(_("no graft state found, can't continue"))
3884 3883 else:
3885 3884 cmdutil.checkunfinished(repo)
3886 3885 cmdutil.bailifchanged(repo)
3887 3886 if not revs:
3888 3887 raise error.Abort(_('no revisions specified'))
3889 3888 revs = scmutil.revrange(repo, revs)
3890 3889
3891 3890 skipped = set()
3892 3891 # check for merges
3893 3892 for rev in repo.revs('%ld and merge()', revs):
3894 3893 ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
3895 3894 skipped.add(rev)
3896 3895 revs = [r for r in revs if r not in skipped]
3897 3896 if not revs:
3898 3897 return -1
3899 3898
3900 3899 # Don't check in the --continue case, in effect retaining --force across
3901 3900 # --continues. That's because without --force, any revisions we decided to
3902 3901 # skip would have been filtered out here, so they wouldn't have made their
3903 3902 # way to the graftstate. With --force, any revisions we would have otherwise
3904 3903 # skipped would not have been filtered out, and if they hadn't been applied
3905 3904 # already, they'd have been in the graftstate.
3906 3905 if not (cont or opts.get('force')):
3907 3906 # check for ancestors of dest branch
3908 3907 crev = repo['.'].rev()
3909 3908 ancestors = repo.changelog.ancestors([crev], inclusive=True)
3910 3909 # Cannot use x.remove(y) on smart set, this has to be a list.
3911 3910 # XXX make this lazy in the future
3912 3911 revs = list(revs)
3913 3912 # don't mutate while iterating, create a copy
3914 3913 for rev in list(revs):
3915 3914 if rev in ancestors:
3916 3915 ui.warn(_('skipping ancestor revision %d:%s\n') %
3917 3916 (rev, repo[rev]))
3918 3917 # XXX remove on list is slow
3919 3918 revs.remove(rev)
3920 3919 if not revs:
3921 3920 return -1
3922 3921
3923 3922 # analyze revs for earlier grafts
3924 3923 ids = {}
3925 3924 for ctx in repo.set("%ld", revs):
3926 3925 ids[ctx.hex()] = ctx.rev()
3927 3926 n = ctx.extra().get('source')
3928 3927 if n:
3929 3928 ids[n] = ctx.rev()
3930 3929
3931 3930 # check ancestors for earlier grafts
3932 3931 ui.debug('scanning for duplicate grafts\n')
3933 3932
3934 3933 for rev in repo.changelog.findmissingrevs(revs, [crev]):
3935 3934 ctx = repo[rev]
3936 3935 n = ctx.extra().get('source')
3937 3936 if n in ids:
3938 3937 try:
3939 3938 r = repo[n].rev()
3940 3939 except error.RepoLookupError:
3941 3940 r = None
3942 3941 if r in revs:
3943 3942 ui.warn(_('skipping revision %d:%s '
3944 3943 '(already grafted to %d:%s)\n')
3945 3944 % (r, repo[r], rev, ctx))
3946 3945 revs.remove(r)
3947 3946 elif ids[n] in revs:
3948 3947 if r is None:
3949 3948 ui.warn(_('skipping already grafted revision %d:%s '
3950 3949 '(%d:%s also has unknown origin %s)\n')
3951 3950 % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
3952 3951 else:
3953 3952 ui.warn(_('skipping already grafted revision %d:%s '
3954 3953 '(%d:%s also has origin %d:%s)\n')
3955 3954 % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
3956 3955 revs.remove(ids[n])
3957 3956 elif ctx.hex() in ids:
3958 3957 r = ids[ctx.hex()]
3959 3958 ui.warn(_('skipping already grafted revision %d:%s '
3960 3959 '(was grafted from %d:%s)\n') %
3961 3960 (r, repo[r], rev, ctx))
3962 3961 revs.remove(r)
3963 3962 if not revs:
3964 3963 return -1
3965 3964
3966 3965 try:
3967 3966 for pos, ctx in enumerate(repo.set("%ld", revs)):
3968 3967 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
3969 3968 ctx.description().split('\n', 1)[0])
3970 3969 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
3971 3970 if names:
3972 3971 desc += ' (%s)' % ' '.join(names)
3973 3972 ui.status(_('grafting %s\n') % desc)
3974 3973 if opts.get('dry_run'):
3975 3974 continue
3976 3975
3977 3976 extra = ctx.extra().copy()
3978 3977 del extra['branch']
3979 3978 source = extra.get('source')
3980 3979 if source:
3981 3980 extra['intermediate-source'] = ctx.hex()
3982 3981 else:
3983 3982 extra['source'] = ctx.hex()
3984 3983 user = ctx.user()
3985 3984 if opts.get('user'):
3986 3985 user = opts['user']
3987 3986 date = ctx.date()
3988 3987 if opts.get('date'):
3989 3988 date = opts['date']
3990 3989 message = ctx.description()
3991 3990 if opts.get('log'):
3992 3991 message += '\n(grafted from %s)' % ctx.hex()
3993 3992
3994 3993 # we don't merge the first commit when continuing
3995 3994 if not cont:
3996 3995 # perform the graft merge with p1(rev) as 'ancestor'
3997 3996 try:
3998 3997 # ui.forcemerge is an internal variable, do not document
3999 3998 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4000 3999 'graft')
4001 4000 stats = mergemod.graft(repo, ctx, ctx.p1(),
4002 4001 ['local', 'graft'])
4003 4002 finally:
4004 4003 repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
4005 4004 # report any conflicts
4006 4005 if stats and stats[3] > 0:
4007 4006 # write out state for --continue
4008 4007 nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
4009 4008 repo.vfs.write('graftstate', ''.join(nodelines))
4010 4009 extra = ''
4011 4010 if opts.get('user'):
4012 4011 extra += ' --user %s' % opts['user']
4013 4012 if opts.get('date'):
4014 4013 extra += ' --date %s' % opts['date']
4015 4014 if opts.get('log'):
4016 4015 extra += ' --log'
4017 4016 hint = _('use hg resolve and hg graft --continue%s') % extra
4018 4017 raise error.Abort(
4019 4018 _("unresolved conflicts, can't continue"),
4020 4019 hint=hint)
4021 4020 else:
4022 4021 cont = False
4023 4022
4024 4023 # commit
4025 4024 node = repo.commit(text=message, user=user,
4026 4025 date=date, extra=extra, editor=editor)
4027 4026 if node is None:
4028 4027 ui.warn(
4029 4028 _('note: graft of %d:%s created no changes to commit\n') %
4030 4029 (ctx.rev(), ctx))
4031 4030 finally:
4032 4031 # TODO: get rid of this meaningless try/finally enclosing.
4033 4032 # this is kept only to reduce changes in a patch.
4034 4033 pass
4035 4034
4036 4035 # remove state when we complete successfully
4037 4036 if not opts.get('dry_run'):
4038 4037 util.unlinkpath(repo.join('graftstate'), ignoremissing=True)
4039 4038
4040 4039 return 0
4041 4040
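The source/intermediate-source bookkeeping in the loop above (which the duplicate-graft scan later relies on) can be shown in isolation. `graft_extra` is a hypothetical standalone helper, with plain dicts standing in for `changectx.extra()`; the real code also drops the `branch` key, omitted here.

```python
def graft_extra(ctx_hex, ctx_extra):
    # Mirrors _dograft's bookkeeping: the first graft of a changeset
    # records its origin in 'source'; grafting an already-grafted copy
    # keeps the original 'source' and records the directly-copied
    # changeset in 'intermediate-source'.
    extra = dict(ctx_extra)
    if extra.get('source'):
        extra['intermediate-source'] = ctx_hex
    else:
        extra['source'] = ctx_hex
    return extra
```

This is why the duplicate scan can recognize a changeset grafted through several hops: `source` always points back to the ultimate origin.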
4042 4041 @command('grep',
4043 4042 [('0', 'print0', None, _('end fields with NUL')),
4044 4043 ('', 'all', None, _('print all revisions that match')),
4045 4044 ('a', 'text', None, _('treat all files as text')),
4046 4045 ('f', 'follow', None,
4047 4046 _('follow changeset history,'
4048 4047 ' or file history across copies and renames')),
4049 4048 ('i', 'ignore-case', None, _('ignore case when matching')),
4050 4049 ('l', 'files-with-matches', None,
4051 4050 _('print only filenames and revisions that match')),
4052 4051 ('n', 'line-number', None, _('print matching line numbers')),
4053 4052 ('r', 'rev', [],
4054 4053 _('only search files changed within revision range'), _('REV')),
4055 4054 ('u', 'user', None, _('list the author (long with -v)')),
4056 4055 ('d', 'date', None, _('list the date (short with -q)')),
4057 4056 ] + walkopts,
4058 4057 _('[OPTION]... PATTERN [FILE]...'),
4059 4058 inferrepo=True)
4060 4059 def grep(ui, repo, pattern, *pats, **opts):
4061 4060 """search for a pattern in specified files and revisions
4062 4061
4063 4062 Search revisions of files for a regular expression.
4064 4063
4066 4065 This command behaves differently from Unix grep. It only accepts
4066 4065 Python/Perl regexps. It searches repository history, not the
4067 4066 working directory. It always prints the revision number in which a
4068 4067 match appears.
4069 4068
4070 4069 By default, grep only prints output for the first revision of a
4071 4070 file in which it finds a match. To get it to print every revision
4072 4071 that contains a change in match status ("-" for a match that
4073 4072 becomes a non-match, or "+" for a non-match that becomes a match),
4074 4073 use the --all flag.
4075 4074
4076 4075 Returns 0 if a match is found, 1 otherwise.
4077 4076 """
4078 4077 reflags = re.M
4079 4078 if opts.get('ignore_case'):
4080 4079 reflags |= re.I
4081 4080 try:
4082 4081 regexp = util.re.compile(pattern, reflags)
4083 4082 except re.error as inst:
4084 4083 ui.warn(_("grep: invalid match pattern: %s\n") % inst)
4085 4084 return 1
4086 4085 sep, eol = ':', '\n'
4087 4086 if opts.get('print0'):
4088 4087 sep = eol = '\0'
4089 4088
4090 4089 getfile = util.lrucachefunc(repo.file)
4091 4090
4092 4091 def matchlines(body):
4093 4092 begin = 0
4094 4093 linenum = 0
4095 4094 while begin < len(body):
4096 4095 match = regexp.search(body, begin)
4097 4096 if not match:
4098 4097 break
4099 4098 mstart, mend = match.span()
4100 4099 linenum += body.count('\n', begin, mstart) + 1
4101 4100 lstart = body.rfind('\n', begin, mstart) + 1 or begin
4102 4101 begin = body.find('\n', mend) + 1 or len(body) + 1
4103 4102 lend = begin - 1
4104 4103 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
4105 4104
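The scanning scheme used by matchlines above works on any regexp and body, so it can be exercised standalone (a sketch mirroring the generator, not a drop-in replacement):

```python
import re

def matchlines(regexp, body):
    # Walk the body with regexp.search, counting newlines between the
    # previous position and each match to maintain a 1-based running
    # line number, and report column offsets within the matched line.
    # Only the first match on any given line is reported, because
    # `begin` jumps to the start of the next line after each hit.
    begin = 0
    linenum = 0
    while begin < len(body):
        match = regexp.search(body, begin)
        if not match:
            break
        mstart, mend = match.span()
        linenum += body.count('\n', begin, mstart) + 1
        lstart = body.rfind('\n', begin, mstart) + 1 or begin
        begin = body.find('\n', mend) + 1 or len(body) + 1
        lend = begin - 1
        yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
```

For the body `'abc\nfoo bar\nfoo'` and pattern `foo`, this yields `(2, 0, 3, 'foo bar')` and `(3, 0, 3, 'foo')`.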
4106 4105 class linestate(object):
4107 4106 def __init__(self, line, linenum, colstart, colend):
4108 4107 self.line = line
4109 4108 self.linenum = linenum
4110 4109 self.colstart = colstart
4111 4110 self.colend = colend
4112 4111
4113 4112 def __hash__(self):
4114 4113 return hash((self.linenum, self.line))
4115 4114
4116 4115 def __eq__(self, other):
4117 4116 return self.line == other.line
4118 4117
4119 4118 def __iter__(self):
4120 4119 yield (self.line[:self.colstart], '')
4121 4120 yield (self.line[self.colstart:self.colend], 'grep.match')
4122 4121 rest = self.line[self.colend:]
4123 4122 while rest != '':
4124 4123 match = regexp.search(rest)
4125 4124 if not match:
4126 4125 yield (rest, '')
4127 4126 break
4128 4127 mstart, mend = match.span()
4129 4128 yield (rest[:mstart], '')
4130 4129 yield (rest[mstart:mend], 'grep.match')
4131 4130 rest = rest[mend:]
4132 4131
4133 4132 matches = {}
4134 4133 copies = {}
4135 4134 def grepbody(fn, rev, body):
4136 4135 matches[rev].setdefault(fn, [])
4137 4136 m = matches[rev][fn]
4138 4137 for lnum, cstart, cend, line in matchlines(body):
4139 4138 s = linestate(line, lnum, cstart, cend)
4140 4139 m.append(s)
4141 4140
4142 4141 def difflinestates(a, b):
4143 4142 sm = difflib.SequenceMatcher(None, a, b)
4144 4143 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
4145 4144 if tag == 'insert':
4146 4145 for i in xrange(blo, bhi):
4147 4146 yield ('+', b[i])
4148 4147 elif tag == 'delete':
4149 4148 for i in xrange(alo, ahi):
4150 4149 yield ('-', a[i])
4151 4150 elif tag == 'replace':
4152 4151 for i in xrange(alo, ahi):
4153 4152 yield ('-', a[i])
4154 4153 for i in xrange(blo, bhi):
4155 4154 yield ('+', b[i])
4156 4155
4157 4156 def display(fn, ctx, pstates, states):
4158 4157 rev = ctx.rev()
4159 4158 if ui.quiet:
4160 4159 datefunc = util.shortdate
4161 4160 else:
4162 4161 datefunc = util.datestr
4163 4162 found = False
4164 4163 @util.cachefunc
4165 4164 def binary():
4166 4165 flog = getfile(fn)
4167 4166 return util.binary(flog.read(ctx.filenode(fn)))
4168 4167
4169 4168 if opts.get('all'):
4170 4169 iter = difflinestates(pstates, states)
4171 4170 else:
4172 4171 iter = [('', l) for l in states]
4173 4172 for change, l in iter:
4174 4173 cols = [(fn, 'grep.filename'), (str(rev), 'grep.rev')]
4175 4174
4176 4175 if opts.get('line_number'):
4177 4176 cols.append((str(l.linenum), 'grep.linenumber'))
4178 4177 if opts.get('all'):
4179 4178 cols.append((change, 'grep.change'))
4180 4179 if opts.get('user'):
4181 4180 cols.append((ui.shortuser(ctx.user()), 'grep.user'))
4182 4181 if opts.get('date'):
4183 4182 cols.append((datefunc(ctx.date()), 'grep.date'))
4184 4183 for col, label in cols[:-1]:
4185 4184 ui.write(col, label=label)
4186 4185 ui.write(sep, label='grep.sep')
4187 4186 ui.write(cols[-1][0], label=cols[-1][1])
4188 4187 if not opts.get('files_with_matches'):
4189 4188 ui.write(sep, label='grep.sep')
4190 4189 if not opts.get('text') and binary():
4191 4190 ui.write(" Binary file matches")
4192 4191 else:
4193 4192 for s, label in l:
4194 4193 ui.write(s, label=label)
4195 4194 ui.write(eol)
4196 4195 found = True
4197 4196 if opts.get('files_with_matches'):
4198 4197 break
4199 4198 return found
4200 4199
4201 4200 skip = {}
4202 4201 revfiles = {}
4203 4202 matchfn = scmutil.match(repo[None], pats, opts)
4204 4203 found = False
4205 4204 follow = opts.get('follow')
4206 4205
4207 4206 def prep(ctx, fns):
4208 4207 rev = ctx.rev()
4209 4208 pctx = ctx.p1()
4210 4209 parent = pctx.rev()
4211 4210 matches.setdefault(rev, {})
4212 4211 matches.setdefault(parent, {})
4213 4212 files = revfiles.setdefault(rev, [])
4214 4213 for fn in fns:
4215 4214 flog = getfile(fn)
4216 4215 try:
4217 4216 fnode = ctx.filenode(fn)
4218 4217 except error.LookupError:
4219 4218 continue
4220 4219
4221 4220 copied = flog.renamed(fnode)
4222 4221 copy = follow and copied and copied[0]
4223 4222 if copy:
4224 4223 copies.setdefault(rev, {})[fn] = copy
4225 4224 if fn in skip:
4226 4225 if copy:
4227 4226 skip[copy] = True
4228 4227 continue
4229 4228 files.append(fn)
4230 4229
4231 4230 if fn not in matches[rev]:
4232 4231 grepbody(fn, rev, flog.read(fnode))
4233 4232
4234 4233 pfn = copy or fn
4235 4234 if pfn not in matches[parent]:
4236 4235 try:
4237 4236 fnode = pctx.filenode(pfn)
4238 4237 grepbody(pfn, parent, flog.read(fnode))
4239 4238 except error.LookupError:
4240 4239 pass
4241 4240
4242 4241 for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
4243 4242 rev = ctx.rev()
4244 4243 parent = ctx.p1().rev()
4245 4244 for fn in sorted(revfiles.get(rev, [])):
4246 4245 states = matches[rev][fn]
4247 4246 copy = copies.get(rev, {}).get(fn)
4248 4247 if fn in skip:
4249 4248 if copy:
4250 4249 skip[copy] = True
4251 4250 continue
4252 4251 pstates = matches.get(parent, {}).get(copy or fn, [])
4253 4252 if pstates or states:
4254 4253 r = display(fn, ctx, pstates, states)
4255 4254 found = found or r
4256 4255 if r and not opts.get('all'):
4257 4256 skip[fn] = True
4258 4257 if copy:
4259 4258 skip[copy] = True
4260 4259 del matches[rev]
4261 4260 del revfiles[rev]
4262 4261
4263 4262 return not found
4264 4263
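The difflinestates helper inside grep is self-contained enough to run on its own: `difflib.SequenceMatcher` opcodes over the two sorted linestate lists become the "-"/"+" markers that --all prints for match-status changes. A standalone copy over plain strings:

```python
import difflib

def difflinestates(a, b):
    # Same approach as grep's nested helper: 'delete' opcodes mark
    # matches that disappeared ('-'), 'insert' opcodes mark matches
    # that appeared ('+'), and 'replace' emits both sides.
    sm = difflib.SequenceMatcher(None, a, b)
    for tag, alo, ahi, blo, bhi in sm.get_opcodes():
        if tag == 'insert':
            for i in range(blo, bhi):
                yield ('+', b[i])
        elif tag == 'delete':
            for i in range(alo, ahi):
                yield ('-', a[i])
        elif tag == 'replace':
            for i in range(alo, ahi):
                yield ('-', a[i])
            for i in range(blo, bhi):
                yield ('+', b[i])
```

Lines present in both inputs ('equal' opcodes) produce no output, which is why --all only shows revisions where a line's match status actually changed.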
4265 4264 @command('heads',
4266 4265 [('r', 'rev', '',
4267 4266 _('show only heads which are descendants of STARTREV'), _('STARTREV')),
4268 4267 ('t', 'topo', False, _('show topological heads only')),
4269 4268 ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
4270 4269 ('c', 'closed', False, _('show normal and closed branch heads')),
4271 4270 ] + templateopts,
4272 4271 _('[-ct] [-r STARTREV] [REV]...'))
4273 4272 def heads(ui, repo, *branchrevs, **opts):
4274 4273 """show branch heads
4275 4274
4276 4275 With no arguments, show all open branch heads in the repository.
4277 4276 Branch heads are changesets that have no descendants on the
4278 4277 same branch. They are where development generally takes place and
4279 4278 are the usual targets for update and merge operations.
4280 4279
4281 4280 If one or more REVs are given, only open branch heads on the
4282 4281 branches associated with the specified changesets are shown. This
4283 4282 means that you can use :hg:`heads .` to see the heads on the
4284 4283 currently checked-out branch.
4285 4284
4286 4285 If -c/--closed is specified, also show branch heads marked closed
4287 4286 (see :hg:`commit --close-branch`).
4288 4287
4289 4288 If STARTREV is specified, only those heads that are descendants of
4290 4289 STARTREV will be displayed.
4291 4290
4292 4291 If -t/--topo is specified, named branch mechanics will be ignored and only
4293 4292 topological heads (changesets with no children) will be shown.
4294 4293
4295 4294 Returns 0 if matching heads are found, 1 if not.
4296 4295 """
4297 4296
4298 4297 start = None
4299 4298 if 'rev' in opts:
4300 4299 start = scmutil.revsingle(repo, opts['rev'], None).node()
4301 4300
4302 4301 if opts.get('topo'):
4303 4302 heads = [repo[h] for h in repo.heads(start)]
4304 4303 else:
4305 4304 heads = []
4306 4305 for branch in repo.branchmap():
4307 4306 heads += repo.branchheads(branch, start, opts.get('closed'))
4308 4307 heads = [repo[h] for h in heads]
4309 4308
4310 4309 if branchrevs:
4311 4310 branches = set(repo[br].branch() for br in branchrevs)
4312 4311 heads = [h for h in heads if h.branch() in branches]
4313 4312
4314 4313 if opts.get('active') and branchrevs:
4315 4314 dagheads = repo.heads(start)
4316 4315 heads = [h for h in heads if h.node() in dagheads]
4317 4316
4318 4317 if branchrevs:
4319 4318 haveheads = set(h.branch() for h in heads)
4320 4319 if branches - haveheads:
4321 4320 headless = ', '.join(b for b in branches - haveheads)
4322 4321 msg = _('no open branch heads found on branches %s')
4323 4322 if opts.get('rev'):
4324 4323 msg += _(' (started at %s)') % opts['rev']
4325 4324 ui.warn((msg + '\n') % headless)
4326 4325
4327 4326 if not heads:
4328 4327 return 1
4329 4328
4330 4329 heads = sorted(heads, key=lambda x: -x.rev())
4331 4330 displayer = cmdutil.show_changeset(ui, repo, opts)
4332 4331 for ctx in heads:
4333 4332 displayer.show(ctx)
4334 4333 displayer.close()
4335 4334
4336 4335 @command('help',
4337 4336 [('e', 'extension', None, _('show only help for extensions')),
4338 4337 ('c', 'command', None, _('show only help for commands')),
4339 4338 ('k', 'keyword', None, _('show topics matching keyword')),
4340 4339 ],
4341 4340 _('[-eck] [TOPIC]'),
4342 4341 norepo=True)
4343 4342 def help_(ui, name=None, **opts):
4344 4343 """show help for a given topic or a help overview
4345 4344
4346 4345 With no arguments, print a list of commands with short help messages.
4347 4346
4348 4347 Given a topic, extension, or command name, print help for that
4349 4348 topic.
4350 4349
4351 4350 Returns 0 if successful.
4352 4351 """
4353 4352
4354 4353 textwidth = min(ui.termwidth(), 80) - 2
4355 4354
4356 4355 keep = []
4357 4356 if ui.verbose:
4358 4357 keep.append('verbose')
4359 4358 if sys.platform.startswith('win'):
4360 4359 keep.append('windows')
4361 4360 elif sys.platform == 'OpenVMS':
4362 4361 keep.append('vms')
4363 4362 elif sys.platform == 'plan9':
4364 4363 keep.append('plan9')
4365 4364 else:
4366 4365 keep.append('unix')
4367 4366 keep.append(sys.platform.lower())
4368 4367
4369 4368 section = None
4370 4369 if name and '.' in name:
4371 4370 name, section = name.split('.', 1)
4372 4371 section = section.lower()
4373 4372
4374 4373 text = help.help_(ui, name, **opts)
4375 4374
4376 4375 formatted, pruned = minirst.format(text, textwidth, keep=keep,
4377 4376 section=section)
4378 4377
4379 4378 # We could have been given a weird ".foo" section without a name
4381 4380 # to look for, or we could have simply failed to find "foo.bar"
4381 4380 # because bar isn't a section of foo
    if section and not (formatted and name):
        raise error.Abort(_("help section not found"))

    if 'verbose' in pruned:
        keep.append('omitted')
    else:
        keep.append('notomitted')
    formatted, pruned = minirst.format(text, textwidth, keep=keep,
                                       section=section)
    ui.write(formatted)


@command('identify|id',
    [('r', 'rev', '',
     _('identify the specified revision'), _('REV')),
    ('n', 'num', None, _('show local revision number')),
    ('i', 'id', None, _('show global revision id')),
    ('b', 'branch', None, _('show branch')),
    ('t', 'tags', None, _('show tags')),
    ('B', 'bookmarks', None, _('show bookmarks')),
    ] + remoteopts,
    _('[-nibtB] [-r REV] [SOURCE]'),
    optionalrepo=True)
def identify(ui, repo, source=None, rev=None,
             num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
    """identify the working directory or specified revision

    Print a summary identifying the repository state at REV using one or
    two parent hash identifiers, followed by a "+" if the working
    directory has uncommitted changes, the branch name (if not default),
    a list of tags, and a list of bookmarks.

    When REV is not given, print a summary of the current state of the
    repository.

    Specifying a path to a repository root or Mercurial bundle will
    cause lookup to operate on that repository/bundle.

    .. container:: verbose

      Examples:

      - generate a build identifier for the working directory::

          hg id --id > build-id.dat

      - find the revision corresponding to a tag::

          hg id -n -r 1.3

      - check the most recent revision of a remote repository::

          hg id -r tip http://selenic.com/hg/

    See :hg:`log` for generating more information about specific revisions,
    including full hash identifiers.

    Returns 0 if successful.
    """

    if not repo and not source:
        raise error.Abort(_("there is no Mercurial repository here "
                            "(.hg not found)"))

    if ui.debugflag:
        hexfunc = hex
    else:
        hexfunc = short
    default = not (num or id or branch or tags or bookmarks)
    output = []
    revs = []

    if source:
        source, branches = hg.parseurl(ui.expandpath(source))
        peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
        repo = peer.local()
        revs, checkout = hg.addbranchrevs(repo, peer, branches, None)

    if not repo:
        if num or branch or tags:
            raise error.Abort(
                _("can't query remote revision number, branch, or tags"))
        if not rev and revs:
            rev = revs[0]
        if not rev:
            rev = "tip"

        remoterev = peer.lookup(rev)
        if default or id:
            output = [hexfunc(remoterev)]

        def getbms():
            bms = []

            if 'bookmarks' in peer.listkeys('namespaces'):
                hexremoterev = hex(remoterev)
                bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
                       if bmr == hexremoterev]

            return sorted(bms)

        if bookmarks:
            output.extend(getbms())
        elif default and not ui.quiet:
            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(getbms())
            if bm:
                output.append(bm)
    else:
        ctx = scmutil.revsingle(repo, rev, None)

        if ctx.rev() is None:
            ctx = repo[None]
            parents = ctx.parents()
            taglist = []
            for p in parents:
                taglist.extend(p.tags())

            changed = ""
            if default or id or num:
                if (any(repo.status())
                    or any(ctx.sub(s).dirty() for s in ctx.substate)):
                    changed = '+'
            if default or id:
                output = ["%s%s" %
                  ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
            if num:
                output.append("%s%s" %
                  ('+'.join([str(p.rev()) for p in parents]), changed))
        else:
            if default or id:
                output = [hexfunc(ctx.node())]
            if num:
                output.append(str(ctx.rev()))
            taglist = ctx.tags()

        if default and not ui.quiet:
            b = ctx.branch()
            if b != 'default':
                output.append("(%s)" % b)

            # multiple tags for a single parent separated by '/'
            t = '/'.join(taglist)
            if t:
                output.append(t)

            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(ctx.bookmarks())
            if bm:
                output.append(bm)
        else:
            if branch:
                output.append(ctx.branch())

            if tags:
                output.extend(taglist)

            if bookmarks:
                output.extend(ctx.bookmarks())

    ui.write("%s\n" % ' '.join(output))

@command('import|patch',
    [('p', 'strip', 1,
      _('directory strip option for patch. This has the same '
        'meaning as the corresponding patch option'), _('NUM')),
     ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
     ('e', 'edit', False, _('invoke editor on commit messages')),
     ('f', 'force', None,
      _('skip check for outstanding uncommitted changes (DEPRECATED)')),
     ('', 'no-commit', None,
      _("don't commit, just update the working directory")),
     ('', 'bypass', None,
      _("apply patch without touching the working directory")),
     ('', 'partial', None,
      _('commit even if some hunks fail')),
     ('', 'exact', None,
      _('apply patch to the nodes from which it was generated')),
     ('', 'prefix', '',
      _('apply patch to subdirectory'), _('DIR')),
     ('', 'import-branch', None,
      _('use any branch information in patch (implied by --exact)'))] +
    commitopts + commitopts2 + similarityopts,
    _('[OPTION]... PATCH...'))
def import_(ui, repo, patch1=None, *patches, **opts):
    """import an ordered set of patches

    Import a list of patches and commit them individually (unless
    --no-commit is specified).

    Because import first applies changes to the working directory,
    import will abort if there are outstanding changes.

    You can import a patch straight from a mail message. Even patches
    as attachments work (to use the body part, it must have type
    text/plain or text/x-patch). The From and Subject headers of the
    email message are used as the default committer and commit message.
    All text/plain body parts before the first diff are added to the
    commit message.

    If the imported patch was generated by :hg:`export`, user and
    description from patch override values from message headers and
    body. Values given on command line with -m/--message and -u/--user
    override these.

    If --exact is specified, import will set the working directory to
    the parent of each patch before applying it, and will abort if the
    resulting changeset has a different ID than the one recorded in
    the patch. This may happen due to character set problems or other
    deficiencies in the text patch format.

    Use --bypass to apply and commit patches directly to the
    repository, not touching the working directory. Without --exact,
    patches will be applied on top of the working directory parent
    revision.

    With -s/--similarity, hg will attempt to discover renames and
    copies in the patch in the same way as :hg:`addremove`.

    Use --partial to ensure a changeset will be created from the patch
    even if some hunks fail to apply. Hunks that fail to apply will be
    written to a <target-file>.rej file. Conflicts can then be resolved
    by hand before :hg:`commit --amend` is run to update the created
    changeset. This flag exists to let people import patches that
    partially apply without losing the associated metadata (author,
    date, description, ...). Note that when none of the hunks apply
    cleanly, :hg:`import --partial` will create an empty changeset,
    importing only the patch metadata.

    It is possible to use external patch programs to perform the patch
    by setting the ``ui.patch`` configuration option. For the default
    internal tool, the fuzz can also be configured via ``patch.fuzz``.
    See :hg:`help config` for more information about configuration
    files and how to use these options.

    To read a patch from standard input, use "-" as the patch name. If
    a URL is specified, the patch will be downloaded from it.
    See :hg:`help dates` for a list of formats valid for -d/--date.

    .. container:: verbose

      Examples:

      - import a traditional patch from a website and detect renames::

          hg import -s 80 http://example.com/bugfix.patch

      - import a changeset from an hgweb server::

          hg import http://www.selenic.com/hg/rev/5ca8c111e9aa

      - import all the patches in a Unix-style mbox::

          hg import incoming-patches.mbox

      - attempt to exactly restore an exported changeset (not always
        possible)::

          hg import --exact proposed-fix.patch

      - use an external tool to apply a patch which is too fuzzy for
        the default internal tool::

          hg import --config ui.patch="patch --merge" fuzzy.patch

      - change the default fuzzing from 2 to a less strict 7::

          hg import --config patch.fuzz=7 fuzz.patch

    Returns 0 on success, 1 on partial success (see --partial).
    """

    if not patch1:
        raise error.Abort(_('need at least one patch to import'))

    patches = (patch1,) + patches

    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)

    update = not opts.get('bypass')
    if not update and opts.get('no_commit'):
        raise error.Abort(_('cannot use --no-commit with --bypass'))
    try:
        sim = float(opts.get('similarity') or 0)
    except ValueError:
        raise error.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise error.Abort(_('similarity must be between 0 and 100'))
    if sim and not update:
        raise error.Abort(_('cannot use --similarity with --bypass'))
    if opts.get('exact') and opts.get('edit'):
        raise error.Abort(_('cannot use --exact with --edit'))
    if opts.get('exact') and opts.get('prefix'):
        raise error.Abort(_('cannot use --exact with --prefix'))

    base = opts["base"]
    wlock = dsguard = lock = tr = None
    msgs = []
    ret = 0

    try:
        try:
            wlock = repo.wlock()

            if update:
                cmdutil.checkunfinished(repo)
            if (opts.get('exact') or not opts.get('force')) and update:
                cmdutil.bailifchanged(repo)

            if not opts.get('no_commit'):
                lock = repo.lock()
                tr = repo.transaction('import')
            else:
                dsguard = cmdutil.dirstateguard(repo, 'import')
            parents = repo[None].parents()
            for patchurl in patches:
                if patchurl == '-':
                    ui.status(_('applying patch from stdin\n'))
                    patchfile = ui.fin
                    patchurl = 'stdin'  # for error message
                else:
                    patchurl = os.path.join(base, patchurl)
                    ui.status(_('applying %s\n') % patchurl)
                    patchfile = hg.openpath(ui, patchurl)

                haspatch = False
                for hunk in patch.split(patchfile):
                    (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
                                                            parents, opts,
                                                            msgs, hg.clean)
                    if msg:
                        haspatch = True
                        ui.note(msg + '\n')
                    if update or opts.get('exact'):
                        parents = repo[None].parents()
                    else:
                        parents = [repo[node]]
                    if rej:
                        ui.write_err(_("patch applied partially\n"))
                        ui.write_err(_("(fix the .rej files and run "
                                       "`hg commit --amend`)\n"))
                        ret = 1
                        break

                if not haspatch:
                    raise error.Abort(_('%s: no diffs found') % patchurl)

            if tr:
                tr.close()
            if msgs:
                repo.savecommitmessage('\n* * *\n'.join(msgs))
            if dsguard:
                dsguard.close()
            return ret
        finally:
            # TODO: get rid of this meaningless try/finally enclosing.
            # this is kept only to reduce changes in a patch.
            pass
    finally:
        if tr:
            tr.release()
        release(lock, dsguard, wlock)

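The --similarity validation in `import_` above (accept a float, require it to land in [0, 100]) can be sketched standalone. This is an illustrative helper, not part of Mercurial; `parse_similarity` is a hypothetical name:

```python
def parse_similarity(value):
    # Mirrors the checks in `hg import -s`: an empty or missing value
    # means 0, non-numeric input and out-of-range values are rejected.
    try:
        sim = float(value or 0)
    except ValueError:
        raise ValueError('similarity must be a number')
    if sim < 0 or sim > 100:
        raise ValueError('similarity must be between 0 and 100')
    return sim
```

As in the command, validation happens once up front so later code can assume a sane value.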
@command('incoming|in',
    [('f', 'force', None,
      _('run even if remote repository is unrelated')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('', 'bundle', '',
     _('file to store the bundles into'), _('FILE')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmarks', False, _("compare bookmarks")),
    ('b', 'branch', [],
     _('a specific branch you would like to pull'), _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
def incoming(ui, repo, source="default", **opts):
    """show new changesets found in source

    Show new changesets found in the specified path/URL or the default
    pull location. These are the changesets that would have been pulled
    if a pull was requested at the time you issued this command.

    See pull for valid source format details.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a   added
        BM2               1234567890ab   advanced
        BM3               234567890abc   diverged
        BM4               34567890abcd   changed

      The action taken locally when pulling depends on the
      status of each bookmark:

      :``added``: pull will create it
      :``advanced``: pull will update it
      :``diverged``: pull will create a divergent bookmark
      :``changed``: result depends on remote changesets

      From the point of view of pulling behavior, bookmarks
      existing only in the remote repository are treated as ``added``,
      even if they are in fact locally deleted.

    .. container:: verbose

      For a remote repository, using --bundle avoids downloading the
      changesets twice if the incoming is followed by a pull.

      Examples:

      - show incoming changes with patches and full description::

          hg incoming -vp

      - show incoming changes excluding merges, store a bundle::

          hg in -vpM --bundle incoming.hg
          hg pull incoming.hg

      - briefly list changes inside a bundle::

          hg in changes.hg -T "{desc|firstline}\\n"

    Returns 0 if there are incoming changes, 1 otherwise.
    """
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        def display(other, chlist, displayer):
            revdag = cmdutil.graphrevs(other, chlist, opts)
            cmdutil.displaygraph(ui, repo, revdag, displayer,
                                 graphmod.asciiedges)

        hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
        return 0

    if opts.get('bundle') and opts.get('subrepos'):
        raise error.Abort(_('cannot combine --bundle and --subrepos'))

    if opts.get('bookmarks'):
        source, branches = hg.parseurl(ui.expandpath(source),
                                       opts.get('branch'))
        other = hg.peer(repo, opts, source)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(source))
        return bookmarks.incoming(ui, repo, other)

    repo._subtoppath = ui.expandpath(source)
    try:
        return hg.incoming(ui, repo, source, opts)
    finally:
        del repo._subtoppath

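The `added`/`advanced`/`diverged`/`changed` statuses described in the `incoming -B` docstring each imply a different pull-time action. The mapping below is an illustrative sketch, not Mercurial code; `pull_action` is a hypothetical helper:

```python
# Illustrative sketch: the pull-time action implied by each bookmark
# status that `hg incoming -B` reports.
PULL_ACTIONS = {
    'added': 'create the bookmark locally',
    'advanced': 'fast-forward the local bookmark',
    'diverged': 'create a divergent bookmark',
    'changed': 'depends on the remote changesets',
}

def pull_action(status):
    # Anything outside the four documented statuses means no action.
    return PULL_ACTIONS.get(status, 'no action')
```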
@command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
         norepo=True)
def init(ui, dest=".", **opts):
    """create a new repository in the given directory

    Initialize a new repository in the given directory. If the given
    directory does not exist, it will be created.

    If no directory is given, the current directory is used.

    It is possible to specify an ``ssh://`` URL as the destination.
    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    hg.peer(ui, opts, ui.expandpath(dest), create=True)

@command('locate',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
    ] + walkopts,
    _('[OPTION]... [PATTERN]...'))
def locate(ui, repo, *pats, **opts):
    """locate files matching specific patterns (DEPRECATED)

    Print files under Mercurial control in the working directory whose
    names match the given patterns.

    By default, this command searches all directories in the working
    directory. To search just the current directory and its
    subdirectories, use "--include .".

    If no patterns are given to match, this command prints the names
    of all files under Mercurial control in the working directory.

    If you want to feed the output of this command into the "xargs"
    command, use the -0 option to both this command and "xargs". This
    will avoid the problem of "xargs" treating single filenames that
    contain whitespace as multiple filenames.

    See :hg:`help files` for a more versatile command.

    Returns 0 if a match is found, 1 otherwise.
    """
    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    rev = scmutil.revsingle(repo, opts.get('rev'), None).node()

    ret = 1
    ctx = repo[rev]
    m = scmutil.match(ctx, pats, opts, default='relglob',
                      badfn=lambda x, y: False)

    for abs in ctx.matches(m):
        if opts.get('fullpath'):
            ui.write(repo.wjoin(abs), end)
        else:
            ui.write(((pats and m.rel(abs)) or abs), end)
        ret = 0

    return ret

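The `-0`/`--print0` behavior documented in `locate` comes down to choosing the record terminator once: NUL separators keep filenames containing whitespace intact when piped to `xargs -0`. A minimal sketch, with `format_names` as a hypothetical helper name:

```python
def format_names(names, print0=False):
    # NUL-terminated output is safe to feed to `xargs -0`; otherwise
    # emit one name per line, which breaks on names with whitespace.
    end = '\0' if print0 else '\n'
    return ''.join(name + end for name in names)
```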
@command('^log|history',
    [('f', 'follow', None,
     _('follow changeset history, or file history across copies and renames')),
    ('', 'follow-first', None,
     _('only follow the first parent of merge changesets (DEPRECATED)')),
    ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
    ('C', 'copies', None, _('show copied files')),
    ('k', 'keyword', [],
     _('do case-insensitive search for a given text'), _('TEXT')),
    ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
    ('', 'removed', None, _('include revisions where files were removed')),
    ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
    ('u', 'user', [], _('revisions committed by user'), _('USER')),
    ('', 'only-branch', [],
     _('show only changesets within the given named branch (DEPRECATED)'),
     _('BRANCH')),
    ('b', 'branch', [],
     _('show changesets within the given named branch'), _('BRANCH')),
    ('P', 'prune', [],
     _('do not display revision or any of its ancestors'), _('REV')),
    ] + logopts + walkopts,
    _('[OPTION]... [FILE]'),
    inferrepo=True)
def log(ui, repo, *pats, **opts):
    """show revision history of entire repository or files

    Print the revision history of the specified files or the entire
    project.

    If no revision range is specified, the default is ``tip:0`` unless
    --follow is set, in which case the working directory parent is
    used as the starting revision.

    File history is shown without following rename or copy history of
    files. Use -f/--follow with a filename to follow history across
    renames and copies. --follow without a filename will only show
    ancestors or descendants of the starting revision.

    By default this command prints revision number and changeset id,
    tags, non-trivial parents, user, date and time, and a summary for
    each commit. When the -v/--verbose switch is used, the list of
    changed files and full commit message are shown.

    With --graph the revisions are shown as an ASCII art DAG with the most
    recent changeset at the top.
    'o' is a changeset, '@' is a working directory parent, 'x' is obsolete,
    and '+' represents a fork where the changeset from the lines below is a
    parent of the 'o' merge on the same line.

    .. note::

       log -p/--patch may generate unexpected diff output for merge
       changesets, as it will only compare the merge changeset against
       its first parent. Also, only files different from BOTH parents
       will appear in files:.

    .. note::

       for performance reasons, log FILE may omit duplicate changes
       made on branches and will not show removals or mode changes. To
       see all such changes, use the --removed switch.

    .. container:: verbose

      Some examples:

      - changesets with full descriptions and file lists::

          hg log -v

      - changesets ancestral to the working directory::

          hg log -f

      - last 10 commits on the current branch::

          hg log -l 10 -b .

      - changesets showing all modifications of a file, including removals::

          hg log --removed file.c

      - all changesets that touch a directory, with diffs, excluding merges::

          hg log -Mp lib/

      - all revision numbers that match a keyword::

          hg log -k bug --template "{rev}\\n"

      - the full hash identifier of the working directory parent::

          hg log -r . --template "{node}\\n"

      - list available log templates::

          hg log -T list

      - check if a given changeset is included in a tagged release::

          hg log -r "a21ccf and ancestor(1.9)"

      - find all changesets by some user in a date range::

          hg log -k alice -d "may 2008 to jul 2008"

      - summary of all changesets after the last tag::

          hg log -r "last(tagged())::" --template "{desc|firstline}\\n"

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help revisions` and :hg:`help revsets` for more about
    specifying revisions.

    See :hg:`help templates` for more about pre-packaged styles and
    specifying custom templates.

    Returns 0 on success.

    """
    if opts.get('follow') and opts.get('rev'):
        opts['rev'] = [revset.formatspec('reverse(::%lr)', opts.get('rev'))]
        del opts['follow']

    if opts.get('graph'):
        return cmdutil.graphlog(ui, repo, *pats, **opts)

    revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
    limit = cmdutil.loglimit(opts)
    count = 0

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)

    displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
    for rev in revs:
        if count == limit:
            break
        ctx = repo[rev]
        copies = None
        if getrenamed is not None and rev:
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, rev)
                if rename:
                    copies.append((fn, rename[0]))
        if filematcher:
            revmatchfn = filematcher(ctx.rev())
        else:
            revmatchfn = None
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        if displayer.flush(ctx):
            count += 1

    displayer.close()

@command('manifest',
    [('r', 'rev', '', _('revision to display'), _('REV')),
     ('', 'all', False, _("list files from all revisions"))]
    + formatteropts,
    _('[-r REV]'))
def manifest(ui, repo, node=None, rev=None, **opts):
    """output the current or given revision of the project manifest

    Print a list of version controlled files for the given revision.
    If no revision is given, the first parent of the working directory
    is used, or the null revision if no revision is checked out.

    With -v, print file permissions, symlink and executable bits.
    With --debug, print file revision hashes.

    If option --all is specified, the list of all files from all revisions
    is printed. This includes deleted and renamed files.

    Returns 0 on success.
    """

    fm = ui.formatter('manifest', opts)

    if opts.get('all'):
        if rev or node:
            raise error.Abort(_("can't specify a revision with --all"))

        res = []
        prefix = "data/"
        suffix = ".i"
        plen = len(prefix)
        slen = len(suffix)
        lock = repo.lock()
        try:
            for fn, b, size in repo.store.datafiles():
                if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
                    res.append(fn[plen:-slen])
        finally:
            lock.release()
        for f in res:
            fm.startitem()
            fm.write("path", '%s\n', f)
        fm.end()
        return

    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if not node:
        node = rev

    char = {'l': '@', 'x': '*', '': ''}
    mode = {'l': '644', 'x': '755', '': '644'}
    ctx = scmutil.revsingle(repo, node)
    mf = ctx.manifest()
    for f in ctx:
        fm.startitem()
        fl = ctx[f].flags()
        fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
        fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
        fm.write('path', '%s\n', f)
    fm.end()

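The `char`/`mode` tables in `manifest` above map a file's flag to its verbose-mode display: `'l'` (symlink), `'x'` (executable), and `''` (regular file). A small sketch of the lookup, with `describe` as a hypothetical helper:

```python
# Sketch of the flag-to-display mapping used by `hg manifest -v`.
CHAR = {'l': '@', 'x': '*', '': ''}
MODE = {'l': '644', 'x': '755', '': '644'}

def describe(flag):
    # Returns (permission string, type character) for a manifest entry.
    return MODE[flag], CHAR[flag]
```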
@command('^merge',
    [('f', 'force', None,
      _('force a merge including outstanding changes (DEPRECATED)')),
    ('r', 'rev', '', _('revision to merge'), _('REV')),
    ('P', 'preview', None,
     _('review revisions to merge (no merge is performed)'))
    ] + mergetoolopts,
    _('[-P] [-f] [[-r] REV]'))
def merge(ui, repo, node=None, **opts):
    """merge another revision into working directory

    The current working directory is updated with all changes made in
    the requested revision since the last common predecessor revision.

    Files that changed between either parent are marked as changed for
    the next commit and a commit must be performed before any further
    updates to the repository are allowed. The next commit will have
    two parents.

    ``--tool`` can be used to specify the merge tool used for file
    merges. It overrides the HGMERGE environment variable and your
    configuration files. See :hg:`help merge-tools` for options.

    If no revision is specified, the working directory's parent is a
    head revision, and the current branch contains exactly one other
    head, the other head is merged with by default. Otherwise, an
    explicit revision with which to merge must be provided.

    :hg:`resolve` must be used to resolve unresolved files.

    To undo an uncommitted merge, use :hg:`update --clean .` which
    will check out a clean copy of the original merge parent, losing
    all changes.

    Returns 0 on success, 1 if there are unresolved files.
    """

    if opts.get('rev') and node:
        raise error.Abort(_("please specify just one revision"))
    if not node:
        node = opts.get('rev')

    if node:
        node = scmutil.revsingle(repo, node).node()

    if not node:
        node = repo[destutil.destmerge(repo)].node()

    if opts.get('preview'):
        # find nodes that are ancestors of p2 but not of p1
        p1 = repo.lookup('.')
        p2 = repo.lookup(node)
        nodes = repo.changelog.findmissing(common=[p1], heads=[p2])

        displayer = cmdutil.show_changeset(ui, repo, opts)
        for node in nodes:
            displayer.show(repo[node])
        displayer.close()
        return 0

    try:
        # ui.forcemerge is an internal variable, do not document
        repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
        return hg.merge(repo, node, force=opts.get('force'))
    finally:
        ui.setconfig('ui', 'forcemerge', '', 'merge')

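The default-target rule in the `merge` docstring (with no revision given, the working directory parent must be a head and the branch must contain exactly one other head) can be sketched as a standalone check. `default_merge_target` is a hypothetical helper, not Mercurial's implementation (the real logic lives in `destutil.destmerge`):

```python
def default_merge_target(branch_heads, wd_parent):
    # With no revision given, merge only when the current branch has
    # exactly one head other than the working directory parent.
    others = [h for h in branch_heads if h != wd_parent]
    if wd_parent not in branch_heads or len(others) != 1:
        raise ValueError('an explicit revision to merge must be provided')
    return others[0]
```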
@command('outgoing|out',
    [('f', 'force', None, _('run even when the destination is unrelated')),
    ('r', 'rev', [],
     _('a changeset intended to be included in the destination'), _('REV')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('B', 'bookmarks', False, _('compare bookmarks')),
    ('b', 'branch', [], _('a specific branch you would like to push'),
     _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in the destination

    Show changesets not found in the specified destination repository
    or the default push location. These are the changesets that would
    be pushed if a push was requested.

    See pull for details of valid destination formats.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a   added
        BM2                              deleted
        BM3               234567890abc   advanced
        BM4               34567890abcd   diverged
        BM5               4567890abcde   changed

      The action taken when pushing depends on the
      status of each bookmark:

      :``added``: push with ``-B`` will create it
      :``deleted``: push with ``-B`` will delete it
      :``advanced``: push will update it
      :``diverged``: push with ``-B`` will update it
      :``changed``: push with ``-B`` will update it

5240 5239 From the point of view of pushing behavior, bookmarks
5241 5240 existing only in the remote repository are treated as
5242 5241 ``deleted``, even if it is in fact added remotely.
5243 5242
5244 5243 Returns 0 if there are outgoing changes, 1 otherwise.
5245 5244 """
5246 5245 if opts.get('graph'):
5247 5246 cmdutil.checkunsupportedgraphflags([], opts)
5248 5247 o, other = hg._outgoing(ui, repo, dest, opts)
5249 5248 if not o:
5250 5249 cmdutil.outgoinghooks(ui, repo, other, opts, o)
5251 5250 return
5252 5251
5253 5252 revdag = cmdutil.graphrevs(repo, o, opts)
5254 5253 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
5255 5254 cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges)
5256 5255 cmdutil.outgoinghooks(ui, repo, other, opts, o)
5257 5256 return 0
5258 5257
5259 5258 if opts.get('bookmarks'):
5260 5259 dest = ui.expandpath(dest or 'default-push', dest or 'default')
5261 5260 dest, branches = hg.parseurl(dest, opts.get('branch'))
5262 5261 other = hg.peer(repo, opts, dest)
5263 5262 if 'bookmarks' not in other.listkeys('namespaces'):
5264 5263 ui.warn(_("remote doesn't support bookmarks\n"))
5265 5264 return 0
5266 5265 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
5267 5266 return bookmarks.outgoing(ui, repo, other)
5268 5267
5269 5268 repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
5270 5269 try:
5271 5270 return hg.outgoing(ui, repo, dest, opts)
5272 5271 finally:
5273 5272 del repo._subtoppath
5274 5273
@command('parents',
    [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
    ] + templateopts,
    _('[-r REV] [FILE]'),
    inferrepo=True)
def parents(ui, repo, file_=None, **opts):
    """show the parents of the working directory or revision (DEPRECATED)

    Print the working directory's parent revisions. If a revision is
    given via -r/--rev, the parent of that revision will be printed.
    If a file argument is given, the revision in which the file was
    last changed (before the working directory revision or the
    argument to --rev if given) is printed.

    This command is equivalent to::

        hg log -r "parents()" or
        hg log -r "parents(REV)" or
        hg log -r "max(file(FILE))" or
        hg log -r "max(::REV and file(FILE))"

    See :hg:`summary` and :hg:`help revsets` for related information.

    Returns 0 on success.
    """

    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    if file_:
        m = scmutil.match(ctx, (file_,), opts)
        if m.anypats() or len(m.files()) != 1:
            raise error.Abort(_('can only specify an explicit filename'))
        file_ = m.files()[0]
        filenodes = []
        for cp in ctx.parents():
            if not cp:
                continue
            try:
                filenodes.append(cp.filenode(file_))
            except error.LookupError:
                pass
        if not filenodes:
            raise error.Abort(_("'%s' not found in manifest!") % file_)
        p = []
        for fn in filenodes:
            fctx = repo.filectx(file_, fileid=fn)
            p.append(fctx.node())
    else:
        p = [cp.node() for cp in ctx.parents()]

    displayer = cmdutil.show_changeset(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(repo[n])
    displayer.close()

@command('paths', [], _('[NAME]'), optionalrepo=True)
def paths(ui, repo, search=None):
    """show aliases for remote repositories

    Show definition of symbolic path name NAME. If no name is given,
    show definition of all available names.

    Option -q/--quiet suppresses all output when searching for NAME
    and shows only the path names when listing all definitions.

    Path names are defined in the [paths] section of your
    configuration file and in ``/etc/mercurial/hgrc``. If run inside a
    repository, ``.hg/hgrc`` is used, too.

    The path names ``default`` and ``default-push`` have a special
    meaning. When performing a push or pull operation, they are used
    as fallbacks if no location is specified on the command-line.
    When ``default-push`` is set, it will be used for push and
    ``default`` will be used for pull; otherwise ``default`` is used
    as the fallback for both. When cloning a repository, the clone
    source is written as ``default`` in ``.hg/hgrc``. Note that
    ``default`` and ``default-push`` apply to all inbound (e.g.
    :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email` and
    :hg:`bundle`) operations.

    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    if search:
        for name, path in sorted(ui.paths.iteritems()):
            if name == search:
                ui.status("%s\n" % util.hidepassword(path.rawloc))
                return
        if not ui.quiet:
            ui.warn(_("not found!\n"))
        return 1
    else:
        for name, path in sorted(ui.paths.iteritems()):
            if ui.quiet:
                ui.write("%s\n" % name)
            else:
                ui.write("%s = %s\n" % (name,
                                        util.hidepassword(path.rawloc)))
            for subopt, value in sorted(path.suboptions.items()):
                ui.write('%s:%s = %s\n' % (name, subopt, value))

@command('phase',
    [('p', 'public', False, _('set changeset phase to public')),
     ('d', 'draft', False, _('set changeset phase to draft')),
     ('s', 'secret', False, _('set changeset phase to secret')),
     ('f', 'force', False, _('allow to move boundary backward')),
     ('r', 'rev', [], _('target revision'), _('REV')),
    ],
    _('[-p|-d|-s] [-f] [-r] [REV...]'))
def phase(ui, repo, *revs, **opts):
    """set or show the current phase name

    With no argument, show the phase name of the current revision(s).

    With one of -p/--public, -d/--draft or -s/--secret, change the
    phase value of the specified revisions.

    Unless -f/--force is specified, :hg:`phase` won't move changesets from a
    lower phase to a higher phase. Phases are ordered as follows::

        public < draft < secret

    Returns 0 on success, 1 if some phases could not be changed.

    (For more information about the phases concept, see :hg:`help phases`.)
    """
    # search for a unique phase argument
    targetphase = None
    for idx, name in enumerate(phases.phasenames):
        if opts[name]:
            if targetphase is not None:
                raise error.Abort(_('only one phase can be specified'))
            targetphase = idx

    # look for specified revision
    revs = list(revs)
    revs.extend(opts['rev'])
    if not revs:
        # display both parents as the second parent phase can influence
        # the phase of a merge commit
        revs = [c.rev() for c in repo[None].parents()]

    revs = scmutil.revrange(repo, revs)

    lock = None
    ret = 0
    if targetphase is None:
        # display
        for r in revs:
            ctx = repo[r]
            ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
    else:
        tr = None
        lock = repo.lock()
        try:
            tr = repo.transaction("phase")
            # set phase
            if not revs:
                raise error.Abort(_('empty revision set'))
            nodes = [repo[r].node() for r in revs]
            # moving revisions from public to draft may hide them
            # We have to check result on an unfiltered repository
            unfi = repo.unfiltered()
            getphase = unfi._phasecache.phase
            olddata = [getphase(unfi, r) for r in unfi]
            phases.advanceboundary(repo, tr, targetphase, nodes)
            if opts['force']:
                phases.retractboundary(repo, tr, targetphase, nodes)
            tr.close()
        finally:
            if tr is not None:
                tr.release()
            lock.release()
        getphase = unfi._phasecache.phase
        newdata = [getphase(unfi, r) for r in unfi]
        changes = sum(newdata[r] != olddata[r] for r in unfi)
        cl = unfi.changelog
        rejected = [n for n in nodes
                    if newdata[cl.rev(n)] < targetphase]
        if rejected:
            ui.warn(_('cannot move %i changesets to a higher '
                      'phase, use --force\n') % len(rejected))
            ret = 1
        if changes:
            msg = _('phase changed for %i changesets\n') % changes
            if ret:
                ui.status(msg)
            else:
                ui.note(msg)
        else:
            ui.warn(_('no phases changed\n'))
    return ret

def postincoming(ui, repo, modheads, optupdate, checkout):
    if modheads == 0:
        return
    if optupdate:
        try:
            brev = checkout
            movemarkfrom = None
            if not checkout:
                updata = destutil.destupdate(repo)
                checkout, movemarkfrom, brev = updata
            ret = hg.update(repo, checkout)
        except error.UpdateAbort as inst:
            msg = _("not updating: %s") % str(inst)
            hint = inst.hint
            raise error.UpdateAbort(msg, hint=hint)
        if not ret and not checkout:
            if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
                ui.status(_("updating bookmark %s\n") % repo._activebookmark)
        return ret
    if modheads > 1:
        currentbranchheads = len(repo.branchheads())
        if currentbranchheads == modheads:
            ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
        elif currentbranchheads > 1:
            ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
                        "merge)\n"))
        else:
            ui.status(_("(run 'hg heads' to see heads)\n"))
    else:
        ui.status(_("(run 'hg update' to get a working copy)\n"))

@command('^pull',
    [('u', 'update', None,
     _('update to new branch head if changesets were pulled')),
    ('f', 'force', None, _('run even when remote repository is unrelated')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
    ('b', 'branch', [], _('a specific branch you would like to pull'),
     _('BRANCH')),
    ] + remoteopts,
    _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
def pull(ui, repo, source="default", **opts):
    """pull changes from the specified source

    Pull changes from a remote repository to a local one.

    This finds all changes from the repository at the specified path
    or URL and adds them to a local repository (the current one unless
    -R is specified). By default, this does not update the copy of the
    project in the working directory.

    Use :hg:`incoming` if you want to see what would have been added
    by a pull at the time you issued this command. If you then decide
    to add those changes to the repository, you should use :hg:`pull
    -r X` where ``X`` is the last changeset listed by :hg:`incoming`.

    If SOURCE is omitted, the 'default' path will be used.
    See :hg:`help urls` for more information.

    Returns 0 on success, 1 if an update had unresolved files.
    """
    source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
    ui.status(_('pulling from %s\n') % util.hidepassword(source))
    other = hg.peer(repo, opts, source)
    try:
        revs, checkout = hg.addbranchrevs(repo, other, branches,
                                          opts.get('rev'))

        pullopargs = {}
        if opts.get('bookmark'):
            if not revs:
                revs = []
            # The list of bookmarks used here is not the one used to actually
            # update the bookmark name. This can result in the revision pulled
            # not ending up with the name of the bookmark because of a race
            # condition on the server. (See issue 4689 for details)
            remotebookmarks = other.listkeys('bookmarks')
            pullopargs['remotebookmarks'] = remotebookmarks
            for b in opts['bookmark']:
                if b not in remotebookmarks:
                    raise error.Abort(_('remote bookmark %s not found!') % b)
                revs.append(remotebookmarks[b])

        if revs:
            try:
                # When 'rev' is a bookmark name, we cannot guarantee that it
                # will be updated with that name because of a race condition
                # server side. (See issue 4689 for details)
                oldrevs = revs
                revs = [] # actually, nodes
                for r in oldrevs:
                    node = other.lookup(r)
                    revs.append(node)
                    if r == checkout:
                        checkout = node
            except error.CapabilityError:
                err = _("other repository doesn't support revision lookup, "
                        "so a rev cannot be specified.")
                raise error.Abort(err)

        pullopargs.update(opts.get('opargs', {}))
        modheads = exchange.pull(repo, other, heads=revs,
                                 force=opts.get('force'),
                                 bookmarks=opts.get('bookmark', ()),
                                 opargs=pullopargs).cgresult
        if checkout:
            checkout = str(repo.changelog.rev(checkout))
        repo._subtoppath = source
        try:
            ret = postincoming(ui, repo, modheads, opts.get('update'), checkout)
        finally:
            del repo._subtoppath

    finally:
        other.close()
    return ret

@command('^push',
    [('f', 'force', None, _('force push')),
    ('r', 'rev', [],
     _('a changeset intended to be included in the destination'),
     _('REV')),
    ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
    ('b', 'branch', [],
     _('a specific branch you would like to push'), _('BRANCH')),
    ('', 'new-branch', False, _('allow pushing a new branch')),
    ] + remoteopts,
    _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
def push(ui, repo, dest=None, **opts):
    """push changes to the specified destination

    Push changesets from the local repository to the specified
    destination.

    This operation is symmetrical to pull: it is identical to a pull
    in the destination repository from the current one.

    By default, push will not allow creation of new heads at the
    destination, since multiple heads would make it unclear which head
    to use. In this situation, it is recommended to pull and merge
    before pushing.

    Use --new-branch if you want to allow push to create a new named
    branch that is not present at the destination. This allows you to
    only create a new branch without forcing other changes.

    .. note::

       Extra care should be taken with the -f/--force option,
       which will push all new heads on all branches, an action which will
       almost always cause confusion for collaborators.

    If -r/--rev is used, the specified revision and all its ancestors
    will be pushed to the remote repository.

    If -B/--bookmark is used, the specified bookmarked revision, its
    ancestors, and the bookmark will be pushed to the remote
    repository.

    Please see :hg:`help urls` for important details about ``ssh://``
    URLs. If DESTINATION is omitted, a default path will be used.

    Returns 0 if push was successful, 1 if nothing to push.
    """

    if opts.get('bookmark'):
        ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
        for b in opts['bookmark']:
            # translate -B options to -r so changesets get pushed
            if b in repo._bookmarks:
                opts.setdefault('rev', []).append(b)
            else:
                # if we try to push a deleted bookmark, translate it to null
                # this lets simultaneous -r, -b options continue working
                opts.setdefault('rev', []).append("null")

    path = ui.paths.getpath(dest, default='default')
    if not path:
        raise error.Abort(_('default repository not configured!'),
                          hint=_('see the "path" section in "hg help config"'))
    dest = path.pushloc or path.loc
    branches = (path.branch, opts.get('branch') or [])
    ui.status(_('pushing to %s\n') % util.hidepassword(dest))
    revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
        if not revs:
            raise error.Abort(_("specified revisions evaluate to an empty set"),
                              hint=_("use different revision arguments"))

    repo._subtoppath = dest
    try:
        # push subrepos depth-first for coherent ordering
        c = repo['']
        subs = c.substate # only repos that are committed
        for s in sorted(subs):
            result = c.sub(s).push(opts)
            if result == 0:
                return not result
    finally:
        del repo._subtoppath
    pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
                           newbranch=opts.get('new_branch'),
                           bookmarks=opts.get('bookmark', ()),
                           opargs=opts.get('opargs'))

    result = not pushop.cgresult

    if pushop.bkresult is not None:
        if pushop.bkresult == 2:
            result = 2
        elif not result and pushop.bkresult:
            result = 2

    return result

@command('recover', [])
def recover(ui, repo):
    """roll back an interrupted transaction

    Recover from an interrupted commit or pull.

    This command tries to fix the repository status after an
    interrupted operation. It should only be necessary when Mercurial
    suggests it.

    Returns 0 if successful, 1 if nothing to recover or verify fails.
    """
    if repo.recover():
        return hg.verify(repo)
    return 1

@command('^remove|rm',
    [('A', 'after', None, _('record delete for missing files')),
    ('f', 'force', None,
     _('remove (and delete) file even if added or modified')),
    ] + subrepoopts + walkopts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def remove(ui, repo, *pats, **opts):
    """remove the specified files on the next commit

    Schedule the indicated files for removal from the current branch.

    This command schedules the files to be removed at the next commit.
    To undo a remove before that, see :hg:`revert`. To undo added
    files, see :hg:`forget`.

    .. container:: verbose

      -A/--after can be used to remove only files that have already
      been deleted, -f/--force can be used to force deletion, and -Af
      can be used to remove files from the next revision without
      deleting them from the working directory.

      The following table details the behavior of remove for different
      file states (columns) and option combinations (rows). The file
      states are Added [A], Clean [C], Modified [M] and Missing [!]
      (as reported by :hg:`status`). The actions are Warn, Remove
      (from branch) and Delete (from disk):

      ========= == == == ==
      opt/state A  C  M  !
      ========= == == == ==
      none      W  RD W  R
      -f        R  RD RD R
      -A        W  W  W  R
      -Af       R  R  R  R
      ========= == == == ==

      Note that remove never deletes files in Added [A] state from the
      working directory, not even if option --force is specified.

    Returns 0 on success, 1 if any warnings encountered.
    """

    after, force = opts.get('after'), opts.get('force')
    if not pats and not after:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    subrepos = opts.get('subrepos')
    return cmdutil.remove(ui, repo, m, "", after, force, subrepos)

@command('rename|move|mv',
    [('A', 'after', None, _('record a rename that has already occurred')),
    ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... SOURCE... DEST'))
def rename(ui, repo, *pats, **opts):
    """rename files; equivalent of copy + remove

    Mark dest as copies of sources; mark sources for deletion. If dest
    is a directory, copies are put in that directory. If dest is a
    file, there can only be one source.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect at the next commit. To undo a rename
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    wlock = repo.wlock(False)
    try:
        return cmdutil.copy(ui, repo, pats, opts, rename=True)
    finally:
        wlock.release()

@command('resolve',
    [('a', 'all', None, _('select all unresolved files')),
    ('l', 'list', None, _('list state of files needing merge')),
    ('m', 'mark', None, _('mark files as resolved')),
    ('u', 'unmark', None, _('mark files as unresolved')),
    ('n', 'no-status', None, _('hide status prefix'))]
    + mergetoolopts + walkopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def resolve(ui, repo, *pats, **opts):
    """redo merges or set/view the merge status of files

    Merges with unresolved conflicts are often the result of
    non-interactive merging using the ``internal:merge`` configuration
    setting, or a command-line merge tool like ``diff3``. The resolve
    command is used to manage the files involved in a merge, after
    :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
    working directory must have two parents). See :hg:`help
    merge-tools` for information on configuring merge tools.

    The resolve command can be used in the following ways:

    - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
      files, discarding any previous merge attempts. Re-merging is not
      performed for files already marked as resolved. Use ``--all/-a``
      to select all unresolved files. ``--tool`` can be used to specify
      the merge tool used for the given files. It overrides the HGMERGE
      environment variable and your configuration files. Previous file
      contents are saved with a ``.orig`` suffix.

    - :hg:`resolve -m [FILE]`: mark a file as having been resolved
      (e.g. after having manually fixed-up the files). The default is
      to mark all unresolved files.

    - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
      default is to mark all resolved files.

    - :hg:`resolve -l`: list files which had or still have conflicts.
      In the printed list, ``U`` = unresolved and ``R`` = resolved.

    Note that Mercurial will not let you commit files with unresolved
    merge conflicts. You must use :hg:`resolve -m ...` before you can
    commit after a conflicting merge.

    Returns 0 on success, 1 if any files fail a resolve attempt.
    """

    all, mark, unmark, show, nostatus = \
        [opts.get(o) for o in 'all mark unmark list no_status'.split()]

    if (show and (mark or unmark)) or (mark and unmark):
        raise error.Abort(_("too many options specified"))
    if pats and all:
        raise error.Abort(_("can't specify --all and patterns"))
    if not (all or pats or show or mark or unmark):
        raise error.Abort(_('no files or directories specified'),
                          hint=('use --all to re-merge all unresolved files'))

    if show:
        fm = ui.formatter('resolve', opts)
        ms = mergemod.mergestate.read(repo)
        m = scmutil.match(repo[None], pats, opts)
        for f in ms:
            if not m(f):
                continue
            l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
                              'd': 'driverresolved'}[ms[f]]
            fm.startitem()
            fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
            fm.write('path', '%s\n', f, label=l)
        fm.end()
        return 0

    wlock = repo.wlock()
    try:
        ms = mergemod.mergestate.read(repo)

        if not (ms.active() or repo.dirstate.p2() != nullid):
            raise error.Abort(
                _('resolve command not applicable when not merging'))

        wctx = repo[None]

        if ms.mergedriver and ms.mdstate() == 'u':
            proceed = mergemod.driverpreprocess(repo, ms, wctx)
            ms.commit()
            # allow mark and unmark to go through
            if not mark and not unmark and not proceed:
                return 1

        m = scmutil.match(wctx, pats, opts)
        ret = 0
        didwork = False
        runconclude = False

        tocomplete = []
        for f in ms:
            if not m(f):
                continue

            didwork = True

            # don't let driver-resolved files be marked, and run the conclude
            # step if asked to resolve
            if ms[f] == "d":
                exact = m.exact(f)
                if mark:
                    if exact:
                        ui.warn(_('not marking %s as it is driver-resolved\n')
                                % f)
                elif unmark:
                    if exact:
                        ui.warn(_('not unmarking %s as it is driver-resolved\n')
                                % f)
                else:
                    runconclude = True
                continue

            if mark:
                ms.mark(f, "r")
            elif unmark:
                ms.mark(f, "u")
            else:
                # backup pre-resolve (merge uses .orig for its own purposes)
                a = repo.wjoin(f)
                try:
                    util.copyfile(a, a + ".resolve")
                except (IOError, OSError) as inst:
                    if inst.errno != errno.ENOENT:
                        raise

                try:
                    # preresolve file
                    ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                 'resolve')
                    complete, r = ms.preresolve(f, wctx)
                    if not complete:
                        tocomplete.append(f)
                    elif r:
                        ret = 1
                finally:
                    ui.setconfig('ui', 'forcemerge', '', 'resolve')
                    ms.commit()

                # replace filemerge's .orig file with our resolve file, but only
                # for merges that are complete
                if complete:
                    try:
                        util.rename(a + ".resolve",
                                    cmdutil.origpath(ui, repo, a))
                    except OSError as inst:
                        if inst.errno != errno.ENOENT:
                            raise

        for f in tocomplete:
            try:
                # resolve file
                ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                             'resolve')
                r = ms.resolve(f, wctx)
                if r:
                    ret = 1
            finally:
                ui.setconfig('ui', 'forcemerge', '', 'resolve')
                ms.commit()

            # replace filemerge's .orig file with our resolve file
            a = repo.wjoin(f)
            try:
                util.rename(a + ".resolve", cmdutil.origpath(ui, repo, a))
            except OSError as inst:
                if inst.errno != errno.ENOENT:
                    raise

        ms.commit()
        ms.recordactions()

        if not didwork and pats:
            ui.warn(_("arguments do not match paths that need resolving\n"))
        elif ms.mergedriver and ms.mdstate() != 's':
            # run conclude step when either a driver-resolved file is requested
            # or there are no driver-resolved files
            # we can't use 'ret' to determine whether any files are unresolved
            # because we might not have tried to resolve some
            if ((runconclude or not list(ms.driverresolved()))
                and not list(ms.unresolved())):
                proceed = mergemod.driverconclude(repo, ms, wctx)
                ms.commit()
                if not proceed:
                    return 1

    finally:
        wlock.release()

    # Nudge users into finishing an unfinished operation
    unresolvedf = list(ms.unresolved())
    driverresolvedf = list(ms.driverresolved())
    if not unresolvedf and not driverresolvedf:
        ui.status(_('(no more unresolved files)\n'))
    elif not unresolvedf:
        ui.status(_('(no more unresolved files -- '
                    'run "hg resolve --all" to conclude)\n'))

    return ret

@command('revert',
    [('a', 'all', None, _('revert all changes when no arguments given')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
    ('C', 'no-backup', None, _('do not save backup copies of files')),
    ('i', 'interactive', None,
     _('interactively select the changes (EXPERIMENTAL)')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [-r REV] [NAME]...'))
def revert(ui, repo, *pats, **opts):
    """restore files to their checkout state

    .. note::

       To check out earlier revisions, you should use :hg:`update REV`.
       To cancel an uncommitted merge (and lose your changes),
       use :hg:`update --clean .`.

    With no revision specified, revert the specified files or directories
    to the contents they had in the parent of the working directory.
    This restores the contents of files to an unmodified
    state and unschedules adds, removes, copies, and renames. If the
    working directory has two parents, you must explicitly specify a
    revision.

    Using the -r/--rev or -d/--date options, revert the given files or
    directories to their states as of a specific revision. Because
    revert does not change the working directory parents, this will
    cause these files to appear modified. This can be helpful to "back
    out" some or all of an earlier change. See :hg:`backout` for a
    related method.

    Modified files are saved with a .orig suffix before reverting.
    To disable these backups, use --no-backup.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help backout` for a way to reverse the effect of an
    earlier changeset.

    Returns 0 on success.
    """

    if opts.get("date"):
        if opts.get("rev"):
            raise error.Abort(_("you can't specify a revision and a date"))
        opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])

    parent, p2 = repo.dirstate.parents()
    if not opts.get('rev') and p2 != nullid:
        # revert after merge is a trap for new users (issue2915)
        raise error.Abort(_('uncommitted merge with no revision specified'),
                          hint=_('use "hg update" or see "hg help revert"'))

    ctx = scmutil.revsingle(repo, opts.get('rev'))

    if (not (pats or opts.get('include') or opts.get('exclude') or
             opts.get('all') or opts.get('interactive'))):
        msg = _("no files or directories specified")
        if p2 != nullid:
            hint = _("uncommitted merge, use --all to discard all changes,"
                     " or 'hg update -C .' to abort the merge")
            raise error.Abort(msg, hint=hint)
        dirty = any(repo.status())
        node = ctx.node()
        if node != parent:
            if dirty:
                hint = _("uncommitted changes, use --all to discard all"
6058 6057 " changes, or 'hg update %s' to update") % ctx.rev()
6059 6058 else:
6060 6059 hint = _("use --all to revert all files,"
6061 6060 " or 'hg update %s' to update") % ctx.rev()
6062 6061 elif dirty:
6063 6062 hint = _("uncommitted changes, use --all to discard all changes")
6064 6063 else:
6065 6064 hint = _("use --all to revert all files")
6066 6065 raise error.Abort(msg, hint=hint)
6067 6066
6068 6067 return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)
6069 6068
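# A hypothetical shell session sketching the revert behaviors documented
# above (file names and revision numbers are illustrative only):
#
#   $ hg revert foo.c                 # restore foo.c from the working
#                                     # directory's parent; foo.c.orig kept
#   $ hg revert -r 42 foo.c           # restore foo.c as of revision 42;
#                                     # foo.c then shows up as modified
#   $ hg revert --all --no-backup     # discard every change, no backups
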
@command('rollback', dryrunopts +
         [('f', 'force', False, _('ignore safety measures'))])
def rollback(ui, repo, **opts):
    """roll back the last transaction (DANGEROUS) (DEPRECATED)

    Please use :hg:`commit --amend` instead of rollback to correct
    mistakes in the last commit.

    This command should be used with care. There is only one level of
    rollback, and there is no way to undo a rollback. It will also
    restore the dirstate at the time of the last transaction, losing
    any dirstate changes since that time. This command does not alter
    the working directory.

    Transactions are used to encapsulate the effects of all commands
    that create new changesets or propagate existing changesets into a
    repository.

    .. container:: verbose

      For example, the following commands are transactional, and their
      effects can be rolled back:

      - commit
      - import
      - pull
      - push (with this repository as the destination)
      - unbundle

      To avoid permanent data loss, rollback will refuse to roll back a
      commit transaction if it isn't checked out. Use --force to
      override this protection.

      This command is not intended for use on public repositories. Once
      changes are visible for pull by other users, rolling a transaction
      back locally is ineffective (someone else may already have pulled
      the changes). Furthermore, a race is possible with readers of the
      repository; for example an in-progress pull from the repository
      may fail if a rollback is performed.

    Returns 0 on success, 1 if no rollback data is available.
    """
    return repo.rollback(dryrun=opts.get('dry_run'),
                         force=opts.get('force'))

@command('root', [])
def root(ui, repo):
    """print the root (top) of the current working directory

    Print the root directory of the current repository.

    Returns 0 on success.
    """
    ui.write(repo.root + "\n")

@command('^serve',
    [('A', 'accesslog', '', _('name of access log file to write to'),
     _('FILE')),
    ('d', 'daemon', None, _('run server in background')),
    ('', 'daemon-pipefds', '', _('used internally by daemon mode'), _('FILE')),
    ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
    # use string type, then we can check if something was passed
    ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
    ('a', 'address', '', _('address to listen on (default: all interfaces)'),
     _('ADDR')),
    ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
     _('PREFIX')),
    ('n', 'name', '',
     _('name to show in web pages (default: working directory)'), _('NAME')),
    ('', 'web-conf', '',
     _('name of the hgweb config file (see "hg help hgweb")'), _('FILE')),
    ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
     _('FILE')),
    ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
    ('', 'stdio', None, _('for remote clients')),
    ('', 'cmdserver', '', _('for remote clients'), _('MODE')),
    ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
    ('', 'style', '', _('template style to use'), _('STYLE')),
    ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
    ('', 'certificate', '', _('SSL certificate file'), _('FILE'))],
    _('[OPTION]...'),
    optionalrepo=True)
def serve(ui, repo, **opts):
    """start stand-alone webserver

    Start a local HTTP repository browser and pull server. You can use
    this for ad-hoc sharing and browsing of repositories. It is
    recommended to use a real web server to serve a repository for
    longer periods of time.

    Please note that the server does not implement access control.
    This means that, by default, anybody can read from the server and
    nobody can write to it. Set the ``web.allow_push`` option to ``*``
    to allow everybody to push to the server. You should use a real
    web server if you need to authenticate users.

    By default, the server logs accesses to stdout and errors to
    stderr. Use the -A/--accesslog and -E/--errorlog options to log to
    files.

    To have the server choose a free port number to listen on, specify
    a port number of 0; in this case, the server will print the port
    number it uses.

    Returns 0 on success.
    """

    if opts["stdio"] and opts["cmdserver"]:
        raise error.Abort(_("cannot use --stdio with --cmdserver"))

    if opts["stdio"]:
        if repo is None:
            raise error.RepoError(_("there is no Mercurial repository here"
                                    " (.hg not found)"))
        s = sshserver.sshserver(ui, repo)
        s.serve_forever()

    if opts["cmdserver"]:
        import commandserver
        service = commandserver.createservice(ui, repo, opts)
    else:
        service = hgweb.createservice(ui, repo, opts)
    return cmdutil.service(opts, initfn=service.init, runfn=service.run)

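# A hypothetical invocation sketching the serve options documented above
# (address and log file paths are illustrative only):
#
#   $ hg serve -p 0 -a 127.0.0.1 \
#        -A access.log -E error.log -d --pid-file hg.pid
#
# With -p 0 the server picks a free port and prints it; -d detaches the
# server into the background.
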
@command('^status|st',
    [('A', 'all', None, _('show status of all files')),
    ('m', 'modified', None, _('show only modified files')),
    ('a', 'added', None, _('show only added files')),
    ('r', 'removed', None, _('show only removed files')),
    ('d', 'deleted', None, _('show only deleted (but tracked) files')),
    ('c', 'clean', None, _('show only files without changes')),
    ('u', 'unknown', None, _('show only unknown (not tracked) files')),
    ('i', 'ignored', None, _('show only ignored files')),
    ('n', 'no-status', None, _('hide status prefix')),
    ('C', 'copies', None, _('show source of copied files')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('', 'rev', [], _('show difference from revision'), _('REV')),
    ('', 'change', '', _('list the changed files of a revision'), _('REV')),
    ] + walkopts + subrepoopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show status of files in the repository. If names are given, only
    files that match are shown. Files that are clean or ignored or
    the source of a copy/move operation are not listed unless
    -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
    Unless options described with "show only ..." are given, the
    options -mardu are used.

    Option -q/--quiet hides untracked (unknown and ignored) files
    unless explicitly requested with -u/--unknown or -i/--ignored.

    .. note::

       status may appear to disagree with diff if permissions have
       changed or a merge has occurred. The standard diff format does
       not report permission changes and diff only reports changes
       relative to one merge parent.

    If one revision is given, it is used as the base revision.
    If two revisions are given, the differences between them are
    shown. The --change option can also be used as a shortcut to list
    the changed files of a revision from its first parent.

    The codes used to show the status of files are::

      M = modified
      A = added
      R = removed
      C = clean
      ! = missing (deleted by non-hg command, but still tracked)
      ? = not tracked
      I = ignored
        = origin of the previous file (with --copies)

    .. container:: verbose

      Examples:

      - show changes in the working directory relative to a
        changeset::

          hg status --rev 9353

      - show changes in the working directory relative to the
        current directory (see :hg:`help patterns` for more information)::

          hg status re:

      - show all changes including copies in an existing changeset::

          hg status --copies --change 9353

      - get a NUL separated list of added files, suitable for xargs::

          hg status -an0

    Returns 0 on success.
    """

    revs = opts.get('rev')
    change = opts.get('change')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if pats:
        cwd = repo.getcwd()
    else:
        cwd = ''

    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    copy = {}
    states = 'modified added removed deleted unknown ignored clean'.split()
    show = [k for k in states if opts.get(k)]
    if opts.get('all'):
        show += ui.quiet and (states[:4] + ['clean']) or states
    if not show:
        if ui.quiet:
            show = states[:4]
        else:
            show = states[:5]

    m = scmutil.match(repo[node2], pats, opts)
    stat = repo.status(node1, node2, m,
                       'ignored' in show, 'clean' in show, 'unknown' in show,
                       opts.get('subrepos'))
    changestates = zip(states, 'MAR!?IC', stat)

    if (opts.get('all') or opts.get('copies')
        or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
        copy = copies.pathcopies(repo[node1], repo[node2], m)

    fm = ui.formatter('status', opts)
    fmt = '%s' + end
    showchar = not opts.get('no_status')

    for state, char, files in changestates:
        if state in show:
            label = 'status.' + state
            for f in files:
                fm.startitem()
                fm.condwrite(showchar, 'status', '%s ', char, label=label)
                fm.write('path', fmt, repo.pathto(f, cwd), label=label)
                if f in copy:
                    fm.write("copy", '  %s' + end, repo.pathto(copy[f], cwd),
                             label='status.copied')
    fm.end()

@command('^summary|sum',
    [('', 'remote', None, _('check for push and pull'))], '[--remote]')
def summary(ui, repo, **opts):
    """summarize working directory state

    This generates a brief summary of the working directory state,
    including parents, branch, commit status, phase and available updates.

    With the --remote option, this will check the default paths for
    incoming and outgoing changes. This can be time-consuming.

    Returns 0 on success.
    """

    ctx = repo[None]
    parents = ctx.parents()
    pnode = parents[0].node()
    marks = []

    for p in parents:
        # label with log.changeset (instead of log.parent) since this
        # shows a working directory parent *changeset*:
        # i18n: column positioning for "hg summary"
        ui.write(_('parent: %d:%s ') % (p.rev(), str(p)),
                 label='log.changeset changeset.%s' % p.phasestr())
        ui.write(' '.join(p.tags()), label='log.tag')
        if p.bookmarks():
            marks.extend(p.bookmarks())
        if p.rev() == -1:
            if not len(repo):
                ui.write(_(' (empty repository)'))
            else:
                ui.write(_(' (no revision checked out)'))
        ui.write('\n')
        if p.description():
            ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
                      label='log.summary')

    branch = ctx.branch()
    bheads = repo.branchheads(branch)
    # i18n: column positioning for "hg summary"
    m = _('branch: %s\n') % branch
    if branch != 'default':
        ui.write(m, label='log.branch')
    else:
        ui.status(m, label='log.branch')

    if marks:
        active = repo._activebookmark
        # i18n: column positioning for "hg summary"
        ui.write(_('bookmarks:'), label='log.bookmark')
        if active is not None:
            if active in marks:
                ui.write(' *' + active, label=activebookmarklabel)
                marks.remove(active)
            else:
                ui.write(' [%s]' % active, label=activebookmarklabel)
        for m in marks:
            ui.write(' ' + m, label='log.bookmark')
        ui.write('\n', label='log.bookmark')

    status = repo.status(unknown=True)

    c = repo.dirstate.copies()
    copied, renamed = [], []
    for d, s in c.iteritems():
        if s in status.removed:
            status.removed.remove(s)
            renamed.append(d)
        else:
            copied.append(d)
        if d in status.added:
            status.added.remove(d)

    try:
        ms = mergemod.mergestate.read(repo)
    except error.UnsupportedMergeRecords as e:
        s = ' '.join(e.recordtypes)
        ui.warn(
            _('warning: merge state has unsupported record types: %s\n') % s)
        unresolved = 0
    else:
        unresolved = [f for f in ms if ms[f] == 'u']

    subs = [s for s in ctx.substate if ctx.sub(s).dirty()]

    labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
              (ui.label(_('%d added'), 'status.added'), status.added),
              (ui.label(_('%d removed'), 'status.removed'), status.removed),
              (ui.label(_('%d renamed'), 'status.copied'), renamed),
              (ui.label(_('%d copied'), 'status.copied'), copied),
              (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
              (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
              (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
              (ui.label(_('%d subrepos'), 'status.modified'), subs)]
    t = []
    for l, s in labels:
        if s:
            t.append(l % len(s))

    t = ', '.join(t)
    cleanworkdir = False

    if repo.vfs.exists('graftstate'):
        t += _(' (graft in progress)')
    if repo.vfs.exists('updatestate'):
        t += _(' (interrupted update)')
    elif len(parents) > 1:
        t += _(' (merge)')
    elif branch != parents[0].branch():
        t += _(' (new branch)')
    elif (parents[0].closesbranch() and
          pnode in repo.branchheads(branch, closed=True)):
        t += _(' (head closed)')
    elif not (status.modified or status.added or status.removed or renamed or
              copied or subs):
        t += _(' (clean)')
        cleanworkdir = True
    elif pnode not in bheads:
        t += _(' (new branch head)')

    if parents:
        pendingphase = max(p.phase() for p in parents)
    else:
        pendingphase = phases.public

    if pendingphase > phases.newcommitphase(ui):
        t += ' (%s)' % phases.phasenames[pendingphase]

    if cleanworkdir:
        # i18n: column positioning for "hg summary"
        ui.status(_('commit: %s\n') % t.strip())
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('commit: %s\n') % t.strip())

    # all ancestors of branch heads - all ancestors of parent = new csets
    new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
                                         bheads))

    if new == 0:
        # i18n: column positioning for "hg summary"
        ui.status(_('update: (current)\n'))
    elif pnode not in bheads:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets (update)\n') % new)
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
                 (new, len(bheads)))

    t = []
    draft = len(repo.revs('draft()'))
    if draft:
        t.append(_('%d draft') % draft)
    secret = len(repo.revs('secret()'))
    if secret:
        t.append(_('%d secret') % secret)

    if draft or secret:
        ui.status(_('phases: %s\n') % ', '.join(t))

    cmdutil.summaryhooks(ui, repo)

    if opts.get('remote'):
        needsincoming, needsoutgoing = True, True
    else:
        needsincoming, needsoutgoing = False, False
        for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
            if i:
                needsincoming = True
            if o:
                needsoutgoing = True
        if not needsincoming and not needsoutgoing:
            return

    def getincoming():
        source, branches = hg.parseurl(ui.expandpath('default'))
        sbranch = branches[0]
        try:
            other = hg.peer(repo, {}, source)
        except error.RepoError:
            if opts.get('remote'):
                raise
            return source, sbranch, None, None, None
        revs, checkout = hg.addbranchrevs(repo, other, branches, None)
        if revs:
            revs = [other.lookup(rev) for rev in revs]
        ui.debug('comparing with %s\n' % util.hidepassword(source))
        repo.ui.pushbuffer()
        commoninc = discovery.findcommonincoming(repo, other, heads=revs)
        repo.ui.popbuffer()
        return source, sbranch, other, commoninc, commoninc[1]

    if needsincoming:
        source, sbranch, sother, commoninc, incoming = getincoming()
    else:
        source = sbranch = sother = commoninc = incoming = None

    def getoutgoing():
        dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
        dbranch = branches[0]
        revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
        if source != dest:
            try:
                dother = hg.peer(repo, {}, dest)
            except error.RepoError:
                if opts.get('remote'):
                    raise
                return dest, dbranch, None, None
            ui.debug('comparing with %s\n' % util.hidepassword(dest))
        elif sother is None:
            # there is no explicit destination peer, but source one is invalid
            return dest, dbranch, None, None
        else:
            dother = sother
        if (source != dest or (sbranch is not None and sbranch != dbranch)):
            common = None
        else:
            common = commoninc
        if revs:
            revs = [repo.lookup(rev) for rev in revs]
        repo.ui.pushbuffer()
        outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
                                                commoninc=common)
        repo.ui.popbuffer()
        return dest, dbranch, dother, outgoing

    if needsoutgoing:
        dest, dbranch, dother, outgoing = getoutgoing()
    else:
        dest = dbranch = dother = outgoing = None

    if opts.get('remote'):
        t = []
        if incoming:
            t.append(_('1 or more incoming'))
        o = outgoing.missing
        if o:
            t.append(_('%d outgoing') % len(o))
        other = dother or sother
        if 'bookmarks' in other.listkeys('namespaces'):
            counts = bookmarks.summary(repo, other)
            if counts[0] > 0:
                t.append(_('%d incoming bookmarks') % counts[0])
            if counts[1] > 0:
                t.append(_('%d outgoing bookmarks') % counts[1])

        if t:
            # i18n: column positioning for "hg summary"
            ui.write(_('remote: %s\n') % (', '.join(t)))
        else:
            # i18n: column positioning for "hg summary"
            ui.status(_('remote: (synced)\n'))

    cmdutil.summaryremotehooks(ui, repo, opts,
                               ((source, sbranch, sother, commoninc),
                                (dest, dbranch, dother, outgoing)))

@command('tag',
    [('f', 'force', None, _('force tag')),
    ('l', 'local', None, _('make the tag local')),
    ('r', 'rev', '', _('revision to tag'), _('REV')),
    ('', 'remove', None, _('remove a tag')),
    # -l/--local is already there, commitopts cannot be used
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('m', 'message', '', _('use text as commit message'), _('TEXT')),
    ] + commitopts2,
    _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
def tag(ui, repo, name1, *names, **opts):
    """add one or more tags for the current or given revision

    Name a particular revision using <name>.

    Tags are used to name particular revisions of the repository and are
    very useful to compare different revisions, to go back to significant
    earlier versions or to mark branch points as releases, etc. Changing
    an existing tag is normally disallowed; use -f/--force to override.

    If no revision is given, the parent of the working directory is
    used.

    To facilitate version control, distribution, and merging of tags,
    they are stored as a file named ".hgtags" which is managed similarly
    to other project files and can be hand-edited if necessary. This
    also means that tagging creates a new commit. The file
    ".hg/localtags" is used for local tags (not shared among
    repositories).

    Tag commits are usually made at the head of a branch. If the parent
    of the working directory is not a branch head, :hg:`tag` aborts; use
    -f/--force to force the tag commit to be based on a non-head
    changeset.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Since tag names have priority over branch names during revision
    lookup, using an existing branch name as a tag name is discouraged.

    Returns 0 on success.
    """
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        rev_ = "."
        names = [t.strip() for t in (name1,) + names]
        if len(names) != len(set(names)):
            raise error.Abort(_('tag names must be unique'))
        for n in names:
            scmutil.checknewlabel(repo, n, 'tag')
            if not n:
                raise error.Abort(_('tag names cannot consist entirely of '
                                    'whitespace'))
        if opts.get('rev') and opts.get('remove'):
            raise error.Abort(_("--rev and --remove are incompatible"))
        if opts.get('rev'):
            rev_ = opts['rev']
        message = opts.get('message')
        if opts.get('remove'):
            if opts.get('local'):
                expectedtype = 'local'
            else:
                expectedtype = 'global'

            for n in names:
                if not repo.tagtype(n):
                    raise error.Abort(_("tag '%s' does not exist") % n)
                if repo.tagtype(n) != expectedtype:
                    if expectedtype == 'global':
                        raise error.Abort(_("tag '%s' is not a global tag") % n)
                    else:
                        raise error.Abort(_("tag '%s' is not a local tag") % n)
            rev_ = 'null'
            if not message:
                # we don't translate commit messages
                message = 'Removed tag %s' % ', '.join(names)
        elif not opts.get('force'):
            for n in names:
                if n in repo.tags():
                    raise error.Abort(_("tag '%s' already exists "
                                        "(use -f to force)") % n)
        if not opts.get('local'):
            p1, p2 = repo.dirstate.parents()
            if p2 != nullid:
                raise error.Abort(_('uncommitted merge'))
            bheads = repo.branchheads()
            if not opts.get('force') and bheads and p1 not in bheads:
                raise error.Abort(_('not at a branch head (use -f to force)'))
        r = scmutil.revsingle(repo, rev_).node()

        if not message:
            # we don't translate commit messages
            message = ('Added tag %s for changeset %s' %
                       (', '.join(names), short(r)))

        date = opts.get('date')
        if date:
            date = util.parsedate(date)

        if opts.get('remove'):
            editform = 'tag.remove'
        else:
            editform = 'tag.add'
        editor = cmdutil.getcommiteditor(editform=editform, **opts)

        # don't allow tagging the null rev
        if (not opts.get('remove') and
            scmutil.revsingle(repo, rev_).rev() == nullrev):
            raise error.Abort(_("cannot tag null revision"))

        repo.tag(names, r, message, opts.get('local'), opts.get('user'), date,
                 editor=editor)
    finally:
        release(lock, wlock)

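# A hypothetical session sketching the tag behaviors implemented above
# (tag names and revision numbers are illustrative only):
#
#   $ hg tag -r 42 v1.0               # commit an .hgtags entry for rev 42
#   $ hg tag -l wip                   # local tag, stored in .hg/localtags
#   $ hg tag --remove v1.0            # commit removal of the global tag
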
@command('tags', formatteropts, '')
def tags(ui, repo, **opts):
    """list repository tags

    This lists both regular and local tags. When the -v/--verbose
    switch is used, a third column "local" is printed for local tags.

    Returns 0 on success.
    """

    fm = ui.formatter('tags', opts)
    hexfunc = fm.hexfunc
    tagtype = ""

    for t, n in reversed(repo.tagslist()):
        hn = hexfunc(n)
        label = 'tags.normal'
        tagtype = ''
        if repo.tagtype(t) == 'local':
            label = 'tags.local'
            tagtype = 'local'

        fm.startitem()
        fm.write('tag', '%s', t, label=label)
        fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt,
                     repo.changelog.rev(n), hn, label=label)
        fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
                     tagtype, label=label)
        fm.plain('\n')
    fm.end()

@command('tip',
    [('p', 'patch', None, _('show patch')),
    ('g', 'git', None, _('use git extended diff format')),
    ] + templateopts,
    _('[-p] [-g]'))
def tip(ui, repo, **opts):
    """show the tip revision (DEPRECATED)

    The tip revision (usually just called the tip) is the changeset
    most recently added to the repository (and therefore the most
    recently changed head).

    If you have just made a commit, that commit will be the tip. If
    you have just pulled changes from another repository, the tip of
    that repository becomes the current tip. The "tip" tag is special
    and cannot be renamed or assigned to a different changeset.

    This command is deprecated, please use :hg:`heads` instead.

    Returns 0 on success.
    """
    displayer = cmdutil.show_changeset(ui, repo, opts)
    displayer.show(repo['tip'])
    displayer.close()

@command('unbundle',
    [('u', 'update', None,
     _('update to new branch head if changesets were unbundled'))],
    _('[-u] FILE...'))
def unbundle(ui, repo, fname1, *fnames, **opts):
    """apply one or more changegroup files

    Apply one or more compressed changegroup files generated by the
    bundle command.

    Returns 0 on success, 1 if an update has unresolved files.
    """
    fnames = (fname1,) + fnames

    lock = repo.lock()
    try:
        for fname in fnames:
            f = hg.openpath(ui, fname)
            gen = exchange.readbundle(ui, f, fname)
            if isinstance(gen, bundle2.unbundle20):
                tr = repo.transaction('unbundle')
                try:
                    op = bundle2.applybundle(repo, gen, tr, source='unbundle',
                                             url='bundle:' + fname)
                    tr.close()
                except error.BundleUnknownFeatureError as exc:
                    raise error.Abort(_('%s: unknown bundle feature, %s')
                                      % (fname, exc),
                                      hint=_("see https://mercurial-scm.org/"
                                             "wiki/BundleFeature for more "
                                             "information"))
                finally:
                    if tr:
                        tr.release()
                changes = [r.get('return', 0)
                           for r in op.records['changegroup']]
                modheads = changegroup.combineresults(changes)
            elif isinstance(gen, streamclone.streamcloneapplier):
                raise error.Abort(
                        _('packed bundles cannot be applied with '
                          '"hg unbundle"'),
                        hint=_('use "hg debugapplystreamclonebundle"'))
            else:
                modheads = gen.apply(repo, 'unbundle', 'bundle:' + fname)
    finally:
        lock.release()

    return postincoming(ui, repo, modheads, opts.get('update'), None)

@command('^update|up|checkout|co',
    [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
    ('c', 'check', None,
     _('update across branches if no uncommitted changes')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revision'), _('REV'))
    ] + mergetoolopts,
    _('[-c] [-C] [-d DATE] [[-r] REV]'))
def update(ui, repo, node=None, rev=None, clean=False, date=None, check=False,
           tool=None):
    """update working directory (or switch revisions)

    Update the repository's working directory to the specified
    changeset. If no changeset is specified, update to the tip of the
    current named branch and move the active bookmark (see :hg:`help
    bookmarks`).

    Update sets the working directory's parent revision to the specified
    changeset (see :hg:`help parents`).

    If the changeset is not a descendant or ancestor of the working
    directory's parent, the update is aborted. With the -c/--check
    option, the working directory is checked for uncommitted changes; if
    none are found, the working directory is updated to the specified
    changeset.

    .. container:: verbose

      The following rules apply when the working directory contains
      uncommitted changes:

      1. If neither -c/--check nor -C/--clean is specified, and if
         the requested changeset is an ancestor or descendant of
6845 6844 the working directory's parent, the uncommitted changes
6846 6845 are merged into the requested changeset and the merged
6847 6846 result is left uncommitted. If the requested changeset is
6848 6847 not an ancestor or descendant (that is, it is on another
6849 6848 branch), the update is aborted and the uncommitted changes
6850 6849 are preserved.
6851 6850
6852 6851 2. With the -c/--check option, the update is aborted and the
6853 6852 uncommitted changes are preserved.
6854 6853
6855 6854 3. With the -C/--clean option, uncommitted changes are discarded and
6856 6855 the working directory is updated to the requested changeset.
6857 6856
6858 6857 To cancel an uncommitted merge (and lose your changes), use
6859 6858 :hg:`update --clean .`.
6860 6859
6861 6860 Use null as the changeset to remove the working directory (like
6862 6861 :hg:`clone -U`).
6863 6862
6864 6863 If you want to revert just one file to an older revision, use
6865 6864 :hg:`revert [-r REV] NAME`.
6866 6865
6867 6866 See :hg:`help dates` for a list of formats valid for -d/--date.
6868 6867
6869 6868 Returns 0 on success, 1 if there are unresolved files.
6870 6869 """
6871 6870 movemarkfrom = None
6872 6871 if rev and node:
6873 6872 raise error.Abort(_("please specify just one revision"))
6874 6873
6875 6874 if rev is None or rev == '':
6876 6875 rev = node
6877 6876
6878 6877 wlock = repo.wlock()
6879 6878 try:
6880 6879 cmdutil.clearunfinished(repo)
6881 6880
6882 6881 if date:
6883 6882 if rev is not None:
6884 6883 raise error.Abort(_("you can't specify a revision and a date"))
6885 6884 rev = cmdutil.finddate(ui, repo, date)
6886 6885
6887 6886 # if we defined a bookmark, we have to remember the original name
6888 6887 brev = rev
6889 6888 rev = scmutil.revsingle(repo, rev, rev).rev()
6890 6889
6891 6890 if check and clean:
6892 6891 raise error.Abort(_("cannot specify both -c/--check and -C/--clean")
6893 6892 )
6894 6893
6895 6894 if check:
6896 6895 cmdutil.bailifchanged(repo, merge=False)
6897 6896 if rev is None:
6898 6897 updata = destutil.destupdate(repo, clean=clean, check=check)
6899 6898 rev, movemarkfrom, brev = updata
6900 6899
6901 6900 repo.ui.setconfig('ui', 'forcemerge', tool, 'update')
6902 6901
6903 6902 if clean:
6904 6903 ret = hg.clean(repo, rev)
6905 6904 else:
6906 6905 ret = hg.update(repo, rev)
6907 6906
6908 6907 if not ret and movemarkfrom:
6909 6908 if movemarkfrom == repo['.'].node():
6910 6909 pass # no-op update
6911 6910 elif bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
6912 6911 ui.status(_("updating bookmark %s\n") % repo._activebookmark)
6913 6912 else:
6914 6913 # this can happen with a non-linear update
6915 6914 ui.status(_("(leaving bookmark %s)\n") %
6916 6915 repo._activebookmark)
6917 6916 bookmarks.deactivate(repo)
6918 6917 elif brev in repo._bookmarks:
6919 6918 bookmarks.activate(repo, brev)
6920 6919 ui.status(_("(activating bookmark %s)\n") % brev)
6921 6920 elif brev:
6922 6921 if repo._activebookmark:
6923 6922 ui.status(_("(leaving bookmark %s)\n") %
6924 6923 repo._activebookmark)
6925 6924 bookmarks.deactivate(repo)
6926 6925 finally:
6927 6926 wlock.release()
6928 6927
6929 6928 return ret
6930 6929
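The three rules in the docstring above can be sketched as a small decision helper. This is a hypothetical illustration, not part of Mercurial's API; the function name and return values are invented:

```python
def update_outcome(dirty, clean=False, check=False, linear=True):
    """Sketch of the -c/--check and -C/--clean rules for hg update.

    dirty:  the working directory has uncommitted changes
    linear: the target is an ancestor/descendant of the working dir parent
    Returns 'update', 'merge-uncommitted', or 'abort'.
    """
    if clean and check:
        raise ValueError("cannot specify both -c/--check and -C/--clean")
    if clean:
        return 'update'             # rule 3: discard changes, always update
    if not dirty:
        return 'update'
    if check:
        return 'abort'              # rule 2: abort, preserve changes
    # rule 1: merge across a linear update, abort across branches
    return 'merge-uncommitted' if linear else 'abort'
```

For example, a dirty working directory updated to a changeset on another branch aborts unless `--clean` is given.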
6931 6930 @command('verify', [])
6932 6931 def verify(ui, repo):
6933 6932 """verify the integrity of the repository
6934 6933
6935 6934 Verify the integrity of the current repository.
6936 6935
6937 6936 This will perform an extensive check of the repository's
6938 6937 integrity, validating the hashes and checksums of each entry in
6939 6938 the changelog, manifest, and tracked files, as well as the
6940 6939 integrity of their crosslinks and indices.
6941 6940
6942 6941 Please see https://mercurial-scm.org/wiki/RepositoryCorruption
6943 6942 for more information about recovery from corruption of the
6944 6943 repository.
6945 6944
6946 6945 Returns 0 on success, 1 if errors are encountered.
6947 6946 """
6948 6947 return hg.verify(repo)
6949 6948
6950 6949 @command('version', [], norepo=True)
6951 6950 def version_(ui):
6952 6951 """output version and copyright information"""
6953 6952 ui.write(_("Mercurial Distributed SCM (version %s)\n")
6954 6953 % util.version())
6955 6954 ui.status(_(
6956 6955 "(see https://mercurial-scm.org for more information)\n"
6957 6956 "\nCopyright (C) 2005-2015 Matt Mackall and others\n"
6958 6957 "This is free software; see the source for copying conditions. "
6959 6958 "There is NO\nwarranty; "
6960 6959 "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
6961 6960 ))
6962 6961
6963 6962 ui.note(_("\nEnabled extensions:\n\n"))
6964 6963 if ui.verbose:
6965 6964 # format names and versions into columns
6966 6965 names = []
6967 6966 vers = []
6968 6967 for name, module in extensions.extensions():
6969 6968 names.append(name)
6970 6969 vers.append(extensions.moduleversion(module))
6971 6970 if names:
6972 6971 maxnamelen = max(len(n) for n in names)
6973 6972 for i, name in enumerate(names):
6974 6973 ui.write(" %-*s %s\n" % (maxnamelen, name, vers[i]))
@@ -1,906 +1,906 b''
1 1 # hg.py - repository classes for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 from __future__ import absolute_import
10 10
11 11 import errno
12 12 import os
13 13 import shutil
14 14
15 15 from .i18n import _
16 16 from .node import nullid
17 17
18 18 from . import (
19 19 bookmarks,
20 20 bundlerepo,
21 21 cmdutil,
22 22 discovery,
23 23 error,
24 24 exchange,
25 25 extensions,
26 26 httppeer,
27 27 localrepo,
28 28 lock,
29 29 merge as mergemod,
30 30 node,
31 31 phases,
32 32 repoview,
33 33 scmutil,
34 34 sshpeer,
35 35 statichttprepo,
36 36 ui as uimod,
37 37 unionrepo,
38 38 url,
39 39 util,
40 40 verify as verifymod,
41 41 )
42 42
43 43 release = lock.release
44 44
45 45 def _local(path):
46 46 path = util.expandpath(util.urllocalpath(path))
47 47 return (os.path.isfile(path) and bundlerepo or localrepo)
48 48
49 49 def addbranchrevs(lrepo, other, branches, revs):
50 50 peer = other.peer() # a courtesy to callers using a localrepo for other
51 51 hashbranch, branches = branches
52 52 if not hashbranch and not branches:
53 53 x = revs or None
54 54 if util.safehasattr(revs, 'first'):
55 55 y = revs.first()
56 56 elif revs:
57 57 y = revs[0]
58 58 else:
59 59 y = None
60 60 return x, y
61 61 if revs:
62 62 revs = list(revs)
63 63 else:
64 64 revs = []
65 65
66 66 if not peer.capable('branchmap'):
67 67 if branches:
68 68 raise error.Abort(_("remote branch lookup not supported"))
69 69 revs.append(hashbranch)
70 70 return revs, revs[0]
71 71 branchmap = peer.branchmap()
72 72
73 73 def primary(branch):
74 74 if branch == '.':
75 75 if not lrepo:
76 76 raise error.Abort(_("dirstate branch not accessible"))
77 77 branch = lrepo.dirstate.branch()
78 78 if branch in branchmap:
79 79 revs.extend(node.hex(r) for r in reversed(branchmap[branch]))
80 80 return True
81 81 else:
82 82 return False
83 83
84 84 for branch in branches:
85 85 if not primary(branch):
86 86 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
87 87 if hashbranch:
88 88 if not primary(hashbranch):
89 89 revs.append(hashbranch)
90 90 return revs, revs[0]
91 91
92 92 def parseurl(path, branches=None):
93 93 '''parse url#branch, returning (url, (branch, branches))'''
94 94
95 95 u = util.url(path)
96 96 branch = None
97 97 if u.fragment:
98 98 branch = u.fragment
99 99 u.fragment = None
100 100 return str(u), (branch, branches or [])
101 101
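A rough stdlib analogue of `parseurl` above, using `urllib.parse` in place of Mercurial's own `util.url`; a sketch of the `url#branch` split, not the real implementation:

```python
from urllib.parse import urlsplit, urlunsplit

def parseurl_sketch(path, branches=None):
    """Split 'url#branch' into (url, (branch, branches)), like hg.parseurl."""
    parts = urlsplit(path)
    branch = parts.fragment or None
    url = urlunsplit(parts._replace(fragment=''))
    return url, (branch, branches or [])
```

A plain local path without a fragment passes through unchanged, with `branch` set to `None`.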
102 102 schemes = {
103 103 'bundle': bundlerepo,
104 104 'union': unionrepo,
105 105 'file': _local,
106 106 'http': httppeer,
107 107 'https': httppeer,
108 108 'ssh': sshpeer,
109 109 'static-http': statichttprepo,
110 110 }
111 111
112 112 def _peerlookup(path):
113 113 u = util.url(path)
114 114 scheme = u.scheme or 'file'
115 115 thing = schemes.get(scheme) or schemes['file']
116 116 try:
117 117 return thing(path)
118 118 except TypeError:
119 119 # we can't test callable(thing) because 'thing' can be an unloaded
120 120 # module that implements __call__
121 121 if not util.safehasattr(thing, 'instance'):
122 122 raise
123 123 return thing
124 124
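The dispatch in `_peerlookup` amounts to a table lookup keyed on the URL scheme, with unknown or missing schemes falling back to the `'file'` handler. A minimal illustration, with placeholder handler names standing in for the repository modules:

```python
from urllib.parse import urlsplit

# placeholder names; the real table maps schemes to repository modules
HANDLERS = {'file': 'localrepo', 'http': 'httppeer', 'https': 'httppeer',
            'ssh': 'sshpeer', 'bundle': 'bundlerepo'}

def pick_handler(path):
    scheme = urlsplit(path).scheme or 'file'
    # unknown schemes fall back to the 'file' handler, as in _peerlookup
    return HANDLERS.get(scheme) or HANDLERS['file']
```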
125 125 def islocal(repo):
126 126 '''return true if repo (or path pointing to repo) is local'''
127 127 if isinstance(repo, str):
128 128 try:
129 129 return _peerlookup(repo).islocal(repo)
130 130 except AttributeError:
131 131 return False
132 132 return repo.local()
133 133
134 134 def openpath(ui, path):
135 135 '''open path with open if local, url.open if remote'''
136 136 pathurl = util.url(path, parsequery=False, parsefragment=False)
137 137 if pathurl.islocal():
138 138 return util.posixfile(pathurl.localpath(), 'rb')
139 139 else:
140 140 return url.open(ui, path)
141 141
142 142 # a list of (ui, repo) functions called for wire peer initialization
143 143 wirepeersetupfuncs = []
144 144
145 145 def _peerorrepo(ui, path, create=False):
146 146 """return a repository object for the specified path"""
147 147 obj = _peerlookup(path).instance(ui, path, create)
148 148 ui = getattr(obj, "ui", ui)
149 149 for name, module in extensions.extensions(ui):
150 150 hook = getattr(module, 'reposetup', None)
151 151 if hook:
152 152 hook(ui, obj)
153 153 if not obj.local():
154 154 for f in wirepeersetupfuncs:
155 155 f(ui, obj)
156 156 return obj
157 157
158 158 def repository(ui, path='', create=False):
159 159 """return a repository object for the specified path"""
160 160 peer = _peerorrepo(ui, path, create)
161 161 repo = peer.local()
162 162 if not repo:
163 163 raise error.Abort(_("repository '%s' is not local") %
164 164 (path or peer.url()))
165 165 return repo.filtered('visible')
166 166
167 167 def peer(uiorrepo, opts, path, create=False):
168 168 '''return a repository peer for the specified path'''
169 169 rui = remoteui(uiorrepo, opts)
170 170 return _peerorrepo(rui, path, create).peer()
171 171
172 172 def defaultdest(source):
173 173 '''return default destination of clone if none is given
174 174
175 175 >>> defaultdest('foo')
176 176 'foo'
177 177 >>> defaultdest('/foo/bar')
178 178 'bar'
179 179 >>> defaultdest('/')
180 180 ''
181 181 >>> defaultdest('')
182 182 ''
183 183 >>> defaultdest('http://example.org/')
184 184 ''
185 185 >>> defaultdest('http://example.org/foo/')
186 186 'foo'
187 187 '''
188 188 path = util.url(source).path
189 189 if not path:
190 190 return ''
191 191 return os.path.basename(os.path.normpath(path))
192 192
193 193 def share(ui, source, dest=None, update=True, bookmarks=True):
194 194 '''create a shared repository'''
195 195
196 196 if not islocal(source):
197 197 raise error.Abort(_('can only share local repositories'))
198 198
199 199 if not dest:
200 200 dest = defaultdest(source)
201 201 else:
202 202 dest = ui.expandpath(dest)
203 203
204 204 if isinstance(source, str):
205 205 origsource = ui.expandpath(source)
206 206 source, branches = parseurl(origsource)
207 207 srcrepo = repository(ui, source)
208 208 rev, checkout = addbranchrevs(srcrepo, srcrepo, branches, None)
209 209 else:
210 210 srcrepo = source.local()
211 211 origsource = source = srcrepo.url()
212 212 checkout = None
213 213
214 214 sharedpath = srcrepo.sharedpath # if our source is already sharing
215 215
216 216 destwvfs = scmutil.vfs(dest, realpath=True)
217 217 destvfs = scmutil.vfs(os.path.join(destwvfs.base, '.hg'), realpath=True)
218 218
219 219 if destvfs.lexists():
220 220 raise error.Abort(_('destination already exists'))
221 221
222 222 if not destwvfs.isdir():
223 223 destwvfs.mkdir()
224 224 destvfs.makedir()
225 225
226 226 requirements = ''
227 227 try:
228 228 requirements = srcrepo.vfs.read('requires')
229 229 except IOError as inst:
230 230 if inst.errno != errno.ENOENT:
231 231 raise
232 232
233 233 requirements += 'shared\n'
234 234 destvfs.write('requires', requirements)
235 235 destvfs.write('sharedpath', sharedpath)
236 236
237 237 r = repository(ui, destwvfs.base)
238 238
239 239 default = srcrepo.ui.config('paths', 'default')
240 240 if default:
241 241 fp = r.vfs("hgrc", "w", text=True)
242 242 fp.write("[paths]\n")
243 243 fp.write("default = %s\n" % default)
244 244 fp.close()
245 245
246 246 if update:
247 247 r.ui.status(_("updating working directory\n"))
248 248 if update is not True:
249 249 checkout = update
250 250 for test in (checkout, 'default', 'tip'):
251 251 if test is None:
252 252 continue
253 253 try:
254 254 uprev = r.lookup(test)
255 255 break
256 256 except error.RepoLookupError:
257 257 continue
258 258 _update(r, uprev)
259 259
260 260 if bookmarks:
261 261 fp = r.vfs('shared', 'w')
262 262 fp.write('bookmarks\n')
263 263 fp.close()
264 264
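`share` above creates a skeleton `.hg` whose `requires` file gains a `shared` entry and whose `sharedpath` file points at the source store. A filesystem sketch of just that bookkeeping, with a hypothetical helper name and plain `open` in place of Mercurial's vfs layer:

```python
import os

def write_share_files(dest_hg, sharedpath, requirements=''):
    """Mimic the two metadata files share() writes into dest/.hg."""
    os.makedirs(dest_hg, exist_ok=True)
    # the source's requirements, plus the 'shared' marker
    with open(os.path.join(dest_hg, 'requires'), 'w') as f:
        f.write(requirements + 'shared\n')
    # path to the store this repository borrows
    with open(os.path.join(dest_hg, 'sharedpath'), 'w') as f:
        f.write(sharedpath)
```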
265 265 def copystore(ui, srcrepo, destpath):
266 266 '''copy files from store of srcrepo in destpath
267 267
268 268 returns destlock
269 269 '''
270 270 destlock = None
271 271 try:
272 272 hardlink = None
273 273 num = 0
274 274 closetopic = [None]
275 275 def prog(topic, pos):
276 276 if pos is None:
277 277 closetopic[0] = topic
278 278 else:
279 279 ui.progress(topic, pos + num)
280 280 srcpublishing = srcrepo.publishing()
281 281 srcvfs = scmutil.vfs(srcrepo.sharedpath)
282 282 dstvfs = scmutil.vfs(destpath)
283 283 for f in srcrepo.store.copylist():
284 284 if srcpublishing and f.endswith('phaseroots'):
285 285 continue
286 286 dstbase = os.path.dirname(f)
287 287 if dstbase and not dstvfs.exists(dstbase):
288 288 dstvfs.mkdir(dstbase)
289 289 if srcvfs.exists(f):
290 290 if f.endswith('data'):
291 291 # 'dstbase' may be empty (e.g. revlog format 0)
292 292 lockfile = os.path.join(dstbase, "lock")
293 293 # lock to avoid premature writing to the target
294 294 destlock = lock.lock(dstvfs, lockfile)
295 295 hardlink, n = util.copyfiles(srcvfs.join(f), dstvfs.join(f),
296 296 hardlink, progress=prog)
297 297 num += n
298 298 if hardlink:
299 299 ui.debug("linked %d files\n" % num)
300 300 if closetopic[0]:
301 301 ui.progress(closetopic[0], None)
302 302 else:
303 303 ui.debug("copied %d files\n" % num)
304 304 if closetopic[0]:
305 305 ui.progress(closetopic[0], None)
306 306 return destlock
307 307 except: # re-raises
308 308 release(destlock)
309 309 raise
310 310
311 311 def clonewithshare(ui, peeropts, sharepath, source, srcpeer, dest, pull=False,
312 312 rev=None, update=True, stream=False):
313 313 """Perform a clone using a shared repo.
314 314
315 315 The store for the repository will be located at <sharepath>/.hg. The
316 316 specified revisions will be cloned or pulled from "source". A shared repo
317 317 will be created at "dest" and a working copy will be created if "update" is
318 318 True.
319 319 """
320 320 revs = None
321 321 if rev:
322 322 if not srcpeer.capable('lookup'):
323 323 raise error.Abort(_("src repository does not support "
324 324 "revision lookup and so doesn't "
325 325 "support clone by revision"))
326 326 revs = [srcpeer.lookup(r) for r in rev]
327 327
328 328 basename = os.path.basename(sharepath)
329 329
330 330 if os.path.exists(sharepath):
331 331 ui.status(_('(sharing from existing pooled repository %s)\n') %
332 332 basename)
333 333 else:
334 334 ui.status(_('(sharing from new pooled repository %s)\n') % basename)
335 335 # Always use pull mode because hardlinks in share mode don't work well.
336 336 # Never update because working copies aren't necessary in share mode.
337 337 clone(ui, peeropts, source, dest=sharepath, pull=True,
338 338 rev=rev, update=False, stream=stream)
339 339
340 340 sharerepo = repository(ui, path=sharepath)
341 341 share(ui, sharerepo, dest=dest, update=update, bookmarks=False)
342 342
343 343 # We need to perform a pull against the dest repo to fetch bookmarks
344 344 # and other non-store data that isn't shared by default. In the case of
345 345 # a non-existing shared repo, this means we pull from the remote twice. This
346 346 # is a bit weird. But at the time it was implemented, there wasn't an easy
347 347 # way to pull just non-changegroup data.
348 348 destrepo = repository(ui, path=dest)
349 349 exchange.pull(destrepo, srcpeer, heads=revs)
350 350
351 351 return srcpeer, peer(ui, peeropts, dest)
352 352
353 353 def clone(ui, peeropts, source, dest=None, pull=False, rev=None,
354 354 update=True, stream=False, branch=None, shareopts=None):
355 355 """Make a copy of an existing repository.
356 356
357 357 Create a copy of an existing repository in a new directory. The
358 358 source and destination are URLs, as passed to the repository
359 359 function. Returns a pair of repository peers, the source and
360 360 newly created destination.
361 361
362 362 The location of the source is added to the new repository's
363 363 .hg/hgrc file, as the default to be used for future pulls and
364 364 pushes.
365 365
366 366 If an exception is raised, the partly cloned/updated destination
367 367 repository will be deleted.
368 368
369 369 Arguments:
370 370
371 371 source: repository object or URL
372 372
373 373 dest: URL of destination repository to create (defaults to base
374 374 name of source repository)
375 375
376 376 pull: always pull from source repository, even in local case or if the
377 377 server prefers streaming
378 378
379 379 stream: stream raw data uncompressed from repository (fast over
380 380 LAN, slow over WAN)
381 381
382 382 rev: revision to clone up to (implies pull=True)
383 383
384 384 update: update working directory after clone completes, if
385 385 destination is local repository (True means update to default rev,
386 386 anything else is treated as a revision)
387 387
388 388 branch: branches to clone
389 389
390 390 shareopts: dict of options to control auto sharing behavior. The "pool" key
391 391 activates auto sharing mode and defines the directory for stores. The
392 392 "mode" key determines how to construct the directory name of the shared
393 393 repository. "identity" means the name is derived from the node of the first
394 394 changeset in the repository. "remote" means the name is derived from the
395 395 remote's path/URL. Defaults to "identity."
396 396 """
397 397
398 398 if isinstance(source, str):
399 399 origsource = ui.expandpath(source)
400 400 source, branch = parseurl(origsource, branch)
401 401 srcpeer = peer(ui, peeropts, source)
402 402 else:
403 403 srcpeer = source.peer() # in case we were called with a localrepo
404 404 branch = (None, branch or [])
405 405 origsource = source = srcpeer.url()
406 406 rev, checkout = addbranchrevs(srcpeer, srcpeer, branch, rev)
407 407
408 408 if dest is None:
409 409 dest = defaultdest(source)
410 410 if dest:
411 411 ui.status(_("destination directory: %s\n") % dest)
412 412 else:
413 413 dest = ui.expandpath(dest)
414 414
415 415 dest = util.urllocalpath(dest)
416 416 source = util.urllocalpath(source)
417 417
418 418 if not dest:
419 419 raise error.Abort(_("empty destination path is not valid"))
420 420
421 421 destvfs = scmutil.vfs(dest, expandpath=True)
422 422 if destvfs.lexists():
423 423 if not destvfs.isdir():
424 424 raise error.Abort(_("destination '%s' already exists") % dest)
425 425 elif destvfs.listdir():
426 426 raise error.Abort(_("destination '%s' is not empty") % dest)
427 427
428 428 shareopts = shareopts or {}
429 429 sharepool = shareopts.get('pool')
430 430 sharenamemode = shareopts.get('mode')
431 431 if sharepool and islocal(dest):
432 432 sharepath = None
433 433 if sharenamemode == 'identity':
434 434 # Resolve the name from the initial changeset in the remote
435 435 # repository. This returns nullid when the remote is empty. It
436 436 # raises RepoLookupError if revision 0 is filtered or otherwise
437 437 # not available. If we fail to resolve, sharing is not enabled.
438 438 try:
439 439 rootnode = srcpeer.lookup('0')
440 440 if rootnode != node.nullid:
441 441 sharepath = os.path.join(sharepool, node.hex(rootnode))
442 442 else:
443 443 ui.status(_('(not using pooled storage: '
444 444 'remote appears to be empty)\n'))
445 445 except error.RepoLookupError:
446 446 ui.status(_('(not using pooled storage: '
447 447 'unable to resolve identity of remote)\n'))
448 448 elif sharenamemode == 'remote':
449 449 sharepath = os.path.join(sharepool, util.sha1(source).hexdigest())
450 450 else:
451 451 raise error.Abort('unknown share naming mode: %s' % sharenamemode)
452 452
453 453 if sharepath:
454 454 return clonewithshare(ui, peeropts, sharepath, source, srcpeer,
455 455 dest, pull=pull, rev=rev, update=update,
456 456 stream=stream)
457 457
458 458 srclock = destlock = cleandir = None
459 459 srcrepo = srcpeer.local()
460 460 try:
461 461 abspath = origsource
462 462 if islocal(origsource):
463 463 abspath = os.path.abspath(util.urllocalpath(origsource))
464 464
465 465 if islocal(dest):
466 466 cleandir = dest
467 467
468 468 copy = False
469 469 if (srcrepo and srcrepo.cancopy() and islocal(dest)
470 470 and not phases.hassecret(srcrepo)):
471 471 copy = not pull and not rev
472 472
473 473 if copy:
474 474 try:
475 475 # we use a lock here because if we race with commit, we
476 476 # can end up with extra data in the cloned revlogs that's
477 477 # not pointed to by changesets, thus causing verify to
478 478 # fail
479 479 srclock = srcrepo.lock(wait=False)
480 480 except error.LockError:
481 481 copy = False
482 482
483 483 if copy:
484 484 srcrepo.hook('preoutgoing', throw=True, source='clone')
485 485 hgdir = os.path.realpath(os.path.join(dest, ".hg"))
486 486 if not os.path.exists(dest):
487 487 os.mkdir(dest)
488 488 else:
489 489 # only clean up directories we create ourselves
490 490 cleandir = hgdir
491 491 try:
492 492 destpath = hgdir
493 493 util.makedir(destpath, notindexed=True)
494 494 except OSError as inst:
495 495 if inst.errno == errno.EEXIST:
496 496 cleandir = None
497 497 raise error.Abort(_("destination '%s' already exists")
498 498 % dest)
499 499 raise
500 500
501 501 destlock = copystore(ui, srcrepo, destpath)
502 502 # copy bookmarks over
503 503 srcbookmarks = srcrepo.join('bookmarks')
504 504 dstbookmarks = os.path.join(destpath, 'bookmarks')
505 505 if os.path.exists(srcbookmarks):
506 506 util.copyfile(srcbookmarks, dstbookmarks)
507 507
508 508 # Recomputing branch cache might be slow on big repos,
509 509 # so just copy it
510 510 def copybranchcache(fname):
511 511 srcbranchcache = srcrepo.join('cache/%s' % fname)
512 512 dstbranchcache = os.path.join(dstcachedir, fname)
513 513 if os.path.exists(srcbranchcache):
514 514 if not os.path.exists(dstcachedir):
515 515 os.mkdir(dstcachedir)
516 516 util.copyfile(srcbranchcache, dstbranchcache)
517 517
518 518 dstcachedir = os.path.join(destpath, 'cache')
519 519 # In local clones we're copying all nodes, not just served
520 520 # ones. Therefore copy all branch caches over.
521 521 copybranchcache('branch2')
522 522 for cachename in repoview.filtertable:
523 523 copybranchcache('branch2-%s' % cachename)
524 524
525 525 # we need to re-init the repo after manually copying the data
526 526 # into it
527 527 destpeer = peer(srcrepo, peeropts, dest)
528 528 srcrepo.hook('outgoing', source='clone',
529 529 node=node.hex(node.nullid))
530 530 else:
531 531 try:
532 532 destpeer = peer(srcrepo or ui, peeropts, dest, create=True)
533 533 # only pass ui when no srcrepo
534 534 except OSError as inst:
535 535 if inst.errno == errno.EEXIST:
536 536 cleandir = None
537 537 raise error.Abort(_("destination '%s' already exists")
538 538 % dest)
539 539 raise
540 540
541 541 revs = None
542 542 if rev:
543 543 if not srcpeer.capable('lookup'):
544 544 raise error.Abort(_("src repository does not support "
545 545 "revision lookup and so doesn't "
546 546 "support clone by revision"))
547 547 revs = [srcpeer.lookup(r) for r in rev]
548 548 checkout = revs[0]
549 549 local = destpeer.local()
550 550 if local:
551 551 if not stream:
552 552 if pull:
553 553 stream = False
554 554 else:
555 555 stream = None
556 556 # internal config: ui.quietbookmarkmove
557 557 quiet = local.ui.backupconfig('ui', 'quietbookmarkmove')
558 558 try:
559 559 local.ui.setconfig(
560 560 'ui', 'quietbookmarkmove', True, 'clone')
561 561 exchange.pull(local, srcpeer, revs,
562 562 streamclonerequested=stream)
563 563 finally:
564 564 local.ui.restoreconfig(quiet)
565 565 elif srcrepo:
566 566 exchange.push(srcrepo, destpeer, revs=revs,
567 567 bookmarks=srcrepo._bookmarks.keys())
568 568 else:
569 569 raise error.Abort(_("clone from remote to remote not supported")
570 570 )
571 571
572 572 cleandir = None
573 573
574 574 destrepo = destpeer.local()
575 575 if destrepo:
576 576 template = uimod.samplehgrcs['cloned']
577 577 fp = destrepo.vfs("hgrc", "w", text=True)
578 578 u = util.url(abspath)
579 579 u.passwd = None
580 580 defaulturl = str(u)
581 581 fp.write(template % defaulturl)
582 582 fp.close()
583 583
584 584 destrepo.ui.setconfig('paths', 'default', defaulturl, 'clone')
585 585
586 586 if update:
587 587 if update is not True:
588 588 checkout = srcpeer.lookup(update)
589 589 uprev = None
590 590 status = None
591 591 if checkout is not None:
592 592 try:
593 593 uprev = destrepo.lookup(checkout)
594 594 except error.RepoLookupError:
595 595 if update is not True:
596 596 try:
597 597 uprev = destrepo.lookup(update)
598 598 except error.RepoLookupError:
599 599 pass
600 600 if uprev is None:
601 601 try:
602 602 uprev = destrepo._bookmarks['@']
603 603 update = '@'
604 604 bn = destrepo[uprev].branch()
605 605 if bn == 'default':
606 606 status = _("updating to bookmark @\n")
607 607 else:
608 608 status = (_("updating to bookmark @ on branch %s\n")
609 609 % bn)
610 610 except KeyError:
611 611 try:
612 612 uprev = destrepo.branchtip('default')
613 613 except error.RepoLookupError:
614 614 uprev = destrepo.lookup('tip')
615 615 if not status:
616 616 bn = destrepo[uprev].branch()
617 617 status = _("updating to branch %s\n") % bn
618 618 destrepo.ui.status(status)
619 619 _update(destrepo, uprev)
620 620 if update in destrepo._bookmarks:
621 621 bookmarks.activate(destrepo, update)
622 622 finally:
623 623 release(srclock, destlock)
624 624 if cleandir is not None:
625 625 shutil.rmtree(cleandir, True)
626 626 if srcpeer is not None:
627 627 srcpeer.close()
628 628 return srcpeer, destpeer
629 629
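For the `"remote"` share-naming mode described in the `clone` docstring, the pooled store directory is named after the SHA-1 of the source URL. A minimal sketch using `hashlib` directly (Mercurial goes through its own `util.sha1` wrapper; the helper name here is invented):

```python
import hashlib
import os

def pooled_sharepath(pool, source):
    """Derive the pooled store path for shareopts mode='remote'."""
    return os.path.join(pool, hashlib.sha1(source.encode()).hexdigest())
```

The derivation is deterministic, so repeated clones of the same URL land in the same pooled store.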
630 630 def _showstats(repo, stats):
631 631 repo.ui.status(_("%d files updated, %d files merged, "
632 632 "%d files removed, %d files unresolved\n") % stats)
633 633
634 634 def updaterepo(repo, node, overwrite):
635 635 """Update the working directory to node.
636 636
637 637 When overwrite is set, changes are clobbered; otherwise they are merged.
638 638
639 639 returns stats (see pydoc mercurial.merge.applyupdates)"""
640 return mergemod.update(repo, node, False, overwrite, None,
640 return mergemod.update(repo, node, False, overwrite,
641 641 labels=['working copy', 'destination'])
642 642
643 643 def update(repo, node):
644 644 """update the working directory to node, merging linear changes"""
645 645 stats = updaterepo(repo, node, False)
646 646 _showstats(repo, stats)
647 647 if stats[3]:
648 648 repo.ui.status(_("use 'hg resolve' to retry unresolved file merges\n"))
649 649 return stats[3] > 0
650 650
651 651 # naming conflict in clone()
652 652 _update = update
653 653
654 654 def clean(repo, node, show_stats=True):
655 655 """forcibly switch the working directory to node, clobbering changes"""
656 656 stats = updaterepo(repo, node, True)
657 657 util.unlinkpath(repo.join('graftstate'), ignoremissing=True)
658 658 if show_stats:
659 659 _showstats(repo, stats)
660 660 return stats[3] > 0
661 661
662 662 def merge(repo, node, force=None, remind=True):
663 663 """Branch merge with node, resolving changes. Return true if any
664 664 unresolved conflicts."""
665 stats = mergemod.update(repo, node, True, force, False)
665 stats = mergemod.update(repo, node, True, force)
666 666 _showstats(repo, stats)
667 667 if stats[3]:
668 668 repo.ui.status(_("use 'hg resolve' to retry unresolved file merges "
669 669 "or 'hg update -C .' to abandon\n"))
670 670 elif remind:
671 671 repo.ui.status(_("(branch merge, don't forget to commit)\n"))
672 672 return stats[3] > 0
673 673
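The `stats` tuple produced by `mergemod.update` and printed by `_showstats` above is `(updated, merged, removed, unresolved)`; a nonzero `stats[3]` is what makes `update`, `clean`, and `merge` return True. A small standalone illustration with hypothetical values:

```python
def show_stats(stats):
    """Format a merge stats tuple and report whether conflicts remain."""
    updated, merged, removed, unresolved = stats
    line = ("%d files updated, %d files merged, "
            "%d files removed, %d files unresolved" % stats)
    return line, unresolved > 0  # True mirrors the commands' return value
```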
674 674 def _incoming(displaychlist, subreporecurse, ui, repo, source,
675 675 opts, buffered=False):
676 676 """
677 677 Helper for incoming / gincoming.
678 678 displaychlist gets called with
679 679 (remoterepo, incomingchangesetlist, displayer) parameters,
680 680 and is supposed to contain only code that can't be unified.
681 681 """
682 682 source, branches = parseurl(ui.expandpath(source), opts.get('branch'))
683 683 other = peer(repo, opts, source)
684 684 ui.status(_('comparing with %s\n') % util.hidepassword(source))
685 685 revs, checkout = addbranchrevs(repo, other, branches, opts.get('rev'))
686 686
687 687 if revs:
688 688 revs = [other.lookup(rev) for rev in revs]
689 689 other, chlist, cleanupfn = bundlerepo.getremotechanges(ui, repo, other,
690 690 revs, opts["bundle"], opts["force"])
691 691 try:
692 692 if not chlist:
693 693 ui.status(_("no changes found\n"))
694 694 return subreporecurse()
695 695
696 696 displayer = cmdutil.show_changeset(ui, other, opts, buffered)
697 697 displaychlist(other, chlist, displayer)
698 698 displayer.close()
699 699 finally:
700 700 cleanupfn()
701 701 subreporecurse()
702 702 return 0 # exit code is zero since we found incoming changes
703 703
704 704 def incoming(ui, repo, source, opts):
705 705 def subreporecurse():
706 706 ret = 1
707 707 if opts.get('subrepos'):
708 708 ctx = repo[None]
709 709 for subpath in sorted(ctx.substate):
710 710 sub = ctx.sub(subpath)
711 711 ret = min(ret, sub.incoming(ui, source, opts))
712 712 return ret
713 713
714 714 def display(other, chlist, displayer):
715 715 limit = cmdutil.loglimit(opts)
716 716 if opts.get('newest_first'):
717 717 chlist.reverse()
718 718 count = 0
719 719 for n in chlist:
720 720 if limit is not None and count >= limit:
721 721 break
722 722 parents = [p for p in other.changelog.parents(n) if p != nullid]
723 723 if opts.get('no_merges') and len(parents) == 2:
724 724 continue
725 725 count += 1
726 726 displayer.show(other[n])
727 727 return _incoming(display, subreporecurse, ui, repo, source, opts)
728 728
729 729 def _outgoing(ui, repo, dest, opts):
730 730 dest = ui.expandpath(dest or 'default-push', dest or 'default')
731 731 dest, branches = parseurl(dest, opts.get('branch'))
732 732 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
733 733 revs, checkout = addbranchrevs(repo, repo, branches, opts.get('rev'))
734 734 if revs:
735 735 revs = [repo.lookup(rev) for rev in scmutil.revrange(repo, revs)]
736 736
737 737 other = peer(repo, opts, dest)
738 738 outgoing = discovery.findcommonoutgoing(repo.unfiltered(), other, revs,
739 739 force=opts.get('force'))
740 740 o = outgoing.missing
741 741 if not o:
742 742 scmutil.nochangesfound(repo.ui, repo, outgoing.excluded)
743 743 return o, other
744 744
745 745 def outgoing(ui, repo, dest, opts):
746 746 def recurse():
747 747 ret = 1
748 748 if opts.get('subrepos'):
749 749 ctx = repo[None]
750 750 for subpath in sorted(ctx.substate):
751 751 sub = ctx.sub(subpath)
752 752 ret = min(ret, sub.outgoing(ui, dest, opts))
753 753 return ret
754 754
755 755 limit = cmdutil.loglimit(opts)
756 756 o, other = _outgoing(ui, repo, dest, opts)
757 757 if not o:
758 758 cmdutil.outgoinghooks(ui, repo, other, opts, o)
759 759 return recurse()
760 760
761 761 if opts.get('newest_first'):
762 762 o.reverse()
763 763 displayer = cmdutil.show_changeset(ui, repo, opts)
764 764 count = 0
765 765 for n in o:
766 766 if limit is not None and count >= limit:
767 767 break
768 768 parents = [p for p in repo.changelog.parents(n) if p != nullid]
769 769 if opts.get('no_merges') and len(parents) == 2:
770 770 continue
771 771 count += 1
772 772 displayer.show(repo[n])
773 773 displayer.close()
774 774 cmdutil.outgoinghooks(ui, repo, other, opts, o)
775 775 recurse()
776 776 return 0 # exit code is zero since we found outgoing changes
777 777
778 778 def verify(repo):
779 779 """verify the consistency of a repository"""
780 780 ret = verifymod.verify(repo)
781 781
782 782 # Broken subrepo references in hidden csets don't seem worth worrying about,
783 783 # since they can't be pushed/pulled, and --hidden can be used if they are a
784 784 # concern.
785 785
786 786 # pathto() is needed for -R case
787 787 revs = repo.revs("filelog(%s)",
788 788 util.pathto(repo.root, repo.getcwd(), '.hgsubstate'))
789 789
790 790 if revs:
791 791 repo.ui.status(_('checking subrepo links\n'))
792 792 for rev in revs:
793 793 ctx = repo[rev]
794 794 try:
795 795 for subpath in ctx.substate:
796 796 ret = ctx.sub(subpath).verify() or ret
797 797 except Exception:
798 798 repo.ui.warn(_('.hgsubstate is corrupt in revision %s\n') %
799 799 node.short(ctx.node()))
800 800
801 801 return ret
802 802
803 803 def remoteui(src, opts):
804 804 'build a remote ui from ui or repo and opts'
805 805 if util.safehasattr(src, 'baseui'): # looks like a repository
806 806 dst = src.baseui.copy() # drop repo-specific config
807 807 src = src.ui # copy target options from repo
808 808 else: # assume it's a global ui object
809 809 dst = src.copy() # keep all global options
810 810
811 811 # copy ssh-specific options
812 812 for o in 'ssh', 'remotecmd':
813 813 v = opts.get(o) or src.config('ui', o)
814 814 if v:
815 815 dst.setconfig("ui", o, v, 'copied')
816 816
817 817 # copy bundle-specific options
818 818 r = src.config('bundle', 'mainreporoot')
819 819 if r:
820 820 dst.setconfig('bundle', 'mainreporoot', r, 'copied')
821 821
822 822 # copy selected local settings to the remote ui
823 823 for sect in ('auth', 'hostfingerprints', 'http_proxy'):
824 824 for key, val in src.configitems(sect):
825 825 dst.setconfig(sect, key, val, 'copied')
826 826 v = src.config('web', 'cacerts')
827 827 if v == '!':
828 828 dst.setconfig('web', 'cacerts', v, 'copied')
829 829 elif v:
830 830 dst.setconfig('web', 'cacerts', util.expandpath(v), 'copied')
831 831
832 832 return dst
833 833
834 834 # Files of interest
835 835 # Used to check if the repository has changed looking at mtime and size of
836 836 # these files.
837 837 foi = [('spath', '00changelog.i'),
838 838 ('spath', 'phaseroots'), # ! phase can change content at the same size
839 839 ('spath', 'obsstore'),
840 840 ('path', 'bookmarks'), # ! bookmark can change content at the same size
841 841 ]
842 842
843 843 class cachedlocalrepo(object):
844 844 """Holds a localrepository that can be cached and reused."""
845 845
846 846 def __init__(self, repo):
847 847 """Create a new cached repo from an existing repo.
848 848
849 849 We assume the passed in repo was recently created. If the
850 850 repo has changed between when it was created and when it was
851 851 turned into a cache, it may not refresh properly.
852 852 """
853 853 assert isinstance(repo, localrepo.localrepository)
854 854 self._repo = repo
855 855 self._state, self.mtime = self._repostate()
856 856
857 857 def fetch(self):
858 858 """Refresh (if necessary) and return a repository.
859 859
860 860 If the cached instance is out of date, it will be recreated
861 861 automatically and returned.
862 862
863 863 Returns a tuple of the repo and a boolean indicating whether a new
864 864 repo instance was created.
865 865 """
866 866 # We compare the mtimes and sizes of some well-known files to
867 867 # determine if the repo changed. This is not precise, as mtimes
868 868 # are susceptible to clock skew and imprecise filesystems and
869 869 # file content can change while maintaining the same size.
870 870
871 871 state, mtime = self._repostate()
872 872 if state == self._state:
873 873 return self._repo, False
874 874
875 875 self._repo = repository(self._repo.baseui, self._repo.url())
876 876 self._state = state
877 877 self.mtime = mtime
878 878
879 879 return self._repo, True
880 880
881 881 def _repostate(self):
882 882 state = []
883 883 maxmtime = -1
884 884 for attr, fname in foi:
885 885 prefix = getattr(self._repo, attr)
886 886 p = os.path.join(prefix, fname)
887 887 try:
888 888 st = os.stat(p)
889 889 except OSError:
890 890 st = os.stat(prefix)
891 891 state.append((st.st_mtime, st.st_size))
892 892 maxmtime = max(maxmtime, st.st_mtime)
893 893
894 894 return tuple(state), maxmtime
895 895
896 896 def copy(self):
897 897 """Obtain a copy of this class instance.
898 898
899 899 A new localrepository instance is obtained. The new instance should be
900 900 completely independent of the original.
901 901 """
902 902 repo = repository(self._repo.baseui, self._repo.origroot)
903 903 c = cachedlocalrepo(repo)
904 904 c._state = self._state
905 905 c.mtime = self.mtime
906 906 return c
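The change-detection scheme used by `cachedlocalrepo` above (comparing the mtimes and sizes of a fixed list of files of interest) can be sketched stand-alone. This is a minimal illustration, not Mercurial's API: `filestate` and its `(-1, -1)` missing-file sentinel are hypothetical names, standing in for `_repostate`'s fallback to `os.stat` on the parent directory.

```python
import os

def filestate(paths):
    """Fingerprint files by (mtime, size), in the spirit of _repostate.

    `paths` is a list of file names; a missing file maps to a sentinel
    so the resulting tuple stays comparable between calls.
    """
    state = []
    for p in paths:
        try:
            st = os.stat(p)
            state.append((st.st_mtime, st.st_size))
        except OSError:
            state.append((-1, -1))
    return tuple(state)
```

As the comment in `fetch` notes, this is imprecise: mtimes are subject to clock skew and filesystem granularity, and content can change without changing size, which is why phaseroots and bookmarks are flagged in the `foi` list.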
@@ -1,1534 +1,1545 b''
1 1 # merge.py - directory-level update/merge handling for Mercurial
2 2 #
3 3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import os
12 12 import shutil
13 13 import struct
14 14
15 15 from .i18n import _
16 16 from .node import (
17 17 bin,
18 18 hex,
19 19 nullhex,
20 20 nullid,
21 21 nullrev,
22 22 )
23 23 from . import (
24 24 copies,
25 25 destutil,
26 26 error,
27 27 filemerge,
28 28 obsolete,
29 29 subrepo,
30 30 util,
31 31 worker,
32 32 )
33 33
34 34 _pack = struct.pack
35 35 _unpack = struct.unpack
36 36
37 37 def _droponode(data):
38 38 # used for compatibility for v1
39 39 bits = data.split('\0')
40 40 bits = bits[:-2] + bits[-1:]
41 41 return '\0'.join(bits)
42 42
43 43 class mergestate(object):
44 44 '''track 3-way merge state of individual files
45 45
46 46 The merge state is stored on disk when needed. Two files are used: one with
47 47 an old format (version 1), and one with a new format (version 2). Version 2
48 48 stores a superset of the data in version 1, including new kinds of records
49 49 in the future. For more about the new format, see the documentation for
50 50 `_readrecordsv2`.
51 51
52 52 Each record can contain arbitrary content, and has an associated type. This
53 53 `type` should be a letter. If `type` is uppercase, the record is mandatory:
54 54 versions of Mercurial that don't support it should abort. If `type` is
55 55 lowercase, the record can be safely ignored.
56 56
57 57 Currently known records:
58 58
59 59 L: the node of the "local" part of the merge (hexified version)
60 60 O: the node of the "other" part of the merge (hexified version)
61 61 F: a file to be merged entry
62 62 C: a change/delete or delete/change conflict
63 63 D: a file that the external merge driver will merge internally
64 64 (experimental)
65 65 m: the external merge driver defined for this merge plus its run state
66 66 (experimental)
67 67 X: unsupported mandatory record type (used in tests)
68 68 x: unsupported advisory record type (used in tests)
69 69
70 70 Merge driver run states (experimental):
71 71 u: driver-resolved files unmarked -- needs to be run next time we're about
72 72 to resolve or commit
73 73 m: driver-resolved files marked -- only needs to be run before commit
74 74 s: success/skipped -- does not need to be run any more
75 75
76 76 '''
77 77 statepathv1 = 'merge/state'
78 78 statepathv2 = 'merge/state2'
79 79
80 80 @staticmethod
81 81 def clean(repo, node=None, other=None):
82 82 """Initialize a brand new merge state, removing any existing state on
83 83 disk."""
84 84 ms = mergestate(repo)
85 85 ms.reset(node, other)
86 86 return ms
87 87
88 88 @staticmethod
89 89 def read(repo):
90 90 """Initialize the merge state, reading it from disk."""
91 91 ms = mergestate(repo)
92 92 ms._read()
93 93 return ms
94 94
95 95 def __init__(self, repo):
96 96 """Initialize the merge state.
97 97
98 98 Do not use this directly! Instead call read() or clean()."""
99 99 self._repo = repo
100 100 self._dirty = False
101 101
102 102 def reset(self, node=None, other=None):
103 103 self._state = {}
104 104 self._local = None
105 105 self._other = None
106 106 for var in ('localctx', 'otherctx'):
107 107 if var in vars(self):
108 108 delattr(self, var)
109 109 if node:
110 110 self._local = node
111 111 self._other = other
112 112 self._readmergedriver = None
113 113 if self.mergedriver:
114 114 self._mdstate = 's'
115 115 else:
116 116 self._mdstate = 'u'
117 117 shutil.rmtree(self._repo.join('merge'), True)
118 118 self._results = {}
119 119 self._dirty = False
120 120
121 121 def _read(self):
122 122 """Analyse each record's content to restore a serialized state from disk
123 123
124 124 This function processes "record" entries produced by the de-serialization
125 125 of the on-disk file.
126 126 """
127 127 self._state = {}
128 128 self._local = None
129 129 self._other = None
130 130 for var in ('localctx', 'otherctx'):
131 131 if var in vars(self):
132 132 delattr(self, var)
133 133 self._readmergedriver = None
134 134 self._mdstate = 's'
135 135 unsupported = set()
136 136 records = self._readrecords()
137 137 for rtype, record in records:
138 138 if rtype == 'L':
139 139 self._local = bin(record)
140 140 elif rtype == 'O':
141 141 self._other = bin(record)
142 142 elif rtype == 'm':
143 143 bits = record.split('\0', 1)
144 144 mdstate = bits[1]
145 145 if len(mdstate) != 1 or mdstate not in 'ums':
146 146 # the merge driver should be idempotent, so just rerun it
147 147 mdstate = 'u'
148 148
149 149 self._readmergedriver = bits[0]
150 150 self._mdstate = mdstate
151 151 elif rtype in 'FDC':
152 152 bits = record.split('\0')
153 153 self._state[bits[0]] = bits[1:]
154 154 elif not rtype.islower():
155 155 unsupported.add(rtype)
156 156 self._results = {}
157 157 self._dirty = False
158 158
159 159 if unsupported:
160 160 raise error.UnsupportedMergeRecords(unsupported)
161 161
162 162 def _readrecords(self):
163 163 """Read merge state from disk and return a list of record (TYPE, data)
164 164
165 165 We read data from both v1 and v2 files and decide which one to use.
166 166
167 167 V1 has been used by versions prior to 2.9.1 and contains less data than
168 168 v2. We read both versions and check if no data in v2 contradicts
169 169 v1. If there is no contradiction we can safely assume that both v1
170 170 and v2 were written at the same time and use the extra data in v2. If
171 171 there is a contradiction we ignore the v2 content, as we assume an old
172 172 version of Mercurial has overwritten the mergestate file and left an
173 173 old v2 file around.
174 174
175 175 returns list of records [(TYPE, data), ...]"""
176 176 v1records = self._readrecordsv1()
177 177 v2records = self._readrecordsv2()
178 178 if self._v1v2match(v1records, v2records):
179 179 return v2records
180 180 else:
181 181 # v1 file is newer than v2 file, use it
182 182 # we have to infer the "other" changeset of the merge
183 183 # we cannot do better than that with v1 of the format
184 184 mctx = self._repo[None].parents()[-1]
185 185 v1records.append(('O', mctx.hex()))
186 186 # add placeholder "other" file node information;
187 187 # nobody is using it yet so we do not need to fetch the data.
188 188 # if mctx was wrong, `mctx[bits[-2]]` may fail.
189 189 for idx, r in enumerate(v1records):
190 190 if r[0] == 'F':
191 191 bits = r[1].split('\0')
192 192 bits.insert(-2, '')
193 193 v1records[idx] = (r[0], '\0'.join(bits))
194 194 return v1records
195 195
196 196 def _v1v2match(self, v1records, v2records):
197 197 oldv2 = set() # old format version of v2 record
198 198 for rec in v2records:
199 199 if rec[0] == 'L':
200 200 oldv2.add(rec)
201 201 elif rec[0] == 'F':
202 202 # drop the onode data (not contained in v1)
203 203 oldv2.add(('F', _droponode(rec[1])))
204 204 for rec in v1records:
205 205 if rec not in oldv2:
206 206 return False
207 207 else:
208 208 return True
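The v1/v2 reconciliation rule above can be sketched in isolation: every v1 record must reappear in v2 once the extra "other file node" field (which v1 lacks) is dropped from v2's 'F' records. This is an illustrative stand-alone version; `v1v2match` is a hypothetical name and the field-dropping mirrors `_droponode`.

```python
def v1v2match(v1records, v2records):
    """Return True if v2records is consistent with v1records.

    'F' records in v2 carry one extra field (the other file node),
    which is removed before comparison, as _droponode does.
    """
    oldv2 = set()
    for rtype, data in v2records:
        if rtype == 'L':
            oldv2.add((rtype, data))
        elif rtype == 'F':
            bits = data.split('\0')
            # drop the onode field (second to last), absent from v1
            oldv2.add(('F', '\0'.join(bits[:-2] + bits[-1:])))
    return all(rec in oldv2 for rec in v1records)
```

When this check fails, `_readrecords` above falls back to the v1 file and synthesizes the missing 'O' record and placeholder node fields.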
209 209
210 210 def _readrecordsv1(self):
211 211 """read on disk merge state for version 1 file
212 212
213 213 returns list of records [(TYPE, data), ...]
214 214
215 215 Note: the "F" data from this file are one entry short
216 216 (no "other file node" entry)
217 217 """
218 218 records = []
219 219 try:
220 220 f = self._repo.vfs(self.statepathv1)
221 221 for i, l in enumerate(f):
222 222 if i == 0:
223 223 records.append(('L', l[:-1]))
224 224 else:
225 225 records.append(('F', l[:-1]))
226 226 f.close()
227 227 except IOError as err:
228 228 if err.errno != errno.ENOENT:
229 229 raise
230 230 return records
231 231
232 232 def _readrecordsv2(self):
233 233 """read on disk merge state for version 2 file
234 234
235 235 This format is a list of arbitrary records of the form:
236 236
237 237 [type][length][content]
238 238
239 239 `type` is a single character, `length` is a 4 byte integer, and
240 240 `content` is an arbitrary byte sequence of length `length`.
241 241
242 242 Mercurial versions prior to 3.7 have a bug where if there are
243 243 unsupported mandatory merge records, attempting to clear out the merge
244 244 state with hg update --clean or similar aborts. The 't' record type
245 245 works around that by writing out what those versions treat as an
246 246 advisory record, but later versions interpret as special: the first
247 247 character is the 'real' record type and everything onwards is the data.
248 248
249 249 Returns list of records [(TYPE, data), ...]."""
250 250 records = []
251 251 try:
252 252 f = self._repo.vfs(self.statepathv2)
253 253 data = f.read()
254 254 off = 0
255 255 end = len(data)
256 256 while off < end:
257 257 rtype = data[off]
258 258 off += 1
259 259 length = _unpack('>I', data[off:(off + 4)])[0]
260 260 off += 4
261 261 record = data[off:(off + length)]
262 262 off += length
263 263 if rtype == 't':
264 264 rtype, record = record[0], record[1:]
265 265 records.append((rtype, record))
266 266 f.close()
267 267 except IOError as err:
268 268 if err.errno != errno.ENOENT:
269 269 raise
270 270 return records
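The `[type][length][content]` framing and the 't' escaping described in the docstring above can be shown as a round trip. This is a sketch, not the mergestate code itself: `packrecord`/`unpackrecords` are hypothetical helpers combining the logic of `_writerecordsv2` and `_readrecordsv2`, written for Python 3 byte strings.

```python
import struct

def packrecord(rtype, data, whitelist=b'LOF'):
    """Serialize one v2 record as [type][4-byte length][content].

    Types outside the v2 whitelist are wrapped in a 't' record, so
    pre-3.7 clients treat them as advisory (see docstring above).
    """
    if rtype not in whitelist:
        rtype, data = b't', rtype + data
    return struct.pack('>sI%ds' % len(data), rtype, len(data), data)

def unpackrecords(data):
    """Parse concatenated records back into a list of (type, data)."""
    records, off = [], 0
    while off < len(data):
        rtype = data[off:off + 1]
        off += 1
        (length,) = struct.unpack('>I', data[off:off + 4])
        off += 4
        record = data[off:off + length]
        off += length
        if rtype == b't':
            # unwrap: first byte is the real type, rest is the payload
            rtype, record = record[:1], record[1:]
        records.append((rtype, record))
    return records
```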
271 271
272 272 @util.propertycache
273 273 def mergedriver(self):
274 274 # protect against the following:
275 275 # - A configures a malicious merge driver in their hgrc, then
276 276 # pauses the merge
277 277 # - A edits their hgrc to remove references to the merge driver
278 278 # - A gives a copy of their entire repo, including .hg, to B
279 279 # - B inspects .hgrc and finds it to be clean
280 280 # - B then continues the merge and the malicious merge driver
281 281 # gets invoked
282 282 configmergedriver = self._repo.ui.config('experimental', 'mergedriver')
283 283 if (self._readmergedriver is not None
284 284 and self._readmergedriver != configmergedriver):
285 285 raise error.ConfigError(
286 286 _("merge driver changed since merge started"),
287 287 hint=_("revert merge driver change or abort merge"))
288 288
289 289 return configmergedriver
290 290
291 291 @util.propertycache
292 292 def localctx(self):
293 293 if self._local is None:
294 294 raise RuntimeError("localctx accessed but self._local isn't set")
295 295 return self._repo[self._local]
296 296
297 297 @util.propertycache
298 298 def otherctx(self):
299 299 if self._other is None:
300 300 raise RuntimeError("otherctx accessed but self._other isn't set")
301 301 return self._repo[self._other]
302 302
303 303 def active(self):
304 304 """Whether mergestate is active.
305 305
306 306 Returns True if there appears to be mergestate. This is a rough proxy
307 307 for "is a merge in progress."
308 308 """
309 309 # Check local variables before looking at filesystem for performance
310 310 # reasons.
311 311 return bool(self._local) or bool(self._state) or \
312 312 self._repo.vfs.exists(self.statepathv1) or \
313 313 self._repo.vfs.exists(self.statepathv2)
314 314
315 315 def commit(self):
316 316 """Write current state on disk (if necessary)"""
317 317 if self._dirty:
318 318 records = self._makerecords()
319 319 self._writerecords(records)
320 320 self._dirty = False
321 321
322 322 def _makerecords(self):
323 323 records = []
324 324 records.append(('L', hex(self._local)))
325 325 records.append(('O', hex(self._other)))
326 326 if self.mergedriver:
327 327 records.append(('m', '\0'.join([
328 328 self.mergedriver, self._mdstate])))
329 329 for d, v in self._state.iteritems():
330 330 if v[0] == 'd':
331 331 records.append(('D', '\0'.join([d] + v)))
332 332 # v[1] == local ('cd'), v[6] == other ('dc') -- not supported by
333 333 # older versions of Mercurial
334 334 elif v[1] == nullhex or v[6] == nullhex:
335 335 records.append(('C', '\0'.join([d] + v)))
336 336 else:
337 337 records.append(('F', '\0'.join([d] + v)))
338 338 return records
339 339
340 340 def _writerecords(self, records):
341 341 """Write current state on disk (both v1 and v2)"""
342 342 self._writerecordsv1(records)
343 343 self._writerecordsv2(records)
344 344
345 345 def _writerecordsv1(self, records):
346 346 """Write current state on disk in a version 1 file"""
347 347 f = self._repo.vfs(self.statepathv1, 'w')
348 348 irecords = iter(records)
349 349 lrecords = irecords.next()
350 350 assert lrecords[0] == 'L'
351 351 f.write(hex(self._local) + '\n')
352 352 for rtype, data in irecords:
353 353 if rtype == 'F':
354 354 f.write('%s\n' % _droponode(data))
355 355 f.close()
356 356
357 357 def _writerecordsv2(self, records):
358 358 """Write current state on disk in a version 2 file
359 359
360 360 See the docstring for _readrecordsv2 for why we use 't'."""
361 361 # these are the records that all version 2 clients can read
362 362 whitelist = 'LOF'
363 363 f = self._repo.vfs(self.statepathv2, 'w')
364 364 for key, data in records:
365 365 assert len(key) == 1
366 366 if key not in whitelist:
367 367 key, data = 't', '%s%s' % (key, data)
368 368 format = '>sI%is' % len(data)
369 369 f.write(_pack(format, key, len(data), data))
370 370 f.close()
371 371
372 372 def add(self, fcl, fco, fca, fd):
373 373 """add a new (potentially?) conflicting file to the merge state
374 374 fcl: file context for local,
375 375 fco: file context for remote,
376 376 fca: file context for ancestors,
377 377 fd: file path of the resulting merge.
378 378
379 379 note: also write the local version to the `.hg/merge` directory.
380 380 """
381 381 if fcl.isabsent():
382 382 hash = nullhex
383 383 else:
384 384 hash = util.sha1(fcl.path()).hexdigest()
385 385 self._repo.vfs.write('merge/' + hash, fcl.data())
386 386 self._state[fd] = ['u', hash, fcl.path(),
387 387 fca.path(), hex(fca.filenode()),
388 388 fco.path(), hex(fco.filenode()),
389 389 fcl.flags()]
390 390 self._dirty = True
391 391
392 392 def __contains__(self, dfile):
393 393 return dfile in self._state
394 394
395 395 def __getitem__(self, dfile):
396 396 return self._state[dfile][0]
397 397
398 398 def __iter__(self):
399 399 return iter(sorted(self._state))
400 400
401 401 def files(self):
402 402 return self._state.keys()
403 403
404 404 def mark(self, dfile, state):
405 405 self._state[dfile][0] = state
406 406 self._dirty = True
407 407
408 408 def mdstate(self):
409 409 return self._mdstate
410 410
411 411 def unresolved(self):
412 412 """Obtain the paths of unresolved files."""
413 413
414 414 for f, entry in self._state.items():
415 415 if entry[0] == 'u':
416 416 yield f
417 417
418 418 def driverresolved(self):
419 419 """Obtain the paths of driver-resolved files."""
420 420
421 421 for f, entry in self._state.items():
422 422 if entry[0] == 'd':
423 423 yield f
424 424
425 425 def _resolve(self, preresolve, dfile, wctx, labels=None):
426 426 """rerun merge process for file path `dfile`"""
427 427 if self[dfile] in 'rd':
428 428 return True, 0
429 429 stateentry = self._state[dfile]
430 430 state, hash, lfile, afile, anode, ofile, onode, flags = stateentry
431 431 octx = self._repo[self._other]
432 432 fcd = self._filectxorabsent(hash, wctx, dfile)
433 433 fco = self._filectxorabsent(onode, octx, ofile)
434 434 # TODO: move this to filectxorabsent
435 435 fca = self._repo.filectx(afile, fileid=anode)
436 436 # "premerge" x flags
437 437 flo = fco.flags()
438 438 fla = fca.flags()
439 439 if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
440 440 if fca.node() == nullid:
441 441 if preresolve:
442 442 self._repo.ui.warn(
443 443 _('warning: cannot merge flags for %s\n') % afile)
444 444 elif flags == fla:
445 445 flags = flo
446 446 if preresolve:
447 447 # restore local
448 448 if hash != nullhex:
449 449 f = self._repo.vfs('merge/' + hash)
450 450 self._repo.wwrite(dfile, f.read(), flags)
451 451 f.close()
452 452 else:
453 453 self._repo.wvfs.unlinkpath(dfile, ignoremissing=True)
454 454 complete, r, deleted = filemerge.premerge(self._repo, self._local,
455 455 lfile, fcd, fco, fca,
456 456 labels=labels)
457 457 else:
458 458 complete, r, deleted = filemerge.filemerge(self._repo, self._local,
459 459 lfile, fcd, fco, fca,
460 460 labels=labels)
461 461 if r is None:
462 462 # no real conflict
463 463 del self._state[dfile]
464 464 self._dirty = True
465 465 elif not r:
466 466 self.mark(dfile, 'r')
467 467
468 468 if complete:
469 469 action = None
470 470 if deleted:
471 471 if fcd.isabsent():
472 472 # dc: local picked. Need to drop if present, which may
473 473 # happen on re-resolves.
474 474 action = 'f'
475 475 else:
476 476 # cd: remote picked (or otherwise deleted)
477 477 action = 'r'
478 478 else:
479 479 if fcd.isabsent(): # dc: remote picked
480 480 action = 'g'
481 481 elif fco.isabsent(): # cd: local picked
482 482 if dfile in self.localctx:
483 483 action = 'am'
484 484 else:
485 485 action = 'a'
486 486 # else: regular merges (no action necessary)
487 487 self._results[dfile] = r, action
488 488
489 489 return complete, r
490 490
491 491 def _filectxorabsent(self, hexnode, ctx, f):
492 492 if hexnode == nullhex:
493 493 return filemerge.absentfilectx(ctx, f)
494 494 else:
495 495 return ctx[f]
496 496
497 497 def preresolve(self, dfile, wctx, labels=None):
498 498 """run premerge process for dfile
499 499
500 500 Returns whether the merge is complete, and the exit code."""
501 501 return self._resolve(True, dfile, wctx, labels=labels)
502 502
503 503 def resolve(self, dfile, wctx, labels=None):
504 504 """run merge process (assuming premerge was run) for dfile
505 505
506 506 Returns the exit code of the merge."""
507 507 return self._resolve(False, dfile, wctx, labels=labels)[1]
508 508
509 509 def counts(self):
510 510 """return counts for updated, merged and removed files in this
511 511 session"""
512 512 updated, merged, removed = 0, 0, 0
513 513 for r, action in self._results.itervalues():
514 514 if r is None:
515 515 updated += 1
516 516 elif r == 0:
517 517 if action == 'r':
518 518 removed += 1
519 519 else:
520 520 merged += 1
521 521 return updated, merged, removed
522 522
523 523 def unresolvedcount(self):
524 524 """get unresolved count for this merge (persistent)"""
525 525 return len([True for f, entry in self._state.iteritems()
526 526 if entry[0] == 'u'])
527 527
528 528 def actions(self):
529 529 """return lists of actions to perform on the dirstate"""
530 530 actions = {'r': [], 'f': [], 'a': [], 'am': [], 'g': []}
531 531 for f, (r, action) in self._results.iteritems():
532 532 if action is not None:
533 533 actions[action].append((f, None, "merge result"))
534 534 return actions
535 535
536 536 def recordactions(self):
537 537 """record remove/add/get actions in the dirstate"""
538 538 branchmerge = self._repo.dirstate.p2() != nullid
539 539 recordupdates(self._repo, self.actions(), branchmerge)
540 540
541 541 def queueremove(self, f):
542 542 """queues a file to be removed from the dirstate
543 543
544 544 Meant for use by custom merge drivers."""
545 545 self._results[f] = 0, 'r'
546 546
547 547 def queueadd(self, f):
548 548 """queues a file to be added to the dirstate
549 549
550 550 Meant for use by custom merge drivers."""
551 551 self._results[f] = 0, 'a'
552 552
553 553 def queueget(self, f):
554 554 """queues a file to be marked modified in the dirstate
555 555
556 556 Meant for use by custom merge drivers."""
557 557 self._results[f] = 0, 'g'
558 558
559 559 def _checkunknownfile(repo, wctx, mctx, f, f2=None):
560 560 if f2 is None:
561 561 f2 = f
562 562 return (os.path.isfile(repo.wjoin(f))
563 563 and repo.wvfs.audit.check(f)
564 564 and repo.dirstate.normalize(f) not in repo.dirstate
565 565 and mctx[f2].cmp(wctx[f]))
566 566
567 567 def _checkunknownfiles(repo, wctx, mctx, force, actions):
568 568 """
569 569 Considers any actions that care about the presence of conflicting unknown
570 570 files. For some actions, the result is to abort; for others, it is to
571 571 choose a different action.
572 572 """
573 573 aborts = []
574 574 if not force:
575 575 for f, (m, args, msg) in actions.iteritems():
576 576 if m in ('c', 'dc'):
577 577 if _checkunknownfile(repo, wctx, mctx, f):
578 578 aborts.append(f)
579 579 elif m == 'dg':
580 580 if _checkunknownfile(repo, wctx, mctx, f, args[0]):
581 581 aborts.append(f)
582 582
583 583 for f in sorted(aborts):
584 584 repo.ui.warn(_("%s: untracked file differs\n") % f)
585 585 if aborts:
586 586 raise error.Abort(_("untracked files in working directory differ "
587 587 "from files in requested revision"))
588 588
589 589 for f, (m, args, msg) in actions.iteritems():
590 590 if m == 'c':
591 591 actions[f] = ('g', args, msg)
592 592 elif m == 'cm':
593 593 fl2, anc = args
594 594 different = _checkunknownfile(repo, wctx, mctx, f)
595 595 if different:
596 596 actions[f] = ('m', (f, f, None, False, anc),
597 597 "remote differs from untracked local")
598 598 else:
599 599 actions[f] = ('g', (fl2,), "remote created")
600 600
601 601 def _forgetremoved(wctx, mctx, branchmerge):
602 602 """
603 603 Forget removed files
604 604
605 605 If we're jumping between revisions (as opposed to merging), and if
606 606 neither the working directory nor the target rev has the file,
607 607 then we need to remove it from the dirstate, to prevent the
608 608 dirstate from listing the file when it is no longer in the
609 609 manifest.
610 610
611 611 If we're merging, and the other revision has removed a file
612 612 that is not present in the working directory, we need to mark it
613 613 as removed.
614 614 """
615 615
616 616 actions = {}
617 617 m = 'f'
618 618 if branchmerge:
619 619 m = 'r'
620 620 for f in wctx.deleted():
621 621 if f not in mctx:
622 622 actions[f] = m, None, "forget deleted"
623 623
624 624 if not branchmerge:
625 625 for f in wctx.removed():
626 626 if f not in mctx:
627 627 actions[f] = 'f', None, "forget removed"
628 628
629 629 return actions
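The decision table in `_forgetremoved` can be condensed into a small stand-alone function. The names here (`forgetremoved`, `mctx_files`) are illustrative stand-ins for the working-context methods and target-manifest lookup used above.

```python
def forgetremoved(deleted, removed, mctx_files, branchmerge):
    """Sketch of _forgetremoved: 'r' marks removed on a branch merge,
    'f' forgets from the dirstate on a plain update."""
    actions = {}
    m = 'r' if branchmerge else 'f'
    for f in deleted:
        if f not in mctx_files:
            actions[f] = (m, None, 'forget deleted')
    if not branchmerge:
        for f in removed:
            if f not in mctx_files:
                actions[f] = ('f', None, 'forget removed')
    return actions
```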
630 630
631 631 def _checkcollision(repo, wmf, actions):
632 632 # build provisional merged manifest up
633 633 pmmf = set(wmf)
634 634
635 635 if actions:
636 636 # k, dr, e and rd are no-op
637 637 for m in 'a', 'am', 'f', 'g', 'cd', 'dc':
638 638 for f, args, msg in actions[m]:
639 639 pmmf.add(f)
640 640 for f, args, msg in actions['r']:
641 641 pmmf.discard(f)
642 642 for f, args, msg in actions['dm']:
643 643 f2, flags = args
644 644 pmmf.discard(f2)
645 645 pmmf.add(f)
646 646 for f, args, msg in actions['dg']:
647 647 pmmf.add(f)
648 648 for f, args, msg in actions['m']:
649 649 f1, f2, fa, move, anc = args
650 650 if move:
651 651 pmmf.discard(f1)
652 652 pmmf.add(f)
653 653
654 654 # check case-folding collision in provisional merged manifest
655 655 foldmap = {}
656 656 for f in sorted(pmmf):
657 657 fold = util.normcase(f)
658 658 if fold in foldmap:
659 659 raise error.Abort(_("case-folding collision between %s and %s")
660 660 % (f, foldmap[fold]))
661 661 foldmap[fold] = f
662 662
663 663 # check case-folding of directories
664 664 foldprefix = unfoldprefix = lastfull = ''
665 665 for fold, f in sorted(foldmap.items()):
666 666 if fold.startswith(foldprefix) and not f.startswith(unfoldprefix):
667 667 # the folded prefix matches but actual casing is different
668 668 raise error.Abort(_("case-folding collision between "
669 669 "%s and directory of %s") % (lastfull, f))
670 670 foldprefix = fold + '/'
671 671 unfoldprefix = f + '/'
672 672 lastfull = f
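The per-file half of the collision check above reduces to a fold-and-compare over the provisional manifest. A minimal sketch, using `str.lower()` as a stand-in for `util.normcase()` and returning the colliding pair instead of raising `error.Abort`:

```python
def checkfoldcollision(paths):
    """Return the first case-folding collision among paths, or None."""
    foldmap = {}
    for f in sorted(paths):
        fold = f.lower()  # stand-in for util.normcase()
        if fold in foldmap:
            return (foldmap[fold], f)
        foldmap[fold] = f
    return None
```

The directory pass that follows in `_checkcollision` extends this with prefix tracking, so that a file colliding with a *directory's* casing is also caught.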
673 673
674 674 def driverpreprocess(repo, ms, wctx, labels=None):
675 675 """run the preprocess step of the merge driver, if any
676 676
677 677 This is currently not implemented -- it's an extension point."""
678 678 return True
679 679
680 680 def driverconclude(repo, ms, wctx, labels=None):
681 681 """run the conclude step of the merge driver, if any
682 682
683 683 This is currently not implemented -- it's an extension point."""
684 684 return True
685 685
686 686 def manifestmerge(repo, wctx, p2, pa, branchmerge, force, partial,
687 687 acceptremote, followcopies):
688 688 """
689 689 Merge p1 and p2 with ancestor pa and generate merge action list
690 690
691 691 branchmerge and force are as passed in to update
692 692 partial = function to filter file lists
693 693 acceptremote = accept the incoming changes without prompting
694 694 """
695 695
696 696 copy, movewithdir, diverge, renamedelete = {}, {}, {}, {}
697 697
698 698 # manifests fetched in order are going to be faster, so prime the caches
699 699 [x.manifest() for x in
700 700 sorted(wctx.parents() + [p2, pa], key=lambda x: x.rev())]
701 701
702 702 if followcopies:
703 703 ret = copies.mergecopies(repo, wctx, p2, pa)
704 704 copy, movewithdir, diverge, renamedelete = ret
705 705
706 706 repo.ui.note(_("resolving manifests\n"))
707 707 repo.ui.debug(" branchmerge: %s, force: %s, partial: %s\n"
708 708 % (bool(branchmerge), bool(force), bool(partial)))
709 709 repo.ui.debug(" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2))
710 710
711 711 m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest()
712 712 copied = set(copy.values())
713 713 copied.update(movewithdir.values())
714 714
715 715 if '.hgsubstate' in m1:
716 716 # check whether sub state is modified
717 717 for s in sorted(wctx.substate):
718 718 if wctx.sub(s).dirty():
719 719 m1['.hgsubstate'] += '+'
720 720 break
721 721
722 722 # Compare manifests
723 723 diff = m1.diff(m2)
724 724
725 725 actions = {}
726 726 for f, ((n1, fl1), (n2, fl2)) in diff.iteritems():
727 727 if partial and not partial(f):
728 728 continue
729 729 if n1 and n2: # file exists on both local and remote side
730 730 if f not in ma:
731 731 fa = copy.get(f, None)
732 732 if fa is not None:
733 733 actions[f] = ('m', (f, f, fa, False, pa.node()),
734 734 "both renamed from " + fa)
735 735 else:
736 736 actions[f] = ('m', (f, f, None, False, pa.node()),
737 737 "both created")
738 738 else:
739 739 a = ma[f]
740 740 fla = ma.flags(f)
741 741 nol = 'l' not in fl1 + fl2 + fla
742 742 if n2 == a and fl2 == fla:
743 743 actions[f] = ('k' , (), "remote unchanged")
744 744 elif n1 == a and fl1 == fla: # local unchanged - use remote
745 745 if n1 == n2: # optimization: keep local content
746 746 actions[f] = ('e', (fl2,), "update permissions")
747 747 else:
748 748 actions[f] = ('g', (fl2,), "remote is newer")
749 749 elif nol and n2 == a: # remote only changed 'x'
750 750 actions[f] = ('e', (fl2,), "update permissions")
751 751 elif nol and n1 == a: # local only changed 'x'
752 752 actions[f] = ('g', (fl1,), "remote is newer")
753 753 else: # both changed something
754 754 actions[f] = ('m', (f, f, f, False, pa.node()),
755 755 "versions differ")
756 756 elif n1: # file exists only on local side
757 757 if f in copied:
758 758 pass # we'll deal with it on m2 side
759 759 elif f in movewithdir: # directory rename, move local
760 760 f2 = movewithdir[f]
761 761 if f2 in m2:
762 762 actions[f2] = ('m', (f, f2, None, True, pa.node()),
763 763 "remote directory rename, both created")
764 764 else:
765 765 actions[f2] = ('dm', (f, fl1),
766 766 "remote directory rename - move from " + f)
767 767 elif f in copy:
768 768 f2 = copy[f]
769 769 actions[f] = ('m', (f, f2, f2, False, pa.node()),
770 770 "local copied/moved from " + f2)
771 771 elif f in ma: # clean, a different, no remote
772 772 if n1 != ma[f]:
773 773 if acceptremote:
774 774 actions[f] = ('r', None, "remote delete")
775 775 else:
776 776 actions[f] = ('cd', (f, None, f, False, pa.node()),
777 777 "prompt changed/deleted")
778 778 elif n1[20:] == 'a':
779 779 # This extra 'a' is added by working copy manifest to mark
780 780 # the file as locally added. We should forget it instead of
781 781 # deleting it.
782 782 actions[f] = ('f', None, "remote deleted")
783 783 else:
784 784 actions[f] = ('r', None, "other deleted")
785 785 elif n2: # file exists only on remote side
786 786 if f in copied:
787 787 pass # we'll deal with it on m1 side
788 788 elif f in movewithdir:
789 789 f2 = movewithdir[f]
790 790 if f2 in m1:
791 791 actions[f2] = ('m', (f2, f, None, False, pa.node()),
792 792 "local directory rename, both created")
793 793 else:
794 794 actions[f2] = ('dg', (f, fl2),
795 795 "local directory rename - get from " + f)
796 796 elif f in copy:
797 797 f2 = copy[f]
798 798 if f2 in m2:
799 799 actions[f] = ('m', (f2, f, f2, False, pa.node()),
800 800 "remote copied from " + f2)
801 801 else:
802 802 actions[f] = ('m', (f2, f, f2, True, pa.node()),
803 803 "remote moved from " + f2)
804 804 elif f not in ma:
805 805 # local unknown, remote created: the logic is described by the
806 806 # following table:
807 807 #
808 808 # force branchmerge different | action
809 809 # n * * | create
810 810 # y n * | create
811 811 # y y n | create
812 812 # y y y | merge
813 813 #
814 814 # Checking whether the files are different is expensive, so we
815 815 # don't do that when we can avoid it.
816 816 if not force:
817 817 actions[f] = ('c', (fl2,), "remote created")
818 818 elif not branchmerge:
819 819 actions[f] = ('c', (fl2,), "remote created")
820 820 else:
821 821 actions[f] = ('cm', (fl2, pa.node()),
822 822 "remote created, get or merge")
823 823 elif n2 != ma[f]:
824 824 if acceptremote:
825 825 actions[f] = ('c', (fl2,), "remote recreating")
826 826 else:
827 827 actions[f] = ('dc', (None, f, f, False, pa.node()),
828 828 "prompt deleted/changed")
829 829
830 830 return actions, diverge, renamedelete
831 831
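Each entry manifestmerge produces is a `(code, args, message)` triple keyed by filename. A minimal, self-contained sketch of the same diff-to-actions mapping on plain dicts (not Mercurial's manifest objects; the action codes mirror the ones used above, and the copy/rename and flag cases are omitted):

```python
def toy_manifestmerge(m1, m2, ma):
    """Toy version of the diff walk above: m1/m2/ma are plain
    {filename: nodeid} dicts for local, remote, and ancestor."""
    actions = {}
    for f in sorted(set(m1) | set(m2)):
        n1, n2, a = m1.get(f), m2.get(f), ma.get(f)
        if n1 and n2:                      # file exists on both sides
            if n2 == a:
                actions[f] = ('k', (), 'remote unchanged')
            elif n1 == a:
                actions[f] = ('g', (), 'remote is newer')
            else:
                actions[f] = ('m', (), 'versions differ')
        elif n1:                           # local side only
            actions[f] = (('r', None, 'other deleted') if a
                          else ('k', (), 'local created'))
        else:                              # remote side only
            actions[f] = (('c', (), 'remote created') if a is None
                          else ('m', (), 'deleted/changed'))
    return actions

print(toy_manifestmerge({'a': 1, 'b': 2}, {'a': 9, 'c': 3}, {'a': 1, 'b': 2}))
```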
832 832 def _resolvetrivial(repo, wctx, mctx, ancestor, actions):
833 833 """Resolves false conflicts where the nodeid changed but the content
834 834 remained the same."""
835 835
836 836 for f, (m, args, msg) in actions.items():
837 837 if m == 'cd' and f in ancestor and not wctx[f].cmp(ancestor[f]):
838 838 # local did change but ended up with same content
839 839 actions[f] = 'r', None, "prompt same"
840 840 elif m == 'dc' and f in ancestor and not mctx[f].cmp(ancestor[f]):
841 841 # remote did change but ended up with same content
842 842 del actions[f] # don't get = keep local deleted
843 843
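The content comparison in _resolvetrivial can be sketched on plain dicts. Hedged: in the real code `wctx[f].cmp()` compares file contents; here plain equality stands in, and `resolve_trivial` is a hypothetical name for illustration:

```python
def resolve_trivial(actions, local, remote, ancestor):
    # 'cd': local "changed", remote deleted -- if local content equals
    # the ancestor's, nothing really changed, so honor the delete.
    # 'dc': remote "changed", local deleted -- if remote equals the
    # ancestor, drop the action and keep the local deletion.
    for f, (m, args, msg) in list(actions.items()):
        if m == 'cd' and local.get(f) == ancestor.get(f):
            actions[f] = ('r', None, 'prompt same')
        elif m == 'dc' and remote.get(f) == ancestor.get(f):
            del actions[f]
    return actions
```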
844 844 def calculateupdates(repo, wctx, mctx, ancestors, branchmerge, force, partial,
845 845 acceptremote, followcopies):
846 846 "Calculate the actions needed to merge mctx into wctx using ancestors"
847 847
848 848 if len(ancestors) == 1: # default
849 849 actions, diverge, renamedelete = manifestmerge(
850 850 repo, wctx, mctx, ancestors[0], branchmerge, force, partial,
851 851 acceptremote, followcopies)
852 852 _checkunknownfiles(repo, wctx, mctx, force, actions)
853 853
854 854 else: # only when merge.preferancestor=* - the default
855 855 repo.ui.note(
856 856 _("note: merging %s and %s using bids from ancestors %s\n") %
857 857 (wctx, mctx, _(' and ').join(str(anc) for anc in ancestors)))
858 858
859 859 # Call for bids
860 860 fbids = {} # mapping filename to bids (action method to list of actions)
861 861 diverge, renamedelete = None, None
862 862 for ancestor in ancestors:
863 863 repo.ui.note(_('\ncalculating bids for ancestor %s\n') % ancestor)
864 864 actions, diverge1, renamedelete1 = manifestmerge(
865 865 repo, wctx, mctx, ancestor, branchmerge, force, partial,
866 866 acceptremote, followcopies)
867 867 _checkunknownfiles(repo, wctx, mctx, force, actions)
868 868
869 869 # Track the shortest set of warnings on the theory that bid
870 870 # merge will correctly incorporate more information
871 871 if diverge is None or len(diverge1) < len(diverge):
872 872 diverge = diverge1
873 873 if renamedelete is None or len(renamedelete1) < len(renamedelete):
874 874 renamedelete = renamedelete1
875 875
876 876 for f, a in sorted(actions.iteritems()):
877 877 m, args, msg = a
878 878 repo.ui.debug(' %s: %s -> %s\n' % (f, msg, m))
879 879 if f in fbids:
880 880 d = fbids[f]
881 881 if m in d:
882 882 d[m].append(a)
883 883 else:
884 884 d[m] = [a]
885 885 else:
886 886 fbids[f] = {m: [a]}
887 887
888 888 # Pick the best bid for each file
889 889 repo.ui.note(_('\nauction for merging merge bids\n'))
890 890 actions = {}
891 891 for f, bids in sorted(fbids.items()):
892 892 # bids is a mapping from action method to list of actions
893 893 # Consensus?
894 894 if len(bids) == 1: # all bids are the same kind of method
895 895 m, l = bids.items()[0]
896 896 if all(a == l[0] for a in l[1:]): # len(bids) is > 1
897 897 repo.ui.note(" %s: consensus for %s\n" % (f, m))
898 898 actions[f] = l[0]
899 899 continue
900 900 # If keep is an option, just do it.
901 901 if 'k' in bids:
902 902 repo.ui.note(" %s: picking 'keep' action\n" % f)
903 903 actions[f] = bids['k'][0]
904 904 continue
905 905 # If there are gets and they all agree [how could they not?], do it.
906 906 if 'g' in bids:
907 907 ga0 = bids['g'][0]
908 908 if all(a == ga0 for a in bids['g'][1:]):
909 909 repo.ui.note(" %s: picking 'get' action\n" % f)
910 910 actions[f] = ga0
911 911 continue
912 912 # TODO: Consider other simple actions such as mode changes
913 913 # Handle inefficient democrazy.
914 914 repo.ui.note(_(' %s: multiple bids for merge action:\n') % f)
915 915 for m, l in sorted(bids.items()):
916 916 for _f, args, msg in l:
917 917 repo.ui.note(' %s -> %s\n' % (msg, m))
918 918 # Pick random action. TODO: Instead, prompt user when resolving
919 919 m, l = bids.items()[0]
920 920 repo.ui.warn(_(' %s: ambiguous merge - picked %s action\n') %
921 921 (f, m))
922 922 actions[f] = l[0]
923 923 continue
924 924 repo.ui.note(_('end of auction\n\n'))
925 925
926 926 _resolvetrivial(repo, wctx, mctx, ancestors[0], actions)
927 927
928 928 if wctx.rev() is None:
929 929 fractions = _forgetremoved(wctx, mctx, branchmerge)
930 930 actions.update(fractions)
931 931
932 932 return actions, diverge, renamedelete
933 933
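The auction in the multiple-ancestor branch above can be reduced to a small standalone sketch (hedged: `auction` is an illustrative name, and the real code also reports consensus and ambiguity to the ui):

```python
def auction(fbids):
    """Sketch of the bid auction: fbids maps filename to
    {action_code: [action, ...]}, one bid per ancestor that proposed
    that action code."""
    chosen = {}
    for f, bids in sorted(fbids.items()):
        if len(bids) == 1:                       # consensus on the method
            [(m, l)] = bids.items()
            chosen[f] = l[0]
        elif 'k' in bids:                        # keeping local is safest
            chosen[f] = bids['k'][0]
        elif 'g' in bids and all(a == bids['g'][0] for a in bids['g']):
            chosen[f] = bids['g'][0]             # all 'get' bids agree
        else:                                    # ambiguous: arbitrary pick
            chosen[f] = next(iter(bids.values()))[0]
    return chosen

fbids = {'x': {'g': [('g', ('',), 'remote is newer')]},
         'y': {'k': [('k', (), 'keep')], 'm': [('m', (), 'merge')]}}
print(auction(fbids))
```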
934 934 def batchremove(repo, actions):
935 935 """apply removes to the working directory
936 936
937 937 yields tuples for progress updates
938 938 """
939 939 verbose = repo.ui.verbose
940 940 unlink = util.unlinkpath
941 941 wjoin = repo.wjoin
942 942 audit = repo.wvfs.audit
943 943 i = 0
944 944 for f, args, msg in actions:
945 945 repo.ui.debug(" %s: %s -> r\n" % (f, msg))
946 946 if verbose:
947 947 repo.ui.note(_("removing %s\n") % f)
948 948 audit(f)
949 949 try:
950 950 unlink(wjoin(f), ignoremissing=True)
951 951 except OSError as inst:
952 952 repo.ui.warn(_("update failed to remove %s: %s!\n") %
953 953 (f, inst.strerror))
954 954 if i == 100:
955 955 yield i, f
956 956 i = 0
957 957 i += 1
958 958 if i > 0:
959 959 yield i, f
960 960
961 961 def batchget(repo, mctx, actions):
962 962 """apply gets to the working directory
963 963
964 964 mctx is the context to get from
965 965
966 966 yields tuples for progress updates
967 967 """
968 968 verbose = repo.ui.verbose
969 969 fctx = mctx.filectx
970 970 wwrite = repo.wwrite
971 971 i = 0
972 972 for f, args, msg in actions:
973 973 repo.ui.debug(" %s: %s -> g\n" % (f, msg))
974 974 if verbose:
975 975 repo.ui.note(_("getting %s\n") % f)
976 976 wwrite(f, fctx(f).data(), args[0])
977 977 if i == 100:
978 978 yield i, f
979 979 i = 0
980 980 i += 1
981 981 if i > 0:
982 982 yield i, f
983 983
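Both batch helpers yield `(count, last_file)` chunks so that worker.worker can fan the actions out across processes while the caller drives one progress bar from the summed counts. A standalone sketch of the pattern (slightly simpler than the in-tree loop, which counts before the check):

```python
def batched(items, chunk=100):
    """Yield (n, last_item) after every `chunk` items, plus a final
    partial chunk -- the consumer just sums the counts for progress."""
    i = 0
    item = None
    for item in items:
        i += 1
        if i == chunk:
            yield i, item
            i = 0
    if i:
        yield i, item

print(list(batched(range(250))))   # two full chunks and one of 50
```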
984 984 def applyupdates(repo, actions, wctx, mctx, overwrite, labels=None):
985 985 """apply the merge action list to the working directory
986 986
987 987 wctx is the working copy context
988 988 mctx is the context to be merged into the working copy
989 989
990 990 Return a tuple of counts (updated, merged, removed, unresolved) that
991 991 describes how many files were affected by the update.
992 992 """
993 993
994 994 updated, merged, removed = 0, 0, 0
995 995 ms = mergestate.clean(repo, wctx.p1().node(), mctx.node())
996 996 moves = []
997 997 for m, l in actions.items():
998 998 l.sort()
999 999
1000 1000 # 'cd' and 'dc' actions are treated like other merge conflicts
1001 1001 mergeactions = sorted(actions['cd'])
1002 1002 mergeactions.extend(sorted(actions['dc']))
1003 1003 mergeactions.extend(actions['m'])
1004 1004 for f, args, msg in mergeactions:
1005 1005 f1, f2, fa, move, anc = args
1006 1006 if f == '.hgsubstate': # merged internally
1007 1007 continue
1008 1008 if f1 is None:
1009 1009 fcl = filemerge.absentfilectx(wctx, fa)
1010 1010 else:
1011 1011 repo.ui.debug(" preserving %s for resolve of %s\n" % (f1, f))
1012 1012 fcl = wctx[f1]
1013 1013 if f2 is None:
1014 1014 fco = filemerge.absentfilectx(mctx, fa)
1015 1015 else:
1016 1016 fco = mctx[f2]
1017 1017 actx = repo[anc]
1018 1018 if fa in actx:
1019 1019 fca = actx[fa]
1020 1020 else:
1021 1021 # TODO: move to absentfilectx
1022 1022 fca = repo.filectx(f1, fileid=nullrev)
1023 1023 ms.add(fcl, fco, fca, f)
1024 1024 if f1 != f and move:
1025 1025 moves.append(f1)
1026 1026
1027 1027 audit = repo.wvfs.audit
1028 1028 _updating = _('updating')
1029 1029 _files = _('files')
1030 1030 progress = repo.ui.progress
1031 1031
1032 1032 # remove renamed files after safely stored
1033 1033 for f in moves:
1034 1034 if os.path.lexists(repo.wjoin(f)):
1035 1035 repo.ui.debug("removing %s\n" % f)
1036 1036 audit(f)
1037 1037 util.unlinkpath(repo.wjoin(f))
1038 1038
1039 1039 numupdates = sum(len(l) for m, l in actions.items() if m != 'k')
1040 1040
1041 1041 if [a for a in actions['r'] if a[0] == '.hgsubstate']:
1042 1042 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
1043 1043
1044 1044 # remove in parallel (must come first)
1045 1045 z = 0
1046 1046 prog = worker.worker(repo.ui, 0.001, batchremove, (repo,), actions['r'])
1047 1047 for i, item in prog:
1048 1048 z += i
1049 1049 progress(_updating, z, item=item, total=numupdates, unit=_files)
1050 1050 removed = len(actions['r'])
1051 1051
1052 1052 # get in parallel
1053 1053 prog = worker.worker(repo.ui, 0.001, batchget, (repo, mctx), actions['g'])
1054 1054 for i, item in prog:
1055 1055 z += i
1056 1056 progress(_updating, z, item=item, total=numupdates, unit=_files)
1057 1057 updated = len(actions['g'])
1058 1058
1059 1059 if [a for a in actions['g'] if a[0] == '.hgsubstate']:
1060 1060 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
1061 1061
1062 1062 # forget (manifest only, just log it) (must come first)
1063 1063 for f, args, msg in actions['f']:
1064 1064 repo.ui.debug(" %s: %s -> f\n" % (f, msg))
1065 1065 z += 1
1066 1066 progress(_updating, z, item=f, total=numupdates, unit=_files)
1067 1067
1068 1068 # re-add (manifest only, just log it)
1069 1069 for f, args, msg in actions['a']:
1070 1070 repo.ui.debug(" %s: %s -> a\n" % (f, msg))
1071 1071 z += 1
1072 1072 progress(_updating, z, item=f, total=numupdates, unit=_files)
1073 1073
1074 1074 # re-add/mark as modified (manifest only, just log it)
1075 1075 for f, args, msg in actions['am']:
1076 1076 repo.ui.debug(" %s: %s -> am\n" % (f, msg))
1077 1077 z += 1
1078 1078 progress(_updating, z, item=f, total=numupdates, unit=_files)
1079 1079
1080 1080 # keep (noop, just log it)
1081 1081 for f, args, msg in actions['k']:
1082 1082 repo.ui.debug(" %s: %s -> k\n" % (f, msg))
1083 1083 # no progress
1084 1084
1085 1085 # directory rename, move local
1086 1086 for f, args, msg in actions['dm']:
1087 1087 repo.ui.debug(" %s: %s -> dm\n" % (f, msg))
1088 1088 z += 1
1089 1089 progress(_updating, z, item=f, total=numupdates, unit=_files)
1090 1090 f0, flags = args
1091 1091 repo.ui.note(_("moving %s to %s\n") % (f0, f))
1092 1092 audit(f)
1093 1093 repo.wwrite(f, wctx.filectx(f0).data(), flags)
1094 1094 util.unlinkpath(repo.wjoin(f0))
1095 1095 updated += 1
1096 1096
1097 1097 # local directory rename, get
1098 1098 for f, args, msg in actions['dg']:
1099 1099 repo.ui.debug(" %s: %s -> dg\n" % (f, msg))
1100 1100 z += 1
1101 1101 progress(_updating, z, item=f, total=numupdates, unit=_files)
1102 1102 f0, flags = args
1103 1103 repo.ui.note(_("getting %s to %s\n") % (f0, f))
1104 1104 repo.wwrite(f, mctx.filectx(f0).data(), flags)
1105 1105 updated += 1
1106 1106
1107 1107 # exec
1108 1108 for f, args, msg in actions['e']:
1109 1109 repo.ui.debug(" %s: %s -> e\n" % (f, msg))
1110 1110 z += 1
1111 1111 progress(_updating, z, item=f, total=numupdates, unit=_files)
1112 1112 flags, = args
1113 1113 audit(f)
1114 1114 util.setflags(repo.wjoin(f), 'l' in flags, 'x' in flags)
1115 1115 updated += 1
1116 1116
1117 1117 # the ordering is important here -- ms.mergedriver will raise if the merge
1118 1118 # driver has changed, and we want to be able to bypass it when overwrite is
1119 1119 # True
1120 1120 usemergedriver = not overwrite and mergeactions and ms.mergedriver
1121 1121
1122 1122 if usemergedriver:
1123 1123 ms.commit()
1124 1124 proceed = driverpreprocess(repo, ms, wctx, labels=labels)
1125 1125 # the driver might leave some files unresolved
1126 1126 unresolvedf = set(ms.unresolved())
1127 1127 if not proceed:
1128 1128 # XXX setting unresolved to at least 1 is a hack to make sure we
1129 1129 # error out
1130 1130 return updated, merged, removed, max(len(unresolvedf), 1)
1131 1131 newactions = []
1132 1132 for f, args, msg in mergeactions:
1133 1133 if f in unresolvedf:
1134 1134 newactions.append((f, args, msg))
1135 1135 mergeactions = newactions
1136 1136
1137 1137 # premerge
1138 1138 tocomplete = []
1139 1139 for f, args, msg in mergeactions:
1140 1140 repo.ui.debug(" %s: %s -> m (premerge)\n" % (f, msg))
1141 1141 z += 1
1142 1142 progress(_updating, z, item=f, total=numupdates, unit=_files)
1143 1143 if f == '.hgsubstate': # subrepo states need updating
1144 1144 subrepo.submerge(repo, wctx, mctx, wctx.ancestor(mctx),
1145 1145 overwrite)
1146 1146 continue
1147 1147 audit(f)
1148 1148 complete, r = ms.preresolve(f, wctx, labels=labels)
1149 1149 if not complete:
1150 1150 numupdates += 1
1151 1151 tocomplete.append((f, args, msg))
1152 1152
1153 1153 # merge
1154 1154 for f, args, msg in tocomplete:
1155 1155 repo.ui.debug(" %s: %s -> m (merge)\n" % (f, msg))
1156 1156 z += 1
1157 1157 progress(_updating, z, item=f, total=numupdates, unit=_files)
1158 1158 ms.resolve(f, wctx, labels=labels)
1159 1159
1160 1160 ms.commit()
1161 1161
1162 1162 unresolved = ms.unresolvedcount()
1163 1163
1164 1164 if usemergedriver and not unresolved and ms.mdstate() != 's':
1165 1165 if not driverconclude(repo, ms, wctx, labels=labels):
1166 1166 # XXX setting unresolved to at least 1 is a hack to make sure we
1167 1167 # error out
1168 1168 unresolved = max(unresolved, 1)
1169 1169
1170 1170 ms.commit()
1171 1171
1172 1172 msupdated, msmerged, msremoved = ms.counts()
1173 1173 updated += msupdated
1174 1174 merged += msmerged
1175 1175 removed += msremoved
1176 1176
1177 1177 extraactions = ms.actions()
1178 1178 for k, acts in extraactions.iteritems():
1179 1179 actions[k].extend(acts)
1180 1180
1181 1181 progress(_updating, None, total=numupdates, unit=_files)
1182 1182
1183 1183 return updated, merged, removed, unresolved
1184 1184
1185 1185 def recordupdates(repo, actions, branchmerge):
1186 1186 "record merge actions to the dirstate"
1187 1187 # remove (must come first)
1188 1188 for f, args, msg in actions.get('r', []):
1189 1189 if branchmerge:
1190 1190 repo.dirstate.remove(f)
1191 1191 else:
1192 1192 repo.dirstate.drop(f)
1193 1193
1194 1194 # forget (must come first)
1195 1195 for f, args, msg in actions.get('f', []):
1196 1196 repo.dirstate.drop(f)
1197 1197
1198 1198 # re-add
1199 1199 for f, args, msg in actions.get('a', []):
1200 1200 repo.dirstate.add(f)
1201 1201
1202 1202 # re-add/mark as modified
1203 1203 for f, args, msg in actions.get('am', []):
1204 1204 if branchmerge:
1205 1205 repo.dirstate.normallookup(f)
1206 1206 else:
1207 1207 repo.dirstate.add(f)
1208 1208
1209 1209 # exec change
1210 1210 for f, args, msg in actions.get('e', []):
1211 1211 repo.dirstate.normallookup(f)
1212 1212
1213 1213 # keep
1214 1214 for f, args, msg in actions.get('k', []):
1215 1215 pass
1216 1216
1217 1217 # get
1218 1218 for f, args, msg in actions.get('g', []):
1219 1219 if branchmerge:
1220 1220 repo.dirstate.otherparent(f)
1221 1221 else:
1222 1222 repo.dirstate.normal(f)
1223 1223
1224 1224 # merge
1225 1225 for f, args, msg in actions.get('m', []):
1226 1226 f1, f2, fa, move, anc = args
1227 1227 if branchmerge:
1228 1228 # We've done a branch merge, mark this file as merged
1229 1229 # so that we properly record the merger later
1230 1230 repo.dirstate.merge(f)
1231 1231 if f1 != f2: # copy/rename
1232 1232 if move:
1233 1233 repo.dirstate.remove(f1)
1234 1234 if f1 != f:
1235 1235 repo.dirstate.copy(f1, f)
1236 1236 else:
1237 1237 repo.dirstate.copy(f2, f)
1238 1238 else:
1239 1239 # We've update-merged a locally modified file, so
1240 1240 # we set the dirstate to emulate a normal checkout
1241 1241 # of that file some time in the past. Thus our
1242 1242 # merge will appear as a normal local file
1243 1243 # modification.
1244 1244 if f2 == f: # file not locally copied/moved
1245 1245 repo.dirstate.normallookup(f)
1246 1246 if move:
1247 1247 repo.dirstate.drop(f1)
1248 1248
1249 1249 # directory rename, move local
1250 1250 for f, args, msg in actions.get('dm', []):
1251 1251 f0, flag = args
1252 1252 if branchmerge:
1253 1253 repo.dirstate.add(f)
1254 1254 repo.dirstate.remove(f0)
1255 1255 repo.dirstate.copy(f0, f)
1256 1256 else:
1257 1257 repo.dirstate.normal(f)
1258 1258 repo.dirstate.drop(f0)
1259 1259
1260 1260 # directory rename, get
1261 1261 for f, args, msg in actions.get('dg', []):
1262 1262 f0, flag = args
1263 1263 if branchmerge:
1264 1264 repo.dirstate.add(f)
1265 1265 repo.dirstate.copy(f0, f)
1266 1266 else:
1267 1267 repo.dirstate.normal(f)
1268 1268
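calculateupdates returns a per-file dict, while applyupdates and recordupdates walk a dict-of-lists keyed by action code; update() converts between the two shapes. A sketch of that conversion (hedged: `bytype` is an illustrative name):

```python
def bytype(actionbyfile):
    """Convert {file: (code, args, msg)} into the dict-of-lists shape
    {code: [(file, args, msg), ...]} that the apply/record loops walk."""
    actions = {m: [] for m in 'a am f g cd dc r dm dg m e k'.split()}
    for f, (m, args, msg) in actionbyfile.items():
        # setdefault tolerates codes outside the pre-seeded set
        actions.setdefault(m, []).append((f, args, msg))
    return actions

print(bytype({'x': ('g', ('',), 'remote is newer')})['g'])
```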
1269 def update(repo, node, branchmerge, force, partial, ancestor=None,
1270 mergeancestor=False, labels=None):
1269 def update(repo, node, branchmerge, force, ancestor=None,
1270 mergeancestor=False, labels=None, matcher=None):
1271 1271 """
1272 1272 Perform a merge between the working directory and the given node
1273 1273
1274 1274 node = the node to update to, or None if unspecified
1275 1275 branchmerge = whether to merge between branches
1276 1276 force = whether to force branch merging or file overwriting
1277 partial = a function to filter file lists (dirstate not updated)
1277 matcher = a matcher to filter file lists (dirstate not updated)
1278 1278 mergeancestor = whether it is merging with an ancestor. If true,
1279 1279 we should accept the incoming changes for any prompts that occur.
1280 1280 If false, merging with an ancestor (fast-forward) is only allowed
1281 1281 between different named branches. This flag is used by rebase extension
1282 1282 as a temporary fix and should be avoided in general.
1283 1283
1284 1284 The table below shows all the behaviors of the update command
1285 1285 given the -c and -C or no options, whether the working directory
1286 1286 is dirty, whether a revision is specified, and the relationship of
1287 1287 the parent rev to the target rev (linear, on the same named
1288 1288 branch, or on another named branch).
1289 1289
1290 1290 This logic is tested by test-update-branches.t.
1291 1291
1292 1292 -c -C dirty rev | linear same cross
1293 1293 n n n n | ok (1) x
1294 1294 n n n y | ok ok ok
1295 1295 n n y n | merge (2) (2)
1296 1296 n n y y | merge (3) (3)
1297 1297 n y * * | discard discard discard
1298 1298 y n y * | (4) (4) (4)
1299 1299 y n n * | ok ok ok
1300 1300 y y * * | (5) (5) (5)
1301 1301
1302 1302 x = can't happen
1303 1303 * = don't-care
1304 1304 1 = abort: not a linear update (merge or update --check to force update)
1305 1305 2 = abort: uncommitted changes (commit and merge, or update --clean to
1306 1306 discard changes)
1307 1307 3 = abort: uncommitted changes (commit or update --clean to discard changes)
1308 1308 4 = abort: uncommitted changes (checked in commands.py)
1309 1309 5 = incompatible options (checked in commands.py)
1310 1310
1311 1311 Return the same tuple as applyupdates().
1312 1312 """
1313 1313
1314 1314 onode = node
1315 1315 wlock = repo.wlock()
1316 # If we're doing a partial update, we need to skip updating
1317 # the dirstate, so make a note of any partial-ness to the
1318 # update here.
1319 if matcher is None or matcher.always():
1320 partial = False
1321 else:
1322 partial = True
1316 1323 try:
1317 1324 wc = repo[None]
1318 1325 pl = wc.parents()
1319 1326 p1 = pl[0]
1320 1327 pas = [None]
1321 1328 if ancestor is not None:
1322 1329 pas = [repo[ancestor]]
1323 1330
1324 1331 if node is None:
1325 1332 if (repo.ui.configbool('devel', 'all-warnings')
1326 1333 or repo.ui.configbool('devel', 'oldapi')):
1327 1334 repo.ui.develwarn('update with no target')
1328 1335 rev, _mark, _act = destutil.destupdate(repo)
1329 1336 node = repo[rev].node()
1330 1337
1331 1338 overwrite = force and not branchmerge
1332 1339
1333 1340 p2 = repo[node]
1334 1341 if pas[0] is None:
1335 1342 if repo.ui.configlist('merge', 'preferancestor', ['*']) == ['*']:
1336 1343 cahs = repo.changelog.commonancestorsheads(p1.node(), p2.node())
1337 1344 pas = [repo[anc] for anc in (sorted(cahs) or [nullid])]
1338 1345 else:
1339 1346 pas = [p1.ancestor(p2, warn=branchmerge)]
1340 1347
1341 1348 fp1, fp2, xp1, xp2 = p1.node(), p2.node(), str(p1), str(p2)
1342 1349
1343 1350 ### check phase
1344 1351 if not overwrite:
1345 1352 if len(pl) > 1:
1346 1353 raise error.Abort(_("outstanding uncommitted merge"))
1347 1354 ms = mergestate.read(repo)
1348 1355 if list(ms.unresolved()):
1349 1356 raise error.Abort(_("outstanding merge conflicts"))
1350 1357 if branchmerge:
1351 1358 if pas == [p2]:
1352 1359 raise error.Abort(_("merging with a working directory ancestor"
1353 1360 " has no effect"))
1354 1361 elif pas == [p1]:
1355 1362 if not mergeancestor and p1.branch() == p2.branch():
1356 1363 raise error.Abort(_("nothing to merge"),
1357 1364 hint=_("use 'hg update' "
1358 1365 "or check 'hg heads'"))
1359 1366 if not force and (wc.files() or wc.deleted()):
1360 1367 raise error.Abort(_("uncommitted changes"),
1361 1368 hint=_("use 'hg status' to list changes"))
1362 1369 for s in sorted(wc.substate):
1363 1370 wc.sub(s).bailifchanged()
1364 1371
1365 1372 elif not overwrite:
1366 1373 if p1 == p2: # no-op update
1367 1374 # call the hooks and exit early
1368 1375 repo.hook('preupdate', throw=True, parent1=xp2, parent2='')
1369 1376 repo.hook('update', parent1=xp2, parent2='', error=0)
1370 1377 return 0, 0, 0, 0
1371 1378
1372 1379 if pas not in ([p1], [p2]): # nonlinear
1373 1380 dirty = wc.dirty(missing=True)
1374 1381 if dirty or onode is None:
1375 1382 # Branching is a bit strange to ensure we do the minimal
1376 1383 # amount of call to obsolete.background.
1377 1384 foreground = obsolete.foreground(repo, [p1.node()])
1378 1385 # note: the <node> variable contains a random identifier
1379 1386 if repo[node].node() in foreground:
1380 1387 pas = [p1] # allow updating to successors
1381 1388 elif dirty:
1382 1389 msg = _("uncommitted changes")
1383 1390 if onode is None:
1384 1391 hint = _("commit and merge, or update --clean to"
1385 1392 " discard changes")
1386 1393 else:
1387 1394 hint = _("commit or update --clean to discard"
1388 1395 " changes")
1389 1396 raise error.Abort(msg, hint=hint)
1390 1397 else: # node is none
1391 1398 msg = _("not a linear update")
1392 1399 hint = _("merge or update --check to force update")
1393 1400 raise error.Abort(msg, hint=hint)
1394 1401 else:
1395 1402 # Allow jumping branches if clean and specific rev given
1396 1403 pas = [p1]
1397 1404
1398 1405 # deprecated config: merge.followcopies
1399 1406 followcopies = False
1400 1407 if overwrite:
1401 1408 pas = [wc]
1402 1409 elif pas == [p2]: # backwards
1403 1410 pas = [wc.p1()]
1404 1411 elif not branchmerge and not wc.dirty(missing=True):
1405 1412 pass
1406 1413 elif pas[0] and repo.ui.configbool('merge', 'followcopies', True):
1407 1414 followcopies = True
1408 1415
1409 1416 ### calculate phase
1417 if matcher is None or matcher.always():
1418 partial = False
1419 else:
1420 partial = matcher.matchfn
1410 1421 actionbyfile, diverge, renamedelete = calculateupdates(
1411 1422 repo, wc, p2, pas, branchmerge, force, partial, mergeancestor,
1412 1423 followcopies)
1413 1424 # Convert to dictionary-of-lists format
1414 1425 actions = dict((m, []) for m in 'a am f g cd dc r dm dg m e k'.split())
1415 1426 for f, (m, args, msg) in actionbyfile.iteritems():
1416 1427 if m not in actions:
1417 1428 actions[m] = []
1418 1429 actions[m].append((f, args, msg))
1419 1430
1420 1431 if not util.checkcase(repo.path):
1421 1432 # check collision between files only in p2 for clean update
1422 1433 if (not branchmerge and
1423 1434 (force or not wc.dirty(missing=True, branch=False))):
1424 1435 _checkcollision(repo, p2.manifest(), None)
1425 1436 else:
1426 1437 _checkcollision(repo, wc.manifest(), actions)
1427 1438
1428 1439 # Prompt and create actions. Most of this is in the resolve phase
1429 1440 # already, but we can't handle .hgsubstate in filemerge or
1430 1441 # subrepo.submerge yet so we have to keep prompting for it.
1431 1442 for f, args, msg in sorted(actions['cd']):
1432 1443 if f != '.hgsubstate':
1433 1444 continue
1434 1445 if repo.ui.promptchoice(
1435 1446 _("local changed %s which remote deleted\n"
1436 1447 "use (c)hanged version or (d)elete?"
1437 1448 "$$ &Changed $$ &Delete") % f, 0):
1438 1449 actions['r'].append((f, None, "prompt delete"))
1439 1450 elif f in p1:
1440 1451 actions['am'].append((f, None, "prompt keep"))
1441 1452 else:
1442 1453 actions['a'].append((f, None, "prompt keep"))
1443 1454
1444 1455 for f, args, msg in sorted(actions['dc']):
1445 1456 if f != '.hgsubstate':
1446 1457 continue
1447 1458 f1, f2, fa, move, anc = args
1448 1459 flags = p2[f2].flags()
1449 1460 if repo.ui.promptchoice(
1450 1461 _("remote changed %s which local deleted\n"
1451 1462 "use (c)hanged version or leave (d)eleted?"
1452 1463 "$$ &Changed $$ &Deleted") % f, 0) == 0:
1453 1464 actions['g'].append((f, (flags,), "prompt recreating"))
1454 1465
1455 1466 # divergent renames
1456 1467 for f, fl in sorted(diverge.iteritems()):
1457 1468 repo.ui.warn(_("note: possible conflict - %s was renamed "
1458 1469 "multiple times to:\n") % f)
1459 1470 for nf in fl:
1460 1471 repo.ui.warn(" %s\n" % nf)
1461 1472
1462 1473 # rename and delete
1463 1474 for f, fl in sorted(renamedelete.iteritems()):
1464 1475 repo.ui.warn(_("note: possible conflict - %s was deleted "
1465 1476 "and renamed to:\n") % f)
1466 1477 for nf in fl:
1467 1478 repo.ui.warn(" %s\n" % nf)
1468 1479
1469 1480 ### apply phase
1470 1481 if not branchmerge: # just jump to the new rev
1471 1482 fp1, fp2, xp1, xp2 = fp2, nullid, xp2, ''
1472 1483 if not partial:
1473 1484 repo.hook('preupdate', throw=True, parent1=xp1, parent2=xp2)
1474 1485 # note that we're in the middle of an update
1475 1486 repo.vfs.write('updatestate', p2.hex())
1476 1487
1477 1488 stats = applyupdates(repo, actions, wc, p2, overwrite, labels=labels)
1478 1489
1479 1490 if not partial:
1480 1491 repo.dirstate.beginparentchange()
1481 1492 repo.setparents(fp1, fp2)
1482 1493 recordupdates(repo, actions, branchmerge)
1483 1494 # update completed, clear state
1484 1495 util.unlink(repo.join('updatestate'))
1485 1496
1486 1497 if not branchmerge:
1487 1498 repo.dirstate.setbranch(p2.branch())
1488 1499 repo.dirstate.endparentchange()
1489 1500 finally:
1490 1501 wlock.release()
1491 1502
1492 1503 if not partial:
1493 1504 repo.hook('update', parent1=xp1, parent2=xp2, error=stats[3])
1494 1505 return stats
1495 1506
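The heart of this changeset is that update() now takes a matcher rather than a bare predicate, and derives partial-ness from `matcher.always()`. A sketch of that translation with a stand-in matcher class (hypothetical -- Mercurial's real matchers come from the match module):

```python
class AlwaysMatcher:
    """Stand-in for a matcher whose always() is True, i.e. it matches
    every file in the repository."""
    def always(self):
        return True
    def matchfn(self, f):
        return True

def partialness(matcher):
    # Mirrors the logic added in update(): no matcher, or an
    # always-matcher, means a full update (the dirstate is written);
    # anything narrower makes the update partial.
    if matcher is None or matcher.always():
        return False
    return True

print(partialness(None), partialness(AlwaysMatcher()))
```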
1496 1507 def graft(repo, ctx, pctx, labels, keepparent=False):
1497 1508 """Do a graft-like merge.
1498 1509
1499 1510 This is a merge where the merge ancestor is chosen such that one
1500 1511 or more changesets are grafted onto the current changeset. In
1501 1512 addition to the merge, this fixes up the dirstate to include only
1502 1513 a single parent (if keepparent is False) and tries to duplicate any
1503 1514 renames/copies appropriately.
1504 1515
1505 1516 ctx - changeset to rebase
1506 1517 pctx - merge base, usually ctx.p1()
1507 1518 labels - merge labels eg ['local', 'graft']
1508 1519 keepparent - keep second parent if any
1509 1520
1510 1521 """
1511 1522 # If we're grafting a descendant onto an ancestor, be sure to pass
1512 1523 # mergeancestor=True to update. This does two things: 1) allows the merge if
1513 1524 # the destination is the same as the parent of the ctx (so we can use graft
1514 1525 # to copy commits), and 2) informs update that the incoming changes are
1515 1526 # newer than the destination so it doesn't prompt about "remote changed foo
1516 1527 # which local deleted".
1517 1528 mergeancestor = repo.changelog.isancestor(repo['.'].node(), ctx.node())
1518 1529
1519 stats = update(repo, ctx.node(), True, True, False, pctx.node(),
1530 stats = update(repo, ctx.node(), True, True, pctx.node(),
1520 1531 mergeancestor=mergeancestor, labels=labels)
1521 1532
1522 1533 pother = nullid
1523 1534 parents = ctx.parents()
1524 1535 if keepparent and len(parents) == 2 and pctx in parents:
1525 1536 parents.remove(pctx)
1526 1537 pother = parents[0].node()
1527 1538
1528 1539 repo.dirstate.beginparentchange()
1529 1540 repo.setparents(repo['.'].node(), pother)
1530 1541 repo.dirstate.write(repo.currenttransaction())
1531 1542 # fix up dirstate for copies and renames
1532 1543 copies.duplicatecopies(repo, ctx.rev(), pctx.rev())
1533 1544 repo.dirstate.endparentchange()
1534 1545 return stats
@@ -1,85 +1,84 @@
1 1 import os
2 2 from mercurial import hg, ui, merge
3 3
4 4 u = ui.ui()
5 5
6 6 repo = hg.repository(u, 'test1', create=1)
7 7 os.chdir('test1')
8 8
9 9 def commit(text, time):
10 10 repo.commit(text=text, date="%d 0" % time)
11 11
12 12 def addcommit(name, time):
13 13 f = open(name, 'w')
14 14 f.write('%s\n' % name)
15 15 f.close()
16 16 repo[None].add([name])
17 17 commit(name, time)
18 18
19 19 def update(rev):
20 merge.update(repo, rev, False, True, False)
20 merge.update(repo, rev, False, True)
21 21
22 22 def merge_(rev):
23 merge.update(repo, rev, True, False, False)
23 merge.update(repo, rev, True, False)
24 24
25 25 if __name__ == '__main__':
26 26 addcommit("A", 0)
27 27 addcommit("B", 1)
28 28
29 29 update(0)
30 30 addcommit("C", 2)
31 31
32 32 merge_(1)
33 33 commit("D", 3)
34 34
35 35 update(2)
36 36 addcommit("E", 4)
37 37 addcommit("F", 5)
38 38
39 39 update(3)
40 40 addcommit("G", 6)
41 41
42 42 merge_(5)
43 43 commit("H", 7)
44 44
45 45 update(5)
46 46 addcommit("I", 8)
47 47
48 48 # Ancestors
49 49 print 'Ancestors of 5'
50 50 for r in repo.changelog.ancestors([5]):
51 51 print r,
52 52
53 53 print '\nAncestors of 6 and 5'
54 54 for r in repo.changelog.ancestors([6, 5]):
55 55 print r,
56 56
57 57 print '\nAncestors of 5 and 4'
58 58 for r in repo.changelog.ancestors([5, 4]):
59 59 print r,
60 60
61 61 print '\nAncestors of 7, stop at 6'
62 62 for r in repo.changelog.ancestors([7], 6):
63 63 print r,
64 64
65 65 print '\nAncestors of 7, including revs'
66 66 for r in repo.changelog.ancestors([7], inclusive=True):
67 67 print r,
68 68
69 69 print '\nAncestors of 7, 5 and 3, including revs'
70 70 for r in repo.changelog.ancestors([7, 5, 3], inclusive=True):
71 71 print r,
72 72
73 73 # Descendants
74 74 print '\n\nDescendants of 5'
75 75 for r in repo.changelog.descendants([5]):
76 76 print r,
77 77
78 78 print '\nDescendants of 5 and 3'
79 79 for r in repo.changelog.descendants([5, 3]):
80 80 print r,
81 81
82 82 print '\nDescendants of 5 and 4'
83 83 for r in repo.changelog.descendants([5, 4]):
84 84 print r,
85