graftcopies: remove `skip` and `repo` arguments...
Martin von Zweigbergk -
r44551:833210fb default
@@ -1,870 +1,869 b''
1 1 # fix - rewrite file content in changesets and working copy
2 2 #
3 3 # Copyright 2018 Google LLC.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """rewrite file content in changesets or working copy (EXPERIMENTAL)
8 8
9 9 Provides a command that runs configured tools on the contents of modified files,
10 10 writing back any fixes to the working copy or replacing changesets.
11 11
12 12 Here is an example configuration that causes :hg:`fix` to apply automatic
13 13 formatting fixes to modified lines in C++ code::
14 14
15 15 [fix]
16 16 clang-format:command=clang-format --assume-filename={rootpath}
17 17 clang-format:linerange=--lines={first}:{last}
18 18 clang-format:pattern=set:**.cpp or **.hpp
19 19
20 20 The :command suboption forms the first part of the shell command that will be
21 21 used to fix a file. The content of the file is passed on standard input, and the
22 22 fixed file content is expected on standard output. Any output on standard error
23 23 will be displayed as a warning. If the exit status is not zero, the file will
24 24 not be affected. A placeholder warning is displayed if there is a non-zero exit
25 25 status but no standard error output. Some values may be substituted into the
26 26 command::
27 27
28 28 {rootpath} The path of the file being fixed, relative to the repo root
29 29 {basename} The name of the file being fixed, without the directory path
30 30
31 31 If the :linerange suboption is set, the tool will only be run if there are
32 32 changed lines in a file. The value of this suboption is appended to the shell
33 33 command once for every range of changed lines in the file. Some values may be
34 34 substituted into the command::
35 35
36 36 {first} The 1-based line number of the first line in the modified range
37 37 {last} The 1-based line number of the last line in the modified range
38 38
39 39 Deleted sections of a file will be ignored by :linerange, because there is no
40 40 corresponding line range in the version being fixed.
41 41
42 42 By default, tools that set :linerange will only be executed if there is at least
43 43 one changed line range. This is meant to prevent accidents like running a code
44 44 formatter in such a way that it unexpectedly reformats the whole file. If such a
45 45 tool needs to operate on unchanged files, it should set the :skipclean suboption
46 46 to false.
47 47
48 48 The :pattern suboption determines which files will be passed through each
49 49 configured tool. See :hg:`help patterns` for possible values. However, all
50 50 patterns are relative to the repo root, even if that text says they are relative
51 51 to the current working directory. If there are file arguments to :hg:`fix`, the
52 52 intersection of these patterns is used.
53 53
54 54 There is also a configurable limit for the maximum size of file that will be
55 55 processed by :hg:`fix`::
56 56
57 57 [fix]
58 58 maxfilesize = 2MB
59 59
60 60 Normally, execution of configured tools will continue after a failure (indicated
61 61 by a non-zero exit status). It can also be configured to abort after the first
62 62 such failure, so that no files will be affected if any tool fails. This abort
63 63 will also cause :hg:`fix` to exit with a non-zero status::
64 64
65 65 [fix]
66 66 failure = abort
67 67
68 68 When multiple tools are configured to affect a file, they execute in an order
69 69 defined by the :priority suboption. The priority suboption has a default value
70 70 of zero for each tool. Tools are executed in order of descending priority. The
71 71 execution order of tools with equal priority is unspecified. For example, you
72 72 could use the 'sort' and 'head' utilities to keep only the 10 smallest numbers
73 73 in a text file by ensuring that 'sort' runs before 'head'::
74 74
75 75 [fix]
76 76 sort:command = sort -n
77 77 head:command = head -n 10
78 78 sort:pattern = numbers.txt
79 79 head:pattern = numbers.txt
80 80 sort:priority = 2
81 81 head:priority = 1
82 82
83 83 To account for changes made by each tool, the line numbers used for incremental
84 84 formatting are recomputed before executing the next tool. So, each tool may see
85 85 different values for the arguments added by the :linerange suboption.
86 86
87 87 Each fixer tool is allowed to return some metadata in addition to the fixed file
88 88 content. The metadata must be placed before the file content on stdout,
89 89 separated from the file content by a zero byte. The metadata is parsed as a JSON
90 90 value (so, it should be UTF-8 encoded and contain no zero bytes). A fixer tool
91 91 is expected to produce this metadata encoding if and only if the :metadata
92 92 suboption is true::
93 93
94 94 [fix]
95 95 tool:command = tool --prepend-json-metadata
96 96 tool:metadata = true
97 97
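As a sketch of the metadata protocol described above, a standalone fixer tool (hypothetical, not part of this extension) could emit the JSON value, a zero byte, and then the fixed content like this; the "changed_lines" key is purely illustrative:

```python
import json
import sys

def run_fixer(data):
    """Toy fix: strip trailing whitespace from every line, and report
    how many lines changed as the JSON metadata value."""
    lines = data.split(b'\n')
    fixed_lines = [line.rstrip() for line in lines]
    metadata = {
        'changed_lines': sum(1 for a, b in zip(lines, fixed_lines) if a != b),
    }
    # JSON metadata (UTF-8, no zero bytes), then a zero byte separator,
    # then the fixed file content, all on stdout.
    return json.dumps(metadata).encode('utf-8') + b'\0' + b'\n'.join(fixed_lines)

if __name__ == '__main__':
    sys.stdout.buffer.write(run_fixer(sys.stdin.buffer.read()))
```

Such a tool would be configured with both `tool:command` and `tool:metadata = true`, since the zero-byte framing is only parsed when the :metadata suboption is set.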
98 98 The metadata values are passed to hooks, which can be used to print summaries or
99 99 perform other post-fixing work. The supported hooks are::
100 100
101 101 "postfixfile"
102 102 Run once for each file in each revision where any fixer tools made changes
103 103 to the file content. Provides "$HG_REV" and "$HG_PATH" to identify the file,
104 104 and "$HG_METADATA" with a map of fixer names to metadata values from fixer
105 105 tools that affected the file. Fixer tools that didn't affect the file have a
106 106     value of None. Only fixer tools that executed are present in the metadata.
107 107
108 108 "postfix"
109 109 Run once after all files and revisions have been handled. Provides
110 110 "$HG_REPLACEMENTS" with information about what revisions were created and
111 111 made obsolete. Provides a boolean "$HG_WDIRWRITTEN" to indicate whether any
112 112 files in the working copy were updated. Provides a list "$HG_METADATA"
113 113 mapping fixer tool names to lists of metadata values returned from
114 114 executions that modified a file. This aggregates the same metadata
115 115 previously passed to the "postfixfile" hook.
116 116
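These hook variables can be consumed from shell hooks via the `$HG_*` environment, or from in-process Python hooks, which receive the same values as keyword arguments with the `HG_` prefix dropped and lowercased. A minimal sketch of a "postfixfile" hook (the `myhooks` module and `printfixes` name are hypothetical):

```python
# Hypothetical hook module; it would be enabled with e.g.
#   [hooks]
#   postfixfile = python:myhooks.printfixes
def printfixes(ui, repo, hooktype, rev=None, path=None, metadata=None, **kwargs):
    # rev is an integer revision number (assumed None here for the working
    # directory); path is a bytes path relative to the repository root.
    revname = b'wdir' if rev is None else b'%d' % rev
    ui.write(b'fixed %s in revision %s\n' % (path, revname))
    # metadata maps fixer names to the JSON values returned by each tool,
    # or None for tools that ran but did not affect the file.
    for fixername, value in sorted((metadata or {}).items()):
        if value is not None:
            ui.write(b'  %s reported metadata\n' % fixername)
```

This is a sketch under the stated assumptions, not a definitive interface; the authoritative list of provided variables is the hook documentation above.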
117 117 Fixer tools are run in the repository's root directory. This allows them to read
118 118 configuration files from the working copy, or even write to the working copy.
119 119 The working copy is not updated to match the revision being fixed. In fact,
120 120 several revisions may be fixed in parallel. Writes to the working copy are not
121 121 amended into the revision being fixed; fixer tools should always write fixed
122 122 file content back to stdout as documented above.
123 123 """
124 124
125 125 from __future__ import absolute_import
126 126
127 127 import collections
128 128 import itertools
129 129 import os
130 130 import re
131 131 import subprocess
132 132
133 133 from mercurial.i18n import _
134 134 from mercurial.node import nullrev
135 135 from mercurial.node import wdirrev
136 136
137 137 from mercurial.utils import procutil
138 138
139 139 from mercurial import (
140 140 cmdutil,
141 141 context,
142 142 copies,
143 143 error,
144 144 match as matchmod,
145 145 mdiff,
146 146 merge,
147 147 pycompat,
148 148 registrar,
149 149 rewriteutil,
150 150 scmutil,
151 151 util,
152 152 worker,
153 153 )
154 154
155 155 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
156 156 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
157 157 # specify the version(s) of Mercurial they are tested with, or
158 158 # leave the attribute unspecified.
159 159 testedwith = b'ships-with-hg-core'
160 160
161 161 cmdtable = {}
162 162 command = registrar.command(cmdtable)
163 163
164 164 configtable = {}
165 165 configitem = registrar.configitem(configtable)
166 166
167 167 # Register the suboptions allowed for each configured fixer, and default values.
168 168 FIXER_ATTRS = {
169 169 b'command': None,
170 170 b'linerange': None,
171 171 b'pattern': None,
172 172 b'priority': 0,
173 173 b'metadata': False,
174 174 b'skipclean': True,
175 175 b'enabled': True,
176 176 }
177 177
178 178 for key, default in FIXER_ATTRS.items():
179 179 configitem(b'fix', b'.*:%s$' % key, default=default, generic=True)
180 180
181 181 # A good default size allows most source code files to be fixed, but avoids
182 182 # letting fixer tools choke on huge inputs, which could be surprising to the
183 183 # user.
184 184 configitem(b'fix', b'maxfilesize', default=b'2MB')
185 185
186 186 # Allow fix commands to exit non-zero if an executed fixer tool exits non-zero.
187 187 # This helps users write shell scripts that stop when a fixer tool signals a
188 188 # problem.
189 189 configitem(b'fix', b'failure', default=b'continue')
190 190
191 191
192 192 def checktoolfailureaction(ui, message, hint=None):
193 193 """Abort with 'message' if fix.failure=abort"""
194 194 action = ui.config(b'fix', b'failure')
195 195 if action not in (b'continue', b'abort'):
196 196 raise error.Abort(
197 197 _(b'unknown fix.failure action: %s') % (action,),
198 198 hint=_(b'use "continue" or "abort"'),
199 199 )
200 200 if action == b'abort':
201 201 raise error.Abort(message, hint=hint)
202 202
203 203
204 204 allopt = (b'', b'all', False, _(b'fix all non-public non-obsolete revisions'))
205 205 baseopt = (
206 206 b'',
207 207 b'base',
208 208 [],
209 209 _(
210 210 b'revisions to diff against (overrides automatic '
211 211 b'selection, and applies to every revision being '
212 212 b'fixed)'
213 213 ),
214 214 _(b'REV'),
215 215 )
216 216 revopt = (b'r', b'rev', [], _(b'revisions to fix'), _(b'REV'))
217 217 wdiropt = (b'w', b'working-dir', False, _(b'fix the working directory'))
218 218 wholeopt = (b'', b'whole', False, _(b'always fix every line of a file'))
219 219 usage = _(b'[OPTION]... [FILE]...')
220 220
221 221
222 222 @command(
223 223 b'fix',
224 224 [allopt, baseopt, revopt, wdiropt, wholeopt],
225 225 usage,
226 226 helpcategory=command.CATEGORY_FILE_CONTENTS,
227 227 )
228 228 def fix(ui, repo, *pats, **opts):
229 229 """rewrite file content in changesets or working directory
230 230
231 231 Runs any configured tools to fix the content of files. Only affects files
232 232 with changes, unless file arguments are provided. Only affects changed lines
233 233 of files, unless the --whole flag is used. Some tools may always affect the
234 234 whole file regardless of --whole.
235 235
236 236 If revisions are specified with --rev, those revisions will be checked, and
237 237 they may be replaced with new revisions that have fixed file content. It is
238 238 desirable to specify all descendants of each specified revision, so that the
239 239 fixes propagate to the descendants. If all descendants are fixed at the same
240 240 time, no merging, rebasing, or evolution will be required.
241 241
242 242 If --working-dir is used, files with uncommitted changes in the working copy
243 243 will be fixed. If the checked-out revision is also fixed, the working
244 244 directory will update to the replacement revision.
245 245
246 246 When determining what lines of each file to fix at each revision, the whole
247 247 set of revisions being fixed is considered, so that fixes to earlier
248 248 revisions are not forgotten in later ones. The --base flag can be used to
249 249 override this default behavior, though it is not usually desirable to do so.
250 250 """
251 251 opts = pycompat.byteskwargs(opts)
252 252 cmdutil.check_at_most_one_arg(opts, b'all', b'rev')
253 253 if opts[b'all']:
254 254 opts[b'rev'] = [b'not public() and not obsolete()']
255 255 opts[b'working_dir'] = True
256 256 with repo.wlock(), repo.lock(), repo.transaction(b'fix'):
257 257 revstofix = getrevstofix(ui, repo, opts)
258 258 basectxs = getbasectxs(repo, opts, revstofix)
259 259 workqueue, numitems = getworkqueue(
260 260 ui, repo, pats, opts, revstofix, basectxs
261 261 )
262 262 fixers = getfixers(ui)
263 263
264 264 # There are no data dependencies between the workers fixing each file
265 265 # revision, so we can use all available parallelism.
266 266 def getfixes(items):
267 267 for rev, path in items:
268 268 ctx = repo[rev]
269 269 olddata = ctx[path].data()
270 270 metadata, newdata = fixfile(
271 271 ui, repo, opts, fixers, ctx, path, basectxs[rev]
272 272 )
273 273 # Don't waste memory/time passing unchanged content back, but
274 274 # produce one result per item either way.
275 275 yield (
276 276 rev,
277 277 path,
278 278 metadata,
279 279 newdata if newdata != olddata else None,
280 280 )
281 281
282 282 results = worker.worker(
283 283 ui, 1.0, getfixes, tuple(), workqueue, threadsafe=False
284 284 )
285 285
286 286 # We have to hold on to the data for each successor revision in memory
287 287 # until all its parents are committed. We ensure this by committing and
288 288 # freeing memory for the revisions in some topological order. This
289 289 # leaves a little bit of memory efficiency on the table, but also makes
290 290 # the tests deterministic. It might also be considered a feature since
291 291 # it makes the results more easily reproducible.
292 292 filedata = collections.defaultdict(dict)
293 293 aggregatemetadata = collections.defaultdict(list)
294 294 replacements = {}
295 295 wdirwritten = False
296 296 commitorder = sorted(revstofix, reverse=True)
297 297 with ui.makeprogress(
298 298 topic=_(b'fixing'), unit=_(b'files'), total=sum(numitems.values())
299 299 ) as progress:
300 300 for rev, path, filerevmetadata, newdata in results:
301 301 progress.increment(item=path)
302 302 for fixername, fixermetadata in filerevmetadata.items():
303 303 aggregatemetadata[fixername].append(fixermetadata)
304 304 if newdata is not None:
305 305 filedata[rev][path] = newdata
306 306 hookargs = {
307 307 b'rev': rev,
308 308 b'path': path,
309 309 b'metadata': filerevmetadata,
310 310 }
311 311 repo.hook(
312 312 b'postfixfile',
313 313 throw=False,
314 314 **pycompat.strkwargs(hookargs)
315 315 )
316 316 numitems[rev] -= 1
317 317 # Apply the fixes for this and any other revisions that are
318 318 # ready and sitting at the front of the queue. Using a loop here
319 319 # prevents the queue from being blocked by the first revision to
320 320 # be ready out of order.
321 321 while commitorder and not numitems[commitorder[-1]]:
322 322 rev = commitorder.pop()
323 323 ctx = repo[rev]
324 324 if rev == wdirrev:
325 325 writeworkingdir(repo, ctx, filedata[rev], replacements)
326 326 wdirwritten = bool(filedata[rev])
327 327 else:
328 328 replacerev(ui, repo, ctx, filedata[rev], replacements)
329 329 del filedata[rev]
330 330
331 331 cleanup(repo, replacements, wdirwritten)
332 332 hookargs = {
333 333 b'replacements': replacements,
334 334 b'wdirwritten': wdirwritten,
335 335 b'metadata': aggregatemetadata,
336 336 }
337 337 repo.hook(b'postfix', throw=True, **pycompat.strkwargs(hookargs))
338 338
339 339
340 340 def cleanup(repo, replacements, wdirwritten):
341 341 """Calls scmutil.cleanupnodes() with the given replacements.
342 342
343 343 "replacements" is a dict from nodeid to nodeid, with one key and one value
344 344 for every revision that was affected by fixing. This is slightly different
345 345 from cleanupnodes().
346 346
347 347 "wdirwritten" is a bool which tells whether the working copy was affected by
348 348 fixing, since it has no entry in "replacements".
349 349
350 350 Useful as a hook point for extending "hg fix" with output summarizing the
351 351 effects of the command, though we choose not to output anything here.
352 352 """
353 353 replacements = {
354 354 prec: [succ] for prec, succ in pycompat.iteritems(replacements)
355 355 }
356 356 scmutil.cleanupnodes(repo, replacements, b'fix', fixphase=True)
357 357
358 358
359 359 def getworkqueue(ui, repo, pats, opts, revstofix, basectxs):
360 360 """"Constructs the list of files to be fixed at specific revisions
361 361
362 362 It is up to the caller how to consume the work items, and the only
363 363 dependence between them is that replacement revisions must be committed in
364 364 topological order. Each work item represents a file in the working copy or
365 365 in some revision that should be fixed and written back to the working copy
366 366 or into a replacement revision.
367 367
368 368 Work items for the same revision are grouped together, so that a worker
369 369 pool starting with the first N items in parallel is likely to finish the
370 370 first revision's work before other revisions. This can allow us to write
371 371 the result to disk and reduce memory footprint. At time of writing, the
372 372 partition strategy in worker.py seems favorable to this. We also sort the
373 373 items by ascending revision number to match the order in which we commit
374 374 the fixes later.
375 375 """
376 376 workqueue = []
377 377 numitems = collections.defaultdict(int)
378 378 maxfilesize = ui.configbytes(b'fix', b'maxfilesize')
379 379 for rev in sorted(revstofix):
380 380 fixctx = repo[rev]
381 381 match = scmutil.match(fixctx, pats, opts)
382 382 for path in sorted(
383 383 pathstofix(ui, repo, pats, opts, match, basectxs[rev], fixctx)
384 384 ):
385 385 fctx = fixctx[path]
386 386 if fctx.islink():
387 387 continue
388 388 if fctx.size() > maxfilesize:
389 389 ui.warn(
390 390 _(b'ignoring file larger than %s: %s\n')
391 391 % (util.bytecount(maxfilesize), path)
392 392 )
393 393 continue
394 394 workqueue.append((rev, path))
395 395 numitems[rev] += 1
396 396 return workqueue, numitems
397 397
398 398
399 399 def getrevstofix(ui, repo, opts):
400 400 """Returns the set of revision numbers that should be fixed"""
401 401 revs = set(scmutil.revrange(repo, opts[b'rev']))
402 402 for rev in revs:
403 403 checkfixablectx(ui, repo, repo[rev])
404 404 if revs:
405 405 cmdutil.checkunfinished(repo)
406 406 rewriteutil.precheck(repo, revs, b'fix')
407 407 if opts.get(b'working_dir'):
408 408 revs.add(wdirrev)
409 409 if list(merge.mergestate.read(repo).unresolved()):
410 410 raise error.Abort(b'unresolved conflicts', hint=b"use 'hg resolve'")
411 411 if not revs:
412 412 raise error.Abort(
413 413 b'no changesets specified', hint=b'use --rev or --working-dir'
414 414 )
415 415 return revs
416 416
417 417
418 418 def checkfixablectx(ui, repo, ctx):
419 419 """Aborts if the revision shouldn't be replaced with a fixed one."""
420 420 if ctx.obsolete():
421 421 # It would be better to actually check if the revision has a successor.
422 422 allowdivergence = ui.configbool(
423 423 b'experimental', b'evolution.allowdivergence'
424 424 )
425 425 if not allowdivergence:
426 426 raise error.Abort(
427 427 b'fixing obsolete revision could cause divergence'
428 428 )
429 429
430 430
431 431 def pathstofix(ui, repo, pats, opts, match, basectxs, fixctx):
432 432 """Returns the set of files that should be fixed in a context
433 433
434 434 The result depends on the base contexts; we include any file that has
435 435 changed relative to any of the base contexts. Base contexts should be
436 436 ancestors of the context being fixed.
437 437 """
438 438 files = set()
439 439 for basectx in basectxs:
440 440 stat = basectx.status(
441 441 fixctx, match=match, listclean=bool(pats), listunknown=bool(pats)
442 442 )
443 443 files.update(
444 444 set(
445 445 itertools.chain(
446 446 stat.added, stat.modified, stat.clean, stat.unknown
447 447 )
448 448 )
449 449 )
450 450 return files
451 451
452 452
453 453 def lineranges(opts, path, basectxs, fixctx, content2):
454 454 """Returns the set of line ranges that should be fixed in a file
455 455
456 456 Of the form [(10, 20), (30, 40)].
457 457
458 458 This depends on the given base contexts; we must consider lines that have
459 459 changed versus any of the base contexts, and whether the file has been
460 460 renamed versus any of them.
461 461
462 462 Another way to understand this is that we exclude line ranges that are
463 463 common to the file in all base contexts.
464 464 """
465 465 if opts.get(b'whole'):
466 466 # Return a range containing all lines. Rely on the diff implementation's
467 467 # idea of how many lines are in the file, instead of reimplementing it.
468 468 return difflineranges(b'', content2)
469 469
470 470 rangeslist = []
471 471 for basectx in basectxs:
472 472 basepath = copies.pathcopies(basectx, fixctx).get(path, path)
473 473 if basepath in basectx:
474 474 content1 = basectx[basepath].data()
475 475 else:
476 476 content1 = b''
477 477 rangeslist.extend(difflineranges(content1, content2))
478 478 return unionranges(rangeslist)
479 479
480 480
481 481 def unionranges(rangeslist):
482 482 """Return the union of some closed intervals
483 483
484 484 >>> unionranges([])
485 485 []
486 486 >>> unionranges([(1, 100)])
487 487 [(1, 100)]
488 488 >>> unionranges([(1, 100), (1, 100)])
489 489 [(1, 100)]
490 490 >>> unionranges([(1, 100), (2, 100)])
491 491 [(1, 100)]
492 492 >>> unionranges([(1, 99), (1, 100)])
493 493 [(1, 100)]
494 494 >>> unionranges([(1, 100), (40, 60)])
495 495 [(1, 100)]
496 496 >>> unionranges([(1, 49), (50, 100)])
497 497 [(1, 100)]
498 498 >>> unionranges([(1, 48), (50, 100)])
499 499 [(1, 48), (50, 100)]
500 500 >>> unionranges([(1, 2), (3, 4), (5, 6)])
501 501 [(1, 6)]
502 502 """
503 503 rangeslist = sorted(set(rangeslist))
504 504 unioned = []
505 505 if rangeslist:
506 506 unioned, rangeslist = [rangeslist[0]], rangeslist[1:]
507 507 for a, b in rangeslist:
508 508 c, d = unioned[-1]
509 509 if a > d + 1:
510 510 unioned.append((a, b))
511 511 else:
512 512 unioned[-1] = (c, max(b, d))
513 513 return unioned
514 514
515 515
516 516 def difflineranges(content1, content2):
517 517 """Return list of line number ranges in content2 that differ from content1.
518 518
519 519 Line numbers are 1-based. The numbers are the first and last line contained
520 520 in the range. Single-line ranges have the same line number for the first and
521 521 last line. Excludes any empty ranges that result from lines that are only
522 522 present in content1. Relies on mdiff's idea of where the line endings are in
523 523 the string.
524 524
525 525 >>> from mercurial import pycompat
526 526 >>> lines = lambda s: b'\\n'.join([c for c in pycompat.iterbytestr(s)])
527 527 >>> difflineranges2 = lambda a, b: difflineranges(lines(a), lines(b))
528 528 >>> difflineranges2(b'', b'')
529 529 []
530 530 >>> difflineranges2(b'a', b'')
531 531 []
532 532 >>> difflineranges2(b'', b'A')
533 533 [(1, 1)]
534 534 >>> difflineranges2(b'a', b'a')
535 535 []
536 536 >>> difflineranges2(b'a', b'A')
537 537 [(1, 1)]
538 538 >>> difflineranges2(b'ab', b'')
539 539 []
540 540 >>> difflineranges2(b'', b'AB')
541 541 [(1, 2)]
542 542 >>> difflineranges2(b'abc', b'ac')
543 543 []
544 544 >>> difflineranges2(b'ab', b'aCb')
545 545 [(2, 2)]
546 546 >>> difflineranges2(b'abc', b'aBc')
547 547 [(2, 2)]
548 548 >>> difflineranges2(b'ab', b'AB')
549 549 [(1, 2)]
550 550 >>> difflineranges2(b'abcde', b'aBcDe')
551 551 [(2, 2), (4, 4)]
552 552 >>> difflineranges2(b'abcde', b'aBCDe')
553 553 [(2, 4)]
554 554 """
555 555 ranges = []
556 556 for lines, kind in mdiff.allblocks(content1, content2):
557 557 firstline, lastline = lines[2:4]
558 558 if kind == b'!' and firstline != lastline:
559 559 ranges.append((firstline + 1, lastline))
560 560 return ranges
561 561
562 562
563 563 def getbasectxs(repo, opts, revstofix):
564 564 """Returns a map of the base contexts for each revision
565 565
566 566 The base contexts determine which lines are considered modified when we
567 567 attempt to fix just the modified lines in a file. It also determines which
568 568 files we attempt to fix, so it is important to compute this even when
569 569 --whole is used.
570 570 """
571 571 # The --base flag overrides the usual logic, and we give every revision
572 572 # exactly the set of baserevs that the user specified.
573 573 if opts.get(b'base'):
574 574 baserevs = set(scmutil.revrange(repo, opts.get(b'base')))
575 575 if not baserevs:
576 576 baserevs = {nullrev}
577 577 basectxs = {repo[rev] for rev in baserevs}
578 578 return {rev: basectxs for rev in revstofix}
579 579
580 580 # Proceed in topological order so that we can easily determine each
581 581 # revision's baserevs by looking at its parents and their baserevs.
582 582 basectxs = collections.defaultdict(set)
583 583 for rev in sorted(revstofix):
584 584 ctx = repo[rev]
585 585 for pctx in ctx.parents():
586 586 if pctx.rev() in basectxs:
587 587 basectxs[rev].update(basectxs[pctx.rev()])
588 588 else:
589 589 basectxs[rev].add(pctx)
590 590 return basectxs
591 591
592 592
593 593 def fixfile(ui, repo, opts, fixers, fixctx, path, basectxs):
594 594 """Run any configured fixers that should affect the file in this context
595 595
596 596 Returns the file content that results from applying the fixers in some order
597 597 starting with the file's content in the fixctx. Fixers that support line
598 598 ranges will affect lines that have changed relative to any of the basectxs
599 599 (i.e. they will only avoid lines that are common to all basectxs).
600 600
601 601 A fixer tool's stdout will become the file's new content if and only if it
602 602 exits with code zero. The fixer tool's working directory is the repository's
603 603 root.
604 604 """
605 605 metadata = {}
606 606 newdata = fixctx[path].data()
607 607 for fixername, fixer in pycompat.iteritems(fixers):
608 608 if fixer.affects(opts, fixctx, path):
609 609 ranges = lineranges(opts, path, basectxs, fixctx, newdata)
610 610 command = fixer.command(ui, path, ranges)
611 611 if command is None:
612 612 continue
613 613 ui.debug(b'subprocess: %s\n' % (command,))
614 614 proc = subprocess.Popen(
615 615 procutil.tonativestr(command),
616 616 shell=True,
617 617 cwd=procutil.tonativestr(repo.root),
618 618 stdin=subprocess.PIPE,
619 619 stdout=subprocess.PIPE,
620 620 stderr=subprocess.PIPE,
621 621 )
622 622 stdout, stderr = proc.communicate(newdata)
623 623 if stderr:
624 624 showstderr(ui, fixctx.rev(), fixername, stderr)
625 625 newerdata = stdout
626 626 if fixer.shouldoutputmetadata():
627 627 try:
628 628 metadatajson, newerdata = stdout.split(b'\0', 1)
629 629 metadata[fixername] = pycompat.json_loads(metadatajson)
630 630 except ValueError:
631 631 ui.warn(
632 632 _(b'ignored invalid output from fixer tool: %s\n')
633 633 % (fixername,)
634 634 )
635 635 continue
636 636 else:
637 637 metadata[fixername] = None
638 638 if proc.returncode == 0:
639 639 newdata = newerdata
640 640 else:
641 641 if not stderr:
642 642 message = _(b'exited with status %d\n') % (proc.returncode,)
643 643 showstderr(ui, fixctx.rev(), fixername, message)
644 644 checktoolfailureaction(
645 645 ui,
646 646 _(b'no fixes will be applied'),
647 647 hint=_(
648 648 b'use --config fix.failure=continue to apply any '
649 649 b'successful fixes anyway'
650 650 ),
651 651 )
652 652 return metadata, newdata
653 653
654 654
655 655 def showstderr(ui, rev, fixername, stderr):
656 656 """Writes the lines of the stderr string as warnings on the ui
657 657
658 658 Uses the revision number and fixername to give more context to each line of
659 659 the error message. Doesn't include file names, since those take up a lot of
660 660 space and would tend to be included in the error message if they were
661 661 relevant.
662 662 """
663 663 for line in re.split(b'[\r\n]+', stderr):
664 664 if line:
665 665 ui.warn(b'[')
666 666 if rev is None:
667 667 ui.warn(_(b'wdir'), label=b'evolve.rev')
668 668 else:
669 669 ui.warn(b'%d' % rev, label=b'evolve.rev')
670 670 ui.warn(b'] %s: %s\n' % (fixername, line))
671 671
672 672
673 673 def writeworkingdir(repo, ctx, filedata, replacements):
674 674 """Write new content to the working copy and check out the new p1 if any
675 675
676 676 We check out a new revision if and only if we fixed something in both the
677 677 working directory and its parent revision. This avoids the need for a full
678 678 update/merge, and means that the working directory simply isn't affected
679 679 unless the --working-dir flag is given.
680 680
681 681 Directly updates the dirstate for the affected files.
682 682 """
683 683 for path, data in pycompat.iteritems(filedata):
684 684 fctx = ctx[path]
685 685 fctx.write(data, fctx.flags())
686 686 if repo.dirstate[path] == b'n':
687 687 repo.dirstate.normallookup(path)
688 688
689 689 oldparentnodes = repo.dirstate.parents()
690 690 newparentnodes = [replacements.get(n, n) for n in oldparentnodes]
691 691 if newparentnodes != oldparentnodes:
692 692 repo.setparents(*newparentnodes)
693 693
694 694
695 695 def replacerev(ui, repo, ctx, filedata, replacements):
696 696 """Commit a new revision like the given one, but with file content changes
697 697
698 698 "ctx" is the original revision to be replaced by a modified one.
699 699
700 700 "filedata" is a dict that maps paths to their new file content. All other
701 701 paths will be recreated from the original revision without changes.
702 702 "filedata" may contain paths that didn't exist in the original revision;
703 703 they will be added.
704 704
705 705 "replacements" is a dict that maps a single node to a single node, and it is
706 706 updated to indicate the original revision is replaced by the newly created
707 707 one. No entry is added if the replacement's node already exists.
708 708
709 709 The new revision has the same parents as the old one, unless those parents
710 710 have already been replaced, in which case those replacements are the parents
711 711 of this new revision. Thus, if revisions are replaced in topological order,
712 712 there is no need to rebase them into the original topology later.
713 713 """
714 714
715 715 p1rev, p2rev = repo.changelog.parentrevs(ctx.rev())
716 716 p1ctx, p2ctx = repo[p1rev], repo[p2rev]
717 717 newp1node = replacements.get(p1ctx.node(), p1ctx.node())
718 718 newp2node = replacements.get(p2ctx.node(), p2ctx.node())
719 719
720 720 # We don't want to create a revision that has no changes from the original,
721 721 # but we should if the original revision's parent has been replaced.
722 722 # Otherwise, we would produce an orphan that needs no actual human
723 723 # intervention to evolve. We can't rely on commit() to avoid creating the
724 724 # un-needed revision because the extra field added below produces a new hash
725 725 # regardless of file content changes.
726 726 if (
727 727 not filedata
728 728 and p1ctx.node() not in replacements
729 729 and p2ctx.node() not in replacements
730 730 ):
731 731 return
732 732
733 733 extra = ctx.extra().copy()
734 734 extra[b'fix_source'] = ctx.hex()
735 735
736 736 wctx = context.overlayworkingctx(repo)
737 newp1ctx = repo[newp1node]
738 wctx.setbase(newp1ctx)
737 wctx.setbase(repo[newp1node])
739 738 merge.update(
740 739 repo,
741 740 ctx.rev(),
742 741 branchmerge=False,
743 742 force=True,
744 743 ancestor=p1rev,
745 744 mergeancestor=False,
746 745 wc=wctx,
747 746 )
748 copies.graftcopies(repo, wctx, ctx, ctx.p1(), skip=newp1ctx)
747 copies.graftcopies(wctx, ctx, ctx.p1())
749 748
750 749 for path in filedata.keys():
751 750 fctx = ctx[path]
752 751 copysource = fctx.copysource()
753 752 wctx.write(path, filedata[path], flags=fctx.flags())
754 753 if copysource:
755 754 wctx.markcopied(path, copysource)
756 755
757 756 memctx = wctx.tomemctx(
758 757 text=ctx.description(),
759 758 branch=ctx.branch(),
760 759 extra=extra,
761 760 date=ctx.date(),
762 761 parents=(newp1node, newp2node),
763 762 user=ctx.user(),
764 763 )
765 764
766 765 sucnode = memctx.commit()
767 766 prenode = ctx.node()
768 767 if prenode == sucnode:
769 768 ui.debug(b'node %s already existed\n' % (ctx.hex()))
770 769 else:
771 770 replacements[ctx.node()] = sucnode
772 771
773 772
774 773 def getfixers(ui):
775 774 """Returns a map of configured fixer tools indexed by their names
776 775
777 776 Each value is a Fixer object with methods that implement the behavior of the
778 777 fixer's config suboptions. Does not validate the config values.
779 778 """
780 779 fixers = {}
781 780 for name in fixernames(ui):
782 781 enabled = ui.configbool(b'fix', name + b':enabled')
783 782 command = ui.config(b'fix', name + b':command')
784 783 pattern = ui.config(b'fix', name + b':pattern')
785 784 linerange = ui.config(b'fix', name + b':linerange')
786 785 priority = ui.configint(b'fix', name + b':priority')
787 786 metadata = ui.configbool(b'fix', name + b':metadata')
788 787 skipclean = ui.configbool(b'fix', name + b':skipclean')
789 788 # Don't use a fixer if it has no pattern configured. It would be
790 789 # dangerous to let it affect all files. It would be pointless to let it
791 790 # affect no files. There is no reasonable subset of files to use as the
792 791 # default.
793 792 if command is None:
794 793 ui.warn(
795 794 _(b'fixer tool has no command configuration: %s\n') % (name,)
796 795 )
797 796 elif pattern is None:
798 797 ui.warn(
799 798 _(b'fixer tool has no pattern configuration: %s\n') % (name,)
800 799 )
801 800 elif not enabled:
802 801 ui.debug(b'ignoring disabled fixer tool: %s\n' % (name,))
803 802 else:
804 803 fixers[name] = Fixer(
805 804 command, pattern, linerange, priority, metadata, skipclean
806 805 )
807 806 return collections.OrderedDict(
808 807 sorted(fixers.items(), key=lambda item: item[1]._priority, reverse=True)
809 808 )
810 809
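The priority ordering in `getfixers` above can be sketched as follows. This is an illustrative standalone snippet, not hg code: it uses plain `str` keys and a bare priority map instead of `Fixer` objects, but the sort is the same one the function performs on `item[1]._priority`.

```python
import collections

# Higher :priority values run first; Python's stable sort keeps ties in
# their original (config) order. Names and values here are illustrative.
priorities = {'fmt': 10, 'lint': 0, 'sort-imports': 5}
ordered = collections.OrderedDict(
    sorted(priorities.items(), key=lambda item: item[1], reverse=True)
)
```

Iterating `ordered` then yields `fmt` before `sort-imports` before `lint`, matching the descending-priority invocation order.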
811 810
812 811 def fixernames(ui):
813 812 """Returns the names of [fix] config options that have suboptions"""
814 813 names = set()
815 814 for k, v in ui.configitems(b'fix'):
816 815 if b':' in k:
817 816 names.add(k.split(b':', 1)[0])
818 817 return names
819 818
820 819
821 820 class Fixer(object):
822 821 """Wraps the raw config values for a fixer with methods"""
823 822
824 823 def __init__(
825 824 self, command, pattern, linerange, priority, metadata, skipclean
826 825 ):
827 826 self._command = command
828 827 self._pattern = pattern
829 828 self._linerange = linerange
830 829 self._priority = priority
831 830 self._metadata = metadata
832 831 self._skipclean = skipclean
833 832
834 833 def affects(self, opts, fixctx, path):
835 834 """Should this fixer run on the file at the given path and context?"""
836 835 repo = fixctx.repo()
837 836 matcher = matchmod.match(
838 837 repo.root, repo.root, [self._pattern], ctx=fixctx
839 838 )
840 839 return matcher(path)
841 840
842 841 def shouldoutputmetadata(self):
843 842 """Should the stdout of this fixer start with JSON and a null byte?"""
844 843 return self._metadata
845 844
846 845 def command(self, ui, path, ranges):
847 846 """A shell command to use to invoke this fixer on the given file/lines
848 847
849 848 May return None if there is no appropriate command to run for the given
850 849 parameters.
851 850 """
852 851 expand = cmdutil.rendercommandtemplate
853 852 parts = [
854 853 expand(
855 854 ui,
856 855 self._command,
857 856 {b'rootpath': path, b'basename': os.path.basename(path)},
858 857 )
859 858 ]
860 859 if self._linerange:
861 860 if self._skipclean and not ranges:
862 861 # No line ranges to fix, so don't run the fixer.
863 862 return None
864 863 for first, last in ranges:
865 864 parts.append(
866 865 expand(
867 866 ui, self._linerange, {b'first': first, b'last': last}
868 867 )
869 868 )
870 869 return b' '.join(parts)
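The command assembly in `Fixer.command` above can be sketched with plain string formatting in place of hg's `rendercommandtemplate` and bytes. The helper name and the clang-format example values are illustrative, not part of hg's API:

```python
import os

def build_fix_command(command_tmpl, linerange_tmpl, path, ranges, skipclean=True):
    # Render the base command with {rootpath}/{basename} substituted.
    parts = [command_tmpl.format(rootpath=path,
                                 basename=os.path.basename(path))]
    if linerange_tmpl:
        if skipclean and not ranges:
            return None  # no changed lines, so the fixer is not run at all
        # Append one rendered :linerange argument per changed range.
        for first, last in ranges:
            parts.append(linerange_tmpl.format(first=first, last=last))
    return ' '.join(parts)

cmd = build_fix_command(
    'clang-format --assume-filename={rootpath}',
    '--lines={first}:{last}',
    'src/main.cpp',
    [(3, 10), (42, 42)],
)
# cmd: 'clang-format --assume-filename=src/main.cpp --lines=3:10 --lines=42:42'
```

This mirrors the early `return None` taken when `:skipclean` is set and a file has no modified ranges.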
@@ -1,2265 +1,2262 b''
1 1 # rebase.py - rebasing feature for mercurial
2 2 #
3 3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to move sets of revisions to a different ancestor
9 9
10 10 This extension lets you rebase changesets in an existing Mercurial
11 11 repository.
12 12
13 13 For more information:
14 14 https://mercurial-scm.org/wiki/RebaseExtension
15 15 '''
16 16
17 17 from __future__ import absolute_import
18 18
19 19 import errno
20 20 import os
21 21
22 22 from mercurial.i18n import _
23 23 from mercurial.node import (
24 24 nullrev,
25 25 short,
26 26 )
27 27 from mercurial.pycompat import open
28 28 from mercurial import (
29 29 bookmarks,
30 30 cmdutil,
31 31 commands,
32 32 copies,
33 33 destutil,
34 34 dirstateguard,
35 35 error,
36 36 extensions,
37 37 hg,
38 38 merge as mergemod,
39 39 mergeutil,
40 40 obsolete,
41 41 obsutil,
42 42 patch,
43 43 phases,
44 44 pycompat,
45 45 registrar,
46 46 repair,
47 47 revset,
48 48 revsetlang,
49 49 rewriteutil,
50 50 scmutil,
51 51 smartset,
52 52 state as statemod,
53 53 util,
54 54 )
55 55
56 56 # The following constants are used throughout the rebase module. The ordering of
57 57 # their values must be maintained.
58 58
59 59 # Indicates that a revision needs to be rebased
60 60 revtodo = -1
61 61 revtodostr = b'-1'
62 62
63 63 # legacy revstates no longer needed in current code
64 64 # -2: nullmerge, -3: revignored, -4: revprecursor, -5: revpruned
65 65 legacystates = {b'-2', b'-3', b'-4', b'-5'}
66 66
67 67 cmdtable = {}
68 68 command = registrar.command(cmdtable)
69 69 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
70 70 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
71 71 # be specifying the version(s) of Mercurial they are tested with, or
72 72 # leave the attribute unspecified.
73 73 testedwith = b'ships-with-hg-core'
74 74
75 75
76 76 def _nothingtorebase():
77 77 return 1
78 78
79 79
80 80 def _savegraft(ctx, extra):
81 81 s = ctx.extra().get(b'source', None)
82 82 if s is not None:
83 83 extra[b'source'] = s
84 84 s = ctx.extra().get(b'intermediate-source', None)
85 85 if s is not None:
86 86 extra[b'intermediate-source'] = s
87 87
88 88
89 89 def _savebranch(ctx, extra):
90 90 extra[b'branch'] = ctx.branch()
91 91
92 92
93 93 def _destrebase(repo, sourceset, destspace=None):
94 94 """small wrapper around destmerge to pass the right extra args
95 95
96 96 Please wrap destutil.destmerge instead."""
97 97 return destutil.destmerge(
98 98 repo,
99 99 action=b'rebase',
100 100 sourceset=sourceset,
101 101 onheadcheck=False,
102 102 destspace=destspace,
103 103 )
104 104
105 105
106 106 revsetpredicate = registrar.revsetpredicate()
107 107
108 108
109 109 @revsetpredicate(b'_destrebase')
110 110 def _revsetdestrebase(repo, subset, x):
111 111 # ``_rebasedefaultdest()``
112 112
113 113 # default destination for rebase.
114 114 # # XXX: Currently private because I expect the signature to change.
115 115 # # XXX: - bailing out in case of ambiguity vs returning all data.
116 116 # i18n: "_rebasedefaultdest" is a keyword
117 117 sourceset = None
118 118 if x is not None:
119 119 sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
120 120 return subset & smartset.baseset([_destrebase(repo, sourceset)])
121 121
122 122
123 123 @revsetpredicate(b'_destautoorphanrebase')
124 124 def _revsetdestautoorphanrebase(repo, subset, x):
125 125 # ``_destautoorphanrebase()``
126 126
127 127 # automatic rebase destination for a single orphan revision.
128 128 unfi = repo.unfiltered()
129 129 obsoleted = unfi.revs(b'obsolete()')
130 130
131 131 src = revset.getset(repo, subset, x).first()
132 132
133 133 # Empty src or already obsoleted - Do not return a destination
134 134 if not src or src in obsoleted:
135 135 return smartset.baseset()
136 136 dests = destutil.orphanpossibledestination(repo, src)
137 137 if len(dests) > 1:
138 138 raise error.Abort(
139 139 _(b"ambiguous automatic rebase: %r could end up on any of %r")
140 140 % (src, dests)
141 141 )
142 142 # We have zero or one destination, so we can just return here.
143 143 return smartset.baseset(dests)
144 144
145 145
146 146 def _ctxdesc(ctx):
147 147 """short description for a context"""
148 148 desc = b'%d:%s "%s"' % (
149 149 ctx.rev(),
150 150 ctx,
151 151 ctx.description().split(b'\n', 1)[0],
152 152 )
153 153 repo = ctx.repo()
154 154 names = []
155 155 for nsname, ns in pycompat.iteritems(repo.names):
156 156 if nsname == b'branches':
157 157 continue
158 158 names.extend(ns.names(repo, ctx.node()))
159 159 if names:
160 160 desc += b' (%s)' % b' '.join(names)
161 161 return desc
162 162
163 163
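The description format built by `_ctxdesc` above can be sketched like this (plain `str` instead of hg's bytes, with the repo/namespace lookup replaced by an explicit `names` argument for illustration):

```python
def ctxdesc(rev, shortnode, description, names=()):
    # rev:node "first line of commit message", plus any bookmark/tag names.
    desc = '%d:%s "%s"' % (rev, shortnode, description.split('\n', 1)[0])
    if names:
        desc += ' (%s)' % ' '.join(names)
    return desc
```

For example, a revision 5 with summary line `fix a bug` and the `tip` name renders as `5:abc123def "fix a bug" (tip)`.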
164 164 class rebaseruntime(object):
165 165 """This class is a container for rebase runtime state"""
166 166
167 167 def __init__(self, repo, ui, inmemory=False, opts=None):
168 168 if opts is None:
169 169 opts = {}
170 170
171 171 # prepared: whether we have rebasestate prepared or not. Currently it
172 172 # decides whether "self.repo" is unfiltered or not.
173 173 # The rebasestate has explicit hash to hash instructions not depending
174 174 # on visibility. If rebasestate exists (in-memory or on-disk), use
175 175 # unfiltered repo to avoid visibility issues.
176 176 # Before knowing rebasestate (i.e. when starting a new rebase (not
177 177 # --continue or --abort)), the original repo should be used so
178 178 # visibility-dependent revsets are correct.
179 179 self.prepared = False
180 180 self._repo = repo
181 181
182 182 self.ui = ui
183 183 self.opts = opts
184 184 self.originalwd = None
185 185 self.external = nullrev
186 186 # Mapping between the old revision id and either what is the new rebased
187 187 # revision or what needs to be done with the old revision. The state
188 188 # dict will be what contains most of the rebase progress state.
189 189 self.state = {}
190 190 self.activebookmark = None
191 191 self.destmap = {}
192 192 self.skipped = set()
193 193
194 194 self.collapsef = opts.get(b'collapse', False)
195 195 self.collapsemsg = cmdutil.logmessage(ui, opts)
196 196 self.date = opts.get(b'date', None)
197 197
198 198 e = opts.get(b'extrafn') # internal, used by e.g. hgsubversion
199 199 self.extrafns = [_savegraft]
200 200 if e:
201 201 self.extrafns = [e]
202 202
203 203 self.backupf = ui.configbool(b'rewrite', b'backup-bundle')
204 204 self.keepf = opts.get(b'keep', False)
205 205 self.keepbranchesf = opts.get(b'keepbranches', False)
206 206 self.obsoletenotrebased = {}
207 207 self.obsoletewithoutsuccessorindestination = set()
208 208 self.inmemory = inmemory
209 209 self.stateobj = statemod.cmdstate(repo, b'rebasestate')
210 210
211 211 @property
212 212 def repo(self):
213 213 if self.prepared:
214 214 return self._repo.unfiltered()
215 215 else:
216 216 return self._repo
217 217
218 218 def storestatus(self, tr=None):
219 219 """Store the current status to allow recovery"""
220 220 if tr:
221 221 tr.addfilegenerator(
222 222 b'rebasestate',
223 223 (b'rebasestate',),
224 224 self._writestatus,
225 225 location=b'plain',
226 226 )
227 227 else:
228 228 with self.repo.vfs(b"rebasestate", b"w") as f:
229 229 self._writestatus(f)
230 230
231 231 def _writestatus(self, f):
232 232 repo = self.repo
233 233 assert repo.filtername is None
234 234 f.write(repo[self.originalwd].hex() + b'\n')
235 235 # was "dest". We now write dest per src root below.
236 236 f.write(b'\n')
237 237 f.write(repo[self.external].hex() + b'\n')
238 238 f.write(b'%d\n' % int(self.collapsef))
239 239 f.write(b'%d\n' % int(self.keepf))
240 240 f.write(b'%d\n' % int(self.keepbranchesf))
241 241 f.write(b'%s\n' % (self.activebookmark or b''))
242 242 destmap = self.destmap
243 243 for d, v in pycompat.iteritems(self.state):
244 244 oldrev = repo[d].hex()
245 245 if v >= 0:
246 246 newrev = repo[v].hex()
247 247 else:
248 248 newrev = b"%d" % v
249 249 destnode = repo[destmap[d]].hex()
250 250 f.write(b"%s:%s:%s\n" % (oldrev, newrev, destnode))
251 251 repo.ui.debug(b'rebase status stored\n')
252 252
253 253 def restorestatus(self):
254 254 """Restore a previously stored status"""
255 255 if not self.stateobj.exists():
256 256 cmdutil.wrongtooltocontinue(self.repo, _(b'rebase'))
257 257
258 258 data = self._read()
259 259 self.repo.ui.debug(b'rebase status resumed\n')
260 260
261 261 self.originalwd = data[b'originalwd']
262 262 self.destmap = data[b'destmap']
263 263 self.state = data[b'state']
264 264 self.skipped = data[b'skipped']
265 265 self.collapsef = data[b'collapse']
266 266 self.keepf = data[b'keep']
267 267 self.keepbranchesf = data[b'keepbranches']
268 268 self.external = data[b'external']
269 269 self.activebookmark = data[b'activebookmark']
270 270
271 271 def _read(self):
272 272 self.prepared = True
273 273 repo = self.repo
274 274 assert repo.filtername is None
275 275 data = {
276 276 b'keepbranches': None,
277 277 b'collapse': None,
278 278 b'activebookmark': None,
279 279 b'external': nullrev,
280 280 b'keep': None,
281 281 b'originalwd': None,
282 282 }
283 283 legacydest = None
284 284 state = {}
285 285 destmap = {}
286 286
287 287 if True:
288 288 f = repo.vfs(b"rebasestate")
289 289 for i, l in enumerate(f.read().splitlines()):
290 290 if i == 0:
291 291 data[b'originalwd'] = repo[l].rev()
292 292 elif i == 1:
293 293 # this line should be empty in newer versions, but legacy
294 294 # clients may still use it
295 295 if l:
296 296 legacydest = repo[l].rev()
297 297 elif i == 2:
298 298 data[b'external'] = repo[l].rev()
299 299 elif i == 3:
300 300 data[b'collapse'] = bool(int(l))
301 301 elif i == 4:
302 302 data[b'keep'] = bool(int(l))
303 303 elif i == 5:
304 304 data[b'keepbranches'] = bool(int(l))
305 305 elif i == 6 and not (len(l) == 81 and b':' in l):
306 306 # line 6 is a recent addition, so for backwards
307 307 # compatibility check that the line doesn't look like the
308 308 # oldrev:newrev lines
309 309 data[b'activebookmark'] = l
310 310 else:
311 311 args = l.split(b':')
312 312 oldrev = repo[args[0]].rev()
313 313 newrev = args[1]
314 314 if newrev in legacystates:
315 315 continue
316 316 if len(args) > 2:
317 317 destrev = repo[args[2]].rev()
318 318 else:
319 319 destrev = legacydest
320 320 destmap[oldrev] = destrev
321 321 if newrev == revtodostr:
322 322 state[oldrev] = revtodo
323 323 # Legacy compat special case
324 324 else:
325 325 state[oldrev] = repo[newrev].rev()
326 326
327 327 if data[b'keepbranches'] is None:
328 328 raise error.Abort(_(b'.hg/rebasestate is incomplete'))
329 329
330 330 data[b'destmap'] = destmap
331 331 data[b'state'] = state
332 332 skipped = set()
333 333 # recompute the set of skipped revs
334 334 if not data[b'collapse']:
335 335 seen = set(destmap.values())
336 336 for old, new in sorted(state.items()):
337 337 if new != revtodo and new in seen:
338 338 skipped.add(old)
339 339 seen.add(new)
340 340 data[b'skipped'] = skipped
341 341 repo.ui.debug(
342 342 b'computed skipped revs: %s\n'
343 343 % (b' '.join(b'%d' % r for r in sorted(skipped)) or b'')
344 344 )
345 345
346 346 return data
347 347
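The on-disk layout that `_writestatus` and `_read` above share can be sketched as below. This is a simplified standalone parser, not hg code: it keeps node hashes as plain strings instead of resolving them through the repo, and it ignores the optional activebookmark line and legacy newrev states for brevity.

```python
def parse_rebasestate(text):
    # Six header lines, then one "oldnode:newnode:destnode" record per source.
    lines = text.splitlines()
    data = {
        'originalwd': lines[0],
        # lines[1] is the legacy single-destination slot, empty in modern files
        'external': lines[2],
        'collapse': bool(int(lines[3])),
        'keep': bool(int(lines[4])),
        'keepbranches': bool(int(lines[5])),
        'state': {},
    }
    for record in lines[6:]:
        old, new, dest = record.split(':')
        data['state'][old] = (new, dest)
    return data

sample = '\n'.join([
    'aaa111',           # originalwd
    '',                 # legacy dest (empty)
    'bbb222',           # external parent
    '1',                # collapse
    '0',                # keep
    '0',                # keepbranches
    'old1:new1:dest1',  # one src record
])
state = parse_rebasestate(sample)
```

The real `_read` additionally tolerates an activebookmark name on line 6 (distinguished by not matching the 81-character `oldrev:newrev` shape) and maps legacy `newrev` codes such as `-2`..`-5`.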
348 348 def _handleskippingobsolete(self, obsoleterevs, destmap):
349 349 """Compute structures necessary for skipping obsolete revisions
350 350
351 351 obsoleterevs: iterable of all obsolete revisions in rebaseset
352 352 destmap: {srcrev: destrev} destination revisions
353 353 """
354 354 self.obsoletenotrebased = {}
355 355 if not self.ui.configbool(b'experimental', b'rebaseskipobsolete'):
356 356 return
357 357 obsoleteset = set(obsoleterevs)
358 358 (
359 359 self.obsoletenotrebased,
360 360 self.obsoletewithoutsuccessorindestination,
361 361 obsoleteextinctsuccessors,
362 362 ) = _computeobsoletenotrebased(self.repo, obsoleteset, destmap)
363 363 skippedset = set(self.obsoletenotrebased)
364 364 skippedset.update(self.obsoletewithoutsuccessorindestination)
365 365 skippedset.update(obsoleteextinctsuccessors)
366 366 _checkobsrebase(self.repo, self.ui, obsoleteset, skippedset)
367 367
368 368 def _prepareabortorcontinue(self, isabort, backup=True, suppwarns=False):
369 369 try:
370 370 self.restorestatus()
371 371 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
372 372 except error.RepoLookupError:
373 373 if isabort:
374 374 clearstatus(self.repo)
375 375 clearcollapsemsg(self.repo)
376 376 self.repo.ui.warn(
377 377 _(
378 378 b'rebase aborted (no revision is removed,'
379 379 b' only broken state is cleared)\n'
380 380 )
381 381 )
382 382 return 0
383 383 else:
384 384 msg = _(b'cannot continue inconsistent rebase')
385 385 hint = _(b'use "hg rebase --abort" to clear broken state')
386 386 raise error.Abort(msg, hint=hint)
387 387
388 388 if isabort:
389 389 backup = backup and self.backupf
390 390 return self._abort(backup=backup, suppwarns=suppwarns)
391 391
392 392 def _preparenewrebase(self, destmap):
393 393 if not destmap:
394 394 return _nothingtorebase()
395 395
396 396 rebaseset = destmap.keys()
397 397 if not self.keepf:
398 398 try:
399 399 rewriteutil.precheck(self.repo, rebaseset, action=b'rebase')
400 400 except error.Abort as e:
401 401 if e.hint is None:
402 402 e.hint = _(b'use --keep to keep original changesets')
403 403 raise e
404 404
405 405 result = buildstate(self.repo, destmap, self.collapsef)
406 406
407 407 if not result:
408 408 # Empty state built, nothing to rebase
409 409 self.ui.status(_(b'nothing to rebase\n'))
410 410 return _nothingtorebase()
411 411
412 412 (self.originalwd, self.destmap, self.state) = result
413 413 if self.collapsef:
414 414 dests = set(self.destmap.values())
415 415 if len(dests) != 1:
416 416 raise error.Abort(
417 417 _(b'--collapse does not work with multiple destinations')
418 418 )
419 419 destrev = next(iter(dests))
420 420 destancestors = self.repo.changelog.ancestors(
421 421 [destrev], inclusive=True
422 422 )
423 423 self.external = externalparent(self.repo, self.state, destancestors)
424 424
425 425 for destrev in sorted(set(destmap.values())):
426 426 dest = self.repo[destrev]
427 427 if dest.closesbranch() and not self.keepbranchesf:
428 428 self.ui.status(_(b'reopening closed branch head %s\n') % dest)
429 429
430 430 self.prepared = True
431 431
432 432 def _assignworkingcopy(self):
433 433 if self.inmemory:
434 434 from mercurial.context import overlayworkingctx
435 435
436 436 self.wctx = overlayworkingctx(self.repo)
437 437 self.repo.ui.debug(b"rebasing in-memory\n")
438 438 else:
439 439 self.wctx = self.repo[None]
440 440 self.repo.ui.debug(b"rebasing on disk\n")
441 441 self.repo.ui.log(
442 442 b"rebase",
443 443 b"using in-memory rebase: %r\n",
444 444 self.inmemory,
445 445 rebase_imm_used=self.inmemory,
446 446 )
447 447
448 448 def _performrebase(self, tr):
449 449 self._assignworkingcopy()
450 450 repo, ui = self.repo, self.ui
451 451 if self.keepbranchesf:
452 452 # insert _savebranch at the start of extrafns so if
453 453 # there's a user-provided extrafn it can clobber branch if
454 454 # desired
455 455 self.extrafns.insert(0, _savebranch)
456 456 if self.collapsef:
457 457 branches = set()
458 458 for rev in self.state:
459 459 branches.add(repo[rev].branch())
460 460 if len(branches) > 1:
461 461 raise error.Abort(
462 462 _(b'cannot collapse multiple named branches')
463 463 )
464 464
465 465 # Calculate self.obsoletenotrebased
466 466 obsrevs = _filterobsoleterevs(self.repo, self.state)
467 467 self._handleskippingobsolete(obsrevs, self.destmap)
468 468
469 469 # Keep track of the active bookmarks in order to reset them later
470 470 self.activebookmark = self.activebookmark or repo._activebookmark
471 471 if self.activebookmark:
472 472 bookmarks.deactivate(repo)
473 473
474 474 # Store the state before we begin so users can run 'hg rebase --abort'
475 475 # if we fail before the transaction closes.
476 476 self.storestatus()
477 477 if tr:
478 478 # When using single transaction, store state when transaction
479 479 # commits.
480 480 self.storestatus(tr)
481 481
482 482 cands = [k for k, v in pycompat.iteritems(self.state) if v == revtodo]
483 483 p = repo.ui.makeprogress(
484 484 _(b"rebasing"), unit=_(b'changesets'), total=len(cands)
485 485 )
486 486
487 487 def progress(ctx):
488 488 p.increment(item=(b"%d:%s" % (ctx.rev(), ctx)))
489 489
490 490 allowdivergence = self.ui.configbool(
491 491 b'experimental', b'evolution.allowdivergence'
492 492 )
493 493 for subset in sortsource(self.destmap):
494 494 sortedrevs = self.repo.revs(b'sort(%ld, -topo)', subset)
495 495 if not allowdivergence:
496 496 sortedrevs -= self.repo.revs(
497 497 b'descendants(%ld) and not %ld',
498 498 self.obsoletewithoutsuccessorindestination,
499 499 self.obsoletewithoutsuccessorindestination,
500 500 )
501 501 for rev in sortedrevs:
502 502 self._rebasenode(tr, rev, allowdivergence, progress)
503 503 p.complete()
504 504 ui.note(_(b'rebase merging completed\n'))
505 505
506 506 def _concludenode(self, rev, p1, p2, editor, commitmsg=None):
507 507 '''Commit the wd changes with parents p1 and p2.
508 508
509 509 Reuse commit info from rev but also store useful information in extra.
510 510 Return node of committed revision.'''
511 511 repo = self.repo
512 512 ctx = repo[rev]
513 513 if commitmsg is None:
514 514 commitmsg = ctx.description()
515 515 date = self.date
516 516 if date is None:
517 517 date = ctx.date()
518 518 extra = {b'rebase_source': ctx.hex()}
519 519 for c in self.extrafns:
520 520 c(ctx, extra)
521 521 keepbranch = self.keepbranchesf and repo[p1].branch() != ctx.branch()
522 522 destphase = max(ctx.phase(), phases.draft)
523 523 overrides = {(b'phases', b'new-commit'): destphase}
524 524 if keepbranch:
525 525 overrides[(b'ui', b'allowemptycommit')] = True
526 526 with repo.ui.configoverride(overrides, b'rebase'):
527 527 if self.inmemory:
528 528 newnode = commitmemorynode(
529 529 repo,
530 530 p1,
531 531 p2,
532 532 wctx=self.wctx,
533 533 extra=extra,
534 534 commitmsg=commitmsg,
535 535 editor=editor,
536 536 user=ctx.user(),
537 537 date=date,
538 538 )
539 539 mergemod.mergestate.clean(repo)
540 540 else:
541 541 newnode = commitnode(
542 542 repo,
543 543 p1,
544 544 p2,
545 545 extra=extra,
546 546 commitmsg=commitmsg,
547 547 editor=editor,
548 548 user=ctx.user(),
549 549 date=date,
550 550 )
551 551
552 552 if newnode is None:
553 553 # If it ended up being a no-op commit, then the normal
554 554 # merge state clean-up path doesn't happen, so do it
555 555 # here. Fix issue5494
556 556 mergemod.mergestate.clean(repo)
557 557 return newnode
558 558
559 559 def _rebasenode(self, tr, rev, allowdivergence, progressfn):
560 560 repo, ui, opts = self.repo, self.ui, self.opts
561 561 dest = self.destmap[rev]
562 562 ctx = repo[rev]
563 563 desc = _ctxdesc(ctx)
564 564 if self.state[rev] == rev:
565 565 ui.status(_(b'already rebased %s\n') % desc)
566 566 elif (
567 567 not allowdivergence
568 568 and rev in self.obsoletewithoutsuccessorindestination
569 569 ):
570 570 msg = (
571 571 _(
572 572 b'note: not rebasing %s and its descendants as '
573 573 b'this would cause divergence\n'
574 574 )
575 575 % desc
576 576 )
577 577 repo.ui.status(msg)
578 578 self.skipped.add(rev)
579 579 elif rev in self.obsoletenotrebased:
580 580 succ = self.obsoletenotrebased[rev]
581 581 if succ is None:
582 582 msg = _(b'note: not rebasing %s, it has no successor\n') % desc
583 583 else:
584 584 succdesc = _ctxdesc(repo[succ])
585 585 msg = _(
586 586 b'note: not rebasing %s, already in destination as %s\n'
587 587 ) % (desc, succdesc)
588 588 repo.ui.status(msg)
589 589 # Make clearrebased aware that state[rev] is not a true successor
590 590 self.skipped.add(rev)
591 591 # Record rev as moved to its desired destination in self.state.
592 592 # This helps bookmark and working parent movement.
593 593 dest = max(
594 594 adjustdest(repo, rev, self.destmap, self.state, self.skipped)
595 595 )
596 596 self.state[rev] = dest
597 597 elif self.state[rev] == revtodo:
598 598 ui.status(_(b'rebasing %s\n') % desc)
599 599 progressfn(ctx)
600 600 p1, p2, base = defineparents(
601 601 repo,
602 602 rev,
603 603 self.destmap,
604 604 self.state,
605 605 self.skipped,
606 606 self.obsoletenotrebased,
607 607 )
608 608 if not self.inmemory and len(repo[None].parents()) == 2:
609 609 repo.ui.debug(b'resuming interrupted rebase\n')
610 610 else:
611 611 overrides = {(b'ui', b'forcemerge'): opts.get(b'tool', b'')}
612 612 with ui.configoverride(overrides, b'rebase'):
613 613 stats = rebasenode(
614 614 repo,
615 615 rev,
616 616 p1,
617 617 base,
618 618 self.collapsef,
619 619 dest,
620 620 wctx=self.wctx,
621 621 )
622 622 if stats.unresolvedcount > 0:
623 623 if self.inmemory:
624 624 raise error.InMemoryMergeConflictsError()
625 625 else:
626 626 raise error.InterventionRequired(
627 627 _(
628 628 b'unresolved conflicts (see hg '
629 629 b'resolve, then hg rebase --continue)'
630 630 )
631 631 )
632 632 if not self.collapsef:
633 633 merging = p2 != nullrev
634 634 editform = cmdutil.mergeeditform(merging, b'rebase')
635 635 editor = cmdutil.getcommiteditor(
636 636 editform=editform, **pycompat.strkwargs(opts)
637 637 )
638 638 newnode = self._concludenode(rev, p1, p2, editor)
639 639 else:
640 640 # Skip commit if we are collapsing
641 641 if self.inmemory:
642 642 self.wctx.setbase(repo[p1])
643 643 else:
644 644 repo.setparents(repo[p1].node())
645 645 newnode = None
646 646 # Update the state
647 647 if newnode is not None:
648 648 self.state[rev] = repo[newnode].rev()
649 649 ui.debug(b'rebased as %s\n' % short(newnode))
650 650 else:
651 651 if not self.collapsef:
652 652 ui.warn(
653 653 _(
654 654 b'note: not rebasing %s, its destination already '
655 655 b'has all its changes\n'
656 656 )
657 657 % desc
658 658 )
659 659 self.skipped.add(rev)
660 660 self.state[rev] = p1
661 661 ui.debug(b'next revision set to %d\n' % p1)
662 662 else:
663 663 ui.status(
664 664 _(b'already rebased %s as %s\n') % (desc, repo[self.state[rev]])
665 665 )
666 666 if not tr:
667 667 # When not using single transaction, store state after each
668 668 # commit is completely done. On InterventionRequired, we thus
669 669 # won't store the status. Instead, we'll hit the "len(parents) == 2"
670 670 # case and realize that the commit was in progress.
671 671 self.storestatus()
672 672
673 673 def _finishrebase(self):
674 674 repo, ui, opts = self.repo, self.ui, self.opts
675 675 fm = ui.formatter(b'rebase', opts)
676 676 fm.startitem()
677 677 if self.collapsef:
678 678 p1, p2, _base = defineparents(
679 679 repo,
680 680 min(self.state),
681 681 self.destmap,
682 682 self.state,
683 683 self.skipped,
684 684 self.obsoletenotrebased,
685 685 )
686 686 editopt = opts.get(b'edit')
687 687 editform = b'rebase.collapse'
688 688 if self.collapsemsg:
689 689 commitmsg = self.collapsemsg
690 690 else:
691 691 commitmsg = b'Collapsed revision'
692 692 for rebased in sorted(self.state):
693 693 if rebased not in self.skipped:
694 694 commitmsg += b'\n* %s' % repo[rebased].description()
695 695 editopt = True
696 696 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
697 697 revtoreuse = max(self.state)
698 698
699 699 newnode = self._concludenode(
700 700 revtoreuse, p1, self.external, editor, commitmsg=commitmsg
701 701 )
702 702
703 703 if newnode is not None:
704 704 newrev = repo[newnode].rev()
705 705 for oldrev in self.state:
706 706 self.state[oldrev] = newrev
707 707
708 708 if b'qtip' in repo.tags():
709 709 updatemq(repo, self.state, self.skipped, **pycompat.strkwargs(opts))
710 710
711 711 # restore original working directory
712 712 # (we do this before stripping)
713 713 newwd = self.state.get(self.originalwd, self.originalwd)
714 714 if newwd < 0:
715 715 # original directory is a parent of rebase set root or ignored
716 716 newwd = self.originalwd
717 717 if newwd not in [c.rev() for c in repo[None].parents()]:
718 718 ui.note(_(b"update back to initial working directory parent\n"))
719 719 hg.updaterepo(repo, newwd, overwrite=False)
720 720
721 721 collapsedas = None
722 722 if self.collapsef and not self.keepf:
723 723 collapsedas = newnode
724 724 clearrebased(
725 725 ui,
726 726 repo,
727 727 self.destmap,
728 728 self.state,
729 729 self.skipped,
730 730 collapsedas,
731 731 self.keepf,
732 732 fm=fm,
733 733 backup=self.backupf,
734 734 )
735 735
736 736 clearstatus(repo)
737 737 clearcollapsemsg(repo)
738 738
739 739 ui.note(_(b"rebase completed\n"))
740 740 util.unlinkpath(repo.sjoin(b'undo'), ignoremissing=True)
741 741 if self.skipped:
742 742 skippedlen = len(self.skipped)
743 743 ui.note(_(b"%d revisions have been skipped\n") % skippedlen)
744 744 fm.end()
745 745
746 746 if (
747 747 self.activebookmark
748 748 and self.activebookmark in repo._bookmarks
749 749 and repo[b'.'].node() == repo._bookmarks[self.activebookmark]
750 750 ):
751 751 bookmarks.activate(repo, self.activebookmark)
752 752
753 753 def _abort(self, backup=True, suppwarns=False):
754 754 '''Restore the repository to its original state.'''
755 755
756 756 repo = self.repo
757 757 try:
758 758 # If the first commits in the rebased set get skipped during the
759 759 # rebase, their values within the state mapping will be the dest
760 760 # rev id. The rebased list must not contain the dest rev
761 761 # (issue4896)
762 762 rebased = [
763 763 s
764 764 for r, s in self.state.items()
765 765 if s >= 0 and s != r and s != self.destmap[r]
766 766 ]
767 767 immutable = [d for d in rebased if not repo[d].mutable()]
768 768 cleanup = True
769 769 if immutable:
770 770 repo.ui.warn(
771 771 _(b"warning: can't clean up public changesets %s\n")
772 772 % b', '.join(bytes(repo[r]) for r in immutable),
773 773 hint=_(b"see 'hg help phases' for details"),
774 774 )
775 775 cleanup = False
776 776
777 777 descendants = set()
778 778 if rebased:
779 779 descendants = set(repo.changelog.descendants(rebased))
780 780 if descendants - set(rebased):
781 781 repo.ui.warn(
782 782 _(
783 783 b"warning: new changesets detected on "
784 784 b"destination branch, can't strip\n"
785 785 )
786 786 )
787 787 cleanup = False
788 788
789 789 if cleanup:
790 790 if rebased:
791 791 strippoints = [
792 792 c.node() for c in repo.set(b'roots(%ld)', rebased)
793 793 ]
794 794
795 795 updateifonnodes = set(rebased)
796 796 updateifonnodes.update(self.destmap.values())
797 797 updateifonnodes.add(self.originalwd)
798 798 shouldupdate = repo[b'.'].rev() in updateifonnodes
799 799
800 800 # Update away from the rebase if necessary
801 801 if shouldupdate:
802 802 mergemod.update(
803 803 repo, self.originalwd, branchmerge=False, force=True
804 804 )
805 805
806 806 # Strip from the first rebased revision
807 807 if rebased:
808 808 repair.strip(repo.ui, repo, strippoints, backup=backup)
809 809
810 810 if self.activebookmark and self.activebookmark in repo._bookmarks:
811 811 bookmarks.activate(repo, self.activebookmark)
812 812
813 813 finally:
814 814 clearstatus(repo)
815 815 clearcollapsemsg(repo)
816 816 if not suppwarns:
817 817 repo.ui.warn(_(b'rebase aborted\n'))
818 818 return 0
819 819
820 820
821 821 @command(
822 822 b'rebase',
823 823 [
824 824 (
825 825 b's',
826 826 b'source',
827 827 b'',
828 828 _(b'rebase the specified changeset and descendants'),
829 829 _(b'REV'),
830 830 ),
831 831 (
832 832 b'b',
833 833 b'base',
834 834 b'',
835 835 _(b'rebase everything from branching point of specified changeset'),
836 836 _(b'REV'),
837 837 ),
838 838 (b'r', b'rev', [], _(b'rebase these revisions'), _(b'REV')),
839 839 (
840 840 b'd',
841 841 b'dest',
842 842 b'',
843 843 _(b'rebase onto the specified changeset'),
844 844 _(b'REV'),
845 845 ),
846 846 (b'', b'collapse', False, _(b'collapse the rebased changesets')),
847 847 (
848 848 b'm',
849 849 b'message',
850 850 b'',
851 851 _(b'use text as collapse commit message'),
852 852 _(b'TEXT'),
853 853 ),
854 854 (b'e', b'edit', False, _(b'invoke editor on commit messages')),
855 855 (
856 856 b'l',
857 857 b'logfile',
858 858 b'',
859 859 _(b'read collapse commit message from file'),
860 860 _(b'FILE'),
861 861 ),
862 862 (b'k', b'keep', False, _(b'keep original changesets')),
863 863 (b'', b'keepbranches', False, _(b'keep original branch names')),
864 864 (b'D', b'detach', False, _(b'(DEPRECATED)')),
865 865 (b'i', b'interactive', False, _(b'(DEPRECATED)')),
866 866 (b't', b'tool', b'', _(b'specify merge tool')),
867 867 (b'', b'stop', False, _(b'stop interrupted rebase')),
868 868 (b'c', b'continue', False, _(b'continue an interrupted rebase')),
869 869 (b'a', b'abort', False, _(b'abort an interrupted rebase')),
870 870 (
871 871 b'',
872 872 b'auto-orphans',
873 873 b'',
874 874 _(
875 875 b'automatically rebase orphan revisions '
876 876 b'in the specified revset (EXPERIMENTAL)'
877 877 ),
878 878 ),
879 879 ]
880 880 + cmdutil.dryrunopts
881 881 + cmdutil.formatteropts
882 882 + cmdutil.confirmopts,
883 883 _(b'[-s REV | -b REV] [-d REV] [OPTION]'),
884 884 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT,
885 885 )
886 886 def rebase(ui, repo, **opts):
887 887 """move changeset (and descendants) to a different branch
888 888
889 889 Rebase uses repeated merging to graft changesets from one part of
890 890 history (the source) onto another (the destination). This can be
891 891 useful for linearizing *local* changes relative to a master
892 892 development tree.
893 893
894 894 Published commits cannot be rebased (see :hg:`help phases`).
895 895 To copy commits, see :hg:`help graft`.
896 896
897 897 If you don't specify a destination changeset (``-d/--dest``), rebase
898 898 will use the same logic as :hg:`merge` to pick a destination: if
899 899 the current branch contains exactly one other head, that head is
900 900 merged with by default. Otherwise, an explicit revision to merge
901 901 with must be provided. (The destination changeset is not modified
902 902 by rebasing, but new changesets are added as its
903 903 descendants.)
904 904
905 905 Here are the ways to select changesets:
906 906
907 907 1. Explicitly select them using ``--rev``.
908 908
909 909 2. Use ``--source`` to select a root changeset and include all of its
910 910 descendants.
911 911
912 912 3. Use ``--base`` to select a changeset; rebase will find ancestors
913 913 and their descendants which are not also ancestors of the destination.
914 914
915 915 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
916 916 rebase will use ``--base .`` as above.
917 917
918 918 If ``--source`` or ``--rev`` is used, special names ``SRC`` and ``ALLSRC``
919 919 can be used in ``--dest``. Destination would be calculated per source
920 920 revision with ``SRC`` substituted by that single source revision and
921 921 ``ALLSRC`` substituted by all source revisions.
922 922
923 923 Rebase will destroy the original changesets unless you use ``--keep``.
924 924 It will also move your bookmarks (even if you do use ``--keep``).
925 925
926 926 Some changesets may be dropped if they do not contribute changes
927 927 (e.g. merges from the destination branch).
928 928
929 929 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
930 930 a named branch with two heads. You will need to explicitly specify source
931 931 and/or destination.
932 932
933 933 If you need to use a tool to automate merge/conflict decisions, you
934 934 can specify one with ``--tool``; see :hg:`help merge-tools`.
935 935 As a caveat: the tool will not be used to mediate when a file was
936 936 deleted; there is no hook presently available for this.
937 937
938 938 If a rebase is interrupted to manually resolve a conflict, it can be
939 939 continued with --continue/-c, aborted with --abort/-a, or stopped with
940 940 --stop.
941 941
942 942 .. container:: verbose
943 943
944 944 Examples:
945 945
946 946 - move "local changes" (current commit back to branching point)
947 947 to the current branch tip after a pull::
948 948
949 949 hg rebase
950 950
951 951 - move a single changeset to the stable branch::
952 952
953 953 hg rebase -r 5f493448 -d stable
954 954
955 955 - splice a commit and all its descendants onto another part of history::
956 956
957 957 hg rebase --source c0c3 --dest 4cf9
958 958
959 959 - rebase everything on a branch marked by a bookmark onto the
960 960 default branch::
961 961
962 962 hg rebase --base myfeature --dest default
963 963
964 964 - collapse a sequence of changes into a single commit::
965 965
966 966 hg rebase --collapse -r 1520:1525 -d .
967 967
968 968 - move a named branch while preserving its name::
969 969
970 970 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
971 971
972 972 - stabilize orphaned changesets so history looks linear::
973 973
974 974 hg rebase -r 'orphan()-obsolete()'\
975 975 -d 'first(max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::) +\
976 976 max(::((roots(ALLSRC) & ::SRC)^)-obsolete()))'
977 977
978 978 Configuration Options:
979 979
980 980 You can make rebase require a destination if you set the following config
981 981 option::
982 982
983 983 [commands]
984 984 rebase.requiredest = True
985 985
986 986 By default, rebase will close the transaction after each commit. For
987 987 performance purposes, you can configure rebase to use a single transaction
988 988 across the entire rebase. WARNING: This setting introduces a significant
989 989 risk of losing the work you've done in a rebase if the rebase aborts
990 990 unexpectedly::
991 991
992 992 [rebase]
993 993 singletransaction = True
994 994
995 995 By default, rebase writes to the working copy, but you can configure it to
996 996 run in-memory for better performance. When the rebase is not moving the
997 997 parent(s) of the working copy (AKA the "currently checked out changesets"),
998 998 this may also allow it to run even if the working copy is dirty::
999 999
1000 1000 [rebase]
1001 1001 experimental.inmemory = True
1002 1002
1003 1003 Return Values:
1004 1004
1005 1005 Returns 0 on success, 1 if nothing to rebase or there are
1006 1006 unresolved conflicts.
1007 1007
1008 1008 """
1009 1009 opts = pycompat.byteskwargs(opts)
1010 1010 inmemory = ui.configbool(b'rebase', b'experimental.inmemory')
1011 1011 action = cmdutil.check_at_most_one_arg(opts, b'abort', b'stop', b'continue')
1012 1012 if action:
1013 1013 cmdutil.check_incompatible_arguments(
1014 1014 opts, action, b'confirm', b'dry_run'
1015 1015 )
1016 1016 cmdutil.check_incompatible_arguments(
1017 1017 opts, action, b'rev', b'source', b'base', b'dest'
1018 1018 )
1019 1019 cmdutil.check_at_most_one_arg(opts, b'confirm', b'dry_run')
1020 1020 cmdutil.check_at_most_one_arg(opts, b'rev', b'source', b'base')
1021 1021
1022 1022 if action or repo.currenttransaction() is not None:
1023 1023 # In-memory rebase is not compatible with resuming rebases. It is
1024 1024 # also incompatible with running within an existing transaction,
1025 1025 # since the restart logic can fail the entire transaction.
1026 1026 inmemory = False
1027 1027
1028 1028 if opts.get(b'auto_orphans'):
1029 1029 disallowed_opts = set(opts) - {b'auto_orphans'}
1030 1030 cmdutil.check_incompatible_arguments(
1031 1031 opts, b'auto_orphans', *disallowed_opts
1032 1032 )
1033 1033
1034 1034 userrevs = list(repo.revs(opts.get(b'auto_orphans')))
1035 1035 opts[b'rev'] = [revsetlang.formatspec(b'%ld and orphan()', userrevs)]
1036 1036 opts[b'dest'] = b'_destautoorphanrebase(SRC)'
1037 1037
1038 1038 if opts.get(b'dry_run') or opts.get(b'confirm'):
1039 1039 return _dryrunrebase(ui, repo, action, opts)
1040 1040 elif action == b'stop':
1041 1041 rbsrt = rebaseruntime(repo, ui)
1042 1042 with repo.wlock(), repo.lock():
1043 1043 rbsrt.restorestatus()
1044 1044 if rbsrt.collapsef:
1045 1045 raise error.Abort(_(b"cannot stop in --collapse session"))
1046 1046 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
1047 1047 if not (rbsrt.keepf or allowunstable):
1048 1048 raise error.Abort(
1049 1049 _(
1050 1050 b"cannot remove original changesets with"
1051 1051 b" unrebased descendants"
1052 1052 ),
1053 1053 hint=_(
1054 1054 b'either enable obsmarkers to allow unstable '
1055 1055 b'revisions or use --keep to keep original '
1056 1056 b'changesets'
1057 1057 ),
1058 1058 )
1059 1059 # update to the current working revision
1060 1060 # to clear interrupted merge
1061 1061 hg.updaterepo(repo, rbsrt.originalwd, overwrite=True)
1062 1062 rbsrt._finishrebase()
1063 1063 return 0
1064 1064 elif inmemory:
1065 1065 try:
1066 1066 # in-memory merge doesn't support conflicts, so if we hit any, abort
1067 1067 # and re-run as an on-disk merge.
1068 1068 overrides = {(b'rebase', b'singletransaction'): True}
1069 1069 with ui.configoverride(overrides, b'rebase'):
1070 1070 return _dorebase(ui, repo, action, opts, inmemory=inmemory)
1071 1071 except error.InMemoryMergeConflictsError:
1072 1072 ui.warn(
1073 1073 _(
1074 1074 b'hit merge conflicts; re-running rebase without in-memory'
1075 1075 b' merge\n'
1076 1076 )
1077 1077 )
1078 1078 # TODO: Make in-memory merge not use the on-disk merge state, so
1079 1079 # we don't have to clean it here
1080 1080 mergemod.mergestate.clean(repo)
1081 1081 clearstatus(repo)
1082 1082 clearcollapsemsg(repo)
1083 1083 return _dorebase(ui, repo, action, opts, inmemory=False)
1084 1084 else:
1085 1085 return _dorebase(ui, repo, action, opts)
1086 1086
1087 1087
1088 1088 def _dryrunrebase(ui, repo, action, opts):
1089 1089 rbsrt = rebaseruntime(repo, ui, inmemory=True, opts=opts)
1090 1090 confirm = opts.get(b'confirm')
1091 1091 if confirm:
1092 1092 ui.status(_(b'starting in-memory rebase\n'))
1093 1093 else:
1094 1094 ui.status(
1095 1095 _(b'starting dry-run rebase; repository will not be changed\n')
1096 1096 )
1097 1097 with repo.wlock(), repo.lock():
1098 1098 needsabort = True
1099 1099 try:
1100 1100 overrides = {(b'rebase', b'singletransaction'): True}
1101 1101 with ui.configoverride(overrides, b'rebase'):
1102 1102 _origrebase(
1103 1103 ui,
1104 1104 repo,
1105 1105 action,
1106 1106 opts,
1107 1107 rbsrt,
1108 1108 inmemory=True,
1109 1109 leaveunfinished=True,
1110 1110 )
1111 1111 except error.InMemoryMergeConflictsError:
1112 1112 ui.status(_(b'hit a merge conflict\n'))
1113 1113 return 1
1114 1114 except error.Abort:
1115 1115 needsabort = False
1116 1116 raise
1117 1117 else:
1118 1118 if confirm:
1119 1119 ui.status(_(b'rebase completed successfully\n'))
1120 1120 if not ui.promptchoice(_(b'apply changes (yn)?$$ &Yes $$ &No')):
1121 1121 # finish unfinished rebase
1122 1122 rbsrt._finishrebase()
1123 1123 else:
1124 1124 rbsrt._prepareabortorcontinue(
1125 1125 isabort=True, backup=False, suppwarns=True
1126 1126 )
1127 1127 needsabort = False
1128 1128 else:
1129 1129 ui.status(
1130 1130 _(
1131 1131 b'dry-run rebase completed successfully; run without'
1132 1132 b' -n/--dry-run to perform this rebase\n'
1133 1133 )
1134 1134 )
1135 1135 return 0
1136 1136 finally:
1137 1137 if needsabort:
1138 1138 # no need to store backup in case of dryrun
1139 1139 rbsrt._prepareabortorcontinue(
1140 1140 isabort=True, backup=False, suppwarns=True
1141 1141 )
1142 1142
1143 1143
1144 1144 def _dorebase(ui, repo, action, opts, inmemory=False):
1145 1145 rbsrt = rebaseruntime(repo, ui, inmemory, opts)
1146 1146 return _origrebase(ui, repo, action, opts, rbsrt, inmemory=inmemory)
1147 1147
1148 1148
1149 1149 def _origrebase(
1150 1150 ui, repo, action, opts, rbsrt, inmemory=False, leaveunfinished=False
1151 1151 ):
1152 1152 assert action != b'stop'
1153 1153 with repo.wlock(), repo.lock():
1154 1154 if opts.get(b'interactive'):
1155 1155 try:
1156 1156 if extensions.find(b'histedit'):
1157 1157 enablehistedit = b''
1158 1158 except KeyError:
1159 1159 enablehistedit = b" --config extensions.histedit="
1160 1160 help = b"hg%s help -e histedit" % enablehistedit
1161 1161 msg = (
1162 1162 _(
1163 1163 b"interactive history editing is supported by the "
1164 1164 b"'histedit' extension (see \"%s\")"
1165 1165 )
1166 1166 % help
1167 1167 )
1168 1168 raise error.Abort(msg)
1169 1169
1170 1170 if rbsrt.collapsemsg and not rbsrt.collapsef:
1171 1171 raise error.Abort(_(b'message can only be specified with collapse'))
1172 1172
1173 1173 if action:
1174 1174 if rbsrt.collapsef:
1175 1175 raise error.Abort(
1176 1176 _(b'cannot use collapse with continue or abort')
1177 1177 )
1178 1178 if action == b'abort' and opts.get(b'tool', False):
1179 1179 ui.warn(_(b'tool option will be ignored\n'))
1180 1180 if action == b'continue':
1181 1181 ms = mergemod.mergestate.read(repo)
1182 1182 mergeutil.checkunresolved(ms)
1183 1183
1184 1184 retcode = rbsrt._prepareabortorcontinue(
1185 1185 isabort=(action == b'abort')
1186 1186 )
1187 1187 if retcode is not None:
1188 1188 return retcode
1189 1189 else:
1190 1190 # search default destination in this space
1191 1191 # used in the 'hg pull --rebase' case, see issue 5214.
1192 1192 destspace = opts.get(b'_destspace')
1193 1193 destmap = _definedestmap(
1194 1194 ui,
1195 1195 repo,
1196 1196 inmemory,
1197 1197 opts.get(b'dest', None),
1198 1198 opts.get(b'source', None),
1199 1199 opts.get(b'base', None),
1200 1200 opts.get(b'rev', []),
1201 1201 destspace=destspace,
1202 1202 )
1203 1203 retcode = rbsrt._preparenewrebase(destmap)
1204 1204 if retcode is not None:
1205 1205 return retcode
1206 1206 storecollapsemsg(repo, rbsrt.collapsemsg)
1207 1207
1208 1208 tr = None
1209 1209
1210 1210 singletr = ui.configbool(b'rebase', b'singletransaction')
1211 1211 if singletr:
1212 1212 tr = repo.transaction(b'rebase')
1213 1213
1214 1214 # If `rebase.singletransaction` is enabled, wrap the entire operation in
1215 1215 # one transaction here. Otherwise, transactions are obtained when
1216 1216 # committing each node, which is slower but allows partial success.
1217 1217 with util.acceptintervention(tr):
1218 1218 # Same logic for the dirstate guard, except we don't create one when
1219 1219 # rebasing in-memory (it's not needed).
1220 1220 dsguard = None
1221 1221 if singletr and not inmemory:
1222 1222 dsguard = dirstateguard.dirstateguard(repo, b'rebase')
1223 1223 with util.acceptintervention(dsguard):
1224 1224 rbsrt._performrebase(tr)
1225 1225 if not leaveunfinished:
1226 1226 rbsrt._finishrebase()
1227 1227
1228 1228
1229 1229 def _definedestmap(
1230 1230 ui,
1231 1231 repo,
1232 1232 inmemory,
1233 1233 destf=None,
1234 1234 srcf=None,
1235 1235 basef=None,
1236 1236 revf=None,
1237 1237 destspace=None,
1238 1238 ):
1239 1239 """use revisions argument to define destmap {srcrev: destrev}"""
1240 1240 if revf is None:
1241 1241 revf = []
1242 1242
1243 1243 # destspace is here to work around issues with `hg pull --rebase`;
1244 1244 # see issue5214 for details
1245 1245
1246 1246 cmdutil.checkunfinished(repo)
1247 1247 if not inmemory:
1248 1248 cmdutil.bailifchanged(repo)
1249 1249
1250 1250 if ui.configbool(b'commands', b'rebase.requiredest') and not destf:
1251 1251 raise error.Abort(
1252 1252 _(b'you must specify a destination'),
1253 1253 hint=_(b'use: hg rebase -d REV'),
1254 1254 )
1255 1255
1256 1256 dest = None
1257 1257
1258 1258 if revf:
1259 1259 rebaseset = scmutil.revrange(repo, revf)
1260 1260 if not rebaseset:
1261 1261 ui.status(_(b'empty "rev" revision set - nothing to rebase\n'))
1262 1262 return None
1263 1263 elif srcf:
1264 1264 src = scmutil.revrange(repo, [srcf])
1265 1265 if not src:
1266 1266 ui.status(_(b'empty "source" revision set - nothing to rebase\n'))
1267 1267 return None
1268 1268 rebaseset = repo.revs(b'(%ld)::', src)
1269 1269 assert rebaseset
1270 1270 else:
1271 1271 base = scmutil.revrange(repo, [basef or b'.'])
1272 1272 if not base:
1273 1273 ui.status(
1274 1274 _(b'empty "base" revision set - ' b"can't compute rebase set\n")
1275 1275 )
1276 1276 return None
1277 1277 if destf:
1278 1278 # --base does not support multiple destinations
1279 1279 dest = scmutil.revsingle(repo, destf)
1280 1280 else:
1281 1281 dest = repo[_destrebase(repo, base, destspace=destspace)]
1282 1282 destf = bytes(dest)
1283 1283
1284 1284 roots = [] # selected children of branching points
1285 1285 bpbase = {} # {branchingpoint: [origbase]}
1286 1286 for b in base: # group bases by branching points
1287 1287 bp = repo.revs(b'ancestor(%d, %d)', b, dest.rev()).first()
1288 1288 bpbase[bp] = bpbase.get(bp, []) + [b]
1289 1289 if None in bpbase:
1290 1290 # emulate the old behavior, showing "nothing to rebase" (a better
1291 1291 # behavior may be to abort with a "cannot find branching point" error)
1292 1292 bpbase.clear()
1293 1293 for bp, bs in pycompat.iteritems(bpbase): # calculate roots
1294 1294 roots += list(repo.revs(b'children(%d) & ancestors(%ld)', bp, bs))
1295 1295
1296 1296 rebaseset = repo.revs(b'%ld::', roots)
1297 1297
1298 1298 if not rebaseset:
1299 1299 # transform to list because smartsets are not comparable to
1300 1300 # lists. This should be improved to honor laziness of
1301 1301 # smartset.
1302 1302 if list(base) == [dest.rev()]:
1303 1303 if basef:
1304 1304 ui.status(
1305 1305 _(
1306 1306 b'nothing to rebase - %s is both "base"'
1307 1307 b' and destination\n'
1308 1308 )
1309 1309 % dest
1310 1310 )
1311 1311 else:
1312 1312 ui.status(
1313 1313 _(
1314 1314 b'nothing to rebase - working directory '
1315 1315 b'parent is also destination\n'
1316 1316 )
1317 1317 )
1318 1318 elif not repo.revs(b'%ld - ::%d', base, dest.rev()):
1319 1319 if basef:
1320 1320 ui.status(
1321 1321 _(
1322 1322 b'nothing to rebase - "base" %s is '
1323 1323 b'already an ancestor of destination '
1324 1324 b'%s\n'
1325 1325 )
1326 1326 % (b'+'.join(bytes(repo[r]) for r in base), dest)
1327 1327 )
1328 1328 else:
1329 1329 ui.status(
1330 1330 _(
1331 1331 b'nothing to rebase - working '
1332 1332 b'directory parent is already an '
1333 1333 b'ancestor of destination %s\n'
1334 1334 )
1335 1335 % dest
1336 1336 )
1337 1337 else: # can it happen?
1338 1338 ui.status(
1339 1339 _(b'nothing to rebase from %s to %s\n')
1340 1340 % (b'+'.join(bytes(repo[r]) for r in base), dest)
1341 1341 )
1342 1342 return None
1343 1343
1344 1344 rebasingwcp = repo[b'.'].rev() in rebaseset
1345 1345 ui.log(
1346 1346 b"rebase",
1347 1347 b"rebasing working copy parent: %r\n",
1348 1348 rebasingwcp,
1349 1349 rebase_rebasing_wcp=rebasingwcp,
1350 1350 )
1351 1351 if inmemory and rebasingwcp:
1352 1352 # Check these since we did not before.
1353 1353 cmdutil.checkunfinished(repo)
1354 1354 cmdutil.bailifchanged(repo)
1355 1355
1356 1356 if not destf:
1357 1357 dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
1358 1358 destf = bytes(dest)
1359 1359
1360 1360 allsrc = revsetlang.formatspec(b'%ld', rebaseset)
1361 1361 alias = {b'ALLSRC': allsrc}
1362 1362
1363 1363 if dest is None:
1364 1364 try:
1365 1365 # fast path: try to resolve dest without SRC alias
1366 1366 dest = scmutil.revsingle(repo, destf, localalias=alias)
1367 1367 except error.RepoLookupError:
1368 1368 # multi-dest path: resolve dest for each SRC separately
1369 1369 destmap = {}
1370 1370 for r in rebaseset:
1371 1371 alias[b'SRC'] = revsetlang.formatspec(b'%d', r)
1372 1372 # use repo.anyrevs instead of scmutil.revsingle because we
1373 1373 # don't want to abort if destset is empty.
1374 1374 destset = repo.anyrevs([destf], user=True, localalias=alias)
1375 1375 size = len(destset)
1376 1376 if size == 1:
1377 1377 destmap[r] = destset.first()
1378 1378 elif size == 0:
1379 1379 ui.note(_(b'skipping %s - empty destination\n') % repo[r])
1380 1380 else:
1381 1381 raise error.Abort(
1382 1382 _(b'rebase destination for %s is not unique') % repo[r]
1383 1383 )
1384 1384
1385 1385 if dest is not None:
1386 1386 # single-dest case: assign dest to each rev in rebaseset
1387 1387 destrev = dest.rev()
1388 1388 destmap = {r: destrev for r in rebaseset} # {srcrev: destrev}
1389 1389
1390 1390 if not destmap:
1391 1391 ui.status(_(b'nothing to rebase - empty destination\n'))
1392 1392 return None
1393 1393
1394 1394 return destmap
1395 1395
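`_definedestmap` ultimately returns destmap as a plain `{srcrev: destrev}` dict. A toy illustration of the two shapes it can take (single-dest vs. per-source multi-dest, the latter used with the `SRC` alias), with bare integers in place of Mercurial revisions; the helper names here are hypothetical, not part of the real API:

```python
def single_dest_map(rebaseset, destrev):
    # Single-dest case: every source revision maps to the same destination.
    return {r: destrev for r in rebaseset}

def multi_dest_map(rebaseset, resolve_dest):
    # Multi-dest case: the destination is resolved per source revision,
    # e.g. by evaluating a revset with SRC substituted for each source.
    return {r: resolve_dest(r) for r in rebaseset}
```

An empty destmap is how the callers signal "nothing to rebase".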
1396 1396
1397 1397 def externalparent(repo, state, destancestors):
1398 1398 """Return the revision that should be used as the second parent
1399 1399 when the revisions in state are collapsed on top of destancestors.
1400 1400 Abort if there is more than one such external parent.
1401 1401 """
1402 1402 parents = set()
1403 1403 source = min(state)
1404 1404 for rev in state:
1405 1405 if rev == source:
1406 1406 continue
1407 1407 for p in repo[rev].parents():
1408 1408 if p.rev() not in state and p.rev() not in destancestors:
1409 1409 parents.add(p.rev())
1410 1410 if not parents:
1411 1411 return nullrev
1412 1412 if len(parents) == 1:
1413 1413 return parents.pop()
1414 1414 raise error.Abort(
1415 1415 _(
1416 1416 b'unable to collapse on top of %d, there is more '
1417 1417 b'than one external parent: %s'
1418 1418 )
1419 1419 % (max(destancestors), b', '.join(b"%d" % p for p in sorted(parents)))
1420 1420 )
1421 1421
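The rule in `externalparent` can be sketched with plain sets: walk every collapsed revision except the root, and collect parents that lie outside both the collapsed set and the destination's ancestors. A minimal illustrative sketch with hypothetical names (bare integers stand in for Mercurial revisions; this is not the real API):

```python
def external_parent(state_revs, parents_of, dest_ancestors):
    # state_revs: revisions being collapsed; parents_of: {rev: [parent revs]}
    source = min(state_revs)
    external = set()
    for rev in state_revs:
        if rev == source:
            continue  # the root's own parents are replaced by the rebase
        for p in parents_of[rev]:
            if p not in state_revs and p not in dest_ancestors:
                external.add(p)
    if len(external) > 1:
        raise ValueError('more than one external parent: %r' % sorted(external))
    return external.pop() if external else None  # None plays the nullrev role
```

At most one such parent is allowed, because the collapsed commit has only one second-parent slot.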
1422 1422
1423 1423 def commitmemorynode(repo, p1, p2, wctx, editor, extra, user, date, commitmsg):
1424 1424 '''Commit the memory changes with parents p1 and p2.
1425 1425 Return node of committed revision.'''
1426 1426 # Replicates the empty check in ``repo.commit``.
1427 1427 if wctx.isempty() and not repo.ui.configbool(b'ui', b'allowemptycommit'):
1428 1428 return None
1429 1429
1430 1430 # By convention, ``extra['branch']`` (set by extrafn) clobbers
1431 1431 # ``branch`` (used when passing ``--keepbranches``).
1432 1432 branch = None
1433 1433 if b'branch' in extra:
1434 1434 branch = extra[b'branch']
1435 1435
1436 1436 wctx.setparents(repo[p1].node(), repo[p2].node())
1437 1437 memctx = wctx.tomemctx(
1438 1438 commitmsg,
1439 1439 date=date,
1440 1440 extra=extra,
1441 1441 user=user,
1442 1442 branch=branch,
1443 1443 editor=editor,
1444 1444 )
1445 1445 commitres = repo.commitctx(memctx)
1446 1446 wctx.clean() # Might be reused
1447 1447 return commitres
1448 1448
1449 1449
1450 1450 def commitnode(repo, p1, p2, editor, extra, user, date, commitmsg):
1451 1451 '''Commit the working directory changes with parents p1 and p2.
1452 1452 Return node of committed revision.'''
1453 1453 dsguard = util.nullcontextmanager()
1454 1454 if not repo.ui.configbool(b'rebase', b'singletransaction'):
1455 1455 dsguard = dirstateguard.dirstateguard(repo, b'rebase')
1456 1456 with dsguard:
1457 1457 repo.setparents(repo[p1].node(), repo[p2].node())
1458 1458
1459 1459 # Commit might fail if unresolved files exist
1460 1460 newnode = repo.commit(
1461 1461 text=commitmsg, user=user, date=date, extra=extra, editor=editor
1462 1462 )
1463 1463
1464 1464 repo.dirstate.setbranch(repo[newnode].branch())
1465 1465 return newnode
1466 1466
1467 1467
1468 1468 def rebasenode(repo, rev, p1, base, collapse, dest, wctx):
1469 1469 """Rebase a single revision rev on top of p1 using base as the merge ancestor"""
1470 1470 # Merge phase
1471 1471 # Update to destination and merge it with local
1472 1472 if wctx.isinmemory():
1473 1473 wctx.setbase(repo[p1])
1474 1474 else:
1475 1475 if repo[b'.'].rev() != p1:
1476 1476 repo.ui.debug(b" update to %d:%s\n" % (p1, repo[p1]))
1477 1477 mergemod.update(repo, p1, branchmerge=False, force=True)
1478 1478 else:
1479 1479 repo.ui.debug(b" already in destination\n")
1480 1480 # This is, alas, necessary to invalidate workingctx's manifest cache,
1481 1481 # as well as other data we litter on it in other places.
1482 1482 wctx = repo[None]
1483 1483 repo.dirstate.write(repo.currenttransaction())
1484 1484 ctx = repo[rev]
1485 1485 repo.ui.debug(b" merge against %d:%s\n" % (rev, ctx))
1486 1486 if base is not None:
1487 1487 repo.ui.debug(b" detach base %d:%s\n" % (base, repo[base]))
1488 1488 # When collapsing in-place, the parent is the common ancestor, so we
1489 1489 # have to allow merging with it.
1490 1490 stats = mergemod.update(
1491 1491 repo,
1492 1492 rev,
1493 1493 branchmerge=True,
1494 1494 force=True,
1495 1495 ancestor=base,
1496 1496 mergeancestor=collapse,
1497 1497 labels=[b'dest', b'source'],
1498 1498 wc=wctx,
1499 1499 )
1500 destctx = repo[dest]
1501 1500 if collapse:
1502 copies.graftcopies(repo, wctx, ctx, destctx)
1501 copies.graftcopies(wctx, ctx, repo[dest])
1503 1502 else:
1504 1503 # If we're not using --collapse, we need to
1505 1504 # duplicate copies between the revision we're
1506 # rebasing and its first parent, but *not*
1507 # duplicate any copies that have already been
1508 # performed in the destination.
1509 copies.graftcopies(repo, wctx, ctx, ctx.p1(), skip=destctx)
1505 # rebasing and its first parent.
1506 copies.graftcopies(wctx, ctx, ctx.p1())
1510 1507 return stats
1511 1508
1512 1509
1513 1510 def adjustdest(repo, rev, destmap, state, skipped):
1514 1511 r"""adjust rebase destination given the current rebase state
1515 1512
1516 1513 rev is what is being rebased. Return a list of two revs, which are the
1517 1514 adjusted destinations for rev's p1 and p2, respectively. If a parent is
1518 1515 nullrev, return dest without adjustment for it.
1519 1516
1520 1517 For example, when rebasing B+E to F and C to G, rebase will first move B
1521 1518 to B1, and E's destination will be adjusted from F to B1.
1522 1519
1523 1520 B1 <- written during rebasing B
1524 1521 |
1525 1522 F <- original destination of B, E
1526 1523 |
1527 1524 | E <- rev, which is being rebased
1528 1525 | |
1529 1526 | D <- prev, one parent of rev being checked
1530 1527 | |
1531 1528 | x <- skipped, ex. no successor or successor in (::dest)
1532 1529 | |
1533 1530 | C <- rebased as C', different destination
1534 1531 | |
1535 1532 | B <- rebased as B1 C'
1536 1533 |/ |
1537 1534 A G <- destination of C, different
1538 1535
1539 1536 Another example involves a merge changeset: for rebase -r C+G+H -d K, rebase will
1540 1537 first move C to C1, G to G1, and when it's checking H, the adjusted
1541 1538 destinations will be [C1, G1].
1542 1539
1543 1540 H C1 G1
1544 1541 /| | /
1545 1542 F G |/
1546 1543 K | | -> K
1547 1544 | C D |
1548 1545 | |/ |
1549 1546 | B | ...
1550 1547 |/ |/
1551 1548 A A
1552 1549
1553 1550 Dest is also adjusted according to existing rebase information. For example,
1554 1551
1555 1552 B C D B needs to be rebased on top of C, C needs to be rebased on top
1556 1553 \|/ of D. We will rebase C first.
1557 1554 A
1558 1555
1559 1556 C' After rebasing C, when considering B's destination, use C'
1560 1557 | instead of the original C.
1561 1558 B D
1562 1559 \ /
1563 1560 A
1564 1561 """
1565 1562 # pick already rebased revs with same dest from state as interesting source
1566 1563 dest = destmap[rev]
1567 1564 source = [
1568 1565 s
1569 1566 for s, d in state.items()
1570 1567 if d > 0 and destmap[s] == dest and s not in skipped
1571 1568 ]
1572 1569
1573 1570 result = []
1574 1571 for prev in repo.changelog.parentrevs(rev):
1575 1572 adjusted = dest
1576 1573 if prev != nullrev:
1577 1574 candidate = repo.revs(b'max(%ld and (::%d))', source, prev).first()
1578 1575 if candidate is not None:
1579 1576 adjusted = state[candidate]
1580 1577 if adjusted == dest and dest in state:
1581 1578 adjusted = state[dest]
1582 1579 if adjusted == revtodo:
1583 1580 # sortsource should produce an order that makes this impossible
1584 1581 raise error.ProgrammingError(
1585 1582 b'rev %d should be rebased already at this time' % dest
1586 1583 )
1587 1584 result.append(adjusted)
1588 1585 return result
1589 1586
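One of the rules `adjustdest` applies, namely that when a parent (or the destination itself) has already been rebased, its rebased copy should be used instead, can be sketched with plain dicts. This is a much-simplified, hypothetical illustration; the real function also consults the ancestor graph via revsets:

```python
REVTODO = -1  # sentinel meaning "selected for rebase but not rebased yet"

def follow_rebased(parents, dest, state):
    # state maps an original rev to its rebased copy (or REVTODO).
    adjusted = []
    for p in parents:
        if p in state and state[p] != REVTODO:
            adjusted.append(state[p])     # parent already rebased: follow it
        elif dest in state and state[dest] != REVTODO:
            adjusted.append(state[dest])  # dest itself was rebased earlier
        else:
            adjusted.append(dest)
    return adjusted
```

For instance, once rev 2 has been rebased to rev 7, a later child of 2 should land on 7, not on the originally requested destination.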
1590 1587
1591 1588 def _checkobsrebase(repo, ui, rebaseobsrevs, rebaseobsskipped):
1592 1589 """
1593 1590 Abort if rebase will create divergence or rebase is noop because of markers
1594 1591
1596 1593 `rebaseobsrevs`: set of obsolete revisions in source
1596 1593 `rebaseobsskipped`: set of revisions from source skipped because they have
1597 1594 successors in destination or no non-obsolete successor.
1598 1595 """
1599 1596 # Obsolete node with successors not in dest leads to divergence
1600 1597 divergenceok = ui.configbool(b'experimental', b'evolution.allowdivergence')
1601 1598 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
1602 1599
1603 1600 if divergencebasecandidates and not divergenceok:
1604 1601 divhashes = (bytes(repo[r]) for r in divergencebasecandidates)
1605 1602 msg = _(b"this rebase will cause divergences from: %s")
1606 1603 h = _(
1607 1604 b"to force the rebase please set "
1608 1605 b"experimental.evolution.allowdivergence=True"
1609 1606 )
1610 1607 raise error.Abort(msg % (b",".join(divhashes),), hint=h)
1611 1608
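The divergence test in `_checkobsrebase` is pure set arithmetic: any obsolete source revision that is rebased anyway (i.e. not skipped) would end up with successors both from existing obsmarkers and from the rebase itself. A toy sketch with hypothetical names:

```python
def divergence_candidates(obsolete_in_source, skipped):
    # Obsolete revisions that will actually be rebased; each would gain a
    # second set of successors, which is what "divergence" means here.
    return obsolete_in_source - skipped
```

If this set is non-empty and `experimental.evolution.allowdivergence` is off, the rebase aborts.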
1612 1609
1613 1610 def successorrevs(unfi, rev):
1614 1611 """yield revision numbers for successors of rev"""
1615 1612 assert unfi.filtername is None
1616 1613 get_rev = unfi.changelog.index.get_rev
1617 1614 for s in obsutil.allsuccessors(unfi.obsstore, [unfi[rev].node()]):
1618 1615 r = get_rev(s)
1619 1616 if r is not None:
1620 1617 yield r
1621 1618
1622 1619
1623 1620 def defineparents(repo, rev, destmap, state, skipped, obsskipped):
1624 1621 """Return new parents and optionally a merge base for rev being rebased
1625 1622
1626 1623 The destination specified by "dest" cannot always be used directly because
1627 1624 previously rebase result could affect destination. For example,
1628 1625
1629 1626 D E rebase -r C+D+E -d B
1630 1627 |/ C will be rebased to C'
1631 1628 B C D's new destination will be C' instead of B
1632 1629 |/ E's new destination will be C' instead of B
1633 1630 A
1634 1631
1635 1632 The new parents of a merge are slightly more complicated. See the comment
1636 1633 block below.
1637 1634 """
1638 1635 # use unfiltered changelog since successorrevs may return filtered nodes
1639 1636 assert repo.filtername is None
1640 1637 cl = repo.changelog
1641 1638 isancestor = cl.isancestorrev
1642 1639
1643 1640 dest = destmap[rev]
1644 1641 oldps = repo.changelog.parentrevs(rev) # old parents
1645 1642 newps = [nullrev, nullrev] # new parents
1646 1643 dests = adjustdest(repo, rev, destmap, state, skipped)
1647 1644 bases = list(oldps) # merge base candidates, initially just old parents
1648 1645
1649 1646 if all(r == nullrev for r in oldps[1:]):
1650 1647 # For non-merge changeset, just move p to adjusted dest as requested.
1651 1648 newps[0] = dests[0]
1652 1649 else:
1653 1650 # For merge changeset, if we move p to dests[i] unconditionally, both
1654 1651 # parents may change and the end result looks like "the merge loses a
1655 1652 # parent", which is a surprise. This is a limitation because "--dest" only
1656 1653 # accepts one dest per src.
1657 1654 #
1658 1655 # Therefore, only move p with reasonable conditions (in this order):
1659 1656 # 1. use dest, if dest is a descendant of (p or one of p's successors)
1660 1657 # 2. use p's rebased result, if p is rebased (state[p] > 0)
1661 1658 #
1662 1659 # Comparing with adjustdest, the logic here does some additional work:
1663 1660 # 1. decide which parents will not be moved towards dest
1664 1661 # 2. if the above decision is "no", should a parent still be moved
1665 1662 # because it was rebased?
1666 1663 #
1667 1664 # For example:
1668 1665 #
1669 1666 # C # "rebase -r C -d D" is an error since none of the parents
1670 1667 # /| # can be moved. "rebase -r B+C -d D" will move C's parent
1671 1668 # A B D # B (using rule "2."), since B will be rebased.
1672 1669 #
1673 1670 # The loop tries not to rely on the fact that a Mercurial node has
1674 1671 # at most 2 parents.
1675 1672 for i, p in enumerate(oldps):
1676 1673 np = p # new parent
1677 1674 if any(isancestor(x, dests[i]) for x in successorrevs(repo, p)):
1678 1675 np = dests[i]
1679 1676 elif p in state and state[p] > 0:
1680 1677 np = state[p]
1681 1678
1682 1679 # "bases" only record "special" merge bases that cannot be
1683 1680 # calculated from changelog DAG (i.e. isancestor(p, np) is False).
1684 1681 # For example:
1685 1682 #
1686 1683 # B' # rebase -s B -d D, when B was rebased to B'. dest for C
1687 1684 # | C # is B', but merge base for C is B, instead of
1688 1685 # D | # changelog.ancestor(C, B') == A. If changelog DAG and
1689 1686 # | B # "state" edges are merged (so there will be an edge from
1690 1687 # |/ # B to B'), the merge base is still ancestor(C, B') in
1691 1688 # A # the merged graph.
1692 1689 #
1693 1690 # Also see https://bz.mercurial-scm.org/show_bug.cgi?id=1950#c8
1694 1691 # which uses "virtual null merge" to explain this situation.
1695 1692 if isancestor(p, np):
1696 1693 bases[i] = nullrev
1697 1694
1698 1695 # If one parent becomes an ancestor of the other, drop the ancestor
1699 1696 for j, x in enumerate(newps[:i]):
1700 1697 if x == nullrev:
1701 1698 continue
1702 1699 if isancestor(np, x): # CASE-1
1703 1700 np = nullrev
1704 1701 elif isancestor(x, np): # CASE-2
1705 1702 newps[j] = np
1706 1703 np = nullrev
1707 1704 # New parents forming an ancestor relationship does not
1708 1705 # mean the old parents have a similar relationship. Do not
1709 1706 # set bases[x] to nullrev.
1710 1707 bases[j], bases[i] = bases[i], bases[j]
1711 1708
1712 1709 newps[i] = np
1713 1710
1714 1711 # "rebasenode" updates to new p1, and the old p1 will be used as merge
1715 1712 # base. If only p2 changes, merging using unchanged p1 as merge base is
1716 1713 # suboptimal. Therefore swap parents to make the merge sane.
1717 1714 if newps[1] != nullrev and oldps[0] == newps[0]:
1718 1715 assert len(newps) == 2 and len(oldps) == 2
1719 1716 newps.reverse()
1720 1717 bases.reverse()
1721 1718
1722 1719 # No parent change might be an error because we fail to make rev a
1723 1720 # descendant of the requested dest. This can happen, for example:
1724 1721 #
1725 1722 # C # rebase -r C -d D
1726 1723 # /| # None of A and B will be changed to D and rebase fails.
1727 1724 # A B D
1728 1725 if set(newps) == set(oldps) and dest not in newps:
1729 1726 raise error.Abort(
1730 1727 _(
1731 1728 b'cannot rebase %d:%s without '
1732 1729 b'moving at least one of its parents'
1733 1730 )
1734 1731 % (rev, repo[rev])
1735 1732 )
1736 1733
1737 1734 # Source should not be ancestor of dest. The check here guarantees it's
1738 1735 # impossible. With multi-dest, the initial check does not cover complex
1739 1736 # cases since we don't have abstractions to dry-run rebase cheaply.
1740 1737 if any(p != nullrev and isancestor(rev, p) for p in newps):
1741 1738 raise error.Abort(_(b'source is ancestor of destination'))
1742 1739
1743 1740 # "rebasenode" updates to new p1, use the corresponding merge base.
1744 1741 if bases[0] != nullrev:
1745 1742 base = bases[0]
1746 1743 else:
1747 1744 base = None
1748 1745
1749 1746 # Check if the merge will contain unwanted changes. That may happen if
1750 1747 # there are multiple special (non-changelog ancestor) merge bases, which
1751 1748 # cannot be handled well by the 3-way merge algorithm. For example:
1752 1749 #
1753 1750 # F
1754 1751 # /|
1755 1752 # D E # "rebase -r D+E+F -d Z", when rebasing F, if "D" was chosen
1756 1753 # | | # as merge base, the difference between D and F will include
1757 1754 # B C # C, so the rebased F will contain C surprisingly. If "E" was
1758 1755 # |/ # chosen, the rebased F will contain B.
1759 1756 # A Z
1760 1757 #
1761 1758 # But our merge base candidates (D and E in above case) could still be
1762 1759 # better than the default (ancestor(F, Z) == null). Therefore still
1763 1760 # pick one (so choose p1 above).
1764 1761 if sum(1 for b in set(bases) if b != nullrev) > 1:
1765 1762 unwanted = [None, None] # unwanted[i]: unwanted revs if choosing bases[i]
1766 1763 for i, base in enumerate(bases):
1767 1764 if base == nullrev:
1768 1765 continue
1769 1766 # Revisions in the side (not chosen as merge base) branch that
1770 1767 # might contain "surprising" contents
1771 1768 siderevs = list(
1772 1769 repo.revs(b'((%ld-%d) %% (%d+%d))', bases, base, base, dest)
1773 1770 )
1774 1771
1775 1772 # If those revisions are covered by rebaseset, the result is good.
1776 1773 # A merge in rebaseset would be considered to cover its ancestors.
1777 1774 if siderevs:
1778 1775 rebaseset = [
1779 1776 r for r, d in state.items() if d > 0 and r not in obsskipped
1780 1777 ]
1781 1778 merges = [
1782 1779 r for r in rebaseset if cl.parentrevs(r)[1] != nullrev
1783 1780 ]
1784 1781 unwanted[i] = list(
1785 1782 repo.revs(
1786 1783 b'%ld - (::%ld) - %ld', siderevs, merges, rebaseset
1787 1784 )
1788 1785 )
1789 1786
1790 1787 # Choose a merge base that has a minimal number of unwanted revs.
1791 1788 l, i = min(
1792 1789 (len(revs), i)
1793 1790 for i, revs in enumerate(unwanted)
1794 1791 if revs is not None
1795 1792 )
1796 1793 base = bases[i]
1797 1794
1798 1795 # newps[0] should match merge base if possible. Currently, if newps[i]
1799 1796 # is nullrev, the only case is newps[i] and newps[j] (j < i), one is
1800 1797 # the other's ancestor. In that case, it's fine to not swap newps here.
1801 1798 # (see CASE-1 and CASE-2 above)
1802 1799 if i != 0 and newps[i] != nullrev:
1803 1800 newps[0], newps[i] = newps[i], newps[0]
1804 1801
1805 1802 # The merge will include unwanted revisions. Abort now. Revisit this if
1806 1803 # we have a more advanced merge algorithm that handles multiple bases.
1807 1804 if l > 0:
1808 1805 unwanteddesc = _(b' or ').join(
1809 1806 (
1810 1807 b', '.join(b'%d:%s' % (r, repo[r]) for r in revs)
1811 1808 for revs in unwanted
1812 1809 if revs is not None
1813 1810 )
1814 1811 )
1815 1812 raise error.Abort(
1816 1813 _(b'rebasing %d:%s will include unwanted changes from %s')
1817 1814 % (rev, repo[rev], unwanteddesc)
1818 1815 )
1819 1816
1820 1817 repo.ui.debug(b" future parents are %d and %d\n" % tuple(newps))
1821 1818
1822 1819 return newps[0], newps[1], base
1823 1820
1824 1821
1825 1822 def isagitpatch(repo, patchname):
1826 1823 """Return true if the given patch is in git format"""
1827 1824 mqpatch = os.path.join(repo.mq.path, patchname)
1828 1825 for line in patch.linereader(open(mqpatch, b'rb')):
1829 1826 if line.startswith(b'diff --git'):
1830 1827 return True
1831 1828 return False
1832 1829
1833 1830
1834 1831 def updatemq(repo, state, skipped, **opts):
1835 1832 """Update rebased mq patches - finalize and then import them"""
1836 1833 mqrebase = {}
1837 1834 mq = repo.mq
1838 1835 original_series = mq.fullseries[:]
1839 1836 skippedpatches = set()
1840 1837
1841 1838 for p in mq.applied:
1842 1839 rev = repo[p.node].rev()
1843 1840 if rev in state:
1844 1841 repo.ui.debug(
1845 1842 b'revision %d is an mq patch (%s), finalize it.\n'
1846 1843 % (rev, p.name)
1847 1844 )
1848 1845 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
1849 1846 else:
1850 1847 # Applied but not rebased, not sure this should happen
1851 1848 skippedpatches.add(p.name)
1852 1849
1853 1850 if mqrebase:
1854 1851 mq.finish(repo, mqrebase.keys())
1855 1852
1856 1853 # We must start import from the newest revision
1857 1854 for rev in sorted(mqrebase, reverse=True):
1858 1855 if rev not in skipped:
1859 1856 name, isgit = mqrebase[rev]
1860 1857 repo.ui.note(
1861 1858 _(b'updating mq patch %s to %d:%s\n')
1862 1859 % (name, state[rev], repo[state[rev]])
1863 1860 )
1864 1861 mq.qimport(
1865 1862 repo,
1866 1863 (),
1867 1864 patchname=name,
1868 1865 git=isgit,
1869 1866 rev=[b"%d" % state[rev]],
1870 1867 )
1871 1868 else:
1872 1869 # Rebased and skipped
1873 1870 skippedpatches.add(mqrebase[rev][0])
1874 1871
1875 1872 # Patches were either applied and rebased and imported in
1876 1873 # order, applied and removed or unapplied. Discard the removed
1877 1874 # ones while preserving the original series order and guards.
1878 1875 newseries = [
1879 1876 s
1880 1877 for s in original_series
1881 1878 if mq.guard_re.split(s, 1)[0] not in skippedpatches
1882 1879 ]
1883 1880 mq.fullseries[:] = newseries
1884 1881 mq.seriesdirty = True
1885 1882 mq.savedirty()
1886 1883
1887 1884
1888 1885 def storecollapsemsg(repo, collapsemsg):
1889 1886 """Store the collapse message to allow recovery"""
1890 1887 collapsemsg = collapsemsg or b''
1891 1888 f = repo.vfs(b"last-message.txt", b"w")
1892 1889 f.write(b"%s\n" % collapsemsg)
1893 1890 f.close()
1894 1891
1895 1892
1896 1893 def clearcollapsemsg(repo):
1897 1894 """Remove collapse message file"""
1898 1895 repo.vfs.unlinkpath(b"last-message.txt", ignoremissing=True)
1899 1896
1900 1897
1901 1898 def restorecollapsemsg(repo, isabort):
1902 1899 """Restore previously stored collapse message"""
1903 1900 try:
1904 1901 f = repo.vfs(b"last-message.txt")
1905 1902 collapsemsg = f.readline().strip()
1906 1903 f.close()
1907 1904 except IOError as err:
1908 1905 if err.errno != errno.ENOENT:
1909 1906 raise
1910 1907 if isabort:
1911 1908 # Oh well, just abort like normal
1912 1909 collapsemsg = b''
1913 1910 else:
1914 1911 raise error.Abort(_(b'missing .hg/last-message.txt for rebase'))
1915 1912 return collapsemsg
1916 1913
1917 1914
1918 1915 def clearstatus(repo):
1919 1916 """Remove the status files"""
1920 1917 # Make sure the active transaction won't write the state file
1921 1918 tr = repo.currenttransaction()
1922 1919 if tr:
1923 1920 tr.removefilegenerator(b'rebasestate')
1924 1921 repo.vfs.unlinkpath(b"rebasestate", ignoremissing=True)
1925 1922
1926 1923
1927 1924 def sortsource(destmap):
1928 1925 """yield source revisions in an order that we only rebase things once
1929 1926
1930 1927 If source and destination overlap, we should filter out revisions
1931 1928 that depend on other revisions which haven't been rebased yet.
1932 1929
1933 1930 Yield a sorted list of revisions each time.
1934 1931
1935 1932 For example, when rebasing A onto B, and B onto C, this function yields
1936 1933 [B], then [A], indicating B needs to be rebased first.
1937 1934
1938 1935 Raise if there is a cycle so the rebase is impossible.
1939 1936 """
1940 1937 srcset = set(destmap)
1941 1938 while srcset:
1942 1939 srclist = sorted(srcset)
1943 1940 result = []
1944 1941 for r in srclist:
1945 1942 if destmap[r] not in srcset:
1946 1943 result.append(r)
1947 1944 if not result:
1948 1945 raise error.Abort(_(b'source and destination form a cycle'))
1949 1946 srcset -= set(result)
1950 1947 yield result
1951 1948
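The batching behavior described in the docstring above can be sketched standalone. This is an illustrative reimplementation with hypothetical integer revision numbers, not the module's own code:

```python
def sortsource(destmap):
    # Yield batches of source revisions such that no revision is yielded
    # before the revision it will be rebased onto (when that destination
    # is itself being rebased). Raise if the mapping forms a cycle.
    srcset = set(destmap)
    while srcset:
        result = [r for r in sorted(srcset) if destmap[r] not in srcset]
        if not result:
            raise ValueError('source and destination form a cycle')
        srcset -= set(result)
        yield result

# Rebase rev 1 onto rev 2, and rev 2 onto rev 3: 2 must be rebased first.
print(list(sortsource({1: 2, 2: 3})))  # → [[2], [1]]
```

A self-referential mapping such as `{1: 2, 2: 1}` raises immediately, which is the cycle case the real function reports via `error.Abort`.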
1952 1949
1953 1950 def buildstate(repo, destmap, collapse):
1954 1951 '''Define which revisions are going to be rebased and where
1955 1952
1956 1953 repo: repo
1957 1954 destmap: {srcrev: destrev}
1958 1955 '''
1959 1956 rebaseset = destmap.keys()
1960 1957 originalwd = repo[b'.'].rev()
1961 1958
1962 1959 # This check isn't strictly necessary, since mq detects commits over an
1963 1960 # applied patch. But it prevents messing up the working directory when
1964 1961 # a partially completed rebase is blocked by mq.
1965 1962 if b'qtip' in repo.tags():
1966 1963 mqapplied = set(repo[s.node].rev() for s in repo.mq.applied)
1967 1964 if set(destmap.values()) & mqapplied:
1968 1965 raise error.Abort(_(b'cannot rebase onto an applied mq patch'))
1969 1966
1970 1967 # Get "cycle" error early by exhausting the generator.
1971 1968 sortedsrc = list(sortsource(destmap)) # a list of sorted revs
1972 1969 if not sortedsrc:
1973 1970 raise error.Abort(_(b'no matching revisions'))
1974 1971
1975 1972 # Only check the first batch of revisions to rebase not depending on other
1976 1973 # rebaseset. This means "source is ancestor of destination" for the second
1977 1974 # (and following) batches of revisions are not checked here. We rely on
1978 1975 # "defineparents" to do that check.
1979 1976 roots = list(repo.set(b'roots(%ld)', sortedsrc[0]))
1980 1977 if not roots:
1981 1978 raise error.Abort(_(b'no matching revisions'))
1982 1979
1983 1980 def revof(r):
1984 1981 return r.rev()
1985 1982
1986 1983 roots = sorted(roots, key=revof)
1987 1984 state = dict.fromkeys(rebaseset, revtodo)
1988 1985 emptyrebase = len(sortedsrc) == 1
1989 1986 for root in roots:
1990 1987 dest = repo[destmap[root.rev()]]
1991 1988 commonbase = root.ancestor(dest)
1992 1989 if commonbase == root:
1993 1990 raise error.Abort(_(b'source is ancestor of destination'))
1994 1991 if commonbase == dest:
1995 1992 wctx = repo[None]
1996 1993 if dest == wctx.p1():
1997 1994 # when rebasing to '.', it will use the current wd branch name
1998 1995 samebranch = root.branch() == wctx.branch()
1999 1996 else:
2000 1997 samebranch = root.branch() == dest.branch()
2001 1998 if not collapse and samebranch and dest in root.parents():
2002 1999 # mark the revision as done by setting its new revision
2003 2000 # equal to its old (current) revisions
2004 2001 state[root.rev()] = root.rev()
2005 2002 repo.ui.debug(b'source is a child of destination\n')
2006 2003 continue
2007 2004
2008 2005 emptyrebase = False
2009 2006 repo.ui.debug(b'rebase onto %s starting from %s\n' % (dest, root))
2010 2007 if emptyrebase:
2011 2008 return None
2012 2009 for rev in sorted(state):
2013 2010 parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
2014 2011 # if all parents of this revision are done, then so is this revision
2015 2012 if parents and all((state.get(p) == p for p in parents)):
2016 2013 state[rev] = rev
2017 2014 return originalwd, destmap, state
2018 2015
2019 2016
2020 2017 def clearrebased(
2021 2018 ui,
2022 2019 repo,
2023 2020 destmap,
2024 2021 state,
2025 2022 skipped,
2026 2023 collapsedas=None,
2027 2024 keepf=False,
2028 2025 fm=None,
2029 2026 backup=True,
2030 2027 ):
2031 2028 """dispose of rebased revisions at the end of the rebase
2032 2029
2033 2030 If `collapsedas` is not None, the rebase was a collapse whose result is the
2034 2031 `collapsedas` node.
2035 2032
2036 2033 If `keepf` is True, the rebase has --keep set and no nodes should be
2037 2034 removed (but bookmarks still need to be moved).
2038 2035
2039 2036 If `backup` is False, no backup will be stored when stripping rebased
2040 2037 revisions.
2041 2038 """
2042 2039 tonode = repo.changelog.node
2043 2040 replacements = {}
2044 2041 moves = {}
2045 2042 stripcleanup = not obsolete.isenabled(repo, obsolete.createmarkersopt)
2046 2043
2047 2044 collapsednodes = []
2048 2045 for rev, newrev in sorted(state.items()):
2049 2046 if newrev >= 0 and newrev != rev:
2050 2047 oldnode = tonode(rev)
2051 2048 newnode = collapsedas or tonode(newrev)
2052 2049 moves[oldnode] = newnode
2053 2050 succs = None
2054 2051 if rev in skipped:
2055 2052 if stripcleanup or not repo[rev].obsolete():
2056 2053 succs = ()
2057 2054 elif collapsedas:
2058 2055 collapsednodes.append(oldnode)
2059 2056 else:
2060 2057 succs = (newnode,)
2061 2058 if succs is not None:
2062 2059 replacements[(oldnode,)] = succs
2063 2060 if collapsednodes:
2064 2061 replacements[tuple(collapsednodes)] = (collapsedas,)
2065 2062 if fm:
2066 2063 hf = fm.hexfunc
2067 2064 fl = fm.formatlist
2068 2065 fd = fm.formatdict
2069 2066 changes = {}
2070 2067 for oldns, newn in pycompat.iteritems(replacements):
2071 2068 for oldn in oldns:
2072 2069 changes[hf(oldn)] = fl([hf(n) for n in newn], name=b'node')
2073 2070 nodechanges = fd(changes, key=b"oldnode", value=b"newnodes")
2074 2071 fm.data(nodechanges=nodechanges)
2075 2072 if keepf:
2076 2073 replacements = {}
2077 2074 scmutil.cleanupnodes(repo, replacements, b'rebase', moves, backup=backup)
2078 2075
2079 2076
2080 2077 def pullrebase(orig, ui, repo, *args, **opts):
2081 2078 """Call rebase after pull if the latter has been invoked with --rebase"""
2082 2079 if opts.get('rebase'):
2083 2080 if ui.configbool(b'commands', b'rebase.requiredest'):
2084 2081 msg = _(b'rebase destination required by configuration')
2085 2082 hint = _(b'use hg pull followed by hg rebase -d DEST')
2086 2083 raise error.Abort(msg, hint=hint)
2087 2084
2088 2085 with repo.wlock(), repo.lock():
2089 2086 if opts.get('update'):
2090 2087 del opts['update']
2091 2088 ui.debug(
2092 2089 b'--update and --rebase are not compatible, ignoring '
2093 2090 b'the update flag\n'
2094 2091 )
2095 2092
2096 2093 cmdutil.checkunfinished(repo, skipmerge=True)
2097 2094 cmdutil.bailifchanged(
2098 2095 repo,
2099 2096 hint=_(
2100 2097 b'cannot pull with rebase: '
2101 2098 b'please commit or shelve your changes first'
2102 2099 ),
2103 2100 )
2104 2101
2105 2102 revsprepull = len(repo)
2106 2103 origpostincoming = commands.postincoming
2107 2104
2108 2105 def _dummy(*args, **kwargs):
2109 2106 pass
2110 2107
2111 2108 commands.postincoming = _dummy
2112 2109 try:
2113 2110 ret = orig(ui, repo, *args, **opts)
2114 2111 finally:
2115 2112 commands.postincoming = origpostincoming
2116 2113 revspostpull = len(repo)
2117 2114 if revspostpull > revsprepull:
2118 2115 # The --rev option from pull conflicts with rebase's own --rev,
2119 2116 # dropping it
2120 2117 if 'rev' in opts:
2121 2118 del opts['rev']
2122 2119 # positional argument from pull conflicts with rebase's own
2123 2120 # --source.
2124 2121 if 'source' in opts:
2125 2122 del opts['source']
2126 2123 # revsprepull is the len of the repo, not revnum of tip.
2127 2124 destspace = list(repo.changelog.revs(start=revsprepull))
2128 2125 opts['_destspace'] = destspace
2129 2126 try:
2130 2127 rebase(ui, repo, **opts)
2131 2128 except error.NoMergeDestAbort:
2132 2129 # we can maybe update instead
2133 2130 rev, _a, _b = destutil.destupdate(repo)
2134 2131 if rev == repo[b'.'].rev():
2135 2132 ui.status(_(b'nothing to rebase\n'))
2136 2133 else:
2137 2134 ui.status(_(b'nothing to rebase - updating instead\n'))
2138 2135 # not passing argument to get the bare update behavior
2139 2136 # with warning and trumpets
2140 2137 commands.update(ui, repo)
2141 2138 else:
2142 2139 if opts.get('tool'):
2143 2140 raise error.Abort(_(b'--tool can only be used with --rebase'))
2144 2141 ret = orig(ui, repo, *args, **opts)
2145 2142
2146 2143 return ret
2147 2144
2148 2145
2149 2146 def _filterobsoleterevs(repo, revs):
2150 2147 """returns a set of the obsolete revisions in revs"""
2151 2148 return set(r for r in revs if repo[r].obsolete())
2152 2149
2153 2150
2154 2151 def _computeobsoletenotrebased(repo, rebaseobsrevs, destmap):
2155 2152 """Return (obsoletenotrebased, obsoletewithoutsuccessorindestination).
2156 2153
2157 2154 `obsoletenotrebased` is a mapping obsolete => successor for all
2158 2155 obsolete nodes to be rebased given in `rebaseobsrevs`.
2159 2156
2160 2157 `obsoletewithoutsuccessorindestination` is a set with obsolete revisions
2161 2158 without a successor in destination.
2162 2159
2163 2160 `obsoleteextinctsuccessors` is a set of obsolete revisions with only
2164 2161 obsolete successors.
2165 2162 """
2166 2163 obsoletenotrebased = {}
2167 2164 obsoletewithoutsuccessorindestination = set()
2168 2165 obsoleteextinctsuccessors = set()
2169 2166
2170 2167 assert repo.filtername is None
2171 2168 cl = repo.changelog
2172 2169 get_rev = cl.index.get_rev
2173 2170 extinctrevs = set(repo.revs(b'extinct()'))
2174 2171 for srcrev in rebaseobsrevs:
2175 2172 srcnode = cl.node(srcrev)
2176 2173 # XXX: more advanced APIs are required to handle split correctly
2177 2174 successors = set(obsutil.allsuccessors(repo.obsstore, [srcnode]))
2178 2175 # obsutil.allsuccessors includes node itself
2179 2176 successors.remove(srcnode)
2180 2177 succrevs = {get_rev(s) for s in successors}
2181 2178 succrevs.discard(None)
2182 2179 if succrevs.issubset(extinctrevs):
2183 2180 # all successors are extinct
2184 2181 obsoleteextinctsuccessors.add(srcrev)
2185 2182 if not successors:
2186 2183 # no successor
2187 2184 obsoletenotrebased[srcrev] = None
2188 2185 else:
2189 2186 dstrev = destmap[srcrev]
2190 2187 for succrev in succrevs:
2191 2188 if cl.isancestorrev(succrev, dstrev):
2192 2189 obsoletenotrebased[srcrev] = succrev
2193 2190 break
2194 2191 else:
2195 2192 # If 'srcrev' has a successor in rebase set but none in
2196 2193 # destination (which would be caught above), we shall skip it
2197 2194 # and its descendants to avoid divergence.
2198 2195 if srcrev in extinctrevs or any(s in destmap for s in succrevs):
2199 2196 obsoletewithoutsuccessorindestination.add(srcrev)
2200 2197
2201 2198 return (
2202 2199 obsoletenotrebased,
2203 2200 obsoletewithoutsuccessorindestination,
2204 2201 obsoleteextinctsuccessors,
2205 2202 )
2206 2203
2207 2204
2208 2205 def abortrebase(ui, repo):
2209 2206 with repo.wlock(), repo.lock():
2210 2207 rbsrt = rebaseruntime(repo, ui)
2211 2208 rbsrt._prepareabortorcontinue(isabort=True)
2212 2209
2213 2210
2214 2211 def continuerebase(ui, repo):
2215 2212 with repo.wlock(), repo.lock():
2216 2213 rbsrt = rebaseruntime(repo, ui)
2217 2214 ms = mergemod.mergestate.read(repo)
2218 2215 mergeutil.checkunresolved(ms)
2219 2216 retcode = rbsrt._prepareabortorcontinue(isabort=False)
2220 2217 if retcode is not None:
2221 2218 return retcode
2222 2219 rbsrt._performrebase(None)
2223 2220 rbsrt._finishrebase()
2224 2221
2225 2222
2226 2223 def summaryhook(ui, repo):
2227 2224 if not repo.vfs.exists(b'rebasestate'):
2228 2225 return
2229 2226 try:
2230 2227 rbsrt = rebaseruntime(repo, ui, {})
2231 2228 rbsrt.restorestatus()
2232 2229 state = rbsrt.state
2233 2230 except error.RepoLookupError:
2234 2231 # i18n: column positioning for "hg summary"
2235 2232 msg = _(b'rebase: (use "hg rebase --abort" to clear broken state)\n')
2236 2233 ui.write(msg)
2237 2234 return
2238 2235 numrebased = len([i for i in pycompat.itervalues(state) if i >= 0])
2239 2236 # i18n: column positioning for "hg summary"
2240 2237 ui.write(
2241 2238 _(b'rebase: %s, %s (rebase --continue)\n')
2242 2239 % (
2243 2240 ui.label(_(b'%d rebased'), b'rebase.rebased') % numrebased,
2244 2241 ui.label(_(b'%d remaining'), b'rebase.remaining')
2245 2242 % (len(state) - numrebased),
2246 2243 )
2247 2244 )
2248 2245
2249 2246
2250 2247 def uisetup(ui):
2251 2248 # Replace pull with a decorator to provide --rebase option
2252 2249 entry = extensions.wrapcommand(commands.table, b'pull', pullrebase)
2253 2250 entry[1].append(
2254 2251 (b'', b'rebase', None, _(b"rebase working directory to branch head"))
2255 2252 )
2256 2253 entry[1].append((b't', b'tool', b'', _(b"specify merge tool for rebase")))
2257 2254 cmdutil.summaryhooks.add(b'rebase', summaryhook)
2258 2255 statemod.addunfinished(
2259 2256 b'rebase',
2260 2257 fname=b'rebasestate',
2261 2258 stopflag=True,
2262 2259 continueflag=True,
2263 2260 abortfunc=abortrebase,
2264 2261 continuefunc=continuerebase,
2265 2262 )
@@ -1,1130 +1,1111 b''
1 1 # copies.py - copy detection for Mercurial
2 2 #
3 3 # Copyright 2008 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import collections
11 11 import multiprocessing
12 12 import os
13 13
14 14 from .i18n import _
15 15
16 16
17 17 from .revlogutils.flagutil import REVIDX_SIDEDATA
18 18
19 19 from . import (
20 20 error,
21 21 match as matchmod,
22 22 node,
23 23 pathutil,
24 24 pycompat,
25 25 util,
26 26 )
27 27
28 28 from .revlogutils import sidedata as sidedatamod
29 29
30 30 from .utils import stringutil
31 31
32 32
33 33 def _filter(src, dst, t):
34 34 """filters out invalid copies after chaining"""
35 35
36 36 # When _chain()'ing copies in 'a' (from 'src' via some other commit 'mid')
37 37 # with copies in 'b' (from 'mid' to 'dst'), we can get the different cases
38 38 # in the following table (not including trivial cases). For example, case 2
39 39 # is where a file existed in 'src' and remained under that name in 'mid' and
40 40 # then was renamed between 'mid' and 'dst'.
41 41 #
42 42 # case src mid dst result
43 43 # 1 x y - -
44 44 # 2 x y y x->y
45 45 # 3 x y x -
46 46 # 4 x y z x->z
47 47 # 5 - x y -
48 48 # 6 x x y x->y
49 49 #
50 50 # _chain() takes care of chaining the copies in 'a' and 'b', but it
51 51 # cannot tell the difference between cases 1 and 2, between 3 and 4, or
52 52 # between 5 and 6, so it includes all cases in its result.
53 53 # Cases 1, 3, and 5 are then removed by _filter().
54 54
55 55 for k, v in list(t.items()):
56 56 # remove copies from files that didn't exist
57 57 if v not in src:
58 58 del t[k]
59 59 # remove criss-crossed copies
60 60 elif k in src and v in dst:
61 61 del t[k]
62 62 # remove copies to files that were then removed
63 63 elif k not in dst:
64 64 del t[k]
65 65
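The three deletion rules in `_filter` can be exercised in isolation. The sketch below mirrors the function above with hypothetical file sets in place of real manifests; the name `filter_copies` is illustrative:

```python
def filter_copies(src, dst, t):
    # src/dst are sets of files present in the two endpoint revisions;
    # t maps destination-name -> source-name as produced by chaining.
    for k, v in list(t.items()):
        if v not in src:             # copy from a file that didn't exist
            del t[k]
        elif k in src and v in dst:  # criss-crossed copy
            del t[k]
        elif k not in dst:           # copy to a file that was removed
            del t[k]

t = {'y': 'x', 'q': 'p'}
filter_copies(src={'x'}, dst={'y'}, t=t)
# 'q' -> 'p' is dropped because 'p' never existed in src;
# 'y' -> 'x' survives as a valid rename.
print(t)  # → {'y': 'x'}
```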
66 66
67 67 def _chain(prefix, suffix):
68 68 """chain two sets of copies 'prefix' and 'suffix'"""
69 69 result = prefix.copy()
70 70 for key, value in pycompat.iteritems(suffix):
71 71 result[key] = prefix.get(value, value)
72 72 return result
73 73
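The chaining rule above (a copy recorded in `suffix` resolves through any earlier copy in `prefix`) can be demonstrated with plain dicts. This is a standalone sketch mirroring `_chain`, not the module itself:

```python
def chain(prefix, suffix):
    # A rename in `suffix` whose source was itself copied in `prefix`
    # is resolved back to the original source file.
    result = prefix.copy()
    for key, value in suffix.items():
        result[key] = prefix.get(value, value)
    return result

# 'b' was copied from 'a', then 'c' was copied from 'b':
# chaining maps 'c' all the way back to 'a'.
print(chain({'b': 'a'}, {'c': 'b'}))  # → {'b': 'a', 'c': 'a'}
```

Note the result still contains invalid entries in the cases listed in `_filter`'s table; chaining and filtering are deliberately separate steps.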
74 74
75 75 def _tracefile(fctx, am, basemf):
76 76 """return file context that is the ancestor of fctx present in ancestor
77 77 manifest am
78 78
79 79 Note: we used to try and stop after a given limit; however, checking if that
80 80 limit is reached turned out to be very expensive. We are better off
81 81 disabling that feature."""
82 82
83 83 for f in fctx.ancestors():
84 84 path = f.path()
85 85 if am.get(path, None) == f.filenode():
86 86 return path
87 87 if basemf and basemf.get(path, None) == f.filenode():
88 88 return path
89 89
90 90
91 91 def _dirstatecopies(repo, match=None):
92 92 ds = repo.dirstate
93 93 c = ds.copies().copy()
94 94 for k in list(c):
95 95 if ds[k] not in b'anm' or (match and not match(k)):
96 96 del c[k]
97 97 return c
98 98
99 99
100 100 def _computeforwardmissing(a, b, match=None):
101 101 """Computes which files are in b but not a.
102 102 This is its own function so extensions can easily wrap this call to see what
103 103 files _forwardcopies is about to process.
104 104 """
105 105 ma = a.manifest()
106 106 mb = b.manifest()
107 107 return mb.filesnotin(ma, match=match)
108 108
109 109
110 110 def usechangesetcentricalgo(repo):
111 111 """Checks if we should use changeset-centric copy algorithms"""
112 112 if repo.filecopiesmode == b'changeset-sidedata':
113 113 return True
114 114 readfrom = repo.ui.config(b'experimental', b'copies.read-from')
115 115 changesetsource = (b'changeset-only', b'compatibility')
116 116 return readfrom in changesetsource
117 117
118 118
119 119 def _committedforwardcopies(a, b, base, match):
120 120 """Like _forwardcopies(), but b.rev() cannot be None (working copy)"""
121 121 # files might have to be traced back to the fctx parent of the last
122 122 # one-side-only changeset, but not further back than that
123 123 repo = a._repo
124 124
125 125 if usechangesetcentricalgo(repo):
126 126 return _changesetforwardcopies(a, b, match)
127 127
128 128 debug = repo.ui.debugflag and repo.ui.configbool(b'devel', b'debug.copies')
129 129 dbg = repo.ui.debug
130 130 if debug:
131 131 dbg(b'debug.copies: looking into rename from %s to %s\n' % (a, b))
132 132 am = a.manifest()
133 133 basemf = None if base is None else base.manifest()
134 134
135 135 # find where new files came from
136 136 # we currently don't try to find where old files went, too expensive
137 137 # this means we can miss a case like 'hg rm b; hg cp a b'
138 138 cm = {}
139 139
140 140 # Computing the forward missing is quite expensive on large manifests, since
141 141 # it compares the entire manifests. We can optimize it in the common use
142 142 # case of computing what copies are in a commit versus its parent (like
143 143 # during a rebase or histedit). Note, we exclude merge commits from this
144 144 # optimization, since the ctx.files() for a merge commit is not correct for
145 145 # this comparison.
146 146 forwardmissingmatch = match
147 147 if b.p1() == a and b.p2().node() == node.nullid:
148 148 filesmatcher = matchmod.exact(b.files())
149 149 forwardmissingmatch = matchmod.intersectmatchers(match, filesmatcher)
150 150 missing = _computeforwardmissing(a, b, match=forwardmissingmatch)
151 151
152 152 ancestrycontext = a._repo.changelog.ancestors([b.rev()], inclusive=True)
153 153
154 154 if debug:
155 155 dbg(b'debug.copies: missing files to search: %d\n' % len(missing))
156 156
157 157 for f in sorted(missing):
158 158 if debug:
159 159 dbg(b'debug.copies: tracing file: %s\n' % f)
160 160 fctx = b[f]
161 161 fctx._ancestrycontext = ancestrycontext
162 162
163 163 if debug:
164 164 start = util.timer()
165 165 opath = _tracefile(fctx, am, basemf)
166 166 if opath:
167 167 if debug:
168 168 dbg(b'debug.copies: rename of: %s\n' % opath)
169 169 cm[f] = opath
170 170 if debug:
171 171 dbg(
172 172 b'debug.copies: time: %f seconds\n'
173 173 % (util.timer() - start)
174 174 )
175 175 return cm
176 176
177 177
178 178 def _revinfogetter(repo):
179 179 """return a function that returns multiple data given a <rev>:
180 180
181 181 * p1: revision number of first parent
182 182 * p2: revision number of second parent
183 183 * p1copies: mapping of copies from p1
184 184 * p2copies: mapping of copies from p2
185 185 * removed: a list of removed files
186 186 """
187 187 cl = repo.changelog
188 188 parents = cl.parentrevs
189 189
190 190 if repo.filecopiesmode == b'changeset-sidedata':
191 191 changelogrevision = cl.changelogrevision
192 192 flags = cl.flags
193 193
194 194 # A small cache to avoid doing the work twice for merges
195 195 #
196 196 # In the vast majority of cases, if we ask information for a revision
197 197 # about 1 parent, we'll later ask it for the other. So it make sense to
198 198 # keep the information around when reaching the first parent of a merge
199 199 # and dropping it after it was provided for the second parents.
200 200 #
201 201 # There exist cases where only one parent of the merge will be walked. It
202 202 # happens when the "destination" of the copy tracing is a descendant of a
203 203 # new root, not common with the "source". In that case, we will only walk
204 204 # through merge parents that are descendants of changesets common
205 205 # between "source" and "destination".
206 206 #
207 207 # With the current implementation, if such changesets have copy
208 208 # information, we'll keep them in memory until the end of
209 209 # _changesetforwardcopies. We don't expect the case to be frequent
210 210 # enough to matter.
211 211 #
212 212 # In addition, it would be possible to reach a pathological case, where
213 213 # many first parents are met before any second parent is reached. In
214 214 # that case the cache could grow. If this even become an issue one can
215 215 # safely introduce a maximum cache size. This would trade extra CPU/IO
216 216 # time to save memory.
217 217 merge_caches = {}
218 218
219 219 def revinfo(rev):
220 220 p1, p2 = parents(rev)
221 221 if flags(rev) & REVIDX_SIDEDATA:
222 222 e = merge_caches.pop(rev, None)
223 223 if e is not None:
224 224 return e
225 225 c = changelogrevision(rev)
226 226 p1copies = c.p1copies
227 227 p2copies = c.p2copies
228 228 removed = c.filesremoved
229 229 if p1 != node.nullrev and p2 != node.nullrev:
230 230 # XXX some case we over cache, IGNORE
231 231 merge_caches[rev] = (p1, p2, p1copies, p2copies, removed)
232 232 else:
233 233 p1copies = {}
234 234 p2copies = {}
235 235 removed = []
236 236 return p1, p2, p1copies, p2copies, removed
237 237
238 238 else:
239 239
240 240 def revinfo(rev):
241 241 p1, p2 = parents(rev)
242 242 ctx = repo[rev]
243 243 p1copies, p2copies = ctx._copies
244 244 removed = ctx.filesremoved()
245 245 return p1, p2, p1copies, p2copies, removed
246 246
247 247 return revinfo
248 248
249 249
250 250 def _changesetforwardcopies(a, b, match):
251 251 if a.rev() in (node.nullrev, b.rev()):
252 252 return {}
253 253
254 254 repo = a.repo().unfiltered()
255 255 children = {}
256 256 revinfo = _revinfogetter(repo)
257 257
258 258 cl = repo.changelog
259 259 missingrevs = cl.findmissingrevs(common=[a.rev()], heads=[b.rev()])
260 260 mrset = set(missingrevs)
261 261 roots = set()
262 262 for r in missingrevs:
263 263 for p in cl.parentrevs(r):
264 264 if p == node.nullrev:
265 265 continue
266 266 if p not in children:
267 267 children[p] = [r]
268 268 else:
269 269 children[p].append(r)
270 270 if p not in mrset:
271 271 roots.add(p)
272 272 if not roots:
273 273 # no common revision to track copies from
274 274 return {}
275 275 min_root = min(roots)
276 276
277 277 from_head = set(
278 278 cl.reachableroots(min_root, [b.rev()], list(roots), includepath=True)
279 279 )
280 280
281 281 iterrevs = set(from_head)
282 282 iterrevs &= mrset
283 283 iterrevs.update(roots)
284 284 iterrevs.remove(b.rev())
285 285 revs = sorted(iterrevs)
286 286 return _combinechangesetcopies(revs, children, b.rev(), revinfo, match)
287 287
288 288
289 289 def _combinechangesetcopies(revs, children, targetrev, revinfo, match):
290 290 """combine the copies information for each item of iterrevs
291 291
292 292 revs: sorted iterable of revision to visit
293 293 children: a {parent: [children]} mapping.
294 294 targetrev: the final copies destination revision (not in iterrevs)
295 295 revinfo(rev): a function that return (p1, p2, p1copies, p2copies, removed)
296 296 match: a matcher
297 297
298 298 It returns the aggregated copies information for `targetrev`.
299 299 """
300 300 all_copies = {}
301 301 alwaysmatch = match.always()
302 302 for r in revs:
303 303 copies = all_copies.pop(r, None)
304 304 if copies is None:
305 305 # this is a root
306 306 copies = {}
307 307 for i, c in enumerate(children[r]):
308 308 p1, p2, p1copies, p2copies, removed = revinfo(c)
309 309 if r == p1:
310 310 parent = 1
311 311 childcopies = p1copies
312 312 else:
313 313 assert r == p2
314 314 parent = 2
315 315 childcopies = p2copies
316 316 if not alwaysmatch:
317 317 childcopies = {
318 318 dst: src for dst, src in childcopies.items() if match(dst)
319 319 }
320 320 newcopies = copies
321 321 if childcopies:
322 322 newcopies = _chain(newcopies, childcopies)
323 323                 # _chain makes a copy; we can avoid doing so in some
324 324 # simple/linear cases.
325 325 assert newcopies is not copies
326 326 for f in removed:
327 327 if f in newcopies:
328 328 if newcopies is copies:
329 329 # copy on write to avoid affecting potential other
330 330 # branches. when there are no other branches, this
331 331 # could be avoided.
332 332 newcopies = copies.copy()
333 333 del newcopies[f]
334 334 othercopies = all_copies.get(c)
335 335 if othercopies is None:
336 336 all_copies[c] = newcopies
337 337 else:
338 338 # we are the second parent to work on c, we need to merge our
339 339 # work with the other.
340 340 #
341 341 # Unlike when copies are stored in the filelog, we consider
342 342 # it a copy even if the destination already existed on the
343 343 # other branch. It's simply too expensive to check if the
344 344 # file existed in the manifest.
345 345 #
346 346                 # In case of conflict, parent 1 takes precedence over parent 2.
347 347                 # This is an arbitrary choice made anew when implementing
348 348                 # changeset based copies. It was made without regard to
349 349                 # potential filelog-related behavior.
350 350 if parent == 1:
351 351 othercopies.update(newcopies)
352 352 else:
353 353 newcopies.update(othercopies)
354 354 all_copies[c] = newcopies
355 355 return all_copies[targetrev]
356 356
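The per-revision combination above boils down to composing copy dictionaries along the graph. A minimal standalone sketch of that composition (hypothetical helper name; the real `_chain` additionally prunes entries whose source was removed):

```python
def chain_copies(prev, cur):
    """Compose two copy maps.

    prev maps file@mid -> file@base, cur maps file@new -> file@mid;
    the result maps file@new -> file@base.
    """
    out = dict(prev)
    for dst, src in cur.items():
        # follow the source through the earlier map when possible
        out[dst] = prev.get(src, src)
    return out

# 'a' was renamed to 'b', then 'b' to 'c': 'c' traces back to 'a'
combined = chain_copies({'b': 'a'}, {'c': 'b'})
```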
357 357
358 358 def _forwardcopies(a, b, base=None, match=None):
359 359 """find {dst@b: src@a} copy mapping where a is an ancestor of b"""
360 360
361 361 if base is None:
362 362 base = a
363 363 match = a.repo().narrowmatch(match)
364 364 # check for working copy
365 365 if b.rev() is None:
366 366 cm = _committedforwardcopies(a, b.p1(), base, match)
367 367 # combine copies from dirstate if necessary
368 368 copies = _chain(cm, _dirstatecopies(b._repo, match))
369 369 else:
370 370 copies = _committedforwardcopies(a, b, base, match)
371 371 return copies
372 372
373 373
374 374 def _backwardrenames(a, b, match):
375 375 if a._repo.ui.config(b'experimental', b'copytrace') == b'off':
376 376 return {}
377 377
378 378 # Even though we're not taking copies into account, 1:n rename situations
379 379 # can still exist (e.g. hg cp a b; hg mv a c). In those cases we
380 380 # arbitrarily pick one of the renames.
381 381 # We don't want to pass in "match" here, since that would filter
382 382 # the destination by it. Since we're reversing the copies, we want
383 383 # to filter the source instead.
384 384 f = _forwardcopies(b, a)
385 385 r = {}
386 386 for k, v in sorted(pycompat.iteritems(f)):
387 387 if match and not match(v):
388 388 continue
389 389 # remove copies
390 390 if v in a:
391 391 continue
392 392 r[v] = k
393 393 return r
394 394
395 395
396 396 def pathcopies(x, y, match=None):
397 397 """find {dst@y: src@x} copy mapping for directed compare"""
398 398 repo = x._repo
399 399 debug = repo.ui.debugflag and repo.ui.configbool(b'devel', b'debug.copies')
400 400 if debug:
401 401 repo.ui.debug(
402 402 b'debug.copies: searching copies from %s to %s\n' % (x, y)
403 403 )
404 404 if x == y or not x or not y:
405 405 return {}
406 406 a = y.ancestor(x)
407 407 if a == x:
408 408 if debug:
409 409 repo.ui.debug(b'debug.copies: search mode: forward\n')
410 410 if y.rev() is None and x == y.p1():
411 411 # short-circuit to avoid issues with merge states
412 412 return _dirstatecopies(repo, match)
413 413 copies = _forwardcopies(x, y, match=match)
414 414 elif a == y:
415 415 if debug:
416 416 repo.ui.debug(b'debug.copies: search mode: backward\n')
417 417 copies = _backwardrenames(x, y, match=match)
418 418 else:
419 419 if debug:
420 420 repo.ui.debug(b'debug.copies: search mode: combined\n')
421 421 base = None
422 422 if a.rev() != node.nullrev:
423 423 base = x
424 424 copies = _chain(
425 425 _backwardrenames(x, a, match=match),
426 426 _forwardcopies(a, y, base, match=match),
427 427 )
428 428 _filter(x, y, copies)
429 429 return copies
430 430
431 431
432 432 def mergecopies(repo, c1, c2, base):
433 433 """
434 434 Finds moves and copies between context c1 and c2 that are relevant for
435 435 merging. 'base' will be used as the merge base.
436 436
437 437     Copytracing is used in commands like rebase, merge, unshelve, etc. to merge
438 438     files that were moved/copied in one merge parent and modified in another.
439 439 For example:
440 440
441 441 o ---> 4 another commit
442 442 |
443 443 | o ---> 3 commit that modifies a.txt
444 444 | /
445 445 o / ---> 2 commit that moves a.txt to b.txt
446 446 |/
447 447 o ---> 1 merge base
448 448
449 449 If we try to rebase revision 3 on revision 4, since there is no a.txt in
450 450     revision 4, and if the user has copytrace disabled, we print the following
451 451 message:
452 452
453 453 ```other changed <file> which local deleted```
454 454
455 455 Returns five dicts: "copy", "movewithdir", "diverge", "renamedelete" and
456 456 "dirmove".
457 457
458 458 "copy" is a mapping from destination name -> source name,
459 459 where source is in c1 and destination is in c2 or vice-versa.
460 460
461 461 "movewithdir" is a mapping from source name -> destination name,
462 462     where the file at source, present in one context but not the other,
463 463 needs to be moved to destination by the merge process, because the
464 464 other context moved the directory it is in.
465 465
466 466 "diverge" is a mapping of source name -> list of destination names
467 467 for divergent renames.
468 468
469 469 "renamedelete" is a mapping of source name -> list of destination
470 470 names for files deleted in c1 that were renamed in c2 or vice-versa.
471 471
472 472 "dirmove" is a mapping of detected source dir -> destination dir renames.
473 473 This is needed for handling changes to new files previously grafted into
474 474 renamed directories.
475 475
476 476 This function calls different copytracing algorithms based on config.
477 477 """
478 478 # avoid silly behavior for update from empty dir
479 479 if not c1 or not c2 or c1 == c2:
480 480 return {}, {}, {}, {}, {}
481 481
482 482 narrowmatch = c1.repo().narrowmatch()
483 483
484 484 # avoid silly behavior for parent -> working dir
485 485 if c2.node() is None and c1.node() == repo.dirstate.p1():
486 486 return _dirstatecopies(repo, narrowmatch), {}, {}, {}, {}
487 487
488 488 copytracing = repo.ui.config(b'experimental', b'copytrace')
489 489 if stringutil.parsebool(copytracing) is False:
490 490 # stringutil.parsebool() returns None when it is unable to parse the
491 491 # value, so we should rely on making sure copytracing is on such cases
492 492 return {}, {}, {}, {}, {}
493 493
494 494 if usechangesetcentricalgo(repo):
495 495 # The heuristics don't make sense when we need changeset-centric algos
496 496 return _fullcopytracing(repo, c1, c2, base)
497 497
498 498 # Copy trace disabling is explicitly below the node == p1 logic above
499 499 # because the logic above is required for a simple copy to be kept across a
500 500 # rebase.
501 501 if copytracing == b'heuristics':
502 502 # Do full copytracing if only non-public revisions are involved as
503 503 # that will be fast enough and will also cover the copies which could
504 504 # be missed by heuristics
505 505 if _isfullcopytraceable(repo, c1, base):
506 506 return _fullcopytracing(repo, c1, c2, base)
507 507 return _heuristicscopytracing(repo, c1, c2, base)
508 508 else:
509 509 return _fullcopytracing(repo, c1, c2, base)
510 510
511 511
512 512 def _isfullcopytraceable(repo, c1, base):
513 513     """ Checks whether base, source and destination are all non-public
514 514     branches; if so, the full copytrace algorithm is used for increased
515 515     capabilities, since it will be fast enough.
516 516
517 517 `experimental.copytrace.sourcecommitlimit` can be used to set a limit for
518 518     the number of changesets from c1 to base; if there are more changesets
519 519     than the limit, the full copytracing algorithm won't be used.
520 520 """
521 521 if c1.rev() is None:
522 522 c1 = c1.p1()
523 523 if c1.mutable() and base.mutable():
524 524 sourcecommitlimit = repo.ui.configint(
525 525 b'experimental', b'copytrace.sourcecommitlimit'
526 526 )
527 527 commits = len(repo.revs(b'%d::%d', base.rev(), c1.rev()))
528 528 return commits < sourcecommitlimit
529 529 return False
530 530
531 531
532 532 def _checksinglesidecopies(
533 533 src, dsts1, m1, m2, mb, c2, base, copy, renamedelete
534 534 ):
535 535 if src not in m2:
536 536 # deleted on side 2
537 537 if src not in m1:
538 538 # renamed on side 1, deleted on side 2
539 539 renamedelete[src] = dsts1
540 540 elif m2[src] != mb[src]:
541 541 if not _related(c2[src], base[src]):
542 542 return
543 543 # modified on side 2
544 544 for dst in dsts1:
545 545 if dst not in m2:
546 546 # dst not added on side 2 (handle as regular
547 547 # "both created" case in manifestmerge otherwise)
548 548 copy[dst] = src
549 549
550 550
551 551 def _fullcopytracing(repo, c1, c2, base):
552 552 """ The full copytracing algorithm which finds all the new files that were
553 553     added from the merge base up to the top commit and, for each file, checks
554 554     whether it was copied from another file.
555 555
556 556 This is pretty slow when a lot of changesets are involved but will track all
557 557 the copies.
558 558 """
559 559 m1 = c1.manifest()
560 560 m2 = c2.manifest()
561 561 mb = base.manifest()
562 562
563 563 copies1 = pathcopies(base, c1)
564 564 copies2 = pathcopies(base, c2)
565 565
566 566 inversecopies1 = {}
567 567 inversecopies2 = {}
568 568 for dst, src in copies1.items():
569 569 inversecopies1.setdefault(src, []).append(dst)
570 570 for dst, src in copies2.items():
571 571 inversecopies2.setdefault(src, []).append(dst)
572 572
573 573 copy = {}
574 574 diverge = {}
575 575 renamedelete = {}
576 576 allsources = set(inversecopies1) | set(inversecopies2)
577 577 for src in allsources:
578 578 dsts1 = inversecopies1.get(src)
579 579 dsts2 = inversecopies2.get(src)
580 580 if dsts1 and dsts2:
581 581 # copied/renamed on both sides
582 582 if src not in m1 and src not in m2:
583 583 # renamed on both sides
584 584 dsts1 = set(dsts1)
585 585 dsts2 = set(dsts2)
586 586 # If there's some overlap in the rename destinations, we
587 587 # consider it not divergent. For example, if side 1 copies 'a'
588 588 # to 'b' and 'c' and deletes 'a', and side 2 copies 'a' to 'c'
589 589 # and 'd' and deletes 'a'.
590 590 if dsts1 & dsts2:
591 591 for dst in dsts1 & dsts2:
592 592 copy[dst] = src
593 593 else:
594 594 diverge[src] = sorted(dsts1 | dsts2)
595 595 elif src in m1 and src in m2:
596 596 # copied on both sides
597 597 dsts1 = set(dsts1)
598 598 dsts2 = set(dsts2)
599 599 for dst in dsts1 & dsts2:
600 600 copy[dst] = src
601 601 # TODO: Handle cases where it was renamed on one side and copied
602 602 # on the other side
603 603 elif dsts1:
604 604 # copied/renamed only on side 1
605 605 _checksinglesidecopies(
606 606 src, dsts1, m1, m2, mb, c2, base, copy, renamedelete
607 607 )
608 608 elif dsts2:
609 609 # copied/renamed only on side 2
610 610 _checksinglesidecopies(
611 611 src, dsts2, m2, m1, mb, c1, base, copy, renamedelete
612 612 )
613 613
614 614 renamedeleteset = set()
615 615 divergeset = set()
616 616 for dsts in diverge.values():
617 617 divergeset.update(dsts)
618 618 for dsts in renamedelete.values():
619 619 renamedeleteset.update(dsts)
620 620
621 621 # find interesting file sets from manifests
622 622 addedinm1 = m1.filesnotin(mb, repo.narrowmatch())
623 623 addedinm2 = m2.filesnotin(mb, repo.narrowmatch())
624 624 u1 = sorted(addedinm1 - addedinm2)
625 625 u2 = sorted(addedinm2 - addedinm1)
626 626
627 627 header = b" unmatched files in %s"
628 628 if u1:
629 629 repo.ui.debug(b"%s:\n %s\n" % (header % b'local', b"\n ".join(u1)))
630 630 if u2:
631 631 repo.ui.debug(b"%s:\n %s\n" % (header % b'other', b"\n ".join(u2)))
632 632
633 633 fullcopy = copies1.copy()
634 634 fullcopy.update(copies2)
635 635 if not fullcopy:
636 636 return copy, {}, diverge, renamedelete, {}
637 637
638 638 if repo.ui.debugflag:
639 639 repo.ui.debug(
640 640 b" all copies found (* = to merge, ! = divergent, "
641 641 b"% = renamed and deleted):\n"
642 642 )
643 643 for f in sorted(fullcopy):
644 644 note = b""
645 645 if f in copy:
646 646 note += b"*"
647 647 if f in divergeset:
648 648 note += b"!"
649 649 if f in renamedeleteset:
650 650 note += b"%"
651 651 repo.ui.debug(
652 652 b" src: '%s' -> dst: '%s' %s\n" % (fullcopy[f], f, note)
653 653 )
654 654 del divergeset
655 655
656 656 repo.ui.debug(b" checking for directory renames\n")
657 657
658 658 # generate a directory move map
659 659 d1, d2 = c1.dirs(), c2.dirs()
660 660 invalid = set()
661 661 dirmove = {}
662 662
663 663 # examine each file copy for a potential directory move, which is
664 664 # when all the files in a directory are moved to a new directory
665 665 for dst, src in pycompat.iteritems(fullcopy):
666 666 dsrc, ddst = pathutil.dirname(src), pathutil.dirname(dst)
667 667 if dsrc in invalid:
668 668 # already seen to be uninteresting
669 669 continue
670 670 elif dsrc in d1 and ddst in d1:
671 671 # directory wasn't entirely moved locally
672 672 invalid.add(dsrc)
673 673 elif dsrc in d2 and ddst in d2:
674 674 # directory wasn't entirely moved remotely
675 675 invalid.add(dsrc)
676 676 elif dsrc in dirmove and dirmove[dsrc] != ddst:
677 677 # files from the same directory moved to two different places
678 678 invalid.add(dsrc)
679 679 else:
680 680 # looks good so far
681 681 dirmove[dsrc] = ddst
682 682
683 683 for i in invalid:
684 684 if i in dirmove:
685 685 del dirmove[i]
686 686 del d1, d2, invalid
687 687
688 688 if not dirmove:
689 689 return copy, {}, diverge, renamedelete, {}
690 690
691 691 dirmove = {k + b"/": v + b"/" for k, v in pycompat.iteritems(dirmove)}
692 692
693 693 for d in dirmove:
694 694 repo.ui.debug(
695 695 b" discovered dir src: '%s' -> dst: '%s'\n" % (d, dirmove[d])
696 696 )
697 697
698 698 movewithdir = {}
699 699 # check unaccounted nonoverlapping files against directory moves
700 700 for f in u1 + u2:
701 701 if f not in fullcopy:
702 702 for d in dirmove:
703 703 if f.startswith(d):
704 704 # new file added in a directory that was moved, move it
705 705 df = dirmove[d] + f[len(d) :]
706 706 if df not in copy:
707 707 movewithdir[f] = df
708 708 repo.ui.debug(
709 709 b" pending file src: '%s' -> dst: '%s'\n"
710 710 % (f, df)
711 711 )
712 712 break
713 713
714 714 return copy, movewithdir, diverge, renamedelete, dirmove
715 715
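The directory-move detection above invalidates a source directory as soon as one of its copied files contradicts the candidate move. A standalone sketch of that invalidation logic (hypothetical helper, not the module's own API):

```python
import posixpath

def detect_dirmoves(copies, dirs1, dirs2):
    """Map source directory -> destination directory when every copied
    file under a directory moved to the same new directory.

    copies: {dst path: src path}; dirs1/dirs2: directory sets of each side.
    """
    invalid, dirmove = set(), {}
    for dst, src in copies.items():
        dsrc, ddst = posixpath.dirname(src), posixpath.dirname(dst)
        if dsrc in invalid:
            continue  # already seen to be uninteresting
        if (dsrc in dirs1 and ddst in dirs1) or (dsrc in dirs2 and ddst in dirs2):
            invalid.add(dsrc)  # directory wasn't entirely moved on that side
        elif dirmove.get(dsrc, ddst) != ddst:
            invalid.add(dsrc)  # files from one directory moved to two places
        else:
            dirmove[dsrc] = ddst
    for d in invalid:
        dirmove.pop(d, None)
    # trailing slashes let callers prefix-match file paths
    return {k + '/': v + '/' for k, v in dirmove.items()}
```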
716 716
717 717 def _heuristicscopytracing(repo, c1, c2, base):
718 718 """ Fast copytracing using filename heuristics
719 719
720 720     Assumes that moves or renames are of the following two types:
721 721
722 722 1) Inside a directory only (same directory name but different filenames)
723 723 2) Move from one directory to another
724 724 (same filenames but different directory names)
725 725
726 726 Works only when there are no merge commits in the "source branch".
727 727     The source branch is the set of commits from base up to c2, excluding base.
728 728 
729 729     If a merge is involved, it falls back to _fullcopytracing().
730 730
731 731 Can be used by setting the following config:
732 732
733 733 [experimental]
734 734 copytrace = heuristics
735 735
736 736 In some cases the copy/move candidates found by heuristics can be very large
737 737     in number, which will make the algorithm slow. The number of possible
738 738     candidates to check can be limited by using the config
739 739     `experimental.copytrace.movecandidateslimit`, which defaults to 100.
740 740 """
741 741
742 742 if c1.rev() is None:
743 743 c1 = c1.p1()
744 744 if c2.rev() is None:
745 745 c2 = c2.p1()
746 746
747 747 copies = {}
748 748
749 749 changedfiles = set()
750 750 m1 = c1.manifest()
751 751 if not repo.revs(b'%d::%d', base.rev(), c2.rev()):
752 752 # If base is not in c2 branch, we switch to fullcopytracing
753 753 repo.ui.debug(
754 754 b"switching to full copytracing as base is not "
755 755 b"an ancestor of c2\n"
756 756 )
757 757 return _fullcopytracing(repo, c1, c2, base)
758 758
759 759 ctx = c2
760 760 while ctx != base:
761 761 if len(ctx.parents()) == 2:
762 762 # To keep things simple let's not handle merges
763 763 repo.ui.debug(b"switching to full copytracing because of merges\n")
764 764 return _fullcopytracing(repo, c1, c2, base)
765 765 changedfiles.update(ctx.files())
766 766 ctx = ctx.p1()
767 767
768 768 cp = _forwardcopies(base, c2)
769 769 for dst, src in pycompat.iteritems(cp):
770 770 if src in m1:
771 771 copies[dst] = src
772 772
773 773 # file is missing if it isn't present in the destination, but is present in
774 774 # the base and present in the source.
775 775 # Presence in the base is important to exclude added files, presence in the
776 776 # source is important to exclude removed files.
777 777 filt = lambda f: f not in m1 and f in base and f in c2
778 778 missingfiles = [f for f in changedfiles if filt(f)]
779 779
780 780 if missingfiles:
781 781 basenametofilename = collections.defaultdict(list)
782 782 dirnametofilename = collections.defaultdict(list)
783 783
784 784 for f in m1.filesnotin(base.manifest()):
785 785 basename = os.path.basename(f)
786 786 dirname = os.path.dirname(f)
787 787 basenametofilename[basename].append(f)
788 788 dirnametofilename[dirname].append(f)
789 789
790 790 for f in missingfiles:
791 791 basename = os.path.basename(f)
792 792 dirname = os.path.dirname(f)
793 793 samebasename = basenametofilename[basename]
794 794 samedirname = dirnametofilename[dirname]
795 795 movecandidates = samebasename + samedirname
796 796 # f is guaranteed to be present in c2, that's why
797 797 # c2.filectx(f) won't fail
798 798 f2 = c2.filectx(f)
799 799 # we can have a lot of candidates which can slow down the heuristics
800 800             # config value to limit the number of candidate moves to check
801 801 maxcandidates = repo.ui.configint(
802 802 b'experimental', b'copytrace.movecandidateslimit'
803 803 )
804 804
805 805 if len(movecandidates) > maxcandidates:
806 806 repo.ui.status(
807 807 _(
808 808 b"skipping copytracing for '%s', more "
809 809 b"candidates than the limit: %d\n"
810 810 )
811 811 % (f, len(movecandidates))
812 812 )
813 813 continue
814 814
815 815 for candidate in movecandidates:
816 816 f1 = c1.filectx(candidate)
817 817 if _related(f1, f2):
818 818 # if there are a few related copies then we'll merge
819 819 # changes into all of them. This matches the behaviour
820 820 # of upstream copytracing
821 821 copies[candidate] = f
822 822
823 823 return copies, {}, {}, {}, {}
824 824
825 825
826 826 def _related(f1, f2):
827 827 """return True if f1 and f2 filectx have a common ancestor
828 828
829 829 Walk back to common ancestor to see if the two files originate
830 830 from the same file. Since workingfilectx's rev() is None it messes
831 831 up the integer comparison logic, hence the pre-step check for
832 832 None (f1 and f2 can only be workingfilectx's initially).
833 833 """
834 834
835 835 if f1 == f2:
836 836 return True # a match
837 837
838 838 g1, g2 = f1.ancestors(), f2.ancestors()
839 839 try:
840 840 f1r, f2r = f1.linkrev(), f2.linkrev()
841 841
842 842 if f1r is None:
843 843 f1 = next(g1)
844 844 if f2r is None:
845 845 f2 = next(g2)
846 846
847 847 while True:
848 848 f1r, f2r = f1.linkrev(), f2.linkrev()
849 849 if f1r > f2r:
850 850 f1 = next(g1)
851 851 elif f2r > f1r:
852 852 f2 = next(g2)
853 853 else: # f1 and f2 point to files in the same linkrev
854 854 return f1 == f2 # true if they point to the same file
855 855 except StopIteration:
856 856 return False
857 857
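`_related` advances whichever iterator currently points at the higher linkrev until both sides meet. The walk can be illustrated with plain descending integer streams (hypothetical helper, not part of Mercurial):

```python
def have_common_linkrev(revs1, revs2):
    """Walk two descending revision streams until their values align.

    Returns True if the streams share a revision, False if either one
    is exhausted first -- mirroring the two-generator loop in _related.
    """
    it1, it2 = iter(revs1), iter(revs2)
    try:
        r1, r2 = next(it1), next(it2)
        while True:
            if r1 > r2:
                r1 = next(it1)  # advance the side with the higher rev
            elif r2 > r1:
                r2 = next(it2)
            else:
                return True  # both streams reached the same revision
    except StopIteration:
        return False
```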
858 858
859 def graftcopies(repo, wctx, ctx, base, skip=None):
860 """reproduce copies between base and ctx in the wctx
861
862 If skip is specified, it's a revision that should be used to
863 filter copy records. Any copies that occur between base and
864 skip will not be duplicated, even if they appear in the set of
865 copies between base and ctx.
866 """
867 exclude = {}
868 ctraceconfig = repo.ui.config(b'experimental', b'copytrace')
869 bctrace = stringutil.parsebool(ctraceconfig)
870 if skip is not None and (
871 ctraceconfig == b'heuristics' or bctrace or bctrace is None
872 ):
873 # copytrace='off' skips this line, but not the entire function because
874 # the line below is O(size of the repo) during a rebase, while the rest
875 # of the function is much faster (and is required for carrying copy
876 # metadata across the rebase anyway).
877 exclude = pathcopies(base, skip)
859 def graftcopies(wctx, ctx, base):
860 """reproduce copies between base and ctx in the wctx"""
878 861 new_copies = pathcopies(base, ctx)
879 862 _filter(wctx.p1(), wctx, new_copies)
880 863 for dst, src in pycompat.iteritems(new_copies):
881 if dst in exclude:
882 continue
883 864 wctx[dst].markcopied(src)
884 865
885 866
886 867 def computechangesetfilesadded(ctx):
887 868 """return the list of files added in a changeset
888 869 """
889 870 added = []
890 871 for f in ctx.files():
891 872 if not any(f in p for p in ctx.parents()):
892 873 added.append(f)
893 874 return added
894 875
895 876
896 877 def computechangesetfilesremoved(ctx):
897 878 """return the list of files removed in a changeset
898 879 """
899 880 removed = []
900 881 for f in ctx.files():
901 882 if f not in ctx:
902 883 removed.append(f)
903 884 return removed
904 885
905 886
906 887 def computechangesetcopies(ctx):
907 888 """return the copies data for a changeset
908 889
909 890     The copies data are returned as a pair of dictionaries (p1copies, p2copies).
910 891
911 892     Each dictionary is of the form: `{newname: oldname}`
912 893 """
913 894 p1copies = {}
914 895 p2copies = {}
915 896 p1 = ctx.p1()
916 897 p2 = ctx.p2()
917 898 narrowmatch = ctx._repo.narrowmatch()
918 899 for dst in ctx.files():
919 900 if not narrowmatch(dst) or dst not in ctx:
920 901 continue
921 902 copied = ctx[dst].renamed()
922 903 if not copied:
923 904 continue
924 905 src, srcnode = copied
925 906 if src in p1 and p1[src].filenode() == srcnode:
926 907 p1copies[dst] = src
927 908 elif src in p2 and p2[src].filenode() == srcnode:
928 909 p2copies[dst] = src
929 910 return p1copies, p2copies
930 911
931 912
932 913 def encodecopies(files, copies):
933 914 items = []
934 915 for i, dst in enumerate(files):
935 916 if dst in copies:
936 917 items.append(b'%d\0%s' % (i, copies[dst]))
937 918 if len(items) != len(copies):
938 919 raise error.ProgrammingError(
939 920 b'some copy targets missing from file list'
940 921 )
941 922 return b"\n".join(items)
942 923
943 924
944 925 def decodecopies(files, data):
945 926 try:
946 927 copies = {}
947 928 if not data:
948 929 return copies
949 930 for l in data.split(b'\n'):
950 931 strindex, src = l.split(b'\0')
951 932 i = int(strindex)
952 933 dst = files[i]
953 934 copies[dst] = src
954 935 return copies
955 936 except (ValueError, IndexError):
956 937 # Perhaps someone had chosen the same key name (e.g. "p1copies") and
957 938 # used different syntax for the value.
958 939 return None
959 940
960 941
961 942 def encodefileindices(files, subset):
962 943 subset = set(subset)
963 944 indices = []
964 945 for i, f in enumerate(files):
965 946 if f in subset:
966 947 indices.append(b'%d' % i)
967 948 return b'\n'.join(indices)
968 949
969 950
970 951 def decodefileindices(files, data):
971 952 try:
972 953 subset = []
973 954 if not data:
974 955 return subset
975 956 for strindex in data.split(b'\n'):
976 957 i = int(strindex)
977 958 if i < 0 or i >= len(files):
978 959 return None
979 960 subset.append(files[i])
980 961 return subset
981 962 except (ValueError, IndexError):
982 963 # Perhaps someone had chosen the same key name (e.g. "added") and
983 964 # used different syntax for the value.
984 965 return None
985 966
986 967
987 968 def _getsidedata(srcrepo, rev):
988 969 ctx = srcrepo[rev]
989 970 filescopies = computechangesetcopies(ctx)
990 971 filesadded = computechangesetfilesadded(ctx)
991 972 filesremoved = computechangesetfilesremoved(ctx)
992 973 sidedata = {}
993 974 if any([filescopies, filesadded, filesremoved]):
994 975 sortedfiles = sorted(ctx.files())
995 976 p1copies, p2copies = filescopies
996 977 p1copies = encodecopies(sortedfiles, p1copies)
997 978 p2copies = encodecopies(sortedfiles, p2copies)
998 979 filesadded = encodefileindices(sortedfiles, filesadded)
999 980 filesremoved = encodefileindices(sortedfiles, filesremoved)
1000 981 if p1copies:
1001 982 sidedata[sidedatamod.SD_P1COPIES] = p1copies
1002 983 if p2copies:
1003 984 sidedata[sidedatamod.SD_P2COPIES] = p2copies
1004 985 if filesadded:
1005 986 sidedata[sidedatamod.SD_FILESADDED] = filesadded
1006 987 if filesremoved:
1007 988 sidedata[sidedatamod.SD_FILESREMOVED] = filesremoved
1008 989 return sidedata
1009 990
1010 991
1011 992 def getsidedataadder(srcrepo, destrepo):
1012 993 use_w = srcrepo.ui.configbool(b'experimental', b'worker.repository-upgrade')
1013 994 if pycompat.iswindows or not use_w:
1014 995 return _get_simple_sidedata_adder(srcrepo, destrepo)
1015 996 else:
1016 997 return _get_worker_sidedata_adder(srcrepo, destrepo)
1017 998
1018 999
1019 1000 def _sidedata_worker(srcrepo, revs_queue, sidedata_queue, tokens):
1020 1001     """The function used by workers precomputing sidedata
1021 1002
1022 1003     It reads revision numbers from an input queue.
1023 1004     It writes (rev, <sidedata-map>) pairs to an output queue.
1024 1005
1025 1006 The `None` input value is used as a stop signal.
1026 1007
1027 1008     The `tokens` semaphore is used to avoid having too many unprocessed
1028 1009     entries. The workers need to acquire one token before fetching a task.
1029 1010     The tokens will be released by the consumer of the produced data.
1030 1011 """
1031 1012 tokens.acquire()
1032 1013 rev = revs_queue.get()
1033 1014 while rev is not None:
1034 1015 data = _getsidedata(srcrepo, rev)
1035 1016 sidedata_queue.put((rev, data))
1036 1017 tokens.acquire()
1037 1018 rev = revs_queue.get()
1038 1019 # processing of `None` is completed, release the token.
1039 1020 tokens.release()
1040 1021
1041 1022
1042 1023 BUFF_PER_WORKER = 50
1043 1024
1044 1025
1045 1026 def _get_worker_sidedata_adder(srcrepo, destrepo):
1046 1027 """The parallel version of the sidedata computation
1047 1028
1048 1029     This code spawns a pool of workers that precompute a buffer of sidedata
1049 1030 before we actually need them"""
1050 1031 # avoid circular import copies -> scmutil -> worker -> copies
1051 1032 from . import worker
1052 1033
1053 1034 nbworkers = worker._numworkers(srcrepo.ui)
1054 1035
1055 1036 tokens = multiprocessing.BoundedSemaphore(nbworkers * BUFF_PER_WORKER)
1056 1037 revsq = multiprocessing.Queue()
1057 1038 sidedataq = multiprocessing.Queue()
1058 1039
1059 1040 assert srcrepo.filtername is None
1060 1041     # queue all tasks beforehand, revision numbers are small and it makes
1061 1042 # synchronisation simpler
1062 1043 #
1063 1044 # Since the computation for each node can be quite expensive, the overhead
1064 1045     # of using a single queue is not relevant. In practice, most computations
1065 1046     # are fast but some are very expensive and dominate all the other smaller
1066 1047     # costs.
1067 1048 for r in srcrepo.changelog.revs():
1068 1049 revsq.put(r)
1069 1050 # queue the "no more tasks" markers
1070 1051 for i in range(nbworkers):
1071 1052 revsq.put(None)
1072 1053
1073 1054 allworkers = []
1074 1055 for i in range(nbworkers):
1075 1056 args = (srcrepo, revsq, sidedataq, tokens)
1076 1057 w = multiprocessing.Process(target=_sidedata_worker, args=args)
1077 1058 allworkers.append(w)
1078 1059 w.start()
1079 1060
1080 1061     # dictionary to store results for revisions higher than the one we are
1081 1062     # looking for. For example, if we need the sidedata map for 42, and 43 is
1082 1063     # received, we shelve 43 for later use.
1083 1064 staging = {}
1084 1065
1085 1066 def sidedata_companion(revlog, rev):
1086 1067 sidedata = {}
1087 1068         if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog
1088 1069             # Was the data previously shelved?
1089 1070 sidedata = staging.pop(rev, None)
1090 1071 if sidedata is None:
1091 1072                 # look at the queued results until we find the one we are
1092 1073                 # looking for (shelve the other ones)
1093 1074 r, sidedata = sidedataq.get()
1094 1075 while r != rev:
1095 1076 staging[r] = sidedata
1096 1077 r, sidedata = sidedataq.get()
1097 1078 tokens.release()
1098 1079 return False, (), sidedata
1099 1080
1100 1081 return sidedata_companion
1101 1082
1102 1083
1103 1084 def _get_simple_sidedata_adder(srcrepo, destrepo):
1104 1085 """The simple version of the sidedata computation
1105 1086
1106 1087 It just compute it in the same thread on request"""
1107 1088
1108 1089 def sidedatacompanion(revlog, rev):
1109 1090 sidedata = {}
1110 1091 if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog
1111 1092 sidedata = _getsidedata(srcrepo, rev)
1112 1093 return False, (), sidedata
1113 1094
1114 1095 return sidedatacompanion
1115 1096
1116 1097
1117 1098 def getsidedataremover(srcrepo, destrepo):
1118 1099 def sidedatacompanion(revlog, rev):
1119 1100 f = ()
1120 1101 if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog
1121 1102 if revlog.flags(rev) & REVIDX_SIDEDATA:
1122 1103 f = (
1123 1104 sidedatamod.SD_P1COPIES,
1124 1105 sidedatamod.SD_P2COPIES,
1125 1106 sidedatamod.SD_FILESADDED,
1126 1107 sidedatamod.SD_FILESREMOVED,
1127 1108 )
1128 1109 return False, f, {}
1129 1110
1130 1111 return sidedatacompanion
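The index-based sidedata encoding used by `encodecopies`/`decodecopies` can be exercised in isolation; below is a round-trip sketch (a standalone reimplementation of the record format, not the module's own helpers):

```python
def encode_copies(files, copies):
    """Encode {dst: src} against a sorted file list.

    Each record is b"<index-of-dst>\\0<src>"; records are joined by b"\\n",
    matching the SD_P1COPIES / SD_P2COPIES sidedata layout.
    """
    items = []
    for i, dst in enumerate(files):
        if dst in copies:
            items.append(b'%d\x00%s' % (i, copies[dst]))
    return b'\n'.join(items)

def decode_copies(files, data):
    """Inverse of encode_copies; resolves indices back through files."""
    copies = {}
    if not data:
        return copies
    for record in data.split(b'\n'):
        idx, src = record.split(b'\x00')
        copies[files[int(idx)]] = src
    return copies

files = [b'a.txt', b'b.txt', b'c.txt']
copies = {b'b.txt': b'a.txt'}
blob = encode_copies(files, copies)
assert decode_copies(files, blob) == copies
```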
@@ -1,2713 +1,2713 b''
1 1 # merge.py - directory-level update/merge handling for Mercurial
2 2 #
3 3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import shutil
12 12 import stat
13 13 import struct
14 14
15 15 from .i18n import _
16 16 from .node import (
17 17 addednodeid,
18 18 bin,
19 19 hex,
20 20 modifiednodeid,
21 21 nullhex,
22 22 nullid,
23 23 nullrev,
24 24 )
25 25 from .pycompat import delattr
26 26 from .thirdparty import attr
27 27 from . import (
28 28 copies,
29 29 encoding,
30 30 error,
31 31 filemerge,
32 32 match as matchmod,
33 33 obsutil,
34 34 pathutil,
35 35 pycompat,
36 36 scmutil,
37 37 subrepoutil,
38 38 util,
39 39 worker,
40 40 )
41 41 from .utils import hashutil
42 42
43 43 _pack = struct.pack
44 44 _unpack = struct.unpack
45 45
46 46
47 47 def _droponode(data):
48 48 # used for compatibility for v1
49 49 bits = data.split(b'\0')
50 50 bits = bits[:-2] + bits[-1:]
51 51 return b'\0'.join(bits)
52 52
53 53
54 54 # Merge state record types. See ``mergestate`` docs for more.
55 55 RECORD_LOCAL = b'L'
56 56 RECORD_OTHER = b'O'
57 57 RECORD_MERGED = b'F'
58 58 RECORD_CHANGEDELETE_CONFLICT = b'C'
59 59 RECORD_MERGE_DRIVER_MERGE = b'D'
60 60 RECORD_PATH_CONFLICT = b'P'
61 61 RECORD_MERGE_DRIVER_STATE = b'm'
62 62 RECORD_FILE_VALUES = b'f'
63 63 RECORD_LABELS = b'l'
64 64 RECORD_OVERRIDE = b't'
65 65 RECORD_UNSUPPORTED_MANDATORY = b'X'
66 66 RECORD_UNSUPPORTED_ADVISORY = b'x'
67 67
68 68 MERGE_DRIVER_STATE_UNMARKED = b'u'
69 69 MERGE_DRIVER_STATE_MARKED = b'm'
70 70 MERGE_DRIVER_STATE_SUCCESS = b's'
71 71
72 72 MERGE_RECORD_UNRESOLVED = b'u'
73 73 MERGE_RECORD_RESOLVED = b'r'
74 74 MERGE_RECORD_UNRESOLVED_PATH = b'pu'
75 75 MERGE_RECORD_RESOLVED_PATH = b'pr'
76 76 MERGE_RECORD_DRIVER_RESOLVED = b'd'
77 77
78 78 ACTION_FORGET = b'f'
79 79 ACTION_REMOVE = b'r'
80 80 ACTION_ADD = b'a'
81 81 ACTION_GET = b'g'
82 82 ACTION_PATH_CONFLICT = b'p'
83 83 ACTION_PATH_CONFLICT_RESOLVE = b'pr'
84 84 ACTION_ADD_MODIFIED = b'am'
85 85 ACTION_CREATED = b'c'
86 86 ACTION_DELETED_CHANGED = b'dc'
87 87 ACTION_CHANGED_DELETED = b'cd'
88 88 ACTION_MERGE = b'm'
89 89 ACTION_LOCAL_DIR_RENAME_GET = b'dg'
90 90 ACTION_DIR_RENAME_MOVE_LOCAL = b'dm'
91 91 ACTION_KEEP = b'k'
92 92 ACTION_EXEC = b'e'
93 93 ACTION_CREATED_MERGE = b'cm'
94 94
95 95
96 96 class mergestate(object):
97 97 '''track 3-way merge state of individual files
98 98
99 99 The merge state is stored on disk when needed. Two files are used: one with
100 100 an old format (version 1), and one with a new format (version 2). Version 2
101 101 stores a superset of the data in version 1, including new kinds of records
102 102 in the future. For more about the new format, see the documentation for
103 103 `_readrecordsv2`.
104 104
105 105 Each record can contain arbitrary content, and has an associated type. This
106 106 `type` should be a letter. If `type` is uppercase, the record is mandatory:
107 107 versions of Mercurial that don't support it should abort. If `type` is
108 108 lowercase, the record can be safely ignored.
109 109
110 110 Currently known records:
111 111
112 112 L: the node of the "local" part of the merge (hexified version)
113 113 O: the node of the "other" part of the merge (hexified version)
114 114 F: a file-to-be-merged entry
115 115 C: a change/delete or delete/change conflict
116 116 D: a file that the external merge driver will merge internally
117 117 (experimental)
118 118 P: a path conflict (file vs directory)
119 119 m: the external merge driver defined for this merge plus its run state
120 120 (experimental)
121 121 f: a (filename, dictionary) tuple of optional values for a given file
122 122 X: unsupported mandatory record type (used in tests)
123 123 x: unsupported advisory record type (used in tests)
124 124 l: the labels for the parts of the merge.
125 125
126 126 Merge driver run states (experimental):
127 127 u: driver-resolved files unmarked -- needs to be run next time we're about
128 128 to resolve or commit
129 129 m: driver-resolved files marked -- only needs to be run before commit
130 130 s: success/skipped -- does not need to be run any more
131 131
132 132 Merge record states (stored in self._state, indexed by filename):
133 133 u: unresolved conflict
134 134 r: resolved conflict
135 135 pu: unresolved path conflict (file conflicts with directory)
136 136 pr: resolved path conflict
137 137 d: driver-resolved conflict
138 138
139 139 The resolve command transitions between 'u' and 'r' for conflicts and
140 140 'pu' and 'pr' for path conflicts.
141 141 '''
142 142
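The docstring above states the mandatory/advisory rule: an uppercase record type must be understood (otherwise the reader aborts), while a lowercase one can be safely skipped. A minimal sketch of that rule, with an illustrative (not exhaustive) set of known types:

```python
# Sketch of the mandatory/advisory rule from the docstring above.
# KNOWN is illustrative; the real reader handles each known type
# specially rather than consulting a set.
KNOWN = {b'L', b'O', b'F', b'C', b'D', b'P', b'm', b'f', b'l'}

def classify(rtype):
    if rtype in KNOWN:
        return 'known'
    if rtype.islower():
        return 'advisory'  # unknown but safe to ignore
    return 'unsupported-mandatory'  # reader must abort

print(classify(b'f'))  # known
print(classify(b'x'))  # advisory
print(classify(b'X'))  # unsupported-mandatory
```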
143 143 statepathv1 = b'merge/state'
144 144 statepathv2 = b'merge/state2'
145 145
146 146 @staticmethod
147 147 def clean(repo, node=None, other=None, labels=None):
148 148 """Initialize a brand new merge state, removing any existing state on
149 149 disk."""
150 150 ms = mergestate(repo)
151 151 ms.reset(node, other, labels)
152 152 return ms
153 153
154 154 @staticmethod
155 155 def read(repo):
156 156 """Initialize the merge state, reading it from disk."""
157 157 ms = mergestate(repo)
158 158 ms._read()
159 159 return ms
160 160
161 161 def __init__(self, repo):
162 162 """Initialize the merge state.
163 163
164 164 Do not use this directly! Instead call read() or clean()."""
165 165 self._repo = repo
166 166 self._dirty = False
167 167 self._labels = None
168 168
169 169 def reset(self, node=None, other=None, labels=None):
170 170 self._state = {}
171 171 self._stateextras = {}
172 172 self._local = None
173 173 self._other = None
174 174 self._labels = labels
175 175 for var in ('localctx', 'otherctx'):
176 176 if var in vars(self):
177 177 delattr(self, var)
178 178 if node:
179 179 self._local = node
180 180 self._other = other
181 181 self._readmergedriver = None
182 182 if self.mergedriver:
183 183 self._mdstate = MERGE_DRIVER_STATE_SUCCESS
184 184 else:
185 185 self._mdstate = MERGE_DRIVER_STATE_UNMARKED
186 186 shutil.rmtree(self._repo.vfs.join(b'merge'), True)
187 187 self._results = {}
188 188 self._dirty = False
189 189
190 190 def _read(self):
191 191 """Analyse each record's content to restore the serialized state from disk
192 192
193 193 This function processes "record" entries produced by the de-serialization
194 194 of the on-disk file.
195 195 """
196 196 self._state = {}
197 197 self._stateextras = {}
198 198 self._local = None
199 199 self._other = None
200 200 for var in ('localctx', 'otherctx'):
201 201 if var in vars(self):
202 202 delattr(self, var)
203 203 self._readmergedriver = None
204 204 self._mdstate = MERGE_DRIVER_STATE_SUCCESS
205 205 unsupported = set()
206 206 records = self._readrecords()
207 207 for rtype, record in records:
208 208 if rtype == RECORD_LOCAL:
209 209 self._local = bin(record)
210 210 elif rtype == RECORD_OTHER:
211 211 self._other = bin(record)
212 212 elif rtype == RECORD_MERGE_DRIVER_STATE:
213 213 bits = record.split(b'\0', 1)
214 214 mdstate = bits[1]
215 215 if len(mdstate) != 1 or mdstate not in (
216 216 MERGE_DRIVER_STATE_UNMARKED,
217 217 MERGE_DRIVER_STATE_MARKED,
218 218 MERGE_DRIVER_STATE_SUCCESS,
219 219 ):
220 220 # the merge driver should be idempotent, so just rerun it
221 221 mdstate = MERGE_DRIVER_STATE_UNMARKED
222 222
223 223 self._readmergedriver = bits[0]
224 224 self._mdstate = mdstate
225 225 elif rtype in (
226 226 RECORD_MERGED,
227 227 RECORD_CHANGEDELETE_CONFLICT,
228 228 RECORD_PATH_CONFLICT,
229 229 RECORD_MERGE_DRIVER_MERGE,
230 230 ):
231 231 bits = record.split(b'\0')
232 232 self._state[bits[0]] = bits[1:]
233 233 elif rtype == RECORD_FILE_VALUES:
234 234 filename, rawextras = record.split(b'\0', 1)
235 235 extraparts = rawextras.split(b'\0')
236 236 extras = {}
237 237 i = 0
238 238 while i < len(extraparts):
239 239 extras[extraparts[i]] = extraparts[i + 1]
240 240 i += 2
241 241
242 242 self._stateextras[filename] = extras
243 243 elif rtype == RECORD_LABELS:
244 244 labels = record.split(b'\0', 2)
245 245 self._labels = [l for l in labels if len(l) > 0]
246 246 elif not rtype.islower():
247 247 unsupported.add(rtype)
248 248 self._results = {}
249 249 self._dirty = False
250 250
251 251 if unsupported:
252 252 raise error.UnsupportedMergeRecords(unsupported)
253 253
254 254 def _readrecords(self):
255 255 """Read merge state from disk and return a list of records (TYPE, data)
256 256
257 257 We read data from both v1 and v2 files and decide which one to use.
258 258
259 259 V1 has been used by versions prior to 2.9.1 and contains less data than
260 260 v2. We read both versions and check that no data in v2 contradicts
261 261 v1. If there is no contradiction we can safely assume that both v1
262 262 and v2 were written at the same time and use the extra data in v2. If
263 263 there is a contradiction we ignore the v2 content, as we assume an old version
264 264 of Mercurial has overwritten the mergestate file and left an old v2
265 265 file around.
266 266
267 267 returns a list of records [(TYPE, data), ...]"""
268 268 v1records = self._readrecordsv1()
269 269 v2records = self._readrecordsv2()
270 270 if self._v1v2match(v1records, v2records):
271 271 return v2records
272 272 else:
273 273 # v1 file is newer than v2 file, use it
274 274 # we have to infer the "other" changeset of the merge
275 275 # we cannot do better than that with v1 of the format
276 276 mctx = self._repo[None].parents()[-1]
277 277 v1records.append((RECORD_OTHER, mctx.hex()))
278 278 # add placeholder "other" file node information
279 279 # nobody is using it yet so we do not need to fetch the data
280 280 # if mctx was wrong, `mctx[bits[-2]]` may fail.
281 281 for idx, r in enumerate(v1records):
282 282 if r[0] == RECORD_MERGED:
283 283 bits = r[1].split(b'\0')
284 284 bits.insert(-2, b'')
285 285 v1records[idx] = (r[0], b'\0'.join(bits))
286 286 return v1records
287 287
288 288 def _v1v2match(self, v1records, v2records):
289 289 oldv2 = set() # old format version of v2 record
290 290 for rec in v2records:
291 291 if rec[0] == RECORD_LOCAL:
292 292 oldv2.add(rec)
293 293 elif rec[0] == RECORD_MERGED:
294 294 # drop the onode data (not contained in v1)
295 295 oldv2.add((RECORD_MERGED, _droponode(rec[1])))
296 296 for rec in v1records:
297 297 if rec not in oldv2:
298 298 return False
299 299 else:
300 300 return True
301 301
302 302 def _readrecordsv1(self):
303 303 """read on disk merge state for version 1 file
304 304
305 305 returns a list of records [(TYPE, data), ...]
306 306
307 307 Note: the "F" data from this file are one entry short
308 308 (no "other file node" entry)
309 309 """
310 310 records = []
311 311 try:
312 312 f = self._repo.vfs(self.statepathv1)
313 313 for i, l in enumerate(f):
314 314 if i == 0:
315 315 records.append((RECORD_LOCAL, l[:-1]))
316 316 else:
317 317 records.append((RECORD_MERGED, l[:-1]))
318 318 f.close()
319 319 except IOError as err:
320 320 if err.errno != errno.ENOENT:
321 321 raise
322 322 return records
323 323
324 324 def _readrecordsv2(self):
325 325 """read on disk merge state for version 2 file
326 326
327 327 This format is a list of arbitrary records of the form:
328 328
329 329 [type][length][content]
330 330
331 331 `type` is a single character, `length` is a 4 byte integer, and
332 332 `content` is an arbitrary byte sequence of length `length`.
333 333
334 334 Mercurial versions prior to 3.7 have a bug where if there are
335 335 unsupported mandatory merge records, attempting to clear out the merge
336 336 state with hg update --clean or similar aborts. The 't' record type
337 337 works around that by writing out what those versions treat as an
338 338 advisory record, but later versions interpret as special: the first
339 339 character is the 'real' record type and everything onwards is the data.
340 340
341 341 Returns list of records [(TYPE, data), ...]."""
342 342 records = []
343 343 try:
344 344 f = self._repo.vfs(self.statepathv2)
345 345 data = f.read()
346 346 off = 0
347 347 end = len(data)
348 348 while off < end:
349 349 rtype = data[off : off + 1]
350 350 off += 1
351 351 length = _unpack(b'>I', data[off : (off + 4)])[0]
352 352 off += 4
353 353 record = data[off : (off + length)]
354 354 off += length
355 355 if rtype == RECORD_OVERRIDE:
356 356 rtype, record = record[0:1], record[1:]
357 357 records.append((rtype, record))
358 358 f.close()
359 359 except IOError as err:
360 360 if err.errno != errno.ENOENT:
361 361 raise
362 362 return records
363 363
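The v2 on-disk format described in the docstring above ([type][length][content], with a one-byte type and a big-endian 4-byte length) can be exercised in isolation. This sketch mirrors the `b'>sI%is'` struct format used by `_writerecordsv2` and the read loop in `_readrecordsv2`, without the 't' override handling:

```python
import struct

# Sketch of the v2 record wire format: 1-byte type, big-endian
# 4-byte length, then the payload bytes.
def packrecord(rtype, data):
    return struct.pack(b'>sI%ds' % len(data), rtype, len(data), data)

def unpackrecords(blob):
    records, off = [], 0
    while off < len(blob):
        rtype = blob[off:off + 1]
        (length,) = struct.unpack(b'>I', blob[off + 1:off + 5])
        records.append((rtype, blob[off + 5:off + 5 + length]))
        off += 5 + length
    return records

blob = packrecord(b'L', b'abc') + packrecord(b'l', b'x\0y')
print(unpackrecords(blob))  # [(b'L', b'abc'), (b'l', b'x\x00y')]
```

Payloads may themselves contain NUL bytes (field separators), which is why the length prefix, not a delimiter, frames each record.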
364 364 @util.propertycache
365 365 def mergedriver(self):
366 366 # protect against the following:
367 367 # - A configures a malicious merge driver in their hgrc, then
368 368 # pauses the merge
369 369 # - A edits their hgrc to remove references to the merge driver
370 370 # - A gives a copy of their entire repo, including .hg, to B
371 371 # - B inspects .hgrc and finds it to be clean
372 372 # - B then continues the merge and the malicious merge driver
373 373 # gets invoked
374 374 configmergedriver = self._repo.ui.config(
375 375 b'experimental', b'mergedriver'
376 376 )
377 377 if (
378 378 self._readmergedriver is not None
379 379 and self._readmergedriver != configmergedriver
380 380 ):
381 381 raise error.ConfigError(
382 382 _(b"merge driver changed since merge started"),
383 383 hint=_(b"revert merge driver change or abort merge"),
384 384 )
385 385
386 386 return configmergedriver
387 387
388 388 @util.propertycache
389 389 def localctx(self):
390 390 if self._local is None:
391 391 msg = b"localctx accessed but self._local isn't set"
392 392 raise error.ProgrammingError(msg)
393 393 return self._repo[self._local]
394 394
395 395 @util.propertycache
396 396 def otherctx(self):
397 397 if self._other is None:
398 398 msg = b"otherctx accessed but self._other isn't set"
399 399 raise error.ProgrammingError(msg)
400 400 return self._repo[self._other]
401 401
402 402 def active(self):
403 403 """Whether mergestate is active.
404 404
405 405 Returns True if there appears to be mergestate. This is a rough proxy
406 406 for "is a merge in progress."
407 407 """
408 408 # Check local variables before looking at filesystem for performance
409 409 # reasons.
410 410 return (
411 411 bool(self._local)
412 412 or bool(self._state)
413 413 or self._repo.vfs.exists(self.statepathv1)
414 414 or self._repo.vfs.exists(self.statepathv2)
415 415 )
416 416
417 417 def commit(self):
418 418 """Write current state on disk (if necessary)"""
419 419 if self._dirty:
420 420 records = self._makerecords()
421 421 self._writerecords(records)
422 422 self._dirty = False
423 423
424 424 def _makerecords(self):
425 425 records = []
426 426 records.append((RECORD_LOCAL, hex(self._local)))
427 427 records.append((RECORD_OTHER, hex(self._other)))
428 428 if self.mergedriver:
429 429 records.append(
430 430 (
431 431 RECORD_MERGE_DRIVER_STATE,
432 432 b'\0'.join([self.mergedriver, self._mdstate]),
433 433 )
434 434 )
435 435 # Write out state items. In all cases, the value of the state map entry
436 436 # is written as the contents of the record. The record type depends on
437 437 # the type of state that is stored, and capital-letter records are used
438 438 # to prevent older versions of Mercurial that do not support the feature
439 439 # from loading them.
440 440 for filename, v in pycompat.iteritems(self._state):
441 441 if v[0] == MERGE_RECORD_DRIVER_RESOLVED:
442 442 # Driver-resolved merge. These are stored in 'D' records.
443 443 records.append(
444 444 (RECORD_MERGE_DRIVER_MERGE, b'\0'.join([filename] + v))
445 445 )
446 446 elif v[0] in (
447 447 MERGE_RECORD_UNRESOLVED_PATH,
448 448 MERGE_RECORD_RESOLVED_PATH,
449 449 ):
450 450 # Path conflicts. These are stored in 'P' records. The current
451 451 # resolution state ('pu' or 'pr') is stored within the record.
452 452 records.append(
453 453 (RECORD_PATH_CONFLICT, b'\0'.join([filename] + v))
454 454 )
455 455 elif v[1] == nullhex or v[6] == nullhex:
456 456 # Change/Delete or Delete/Change conflicts. These are stored in
457 457 # 'C' records. v[1] is the local file, and is nullhex when the
458 458 # file is deleted locally ('dc'). v[6] is the remote file, and
459 459 # is nullhex when the file is deleted remotely ('cd').
460 460 records.append(
461 461 (RECORD_CHANGEDELETE_CONFLICT, b'\0'.join([filename] + v))
462 462 )
463 463 else:
464 464 # Normal files. These are stored in 'F' records.
465 465 records.append((RECORD_MERGED, b'\0'.join([filename] + v)))
466 466 for filename, extras in sorted(pycompat.iteritems(self._stateextras)):
467 467 rawextras = b'\0'.join(
468 468 b'%s\0%s' % (k, v) for k, v in pycompat.iteritems(extras)
469 469 )
470 470 records.append(
471 471 (RECORD_FILE_VALUES, b'%s\0%s' % (filename, rawextras))
472 472 )
473 473 if self._labels is not None:
474 474 labels = b'\0'.join(self._labels)
475 475 records.append((RECORD_LABELS, labels))
476 476 return records
477 477
478 478 def _writerecords(self, records):
479 479 """Write current state on disk (both v1 and v2)"""
480 480 self._writerecordsv1(records)
481 481 self._writerecordsv2(records)
482 482
483 483 def _writerecordsv1(self, records):
484 484 """Write current state on disk in a version 1 file"""
485 485 f = self._repo.vfs(self.statepathv1, b'wb')
486 486 irecords = iter(records)
487 487 lrecords = next(irecords)
488 488 assert lrecords[0] == RECORD_LOCAL
489 489 f.write(hex(self._local) + b'\n')
490 490 for rtype, data in irecords:
491 491 if rtype == RECORD_MERGED:
492 492 f.write(b'%s\n' % _droponode(data))
493 493 f.close()
494 494
495 495 def _writerecordsv2(self, records):
496 496 """Write current state on disk in a version 2 file
497 497
498 498 See the docstring for _readrecordsv2 for why we use 't'."""
499 499 # these are the records that all version 2 clients can read
500 500 allowlist = (RECORD_LOCAL, RECORD_OTHER, RECORD_MERGED)
501 501 f = self._repo.vfs(self.statepathv2, b'wb')
502 502 for key, data in records:
503 503 assert len(key) == 1
504 504 if key not in allowlist:
505 505 key, data = RECORD_OVERRIDE, b'%s%s' % (key, data)
506 506 format = b'>sI%is' % len(data)
507 507 f.write(_pack(format, key, len(data), data))
508 508 f.close()
509 509
510 510 @staticmethod
511 511 def getlocalkey(path):
512 512 """hash the path of a local file context for storage in the .hg/merge
513 513 directory."""
514 514
515 515 return hex(hashutil.sha1(path).digest())
516 516
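`getlocalkey` above names the backup copy of the local file kept under `.hg/merge/`. A standalone sketch using the stdlib `hashlib` in place of Mercurial's `hashutil` wrapper (an assumption: `hashutil.sha1` is a thin SHA-1 wrapper, so the two produce the same key):

```python
import hashlib

# Sketch of mergestate.getlocalkey(): the hex SHA-1 of the file's
# repo-relative path, used as the filename under .hg/merge/.
def getlocalkey(path):
    return hashlib.sha1(path).hexdigest()

key = getlocalkey(b'foo/bar.txt')
print(key)          # 40 hex characters
print(len(key))     # 40
```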
517 517 def add(self, fcl, fco, fca, fd):
518 518 """add a new (potentially) conflicting file to the merge state
519 519 fcl: file context for local,
520 520 fco: file context for remote,
521 521 fca: file context for ancestors,
522 522 fd: file path of the resulting merge.
523 523
524 524 note: also write the local version to the `.hg/merge` directory.
525 525 """
526 526 if fcl.isabsent():
527 527 localkey = nullhex
528 528 else:
529 529 localkey = mergestate.getlocalkey(fcl.path())
530 530 self._repo.vfs.write(b'merge/' + localkey, fcl.data())
531 531 self._state[fd] = [
532 532 MERGE_RECORD_UNRESOLVED,
533 533 localkey,
534 534 fcl.path(),
535 535 fca.path(),
536 536 hex(fca.filenode()),
537 537 fco.path(),
538 538 hex(fco.filenode()),
539 539 fcl.flags(),
540 540 ]
541 541 self._stateextras[fd] = {b'ancestorlinknode': hex(fca.node())}
542 542 self._dirty = True
543 543
544 544 def addpath(self, path, frename, forigin):
545 545 """add a new conflicting path to the merge state
546 546 path: the path that conflicts
547 547 frename: the filename the conflicting file was renamed to
548 548 forigin: origin of the file ('l' or 'r' for local/remote)
549 549 """
550 550 self._state[path] = [MERGE_RECORD_UNRESOLVED_PATH, frename, forigin]
551 551 self._dirty = True
552 552
553 553 def __contains__(self, dfile):
554 554 return dfile in self._state
555 555
556 556 def __getitem__(self, dfile):
557 557 return self._state[dfile][0]
558 558
559 559 def __iter__(self):
560 560 return iter(sorted(self._state))
561 561
562 562 def files(self):
563 563 return self._state.keys()
564 564
565 565 def mark(self, dfile, state):
566 566 self._state[dfile][0] = state
567 567 self._dirty = True
568 568
569 569 def mdstate(self):
570 570 return self._mdstate
571 571
572 572 def unresolved(self):
573 573 """Obtain the paths of unresolved files."""
574 574
575 575 for f, entry in pycompat.iteritems(self._state):
576 576 if entry[0] in (
577 577 MERGE_RECORD_UNRESOLVED,
578 578 MERGE_RECORD_UNRESOLVED_PATH,
579 579 ):
580 580 yield f
581 581
582 582 def driverresolved(self):
583 583 """Obtain the paths of driver-resolved files."""
584 584
585 585 for f, entry in self._state.items():
586 586 if entry[0] == MERGE_RECORD_DRIVER_RESOLVED:
587 587 yield f
588 588
589 589 def extras(self, filename):
590 590 return self._stateextras.setdefault(filename, {})
591 591
592 592 def _resolve(self, preresolve, dfile, wctx):
593 593 """rerun merge process for file path `dfile`"""
594 594 if self[dfile] in (MERGE_RECORD_RESOLVED, MERGE_RECORD_DRIVER_RESOLVED):
595 595 return True, 0
596 596 stateentry = self._state[dfile]
597 597 state, localkey, lfile, afile, anode, ofile, onode, flags = stateentry
598 598 octx = self._repo[self._other]
599 599 extras = self.extras(dfile)
600 600 anccommitnode = extras.get(b'ancestorlinknode')
601 601 if anccommitnode:
602 602 actx = self._repo[anccommitnode]
603 603 else:
604 604 actx = None
605 605 fcd = self._filectxorabsent(localkey, wctx, dfile)
606 606 fco = self._filectxorabsent(onode, octx, ofile)
607 607 # TODO: move this to filectxorabsent
608 608 fca = self._repo.filectx(afile, fileid=anode, changectx=actx)
609 609 # "premerge" x flags
610 610 flo = fco.flags()
611 611 fla = fca.flags()
612 612 if b'x' in flags + flo + fla and b'l' not in flags + flo + fla:
613 613 if fca.node() == nullid and flags != flo:
614 614 if preresolve:
615 615 self._repo.ui.warn(
616 616 _(
617 617 b'warning: cannot merge flags for %s '
618 618 b'without common ancestor - keeping local flags\n'
619 619 )
620 620 % afile
621 621 )
622 622 elif flags == fla:
623 623 flags = flo
624 624 if preresolve:
625 625 # restore local
626 626 if localkey != nullhex:
627 627 f = self._repo.vfs(b'merge/' + localkey)
628 628 wctx[dfile].write(f.read(), flags)
629 629 f.close()
630 630 else:
631 631 wctx[dfile].remove(ignoremissing=True)
632 632 complete, r, deleted = filemerge.premerge(
633 633 self._repo,
634 634 wctx,
635 635 self._local,
636 636 lfile,
637 637 fcd,
638 638 fco,
639 639 fca,
640 640 labels=self._labels,
641 641 )
642 642 else:
643 643 complete, r, deleted = filemerge.filemerge(
644 644 self._repo,
645 645 wctx,
646 646 self._local,
647 647 lfile,
648 648 fcd,
649 649 fco,
650 650 fca,
651 651 labels=self._labels,
652 652 )
653 653 if r is None:
654 654 # no real conflict
655 655 del self._state[dfile]
656 656 self._stateextras.pop(dfile, None)
657 657 self._dirty = True
658 658 elif not r:
659 659 self.mark(dfile, MERGE_RECORD_RESOLVED)
660 660
661 661 if complete:
662 662 action = None
663 663 if deleted:
664 664 if fcd.isabsent():
665 665 # dc: local picked. Need to drop if present, which may
666 666 # happen on re-resolves.
667 667 action = ACTION_FORGET
668 668 else:
669 669 # cd: remote picked (or otherwise deleted)
670 670 action = ACTION_REMOVE
671 671 else:
672 672 if fcd.isabsent(): # dc: remote picked
673 673 action = ACTION_GET
674 674 elif fco.isabsent(): # cd: local picked
675 675 if dfile in self.localctx:
676 676 action = ACTION_ADD_MODIFIED
677 677 else:
678 678 action = ACTION_ADD
679 679 # else: regular merges (no action necessary)
680 680 self._results[dfile] = r, action
681 681
682 682 return complete, r
683 683
684 684 def _filectxorabsent(self, hexnode, ctx, f):
685 685 if hexnode == nullhex:
686 686 return filemerge.absentfilectx(ctx, f)
687 687 else:
688 688 return ctx[f]
689 689
690 690 def preresolve(self, dfile, wctx):
691 691 """run premerge process for dfile
692 692
693 693 Returns whether the merge is complete, and the exit code."""
694 694 return self._resolve(True, dfile, wctx)
695 695
696 696 def resolve(self, dfile, wctx):
697 697 """run merge process (assuming premerge was run) for dfile
698 698
699 699 Returns the exit code of the merge."""
700 700 return self._resolve(False, dfile, wctx)[1]
701 701
702 702 def counts(self):
703 703 """return counts for updated, merged and removed files in this
704 704 session"""
705 705 updated, merged, removed = 0, 0, 0
706 706 for r, action in pycompat.itervalues(self._results):
707 707 if r is None:
708 708 updated += 1
709 709 elif r == 0:
710 710 if action == ACTION_REMOVE:
711 711 removed += 1
712 712 else:
713 713 merged += 1
714 714 return updated, merged, removed
715 715
716 716 def unresolvedcount(self):
717 717 """get unresolved count for this merge (persistent)"""
718 718 return len(list(self.unresolved()))
719 719
720 720 def actions(self):
721 721 """return lists of actions to perform on the dirstate"""
722 722 actions = {
723 723 ACTION_REMOVE: [],
724 724 ACTION_FORGET: [],
725 725 ACTION_ADD: [],
726 726 ACTION_ADD_MODIFIED: [],
727 727 ACTION_GET: [],
728 728 }
729 729 for f, (r, action) in pycompat.iteritems(self._results):
730 730 if action is not None:
731 731 actions[action].append((f, None, b"merge result"))
732 732 return actions
733 733
734 734 def recordactions(self):
735 735 """record remove/add/get actions in the dirstate"""
736 736 branchmerge = self._repo.dirstate.p2() != nullid
737 737 recordupdates(self._repo, self.actions(), branchmerge, None)
738 738
739 739 def queueremove(self, f):
740 740 """queues a file to be removed from the dirstate
741 741
742 742 Meant for use by custom merge drivers."""
743 743 self._results[f] = 0, ACTION_REMOVE
744 744
745 745 def queueadd(self, f):
746 746 """queues a file to be added to the dirstate
747 747
748 748 Meant for use by custom merge drivers."""
749 749 self._results[f] = 0, ACTION_ADD
750 750
751 751 def queueget(self, f):
752 752 """queues a file to be marked modified in the dirstate
753 753
754 754 Meant for use by custom merge drivers."""
755 755 self._results[f] = 0, ACTION_GET
756 756
757 757
758 758 def _getcheckunknownconfig(repo, section, name):
759 759 config = repo.ui.config(section, name)
760 760 valid = [b'abort', b'ignore', b'warn']
761 761 if config not in valid:
762 762 validstr = b', '.join([b"'" + v + b"'" for v in valid])
763 763 raise error.ConfigError(
764 764 _(b"%s.%s not valid ('%s' is none of %s)")
765 765 % (section, name, config, validstr)
766 766 )
767 767 return config
768 768
769 769
770 770 def _checkunknownfile(repo, wctx, mctx, f, f2=None):
771 771 if wctx.isinmemory():
772 772 # Nothing to do in IMM because nothing in the "working copy" can be an
773 773 # unknown file.
774 774 #
775 775 # Note that we should bail out here, not in ``_checkunknownfiles()``,
776 776 # because that function does other useful work.
777 777 return False
778 778
779 779 if f2 is None:
780 780 f2 = f
781 781 return (
782 782 repo.wvfs.audit.check(f)
783 783 and repo.wvfs.isfileorlink(f)
784 784 and repo.dirstate.normalize(f) not in repo.dirstate
785 785 and mctx[f2].cmp(wctx[f])
786 786 )
787 787
788 788
789 789 class _unknowndirschecker(object):
790 790 """
791 791 Look for any unknown files or directories that may have a path conflict
792 792 with a file. If any path prefix of the file exists as a file or link,
793 793 then it conflicts. If the file itself is a directory that contains any
794 794 file that is not tracked, then it conflicts.
795 795
796 796 Returns the shortest path at which a conflict occurs, or None if there is
797 797 no conflict.
798 798 """
799 799
800 800 def __init__(self):
801 801 # A set of paths known to be good. This prevents repeated checking of
802 802 # dirs. It will be updated with any new dirs that are checked and found
803 803 # to be safe.
804 804 self._unknowndircache = set()
805 805
806 806 # A set of paths that are known to be absent. This prevents repeated
807 807 # checking of subdirectories that are known not to exist. It will be
808 808 # updated with any new dirs that are checked and found to be absent.
809 809 self._missingdircache = set()
810 810
811 811 def __call__(self, repo, wctx, f):
812 812 if wctx.isinmemory():
813 813 # Nothing to do in IMM for the same reason as ``_checkunknownfile``.
814 814 return False
815 815
816 816 # Check for path prefixes that exist as unknown files.
817 817 for p in reversed(list(pathutil.finddirs(f))):
818 818 if p in self._missingdircache:
819 819 return
820 820 if p in self._unknowndircache:
821 821 continue
822 822 if repo.wvfs.audit.check(p):
823 823 if (
824 824 repo.wvfs.isfileorlink(p)
825 825 and repo.dirstate.normalize(p) not in repo.dirstate
826 826 ):
827 827 return p
828 828 if not repo.wvfs.lexists(p):
829 829 self._missingdircache.add(p)
830 830 return
831 831 self._unknowndircache.add(p)
832 832
833 833 # Check if the file conflicts with a directory containing unknown files.
834 834 if repo.wvfs.audit.check(f) and repo.wvfs.isdir(f):
835 835 # Does the directory contain any files that are not in the dirstate?
836 836 for p, dirs, files in repo.wvfs.walk(f):
837 837 for fn in files:
838 838 relf = util.pconvert(repo.wvfs.reljoin(p, fn))
839 839 relf = repo.dirstate.normalize(relf, isknown=True)
840 840 if relf not in repo.dirstate:
841 841 return f
842 842 return None
843 843
844 844
845 845 def _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce):
846 846 """
847 847 Considers any actions that care about the presence of conflicting unknown
848 848 files. For some actions, the result is to abort; for others, it is to
849 849 choose a different action.
850 850 """
851 851 fileconflicts = set()
852 852 pathconflicts = set()
853 853 warnconflicts = set()
854 854 abortconflicts = set()
855 855 unknownconfig = _getcheckunknownconfig(repo, b'merge', b'checkunknown')
856 856 ignoredconfig = _getcheckunknownconfig(repo, b'merge', b'checkignored')
857 857 pathconfig = repo.ui.configbool(
858 858 b'experimental', b'merge.checkpathconflicts'
859 859 )
860 860 if not force:
861 861
862 862 def collectconflicts(conflicts, config):
863 863 if config == b'abort':
864 864 abortconflicts.update(conflicts)
865 865 elif config == b'warn':
866 866 warnconflicts.update(conflicts)
867 867
868 868 checkunknowndirs = _unknowndirschecker()
869 869 for f, (m, args, msg) in pycompat.iteritems(actions):
870 870 if m in (ACTION_CREATED, ACTION_DELETED_CHANGED):
871 871 if _checkunknownfile(repo, wctx, mctx, f):
872 872 fileconflicts.add(f)
873 873 elif pathconfig and f not in wctx:
874 874 path = checkunknowndirs(repo, wctx, f)
875 875 if path is not None:
876 876 pathconflicts.add(path)
877 877 elif m == ACTION_LOCAL_DIR_RENAME_GET:
878 878 if _checkunknownfile(repo, wctx, mctx, f, args[0]):
879 879 fileconflicts.add(f)
880 880
881 881 allconflicts = fileconflicts | pathconflicts
882 882 ignoredconflicts = {c for c in allconflicts if repo.dirstate._ignore(c)}
883 883 unknownconflicts = allconflicts - ignoredconflicts
884 884 collectconflicts(ignoredconflicts, ignoredconfig)
885 885 collectconflicts(unknownconflicts, unknownconfig)
886 886 else:
887 887 for f, (m, args, msg) in pycompat.iteritems(actions):
888 888 if m == ACTION_CREATED_MERGE:
889 889 fl2, anc = args
890 890 different = _checkunknownfile(repo, wctx, mctx, f)
891 891 if repo.dirstate._ignore(f):
892 892 config = ignoredconfig
893 893 else:
894 894 config = unknownconfig
895 895
896 896 # The behavior when force is True is described by this table:
897 897 # config different mergeforce | action backup
898 898 # * n * | get n
899 899 # * y y | merge -
900 900 # abort y n | merge - (1)
901 901 # warn y n | warn + get y
902 902 # ignore y n | get y
903 903 #
904 904 # (1) this is probably the wrong behavior here -- we should
905 905 # probably abort, but some actions like rebases currently
906 906 # don't like an abort happening in the middle of
907 907 # merge.update.
908 908 if not different:
909 909 actions[f] = (ACTION_GET, (fl2, False), b'remote created')
910 910 elif mergeforce or config == b'abort':
911 911 actions[f] = (
912 912 ACTION_MERGE,
913 913 (f, f, None, False, anc),
914 914 b'remote differs from untracked local',
915 915 )
916 916 elif config == b'abort':
917 917 abortconflicts.add(f)
918 918 else:
919 919 if config == b'warn':
920 920 warnconflicts.add(f)
921 921 actions[f] = (ACTION_GET, (fl2, True), b'remote created')
922 922
923 923 for f in sorted(abortconflicts):
924 924 warn = repo.ui.warn
925 925 if f in pathconflicts:
926 926 if repo.wvfs.isfileorlink(f):
927 927 warn(_(b"%s: untracked file conflicts with directory\n") % f)
928 928 else:
929 929 warn(_(b"%s: untracked directory conflicts with file\n") % f)
930 930 else:
931 931 warn(_(b"%s: untracked file differs\n") % f)
932 932 if abortconflicts:
933 933 raise error.Abort(
934 934 _(
935 935 b"untracked files in working directory "
936 936 b"differ from files in requested revision"
937 937 )
938 938 )
939 939
940 940 for f in sorted(warnconflicts):
941 941 if repo.wvfs.isfileorlink(f):
942 942 repo.ui.warn(_(b"%s: replacing untracked file\n") % f)
943 943 else:
944 944 repo.ui.warn(_(b"%s: replacing untracked files in directory\n") % f)
945 945
946 946 for f, (m, args, msg) in pycompat.iteritems(actions):
947 947 if m == ACTION_CREATED:
948 948 backup = (
949 949 f in fileconflicts
950 950 or f in pathconflicts
951 951 or any(p in pathconflicts for p in pathutil.finddirs(f))
952 952 )
953 953 (flags,) = args
954 954 actions[f] = (ACTION_GET, (flags, backup), msg)
955 955
956 956
def _forgetremoved(wctx, mctx, branchmerge):
    """
    Forget removed files

    If we're jumping between revisions (as opposed to merging), and if
    neither the working directory nor the target rev has the file,
    then we need to remove it from the dirstate, to prevent the
    dirstate from listing the file when it is no longer in the
    manifest.

    If we're merging, and the other revision has removed a file
    that is not present in the working directory, we need to mark it
    as removed.
    """

    actions = {}
    m = ACTION_FORGET
    if branchmerge:
        m = ACTION_REMOVE
    for f in wctx.deleted():
        if f not in mctx:
            actions[f] = m, None, b"forget deleted"

    if not branchmerge:
        for f in wctx.removed():
            if f not in mctx:
                actions[f] = ACTION_FORGET, None, b"forget removed"

    return actions


def _checkcollision(repo, wmf, actions):
    """
    Check for case-folding collisions.
    """

    # If the repo is narrowed, filter out files outside the narrowspec.
    narrowmatch = repo.narrowmatch()
    if not narrowmatch.always():
        wmf = wmf.matches(narrowmatch)
        if actions:
            narrowactions = {}
            for m, actionsfortype in pycompat.iteritems(actions):
                narrowactions[m] = []
                for (f, args, msg) in actionsfortype:
                    if narrowmatch(f):
                        narrowactions[m].append((f, args, msg))
            actions = narrowactions

    # build provisional merged manifest up
    pmmf = set(wmf)

    if actions:
        # KEEP and EXEC are no-op
        for m in (
            ACTION_ADD,
            ACTION_ADD_MODIFIED,
            ACTION_FORGET,
            ACTION_GET,
            ACTION_CHANGED_DELETED,
            ACTION_DELETED_CHANGED,
        ):
            for f, args, msg in actions[m]:
                pmmf.add(f)
        for f, args, msg in actions[ACTION_REMOVE]:
            pmmf.discard(f)
        for f, args, msg in actions[ACTION_DIR_RENAME_MOVE_LOCAL]:
            f2, flags = args
            pmmf.discard(f2)
            pmmf.add(f)
        for f, args, msg in actions[ACTION_LOCAL_DIR_RENAME_GET]:
            pmmf.add(f)
        for f, args, msg in actions[ACTION_MERGE]:
            f1, f2, fa, move, anc = args
            if move:
                pmmf.discard(f1)
            pmmf.add(f)

    # check case-folding collision in provisional merged manifest
    foldmap = {}
    for f in pmmf:
        fold = util.normcase(f)
        if fold in foldmap:
            raise error.Abort(
                _(b"case-folding collision between %s and %s")
                % (f, foldmap[fold])
            )
        foldmap[fold] = f

    # check case-folding of directories
    foldprefix = unfoldprefix = lastfull = b''
    for fold, f in sorted(foldmap.items()):
        if fold.startswith(foldprefix) and not f.startswith(unfoldprefix):
            # the folded prefix matches but actual casing is different
            raise error.Abort(
                _(b"case-folding collision between %s and directory of %s")
                % (lastfull, f)
            )
        foldprefix = fold + b'/'
        unfoldprefix = f + b'/'
        lastfull = f


def driverpreprocess(repo, ms, wctx, labels=None):
    """run the preprocess step of the merge driver, if any

    This is currently not implemented -- it's an extension point."""
    return True


def driverconclude(repo, ms, wctx, labels=None):
    """run the conclude step of the merge driver, if any

    This is currently not implemented -- it's an extension point."""
    return True


def _filesindirs(repo, manifest, dirs):
    """
    Generator that yields pairs of all the files in the manifest that are found
    inside the directories listed in dirs, and which directory they are found
    in.
    """
    for f in manifest:
        for p in pathutil.finddirs(f):
            if p in dirs:
                yield f, p
                break


def checkpathconflicts(repo, wctx, mctx, actions):
    """
    Check if any actions introduce path conflicts in the repository, updating
    actions to record or handle the path conflict accordingly.
    """
    mf = wctx.manifest()

    # The set of local files that conflict with a remote directory.
    localconflicts = set()

    # The set of directories that conflict with a remote file, and so may cause
    # conflicts if they still contain any files after the merge.
    remoteconflicts = set()

    # The set of directories that appear as both a file and a directory in the
    # remote manifest. These indicate an invalid remote manifest, which
    # can't be updated to cleanly.
    invalidconflicts = set()

    # The set of directories that contain files that are being created.
    createdfiledirs = set()

    # The set of files deleted by all the actions.
    deletedfiles = set()

    for f, (m, args, msg) in actions.items():
        if m in (
            ACTION_CREATED,
            ACTION_DELETED_CHANGED,
            ACTION_MERGE,
            ACTION_CREATED_MERGE,
        ):
            # This action may create a new local file.
            createdfiledirs.update(pathutil.finddirs(f))
            if mf.hasdir(f):
                # The file aliases a local directory. This might be ok if all
                # the files in the local directory are being deleted. This
                # will be checked once we know what all the deleted files are.
                remoteconflicts.add(f)
        # Track the names of all deleted files.
        if m == ACTION_REMOVE:
            deletedfiles.add(f)
        if m == ACTION_MERGE:
            f1, f2, fa, move, anc = args
            if move:
                deletedfiles.add(f1)
        if m == ACTION_DIR_RENAME_MOVE_LOCAL:
            f2, flags = args
            deletedfiles.add(f2)

    # Check all directories that contain created files for path conflicts.
    for p in createdfiledirs:
        if p in mf:
            if p in mctx:
                # A file is in a directory which aliases both a local
                # and a remote file. This is an internal inconsistency
                # within the remote manifest.
                invalidconflicts.add(p)
            else:
                # A file is in a directory which aliases a local file.
                # We will need to rename the local file.
                localconflicts.add(p)
        if p in actions and actions[p][0] in (
            ACTION_CREATED,
            ACTION_DELETED_CHANGED,
            ACTION_MERGE,
            ACTION_CREATED_MERGE,
        ):
            # The file is in a directory which aliases a remote file.
            # This is an internal inconsistency within the remote
            # manifest.
            invalidconflicts.add(p)

    # Rename all local conflicting files that have not been deleted.
    for p in localconflicts:
        if p not in deletedfiles:
            ctxname = bytes(wctx).rstrip(b'+')
            pnew = util.safename(p, ctxname, wctx, set(actions.keys()))
            actions[pnew] = (
                ACTION_PATH_CONFLICT_RESOLVE,
                (p,),
                b'local path conflict',
            )
            actions[p] = (ACTION_PATH_CONFLICT, (pnew, b'l'), b'path conflict')

    if remoteconflicts:
        # Check if all files in the conflicting directories have been removed.
        ctxname = bytes(mctx).rstrip(b'+')
        for f, p in _filesindirs(repo, mf, remoteconflicts):
            if f not in deletedfiles:
                m, args, msg = actions[p]
                pnew = util.safename(p, ctxname, wctx, set(actions.keys()))
                if m in (ACTION_DELETED_CHANGED, ACTION_MERGE):
                    # Action was merge, just update target.
                    actions[pnew] = (m, args, msg)
                else:
                    # Action was create, change to renamed get action.
                    fl = args[0]
                    actions[pnew] = (
                        ACTION_LOCAL_DIR_RENAME_GET,
                        (p, fl),
                        b'remote path conflict',
                    )
                actions[p] = (
                    ACTION_PATH_CONFLICT,
                    (pnew, ACTION_REMOVE),
                    b'path conflict',
                )
                remoteconflicts.remove(p)
                break

    if invalidconflicts:
        for p in invalidconflicts:
            repo.ui.warn(_(b"%s: is both a file and a directory\n") % p)
        raise error.Abort(_(b"destination manifest contains path conflicts"))


def _filternarrowactions(narrowmatch, branchmerge, actions):
    """
    Filters out actions that can be ignored because the repo is narrowed.

    Raise an exception if the merge cannot be completed because the repo is
    narrowed.
    """
    nooptypes = {b'k'}  # TODO: handle with nonconflicttypes
    nonconflicttypes = set(b'a am c cm f g r e'.split())
    # We mutate the items in the dict during iteration, so iterate
    # over a copy.
    for f, action in list(actions.items()):
        if narrowmatch(f):
            pass
        elif not branchmerge:
            del actions[f]  # just updating, ignore changes outside clone
        elif action[0] in nooptypes:
            del actions[f]  # merge does not affect file
        elif action[0] in nonconflicttypes:
            raise error.Abort(
                _(
                    b'merge affects file \'%s\' outside narrow, '
                    b'which is not yet supported'
                )
                % f,
                hint=_(b'merging in the other direction may work'),
            )
        else:
            raise error.Abort(
                _(b'conflict in file \'%s\' is outside narrow clone') % f
            )


def manifestmerge(
    repo,
    wctx,
    p2,
    pa,
    branchmerge,
    force,
    matcher,
    acceptremote,
    followcopies,
    forcefulldiff=False,
):
    """
    Merge wctx and p2 with ancestor pa and generate merge action list

    branchmerge and force are as passed in to update
    matcher = matcher to filter file lists
    acceptremote = accept the incoming changes without prompting
    """
    if matcher is not None and matcher.always():
        matcher = None

    copy, movewithdir, diverge, renamedelete, dirmove = {}, {}, {}, {}, {}

    # manifests fetched in order are going to be faster, so prime the caches
    [
        x.manifest()
        for x in sorted(wctx.parents() + [p2, pa], key=scmutil.intrev)
    ]

    if followcopies:
        ret = copies.mergecopies(repo, wctx, p2, pa)
        copy, movewithdir, diverge, renamedelete, dirmove = ret

    boolbm = pycompat.bytestr(bool(branchmerge))
    boolf = pycompat.bytestr(bool(force))
    boolm = pycompat.bytestr(bool(matcher))
    repo.ui.note(_(b"resolving manifests\n"))
    repo.ui.debug(
        b" branchmerge: %s, force: %s, partial: %s\n" % (boolbm, boolf, boolm)
    )
    repo.ui.debug(b" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2))

    m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest()
    copied = set(copy.values())
    copied.update(movewithdir.values())

    if b'.hgsubstate' in m1 and wctx.rev() is None:
        # Check whether sub state is modified, and overwrite the manifest
        # to flag the change. If wctx is a committed revision, we shouldn't
        # care for the dirty state of the working directory.
        if any(wctx.sub(s).dirty() for s in wctx.substate):
            m1[b'.hgsubstate'] = modifiednodeid

    # Don't use m2-vs-ma optimization if:
    # - ma is the same as m1 or m2, which we're just going to diff again later
    # - The caller specifically asks for a full diff, which is useful during bid
    #   merge.
    if pa not in ([wctx, p2] + wctx.parents()) and not forcefulldiff:
        # Identify which files are relevant to the merge, so we can limit the
        # total m1-vs-m2 diff to just those files. This has significant
        # performance benefits in large repositories.
        relevantfiles = set(ma.diff(m2).keys())

        # For copied and moved files, we need to add the source file too.
        for copykey, copyvalue in pycompat.iteritems(copy):
            if copyvalue in relevantfiles:
                relevantfiles.add(copykey)
        for movedirkey in movewithdir:
            relevantfiles.add(movedirkey)
        filesmatcher = scmutil.matchfiles(repo, relevantfiles)
        matcher = matchmod.intersectmatchers(matcher, filesmatcher)

    diff = m1.diff(m2, match=matcher)

    actions = {}
    for f, ((n1, fl1), (n2, fl2)) in pycompat.iteritems(diff):
        if n1 and n2:  # file exists on both local and remote side
            if f not in ma:
                fa = copy.get(f, None)
                if fa is not None:
                    actions[f] = (
                        ACTION_MERGE,
                        (f, f, fa, False, pa.node()),
                        b'both renamed from %s' % fa,
                    )
                else:
                    actions[f] = (
                        ACTION_MERGE,
                        (f, f, None, False, pa.node()),
                        b'both created',
                    )
            else:
                a = ma[f]
                fla = ma.flags(f)
                nol = b'l' not in fl1 + fl2 + fla
                if n2 == a and fl2 == fla:
                    actions[f] = (ACTION_KEEP, (), b'remote unchanged')
                elif n1 == a and fl1 == fla:  # local unchanged - use remote
                    if n1 == n2:  # optimization: keep local content
                        actions[f] = (
                            ACTION_EXEC,
                            (fl2,),
                            b'update permissions',
                        )
                    else:
                        actions[f] = (
                            ACTION_GET,
                            (fl2, False),
                            b'remote is newer',
                        )
                elif nol and n2 == a:  # remote only changed 'x'
                    actions[f] = (ACTION_EXEC, (fl2,), b'update permissions')
                elif nol and n1 == a:  # local only changed 'x'
                    actions[f] = (ACTION_GET, (fl1, False), b'remote is newer')
                else:  # both changed something
                    actions[f] = (
                        ACTION_MERGE,
                        (f, f, f, False, pa.node()),
                        b'versions differ',
                    )
        elif n1:  # file exists only on local side
            if f in copied:
                pass  # we'll deal with it on m2 side
            elif f in movewithdir:  # directory rename, move local
                f2 = movewithdir[f]
                if f2 in m2:
                    actions[f2] = (
                        ACTION_MERGE,
                        (f, f2, None, True, pa.node()),
                        b'remote directory rename, both created',
                    )
                else:
                    actions[f2] = (
                        ACTION_DIR_RENAME_MOVE_LOCAL,
                        (f, fl1),
                        b'remote directory rename - move from %s' % f,
                    )
            elif f in copy:
                f2 = copy[f]
                actions[f] = (
                    ACTION_MERGE,
                    (f, f2, f2, False, pa.node()),
                    b'local copied/moved from %s' % f2,
                )
            elif f in ma:  # clean, a different, no remote
                if n1 != ma[f]:
                    if acceptremote:
                        actions[f] = (ACTION_REMOVE, None, b'remote delete')
                    else:
                        actions[f] = (
                            ACTION_CHANGED_DELETED,
                            (f, None, f, False, pa.node()),
                            b'prompt changed/deleted',
                        )
                elif n1 == addednodeid:
                    # This extra 'a' is added by working copy manifest to mark
                    # the file as locally added. We should forget it instead of
                    # deleting it.
                    actions[f] = (ACTION_FORGET, None, b'remote deleted')
                else:
                    actions[f] = (ACTION_REMOVE, None, b'other deleted')
        elif n2:  # file exists only on remote side
            if f in copied:
                pass  # we'll deal with it on m1 side
            elif f in movewithdir:
                f2 = movewithdir[f]
                if f2 in m1:
                    actions[f2] = (
                        ACTION_MERGE,
                        (f2, f, None, False, pa.node()),
                        b'local directory rename, both created',
                    )
                else:
                    actions[f2] = (
                        ACTION_LOCAL_DIR_RENAME_GET,
                        (f, fl2),
                        b'local directory rename - get from %s' % f,
                    )
            elif f in copy:
                f2 = copy[f]
                if f2 in m2:
                    actions[f] = (
                        ACTION_MERGE,
                        (f2, f, f2, False, pa.node()),
                        b'remote copied from %s' % f2,
                    )
                else:
                    actions[f] = (
                        ACTION_MERGE,
                        (f2, f, f2, True, pa.node()),
                        b'remote moved from %s' % f2,
                    )
            elif f not in ma:
                # local unknown, remote created: the logic is described by the
                # following table:
                #
                # force  branchmerge  different  |  action
                #   n         *           *      |  create
                #   y         n           *      |  create
                #   y         y           n      |  create
                #   y         y           y      |  merge
                #
                # Checking whether the files are different is expensive, so we
                # don't do that when we can avoid it.
                if not force:
                    actions[f] = (ACTION_CREATED, (fl2,), b'remote created')
                elif not branchmerge:
                    actions[f] = (ACTION_CREATED, (fl2,), b'remote created')
                else:
                    actions[f] = (
                        ACTION_CREATED_MERGE,
                        (fl2, pa.node()),
                        b'remote created, get or merge',
                    )
            elif n2 != ma[f]:
                df = None
                for d in dirmove:
                    if f.startswith(d):
                        # new file added in a directory that was moved
                        df = dirmove[d] + f[len(d) :]
                        break
                if df is not None and df in m1:
                    actions[df] = (
                        ACTION_MERGE,
                        (df, f, f, False, pa.node()),
                        b'local directory rename - respect move '
                        b'from %s' % f,
                    )
                elif acceptremote:
                    actions[f] = (ACTION_CREATED, (fl2,), b'remote recreating')
                else:
                    actions[f] = (
                        ACTION_DELETED_CHANGED,
                        (None, f, f, False, pa.node()),
                        b'prompt deleted/changed',
                    )

    if repo.ui.configbool(b'experimental', b'merge.checkpathconflicts'):
        # If we are merging, look for path conflicts.
        checkpathconflicts(repo, wctx, p2, actions)

    narrowmatch = repo.narrowmatch()
    if not narrowmatch.always():
        # Updates "actions" in place
        _filternarrowactions(narrowmatch, branchmerge, actions)

    return actions, diverge, renamedelete


def _resolvetrivial(repo, wctx, mctx, ancestor, actions):
    """Resolves false conflicts where the nodeid changed but the content
    remained the same."""
    # We force a copy of actions.items() because we're going to mutate
    # actions as we resolve trivial conflicts.
    for f, (m, args, msg) in list(actions.items()):
        if (
            m == ACTION_CHANGED_DELETED
            and f in ancestor
            and not wctx[f].cmp(ancestor[f])
        ):
            # local did change but ended up with same content
            actions[f] = ACTION_REMOVE, None, b'prompt same'
        elif (
            m == ACTION_DELETED_CHANGED
            and f in ancestor
            and not mctx[f].cmp(ancestor[f])
        ):
            # remote did change but ended up with same content
            del actions[f]  # don't get = keep local deleted


def calculateupdates(
    repo,
    wctx,
    mctx,
    ancestors,
    branchmerge,
    force,
    acceptremote,
    followcopies,
    matcher=None,
    mergeforce=False,
):
    """Calculate the actions needed to merge mctx into wctx using ancestors"""
    # Avoid cycle.
    from . import sparse

    if len(ancestors) == 1:  # default
        actions, diverge, renamedelete = manifestmerge(
            repo,
            wctx,
            mctx,
            ancestors[0],
            branchmerge,
            force,
            matcher,
            acceptremote,
            followcopies,
        )
        _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce)

    else:  # only when merge.preferancestor=* - the default
        repo.ui.note(
            _(b"note: merging %s and %s using bids from ancestors %s\n")
            % (
                wctx,
                mctx,
                _(b' and ').join(pycompat.bytestr(anc) for anc in ancestors),
            )
        )

        # Call for bids
        fbids = (
            {}
        )  # mapping filename to bids (action method to list of actions)
        diverge, renamedelete = None, None
        for ancestor in ancestors:
            repo.ui.note(_(b'\ncalculating bids for ancestor %s\n') % ancestor)
            actions, diverge1, renamedelete1 = manifestmerge(
                repo,
                wctx,
                mctx,
                ancestor,
                branchmerge,
                force,
                matcher,
                acceptremote,
                followcopies,
                forcefulldiff=True,
            )
            _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce)

            # Track the shortest set of warnings on the theory that bid
            # merge will correctly incorporate more information
            if diverge is None or len(diverge1) < len(diverge):
                diverge = diverge1
            if renamedelete is None or len(renamedelete) < len(renamedelete1):
                renamedelete = renamedelete1

            for f, a in sorted(pycompat.iteritems(actions)):
                m, args, msg = a
                repo.ui.debug(b' %s: %s -> %s\n' % (f, msg, m))
                if f in fbids:
                    d = fbids[f]
                    if m in d:
                        d[m].append(a)
                    else:
                        d[m] = [a]
                else:
                    fbids[f] = {m: [a]}

        # Pick the best bid for each file
        repo.ui.note(_(b'\nauction for merging merge bids\n'))
        actions = {}
        for f, bids in sorted(fbids.items()):
            # bids is a mapping from action method to list of actions
            # Consensus?
            if len(bids) == 1:  # all bids are the same kind of method
                m, l = list(bids.items())[0]
                if all(a == l[0] for a in l[1:]):  # len(bids) is > 1
                    repo.ui.note(_(b" %s: consensus for %s\n") % (f, m))
                    actions[f] = l[0]
                    continue
            # If keep is an option, just do it.
            if ACTION_KEEP in bids:
                repo.ui.note(_(b" %s: picking 'keep' action\n") % f)
                actions[f] = bids[ACTION_KEEP][0]
                continue
            # If there are gets and they all agree [how could they not?], do it.
            if ACTION_GET in bids:
                ga0 = bids[ACTION_GET][0]
                if all(a == ga0 for a in bids[ACTION_GET][1:]):
                    repo.ui.note(_(b" %s: picking 'get' action\n") % f)
                    actions[f] = ga0
                    continue
            # TODO: Consider other simple actions such as mode changes
            # Handle inefficient democrazy.
            repo.ui.note(_(b' %s: multiple bids for merge action:\n') % f)
            for m, l in sorted(bids.items()):
                for _f, args, msg in l:
                    repo.ui.note(b'  %s -> %s\n' % (msg, m))
            # Pick random action. TODO: Instead, prompt user when resolving
            m, l = list(bids.items())[0]
            repo.ui.warn(
                _(b' %s: ambiguous merge - picked %s action\n') % (f, m)
            )
            actions[f] = l[0]
            continue
        repo.ui.note(_(b'end of auction\n\n'))

    if wctx.rev() is None:
        fractions = _forgetremoved(wctx, mctx, branchmerge)
        actions.update(fractions)

    prunedactions = sparse.filterupdatesactions(
        repo, wctx, mctx, branchmerge, actions
    )
    _resolvetrivial(repo, wctx, mctx, ancestors[0], actions)

    return prunedactions, diverge, renamedelete


def _getcwd():
    try:
        return encoding.getcwd()
    except OSError as err:
        if err.errno == errno.ENOENT:
            return None
        raise


def batchremove(repo, wctx, actions):
    """apply removes to the working directory

    yields tuples for progress updates
    """
    verbose = repo.ui.verbose
    cwd = _getcwd()
    i = 0
    for f, args, msg in actions:
        repo.ui.debug(b" %s: %s -> r\n" % (f, msg))
        if verbose:
            repo.ui.note(_(b"removing %s\n") % f)
        wctx[f].audit()
        try:
            wctx[f].remove(ignoremissing=True)
        except OSError as inst:
            repo.ui.warn(
                _(b"update failed to remove %s: %s!\n") % (f, inst.strerror)
            )
        if i == 100:
            yield i, f
            i = 0
        i += 1
    if i > 0:
        yield i, f

    if cwd and not _getcwd():
        # cwd was removed in the course of removing files; print a helpful
        # warning.
        repo.ui.warn(
            _(
                b"current directory was removed\n"
                b"(consider changing to repo root: %s)\n"
            )
            % repo.root
        )


def batchget(repo, mctx, wctx, wantfiledata, actions):
    """apply gets to the working directory

    mctx is the context to get from

    Yields arbitrarily many (False, tuple) for progress updates, followed by
    exactly one (True, filedata). When wantfiledata is false, filedata is an
    empty dict. When wantfiledata is true, filedata[f] is a triple (mode, size,
    mtime) of the file f written for each action.
    """
    filedata = {}
    verbose = repo.ui.verbose
    fctx = mctx.filectx
    ui = repo.ui
    i = 0
    with repo.wvfs.backgroundclosing(ui, expectedcount=len(actions)):
        for f, (flags, backup), msg in actions:
            repo.ui.debug(b" %s: %s -> g\n" % (f, msg))
            if verbose:
                repo.ui.note(_(b"getting %s\n") % f)

            if backup:
                # If a file or directory exists with the same name, back that
                # up. Otherwise, look to see if there is a file that conflicts
                # with a directory this file is in, and if so, back that up.
                conflicting = f
                if not repo.wvfs.lexists(f):
                    for p in pathutil.finddirs(f):
                        if repo.wvfs.isfileorlink(p):
                            conflicting = p
                            break
                if repo.wvfs.lexists(conflicting):
                    orig = scmutil.backuppath(ui, repo, conflicting)
                    util.rename(repo.wjoin(conflicting), orig)
            wfctx = wctx[f]
            wfctx.clearunknown()
            atomictemp = ui.configbool(b"experimental", b"update.atomic-file")
            size = wfctx.write(
                fctx(f).data(),
                flags,
                backgroundclose=True,
                atomictemp=atomictemp,
            )
            if wantfiledata:
                s = wfctx.lstat()
                mode = s.st_mode
                mtime = s[stat.ST_MTIME]
                filedata[f] = (mode, size, mtime)  # for dirstate.normal
            if i == 100:
                yield False, (i, f)
                i = 0
            i += 1
    if i > 0:
        yield False, (i, f)
    yield True, filedata


def _prefetchfiles(repo, ctx, actions):
    """Invoke ``scmutil.prefetchfiles()`` for the files relevant to the dict
    of merge actions.  ``ctx`` is the context being merged in."""

    # Skipping 'a', 'am', 'f', 'r', 'dm', 'e', 'k', 'p' and 'pr', because they
    # don't touch the context to be merged in. 'cd' is skipped, because
    # changed/deleted never resolves to something from the remote side.
    oplist = [
        actions[a]
        for a in (
            ACTION_GET,
            ACTION_DELETED_CHANGED,
            ACTION_LOCAL_DIR_RENAME_GET,
            ACTION_MERGE,
        )
    ]
    prefetch = scmutil.prefetchfiles
    matchfiles = scmutil.matchfiles
    prefetch(
        repo,
        [ctx.rev()],
        matchfiles(repo, [f for sublist in oplist for f, args, msg in sublist]),
    )


@attr.s(frozen=True)
class updateresult(object):
    updatedcount = attr.ib()
    mergedcount = attr.ib()
    removedcount = attr.ib()
    unresolvedcount = attr.ib()

    def isempty(self):
        return not (
            self.updatedcount
            or self.mergedcount
            or self.removedcount
            or self.unresolvedcount
        )


def emptyactions():
    """create an actions dict, to be populated and passed to applyupdates()"""
    return dict(
        (m, [])
        for m in (
            ACTION_ADD,
            ACTION_ADD_MODIFIED,
            ACTION_FORGET,
            ACTION_GET,
            ACTION_CHANGED_DELETED,
            ACTION_DELETED_CHANGED,
            ACTION_REMOVE,
            ACTION_DIR_RENAME_MOVE_LOCAL,
            ACTION_LOCAL_DIR_RENAME_GET,
            ACTION_MERGE,
            ACTION_EXEC,
            ACTION_KEEP,
            ACTION_PATH_CONFLICT,
            ACTION_PATH_CONFLICT_RESOLVE,
        )
    )


def applyupdates(
    repo, actions, wctx, mctx, overwrite, wantfiledata, labels=None
):
    """apply the merge action list to the working directory

    wctx is the working copy context
    mctx is the context to be merged into the working copy

    Return a tuple of (counts, filedata), where counts is a tuple
    (updated, merged, removed, unresolved) that describes how many
    files were affected by the update, and filedata is as described in
    batchget.
    """

    _prefetchfiles(repo, mctx, actions)

    updated, merged, removed = 0, 0, 0
    ms = mergestate.clean(repo, wctx.p1().node(), mctx.node(), labels)
    moves = []
    for m, l in actions.items():
        l.sort()

    # 'cd' and 'dc' actions are treated like other merge conflicts
    mergeactions = sorted(actions[ACTION_CHANGED_DELETED])
    mergeactions.extend(sorted(actions[ACTION_DELETED_CHANGED]))
    mergeactions.extend(actions[ACTION_MERGE])
    for f, args, msg in mergeactions:
        f1, f2, fa, move, anc = args
        if f == b'.hgsubstate':  # merged internally
            continue
        if f1 is None:
            fcl = filemerge.absentfilectx(wctx, fa)
        else:
            repo.ui.debug(b" preserving %s for resolve of %s\n" % (f1, f))
            fcl = wctx[f1]
        if f2 is None:
            fco = filemerge.absentfilectx(mctx, fa)
        else:
            fco = mctx[f2]
        actx = repo[anc]
        if fa in actx:
            fca = actx[fa]
        else:
            # TODO: move to absentfilectx
            fca = repo.filectx(f1, fileid=nullrev)
        ms.add(fcl, fco, fca, f)
        if f1 != f and move:
            moves.append(f1)

    # remove renamed files after safely stored
    for f in moves:
        if wctx[f].lexists():
            repo.ui.debug(b"removing %s\n" % f)
            wctx[f].audit()
            wctx[f].remove()

    numupdates = sum(len(l) for m, l in actions.items() if m != ACTION_KEEP)
    progress = repo.ui.makeprogress(
        _(b'updating'), unit=_(b'files'), total=numupdates
    )

    if [a for a in actions[ACTION_REMOVE] if a[0] == b'.hgsubstate']:
        subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels)

    # record path conflicts
    for f, args, msg in actions[ACTION_PATH_CONFLICT]:
        f1, fo = args
        s = repo.ui.status
        s(
            _(
                b"%s: path conflict - a file or link has the same name as a "
                b"directory\n"
            )
            % f
        )
        if fo == b'l':
            s(_(b"the local file has been renamed to %s\n") % f1)
        else:
            s(_(b"the remote file has been renamed to %s\n") % f1)
        s(_(b"resolve manually then use 'hg resolve --mark %s'\n") % f)
        ms.addpath(f, f1, fo)
        progress.increment(item=f)

    # When merging in-memory, we can't support worker processes, so set the
    # per-item cost at 0 in that case.
    cost = 0 if wctx.isinmemory() else 0.001

    # remove in parallel (must come before resolving path conflicts and getting)
    prog = worker.worker(
        repo.ui, cost, batchremove, (repo, wctx), actions[ACTION_REMOVE]
    )
    for i, item in prog:
        progress.increment(step=i, item=item)
    removed = len(actions[ACTION_REMOVE])

    # resolve path conflicts (must come before getting)
    for f, args, msg in actions[ACTION_PATH_CONFLICT_RESOLVE]:
        repo.ui.debug(b" %s: %s -> pr\n" % (f, msg))
        (f0,) = args
        if wctx[f0].lexists():
            repo.ui.note(_(b"moving %s to %s\n") % (f0, f))
            wctx[f].audit()
            wctx[f].write(wctx.filectx(f0).data(), wctx.filectx(f0).flags())
            wctx[f0].remove()
        progress.increment(item=f)

    # get in parallel.
    threadsafe = repo.ui.configbool(
        b'experimental', b'worker.wdir-get-thread-safe'
    )
    prog = worker.worker(
        repo.ui,
1920 1920 cost,
1921 1921 batchget,
1922 1922 (repo, mctx, wctx, wantfiledata),
1923 1923 actions[ACTION_GET],
1924 1924 threadsafe=threadsafe,
1925 1925 hasretval=True,
1926 1926 )
1927 1927 getfiledata = {}
1928 1928 for final, res in prog:
1929 1929 if final:
1930 1930 getfiledata = res
1931 1931 else:
1932 1932 i, item = res
1933 1933 progress.increment(step=i, item=item)
1934 1934 updated = len(actions[ACTION_GET])
1935 1935
1936 1936 if [a for a in actions[ACTION_GET] if a[0] == b'.hgsubstate']:
1937 1937 subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels)
1938 1938
1939 1939 # forget (manifest only, just log it) (must come first)
1940 1940 for f, args, msg in actions[ACTION_FORGET]:
1941 1941 repo.ui.debug(b" %s: %s -> f\n" % (f, msg))
1942 1942 progress.increment(item=f)
1943 1943
1944 1944 # re-add (manifest only, just log it)
1945 1945 for f, args, msg in actions[ACTION_ADD]:
1946 1946 repo.ui.debug(b" %s: %s -> a\n" % (f, msg))
1947 1947 progress.increment(item=f)
1948 1948
1949 1949 # re-add/mark as modified (manifest only, just log it)
1950 1950 for f, args, msg in actions[ACTION_ADD_MODIFIED]:
1951 1951 repo.ui.debug(b" %s: %s -> am\n" % (f, msg))
1952 1952 progress.increment(item=f)
1953 1953
1954 1954 # keep (noop, just log it)
1955 1955 for f, args, msg in actions[ACTION_KEEP]:
1956 1956 repo.ui.debug(b" %s: %s -> k\n" % (f, msg))
1957 1957 # no progress
1958 1958
1959 1959 # directory rename, move local
1960 1960 for f, args, msg in actions[ACTION_DIR_RENAME_MOVE_LOCAL]:
1961 1961 repo.ui.debug(b" %s: %s -> dm\n" % (f, msg))
1962 1962 progress.increment(item=f)
1963 1963 f0, flags = args
1964 1964 repo.ui.note(_(b"moving %s to %s\n") % (f0, f))
1965 1965 wctx[f].audit()
1966 1966 wctx[f].write(wctx.filectx(f0).data(), flags)
1967 1967 wctx[f0].remove()
1968 1968 updated += 1
1969 1969
1970 1970 # local directory rename, get
1971 1971 for f, args, msg in actions[ACTION_LOCAL_DIR_RENAME_GET]:
1972 1972 repo.ui.debug(b" %s: %s -> dg\n" % (f, msg))
1973 1973 progress.increment(item=f)
1974 1974 f0, flags = args
1975 1975 repo.ui.note(_(b"getting %s to %s\n") % (f0, f))
1976 1976 wctx[f].write(mctx.filectx(f0).data(), flags)
1977 1977 updated += 1
1978 1978
1979 1979 # exec
1980 1980 for f, args, msg in actions[ACTION_EXEC]:
1981 1981 repo.ui.debug(b" %s: %s -> e\n" % (f, msg))
1982 1982 progress.increment(item=f)
1983 1983 (flags,) = args
1984 1984 wctx[f].audit()
1985 1985 wctx[f].setflags(b'l' in flags, b'x' in flags)
1986 1986 updated += 1
1987 1987
1988 1988 # the ordering is important here -- ms.mergedriver will raise if the merge
1989 1989 # driver has changed, and we want to be able to bypass it when overwrite is
1990 1990 # True
1991 1991 usemergedriver = not overwrite and mergeactions and ms.mergedriver
1992 1992
1993 1993 if usemergedriver:
1994 1994 if wctx.isinmemory():
1995 1995 raise error.InMemoryMergeConflictsError(
1996 1996 b"in-memory merge does not support mergedriver"
1997 1997 )
1998 1998 ms.commit()
1999 1999 proceed = driverpreprocess(repo, ms, wctx, labels=labels)
2000 2000 # the driver might leave some files unresolved
2001 2001 unresolvedf = set(ms.unresolved())
2002 2002 if not proceed:
2003 2003 # XXX setting unresolved to at least 1 is a hack to make sure we
2004 2004 # error out
2005 2005 return updateresult(
2006 2006 updated, merged, removed, max(len(unresolvedf), 1)
2007 2007 )
2008 2008 newactions = []
2009 2009 for f, args, msg in mergeactions:
2010 2010 if f in unresolvedf:
2011 2011 newactions.append((f, args, msg))
2012 2012 mergeactions = newactions
2013 2013
2014 2014 try:
2015 2015 # premerge
2016 2016 tocomplete = []
2017 2017 for f, args, msg in mergeactions:
2018 2018 repo.ui.debug(b" %s: %s -> m (premerge)\n" % (f, msg))
2019 2019 progress.increment(item=f)
2020 2020 if f == b'.hgsubstate': # subrepo states need updating
2021 2021 subrepoutil.submerge(
2022 2022 repo, wctx, mctx, wctx.ancestor(mctx), overwrite, labels
2023 2023 )
2024 2024 continue
2025 2025 wctx[f].audit()
2026 2026 complete, r = ms.preresolve(f, wctx)
2027 2027 if not complete:
2028 2028 numupdates += 1
2029 2029 tocomplete.append((f, args, msg))
2030 2030
2031 2031 # merge
2032 2032 for f, args, msg in tocomplete:
2033 2033 repo.ui.debug(b" %s: %s -> m (merge)\n" % (f, msg))
2034 2034 progress.increment(item=f, total=numupdates)
2035 2035 ms.resolve(f, wctx)
2036 2036
2037 2037 finally:
2038 2038 ms.commit()
2039 2039
2040 2040 unresolved = ms.unresolvedcount()
2041 2041
2042 2042 if (
2043 2043 usemergedriver
2044 2044 and not unresolved
2045 2045 and ms.mdstate() != MERGE_DRIVER_STATE_SUCCESS
2046 2046 ):
2047 2047 if not driverconclude(repo, ms, wctx, labels=labels):
2048 2048 # XXX setting unresolved to at least 1 is a hack to make sure we
2049 2049 # error out
2050 2050 unresolved = max(unresolved, 1)
2051 2051
2052 2052 ms.commit()
2053 2053
2054 2054 msupdated, msmerged, msremoved = ms.counts()
2055 2055 updated += msupdated
2056 2056 merged += msmerged
2057 2057 removed += msremoved
2058 2058
2059 2059 extraactions = ms.actions()
2060 2060 if extraactions:
2061 2061 mfiles = set(a[0] for a in actions[ACTION_MERGE])
2062 2062 for k, acts in pycompat.iteritems(extraactions):
2063 2063 actions[k].extend(acts)
2064 2064 if k == ACTION_GET and wantfiledata:
2065 2065 # no filedata until mergestate is updated to provide it
2066 2066 for a in acts:
2067 2067 getfiledata[a[0]] = None
2068 2068 # Remove these files from actions[ACTION_MERGE] as well. This is
2069 2069 # important because in recordupdates, files in actions[ACTION_MERGE]
2070 2070 # are processed after files in other actions, and the merge driver
2071 2071 # might add files to those actions via extraactions above. This can
2072 2072 # lead to a file being recorded twice, with poor results. This is
2073 2073 # especially problematic for actions[ACTION_REMOVE] (currently only
2074 2074 # possible with the merge driver in the initial merge process;
2075 2075 # interrupted merges don't go through this flow).
2076 2076 #
2077 2077 # The real fix here is to have indexes by both file and action so
2078 2078 # that when the action for a file is changed it is automatically
2079 2079 # reflected in the other action lists. But that involves a more
2080 2080 # complex data structure, so this will do for now.
2081 2081 #
2082 2082 # We don't need to do the same operation for 'dc' and 'cd' because
2083 2083 # those lists aren't consulted again.
2084 2084 mfiles.difference_update(a[0] for a in acts)
2085 2085
2086 2086 actions[ACTION_MERGE] = [
2087 2087 a for a in actions[ACTION_MERGE] if a[0] in mfiles
2088 2088 ]
2089 2089
2090 2090 progress.complete()
2091 2091 assert len(getfiledata) == (len(actions[ACTION_GET]) if wantfiledata else 0)
2092 2092 return updateresult(updated, merged, removed, unresolved), getfiledata
2093 2093
2094 2094
2095 2095 def recordupdates(repo, actions, branchmerge, getfiledata):
2096 2096 """record merge actions to the dirstate"""
2097 2097 # remove (must come first)
2098 2098 for f, args, msg in actions.get(ACTION_REMOVE, []):
2099 2099 if branchmerge:
2100 2100 repo.dirstate.remove(f)
2101 2101 else:
2102 2102 repo.dirstate.drop(f)
2103 2103
2104 2104 # forget (must come first)
2105 2105 for f, args, msg in actions.get(ACTION_FORGET, []):
2106 2106 repo.dirstate.drop(f)
2107 2107
2108 2108 # resolve path conflicts
2109 2109 for f, args, msg in actions.get(ACTION_PATH_CONFLICT_RESOLVE, []):
2110 2110 (f0,) = args
2111 2111 origf0 = repo.dirstate.copied(f0) or f0
2112 2112 repo.dirstate.add(f)
2113 2113 repo.dirstate.copy(origf0, f)
2114 2114 if f0 == origf0:
2115 2115 repo.dirstate.remove(f0)
2116 2116 else:
2117 2117 repo.dirstate.drop(f0)
2118 2118
2119 2119 # re-add
2120 2120 for f, args, msg in actions.get(ACTION_ADD, []):
2121 2121 repo.dirstate.add(f)
2122 2122
2123 2123 # re-add/mark as modified
2124 2124 for f, args, msg in actions.get(ACTION_ADD_MODIFIED, []):
2125 2125 if branchmerge:
2126 2126 repo.dirstate.normallookup(f)
2127 2127 else:
2128 2128 repo.dirstate.add(f)
2129 2129
2130 2130 # exec change
2131 2131 for f, args, msg in actions.get(ACTION_EXEC, []):
2132 2132 repo.dirstate.normallookup(f)
2133 2133
2134 2134 # keep
2135 2135 for f, args, msg in actions.get(ACTION_KEEP, []):
2136 2136 pass
2137 2137
2138 2138 # get
2139 2139 for f, args, msg in actions.get(ACTION_GET, []):
2140 2140 if branchmerge:
2141 2141 repo.dirstate.otherparent(f)
2142 2142 else:
2143 2143 parentfiledata = getfiledata[f] if getfiledata else None
2144 2144 repo.dirstate.normal(f, parentfiledata=parentfiledata)
2145 2145
2146 2146 # merge
2147 2147 for f, args, msg in actions.get(ACTION_MERGE, []):
2148 2148 f1, f2, fa, move, anc = args
2149 2149 if branchmerge:
2150 2150 # We've done a branch merge, mark this file as merged
2151 2151 # so that we properly record the merger later
2152 2152 repo.dirstate.merge(f)
2153 2153 if f1 != f2: # copy/rename
2154 2154 if move:
2155 2155 repo.dirstate.remove(f1)
2156 2156 if f1 != f:
2157 2157 repo.dirstate.copy(f1, f)
2158 2158 else:
2159 2159 repo.dirstate.copy(f2, f)
2160 2160 else:
2161 2161 # We've update-merged a locally modified file, so
2162 2162 # we set the dirstate to emulate a normal checkout
2163 2163 # of that file some time in the past. Thus our
2164 2164 # merge will appear as a normal local file
2165 2165 # modification.
2166 2166 if f2 == f: # file not locally copied/moved
2167 2167 repo.dirstate.normallookup(f)
2168 2168 if move:
2169 2169 repo.dirstate.drop(f1)
2170 2170
2171 2171 # directory rename, move local
2172 2172 for f, args, msg in actions.get(ACTION_DIR_RENAME_MOVE_LOCAL, []):
2173 2173 f0, flag = args
2174 2174 if branchmerge:
2175 2175 repo.dirstate.add(f)
2176 2176 repo.dirstate.remove(f0)
2177 2177 repo.dirstate.copy(f0, f)
2178 2178 else:
2179 2179 repo.dirstate.normal(f)
2180 2180 repo.dirstate.drop(f0)
2181 2181
2182 2182 # directory rename, get
2183 2183 for f, args, msg in actions.get(ACTION_LOCAL_DIR_RENAME_GET, []):
2184 2184 f0, flag = args
2185 2185 if branchmerge:
2186 2186 repo.dirstate.add(f)
2187 2187 repo.dirstate.copy(f0, f)
2188 2188 else:
2189 2189 repo.dirstate.normal(f)
2190 2190
2191 2191
2192 2192 UPDATECHECK_ABORT = b'abort' # handled at higher layers
2193 2193 UPDATECHECK_NONE = b'none'
2194 2194 UPDATECHECK_LINEAR = b'linear'
2195 2195 UPDATECHECK_NO_CONFLICT = b'noconflict'
2196 2196
2197 2197
2198 2198 def update(
2199 2199 repo,
2200 2200 node,
2201 2201 branchmerge,
2202 2202 force,
2203 2203 ancestor=None,
2204 2204 mergeancestor=False,
2205 2205 labels=None,
2206 2206 matcher=None,
2207 2207 mergeforce=False,
2208 2208 updatecheck=None,
2209 2209 wc=None,
2210 2210 ):
2211 2211 """
2212 2212 Perform a merge between the working directory and the given node
2213 2213
2214 2214 node = the node to update to
2215 2215 branchmerge = whether to merge between branches
2216 2216 force = whether to force branch merging or file overwriting
2217 2217 matcher = a matcher to filter file lists (dirstate not updated)
2218 2218 mergeancestor = whether it is merging with an ancestor. If true,
2219 2219 we should accept the incoming changes for any prompts that occur.
2220 2220 If false, merging with an ancestor (fast-forward) is only allowed
2221 2221 between different named branches. This flag is used by the rebase extension
2222 2222 as a temporary fix and should be avoided in general.
2223 2223 labels = labels to use for base, local and other
2224 2224 mergeforce = whether the merge was run with 'merge --force' (deprecated): if
2225 2225 this is True, then 'force' should be True as well.
2226 2226
2227 2227 The table below shows all the behaviors of the update command given the
2228 2228 -c/--check and -C/--clean or no options, whether the working directory is
2229 2229 dirty, whether a revision is specified, and the relationship of the parent
2230 2230 rev to the target rev (linear or not). Match from top first. The -n
2231 2231 option doesn't exist on the command line, but represents the
2232 2232 experimental.updatecheck=noconflict option.
2233 2233
2234 2234 This logic is tested by test-update-branches.t.
2235 2235
2236 2236 -c -C -n -m dirty rev linear | result
2237 2237 y y * * * * * | (1)
2238 2238 y * y * * * * | (1)
2239 2239 y * * y * * * | (1)
2240 2240 * y y * * * * | (1)
2241 2241 * y * y * * * | (1)
2242 2242 * * y y * * * | (1)
2243 2243 * * * * * n n | x
2244 2244 * * * * n * * | ok
2245 2245 n n n n y * y | merge
2246 2246 n n n n y y n | (2)
2247 2247 n n n y y * * | merge
2248 2248 n n y n y * * | merge if no conflict
2249 2249 n y n n y * * | discard
2250 2250 y n n n y * * | (3)
2251 2251
2252 2252 x = can't happen
2253 2253 * = don't-care
2254 2254 1 = incompatible options (checked in commands.py)
2255 2255 2 = abort: uncommitted changes (commit or update --clean to discard changes)
2256 2256 3 = abort: uncommitted changes (checked in commands.py)
2257 2257
2258 2258 The merge is performed inside ``wc``, a workingctx-like object. It defaults
2259 2259 to repo[None] if None is passed.
2260 2260
2261 2261 Return the same tuple as applyupdates().
2262 2262 """
2263 2263 # Avoid cycle.
2264 2264 from . import sparse
2265 2265
2266 2266 # This function used to find the default destination if node was None, but
2267 2267 # that's now in destutil.py.
2268 2268 assert node is not None
2269 2269 if not branchmerge and not force:
2270 2270 # TODO: remove the default once all callers that pass branchmerge=False
2271 2271 # and force=False pass a value for updatecheck. We may want to allow
2272 2272 # updatecheck='abort' to better support some of these callers.
2273 2273 if updatecheck is None:
2274 2274 updatecheck = UPDATECHECK_LINEAR
2275 2275 if updatecheck not in (
2276 2276 UPDATECHECK_NONE,
2277 2277 UPDATECHECK_LINEAR,
2278 2278 UPDATECHECK_NO_CONFLICT,
2279 2279 ):
2280 2280 raise ValueError(
2281 2281 r'Invalid updatecheck %r (can accept %r)'
2282 2282 % (
2283 2283 updatecheck,
2284 2284 (
2285 2285 UPDATECHECK_NONE,
2286 2286 UPDATECHECK_LINEAR,
2287 2287 UPDATECHECK_NO_CONFLICT,
2288 2288 ),
2289 2289 )
2290 2290 )
2291 2291 # If we're doing a partial update, we need to skip updating
2292 2292 # the dirstate, so make a note of any partial-ness to the
2293 2293 # update here.
2294 2294 if matcher is None or matcher.always():
2295 2295 partial = False
2296 2296 else:
2297 2297 partial = True
2298 2298 with repo.wlock():
2299 2299 if wc is None:
2300 2300 wc = repo[None]
2301 2301 pl = wc.parents()
2302 2302 p1 = pl[0]
2303 2303 p2 = repo[node]
2304 2304 if ancestor is not None:
2305 2305 pas = [repo[ancestor]]
2306 2306 else:
2307 2307 if repo.ui.configlist(b'merge', b'preferancestor') == [b'*']:
2308 2308 cahs = repo.changelog.commonancestorsheads(p1.node(), p2.node())
2309 2309 pas = [repo[anc] for anc in (sorted(cahs) or [nullid])]
2310 2310 else:
2311 2311 pas = [p1.ancestor(p2, warn=branchmerge)]
2312 2312
2313 2313 fp1, fp2, xp1, xp2 = p1.node(), p2.node(), bytes(p1), bytes(p2)
2314 2314
2315 2315 overwrite = force and not branchmerge
2316 2316 ### check phase
2317 2317 if not overwrite:
2318 2318 if len(pl) > 1:
2319 2319 raise error.Abort(_(b"outstanding uncommitted merge"))
2320 2320 ms = mergestate.read(repo)
2321 2321 if list(ms.unresolved()):
2322 2322 raise error.Abort(
2323 2323 _(b"outstanding merge conflicts"),
2324 2324 hint=_(b"use 'hg resolve' to resolve"),
2325 2325 )
2326 2326 if branchmerge:
2327 2327 if pas == [p2]:
2328 2328 raise error.Abort(
2329 2329 _(
2330 2330 b"merging with a working directory ancestor"
2331 2331 b" has no effect"
2332 2332 )
2333 2333 )
2334 2334 elif pas == [p1]:
2335 2335 if not mergeancestor and wc.branch() == p2.branch():
2336 2336 raise error.Abort(
2337 2337 _(b"nothing to merge"),
2338 2338 hint=_(b"use 'hg update' or check 'hg heads'"),
2339 2339 )
2340 2340 if not force and (wc.files() or wc.deleted()):
2341 2341 raise error.Abort(
2342 2342 _(b"uncommitted changes"),
2343 2343 hint=_(b"use 'hg status' to list changes"),
2344 2344 )
2345 2345 if not wc.isinmemory():
2346 2346 for s in sorted(wc.substate):
2347 2347 wc.sub(s).bailifchanged()
2348 2348
2349 2349 elif not overwrite:
2350 2350 if p1 == p2: # no-op update
2351 2351 # call the hooks and exit early
2352 2352 repo.hook(b'preupdate', throw=True, parent1=xp2, parent2=b'')
2353 2353 repo.hook(b'update', parent1=xp2, parent2=b'', error=0)
2354 2354 return updateresult(0, 0, 0, 0)
2355 2355
2356 2356 if updatecheck == UPDATECHECK_LINEAR and pas not in (
2357 2357 [p1],
2358 2358 [p2],
2359 2359 ): # nonlinear
2360 2360 dirty = wc.dirty(missing=True)
2361 2361 if dirty:
2362 2362 # Branching is a bit strange to ensure we do the minimal
2363 2363 # amount of call to obsutil.foreground.
2364 2364 foreground = obsutil.foreground(repo, [p1.node()])
2365 2365 # note: the <node> variable contains a random identifier
2366 2366 if repo[node].node() in foreground:
2367 2367 pass # allow updating to successors
2368 2368 else:
2369 2369 msg = _(b"uncommitted changes")
2370 2370 hint = _(b"commit or update --clean to discard changes")
2371 2371 raise error.UpdateAbort(msg, hint=hint)
2372 2372 else:
2373 2373 # Allow jumping branches if clean and specific rev given
2374 2374 pass
2375 2375
2376 2376 if overwrite:
2377 2377 pas = [wc]
2378 2378 elif not branchmerge:
2379 2379 pas = [p1]
2380 2380
2381 2381 # deprecated config: merge.followcopies
2382 2382 followcopies = repo.ui.configbool(b'merge', b'followcopies')
2383 2383 if overwrite:
2384 2384 followcopies = False
2385 2385 elif not pas[0]:
2386 2386 followcopies = False
2387 2387 if not branchmerge and not wc.dirty(missing=True):
2388 2388 followcopies = False
2389 2389
2390 2390 ### calculate phase
2391 2391 actionbyfile, diverge, renamedelete = calculateupdates(
2392 2392 repo,
2393 2393 wc,
2394 2394 p2,
2395 2395 pas,
2396 2396 branchmerge,
2397 2397 force,
2398 2398 mergeancestor,
2399 2399 followcopies,
2400 2400 matcher=matcher,
2401 2401 mergeforce=mergeforce,
2402 2402 )
2403 2403
2404 2404 if updatecheck == UPDATECHECK_NO_CONFLICT:
2405 2405 for f, (m, args, msg) in pycompat.iteritems(actionbyfile):
2406 2406 if m not in (
2407 2407 ACTION_GET,
2408 2408 ACTION_KEEP,
2409 2409 ACTION_EXEC,
2410 2410 ACTION_REMOVE,
2411 2411 ACTION_PATH_CONFLICT_RESOLVE,
2412 2412 ):
2413 2413 msg = _(b"conflicting changes")
2414 2414 hint = _(b"commit or update --clean to discard changes")
2415 2415 raise error.Abort(msg, hint=hint)
2416 2416
2417 2417 # Prompt and create actions. Most of this is in the resolve phase
2418 2418 # already, but we can't handle .hgsubstate in filemerge or
2419 2419 # subrepoutil.submerge yet so we have to keep prompting for it.
2420 2420 if b'.hgsubstate' in actionbyfile:
2421 2421 f = b'.hgsubstate'
2422 2422 m, args, msg = actionbyfile[f]
2423 2423 prompts = filemerge.partextras(labels)
2424 2424 prompts[b'f'] = f
2425 2425 if m == ACTION_CHANGED_DELETED:
2426 2426 if repo.ui.promptchoice(
2427 2427 _(
2428 2428 b"local%(l)s changed %(f)s which other%(o)s deleted\n"
2429 2429 b"use (c)hanged version or (d)elete?"
2430 2430 b"$$ &Changed $$ &Delete"
2431 2431 )
2432 2432 % prompts,
2433 2433 0,
2434 2434 ):
2435 2435 actionbyfile[f] = (ACTION_REMOVE, None, b'prompt delete')
2436 2436 elif f in p1:
2437 2437 actionbyfile[f] = (
2438 2438 ACTION_ADD_MODIFIED,
2439 2439 None,
2440 2440 b'prompt keep',
2441 2441 )
2442 2442 else:
2443 2443 actionbyfile[f] = (ACTION_ADD, None, b'prompt keep')
2444 2444 elif m == ACTION_DELETED_CHANGED:
2445 2445 f1, f2, fa, move, anc = args
2446 2446 flags = p2[f2].flags()
2447 2447 if (
2448 2448 repo.ui.promptchoice(
2449 2449 _(
2450 2450 b"other%(o)s changed %(f)s which local%(l)s deleted\n"
2451 2451 b"use (c)hanged version or leave (d)eleted?"
2452 2452 b"$$ &Changed $$ &Deleted"
2453 2453 )
2454 2454 % prompts,
2455 2455 0,
2456 2456 )
2457 2457 == 0
2458 2458 ):
2459 2459 actionbyfile[f] = (
2460 2460 ACTION_GET,
2461 2461 (flags, False),
2462 2462 b'prompt recreating',
2463 2463 )
2464 2464 else:
2465 2465 del actionbyfile[f]
2466 2466
2467 2467 # Convert to dictionary-of-lists format
2468 2468 actions = emptyactions()
2469 2469 for f, (m, args, msg) in pycompat.iteritems(actionbyfile):
2470 2470 if m not in actions:
2471 2471 actions[m] = []
2472 2472 actions[m].append((f, args, msg))
2473 2473
2474 2474 if not util.fscasesensitive(repo.path):
2475 2475 # check collision between files only in p2 for clean update
2476 2476 if not branchmerge and (
2477 2477 force or not wc.dirty(missing=True, branch=False)
2478 2478 ):
2479 2479 _checkcollision(repo, p2.manifest(), None)
2480 2480 else:
2481 2481 _checkcollision(repo, wc.manifest(), actions)
2482 2482
2483 2483 # divergent renames
2484 2484 for f, fl in sorted(pycompat.iteritems(diverge)):
2485 2485 repo.ui.warn(
2486 2486 _(
2487 2487 b"note: possible conflict - %s was renamed "
2488 2488 b"multiple times to:\n"
2489 2489 )
2490 2490 % f
2491 2491 )
2492 2492 for nf in sorted(fl):
2493 2493 repo.ui.warn(b" %s\n" % nf)
2494 2494
2495 2495 # rename and delete
2496 2496 for f, fl in sorted(pycompat.iteritems(renamedelete)):
2497 2497 repo.ui.warn(
2498 2498 _(
2499 2499 b"note: possible conflict - %s was deleted "
2500 2500 b"and renamed to:\n"
2501 2501 )
2502 2502 % f
2503 2503 )
2504 2504 for nf in sorted(fl):
2505 2505 repo.ui.warn(b" %s\n" % nf)
2506 2506
2507 2507 ### apply phase
2508 2508 if not branchmerge: # just jump to the new rev
2509 2509 fp1, fp2, xp1, xp2 = fp2, nullid, xp2, b''
2510 2510 if not partial and not wc.isinmemory():
2511 2511 repo.hook(b'preupdate', throw=True, parent1=xp1, parent2=xp2)
2512 2512 # note that we're in the middle of an update
2513 2513 repo.vfs.write(b'updatestate', p2.hex())
2514 2514
2515 2515 # Advertise fsmonitor when its presence could be useful.
2516 2516 #
2517 2517 # We only advertise when performing an update from an empty working
2518 2518 # directory. This typically only occurs during initial clone.
2519 2519 #
2520 2520 # We give users a mechanism to disable the warning in case it is
2521 2521 # annoying.
2522 2522 #
2523 2523 # We only allow on Linux and MacOS because that's where fsmonitor is
2524 2524 # considered stable.
2525 2525 fsmonitorwarning = repo.ui.configbool(b'fsmonitor', b'warn_when_unused')
2526 2526 fsmonitorthreshold = repo.ui.configint(
2527 2527 b'fsmonitor', b'warn_update_file_count'
2528 2528 )
2529 2529 try:
2530 2530 # avoid cycle: extensions -> cmdutil -> merge
2531 2531 from . import extensions
2532 2532
2533 2533 extensions.find(b'fsmonitor')
2534 2534 fsmonitorenabled = repo.ui.config(b'fsmonitor', b'mode') != b'off'
2535 2535 # We intentionally don't look at whether fsmonitor has disabled
2536 2536 # itself because a) fsmonitor may have already printed a warning
2537 2537 # b) we only care about the config state here.
2538 2538 except KeyError:
2539 2539 fsmonitorenabled = False
2540 2540
2541 2541 if (
2542 2542 fsmonitorwarning
2543 2543 and not fsmonitorenabled
2544 2544 and p1.node() == nullid
2545 2545 and len(actions[ACTION_GET]) >= fsmonitorthreshold
2546 2546 and pycompat.sysplatform.startswith((b'linux', b'darwin'))
2547 2547 ):
2548 2548 repo.ui.warn(
2549 2549 _(
2550 2550 b'(warning: large working directory being used without '
2551 2551 b'fsmonitor enabled; enable fsmonitor to improve performance; '
2552 2552 b'see "hg help -e fsmonitor")\n'
2553 2553 )
2554 2554 )
2555 2555
2556 2556 updatedirstate = not partial and not wc.isinmemory()
2557 2557 wantfiledata = updatedirstate and not branchmerge
2558 2558 stats, getfiledata = applyupdates(
2559 2559 repo, actions, wc, p2, overwrite, wantfiledata, labels=labels
2560 2560 )
2561 2561
2562 2562 if updatedirstate:
2563 2563 with repo.dirstate.parentchange():
2564 2564 repo.setparents(fp1, fp2)
2565 2565 recordupdates(repo, actions, branchmerge, getfiledata)
2566 2566 # update completed, clear state
2567 2567 util.unlink(repo.vfs.join(b'updatestate'))
2568 2568
2569 2569 if not branchmerge:
2570 2570 repo.dirstate.setbranch(p2.branch())
2571 2571
2572 2572 # If we're updating to a location, clean up any stale temporary includes
2573 2573 # (ex: this happens during hg rebase --abort).
2574 2574 if not branchmerge:
2575 2575 sparse.prunetemporaryincludes(repo)
2576 2576
2577 2577 if not partial:
2578 2578 repo.hook(
2579 2579 b'update', parent1=xp1, parent2=xp2, error=stats.unresolvedcount
2580 2580 )
2581 2581 return stats
2582 2582
2583 2583
2584 2584 def graft(
2585 2585 repo, ctx, base, labels=None, keepparent=False, keepconflictparent=False
2586 2586 ):
2587 2587 """Do a graft-like merge.
2588 2588
2589 2589 This is a merge where the merge ancestor is chosen such that one
2590 2590 or more changesets are grafted onto the current changeset. In
2591 2591 addition to the merge, this fixes up the dirstate to include only
2592 2592 a single parent (if keepparent is False) and tries to duplicate any
2593 2593 renames/copies appropriately.
2594 2594
2595 2595 ctx - changeset to rebase
2596 2596 base - merge base, usually ctx.p1()
2597 2597 labels - merge labels eg ['local', 'graft']
2598 2598 keepparent - keep second parent if any
2599 2599 keepconflictparent - if unresolved, keep parent used for the merge
2600 2600
2601 2601 """
2602 2602 # If we're grafting a descendant onto an ancestor, be sure to pass
2603 2603 # mergeancestor=True to update. This does two things: 1) allows the merge if
2604 2604 # the destination is the same as the parent of the ctx (so we can use graft
2605 2605 # to copy commits), and 2) informs update that the incoming changes are
2606 2606 # newer than the destination so it doesn't prompt about "remote changed foo
2607 2607 # which local deleted".
2608 2608 wctx = repo[None]
2609 2609 pctx = wctx.p1()
2610 2610 mergeancestor = repo.changelog.isancestor(pctx.node(), ctx.node())
2611 2611
2612 2612 stats = update(
2613 2613 repo,
2614 2614 ctx.node(),
2615 2615 True,
2616 2616 True,
2617 2617 base.node(),
2618 2618 mergeancestor=mergeancestor,
2619 2619 labels=labels,
2620 2620 )
2621 2621
2622 2622 if keepconflictparent and stats.unresolvedcount:
2623 2623 pother = ctx.node()
2624 2624 else:
2625 2625 pother = nullid
2626 2626 parents = ctx.parents()
2627 2627 if keepparent and len(parents) == 2 and base in parents:
2628 2628 parents.remove(base)
2629 2629 pother = parents[0].node()
2630 2630 # Never set both parents equal to each other
2631 2631 if pother == pctx.node():
2632 2632 pother = nullid
2633 2633
2634 2634 with repo.dirstate.parentchange():
2635 2635 repo.setparents(pctx.node(), pother)
2636 2636 repo.dirstate.write(repo.currenttransaction())
2637 2637 # fix up dirstate for copies and renames
2638 copies.graftcopies(repo, wctx, ctx, base)
2638 copies.graftcopies(wctx, ctx, base)
2639 2639 return stats
2640 2640
2641 2641
2642 2642 def purge(
2643 2643 repo,
2644 2644 matcher,
2645 2645 ignored=False,
2646 2646 removeemptydirs=True,
2647 2647 removefiles=True,
2648 2648 abortonerror=False,
2649 2649 noop=False,
2650 2650 ):
2651 2651 """Purge the working directory of untracked files.
2652 2652
2653 2653 ``matcher`` is a matcher configured to scan the working directory -
2654 2654 potentially a subset.
2655 2655
2656 2656 ``ignored`` controls whether ignored files should also be purged.
2657 2657
2658 2658 ``removeemptydirs`` controls whether empty directories should be removed.
2659 2659
2660 2660 ``removefiles`` controls whether files are removed.
2661 2661
2662 2662 ``abortonerror`` causes an exception to be raised if an error occurs
2663 2663 deleting a file or directory.
2664 2664
2665 2665 ``noop`` controls whether files are actually removed. If True, paths are
2666 2666 only reported, not removed.
2667 2667
2668 2668 Returns an iterable of relative paths in the working directory that were
2669 2669 or would be removed.
2670 2670 """
2671 2671
2672 2672 def remove(removefn, path):
2673 2673 try:
2674 2674 removefn(path)
2675 2675 except OSError:
2676 2676 m = _(b'%s cannot be removed') % path
2677 2677 if abortonerror:
2678 2678 raise error.Abort(m)
2679 2679 else:
2680 2680 repo.ui.warn(_(b'warning: %s\n') % m)
2681 2681
2682 2682 # There's no API to copy a matcher. So mutate the passed matcher and
2683 2683 # restore it when we're done.
2684 2684 oldtraversedir = matcher.traversedir
2685 2685
2686 2686 res = []
2687 2687
2688 2688 try:
2689 2689 if removeemptydirs:
2690 2690 directories = []
2691 2691 matcher.traversedir = directories.append
2692 2692
2693 2693 status = repo.status(match=matcher, ignored=ignored, unknown=True)
2694 2694
2695 2695 if removefiles:
2696 2696 for f in sorted(status.unknown + status.ignored):
2697 2697 if not noop:
2698 2698 repo.ui.note(_(b'removing file %s\n') % f)
2699 2699 remove(repo.wvfs.unlink, f)
2700 2700 res.append(f)
2701 2701
2702 2702 if removeemptydirs:
2703 2703 for f in sorted(directories, reverse=True):
2704 2704 if matcher(f) and not repo.wvfs.listdir(f):
2705 2705 if not noop:
2706 2706 repo.ui.note(_(b'removing directory %s\n') % f)
2707 2707 remove(repo.wvfs.rmdir, f)
2708 2708 res.append(f)
2709 2709
2710 2710 return res
2711 2711
2712 2712 finally:
2713 2713 matcher.traversedir = oldtraversedir
@@ -1,34 +1,35 b''
1 1 == New Features ==
2 2
3 3 * Windows will process hgrc files in %PROGRAMDATA%\Mercurial\hgrc.d.
4 4
5 5
6 6 == New Experimental Features ==
7 7
8 8
9 9 == Bug Fixes ==
10 10
11 11 * The `indent()` template function was documented to not indent empty lines,
12 12 but it still indented the first line even if it was empty. It no longer does
13 13 that.
14 14
15 15 == Backwards Compatibility Changes ==
16 16
17 17
18 18 == Internal API Changes ==
19 19
20 20 * Matcher instances no longer have a `explicitdir` property. Consider
21 21 rewriting your code to use `repo.wvfs.isdir()` and/or
22 22 `ctx.hasdir()` instead. Also, the `traversedir` property is now
23 23 also called when only `explicitdir` used to be called. That may
24 24 mean that you can simply remove the use of `explicitdir` if you
25 25 were already using `traversedir`.
26 26
27 27 * The `revlog.nodemap` object has been merged into the `revlog.index` object.
28 28 * `n in revlog.nodemap` becomes `revlog.index.has_node(n)`,
29 29 * `revlog.nodemap[n]` becomes `revlog.index.rev(n)`,
30 30 * `revlog.nodemap.get(n)` becomes `revlog.index.get_rev(n)`.
31 31
32 32 * `copies.duplicatecopies()` was renamed to
33 33 `copies.graftcopies()`. Its arguments changed from revision numbers
34 to context objects.
34 to context objects. It also lost its `repo` and `skip` arguments
35 (they should no longer be needed).
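For third-party code calling this API, the migration is mechanical: drop the leading `repo` argument and any `skip` argument. A hedged before/after sketch (the names `repo`, `wctx`, `ctx`, `base`, and `skipset` are illustrative placeholders matching the diff above, not a runnable example):

```python
# Before this change: graftcopies took the repo and an optional skip set.
# copies.graftcopies(repo, wctx, ctx, base, skip=skipset)

# After: the repo is reachable from the contexts, and skip is gone.
# copies.graftcopies(wctx, ctx, base)
```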