graftcopies: remove `skip` and `repo` arguments...
Martin von Zweigbergk
r44551:833210fb default
@@ -1,870 +1,869 @@
# fix - rewrite file content in changesets and working copy
#
# Copyright 2018 Google LLC.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""rewrite file content in changesets or working copy (EXPERIMENTAL)

Provides a command that runs configured tools on the contents of modified files,
writing back any fixes to the working copy or replacing changesets.

Here is an example configuration that causes :hg:`fix` to apply automatic
formatting fixes to modified lines in C++ code::

  [fix]
  clang-format:command=clang-format --assume-filename={rootpath}
  clang-format:linerange=--lines={first}:{last}
  clang-format:pattern=set:**.cpp or **.hpp

The :command suboption forms the first part of the shell command that will be
used to fix a file. The content of the file is passed on standard input, and the
fixed file content is expected on standard output. Any output on standard error
will be displayed as a warning. If the exit status is not zero, the file will
not be affected. A placeholder warning is displayed if there is a non-zero exit
status but no standard error output. Some values may be substituted into the
command::

  {rootpath}  The path of the file being fixed, relative to the repo root
  {basename}  The name of the file being fixed, without the directory path

If the :linerange suboption is set, the tool will only be run if there are
changed lines in a file. The value of this suboption is appended to the shell
command once for every range of changed lines in the file. Some values may be
substituted into the command::

  {first}   The 1-based line number of the first line in the modified range
  {last}    The 1-based line number of the last line in the modified range

Deleted sections of a file will be ignored by :linerange, because there is no
corresponding line range in the version being fixed.

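As an illustration of how these pieces combine (the file path and line numbers
here are hypothetical), if src/foo.cpp has changed line ranges 1-3 and 10-12,
the clang-format configuration shown earlier would run roughly::

  clang-format --assume-filename=src/foo.cpp --lines=1:3 --lines=10:12

with the original file content supplied on standard input.
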
By default, tools that set :linerange will only be executed if there is at least
one changed line range. This is meant to prevent accidents like running a code
formatter in such a way that it unexpectedly reformats the whole file. If such a
tool needs to operate on unchanged files, it should set the :skipclean suboption
to false.

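For example, a hypothetical tool (the name and pattern here are illustrative)
that should process matching files even when they contain no changed lines
could be configured as::

  [fix]
  sortwords:command = sort
  sortwords:pattern = wordlist.txt
  sortwords:skipclean = false
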
The :pattern suboption determines which files will be passed through each
configured tool. See :hg:`help patterns` for possible values. However, all
patterns are relative to the repo root, even if that text says they are relative
to the current working directory. If there are file arguments to :hg:`fix`, the
intersection of these patterns is used.

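For example (with a hypothetical tool name; any :hg:`help patterns` syntax may
be used), a tool can be limited to Python files anywhere in the repository::

  [fix]
  lint-tool:pattern = set:**.py
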
There is also a configurable limit for the maximum size of file that will be
processed by :hg:`fix`::

  [fix]
  maxfilesize = 2MB

Normally, execution of configured tools will continue after a failure (indicated
by a non-zero exit status). It can also be configured to abort after the first
such failure, so that no files will be affected if any tool fails. This abort
will also cause :hg:`fix` to exit with a non-zero status::

  [fix]
  failure = abort

When multiple tools are configured to affect a file, they execute in an order
defined by the :priority suboption. The priority suboption has a default value
of zero for each tool. Tools are executed in order of descending priority. The
execution order of tools with equal priority is unspecified. For example, you
could use the 'sort' and 'head' utilities to keep only the 10 smallest numbers
in a text file by ensuring that 'sort' runs before 'head'::

  [fix]
  sort:command = sort -n
  head:command = head -n 10
  sort:pattern = numbers.txt
  head:pattern = numbers.txt
  sort:priority = 2
  head:priority = 1

To account for changes made by each tool, the line numbers used for incremental
formatting are recomputed before executing the next tool. So, each tool may see
different values for the arguments added by the :linerange suboption.

Each fixer tool is allowed to return some metadata in addition to the fixed file
content. The metadata must be placed before the file content on stdout,
separated from the file content by a zero byte. The metadata is parsed as a JSON
value (so, it should be UTF-8 encoded and contain no zero bytes). A fixer tool
is expected to produce this metadata encoding if and only if the :metadata
suboption is true::

  [fix]
  tool:command = tool --prepend-json-metadata
  tool:metadata = true

The metadata values are passed to hooks, which can be used to print summaries or
perform other post-fixing work. The supported hooks are::

  "postfixfile"
    Run once for each file in each revision where any fixer tools made changes
    to the file content. Provides "$HG_REV" and "$HG_PATH" to identify the file,
    and "$HG_METADATA" with a map of fixer names to metadata values from fixer
    tools that affected the file. Fixer tools that didn't affect the file have a
    value of None. Only fixer tools that executed are present in the metadata.

  "postfix"
    Run once after all files and revisions have been handled. Provides
    "$HG_REPLACEMENTS" with information about what revisions were created and
    made obsolete. Provides a boolean "$HG_WDIRWRITTEN" to indicate whether any
    files in the working copy were updated. Provides a list "$HG_METADATA"
    mapping fixer tool names to lists of metadata values returned from
    executions that modified a file. This aggregates the same metadata
    previously passed to the "postfixfile" hook.

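For illustration only (this exact hook is not shipped with the extension), a
shell hook could report each fixed file using the variables described above::

  [hooks]
  postfixfile = echo "fixed $HG_PATH in revision $HG_REV"
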
Fixer tools are run in the repository's root directory. This allows them to read
configuration files from the working copy, or even write to the working copy.
The working copy is not updated to match the revision being fixed. In fact,
several revisions may be fixed in parallel. Writes to the working copy are not
amended into the revision being fixed; fixer tools should always write fixed
file content back to stdout as documented above.
"""

from __future__ import absolute_import

import collections
import itertools
import os
import re
import subprocess

from mercurial.i18n import _
from mercurial.node import nullrev
from mercurial.node import wdirrev

from mercurial.utils import procutil

from mercurial import (
    cmdutil,
    context,
    copies,
    error,
    match as matchmod,
    mdiff,
    merge,
    pycompat,
    registrar,
    rewriteutil,
    scmutil,
    util,
    worker,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

# Register the suboptions allowed for each configured fixer, and default values.
FIXER_ATTRS = {
    b'command': None,
    b'linerange': None,
    b'pattern': None,
    b'priority': 0,
    b'metadata': False,
    b'skipclean': True,
    b'enabled': True,
}

for key, default in FIXER_ATTRS.items():
    configitem(b'fix', b'.*:%s$' % key, default=default, generic=True)
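
# For example, with the registrations above, a user setting such as
# "clang-format:priority = 3" in the [fix] section is matched by the generic
# pattern b'.*:priority$' and read with a default of 0. (The tool name here is
# just an illustration.)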

# A good default size allows most source code files to be fixed, but avoids
# letting fixer tools choke on huge inputs, which could be surprising to the
# user.
configitem(b'fix', b'maxfilesize', default=b'2MB')

# Allow fix commands to exit non-zero if an executed fixer tool exits non-zero.
# This helps users write shell scripts that stop when a fixer tool signals a
# problem.
configitem(b'fix', b'failure', default=b'continue')


def checktoolfailureaction(ui, message, hint=None):
    """Abort with 'message' if fix.failure=abort"""
    action = ui.config(b'fix', b'failure')
    if action not in (b'continue', b'abort'):
        raise error.Abort(
            _(b'unknown fix.failure action: %s') % (action,),
            hint=_(b'use "continue" or "abort"'),
        )
    if action == b'abort':
        raise error.Abort(message, hint=hint)


allopt = (b'', b'all', False, _(b'fix all non-public non-obsolete revisions'))
baseopt = (
    b'',
    b'base',
    [],
    _(
        b'revisions to diff against (overrides automatic '
        b'selection, and applies to every revision being '
        b'fixed)'
    ),
    _(b'REV'),
)
revopt = (b'r', b'rev', [], _(b'revisions to fix'), _(b'REV'))
wdiropt = (b'w', b'working-dir', False, _(b'fix the working directory'))
wholeopt = (b'', b'whole', False, _(b'always fix every line of a file'))
usage = _(b'[OPTION]... [FILE]...')


@command(
    b'fix',
    [allopt, baseopt, revopt, wdiropt, wholeopt],
    usage,
    helpcategory=command.CATEGORY_FILE_CONTENTS,
)
def fix(ui, repo, *pats, **opts):
    """rewrite file content in changesets or working directory

    Runs any configured tools to fix the content of files. Only affects files
    with changes, unless file arguments are provided. Only affects changed lines
    of files, unless the --whole flag is used. Some tools may always affect the
    whole file regardless of --whole.

    If revisions are specified with --rev, those revisions will be checked, and
    they may be replaced with new revisions that have fixed file content. It is
    desirable to specify all descendants of each specified revision, so that the
    fixes propagate to the descendants. If all descendants are fixed at the same
    time, no merging, rebasing, or evolution will be required.

    If --working-dir is used, files with uncommitted changes in the working copy
    will be fixed. If the checked-out revision is also fixed, the working
    directory will update to the replacement revision.

    When determining what lines of each file to fix at each revision, the whole
    set of revisions being fixed is considered, so that fixes to earlier
    revisions are not forgotten in later ones. The --base flag can be used to
    override this default behavior, though it is not usually desirable to do so.
    """
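    # Typical invocations, for illustration (these flags are documented above;
    # the revision arguments are examples):
    #   hg fix --working-dir            # fix uncommitted changes only
    #   hg fix --rev . --working-dir    # also fix the checked-out revision
    #   hg fix --all                    # fix all non-public, non-obsolete
    #                                   # revisions and the working directory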
    opts = pycompat.byteskwargs(opts)
    cmdutil.check_at_most_one_arg(opts, b'all', b'rev')
    if opts[b'all']:
        opts[b'rev'] = [b'not public() and not obsolete()']
        opts[b'working_dir'] = True
    with repo.wlock(), repo.lock(), repo.transaction(b'fix'):
        revstofix = getrevstofix(ui, repo, opts)
        basectxs = getbasectxs(repo, opts, revstofix)
        workqueue, numitems = getworkqueue(
            ui, repo, pats, opts, revstofix, basectxs
        )
        fixers = getfixers(ui)

        # There are no data dependencies between the workers fixing each file
        # revision, so we can use all available parallelism.
        def getfixes(items):
            for rev, path in items:
                ctx = repo[rev]
                olddata = ctx[path].data()
                metadata, newdata = fixfile(
                    ui, repo, opts, fixers, ctx, path, basectxs[rev]
                )
                # Don't waste memory/time passing unchanged content back, but
                # produce one result per item either way.
                yield (
                    rev,
                    path,
                    metadata,
                    newdata if newdata != olddata else None,
                )

        results = worker.worker(
            ui, 1.0, getfixes, tuple(), workqueue, threadsafe=False
        )

        # We have to hold on to the data for each successor revision in memory
        # until all its parents are committed. We ensure this by committing and
        # freeing memory for the revisions in some topological order. This
        # leaves a little bit of memory efficiency on the table, but also makes
        # the tests deterministic. It might also be considered a feature since
        # it makes the results more easily reproducible.
        filedata = collections.defaultdict(dict)
        aggregatemetadata = collections.defaultdict(list)
        replacements = {}
        wdirwritten = False
        commitorder = sorted(revstofix, reverse=True)
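        # Because commitorder is sorted in reverse, commitorder[-1] is always
        # the lowest-numbered remaining revision, so popping from the end of
        # the list (below) commits revisions in ascending, and therefore
        # topological, order.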
        with ui.makeprogress(
            topic=_(b'fixing'), unit=_(b'files'), total=sum(numitems.values())
        ) as progress:
            for rev, path, filerevmetadata, newdata in results:
                progress.increment(item=path)
                for fixername, fixermetadata in filerevmetadata.items():
                    aggregatemetadata[fixername].append(fixermetadata)
                if newdata is not None:
                    filedata[rev][path] = newdata
                    hookargs = {
                        b'rev': rev,
                        b'path': path,
                        b'metadata': filerevmetadata,
                    }
                    repo.hook(
                        b'postfixfile',
                        throw=False,
                        **pycompat.strkwargs(hookargs)
                    )
                numitems[rev] -= 1
                # Apply the fixes for this and any other revisions that are
                # ready and sitting at the front of the queue. Using a loop here
                # prevents the queue from being blocked by the first revision to
                # be ready out of order.
                while commitorder and not numitems[commitorder[-1]]:
                    rev = commitorder.pop()
                    ctx = repo[rev]
                    if rev == wdirrev:
                        writeworkingdir(repo, ctx, filedata[rev], replacements)
                        wdirwritten = bool(filedata[rev])
                    else:
                        replacerev(ui, repo, ctx, filedata[rev], replacements)
                    del filedata[rev]

        cleanup(repo, replacements, wdirwritten)
        hookargs = {
            b'replacements': replacements,
            b'wdirwritten': wdirwritten,
            b'metadata': aggregatemetadata,
        }
        repo.hook(b'postfix', throw=True, **pycompat.strkwargs(hookargs))


def cleanup(repo, replacements, wdirwritten):
    """Calls scmutil.cleanupnodes() with the given replacements.

    "replacements" is a dict from nodeid to nodeid, with one key and one value
    for every revision that was affected by fixing. This is slightly different
    from cleanupnodes().

    "wdirwritten" is a bool which tells whether the working copy was affected by
    fixing, since it has no entry in "replacements".

    Useful as a hook point for extending "hg fix" with output summarizing the
    effects of the command, though we choose not to output anything here.
    """
    replacements = {
        prec: [succ] for prec, succ in pycompat.iteritems(replacements)
    }
    scmutil.cleanupnodes(repo, replacements, b'fix', fixphase=True)


def getworkqueue(ui, repo, pats, opts, revstofix, basectxs):
    """Constructs the list of files to be fixed at specific revisions

    It is up to the caller how to consume the work items, and the only
    dependence between them is that replacement revisions must be committed in
    topological order. Each work item represents a file in the working copy or
    in some revision that should be fixed and written back to the working copy
    or into a replacement revision.

    Work items for the same revision are grouped together, so that a worker
    pool starting with the first N items in parallel is likely to finish the
    first revision's work before other revisions. This can allow us to write
    the result to disk and reduce memory footprint. At time of writing, the
    partition strategy in worker.py seems favorable to this. We also sort the
    items by ascending revision number to match the order in which we commit
    the fixes later.
    """
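    # The resulting queue is a flat list of (rev, path) pairs grouped by
    # revision, e.g. (with illustrative values):
    #   workqueue = [(5, b'foo.py'), (5, b'bar.py'), (7, b'foo.py')]
    #   numitems = {5: 2, 7: 1}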
    workqueue = []
    numitems = collections.defaultdict(int)
    maxfilesize = ui.configbytes(b'fix', b'maxfilesize')
    for rev in sorted(revstofix):
        fixctx = repo[rev]
        match = scmutil.match(fixctx, pats, opts)
        for path in sorted(
            pathstofix(ui, repo, pats, opts, match, basectxs[rev], fixctx)
        ):
            fctx = fixctx[path]
            if fctx.islink():
                continue
            if fctx.size() > maxfilesize:
                ui.warn(
                    _(b'ignoring file larger than %s: %s\n')
                    % (util.bytecount(maxfilesize), path)
                )
                continue
            workqueue.append((rev, path))
            numitems[rev] += 1
    return workqueue, numitems


def getrevstofix(ui, repo, opts):
    """Returns the set of revision numbers that should be fixed"""
    revs = set(scmutil.revrange(repo, opts[b'rev']))
    for rev in revs:
        checkfixablectx(ui, repo, repo[rev])
    if revs:
        cmdutil.checkunfinished(repo)
        rewriteutil.precheck(repo, revs, b'fix')
    if opts.get(b'working_dir'):
        revs.add(wdirrev)
        if list(merge.mergestate.read(repo).unresolved()):
            raise error.Abort(b'unresolved conflicts', hint=b"use 'hg resolve'")
    if not revs:
        raise error.Abort(
            b'no changesets specified', hint=b'use --rev or --working-dir'
        )
    return revs


def checkfixablectx(ui, repo, ctx):
    """Aborts if the revision shouldn't be replaced with a fixed one."""
    if ctx.obsolete():
        # It would be better to actually check if the revision has a successor.
        allowdivergence = ui.configbool(
            b'experimental', b'evolution.allowdivergence'
        )
        if not allowdivergence:
            raise error.Abort(
                b'fixing obsolete revision could cause divergence'
            )


def pathstofix(ui, repo, pats, opts, match, basectxs, fixctx):
    """Returns the set of files that should be fixed in a context

    The result depends on the base contexts; we include any file that has
    changed relative to any of the base contexts. Base contexts should be
    ancestors of the context being fixed.
    """
    files = set()
    for basectx in basectxs:
        stat = basectx.status(
            fixctx, match=match, listclean=bool(pats), listunknown=bool(pats)
        )
        files.update(
            set(
                itertools.chain(
                    stat.added, stat.modified, stat.clean, stat.unknown
                )
            )
        )
    return files


def lineranges(opts, path, basectxs, fixctx, content2):
    """Returns the set of line ranges that should be fixed in a file

    Of the form [(10, 20), (30, 40)].

    This depends on the given base contexts; we must consider lines that have
    changed versus any of the base contexts, and whether the file has been
    renamed versus any of them.

    Another way to understand this is that we exclude line ranges that are
    common to the file in all base contexts.
    """
    if opts.get(b'whole'):
        # Return a range containing all lines. Rely on the diff implementation's
        # idea of how many lines are in the file, instead of reimplementing it.
        return difflineranges(b'', content2)

    rangeslist = []
    for basectx in basectxs:
        basepath = copies.pathcopies(basectx, fixctx).get(path, path)
        if basepath in basectx:
            content1 = basectx[basepath].data()
        else:
            content1 = b''
        rangeslist.extend(difflineranges(content1, content2))
    return unionranges(rangeslist)


def unionranges(rangeslist):
    """Return the union of some closed intervals

    >>> unionranges([])
    []
    >>> unionranges([(1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (2, 100)])
    [(1, 100)]
    >>> unionranges([(1, 99), (1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (40, 60)])
    [(1, 100)]
    >>> unionranges([(1, 49), (50, 100)])
    [(1, 100)]
    >>> unionranges([(1, 48), (50, 100)])
    [(1, 48), (50, 100)]
    >>> unionranges([(1, 2), (3, 4), (5, 6)])
    [(1, 6)]
    """
    rangeslist = sorted(set(rangeslist))
    unioned = []
    if rangeslist:
        unioned, rangeslist = [rangeslist[0]], rangeslist[1:]
    for a, b in rangeslist:
        c, d = unioned[-1]
        if a > d + 1:
            unioned.append((a, b))
        else:
            unioned[-1] = (c, max(b, d))
    return unioned


def difflineranges(content1, content2):
    """Return list of line number ranges in content2 that differ from content1.

    Line numbers are 1-based. The numbers are the first and last line contained
    in the range. Single-line ranges have the same line number for the first and
    last line. Excludes any empty ranges that result from lines that are only
    present in content1. Relies on mdiff's idea of where the line endings are in
    the string.

    >>> from mercurial import pycompat
    >>> lines = lambda s: b'\\n'.join([c for c in pycompat.iterbytestr(s)])
    >>> difflineranges2 = lambda a, b: difflineranges(lines(a), lines(b))
    >>> difflineranges2(b'', b'')
    []
    >>> difflineranges2(b'a', b'')
    []
    >>> difflineranges2(b'', b'A')
    [(1, 1)]
    >>> difflineranges2(b'a', b'a')
    []
    >>> difflineranges2(b'a', b'A')
    [(1, 1)]
    >>> difflineranges2(b'ab', b'')
    []
    >>> difflineranges2(b'', b'AB')
    [(1, 2)]
    >>> difflineranges2(b'abc', b'ac')
    []
    >>> difflineranges2(b'ab', b'aCb')
    [(2, 2)]
    >>> difflineranges2(b'abc', b'aBc')
    [(2, 2)]
    >>> difflineranges2(b'ab', b'AB')
    [(1, 2)]
    >>> difflineranges2(b'abcde', b'aBcDe')
    [(2, 2), (4, 4)]
    >>> difflineranges2(b'abcde', b'aBCDe')
    [(2, 4)]
    """
    ranges = []
    for lines, kind in mdiff.allblocks(content1, content2):
        firstline, lastline = lines[2:4]
        if kind == b'!' and firstline != lastline:
            ranges.append((firstline + 1, lastline))
    return ranges


def getbasectxs(repo, opts, revstofix):
    """Returns a map of the base contexts for each revision

    The base contexts determine which lines are considered modified when we
    attempt to fix just the modified lines in a file. It also determines which
    files we attempt to fix, so it is important to compute this even when
    --whole is used.
    """
    # The --base flag overrides the usual logic, and we give every revision
    # exactly the set of baserevs that the user specified.
    if opts.get(b'base'):
        baserevs = set(scmutil.revrange(repo, opts.get(b'base')))
        if not baserevs:
            baserevs = {nullrev}
        basectxs = {repo[rev] for rev in baserevs}
        return {rev: basectxs for rev in revstofix}

    # Proceed in topological order so that we can easily determine each
    # revision's baserevs by looking at its parents and their baserevs.
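    # For example (with illustrative revision numbers): fixing a linear chain
    # of revisions 1..3 on top of revision 0 yields basectxs of
    # {1: {repo[0]}, 2: {repo[0]}, 3: {repo[0]}}, because each fixed revision
    # inherits the bases of its fixed parent rather than using that parent
    # itself as a base.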
    basectxs = collections.defaultdict(set)
    for rev in sorted(revstofix):
        ctx = repo[rev]
        for pctx in ctx.parents():
            if pctx.rev() in basectxs:
                basectxs[rev].update(basectxs[pctx.rev()])
            else:
                basectxs[rev].add(pctx)
    return basectxs


def fixfile(ui, repo, opts, fixers, fixctx, path, basectxs):
    """Run any configured fixers that should affect the file in this context

    Returns the file content that results from applying the fixers in some order
    starting with the file's content in the fixctx. Fixers that support line
    ranges will affect lines that have changed relative to any of the basectxs
    (i.e. they will only avoid lines that are common to all basectxs).

    A fixer tool's stdout will become the file's new content if and only if it
    exits with code zero. The fixer tool's working directory is the repository's
    root.
    """
    metadata = {}
    newdata = fixctx[path].data()
    for fixername, fixer in pycompat.iteritems(fixers):
        if fixer.affects(opts, fixctx, path):
            ranges = lineranges(opts, path, basectxs, fixctx, newdata)
            command = fixer.command(ui, path, ranges)
            if command is None:
                continue
            ui.debug(b'subprocess: %s\n' % (command,))
            proc = subprocess.Popen(
                procutil.tonativestr(command),
                shell=True,
                cwd=procutil.tonativestr(repo.root),
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            stdout, stderr = proc.communicate(newdata)
            if stderr:
                showstderr(ui, fixctx.rev(), fixername, stderr)
            newerdata = stdout
            if fixer.shouldoutputmetadata():
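                # A metadata-producing tool's stdout is expected to look like,
                # for illustration: b'{"key": "value"}\0<fixed file content>',
                # i.e. JSON metadata, then a NUL byte, then the file content.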
                try:
                    metadatajson, newerdata = stdout.split(b'\0', 1)
                    metadata[fixername] = pycompat.json_loads(metadatajson)
                except ValueError:
                    ui.warn(
                        _(b'ignored invalid output from fixer tool: %s\n')
                        % (fixername,)
                    )
                    continue
            else:
                metadata[fixername] = None
            if proc.returncode == 0:
                newdata = newerdata
            else:
                if not stderr:
                    message = _(b'exited with status %d\n') % (proc.returncode,)
                    showstderr(ui, fixctx.rev(), fixername, message)
                checktoolfailureaction(
                    ui,
                    _(b'no fixes will be applied'),
                    hint=_(
                        b'use --config fix.failure=continue to apply any '
                        b'successful fixes anyway'
                    ),
                )
    return metadata, newdata


def showstderr(ui, rev, fixername, stderr):
    """Writes the lines of the stderr string as warnings on the ui

    Uses the revision number and fixername to give more context to each line of
    the error message. Doesn't include file names, since those take up a lot of
    space and would tend to be included in the error message if they were
    relevant.
    """
    for line in re.split(b'[\r\n]+', stderr):
        if line:
            ui.warn(b'[')
            if rev is None:
                ui.warn(_(b'wdir'), label=b'evolve.rev')
            else:
                ui.warn(b'%d' % rev, label=b'evolve.rev')
            ui.warn(b'] %s: %s\n' % (fixername, line))


def writeworkingdir(repo, ctx, filedata, replacements):
    """Write new content to the working copy and check out the new p1 if any

    We check out a new revision if and only if we fixed something in both the
    working directory and its parent revision. This avoids the need for a full
    update/merge, and means that the working directory simply isn't affected
    unless the --working-dir flag is given.

    Directly updates the dirstate for the affected files.
    """
    for path, data in pycompat.iteritems(filedata):
        fctx = ctx[path]
        fctx.write(data, fctx.flags())
        if repo.dirstate[path] == b'n':
            repo.dirstate.normallookup(path)

    oldparentnodes = repo.dirstate.parents()
    newparentnodes = [replacements.get(n, n) for n in oldparentnodes]
    if newparentnodes != oldparentnodes:
        repo.setparents(*newparentnodes)


def replacerev(ui, repo, ctx, filedata, replacements):
    """Commit a new revision like the given one, but with file content changes

    "ctx" is the original revision to be replaced by a modified one.

    "filedata" is a dict that maps paths to their new file content. All other
    paths will be recreated from the original revision without changes.
    "filedata" may contain paths that didn't exist in the original revision;
    they will be added.

    "replacements" is a dict that maps a single node to a single node, and it is
    updated to indicate the original revision is replaced by the newly created
    one. No entry is added if the replacement's node already exists.

    The new revision has the same parents as the old one, unless those parents
    have already been replaced, in which case those replacements are the parents
    of this new revision. Thus, if revisions are replaced in topological order,
    there is no need to rebase them into the original topology later.
    """

    p1rev, p2rev = repo.changelog.parentrevs(ctx.rev())
    p1ctx, p2ctx = repo[p1rev], repo[p2rev]
    newp1node = replacements.get(p1ctx.node(), p1ctx.node())
    newp2node = replacements.get(p2ctx.node(), p2ctx.node())

    # We don't want to create a revision that has no changes from the original,
    # but we should if the original revision's parent has been replaced.
    # Otherwise, we would produce an orphan that needs no actual human
    # intervention to evolve. We can't rely on commit() to avoid creating the
    # un-needed revision because the extra field added below produces a new hash
    # regardless of file content changes.
    if (
        not filedata
        and p1ctx.node() not in replacements
        and p2ctx.node() not in replacements
    ):
        return

    extra = ctx.extra().copy()
    extra[b'fix_source'] = ctx.hex()

    wctx = context.overlayworkingctx(repo)
-    newp1ctx = repo[newp1node]
-    wctx.setbase(newp1ctx)
+    wctx.setbase(repo[newp1node])
    merge.update(
        repo,
        ctx.rev(),
        branchmerge=False,
        force=True,
        ancestor=p1rev,
        mergeancestor=False,
        wc=wctx,
    )
-    copies.graftcopies(repo, wctx, ctx, ctx.p1(), skip=newp1ctx)
+    copies.graftcopies(wctx, ctx, ctx.p1())
749
748
750 for path in filedata.keys():
749 for path in filedata.keys():
751 fctx = ctx[path]
750 fctx = ctx[path]
752 copysource = fctx.copysource()
751 copysource = fctx.copysource()
753 wctx.write(path, filedata[path], flags=fctx.flags())
752 wctx.write(path, filedata[path], flags=fctx.flags())
754 if copysource:
753 if copysource:
755 wctx.markcopied(path, copysource)
754 wctx.markcopied(path, copysource)
756
755
757 memctx = wctx.tomemctx(
756 memctx = wctx.tomemctx(
758 text=ctx.description(),
757 text=ctx.description(),
759 branch=ctx.branch(),
758 branch=ctx.branch(),
760 extra=extra,
759 extra=extra,
761 date=ctx.date(),
760 date=ctx.date(),
762 parents=(newp1node, newp2node),
761 parents=(newp1node, newp2node),
763 user=ctx.user(),
762 user=ctx.user(),
764 )
763 )
765
764
766 sucnode = memctx.commit()
765 sucnode = memctx.commit()
767 prenode = ctx.node()
766 prenode = ctx.node()
768 if prenode == sucnode:
767 if prenode == sucnode:
769 ui.debug(b'node %s already existed\n' % (ctx.hex()))
768 ui.debug(b'node %s already existed\n' % (ctx.hex()))
770 else:
769 else:
771 replacements[ctx.node()] = sucnode
770 replacements[ctx.node()] = sucnode
772
771
773
772
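# Illustrative sketch (hypothetical node values): the ``replacements``
# mapping above is what makes topological processing work. A child fixed
# after its parent automatically lands on the rewritten parent:
#
#     replacements = {b'<old-p1-node>': b'<new-p1-node>'}
#     newp1node = replacements.get(p1ctx.node(), p1ctx.node())
#     # -> b'<new-p1-node>' if p1 was rewritten, else the original node

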
def getfixers(ui):
    """Returns a map of configured fixer tools indexed by their names

    Each value is a Fixer object with methods that implement the behavior of the
    fixer's config suboptions. Does not validate the config values.
    """
    fixers = {}
    for name in fixernames(ui):
        enabled = ui.configbool(b'fix', name + b':enabled')
        command = ui.config(b'fix', name + b':command')
        pattern = ui.config(b'fix', name + b':pattern')
        linerange = ui.config(b'fix', name + b':linerange')
        priority = ui.configint(b'fix', name + b':priority')
        metadata = ui.configbool(b'fix', name + b':metadata')
        skipclean = ui.configbool(b'fix', name + b':skipclean')
        # Don't use a fixer if it has no pattern configured. It would be
        # dangerous to let it affect all files. It would be pointless to let it
        # affect no files. There is no reasonable subset of files to use as the
        # default.
        if command is None:
            ui.warn(
                _(b'fixer tool has no command configuration: %s\n') % (name,)
            )
        elif pattern is None:
            ui.warn(
                _(b'fixer tool has no pattern configuration: %s\n') % (name,)
            )
        elif not enabled:
            ui.debug(b'ignoring disabled fixer tool: %s\n' % (name,))
        else:
            fixers[name] = Fixer(
                command, pattern, linerange, priority, metadata, skipclean
            )
    return collections.OrderedDict(
        sorted(fixers.items(), key=lambda item: item[1]._priority, reverse=True)
    )


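# Illustrative example (hypothetical tool names): given
#
#     [fix]
#     slow-tool:priority = 1
#     fast-tool:priority = 5
#
# getfixers() would return an OrderedDict listing b'fast-tool' before
# b'slow-tool', because fixers are sorted by descending ``_priority``.

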
def fixernames(ui):
    """Returns the names of [fix] config options that have suboptions"""
    names = set()
    for k, v in ui.configitems(b'fix'):
        if b':' in k:
            names.add(k.split(b':', 1)[0])
    return names


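# Illustrative example (hypothetical config): given
#
#     [fix]
#     mytool:command = mytool --stdin
#     mytool:pattern = set:**.py
#
# fixernames() would return {b'mytool'}: each suboption key is split once
# on b':' and only the tool name is kept.

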
class Fixer(object):
    """Wraps the raw config values for a fixer with methods"""

    def __init__(
        self, command, pattern, linerange, priority, metadata, skipclean
    ):
        self._command = command
        self._pattern = pattern
        self._linerange = linerange
        self._priority = priority
        self._metadata = metadata
        self._skipclean = skipclean

    def affects(self, opts, fixctx, path):
        """Should this fixer run on the file at the given path and context?"""
        repo = fixctx.repo()
        matcher = matchmod.match(
            repo.root, repo.root, [self._pattern], ctx=fixctx
        )
        return matcher(path)

    def shouldoutputmetadata(self):
        """Should the stdout of this fixer start with JSON and a null byte?"""
        return self._metadata

    def command(self, ui, path, ranges):
        """A shell command to use to invoke this fixer on the given file/lines

        May return None if there is no appropriate command to run for the given
        parameters.
        """
        expand = cmdutil.rendercommandtemplate
        parts = [
            expand(
                ui,
                self._command,
                {b'rootpath': path, b'basename': os.path.basename(path)},
            )
        ]
        if self._linerange:
            if self._skipclean and not ranges:
                # No line ranges to fix, so don't run the fixer.
                return None
            for first, last in ranges:
                parts.append(
                    expand(
                        ui, self._linerange, {b'first': first, b'last': last}
                    )
                )
        return b' '.join(parts)
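

# Illustrative example (hypothetical config values): with
# command = b'fmt {basename}' and linerange = b'-l {first}-{last}',
# Fixer.command(ui, b'src/foo.py', [(1, 5), (10, 12)]) would render
# b'fmt foo.py -l 1-5 -l 10-12': the command template is expanded once,
# then the linerange template once per changed range, all joined by spaces.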
@@ -1,2265 +1,2262 @@
# rebase.py - rebasing feature for mercurial
#
# Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''command to move sets of revisions to a different ancestor

This extension lets you rebase changesets in an existing Mercurial
repository.

For more information:
https://mercurial-scm.org/wiki/RebaseExtension
'''

from __future__ import absolute_import

import errno
import os

from mercurial.i18n import _
from mercurial.node import (
    nullrev,
    short,
)
from mercurial.pycompat import open
from mercurial import (
    bookmarks,
    cmdutil,
    commands,
    copies,
    destutil,
    dirstateguard,
    error,
    extensions,
    hg,
    merge as mergemod,
    mergeutil,
    obsolete,
    obsutil,
    patch,
    phases,
    pycompat,
    registrar,
    repair,
    revset,
    revsetlang,
    rewriteutil,
    scmutil,
    smartset,
    state as statemod,
    util,
)

# The following constants are used throughout the rebase module. The ordering of
# their values must be maintained.

# Indicates that a revision needs to be rebased
revtodo = -1
revtodostr = b'-1'

# legacy revstates no longer needed in current code
# -2: nullmerge, -3: revignored, -4: revprecursor, -5: revpruned
legacystates = {b'-2', b'-3', b'-4', b'-5'}

cmdtable = {}
command = registrar.command(cmdtable)
# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'
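

# Illustrative note on the ``state`` mapping used throughout this module: a
# value of revtodo (-1) marks a source revision that still needs rebasing,
# a non-negative value is the revision it was rebased to, and the legacy
# values -2..-5 are only recognized when reading old rebasestate files, e.g.
#
#     state = {4: revtodo, 3: 12}   # rev 3 done (now rev 12), rev 4 pending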


def _nothingtorebase():
    return 1


def _savegraft(ctx, extra):
    s = ctx.extra().get(b'source', None)
    if s is not None:
        extra[b'source'] = s
    s = ctx.extra().get(b'intermediate-source', None)
    if s is not None:
        extra[b'intermediate-source'] = s


def _savebranch(ctx, extra):
    extra[b'branch'] = ctx.branch()


def _destrebase(repo, sourceset, destspace=None):
    """small wrapper around destmerge to pass the right extra args

    Please wrap destutil.destmerge instead."""
    return destutil.destmerge(
        repo,
        action=b'rebase',
        sourceset=sourceset,
        onheadcheck=False,
        destspace=destspace,
    )


revsetpredicate = registrar.revsetpredicate()


@revsetpredicate(b'_destrebase')
def _revsetdestrebase(repo, subset, x):
    # ``_rebasedefaultdest()``

    # default destination for rebase.
    # # XXX: Currently private because I expect the signature to change.
    # # XXX: - bailing out in case of ambiguity vs returning all data.
    # i18n: "_rebasedefaultdest" is a keyword
    sourceset = None
    if x is not None:
        sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
    return subset & smartset.baseset([_destrebase(repo, sourceset)])


@revsetpredicate(b'_destautoorphanrebase')
def _revsetdestautoorphanrebase(repo, subset, x):
    # ``_destautoorphanrebase()``

    # automatic rebase destination for a single orphan revision.
    unfi = repo.unfiltered()
    obsoleted = unfi.revs(b'obsolete()')

    src = revset.getset(repo, subset, x).first()

    # Empty src or already obsoleted - Do not return a destination
    if not src or src in obsoleted:
        return smartset.baseset()
    dests = destutil.orphanpossibledestination(repo, src)
    if len(dests) > 1:
        raise error.Abort(
            _(b"ambiguous automatic rebase: %r could end up on any of %r")
            % (src, dests)
        )
    # We have zero or one destination, so we can just return here.
    return smartset.baseset(dests)


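# Illustrative usage (assumption: this predicate backs the experimental
# ``hg rebase --auto-orphans`` option): for a single orphan source revision
# it yields at most one destination, and more than one candidate aborts
# with the "ambiguous automatic rebase" error above.

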
def _ctxdesc(ctx):
    """short description for a context"""
    desc = b'%d:%s "%s"' % (
        ctx.rev(),
        ctx,
        ctx.description().split(b'\n', 1)[0],
    )
    repo = ctx.repo()
    names = []
    for nsname, ns in pycompat.iteritems(repo.names):
        if nsname == b'branches':
            continue
        names.extend(ns.names(repo, ctx.node()))
    if names:
        desc += b' (%s)' % b' '.join(names)
    return desc


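# Illustrative output (hypothetical revision): for rev 4 whose description
# starts with b'fix the frobnicator' and which carries a bookmark
# b'feature', _ctxdesc() returns something like
#
#     4:c3d4e5f6a7b8 "fix the frobnicator" (feature)

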
class rebaseruntime(object):
    """This class is a container for rebase runtime state"""

    def __init__(self, repo, ui, inmemory=False, opts=None):
        if opts is None:
            opts = {}

        # prepared: whether we have rebasestate prepared or not. Currently it
        # decides whether "self.repo" is unfiltered or not.
        # The rebasestate has explicit hash to hash instructions not depending
        # on visibility. If rebasestate exists (in-memory or on-disk), use
        # unfiltered repo to avoid visibility issues.
        # Before knowing rebasestate (i.e. when starting a new rebase (not
        # --continue or --abort)), the original repo should be used so
        # visibility-dependent revsets are correct.
        self.prepared = False
        self._repo = repo

        self.ui = ui
        self.opts = opts
        self.originalwd = None
        self.external = nullrev
        # Mapping between the old revision id and either what is the new rebased
        # revision or what needs to be done with the old revision. The state
        # dict will be what contains most of the rebase progress state.
        self.state = {}
        self.activebookmark = None
        self.destmap = {}
        self.skipped = set()

        self.collapsef = opts.get(b'collapse', False)
        self.collapsemsg = cmdutil.logmessage(ui, opts)
        self.date = opts.get(b'date', None)

        e = opts.get(b'extrafn')  # internal, used by e.g. hgsubversion
        self.extrafns = [_savegraft]
        if e:
            self.extrafns = [e]

        self.backupf = ui.configbool(b'rewrite', b'backup-bundle')
        self.keepf = opts.get(b'keep', False)
        self.keepbranchesf = opts.get(b'keepbranches', False)
        self.obsoletenotrebased = {}
        self.obsoletewithoutsuccessorindestination = set()
        self.inmemory = inmemory
        self.stateobj = statemod.cmdstate(repo, b'rebasestate')

    @property
    def repo(self):
        if self.prepared:
            return self._repo.unfiltered()
        else:
            return self._repo

    def storestatus(self, tr=None):
        """Store the current status to allow recovery"""
        if tr:
            tr.addfilegenerator(
                b'rebasestate',
                (b'rebasestate',),
                self._writestatus,
                location=b'plain',
            )
        else:
            with self.repo.vfs(b"rebasestate", b"w") as f:
                self._writestatus(f)

    def _writestatus(self, f):
        repo = self.repo
        assert repo.filtername is None
        f.write(repo[self.originalwd].hex() + b'\n')
        # was "dest". we now write dest per src root below.
        f.write(b'\n')
        f.write(repo[self.external].hex() + b'\n')
        f.write(b'%d\n' % int(self.collapsef))
        f.write(b'%d\n' % int(self.keepf))
        f.write(b'%d\n' % int(self.keepbranchesf))
        f.write(b'%s\n' % (self.activebookmark or b''))
        destmap = self.destmap
        for d, v in pycompat.iteritems(self.state):
            oldrev = repo[d].hex()
            if v >= 0:
                newrev = repo[v].hex()
            else:
                newrev = b"%d" % v
            destnode = repo[destmap[d]].hex()
            f.write(b"%s:%s:%s\n" % (oldrev, newrev, destnode))
        repo.ui.debug(b'rebase status stored\n')

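    # Illustrative layout (hashes abbreviated) of the ``.hg/rebasestate``
    # file that _writestatus() produces:
    #
    #     <originalwd hex>
    #     <empty line: the legacy single-"dest" slot>
    #     <external hex>
    #     0                     <- collapse flag
    #     0                     <- keep flag
    #     0                     <- keepbranches flag
    #     <active bookmark, or empty>
    #     <oldrev hex>:<newrev hex, or -1>:<destnode hex>   (one per source)
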
    def restorestatus(self):
        """Restore a previously stored status"""
        if not self.stateobj.exists():
            cmdutil.wrongtooltocontinue(self.repo, _(b'rebase'))

        data = self._read()
        self.repo.ui.debug(b'rebase status resumed\n')

        self.originalwd = data[b'originalwd']
        self.destmap = data[b'destmap']
        self.state = data[b'state']
        self.skipped = data[b'skipped']
        self.collapsef = data[b'collapse']
        self.keepf = data[b'keep']
        self.keepbranchesf = data[b'keepbranches']
        self.external = data[b'external']
        self.activebookmark = data[b'activebookmark']

    def _read(self):
        self.prepared = True
        repo = self.repo
        assert repo.filtername is None
        data = {
            b'keepbranches': None,
            b'collapse': None,
            b'activebookmark': None,
            b'external': nullrev,
            b'keep': None,
            b'originalwd': None,
        }
        legacydest = None
        state = {}
        destmap = {}

        if True:
            f = repo.vfs(b"rebasestate")
            for i, l in enumerate(f.read().splitlines()):
                if i == 0:
                    data[b'originalwd'] = repo[l].rev()
                elif i == 1:
                    # this line should be empty in newer version. but legacy
                    # clients may still use it
                    if l:
                        legacydest = repo[l].rev()
                elif i == 2:
                    data[b'external'] = repo[l].rev()
                elif i == 3:
                    data[b'collapse'] = bool(int(l))
                elif i == 4:
                    data[b'keep'] = bool(int(l))
                elif i == 5:
                    data[b'keepbranches'] = bool(int(l))
                elif i == 6 and not (len(l) == 81 and b':' in l):
                    # line 6 is a recent addition, so for backwards
                    # compatibility check that the line doesn't look like the
                    # oldrev:newrev lines
                    data[b'activebookmark'] = l
                else:
                    args = l.split(b':')
                    oldrev = repo[args[0]].rev()
                    newrev = args[1]
                    if newrev in legacystates:
                        continue
                    if len(args) > 2:
                        destrev = repo[args[2]].rev()
                    else:
                        destrev = legacydest
                    destmap[oldrev] = destrev
                    if newrev == revtodostr:
                        state[oldrev] = revtodo
                        # Legacy compat special case
                    else:
                        state[oldrev] = repo[newrev].rev()

        if data[b'keepbranches'] is None:
            raise error.Abort(_(b'.hg/rebasestate is incomplete'))

        data[b'destmap'] = destmap
        data[b'state'] = state
        skipped = set()
        # recompute the set of skipped revs
        if not data[b'collapse']:
            seen = set(destmap.values())
            for old, new in sorted(state.items()):
                if new != revtodo and new in seen:
                    skipped.add(old)
                seen.add(new)
        data[b'skipped'] = skipped
        repo.ui.debug(
            b'computed skipped revs: %s\n'
            % (b' '.join(b'%d' % r for r in sorted(skipped)) or b'')
        )

        return data

    def _handleskippingobsolete(self, obsoleterevs, destmap):
        """Compute structures necessary for skipping obsolete revisions

        obsoleterevs: iterable of all obsolete revisions in rebaseset
        destmap: {srcrev: destrev} destination revisions
        """
        self.obsoletenotrebased = {}
        if not self.ui.configbool(b'experimental', b'rebaseskipobsolete'):
            return
        obsoleteset = set(obsoleterevs)
        (
            self.obsoletenotrebased,
            self.obsoletewithoutsuccessorindestination,
            obsoleteextinctsuccessors,
        ) = _computeobsoletenotrebased(self.repo, obsoleteset, destmap)
        skippedset = set(self.obsoletenotrebased)
        skippedset.update(self.obsoletewithoutsuccessorindestination)
        skippedset.update(obsoleteextinctsuccessors)
        _checkobsrebase(self.repo, self.ui, obsoleteset, skippedset)

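    # Illustrative shapes (assumption about _computeobsoletenotrebased,
    # which is defined elsewhere in this module): with destmap = {3: 10}
    # and an obsolete rev 3, a successor of 3 already present in the
    # destination ends up in self.obsoletenotrebased ({3: succrev}), while
    # a successor outside the destination puts 3 into
    # self.obsoletewithoutsuccessorindestination instead.
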
    def _prepareabortorcontinue(self, isabort, backup=True, suppwarns=False):
        try:
            self.restorestatus()
            self.collapsemsg = restorecollapsemsg(self.repo, isabort)
        except error.RepoLookupError:
            if isabort:
                clearstatus(self.repo)
                clearcollapsemsg(self.repo)
                self.repo.ui.warn(
                    _(
                        b'rebase aborted (no revision is removed,'
                        b' only broken state is cleared)\n'
                    )
                )
                return 0
            else:
                msg = _(b'cannot continue inconsistent rebase')
                hint = _(b'use "hg rebase --abort" to clear broken state')
                raise error.Abort(msg, hint=hint)

        if isabort:
            backup = backup and self.backupf
            return self._abort(backup=backup, suppwarns=suppwarns)

    def _preparenewrebase(self, destmap):
        if not destmap:
            return _nothingtorebase()

        rebaseset = destmap.keys()
        if not self.keepf:
            try:
                rewriteutil.precheck(self.repo, rebaseset, action=b'rebase')
            except error.Abort as e:
                if e.hint is None:
                    e.hint = _(b'use --keep to keep original changesets')
                raise e

        result = buildstate(self.repo, destmap, self.collapsef)

        if not result:
            # Empty state built, nothing to rebase
            self.ui.status(_(b'nothing to rebase\n'))
            return _nothingtorebase()

        (self.originalwd, self.destmap, self.state) = result
        if self.collapsef:
            dests = set(self.destmap.values())
            if len(dests) != 1:
                raise error.Abort(
                    _(b'--collapse does not work with multiple destinations')
                )
            destrev = next(iter(dests))
            destancestors = self.repo.changelog.ancestors(
                [destrev], inclusive=True
            )
            self.external = externalparent(self.repo, self.state, destancestors)

        for destrev in sorted(set(destmap.values())):
            dest = self.repo[destrev]
            if dest.closesbranch() and not self.keepbranchesf:
                self.ui.status(_(b'reopening closed branch head %s\n') % dest)

        self.prepared = True

    def _assignworkingcopy(self):
        if self.inmemory:
            from mercurial.context import overlayworkingctx

            self.wctx = overlayworkingctx(self.repo)
            self.repo.ui.debug(b"rebasing in-memory\n")
        else:
            self.wctx = self.repo[None]
            self.repo.ui.debug(b"rebasing on disk\n")
        self.repo.ui.log(
            b"rebase",
            b"using in-memory rebase: %r\n",
            self.inmemory,
            rebase_imm_used=self.inmemory,
        )

    def _performrebase(self, tr):
        self._assignworkingcopy()
        repo, ui = self.repo, self.ui
        if self.keepbranchesf:
            # insert _savebranch at the start of extrafns so if
            # there's a user-provided extrafn it can clobber branch if
            # desired
            self.extrafns.insert(0, _savebranch)
            if self.collapsef:
                branches = set()
                for rev in self.state:
                    branches.add(repo[rev].branch())
                if len(branches) > 1:
                    raise error.Abort(
                        _(b'cannot collapse multiple named branches')
                    )

        # Calculate self.obsoletenotrebased
        obsrevs = _filterobsoleterevs(self.repo, self.state)
        self._handleskippingobsolete(obsrevs, self.destmap)

        # Keep track of the active bookmarks in order to reset them later
        self.activebookmark = self.activebookmark or repo._activebookmark
        if self.activebookmark:
            bookmarks.deactivate(repo)

        # Store the state before we begin so users can run 'hg rebase --abort'
        # if we fail before the transaction closes.
        self.storestatus()
        if tr:
            # When using single transaction, store state when transaction
            # commits.
            self.storestatus(tr)

        cands = [k for k, v in pycompat.iteritems(self.state) if v == revtodo]
        p = repo.ui.makeprogress(
            _(b"rebasing"), unit=_(b'changesets'), total=len(cands)
        )

        def progress(ctx):
            p.increment(item=(b"%d:%s" % (ctx.rev(), ctx)))

        allowdivergence = self.ui.configbool(
            b'experimental', b'evolution.allowdivergence'
        )
        for subset in sortsource(self.destmap):
            sortedrevs = self.repo.revs(b'sort(%ld, -topo)', subset)
            if not allowdivergence:
                sortedrevs -= self.repo.revs(
                    b'descendants(%ld) and not %ld',
                    self.obsoletewithoutsuccessorindestination,
                    self.obsoletewithoutsuccessorindestination,
                )
            for rev in sortedrevs:
                self._rebasenode(tr, rev, allowdivergence, progress)
        p.complete()
        ui.note(_(b'rebase merging completed\n'))

    def _concludenode(self, rev, p1, p2, editor, commitmsg=None):
        '''Commit the wd changes with parents p1 and p2.

        Reuse commit info from rev but also store useful information in extra.
        Return node of committed revision.'''
        repo = self.repo
        ctx = repo[rev]
        if commitmsg is None:
            commitmsg = ctx.description()
        date = self.date
        if date is None:
            date = ctx.date()
        extra = {b'rebase_source': ctx.hex()}
        for c in self.extrafns:
            c(ctx, extra)
        keepbranch = self.keepbranchesf and repo[p1].branch() != ctx.branch()
        destphase = max(ctx.phase(), phases.draft)
        overrides = {(b'phases', b'new-commit'): destphase}
        if keepbranch:
            overrides[(b'ui', b'allowemptycommit')] = True
        with repo.ui.configoverride(overrides, b'rebase'):
            if self.inmemory:
                newnode = commitmemorynode(
                    repo,
                    p1,
                    p2,
                    wctx=self.wctx,
                    extra=extra,
                    commitmsg=commitmsg,
                    editor=editor,
                    user=ctx.user(),
                    date=date,
                )
                mergemod.mergestate.clean(repo)
            else:
                newnode = commitnode(
                    repo,
                    p1,
                    p2,
                    extra=extra,
                    commitmsg=commitmsg,
                    editor=editor,
                    user=ctx.user(),
                    date=date,
                )

            if newnode is None:
                # If it ended up being a no-op commit, then the normal
                # merge state clean-up path doesn't happen, so do it
                # here. Fix issue5494
                mergemod.mergestate.clean(repo)
            return newnode

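    # Illustrative follow-up: the committed node records its origin in
    # extra, so the pre-rebase hash can be recovered later:
    #
    #     repo[newnode].extra().get(b'rebase_source')  # -> original ctx.hex()
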
    def _rebasenode(self, tr, rev, allowdivergence, progressfn):
        repo, ui, opts = self.repo, self.ui, self.opts
        dest = self.destmap[rev]
        ctx = repo[rev]
        desc = _ctxdesc(ctx)
        if self.state[rev] == rev:
            ui.status(_(b'already rebased %s\n') % desc)
        elif (
            not allowdivergence
            and rev in self.obsoletewithoutsuccessorindestination
        ):
            msg = (
                _(
                    b'note: not rebasing %s and its descendants as '
                    b'this would cause divergence\n'
                )
                % desc
            )
            repo.ui.status(msg)
            self.skipped.add(rev)
        elif rev in self.obsoletenotrebased:
            succ = self.obsoletenotrebased[rev]
            if succ is None:
                msg = _(b'note: not rebasing %s, it has no successor\n') % desc
            else:
                succdesc = _ctxdesc(repo[succ])
                msg = _(
                    b'note: not rebasing %s, already in destination as %s\n'
                ) % (desc, succdesc)
            repo.ui.status(msg)
            # Make clearrebased aware state[rev] is not a true successor
            self.skipped.add(rev)
            # Record rev as moved to its desired destination in self.state.
            # This helps bookmark and working parent movement.
            dest = max(
                adjustdest(repo, rev, self.destmap, self.state, self.skipped)
            )
            self.state[rev] = dest
        elif self.state[rev] == revtodo:
            ui.status(_(b'rebasing %s\n') % desc)
            progressfn(ctx)
            p1, p2, base = defineparents(
                repo,
                rev,
                self.destmap,
                self.state,
                self.skipped,
                self.obsoletenotrebased,
            )
            if not self.inmemory and len(repo[None].parents()) == 2:
                repo.ui.debug(b'resuming interrupted rebase\n')
            else:
                overrides = {(b'ui', b'forcemerge'): opts.get(b'tool', b'')}
                with ui.configoverride(overrides, b'rebase'):
                    stats = rebasenode(
                        repo,
                        rev,
                        p1,
                        base,
                        self.collapsef,
                        dest,
                        wctx=self.wctx,
                    )
                    if stats.unresolvedcount > 0:
                        if self.inmemory:
                            raise error.InMemoryMergeConflictsError()
                        else:
                            raise error.InterventionRequired(
                                _(
                                    b'unresolved conflicts (see hg '
                                    b'resolve, then hg rebase --continue)'
                                )
                            )
            if not self.collapsef:
                merging = p2 != nullrev
                editform = cmdutil.mergeeditform(merging, b'rebase')
                editor = cmdutil.getcommiteditor(
                    editform=editform, **pycompat.strkwargs(opts)
                )
                newnode = self._concludenode(rev, p1, p2, editor)
            else:
                # Skip commit if we are collapsing
                if self.inmemory:
                    self.wctx.setbase(repo[p1])
                else:
                    repo.setparents(repo[p1].node())
                newnode = None
            # Update the state
            if newnode is not None:
                self.state[rev] = repo[newnode].rev()
                ui.debug(b'rebased as %s\n' % short(newnode))
            else:
                if not self.collapsef:
                    ui.warn(
                        _(
                            b'note: not rebasing %s, its destination already '
                            b'has all its changes\n'
                        )
                        % desc
                    )
                    self.skipped.add(rev)
                self.state[rev] = p1
                ui.debug(b'next revision set to %d\n' % p1)
        else:
            ui.status(
                _(b'already rebased %s as %s\n') % (desc, repo[self.state[rev]])
            )
        if not tr:
            # When not using single transaction, store state after each
            # commit is completely done. On InterventionRequired, we thus
            # won't store the status. Instead, we'll hit the "len(parents) == 2"
            # case and realize that the commit was in progress.
            self.storestatus()

    def _finishrebase(self):
        repo, ui, opts = self.repo, self.ui, self.opts
        fm = ui.formatter(b'rebase', opts)
        fm.startitem()
        if self.collapsef:
            p1, p2, _base = defineparents(
                repo,
                min(self.state),
                self.destmap,
                self.state,
                self.skipped,
                self.obsoletenotrebased,
            )
            editopt = opts.get(b'edit')
            editform = b'rebase.collapse'
            if self.collapsemsg:
                commitmsg = self.collapsemsg
            else:
                commitmsg = b'Collapsed revision'
                for rebased in sorted(self.state):
                    if rebased not in self.skipped:
                        commitmsg += b'\n* %s' % repo[rebased].description()
                editopt = True
            editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
            revtoreuse = max(self.state)

            newnode = self._concludenode(
                revtoreuse, p1, self.external, editor, commitmsg=commitmsg
            )

            if newnode is not None:
                newrev = repo[newnode].rev()
                for oldrev in self.state:
                    self.state[oldrev] = newrev

        if b'qtip' in repo.tags():
            updatemq(repo, self.state, self.skipped, **pycompat.strkwargs(opts))

        # restore original working directory
        # (we do this before stripping)
        newwd = self.state.get(self.originalwd, self.originalwd)
        if newwd < 0:
            # original directory is a parent of rebase set root or ignored
            newwd = self.originalwd
        if newwd not in [c.rev() for c in repo[None].parents()]:
            ui.note(_(b"update back to initial working directory parent\n"))
            hg.updaterepo(repo, newwd, overwrite=False)

        collapsedas = None
        if self.collapsef and not self.keepf:
            collapsedas = newnode
        clearrebased(
            ui,
            repo,
            self.destmap,
            self.state,
            self.skipped,
            collapsedas,
            self.keepf,
            fm=fm,
            backup=self.backupf,
        )

        clearstatus(repo)
        clearcollapsemsg(repo)

        ui.note(_(b"rebase completed\n"))
        util.unlinkpath(repo.sjoin(b'undo'), ignoremissing=True)
        if self.skipped:
            skippedlen = len(self.skipped)
            ui.note(_(b"%d revisions have been skipped\n") % skippedlen)
        fm.end()

        if (
            self.activebookmark
            and self.activebookmark in repo._bookmarks
            and repo[b'.'].node() == repo._bookmarks[self.activebookmark]
        ):
            bookmarks.activate(repo, self.activebookmark)

    def _abort(self, backup=True, suppwarns=False):
        '''Restore the repository to its original state.'''

        repo = self.repo
        try:
            # If the first commits in the rebased set get skipped during the
            # rebase, their values within the state mapping will be the dest
            # rev id. The rebased list must not contain the dest rev
            # (issue4896)
            rebased = [
                s
                for r, s in self.state.items()
                if s >= 0 and s != r and s != self.destmap[r]
            ]
            immutable = [d for d in rebased if not repo[d].mutable()]
            cleanup = True
            if immutable:
                repo.ui.warn(
                    _(b"warning: can't clean up public changesets %s\n")
                    % b', '.join(bytes(repo[r]) for r in immutable),
                    hint=_(b"see 'hg help phases' for details"),
                )
                cleanup = False

            descendants = set()
            if rebased:
                descendants = set(repo.changelog.descendants(rebased))
            if descendants - set(rebased):
                repo.ui.warn(
                    _(
                        b"warning: new changesets detected on "
                        b"destination branch, can't strip\n"
                    )
                )
                cleanup = False

            if cleanup:
                if rebased:
                    strippoints = [
                        c.node() for c in repo.set(b'roots(%ld)', rebased)
                    ]

                updateifonnodes = set(rebased)
                updateifonnodes.update(self.destmap.values())
                updateifonnodes.add(self.originalwd)
                shouldupdate = repo[b'.'].rev() in updateifonnodes

                # Update away from the rebase if necessary
                if shouldupdate:
                    mergemod.update(
                        repo, self.originalwd, branchmerge=False, force=True
                    )

                # Strip from the first rebased revision
                if rebased:
                    repair.strip(repo.ui, repo, strippoints, backup=backup)

            if self.activebookmark and self.activebookmark in repo._bookmarks:
                bookmarks.activate(repo, self.activebookmark)

        finally:
            clearstatus(repo)
            clearcollapsemsg(repo)
            if not suppwarns:
                repo.ui.warn(_(b'rebase aborted\n'))
        return 0


@command(
    b'rebase',
    [
        (
            b's',
            b'source',
            b'',
            _(b'rebase the specified changeset and descendants'),
            _(b'REV'),
        ),
        (
            b'b',
            b'base',
            b'',
            _(b'rebase everything from branching point of specified changeset'),
            _(b'REV'),
        ),
        (b'r', b'rev', [], _(b'rebase these revisions'), _(b'REV')),
        (
            b'd',
            b'dest',
            b'',
            _(b'rebase onto the specified changeset'),
            _(b'REV'),
        ),
        (b'', b'collapse', False, _(b'collapse the rebased changesets')),
        (
            b'm',
            b'message',
            b'',
            _(b'use text as collapse commit message'),
            _(b'TEXT'),
        ),
        (b'e', b'edit', False, _(b'invoke editor on commit messages')),
        (
            b'l',
            b'logfile',
            b'',
            _(b'read collapse commit message from file'),
            _(b'FILE'),
        ),
        (b'k', b'keep', False, _(b'keep original changesets')),
        (b'', b'keepbranches', False, _(b'keep original branch names')),
        (b'D', b'detach', False, _(b'(DEPRECATED)')),
        (b'i', b'interactive', False, _(b'(DEPRECATED)')),
        (b't', b'tool', b'', _(b'specify merge tool')),
        (b'', b'stop', False, _(b'stop interrupted rebase')),
        (b'c', b'continue', False, _(b'continue an interrupted rebase')),
        (b'a', b'abort', False, _(b'abort an interrupted rebase')),
        (
            b'',
            b'auto-orphans',
            b'',
            _(
                b'automatically rebase orphan revisions '
                b'in the specified revset (EXPERIMENTAL)'
            ),
        ),
    ]
    + cmdutil.dryrunopts
    + cmdutil.formatteropts
    + cmdutil.confirmopts,
    _(b'[-s REV | -b REV] [-d REV] [OPTION]'),
    helpcategory=command.CATEGORY_CHANGE_MANAGEMENT,
)
def rebase(ui, repo, **opts):
    """move changeset (and descendants) to a different branch

    Rebase uses repeated merging to graft changesets from one part of
    history (the source) onto another (the destination). This can be
    useful for linearizing *local* changes relative to a master
    development tree.

    Published commits cannot be rebased (see :hg:`help phases`).
    To copy commits, see :hg:`help graft`.

    If you don't specify a destination changeset (``-d/--dest``), rebase
    will use the same logic as :hg:`merge` to pick a destination. If
    the current branch contains exactly one other head, the other head
    is merged with by default. Otherwise, an explicit revision with
    which to merge must be provided. (The destination changeset is not
    modified by rebasing, but new changesets are added as its
    descendants.)

    Here are the ways to select changesets:

      1. Explicitly select them using ``--rev``.

      2. Use ``--source`` to select a root changeset and include all of its
         descendants.

      3. Use ``--base`` to select a changeset; rebase will find ancestors
         and their descendants which are not also ancestors of the destination.

      4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
         rebase will use ``--base .`` as above.

    If ``--source`` or ``--rev`` is used, special names ``SRC`` and ``ALLSRC``
    can be used in ``--dest``. The destination is calculated per source
    revision, with ``SRC`` substituted by that single source revision and
    ``ALLSRC`` substituted by all source revisions.

    Rebase will destroy original changesets unless you use ``--keep``.
    It will also move your bookmarks (even if you do).

    Some changesets may be dropped if they do not contribute changes
    (e.g. merges from the destination branch).

    Unlike ``merge``, rebase will do nothing if you are at the branch tip of
    a named branch with two heads. You will need to explicitly specify source
    and/or destination.

    If you need to use a tool to automate merge/conflict decisions, you
    can specify one with ``--tool``, see :hg:`help merge-tools`.
    As a caveat: the tool will not be used to mediate when a file was
    deleted; there is no hook presently available for this.

    If a rebase is interrupted to manually resolve a conflict, it can be
    continued with --continue/-c, aborted with --abort/-a, or stopped with
    --stop.

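    For example, one possible continuation after fixing the conflicting
    files (an illustrative sequence, not the only valid one) is::

      hg resolve --mark --all
      hg rebase --continue
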
    .. container:: verbose

      Examples:

      - move "local changes" (current commit back to branching point)
        to the current branch tip after a pull::

          hg rebase

      - move a single changeset to the stable branch::

          hg rebase -r 5f493448 -d stable

      - splice a commit and all its descendants onto another part of history::

          hg rebase --source c0c3 --dest 4cf9

      - rebase everything on a branch marked by a bookmark onto the
        default branch::

          hg rebase --base myfeature --dest default

      - collapse a sequence of changes into a single commit::

          hg rebase --collapse -r 1520:1525 -d .

      - move a named branch while preserving its name::

          hg rebase -r "branch(featureX)" -d 1.3 --keepbranches

      - stabilize orphaned changesets so history looks linear::

          hg rebase -r 'orphan()-obsolete()'\
 -d 'first(max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::) +\
 max(::((roots(ALLSRC) & ::SRC)^)-obsolete()))'

      Configuration Options:

      You can make rebase require a destination if you set the following config
      option::

        [commands]
        rebase.requiredest = True

      By default, rebase will close the transaction after each commit. For
      performance purposes, you can configure rebase to use a single transaction
      across the entire rebase. WARNING: This setting introduces a significant
      risk of losing the work you've done in a rebase if the rebase aborts
      unexpectedly::

        [rebase]
        singletransaction = True

      By default, rebase writes to the working copy, but you can configure it to
      run in-memory for better performance. When the rebase is not moving the
      parent(s) of the working copy (AKA the "currently checked out changesets"),
      this may also allow it to run even if the working copy is dirty::

        [rebase]
        experimental.inmemory = True

      Return Values:

      Returns 0 on success, 1 if nothing to rebase or there are
      unresolved conflicts.

    """
    opts = pycompat.byteskwargs(opts)
    inmemory = ui.configbool(b'rebase', b'experimental.inmemory')
    action = cmdutil.check_at_most_one_arg(opts, b'abort', b'stop', b'continue')
    if action:
        cmdutil.check_incompatible_arguments(
            opts, action, b'confirm', b'dry_run'
        )
        cmdutil.check_incompatible_arguments(
            opts, action, b'rev', b'source', b'base', b'dest'
        )
    cmdutil.check_at_most_one_arg(opts, b'confirm', b'dry_run')
    cmdutil.check_at_most_one_arg(opts, b'rev', b'source', b'base')

    if action or repo.currenttransaction() is not None:
        # in-memory rebase is not compatible with resuming rebases.
        # (Or if it is run within a transaction, since the restart logic can
        # fail the entire transaction.)
        inmemory = False

    if opts.get(b'auto_orphans'):
        disallowed_opts = set(opts) - {b'auto_orphans'}
        cmdutil.check_incompatible_arguments(
            opts, b'auto_orphans', *disallowed_opts
        )

        userrevs = list(repo.revs(opts.get(b'auto_orphans')))
        opts[b'rev'] = [revsetlang.formatspec(b'%ld and orphan()', userrevs)]
        opts[b'dest'] = b'_destautoorphanrebase(SRC)'
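        # Editorial illustration (not in the original source): with the two
        # assignments above, an invocation like
        # `hg rebase --auto-orphans 'all()'` is rewritten into a rebase of
        # every orphan revision, with each destination computed per source
        # by the `_destautoorphanrebase(SRC)` revset.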

    if opts.get(b'dry_run') or opts.get(b'confirm'):
        return _dryrunrebase(ui, repo, action, opts)
    elif action == b'stop':
        rbsrt = rebaseruntime(repo, ui)
        with repo.wlock(), repo.lock():
            rbsrt.restorestatus()
            if rbsrt.collapsef:
                raise error.Abort(_(b"cannot stop in --collapse session"))
            allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
            if not (rbsrt.keepf or allowunstable):
                raise error.Abort(
                    _(
                        b"cannot remove original changesets with"
                        b" unrebased descendants"
                    ),
                    hint=_(
                        b'either enable obsmarkers to allow unstable '
                        b'revisions or use --keep to keep original '
                        b'changesets'
                    ),
                )
            # update to the current working revision
            # to clear interrupted merge
            hg.updaterepo(repo, rbsrt.originalwd, overwrite=True)
            rbsrt._finishrebase()
            return 0
    elif inmemory:
        try:
            # in-memory merge doesn't support conflicts, so if we hit any, abort
            # and re-run as an on-disk merge.
            overrides = {(b'rebase', b'singletransaction'): True}
            with ui.configoverride(overrides, b'rebase'):
                return _dorebase(ui, repo, action, opts, inmemory=inmemory)
        except error.InMemoryMergeConflictsError:
            ui.warn(
                _(
                    b'hit merge conflicts; re-running rebase without in-memory'
                    b' merge\n'
                )
            )
            # TODO: Make in-memory merge not use the on-disk merge state, so
            # we don't have to clean it here
            mergemod.mergestate.clean(repo)
            clearstatus(repo)
            clearcollapsemsg(repo)
            return _dorebase(ui, repo, action, opts, inmemory=False)
    else:
        return _dorebase(ui, repo, action, opts)


def _dryrunrebase(ui, repo, action, opts):
    rbsrt = rebaseruntime(repo, ui, inmemory=True, opts=opts)
    confirm = opts.get(b'confirm')
    if confirm:
        ui.status(_(b'starting in-memory rebase\n'))
    else:
        ui.status(
            _(b'starting dry-run rebase; repository will not be changed\n')
        )
    with repo.wlock(), repo.lock():
        needsabort = True
        try:
            overrides = {(b'rebase', b'singletransaction'): True}
            with ui.configoverride(overrides, b'rebase'):
                _origrebase(
                    ui,
                    repo,
                    action,
                    opts,
                    rbsrt,
                    inmemory=True,
                    leaveunfinished=True,
                )
        except error.InMemoryMergeConflictsError:
            ui.status(_(b'hit a merge conflict\n'))
            return 1
        except error.Abort:
            needsabort = False
            raise
        else:
            if confirm:
                ui.status(_(b'rebase completed successfully\n'))
                if not ui.promptchoice(_(b'apply changes (yn)?$$ &Yes $$ &No')):
                    # finish unfinished rebase
                    rbsrt._finishrebase()
                else:
                    rbsrt._prepareabortorcontinue(
                        isabort=True, backup=False, suppwarns=True
                    )
                needsabort = False
            else:
                ui.status(
                    _(
                        b'dry-run rebase completed successfully; run without'
                        b' -n/--dry-run to perform this rebase\n'
                    )
                )
            return 0
        finally:
            if needsabort:
                # no need to store backup in case of dryrun
                rbsrt._prepareabortorcontinue(
                    isabort=True, backup=False, suppwarns=True
                )


def _dorebase(ui, repo, action, opts, inmemory=False):
    rbsrt = rebaseruntime(repo, ui, inmemory, opts)
    return _origrebase(ui, repo, action, opts, rbsrt, inmemory=inmemory)


def _origrebase(
    ui, repo, action, opts, rbsrt, inmemory=False, leaveunfinished=False
):
    assert action != b'stop'
    with repo.wlock(), repo.lock():
        if opts.get(b'interactive'):
            try:
                if extensions.find(b'histedit'):
                    enablehistedit = b''
            except KeyError:
                enablehistedit = b" --config extensions.histedit="
            help = b"hg%s help -e histedit" % enablehistedit
            msg = (
                _(
                    b"interactive history editing is supported by the "
                    b"'histedit' extension (see \"%s\")"
                )
                % help
            )
            raise error.Abort(msg)

        if rbsrt.collapsemsg and not rbsrt.collapsef:
            raise error.Abort(_(b'message can only be specified with collapse'))

        if action:
            if rbsrt.collapsef:
                raise error.Abort(
                    _(b'cannot use collapse with continue or abort')
                )
            if action == b'abort' and opts.get(b'tool', False):
                ui.warn(_(b'tool option will be ignored\n'))
            if action == b'continue':
                ms = mergemod.mergestate.read(repo)
                mergeutil.checkunresolved(ms)

            retcode = rbsrt._prepareabortorcontinue(
                isabort=(action == b'abort')
            )
            if retcode is not None:
                return retcode
        else:
            # search default destination in this space
            # used in the 'hg pull --rebase' case, see issue 5214.
            destspace = opts.get(b'_destspace')
            destmap = _definedestmap(
                ui,
                repo,
                inmemory,
                opts.get(b'dest', None),
                opts.get(b'source', None),
                opts.get(b'base', None),
                opts.get(b'rev', []),
                destspace=destspace,
            )
            retcode = rbsrt._preparenewrebase(destmap)
            if retcode is not None:
                return retcode
            storecollapsemsg(repo, rbsrt.collapsemsg)

        tr = None

        singletr = ui.configbool(b'rebase', b'singletransaction')
        if singletr:
            tr = repo.transaction(b'rebase')

        # If `rebase.singletransaction` is enabled, wrap the entire operation in
        # one transaction here. Otherwise, transactions are obtained when
        # committing each node, which is slower but allows partial success.
        with util.acceptintervention(tr):
            # Same logic for the dirstate guard, except we don't create one when
            # rebasing in-memory (it's not needed).
            dsguard = None
            if singletr and not inmemory:
                dsguard = dirstateguard.dirstateguard(repo, b'rebase')
            with util.acceptintervention(dsguard):
                rbsrt._performrebase(tr)
                if not leaveunfinished:
                    rbsrt._finishrebase()


def _definedestmap(
    ui,
    repo,
    inmemory,
    destf=None,
    srcf=None,
    basef=None,
    revf=None,
    destspace=None,
):
    """use revisions argument to define destmap {srcrev: destrev}"""
    if revf is None:
        revf = []

    # destspace is here to work around issues with `hg pull --rebase` see
    # issue5214 for details

    cmdutil.checkunfinished(repo)
    if not inmemory:
        cmdutil.bailifchanged(repo)

    if ui.configbool(b'commands', b'rebase.requiredest') and not destf:
        raise error.Abort(
            _(b'you must specify a destination'),
            hint=_(b'use: hg rebase -d REV'),
        )

    dest = None

    if revf:
        rebaseset = scmutil.revrange(repo, revf)
        if not rebaseset:
            ui.status(_(b'empty "rev" revision set - nothing to rebase\n'))
            return None
    elif srcf:
        src = scmutil.revrange(repo, [srcf])
        if not src:
            ui.status(_(b'empty "source" revision set - nothing to rebase\n'))
            return None
        rebaseset = repo.revs(b'(%ld)::', src)
        assert rebaseset
    else:
        base = scmutil.revrange(repo, [basef or b'.'])
        if not base:
            ui.status(
                _(b'empty "base" revision set - ' b"can't compute rebase set\n")
            )
            return None
        if destf:
            # --base does not support multiple destinations
            dest = scmutil.revsingle(repo, destf)
        else:
            dest = repo[_destrebase(repo, base, destspace=destspace)]
            destf = bytes(dest)

        roots = []  # selected children of branching points
        bpbase = {}  # {branchingpoint: [origbase]}
        for b in base:  # group bases by branching points
            bp = repo.revs(b'ancestor(%d, %d)', b, dest.rev()).first()
            bpbase[bp] = bpbase.get(bp, []) + [b]
        if None in bpbase:
            # emulate the old behavior, showing "nothing to rebase" (a better
            # behavior may be to abort with a "cannot find branching point"
            # error)
            bpbase.clear()
        for bp, bs in pycompat.iteritems(bpbase):  # calculate roots
            roots += list(repo.revs(b'children(%d) & ancestors(%ld)', bp, bs))

        rebaseset = repo.revs(b'%ld::', roots)

        if not rebaseset:
            # transform to list because smartsets are not comparable to
            # lists. This should be improved to honor laziness of
            # smartset.
            if list(base) == [dest.rev()]:
                if basef:
                    ui.status(
                        _(
                            b'nothing to rebase - %s is both "base"'
                            b' and destination\n'
                        )
                        % dest
                    )
                else:
                    ui.status(
                        _(
                            b'nothing to rebase - working directory '
                            b'parent is also destination\n'
                        )
                    )
            elif not repo.revs(b'%ld - ::%d', base, dest.rev()):
                if basef:
                    ui.status(
                        _(
                            b'nothing to rebase - "base" %s is '
                            b'already an ancestor of destination '
                            b'%s\n'
                        )
                        % (b'+'.join(bytes(repo[r]) for r in base), dest)
                    )
                else:
                    ui.status(
                        _(
                            b'nothing to rebase - working '
                            b'directory parent is already an '
                            b'ancestor of destination %s\n'
                        )
                        % dest
                    )
            else:  # can it happen?
                ui.status(
                    _(b'nothing to rebase from %s to %s\n')
                    % (b'+'.join(bytes(repo[r]) for r in base), dest)
                )
            return None

    rebasingwcp = repo[b'.'].rev() in rebaseset
    ui.log(
        b"rebase",
        b"rebasing working copy parent: %r\n",
        rebasingwcp,
        rebase_rebasing_wcp=rebasingwcp,
    )
    if inmemory and rebasingwcp:
        # Check these since we did not before.
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)

    if not destf:
        dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
        destf = bytes(dest)

    allsrc = revsetlang.formatspec(b'%ld', rebaseset)
    alias = {b'ALLSRC': allsrc}

    if dest is None:
        try:
            # fast path: try to resolve dest without SRC alias
            dest = scmutil.revsingle(repo, destf, localalias=alias)
        except error.RepoLookupError:
            # multi-dest path: resolve dest for each SRC separately
            destmap = {}
            for r in rebaseset:
                alias[b'SRC'] = revsetlang.formatspec(b'%d', r)
                # use repo.anyrevs instead of scmutil.revsingle because we
                # don't want to abort if destset is empty.
                destset = repo.anyrevs([destf], user=True, localalias=alias)
                size = len(destset)
                if size == 1:
                    destmap[r] = destset.first()
                elif size == 0:
                    ui.note(_(b'skipping %s - empty destination\n') % repo[r])
                else:
                    raise error.Abort(
                        _(b'rebase destination for %s is not unique') % repo[r]
                    )

    if dest is not None:
        # single-dest case: assign dest to each rev in rebaseset
        destrev = dest.rev()
        destmap = {r: destrev for r in rebaseset}  # {srcrev: destrev}

    if not destmap:
        ui.status(_(b'nothing to rebase - empty destination\n'))
        return None

    return destmap


def externalparent(repo, state, destancestors):
    """Return the revision that should be used as the second parent
    when the revisions in state are collapsed on top of destancestors.
    Abort if there is more than one parent.
    """
    parents = set()
    source = min(state)
    for rev in state:
        if rev == source:
            continue
        for p in repo[rev].parents():
            if p.rev() not in state and p.rev() not in destancestors:
                parents.add(p.rev())
    if not parents:
        return nullrev
    if len(parents) == 1:
        return parents.pop()
    raise error.Abort(
        _(
            b'unable to collapse on top of %d, there is more '
            b'than one external parent: %s'
        )
        % (max(destancestors), b', '.join(b"%d" % p for p in sorted(parents)))
    )


def commitmemorynode(repo, p1, p2, wctx, editor, extra, user, date, commitmsg):
    '''Commit the memory changes with parents p1 and p2.
    Return node of committed revision.'''
    # Replicates the empty check in ``repo.commit``.
    if wctx.isempty() and not repo.ui.configbool(b'ui', b'allowemptycommit'):
        return None

    # By convention, ``extra['branch']`` (set by extrafn) clobbers
    # ``branch`` (used when passing ``--keepbranches``).
    branch = None
    if b'branch' in extra:
        branch = extra[b'branch']

    wctx.setparents(repo[p1].node(), repo[p2].node())
    memctx = wctx.tomemctx(
        commitmsg,
        date=date,
        extra=extra,
        user=user,
        branch=branch,
        editor=editor,
    )
    commitres = repo.commitctx(memctx)
    wctx.clean()  # Might be reused
    return commitres


def commitnode(repo, p1, p2, editor, extra, user, date, commitmsg):
    '''Commit the wd changes with parents p1 and p2.
    Return node of committed revision.'''
    dsguard = util.nullcontextmanager()
    if not repo.ui.configbool(b'rebase', b'singletransaction'):
        dsguard = dirstateguard.dirstateguard(repo, b'rebase')
    with dsguard:
        repo.setparents(repo[p1].node(), repo[p2].node())

        # Commit might fail if unresolved files exist
        newnode = repo.commit(
            text=commitmsg, user=user, date=date, extra=extra, editor=editor
        )

    repo.dirstate.setbranch(repo[newnode].branch())
    return newnode


def rebasenode(repo, rev, p1, base, collapse, dest, wctx):
    """Rebase a single revision rev on top of p1 using base as merge ancestor"""
    # Merge phase
    # Update to destination and merge it with local
    if wctx.isinmemory():
        wctx.setbase(repo[p1])
    else:
        if repo[b'.'].rev() != p1:
            repo.ui.debug(b" update to %d:%s\n" % (p1, repo[p1]))
            mergemod.update(repo, p1, branchmerge=False, force=True)
        else:
            repo.ui.debug(b" already in destination\n")
        # This is, alas, necessary to invalidate workingctx's manifest cache,
        # as well as other data we litter on it in other places.
        wctx = repo[None]
        repo.dirstate.write(repo.currenttransaction())
    ctx = repo[rev]
    repo.ui.debug(b" merge against %d:%s\n" % (rev, ctx))
    if base is not None:
        repo.ui.debug(b" detach base %d:%s\n" % (base, repo[base]))
    # When collapsing in-place, the parent is the common ancestor, we
    # have to allow merging with it.
    stats = mergemod.update(
        repo,
        rev,
        branchmerge=True,
        force=True,
        ancestor=base,
        mergeancestor=collapse,
        labels=[b'dest', b'source'],
        wc=wctx,
    )
-    destctx = repo[dest]
     if collapse:
-        copies.graftcopies(repo, wctx, ctx, destctx)
+        copies.graftcopies(wctx, ctx, repo[dest])
     else:
         # If we're not using --collapse, we need to
         # duplicate copies between the revision we're
-        # rebasing and its first parent, but *not*
-        # duplicate any copies that have already been
-        # performed in the destination.
-        copies.graftcopies(repo, wctx, ctx, ctx.p1(), skip=destctx)
+        # rebasing and its first parent.
+        copies.graftcopies(wctx, ctx, ctx.p1())
     return stats


def adjustdest(repo, rev, destmap, state, skipped):
    r"""adjust rebase destination given the current rebase state

    rev is what is being rebased. Return a list of two revs, which are the
    adjusted destinations for rev's p1 and p2, respectively. If a parent is
    nullrev, return dest without adjustment for it.

    For example, when rebasing B+E to F and C to G, rebase will first move B
    to B1, and E's destination will be adjusted from F to B1.

        B1 <- written during rebasing B
        |
        F <- original destination of B, E
        |
        | E <- rev, which is being rebased
        | |
        | D <- prev, one parent of rev being checked
        | |
        | x <- skipped, ex. no successor or successor in (::dest)
        | |
        | C <- rebased as C', different destination
        | |
        | B <- rebased as B1     C'
        |/                       |
        A                        G <- destination of C, different

    Another example involving a merge changeset: for rebase -r C+G+H -d K,
    rebase will first move C to C1 and G to G1, and when it's checking H,
    the adjusted destinations will be [C1, G1].

          H       C1 G1
         /|       | /
        F G       |/
      K | |  ->   K
      | C D       |
      | |/        |
      | B         | ...
      |/          |/
      A           A

    Besides, adjust dest according to existing rebase information. For example,

      B C D    B needs to be rebased on top of C, C needs to be rebased on top
       \|/     of D. We will rebase C first.
        A

          C'   After rebasing C, when considering B's destination, use C'
          |    instead of the original C.
      B   D
       \ /
        A
    """
    # pick already rebased revs with same dest from state as interesting source
    dest = destmap[rev]
    source = [
        s
        for s, d in state.items()
        if d > 0 and destmap[s] == dest and s not in skipped
    ]

    result = []
    for prev in repo.changelog.parentrevs(rev):
        adjusted = dest
        if prev != nullrev:
            candidate = repo.revs(b'max(%ld and (::%d))', source, prev).first()
            if candidate is not None:
                adjusted = state[candidate]
        if adjusted == dest and dest in state:
            adjusted = state[dest]
            if adjusted == revtodo:
                # sortsource should produce an order that makes this impossible
                raise error.ProgrammingError(
                    b'rev %d should be rebased already at this time' % dest
                )
        result.append(adjusted)
    return result


def _checkobsrebase(repo, ui, rebaseobsrevs, rebaseobsskipped):
    """
    Abort if rebase will create divergence or rebase is noop because of markers

    `rebaseobsrevs`: set of obsolete revisions in source
    `rebaseobsskipped`: set of revisions from source skipped because they have
    successors in destination or no non-obsolete successor.
    """
    # Obsolete node with successors not in dest leads to divergence
    divergenceok = ui.configbool(b'experimental', b'evolution.allowdivergence')
    divergencebasecandidates = rebaseobsrevs - rebaseobsskipped

    if divergencebasecandidates and not divergenceok:
        divhashes = (bytes(repo[r]) for r in divergencebasecandidates)
        msg = _(b"this rebase will cause divergences from: %s")
        h = _(
            b"to force the rebase please set "
            b"experimental.evolution.allowdivergence=True"
        )
        raise error.Abort(msg % (b",".join(divhashes),), hint=h)


def successorrevs(unfi, rev):
    """yield revision numbers for successors of rev"""
    assert unfi.filtername is None
    get_rev = unfi.changelog.index.get_rev
    for s in obsutil.allsuccessors(unfi.obsstore, [unfi[rev].node()]):
        r = get_rev(s)
        if r is not None:
            yield r


def defineparents(repo, rev, destmap, state, skipped, obsskipped):
    """Return new parents and optionally a merge base for rev being rebased

    The destination specified by "dest" cannot always be used directly because
    a previous rebase result could affect the destination. For example,

          D E    rebase -r C+D+E -d B
          |/     C will be rebased to C'
        B C      D's new destination will be C' instead of B
        |/       E's new destination will be C' instead of B
        A

    The new parents of a merge are slightly more complicated. See the comment
    block below.
    """
    # use unfiltered changelog since successorrevs may return filtered nodes
    assert repo.filtername is None
    cl = repo.changelog
    isancestor = cl.isancestorrev

    dest = destmap[rev]
    oldps = repo.changelog.parentrevs(rev)  # old parents
    newps = [nullrev, nullrev]  # new parents
    dests = adjustdest(repo, rev, destmap, state, skipped)
    bases = list(oldps)  # merge base candidates, initially just old parents

    if all(r == nullrev for r in oldps[1:]):
        # For non-merge changeset, just move p to adjusted dest as requested.
        newps[0] = dests[0]
    else:
        # For merge changeset, if we move p to dests[i] unconditionally, both
        # parents may change and the end result looks like "the merge loses a
        # parent", which is a surprise. This is a limitation because "--dest"
        # only accepts one dest per src.
        #
        # Therefore, only move p with reasonable conditions (in this order):
        #   1. use dest, if dest is a descendant of (p or one of p's successors)
        #   2. use p's rebased result, if p is rebased (state[p] > 0)
        #
        # Comparing with adjustdest, the logic here does some additional work:
        #   1. decide which parents will not be moved towards dest
        #   2. if the above decision is "no", should a parent still be moved
        #      because it was rebased?
        #
        # For example:
        #
        #     C    # "rebase -r C -d D" is an error since none of the parents
        #    /|    # can be moved. "rebase -r B+C -d D" will move C's parent
        #   A B D  # B (using rule "2."), since B will be rebased.
        #
        # The loop tries not to rely on the fact that a Mercurial node has
        # at most 2 parents.
        for i, p in enumerate(oldps):
            np = p  # new parent
            if any(isancestor(x, dests[i]) for x in successorrevs(repo, p)):
                np = dests[i]
            elif p in state and state[p] > 0:
                np = state[p]

            # "bases" only records "special" merge bases that cannot be
            # calculated from changelog DAG (i.e. isancestor(p, np) is False).
            # For example:
            #
            #   B'  # rebase -s B -d D, when B was rebased to B'. dest for C
            #   | C # is B', but merge base for C is B, instead of
            #   D | # changelog.ancestor(C, B') == A. If changelog DAG and
            #   | B # "state" edges are merged (so there will be an edge from
            #   |/  # B to B'), the merge base is still ancestor(C, B') in
            #   A   # the merged graph.
            #
            # Also see https://bz.mercurial-scm.org/show_bug.cgi?id=1950#c8
            # which uses "virtual null merge" to explain this situation.
            if isancestor(p, np):
                bases[i] = nullrev

            # If one parent becomes an ancestor of the other, drop the ancestor
            for j, x in enumerate(newps[:i]):
                if x == nullrev:
                    continue
                if isancestor(np, x):  # CASE-1
                    np = nullrev
                elif isancestor(x, np):  # CASE-2
                    newps[j] = np
                    np = nullrev
                    # New parents forming an ancestor relationship does not
                    # mean the old parents have a similar relationship. Do not
                    # set bases[x] to nullrev.
                    bases[j], bases[i] = bases[i], bases[j]

            newps[i] = np

1714 # "rebasenode" updates to new p1, and the old p1 will be used as merge
1711 # "rebasenode" updates to new p1, and the old p1 will be used as merge
1715 # base. If only p2 changes, merging using unchanged p1 as merge base is
1712 # base. If only p2 changes, merging using unchanged p1 as merge base is
1716 # suboptimal. Therefore swap parents to make the merge sane.
1713 # suboptimal. Therefore swap parents to make the merge sane.
1717 if newps[1] != nullrev and oldps[0] == newps[0]:
1714 if newps[1] != nullrev and oldps[0] == newps[0]:
1718 assert len(newps) == 2 and len(oldps) == 2
1715 assert len(newps) == 2 and len(oldps) == 2
1719 newps.reverse()
1716 newps.reverse()
1720 bases.reverse()
1717 bases.reverse()
1721
1718
1722 # No parent change might be an error because we fail to make rev a
1719 # No parent change might be an error because we fail to make rev a
1723 # descendent of requested dest. This can happen, for example:
1720 # descendent of requested dest. This can happen, for example:
1724 #
1721 #
1725 # C # rebase -r C -d D
1722 # C # rebase -r C -d D
1726 # /| # None of A and B will be changed to D and rebase fails.
1723 # /| # None of A and B will be changed to D and rebase fails.
1727 # A B D
1724 # A B D
1728 if set(newps) == set(oldps) and dest not in newps:
1725 if set(newps) == set(oldps) and dest not in newps:
1729 raise error.Abort(
1726 raise error.Abort(
1730 _(
1727 _(
1731 b'cannot rebase %d:%s without '
1728 b'cannot rebase %d:%s without '
1732 b'moving at least one of its parents'
1729 b'moving at least one of its parents'
1733 )
1730 )
1734 % (rev, repo[rev])
1731 % (rev, repo[rev])
1735 )
1732 )
1736
1733
1737 # Source should not be ancestor of dest. The check here guarantees it's
1734 # Source should not be ancestor of dest. The check here guarantees it's
1738 # impossible. With multi-dest, the initial check does not cover complex
1735 # impossible. With multi-dest, the initial check does not cover complex
1739 # cases since we don't have abstractions to dry-run rebase cheaply.
1736 # cases since we don't have abstractions to dry-run rebase cheaply.
1740 if any(p != nullrev and isancestor(rev, p) for p in newps):
1737 if any(p != nullrev and isancestor(rev, p) for p in newps):
1741 raise error.Abort(_(b'source is ancestor of destination'))
1738 raise error.Abort(_(b'source is ancestor of destination'))
1742
1739
1743 # "rebasenode" updates to new p1, use the corresponding merge base.
1740 # "rebasenode" updates to new p1, use the corresponding merge base.
1744 if bases[0] != nullrev:
1741 if bases[0] != nullrev:
1745 base = bases[0]
1742 base = bases[0]
1746 else:
1743 else:
1747 base = None
1744 base = None
1748
1745
1749 # Check if the merge will contain unwanted changes. That may happen if
1746 # Check if the merge will contain unwanted changes. That may happen if
1750 # there are multiple special (non-changelog ancestor) merge bases, which
1747 # there are multiple special (non-changelog ancestor) merge bases, which
1751 # cannot be handled well by the 3-way merge algorithm. For example:
1748 # cannot be handled well by the 3-way merge algorithm. For example:
1752 #
1749 #
1753 # F
1750 # F
1754 # /|
1751 # /|
1755 # D E # "rebase -r D+E+F -d Z", when rebasing F, if "D" was chosen
1752 # D E # "rebase -r D+E+F -d Z", when rebasing F, if "D" was chosen
1756 # | | # as merge base, the difference between D and F will include
1753 # | | # as merge base, the difference between D and F will include
1757 # B C # C, so the rebased F will contain C surprisingly. If "E" was
1754 # B C # C, so the rebased F will contain C surprisingly. If "E" was
1758 # |/ # chosen, the rebased F will contain B.
1755 # |/ # chosen, the rebased F will contain B.
1759 # A Z
1756 # A Z
1760 #
1757 #
1761 # But our merge base candidates (D and E in above case) could still be
1758 # But our merge base candidates (D and E in above case) could still be
1762 # better than the default (ancestor(F, Z) == null). Therefore still
1759 # better than the default (ancestor(F, Z) == null). Therefore still
1763 # pick one (so choose p1 above).
1760 # pick one (so choose p1 above).
1764 if sum(1 for b in set(bases) if b != nullrev) > 1:
1761 if sum(1 for b in set(bases) if b != nullrev) > 1:
1765 unwanted = [None, None] # unwanted[i]: unwanted revs if choose bases[i]
1762 unwanted = [None, None] # unwanted[i]: unwanted revs if choose bases[i]
1766 for i, base in enumerate(bases):
1763 for i, base in enumerate(bases):
1767 if base == nullrev:
1764 if base == nullrev:
1768 continue
1765 continue
1769 # Revisions in the side (not chosen as merge base) branch that
1766 # Revisions in the side (not chosen as merge base) branch that
1770 # might contain "surprising" contents
1767 # might contain "surprising" contents
1771 siderevs = list(
1768 siderevs = list(
1772 repo.revs(b'((%ld-%d) %% (%d+%d))', bases, base, base, dest)
1769 repo.revs(b'((%ld-%d) %% (%d+%d))', bases, base, base, dest)
1773 )
1770 )
1774
1771
1775 # If those revisions are covered by rebaseset, the result is good.
1772 # If those revisions are covered by rebaseset, the result is good.
1776 # A merge in rebaseset would be considered to cover its ancestors.
1773 # A merge in rebaseset would be considered to cover its ancestors.
1777 if siderevs:
1774 if siderevs:
1778 rebaseset = [
1775 rebaseset = [
1779 r for r, d in state.items() if d > 0 and r not in obsskipped
1776 r for r, d in state.items() if d > 0 and r not in obsskipped
1780 ]
1777 ]
1781 merges = [
1778 merges = [
1782 r for r in rebaseset if cl.parentrevs(r)[1] != nullrev
1779 r for r in rebaseset if cl.parentrevs(r)[1] != nullrev
1783 ]
1780 ]
1784 unwanted[i] = list(
1781 unwanted[i] = list(
1785 repo.revs(
1782 repo.revs(
1786 b'%ld - (::%ld) - %ld', siderevs, merges, rebaseset
1783 b'%ld - (::%ld) - %ld', siderevs, merges, rebaseset
1787 )
1784 )
1788 )
1785 )
1789
1786
1790 # Choose a merge base that has a minimal number of unwanted revs.
1787 # Choose a merge base that has a minimal number of unwanted revs.
1791 l, i = min(
1788 l, i = min(
1792 (len(revs), i)
1789 (len(revs), i)
1793 for i, revs in enumerate(unwanted)
1790 for i, revs in enumerate(unwanted)
1794 if revs is not None
1791 if revs is not None
1795 )
1792 )
1796 base = bases[i]
1793 base = bases[i]
1797
1794
1798 # newps[0] should match merge base if possible. Currently, if newps[i]
1795 # newps[0] should match merge base if possible. Currently, if newps[i]
1799 # is nullrev, the only case is newps[i] and newps[j] (j < i), one is
1796 # is nullrev, the only case is newps[i] and newps[j] (j < i), one is
1800 # the other's ancestor. In that case, it's fine to not swap newps here.
1797 # the other's ancestor. In that case, it's fine to not swap newps here.
1801 # (see CASE-1 and CASE-2 above)
1798 # (see CASE-1 and CASE-2 above)
1802 if i != 0 and newps[i] != nullrev:
1799 if i != 0 and newps[i] != nullrev:
1803 newps[0], newps[i] = newps[i], newps[0]
1800 newps[0], newps[i] = newps[i], newps[0]
1804
1801
1805 # The merge will include unwanted revisions. Abort now. Revisit this if
1802 # The merge will include unwanted revisions. Abort now. Revisit this if
1806 # we have a more advanced merge algorithm that handles multiple bases.
1803 # we have a more advanced merge algorithm that handles multiple bases.
1807 if l > 0:
1804 if l > 0:
1808 unwanteddesc = _(b' or ').join(
1805 unwanteddesc = _(b' or ').join(
1809 (
1806 (
1810 b', '.join(b'%d:%s' % (r, repo[r]) for r in revs)
1807 b', '.join(b'%d:%s' % (r, repo[r]) for r in revs)
1811 for revs in unwanted
1808 for revs in unwanted
1812 if revs is not None
1809 if revs is not None
1813 )
1810 )
1814 )
1811 )
1815 raise error.Abort(
1812 raise error.Abort(
1816 _(b'rebasing %d:%s will include unwanted changes from %s')
1813 _(b'rebasing %d:%s will include unwanted changes from %s')
1817 % (rev, repo[rev], unwanteddesc)
1814 % (rev, repo[rev], unwanteddesc)
1818 )
1815 )
1819
1816
1820 repo.ui.debug(b" future parents are %d and %d\n" % tuple(newps))
1817 repo.ui.debug(b" future parents are %d and %d\n" % tuple(newps))
1821
1818
1822 return newps[0], newps[1], base
1819 return newps[0], newps[1], base
1823
1820
1824
1821
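# A minimal standalone sketch (hypothetical rev lists, not part of the
# original module) of the base-selection idiom above: min() over
# (length, index) pairs picks the candidate merge base that leaks the
# fewest unwanted revisions, preferring the lower index on ties.
def _sketch_pick_minimal_unwanted_base():
    unwanted = [[10, 11], [12], None]  # None: candidate base was nullrev
    l, i = min(
        (len(revs), i) for i, revs in enumerate(unwanted) if revs is not None
    )
    assert (l, i) == (1, 1)  # bases[1] leaks a single unwanted revision

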
def isagitpatch(repo, patchname):
    """Return true if the given patch is in git format"""
    mqpatch = os.path.join(repo.mq.path, patchname)
    for line in patch.linereader(open(mqpatch, b'rb')):
        if line.startswith(b'diff --git'):
            return True
    return False


def updatemq(repo, state, skipped, **opts):
    """Update rebased mq patches - finalize and then import them"""
    mqrebase = {}
    mq = repo.mq
    original_series = mq.fullseries[:]
    skippedpatches = set()

    for p in mq.applied:
        rev = repo[p.node].rev()
        if rev in state:
            repo.ui.debug(
                b'revision %d is an mq patch (%s), finalize it.\n'
                % (rev, p.name)
            )
            mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
        else:
            # Applied but not rebased, not sure this should happen
            skippedpatches.add(p.name)

    if mqrebase:
        mq.finish(repo, mqrebase.keys())

        # We must start import from the newest revision
        for rev in sorted(mqrebase, reverse=True):
            if rev not in skipped:
                name, isgit = mqrebase[rev]
                repo.ui.note(
                    _(b'updating mq patch %s to %d:%s\n')
                    % (name, state[rev], repo[state[rev]])
                )
                mq.qimport(
                    repo,
                    (),
                    patchname=name,
                    git=isgit,
                    rev=[b"%d" % state[rev]],
                )
            else:
                # Rebased and skipped
                skippedpatches.add(mqrebase[rev][0])

        # Patches were either applied and rebased and imported in
        # order, applied and removed or unapplied. Discard the removed
        # ones while preserving the original series order and guards.
        newseries = [
            s
            for s in original_series
            if mq.guard_re.split(s, 1)[0] not in skippedpatches
        ]
        mq.fullseries[:] = newseries
        mq.seriesdirty = True
        mq.savedirty()


def storecollapsemsg(repo, collapsemsg):
    """Store the collapse message to allow recovery"""
    collapsemsg = collapsemsg or b''
    f = repo.vfs(b"last-message.txt", b"w")
    f.write(b"%s\n" % collapsemsg)
    f.close()


def clearcollapsemsg(repo):
    """Remove collapse message file"""
    repo.vfs.unlinkpath(b"last-message.txt", ignoremissing=True)


def restorecollapsemsg(repo, isabort):
    """Restore previously stored collapse message"""
    try:
        f = repo.vfs(b"last-message.txt")
        collapsemsg = f.readline().strip()
        f.close()
    except IOError as err:
        if err.errno != errno.ENOENT:
            raise
        if isabort:
            # Oh well, just abort like normal
            collapsemsg = b''
        else:
            raise error.Abort(_(b'missing .hg/last-message.txt for rebase'))
    return collapsemsg


def clearstatus(repo):
    """Remove the status files"""
    # Make sure the active transaction won't write the state file
    tr = repo.currenttransaction()
    if tr:
        tr.removefilegenerator(b'rebasestate')
    repo.vfs.unlinkpath(b"rebasestate", ignoremissing=True)


def sortsource(destmap):
    """yield source revisions in an order such that we only rebase things once

    If source and destination overlap, we should filter out revisions
    depending on other revisions which haven't been rebased yet.

    Yield a sorted list of revisions each time.

    For example, when rebasing A to B, B to C. This function yields [B], then
    [A], indicating B needs to be rebased first.

    Raise if there is a cycle so the rebase is impossible.
    """
    srcset = set(destmap)
    while srcset:
        srclist = sorted(srcset)
        result = []
        for r in srclist:
            if destmap[r] not in srcset:
                result.append(r)
        if not result:
            raise error.Abort(_(b'source and destination form a cycle'))
        srcset -= set(result)
        yield result


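# A usage sketch for sortsource() with plain integer revisions
# (hypothetical values): when rebasing 1 onto 2 and 2 onto 3, revision 2
# must be rebased before revision 1, so the batches come out as [2], [1].
def _sketch_sortsource_batches():
    assert list(sortsource({1: 2, 2: 3})) == [[2], [1]]

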
def buildstate(repo, destmap, collapse):
    '''Define which revisions are going to be rebased and where

    repo: repo
    destmap: {srcrev: destrev}
    '''
    rebaseset = destmap.keys()
    originalwd = repo[b'.'].rev()

    # This check isn't strictly necessary, since mq detects commits over an
    # applied patch. But it prevents messing up the working directory when
    # a partially completed rebase is blocked by mq.
    if b'qtip' in repo.tags():
        mqapplied = set(repo[s.node].rev() for s in repo.mq.applied)
        if set(destmap.values()) & mqapplied:
            raise error.Abort(_(b'cannot rebase onto an applied mq patch'))

    # Get "cycle" error early by exhausting the generator.
    sortedsrc = list(sortsource(destmap))  # a list of sorted revs
    if not sortedsrc:
        raise error.Abort(_(b'no matching revisions'))

    # Only check the first batch of revisions to rebase not depending on other
    # rebaseset. This means "source is ancestor of destination" for the second
    # (and following) batches of revisions are not checked here. We rely on
    # "defineparents" to do that check.
    roots = list(repo.set(b'roots(%ld)', sortedsrc[0]))
    if not roots:
        raise error.Abort(_(b'no matching revisions'))

    def revof(r):
        return r.rev()

    roots = sorted(roots, key=revof)
    state = dict.fromkeys(rebaseset, revtodo)
    emptyrebase = len(sortedsrc) == 1
    for root in roots:
        dest = repo[destmap[root.rev()]]
        commonbase = root.ancestor(dest)
        if commonbase == root:
            raise error.Abort(_(b'source is ancestor of destination'))
        if commonbase == dest:
            wctx = repo[None]
            if dest == wctx.p1():
                # when rebasing to '.', it will use the current wd branch name
                samebranch = root.branch() == wctx.branch()
            else:
                samebranch = root.branch() == dest.branch()
            if not collapse and samebranch and dest in root.parents():
                # mark the revision as done by setting its new revision
                # equal to its old (current) revision
                state[root.rev()] = root.rev()
                repo.ui.debug(b'source is a child of destination\n')
                continue

        emptyrebase = False
        repo.ui.debug(b'rebase onto %s starting from %s\n' % (dest, root))
    if emptyrebase:
        return None
    for rev in sorted(state):
        parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
        # if all parents of this revision are done, then so is this revision
        if parents and all((state.get(p) == p for p in parents)):
            state[rev] = rev
    return originalwd, destmap, state


def clearrebased(
    ui,
    repo,
    destmap,
    state,
    skipped,
    collapsedas=None,
    keepf=False,
    fm=None,
    backup=True,
):
    """dispose of rebased revision at the end of the rebase

    If `collapsedas` is not None, the rebase was a collapse whose result is
    the `collapsedas` node.

    If `keepf` is True, the rebase has --keep set and no nodes should be
    removed (but bookmarks still need to be moved).

    If `backup` is False, no backup will be stored when stripping rebased
    revisions.
    """
    tonode = repo.changelog.node
    replacements = {}
    moves = {}
    stripcleanup = not obsolete.isenabled(repo, obsolete.createmarkersopt)

    collapsednodes = []
    for rev, newrev in sorted(state.items()):
        if newrev >= 0 and newrev != rev:
            oldnode = tonode(rev)
            newnode = collapsedas or tonode(newrev)
            moves[oldnode] = newnode
            succs = None
            if rev in skipped:
                if stripcleanup or not repo[rev].obsolete():
                    succs = ()
            elif collapsedas:
                collapsednodes.append(oldnode)
            else:
                succs = (newnode,)
            if succs is not None:
                replacements[(oldnode,)] = succs
    if collapsednodes:
        replacements[tuple(collapsednodes)] = (collapsedas,)
    if fm:
        hf = fm.hexfunc
        fl = fm.formatlist
        fd = fm.formatdict
        changes = {}
        for oldns, newn in pycompat.iteritems(replacements):
            for oldn in oldns:
                changes[hf(oldn)] = fl([hf(n) for n in newn], name=b'node')
        nodechanges = fd(changes, key=b"oldnode", value=b"newnodes")
        fm.data(nodechanges=nodechanges)
    if keepf:
        replacements = {}
    scmutil.cleanupnodes(repo, replacements, b'rebase', moves, backup=backup)


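# A simplified standalone sketch (hypothetical integer "nodes", dropping
# the collapse and obsolescence branches) of the replacements mapping
# built above: each rebased node maps to a one-element tuple of its
# successor, and skipped nodes map to an empty tuple.
def _sketch_replacements():
    state = {2: 5, 3: 6, 4: 4}  # old rev -> new rev; 4 did not move
    skipped = {3}
    replacements = {}
    for rev, newrev in sorted(state.items()):
        if newrev >= 0 and newrev != rev:
            replacements[(rev,)] = () if rev in skipped else (newrev,)
    assert replacements == {(2,): (5,), (3,): ()}

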
def pullrebase(orig, ui, repo, *args, **opts):
    """Call rebase after pull if the latter has been invoked with --rebase"""
    if opts.get('rebase'):
        if ui.configbool(b'commands', b'rebase.requiredest'):
            msg = _(b'rebase destination required by configuration')
            hint = _(b'use hg pull followed by hg rebase -d DEST')
            raise error.Abort(msg, hint=hint)

        with repo.wlock(), repo.lock():
            if opts.get('update'):
                del opts['update']
                ui.debug(
                    b'--update and --rebase are not compatible, ignoring '
                    b'the update flag\n'
                )

            cmdutil.checkunfinished(repo, skipmerge=True)
            cmdutil.bailifchanged(
                repo,
                hint=_(
                    b'cannot pull with rebase: '
                    b'please commit or shelve your changes first'
                ),
            )

            revsprepull = len(repo)
            origpostincoming = commands.postincoming

            def _dummy(*args, **kwargs):
                pass

            commands.postincoming = _dummy
            try:
                ret = orig(ui, repo, *args, **opts)
            finally:
                commands.postincoming = origpostincoming
            revspostpull = len(repo)
            if revspostpull > revsprepull:
                # the --rev option from pull conflicts with rebase's own
                # --rev, so drop it
                if 'rev' in opts:
                    del opts['rev']
                # positional argument from pull conflicts with rebase's own
                # --source.
                if 'source' in opts:
                    del opts['source']
                # revsprepull is the len of the repo, not revnum of tip.
                destspace = list(repo.changelog.revs(start=revsprepull))
                opts['_destspace'] = destspace
                try:
                    rebase(ui, repo, **opts)
                except error.NoMergeDestAbort:
                    # we can maybe update instead
                    rev, _a, _b = destutil.destupdate(repo)
                    if rev == repo[b'.'].rev():
                        ui.status(_(b'nothing to rebase\n'))
                    else:
                        ui.status(_(b'nothing to rebase - updating instead\n'))
                        # not passing argument to get the bare update behavior
                        # with warning and trumpets
                        commands.update(ui, repo)
    else:
        if opts.get('tool'):
            raise error.Abort(_(b'--tool can only be used with --rebase'))
        ret = orig(ui, repo, *args, **opts)

    return ret


def _filterobsoleterevs(repo, revs):
    """returns a set of the obsolete revisions in revs"""
    return set(r for r in revs if repo[r].obsolete())


def _computeobsoletenotrebased(repo, rebaseobsrevs, destmap):
    """Return (obsoletenotrebased, obsoletewithoutsuccessorindestination).

    `obsoletenotrebased` is a mapping of obsolete => successor for all
    obsolete nodes to be rebased given in `rebaseobsrevs`.

    `obsoletewithoutsuccessorindestination` is a set with obsolete revisions
    without a successor in destination.

    `obsoleteextinctsuccessors` is a set of obsolete revisions with only
    obsolete successors.
    """
    obsoletenotrebased = {}
    obsoletewithoutsuccessorindestination = set()
    obsoleteextinctsuccessors = set()

    assert repo.filtername is None
    cl = repo.changelog
    get_rev = cl.index.get_rev
    extinctrevs = set(repo.revs(b'extinct()'))
    for srcrev in rebaseobsrevs:
        srcnode = cl.node(srcrev)
        # XXX: more advanced APIs are required to handle split correctly
        successors = set(obsutil.allsuccessors(repo.obsstore, [srcnode]))
        # obsutil.allsuccessors includes node itself
        successors.remove(srcnode)
        succrevs = {get_rev(s) for s in successors}
        succrevs.discard(None)
        if succrevs.issubset(extinctrevs):
            # all successors are extinct
            obsoleteextinctsuccessors.add(srcrev)
        if not successors:
            # no successor
            obsoletenotrebased[srcrev] = None
        else:
            dstrev = destmap[srcrev]
            for succrev in succrevs:
                if cl.isancestorrev(succrev, dstrev):
                    obsoletenotrebased[srcrev] = succrev
                    break
            else:
                # If 'srcrev' has a successor in rebase set but none in
                # destination (which would be caught above), we shall skip it
                # and its descendants to avoid divergence.
                if srcrev in extinctrevs or any(s in destmap for s in succrevs):
                    obsoletewithoutsuccessorindestination.add(srcrev)

    return (
        obsoletenotrebased,
        obsoletewithoutsuccessorindestination,
        obsoleteextinctsuccessors,
    )


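# A short illustration (hypothetical values) of the for/else idiom used
# above: the else suite runs only when the loop finished without hitting
# `break`, i.e. when no successor was in the destination's ancestry.
def _sketch_for_else():
    succrevs = {7, 8}
    dest_ancestors = {3, 4}  # stands in for cl.isancestorrev(succrev, dstrev)
    result = None
    for succrev in sorted(succrevs):
        if succrev in dest_ancestors:
            result = succrev
            break
    else:
        result = b'no successor in destination'
    assert result == b'no successor in destination'

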
def abortrebase(ui, repo):
    with repo.wlock(), repo.lock():
        rbsrt = rebaseruntime(repo, ui)
        rbsrt._prepareabortorcontinue(isabort=True)


def continuerebase(ui, repo):
    with repo.wlock(), repo.lock():
        rbsrt = rebaseruntime(repo, ui)
        ms = mergemod.mergestate.read(repo)
        mergeutil.checkunresolved(ms)
        retcode = rbsrt._prepareabortorcontinue(isabort=False)
        if retcode is not None:
            return retcode
        rbsrt._performrebase(None)
        rbsrt._finishrebase()


def summaryhook(ui, repo):
    if not repo.vfs.exists(b'rebasestate'):
        return
    try:
        rbsrt = rebaseruntime(repo, ui, {})
        rbsrt.restorestatus()
        state = rbsrt.state
    except error.RepoLookupError:
        # i18n: column positioning for "hg summary"
        msg = _(b'rebase: (use "hg rebase --abort" to clear broken state)\n')
        ui.write(msg)
        return
    numrebased = len([i for i in pycompat.itervalues(state) if i >= 0])
    # i18n: column positioning for "hg summary"
    ui.write(
        _(b'rebase: %s, %s (rebase --continue)\n')
        % (
            ui.label(_(b'%d rebased'), b'rebase.rebased') % numrebased,
            ui.label(_(b'%d remaining'), b'rebase.remaining')
            % (len(state) - numrebased),
        )
    )


def uisetup(ui):
    # Replace pull with a decorator to provide --rebase option
    entry = extensions.wrapcommand(commands.table, b'pull', pullrebase)
    entry[1].append(
        (b'', b'rebase', None, _(b"rebase working directory to branch head"))
    )
    entry[1].append((b't', b'tool', b'', _(b"specify merge tool for rebase")))
    cmdutil.summaryhooks.add(b'rebase', summaryhook)
    statemod.addunfinished(
        b'rebase',
        fname=b'rebasestate',
        stopflag=True,
        continueflag=True,
        abortfunc=abortrebase,
        continuefunc=continuerebase,
    )
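
# uisetup() runs when the extension is loaded; the usual way to get the
# wrapped `hg pull --rebase` and the summary hook is to enable the bundled
# extension in an hgrc, for example:
#
#   [extensions]
#   rebase =
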
@@ -1,1130 +1,1111 b''
# copies.py - copy detection for Mercurial
#
# Copyright 2008 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import multiprocessing
import os

from .i18n import _


from .revlogutils.flagutil import REVIDX_SIDEDATA

from . import (
    error,
    match as matchmod,
    node,
    pathutil,
    pycompat,
    util,
)

from .revlogutils import sidedata as sidedatamod

from .utils import stringutil


def _filter(src, dst, t):
    """filters out invalid copies after chaining"""

    # When _chain()'ing copies in 'a' (from 'src' via some other commit 'mid')
    # with copies in 'b' (from 'mid' to 'dst'), we can get the different cases
    # in the following table (not including trivial cases). For example, case 2
    # is where a file existed in 'src' and remained under that name in 'mid' and
    # then was renamed between 'mid' and 'dst'.
    #
    # case  src  mid  dst  result
    #   1    x    y    -     -
    #   2    x    y    y    x->y
    #   3    x    y    x     -
    #   4    x    y    z    x->z
    #   5    -    x    y     -
    #   6    x    x    y    x->y
    #
    # _chain() takes care of chaining the copies in 'a' and 'b', but it
    # cannot tell the difference between cases 1 and 2, between 3 and 4, or
    # between 5 and 6, so it includes all cases in its result.
    # Cases 1, 3, and 5 are then removed by _filter().

    for k, v in list(t.items()):
        # remove copies from files that didn't exist
        if v not in src:
            del t[k]
        # remove criss-crossed copies
        elif k in src and v in dst:
            del t[k]
        # remove copies to files that were then removed
        elif k not in dst:
            del t[k]


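# A tiny sketch of the table above (hypothetical file names): plain sets
# stand in for the 'src' and 'dst' manifests, which only need membership
# tests here.
def _sketch_filter_cases():
    t = {b'y': b'x'}  # case 2/4/6: rename x -> y survives filtering
    _filter({b'x'}, {b'y'}, t)
    assert t == {b'y': b'x'}

    t = {b'y': b'z'}  # case 5: the copy source never existed in 'src'
    _filter({b'x'}, {b'y'}, t)
    assert t == {}

    t = {b'y': b'x'}  # case 1: the destination file is gone again in 'dst'
    _filter({b'x'}, set(), t)
    assert t == {}

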
def _chain(prefix, suffix):
    """chain two sets of copies 'prefix' and 'suffix'"""
    result = prefix.copy()
    for key, value in pycompat.iteritems(suffix):
        result[key] = prefix.get(value, value)
    return result


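# A sketch with hypothetical names: renaming a -> b in one step and
# b -> c in the next chains into a single a -> c record. The intermediate
# b -> a entry is kept here; _filter() is what later drops invalid entries.
def _sketch_chain():
    prefix = {b'b': b'a'}  # {dst: src} for the first step
    suffix = {b'c': b'b'}  # {dst: src} for the second step
    assert _chain(prefix, suffix) == {b'b': b'a', b'c': b'a'}

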
def _tracefile(fctx, am, basemf):
    """return the path of the ancestor of fctx present in ancestor
    manifest am

    Note: we used to try and stop after a given limit, however checking if that
    limit is reached turned out to be very expensive. We are better off
    disabling that feature."""

    for f in fctx.ancestors():
        path = f.path()
        if am.get(path, None) == f.filenode():
            return path
        if basemf and basemf.get(path, None) == f.filenode():
            return path


def _dirstatecopies(repo, match=None):
    ds = repo.dirstate
    c = ds.copies().copy()
    for k in list(c):
        if ds[k] not in b'anm' or (match and not match(k)):
            del c[k]
    return c


def _computeforwardmissing(a, b, match=None):
    """Computes which files are in b but not a.
    This is its own function so extensions can easily wrap this call to see what
    files _forwardcopies is about to process.
    """
    ma = a.manifest()
    mb = b.manifest()
    return mb.filesnotin(ma, match=match)


def usechangesetcentricalgo(repo):
    """Checks if we should use changeset-centric copy algorithms"""
    if repo.filecopiesmode == b'changeset-sidedata':
        return True
    readfrom = repo.ui.config(b'experimental', b'copies.read-from')
    changesetsource = (b'changeset-only', b'compatibility')
    return readfrom in changesetsource


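# For example, the following hgrc setting makes this function return True
# via the config read above (the sidedata path is instead driven by the
# repository's filecopiesmode requirement):
#
#   [experimental]
#   copies.read-from = changeset-only

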
def _committedforwardcopies(a, b, base, match):
    """Like _forwardcopies(), but b.rev() cannot be None (working copy)"""
    # files might have to be traced back to the fctx parent of the last
    # one-side-only changeset, but not further back than that
    repo = a._repo

    if usechangesetcentricalgo(repo):
        return _changesetforwardcopies(a, b, match)

    debug = repo.ui.debugflag and repo.ui.configbool(b'devel', b'debug.copies')
    dbg = repo.ui.debug
    if debug:
        dbg(b'debug.copies: looking into rename from %s to %s\n' % (a, b))
    am = a.manifest()
    basemf = None if base is None else base.manifest()

    # find where new files came from
    # we currently don't try to find where old files went, too expensive
    # this means we can miss a case like 'hg rm b; hg cp a b'
    cm = {}

    # Computing the forward missing is quite expensive on large manifests, since
    # it compares the entire manifests. We can optimize it in the common use
    # case of computing what copies are in a commit versus its parent (like
    # during a rebase or histedit). Note, we exclude merge commits from this
    # optimization, since the ctx.files() for a merge commit is not correct for
    # this comparison.
    forwardmissingmatch = match
    if b.p1() == a and b.p2().node() == node.nullid:
        filesmatcher = matchmod.exact(b.files())
        forwardmissingmatch = matchmod.intersectmatchers(match, filesmatcher)
    missing = _computeforwardmissing(a, b, match=forwardmissingmatch)

    ancestrycontext = a._repo.changelog.ancestors([b.rev()], inclusive=True)

    if debug:
        dbg(b'debug.copies: missing files to search: %d\n' % len(missing))

    for f in sorted(missing):
        if debug:
            dbg(b'debug.copies: tracing file: %s\n' % f)
        fctx = b[f]
        fctx._ancestrycontext = ancestrycontext

        if debug:
            start = util.timer()
        opath = _tracefile(fctx, am, basemf)
        if opath:
            if debug:
                dbg(b'debug.copies: rename of: %s\n' % opath)
            cm[f] = opath
        if debug:
            dbg(
                b'debug.copies: time: %f seconds\n'
                % (util.timer() - start)
            )
    return cm


def _revinfogetter(repo):
    """return a function that returns multiple data given a <rev>

    * p1: revision number of first parent
    * p2: revision number of second parent
    * p1copies: mapping of copies from p1
    * p2copies: mapping of copies from p2
    * removed: a list of removed files
    """
    cl = repo.changelog
    parents = cl.parentrevs

    if repo.filecopiesmode == b'changeset-sidedata':
        changelogrevision = cl.changelogrevision
        flags = cl.flags

        # A small cache to avoid doing the work twice for merges
        #
        # In the vast majority of cases, if we ask information for a revision
        # about 1 parent, we'll later ask it for the other. So it makes sense
        # to keep the information around when reaching the first parent of a
        # merge and dropping it after it was provided for the second parent.
        #
        # There exist cases where only one parent of the merge will be
        # walked. It happens when the "destination" of the copy tracing is
        # descended from a new root, not common with the "source". In that
        # case, we will only walk through merge parents that are descendants
        # of changesets common between "source" and "destination".
        #
        # With the current implementation, if such changesets have copy
        # information, we'll keep them in memory until the end of
        # _changesetforwardcopies. We don't expect the case to be frequent
        # enough to matter.
        #
        # In addition, it would be possible to reach a pathological case,
        # where many first parents are met before any second parent is
        # reached. In that case the cache could grow. If this ever becomes an
        # issue one can safely introduce a maximum cache size. This would
        # trade extra CPU/IO time to save memory.
        merge_caches = {}

        def revinfo(rev):
            p1, p2 = parents(rev)
            if flags(rev) & REVIDX_SIDEDATA:
                e = merge_caches.pop(rev, None)
                if e is not None:
                    return e
                c = changelogrevision(rev)
                p1copies = c.p1copies
                p2copies = c.p2copies
                removed = c.filesremoved
                if p1 != node.nullrev and p2 != node.nullrev:
                    # XXX some case we over cache, IGNORE
                    merge_caches[rev] = (p1, p2, p1copies, p2copies, removed)
            else:
                p1copies = {}
                p2copies = {}
                removed = []
            return p1, p2, p1copies, p2copies, removed

    else:

        def revinfo(rev):
            p1, p2 = parents(rev)
            ctx = repo[rev]
            p1copies, p2copies = ctx._copies
            removed = ctx.filesremoved()
            return p1, p2, p1copies, p2copies, removed

    return revinfo


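# A sketch of the pop-once caching pattern above (hypothetical data): the
# first query for a merge revision computes and stores its tuple, the
# second query consumes the cached entry, so the expensive
# changelogrevision() work runs once per merge.
def _sketch_merge_cache():
    merge_caches = {}
    computed = []

    def revinfo(rev):
        e = merge_caches.pop(rev, None)
        if e is not None:
            return e
        computed.append(rev)  # stands in for changelogrevision(rev)
        value = (rev, b'copies...')
        merge_caches[rev] = value  # keep it around for the second parent
        return value

    assert revinfo(5) == revinfo(5)
    assert computed == [5] and merge_caches == {}

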
def _changesetforwardcopies(a, b, match):
    if a.rev() in (node.nullrev, b.rev()):
        return {}

    repo = a.repo().unfiltered()
    children = {}
    revinfo = _revinfogetter(repo)

    cl = repo.changelog
    missingrevs = cl.findmissingrevs(common=[a.rev()], heads=[b.rev()])
    mrset = set(missingrevs)
    roots = set()
    for r in missingrevs:
        for p in cl.parentrevs(r):
            if p == node.nullrev:
                continue
            if p not in children:
                children[p] = [r]
            else:
                children[p].append(r)
            if p not in mrset:
                roots.add(p)
    if not roots:
        # no common revision to track copies from
        return {}
    min_root = min(roots)

    from_head = set(
        cl.reachableroots(min_root, [b.rev()], list(roots), includepath=True)
    )

    iterrevs = set(from_head)
    iterrevs &= mrset
    iterrevs.update(roots)
    iterrevs.remove(b.rev())
    revs = sorted(iterrevs)
    return _combinechangesetcopies(revs, children, b.rev(), revinfo, match)


289 def _combinechangesetcopies(revs, children, targetrev, revinfo, match):
289 def _combinechangesetcopies(revs, children, targetrev, revinfo, match):
290 """combine the copies information for each item of iterrevs
290 """combine the copies information for each item of iterrevs
291
291
292 revs: sorted iterable of revision to visit
292 revs: sorted iterable of revision to visit
293 children: a {parent: [children]} mapping.
293 children: a {parent: [children]} mapping.
294 targetrev: the final copies destination revision (not in iterrevs)
294 targetrev: the final copies destination revision (not in iterrevs)
295 revinfo(rev): a function that return (p1, p2, p1copies, p2copies, removed)
295 revinfo(rev): a function that return (p1, p2, p1copies, p2copies, removed)
296 match: a matcher
296 match: a matcher
297
297
298 It returns the aggregated copies information for `targetrev`.
298 It returns the aggregated copies information for `targetrev`.
299 """
299 """
300 all_copies = {}
300 all_copies = {}
301 alwaysmatch = match.always()
301 alwaysmatch = match.always()
302 for r in revs:
302 for r in revs:
303 copies = all_copies.pop(r, None)
303 copies = all_copies.pop(r, None)
304 if copies is None:
304 if copies is None:
305 # this is a root
305 # this is a root
306 copies = {}
306 copies = {}
307 for i, c in enumerate(children[r]):
307 for i, c in enumerate(children[r]):
308 p1, p2, p1copies, p2copies, removed = revinfo(c)
308 p1, p2, p1copies, p2copies, removed = revinfo(c)
309 if r == p1:
309 if r == p1:
310 parent = 1
310 parent = 1
311 childcopies = p1copies
311 childcopies = p1copies
312 else:
312 else:
313 assert r == p2
313 assert r == p2
314 parent = 2
314 parent = 2
315 childcopies = p2copies
315 childcopies = p2copies
316 if not alwaysmatch:
316 if not alwaysmatch:
317 childcopies = {
317 childcopies = {
318 dst: src for dst, src in childcopies.items() if match(dst)
318 dst: src for dst, src in childcopies.items() if match(dst)
319 }
319 }
320 newcopies = copies
320 newcopies = copies
321 if childcopies:
321 if childcopies:
322 newcopies = _chain(newcopies, childcopies)
322 newcopies = _chain(newcopies, childcopies)
323 # _chain makes a copies, we can avoid doing so in some
323 # _chain makes a copies, we can avoid doing so in some
324 # simple/linear cases.
324 # simple/linear cases.
325 assert newcopies is not copies
325 assert newcopies is not copies
326 for f in removed:
326 for f in removed:
327 if f in newcopies:
327 if f in newcopies:
328 if newcopies is copies:
328 if newcopies is copies:
329 # copy on write to avoid affecting potential other
329 # copy on write to avoid affecting potential other
330 # branches. When there are no other branches, this
330 # branches. When there are no other branches, this
331 # could be avoided.
331 # could be avoided.
332 newcopies = copies.copy()
332 newcopies = copies.copy()
333 del newcopies[f]
333 del newcopies[f]
334 othercopies = all_copies.get(c)
334 othercopies = all_copies.get(c)
335 if othercopies is None:
335 if othercopies is None:
336 all_copies[c] = newcopies
336 all_copies[c] = newcopies
337 else:
337 else:
338 # we are the second parent to work on c, we need to merge our
338 # we are the second parent to work on c, we need to merge our
339 # work with the other.
339 # work with the other.
340 #
340 #
341 # Unlike when copies are stored in the filelog, we consider
341 # Unlike when copies are stored in the filelog, we consider
342 # it a copy even if the destination already existed on the
342 # it a copy even if the destination already existed on the
343 # other branch. It's simply too expensive to check if the
343 # other branch. It's simply too expensive to check if the
344 # file existed in the manifest.
344 # file existed in the manifest.
345 #
345 #
346 # In case of conflict, parent 1 takes precedence over parent 2.
346 # In case of conflict, parent 1 takes precedence over parent 2.
347 # This is an arbitrary choice made anew when implementing
347 # This is an arbitrary choice made anew when implementing
348 # changeset based copies. It was made without regard to
348 # changeset based copies. It was made without regard to
349 # potential filelog-related behavior.
349 # potential filelog-related behavior.
350 if parent == 1:
350 if parent == 1:
351 othercopies.update(newcopies)
351 othercopies.update(newcopies)
352 else:
352 else:
353 newcopies.update(othercopies)
353 newcopies.update(othercopies)
354 all_copies[c] = newcopies
354 all_copies[c] = newcopies
355 return all_copies[targetrev]
355 return all_copies[targetrev]
356
356
357
357
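# A toy sketch of the chaining step used above, assuming plain dicts (the
# real `_chain` also prunes chains that fold back onto the same name):
#
#   def toy_chain(prefix, suffix):
#       combined = dict(prefix)
#       for dst, src in suffix.items():
#           # follow each suffix source back through the prefix map
#           combined[dst] = prefix.get(src, src)
#       return combined
#
#   toy_chain({'b': 'a'}, {'c': 'b'}) == {'b': 'a', 'c': 'a'}  # True
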
358 def _forwardcopies(a, b, base=None, match=None):
358 def _forwardcopies(a, b, base=None, match=None):
359 """find {dst@b: src@a} copy mapping where a is an ancestor of b"""
359 """find {dst@b: src@a} copy mapping where a is an ancestor of b"""
360
360
361 if base is None:
361 if base is None:
362 base = a
362 base = a
363 match = a.repo().narrowmatch(match)
363 match = a.repo().narrowmatch(match)
364 # check for working copy
364 # check for working copy
365 if b.rev() is None:
365 if b.rev() is None:
366 cm = _committedforwardcopies(a, b.p1(), base, match)
366 cm = _committedforwardcopies(a, b.p1(), base, match)
367 # combine copies from dirstate if necessary
367 # combine copies from dirstate if necessary
368 copies = _chain(cm, _dirstatecopies(b._repo, match))
368 copies = _chain(cm, _dirstatecopies(b._repo, match))
369 else:
369 else:
370 copies = _committedforwardcopies(a, b, base, match)
370 copies = _committedforwardcopies(a, b, base, match)
371 return copies
371 return copies
372
372
373
373
374 def _backwardrenames(a, b, match):
374 def _backwardrenames(a, b, match):
375 if a._repo.ui.config(b'experimental', b'copytrace') == b'off':
375 if a._repo.ui.config(b'experimental', b'copytrace') == b'off':
376 return {}
376 return {}
377
377
378 # Even though we're not taking copies into account, 1:n rename situations
378 # Even though we're not taking copies into account, 1:n rename situations
379 # can still exist (e.g. hg cp a b; hg mv a c). In those cases we
379 # can still exist (e.g. hg cp a b; hg mv a c). In those cases we
380 # arbitrarily pick one of the renames.
380 # arbitrarily pick one of the renames.
381 # We don't want to pass in "match" here, since that would filter
381 # We don't want to pass in "match" here, since that would filter
382 # the destination by it. Since we're reversing the copies, we want
382 # the destination by it. Since we're reversing the copies, we want
383 # to filter the source instead.
383 # to filter the source instead.
384 f = _forwardcopies(b, a)
384 f = _forwardcopies(b, a)
385 r = {}
385 r = {}
386 for k, v in sorted(pycompat.iteritems(f)):
386 for k, v in sorted(pycompat.iteritems(f)):
387 if match and not match(v):
387 if match and not match(v):
388 continue
388 continue
389 # remove copies
389 # remove copies
390 if v in a:
390 if v in a:
391 continue
391 continue
392 r[v] = k
392 r[v] = k
393 return r
393 return r
394
394
395
395
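# A toy view of the reversal above, with made-up names: forward copies from
# b to a are flipped into {src@b: dst@a}, and entries whose source still
# exists in `a` are dropped, since those are copies rather than renames:
#
#   forward = {b'new': b'old'}   # shape of a _forwardcopies(b, a) result
#   files_in_a = {b'new'}        # stand-in for the `v in a` membership test
#   backward = {s: d for d, s in forward.items() if s not in files_in_a}
#   # -> {b'old': b'new'}
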
396 def pathcopies(x, y, match=None):
396 def pathcopies(x, y, match=None):
397 """find {dst@y: src@x} copy mapping for directed compare"""
397 """find {dst@y: src@x} copy mapping for directed compare"""
398 repo = x._repo
398 repo = x._repo
399 debug = repo.ui.debugflag and repo.ui.configbool(b'devel', b'debug.copies')
399 debug = repo.ui.debugflag and repo.ui.configbool(b'devel', b'debug.copies')
400 if debug:
400 if debug:
401 repo.ui.debug(
401 repo.ui.debug(
402 b'debug.copies: searching copies from %s to %s\n' % (x, y)
402 b'debug.copies: searching copies from %s to %s\n' % (x, y)
403 )
403 )
404 if x == y or not x or not y:
404 if x == y or not x or not y:
405 return {}
405 return {}
406 a = y.ancestor(x)
406 a = y.ancestor(x)
407 if a == x:
407 if a == x:
408 if debug:
408 if debug:
409 repo.ui.debug(b'debug.copies: search mode: forward\n')
409 repo.ui.debug(b'debug.copies: search mode: forward\n')
410 if y.rev() is None and x == y.p1():
410 if y.rev() is None and x == y.p1():
411 # short-circuit to avoid issues with merge states
411 # short-circuit to avoid issues with merge states
412 return _dirstatecopies(repo, match)
412 return _dirstatecopies(repo, match)
413 copies = _forwardcopies(x, y, match=match)
413 copies = _forwardcopies(x, y, match=match)
414 elif a == y:
414 elif a == y:
415 if debug:
415 if debug:
416 repo.ui.debug(b'debug.copies: search mode: backward\n')
416 repo.ui.debug(b'debug.copies: search mode: backward\n')
417 copies = _backwardrenames(x, y, match=match)
417 copies = _backwardrenames(x, y, match=match)
418 else:
418 else:
419 if debug:
419 if debug:
420 repo.ui.debug(b'debug.copies: search mode: combined\n')
420 repo.ui.debug(b'debug.copies: search mode: combined\n')
421 base = None
421 base = None
422 if a.rev() != node.nullrev:
422 if a.rev() != node.nullrev:
423 base = x
423 base = x
424 copies = _chain(
424 copies = _chain(
425 _backwardrenames(x, a, match=match),
425 _backwardrenames(x, a, match=match),
426 _forwardcopies(a, y, base, match=match),
426 _forwardcopies(a, y, base, match=match),
427 )
427 )
428 _filter(x, y, copies)
428 _filter(x, y, copies)
429 return copies
429 return copies
430
430
431
431
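# A minimal usage sketch, assuming an existing `repo` object: renames
# between the working copy parent and its own parent are found with:
#
#   ctx = repo[b'.']
#   renames = pathcopies(ctx.p1(), ctx)   # {dst@ctx: src@p1}
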
432 def mergecopies(repo, c1, c2, base):
432 def mergecopies(repo, c1, c2, base):
433 """
433 """
434 Finds moves and copies between context c1 and c2 that are relevant for
434 Finds moves and copies between context c1 and c2 that are relevant for
435 merging. 'base' will be used as the merge base.
435 merging. 'base' will be used as the merge base.
436
436
437 Copytracing is used in commands like rebase, merge, and unshelve to merge
437 Copytracing is used in commands like rebase, merge, and unshelve to merge
438 files that were moved/copied in one merge parent and modified in another.
438 files that were moved/copied in one merge parent and modified in another.
439 For example:
439 For example:
440
440
441 o ---> 4 another commit
441 o ---> 4 another commit
442 |
442 |
443 | o ---> 3 commit that modifies a.txt
443 | o ---> 3 commit that modifies a.txt
444 | /
444 | /
445 o / ---> 2 commit that moves a.txt to b.txt
445 o / ---> 2 commit that moves a.txt to b.txt
446 |/
446 |/
447 o ---> 1 merge base
447 o ---> 1 merge base
448
448
449 If we try to rebase revision 3 on revision 4, since there is no a.txt in
449 If we try to rebase revision 3 on revision 4, since there is no a.txt in
450 revision 4, and if the user has copytrace disabled, we print the following
450 revision 4, and if the user has copytrace disabled, we print the following
451 message:
451 message:
452
452
453 ```other changed <file> which local deleted```
453 ```other changed <file> which local deleted```
454
454
455 Returns five dicts: "copy", "movewithdir", "diverge", "renamedelete" and
455 Returns five dicts: "copy", "movewithdir", "diverge", "renamedelete" and
456 "dirmove".
456 "dirmove".
457
457
458 "copy" is a mapping from destination name -> source name,
458 "copy" is a mapping from destination name -> source name,
459 where source is in c1 and destination is in c2 or vice-versa.
459 where source is in c1 and destination is in c2 or vice-versa.
460
460
461 "movewithdir" is a mapping from source name -> destination name,
461 "movewithdir" is a mapping from source name -> destination name,
462 where the file at source, present in one context but not the other,
462 where the file at source, present in one context but not the other,
463 needs to be moved to destination by the merge process, because the
463 needs to be moved to destination by the merge process, because the
464 other context moved the directory it is in.
464 other context moved the directory it is in.
465
465
466 "diverge" is a mapping of source name -> list of destination names
466 "diverge" is a mapping of source name -> list of destination names
467 for divergent renames.
467 for divergent renames.
468
468
469 "renamedelete" is a mapping of source name -> list of destination
469 "renamedelete" is a mapping of source name -> list of destination
470 names for files deleted in c1 that were renamed in c2 or vice-versa.
470 names for files deleted in c1 that were renamed in c2 or vice-versa.
471
471
472 "dirmove" is a mapping of detected source dir -> destination dir renames.
472 "dirmove" is a mapping of detected source dir -> destination dir renames.
473 This is needed for handling changes to new files previously grafted into
473 This is needed for handling changes to new files previously grafted into
474 renamed directories.
474 renamed directories.
475
475
476 This function calls different copytracing algorithms based on config.
476 This function calls different copytracing algorithms based on config.
477 """
477 """
478 # avoid silly behavior for update from empty dir
478 # avoid silly behavior for update from empty dir
479 if not c1 or not c2 or c1 == c2:
479 if not c1 or not c2 or c1 == c2:
480 return {}, {}, {}, {}, {}
480 return {}, {}, {}, {}, {}
481
481
482 narrowmatch = c1.repo().narrowmatch()
482 narrowmatch = c1.repo().narrowmatch()
483
483
484 # avoid silly behavior for parent -> working dir
484 # avoid silly behavior for parent -> working dir
485 if c2.node() is None and c1.node() == repo.dirstate.p1():
485 if c2.node() is None and c1.node() == repo.dirstate.p1():
486 return _dirstatecopies(repo, narrowmatch), {}, {}, {}, {}
486 return _dirstatecopies(repo, narrowmatch), {}, {}, {}, {}
487
487
488 copytracing = repo.ui.config(b'experimental', b'copytrace')
488 copytracing = repo.ui.config(b'experimental', b'copytrace')
489 if stringutil.parsebool(copytracing) is False:
489 if stringutil.parsebool(copytracing) is False:
490 # stringutil.parsebool() returns None when it is unable to parse the
490 # stringutil.parsebool() returns None when it is unable to parse the
491 # value, so copytracing is left enabled in such cases
491 # value, so copytracing is left enabled in such cases
492 return {}, {}, {}, {}, {}
492 return {}, {}, {}, {}, {}
493
493
494 if usechangesetcentricalgo(repo):
494 if usechangesetcentricalgo(repo):
495 # The heuristics don't make sense when we need changeset-centric algos
495 # The heuristics don't make sense when we need changeset-centric algos
496 return _fullcopytracing(repo, c1, c2, base)
496 return _fullcopytracing(repo, c1, c2, base)
497
497
498 # Copy trace disabling is explicitly below the node == p1 logic above
498 # Copy trace disabling is explicitly below the node == p1 logic above
499 # because the logic above is required for a simple copy to be kept across a
499 # because the logic above is required for a simple copy to be kept across a
500 # rebase.
500 # rebase.
501 if copytracing == b'heuristics':
501 if copytracing == b'heuristics':
502 # Do full copytracing if only non-public revisions are involved as
502 # Do full copytracing if only non-public revisions are involved as
503 # that will be fast enough and will also cover the copies which could
503 # that will be fast enough and will also cover the copies which could
504 # be missed by heuristics
504 # be missed by heuristics
505 if _isfullcopytraceable(repo, c1, base):
505 if _isfullcopytraceable(repo, c1, base):
506 return _fullcopytracing(repo, c1, c2, base)
506 return _fullcopytracing(repo, c1, c2, base)
507 return _heuristicscopytracing(repo, c1, c2, base)
507 return _heuristicscopytracing(repo, c1, c2, base)
508 else:
508 else:
509 return _fullcopytracing(repo, c1, c2, base)
509 return _fullcopytracing(repo, c1, c2, base)
510
510
511
511
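# A usage sketch with hypothetical contexts `c1`, `c2` and ancestor `base`:
# callers unpack all five dictionaries at once, e.g.
#
#   copy, movewithdir, diverge, renamedelete, dirmove = mergecopies(
#       repo, c1, c2, base
#   )
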
512 def _isfullcopytraceable(repo, c1, base):
512 def _isfullcopytraceable(repo, c1, base):
513 """ Checks that if base, source and destination are all no-public branches,
513 """ Checks that if base, source and destination are all no-public branches,
514 if yes let's use the full copytrace algorithm for increased capabilities
514 if yes let's use the full copytrace algorithm for increased capabilities
515 since it will be fast enough.
515 since it will be fast enough.
516
516
517 `experimental.copytrace.sourcecommitlimit` can be used to set a limit for
517 `experimental.copytrace.sourcecommitlimit` can be used to set a limit for
518 the number of changesets from c1 to base; if the number of changesets is
518 the number of changesets from c1 to base; if the number of changesets is
519 more than the limit, the full copytracing algorithm won't be used.
519 more than the limit, the full copytracing algorithm won't be used.
520 """
520 """
521 if c1.rev() is None:
521 if c1.rev() is None:
522 c1 = c1.p1()
522 c1 = c1.p1()
523 if c1.mutable() and base.mutable():
523 if c1.mutable() and base.mutable():
524 sourcecommitlimit = repo.ui.configint(
524 sourcecommitlimit = repo.ui.configint(
525 b'experimental', b'copytrace.sourcecommitlimit'
525 b'experimental', b'copytrace.sourcecommitlimit'
526 )
526 )
527 commits = len(repo.revs(b'%d::%d', base.rev(), c1.rev()))
527 commits = len(repo.revs(b'%d::%d', base.rev(), c1.rev()))
528 return commits < sourcecommitlimit
528 return commits < sourcecommitlimit
529 return False
529 return False
530
530
531
531
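# An example hgrc configuration exercising the check above (the limit value
# is illustrative):
#
#   [experimental]
#   copytrace = heuristics
#   copytrace.sourcecommitlimit = 100
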
532 def _checksinglesidecopies(
532 def _checksinglesidecopies(
533 src, dsts1, m1, m2, mb, c2, base, copy, renamedelete
533 src, dsts1, m1, m2, mb, c2, base, copy, renamedelete
534 ):
534 ):
535 if src not in m2:
535 if src not in m2:
536 # deleted on side 2
536 # deleted on side 2
537 if src not in m1:
537 if src not in m1:
538 # renamed on side 1, deleted on side 2
538 # renamed on side 1, deleted on side 2
539 renamedelete[src] = dsts1
539 renamedelete[src] = dsts1
540 elif m2[src] != mb[src]:
540 elif m2[src] != mb[src]:
541 if not _related(c2[src], base[src]):
541 if not _related(c2[src], base[src]):
542 return
542 return
543 # modified on side 2
543 # modified on side 2
544 for dst in dsts1:
544 for dst in dsts1:
545 if dst not in m2:
545 if dst not in m2:
546 # dst not added on side 2 (handle as regular
546 # dst not added on side 2 (handle as regular
547 # "both created" case in manifestmerge otherwise)
547 # "both created" case in manifestmerge otherwise)
548 copy[dst] = src
548 copy[dst] = src
549
549
550
550
551 def _fullcopytracing(repo, c1, c2, base):
551 def _fullcopytracing(repo, c1, c2, base):
552 """ The full copytracing algorithm which finds all the new files that were
552 """ The full copytracing algorithm which finds all the new files that were
553 added from merge base up to the top commit and for each file it checks if
553 added from merge base up to the top commit and for each file it checks if
554 this file was copied from another file.
554 this file was copied from another file.
555
555
556 This is pretty slow when a lot of changesets are involved but will track all
556 This is pretty slow when a lot of changesets are involved but will track all
557 the copies.
557 the copies.
558 """
558 """
559 m1 = c1.manifest()
559 m1 = c1.manifest()
560 m2 = c2.manifest()
560 m2 = c2.manifest()
561 mb = base.manifest()
561 mb = base.manifest()
562
562
563 copies1 = pathcopies(base, c1)
563 copies1 = pathcopies(base, c1)
564 copies2 = pathcopies(base, c2)
564 copies2 = pathcopies(base, c2)
565
565
566 inversecopies1 = {}
566 inversecopies1 = {}
567 inversecopies2 = {}
567 inversecopies2 = {}
568 for dst, src in copies1.items():
568 for dst, src in copies1.items():
569 inversecopies1.setdefault(src, []).append(dst)
569 inversecopies1.setdefault(src, []).append(dst)
570 for dst, src in copies2.items():
570 for dst, src in copies2.items():
571 inversecopies2.setdefault(src, []).append(dst)
571 inversecopies2.setdefault(src, []).append(dst)
572
572
573 copy = {}
573 copy = {}
574 diverge = {}
574 diverge = {}
575 renamedelete = {}
575 renamedelete = {}
576 allsources = set(inversecopies1) | set(inversecopies2)
576 allsources = set(inversecopies1) | set(inversecopies2)
577 for src in allsources:
577 for src in allsources:
578 dsts1 = inversecopies1.get(src)
578 dsts1 = inversecopies1.get(src)
579 dsts2 = inversecopies2.get(src)
579 dsts2 = inversecopies2.get(src)
580 if dsts1 and dsts2:
580 if dsts1 and dsts2:
581 # copied/renamed on both sides
581 # copied/renamed on both sides
582 if src not in m1 and src not in m2:
582 if src not in m1 and src not in m2:
583 # renamed on both sides
583 # renamed on both sides
584 dsts1 = set(dsts1)
584 dsts1 = set(dsts1)
585 dsts2 = set(dsts2)
585 dsts2 = set(dsts2)
586 # If there's some overlap in the rename destinations, we
586 # If there's some overlap in the rename destinations, we
587 # consider it not divergent. For example, if side 1 copies 'a' to
587 # consider it not divergent. For example, if side 1 copies 'a' to
588 # 'b' and 'c' and deletes 'a', and side 2 copies 'a' to 'c' and
588 # 'b' and 'c' and deletes 'a', and side 2 copies 'a' to 'c' and
589 # 'd' and deletes 'a', then 'c' is treated as a copy on both sides.
589 # 'd' and deletes 'a', then 'c' is treated as a copy on both sides.
590 if dsts1 & dsts2:
590 if dsts1 & dsts2:
591 for dst in dsts1 & dsts2:
591 for dst in dsts1 & dsts2:
592 copy[dst] = src
592 copy[dst] = src
593 else:
593 else:
594 diverge[src] = sorted(dsts1 | dsts2)
594 diverge[src] = sorted(dsts1 | dsts2)
595 elif src in m1 and src in m2:
595 elif src in m1 and src in m2:
596 # copied on both sides
596 # copied on both sides
597 dsts1 = set(dsts1)
597 dsts1 = set(dsts1)
598 dsts2 = set(dsts2)
598 dsts2 = set(dsts2)
599 for dst in dsts1 & dsts2:
599 for dst in dsts1 & dsts2:
600 copy[dst] = src
600 copy[dst] = src
601 # TODO: Handle cases where it was renamed on one side and copied
601 # TODO: Handle cases where it was renamed on one side and copied
602 # on the other side
602 # on the other side
603 elif dsts1:
603 elif dsts1:
604 # copied/renamed only on side 1
604 # copied/renamed only on side 1
605 _checksinglesidecopies(
605 _checksinglesidecopies(
606 src, dsts1, m1, m2, mb, c2, base, copy, renamedelete
606 src, dsts1, m1, m2, mb, c2, base, copy, renamedelete
607 )
607 )
608 elif dsts2:
608 elif dsts2:
609 # copied/renamed only on side 2
609 # copied/renamed only on side 2
610 _checksinglesidecopies(
610 _checksinglesidecopies(
611 src, dsts2, m2, m1, mb, c1, base, copy, renamedelete
611 src, dsts2, m2, m1, mb, c1, base, copy, renamedelete
612 )
612 )
613
613
614 renamedeleteset = set()
614 renamedeleteset = set()
615 divergeset = set()
615 divergeset = set()
616 for dsts in diverge.values():
616 for dsts in diverge.values():
617 divergeset.update(dsts)
617 divergeset.update(dsts)
618 for dsts in renamedelete.values():
618 for dsts in renamedelete.values():
619 renamedeleteset.update(dsts)
619 renamedeleteset.update(dsts)
620
620
621 # find interesting file sets from manifests
621 # find interesting file sets from manifests
622 addedinm1 = m1.filesnotin(mb, repo.narrowmatch())
622 addedinm1 = m1.filesnotin(mb, repo.narrowmatch())
623 addedinm2 = m2.filesnotin(mb, repo.narrowmatch())
623 addedinm2 = m2.filesnotin(mb, repo.narrowmatch())
624 u1 = sorted(addedinm1 - addedinm2)
624 u1 = sorted(addedinm1 - addedinm2)
625 u2 = sorted(addedinm2 - addedinm1)
625 u2 = sorted(addedinm2 - addedinm1)
626
626
627 header = b" unmatched files in %s"
627 header = b" unmatched files in %s"
628 if u1:
628 if u1:
629 repo.ui.debug(b"%s:\n %s\n" % (header % b'local', b"\n ".join(u1)))
629 repo.ui.debug(b"%s:\n %s\n" % (header % b'local', b"\n ".join(u1)))
630 if u2:
630 if u2:
631 repo.ui.debug(b"%s:\n %s\n" % (header % b'other', b"\n ".join(u2)))
631 repo.ui.debug(b"%s:\n %s\n" % (header % b'other', b"\n ".join(u2)))
632
632
633 fullcopy = copies1.copy()
633 fullcopy = copies1.copy()
634 fullcopy.update(copies2)
634 fullcopy.update(copies2)
635 if not fullcopy:
635 if not fullcopy:
636 return copy, {}, diverge, renamedelete, {}
636 return copy, {}, diverge, renamedelete, {}
637
637
638 if repo.ui.debugflag:
638 if repo.ui.debugflag:
639 repo.ui.debug(
639 repo.ui.debug(
640 b" all copies found (* = to merge, ! = divergent, "
640 b" all copies found (* = to merge, ! = divergent, "
641 b"% = renamed and deleted):\n"
641 b"% = renamed and deleted):\n"
642 )
642 )
643 for f in sorted(fullcopy):
643 for f in sorted(fullcopy):
644 note = b""
644 note = b""
645 if f in copy:
645 if f in copy:
646 note += b"*"
646 note += b"*"
647 if f in divergeset:
647 if f in divergeset:
648 note += b"!"
648 note += b"!"
649 if f in renamedeleteset:
649 if f in renamedeleteset:
650 note += b"%"
650 note += b"%"
651 repo.ui.debug(
651 repo.ui.debug(
652 b" src: '%s' -> dst: '%s' %s\n" % (fullcopy[f], f, note)
652 b" src: '%s' -> dst: '%s' %s\n" % (fullcopy[f], f, note)
653 )
653 )
654 del divergeset
654 del divergeset
655
655
656 repo.ui.debug(b" checking for directory renames\n")
656 repo.ui.debug(b" checking for directory renames\n")
657
657
658 # generate a directory move map
658 # generate a directory move map
659 d1, d2 = c1.dirs(), c2.dirs()
659 d1, d2 = c1.dirs(), c2.dirs()
660 invalid = set()
660 invalid = set()
661 dirmove = {}
661 dirmove = {}
662
662
663 # examine each file copy for a potential directory move, which is
663 # examine each file copy for a potential directory move, which is
664 # when all the files in a directory are moved to a new directory
664 # when all the files in a directory are moved to a new directory
665 for dst, src in pycompat.iteritems(fullcopy):
665 for dst, src in pycompat.iteritems(fullcopy):
666 dsrc, ddst = pathutil.dirname(src), pathutil.dirname(dst)
666 dsrc, ddst = pathutil.dirname(src), pathutil.dirname(dst)
667 if dsrc in invalid:
667 if dsrc in invalid:
668 # already seen to be uninteresting
668 # already seen to be uninteresting
669 continue
669 continue
670 elif dsrc in d1 and ddst in d1:
670 elif dsrc in d1 and ddst in d1:
671 # directory wasn't entirely moved locally
671 # directory wasn't entirely moved locally
672 invalid.add(dsrc)
672 invalid.add(dsrc)
673 elif dsrc in d2 and ddst in d2:
673 elif dsrc in d2 and ddst in d2:
674 # directory wasn't entirely moved remotely
674 # directory wasn't entirely moved remotely
675 invalid.add(dsrc)
675 invalid.add(dsrc)
676 elif dsrc in dirmove and dirmove[dsrc] != ddst:
676 elif dsrc in dirmove and dirmove[dsrc] != ddst:
677 # files from the same directory moved to two different places
677 # files from the same directory moved to two different places
678 invalid.add(dsrc)
678 invalid.add(dsrc)
679 else:
679 else:
680 # looks good so far
680 # looks good so far
681 dirmove[dsrc] = ddst
681 dirmove[dsrc] = ddst
682
682
683 for i in invalid:
683 for i in invalid:
684 if i in dirmove:
684 if i in dirmove:
685 del dirmove[i]
685 del dirmove[i]
686 del d1, d2, invalid
686 del d1, d2, invalid
687
687
688 if not dirmove:
688 if not dirmove:
689 return copy, {}, diverge, renamedelete, {}
689 return copy, {}, diverge, renamedelete, {}
690
690
691 dirmove = {k + b"/": v + b"/" for k, v in pycompat.iteritems(dirmove)}
691 dirmove = {k + b"/": v + b"/" for k, v in pycompat.iteritems(dirmove)}
692
692
693 for d in dirmove:
693 for d in dirmove:
694 repo.ui.debug(
694 repo.ui.debug(
695 b" discovered dir src: '%s' -> dst: '%s'\n" % (d, dirmove[d])
695 b" discovered dir src: '%s' -> dst: '%s'\n" % (d, dirmove[d])
696 )
696 )
697
697
698 movewithdir = {}
698 movewithdir = {}
699 # check unaccounted nonoverlapping files against directory moves
699 # check unaccounted nonoverlapping files against directory moves
700 for f in u1 + u2:
700 for f in u1 + u2:
701 if f not in fullcopy:
701 if f not in fullcopy:
702 for d in dirmove:
702 for d in dirmove:
703 if f.startswith(d):
703 if f.startswith(d):
704 # new file added in a directory that was moved, move it
704 # new file added in a directory that was moved, move it
705 df = dirmove[d] + f[len(d) :]
705 df = dirmove[d] + f[len(d) :]
706 if df not in copy:
706 if df not in copy:
707 movewithdir[f] = df
707 movewithdir[f] = df
708 repo.ui.debug(
708 repo.ui.debug(
709 b" pending file src: '%s' -> dst: '%s'\n"
709 b" pending file src: '%s' -> dst: '%s'\n"
710 % (f, df)
710 % (f, df)
711 )
711 )
712 break
712 break
713
713
714 return copy, movewithdir, diverge, renamedelete, dirmove
714 return copy, movewithdir, diverge, renamedelete, dirmove
715
715
716
716
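# A toy of the divergence rule above: when both sides rename the same source
# and their destination sets overlap, the overlap is kept as a plain copy;
# with no overlap the union would be reported as divergent:
#
#   dsts1, dsts2 = {b'b', b'c'}, {b'c', b'd'}
#   dsts1 & dsts2          # {b'c'} -> recorded as copy[b'c'] = src
#   sorted(dsts1 | dsts2)  # [b'b', b'c', b'd'], the divergent form
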
717 def _heuristicscopytracing(repo, c1, c2, base):
717 def _heuristicscopytracing(repo, c1, c2, base):
718 """ Fast copytracing using filename heuristics
718 """ Fast copytracing using filename heuristics
719
719
720 Assumes that moves or renames are of the following two types:
720 Assumes that moves or renames are of the following two types:
721
721
722 1) Inside a directory only (same directory name but different filenames)
722 1) Inside a directory only (same directory name but different filenames)
723 2) Move from one directory to another
723 2) Move from one directory to another
724 (same filenames but different directory names)
724 (same filenames but different directory names)
725
725
726 Works only when there are no merge commits in the "source branch".
726 Works only when there are no merge commits in the "source branch".
727 The source branch is the commits from base up to c2, not including base.
727 The source branch is the commits from base up to c2, not including base.
728
728
729 If a merge is involved, it falls back to _fullcopytracing().
729 If a merge is involved, it falls back to _fullcopytracing().
730
730
731 Can be used by setting the following config:
731 Can be used by setting the following config:
732
732
733 [experimental]
733 [experimental]
734 copytrace = heuristics
734 copytrace = heuristics
735
735
736 In some cases the copy/move candidates found by the heuristics can be very
736 In some cases the copy/move candidates found by the heuristics can be very
737 large in number, which makes the algorithm slow.
737 large in number, which makes the algorithm slow.
738 candidates to check can be limited by using the config
738 candidates to check can be limited by using the config
739 `experimental.copytrace.movecandidateslimit` which defaults to 100.
739 `experimental.copytrace.movecandidateslimit` which defaults to 100.
740 """
740 """
741
741
742 if c1.rev() is None:
742 if c1.rev() is None:
743 c1 = c1.p1()
743 c1 = c1.p1()
744 if c2.rev() is None:
744 if c2.rev() is None:
745 c2 = c2.p1()
745 c2 = c2.p1()
746
746
747 copies = {}
747 copies = {}
748
748
749 changedfiles = set()
749 changedfiles = set()
750 m1 = c1.manifest()
750 m1 = c1.manifest()
751 if not repo.revs(b'%d::%d', base.rev(), c2.rev()):
751 if not repo.revs(b'%d::%d', base.rev(), c2.rev()):
752 # If base is not in c2 branch, we switch to fullcopytracing
752 # If base is not in c2 branch, we switch to fullcopytracing
753 repo.ui.debug(
753 repo.ui.debug(
754 b"switching to full copytracing as base is not "
754 b"switching to full copytracing as base is not "
755 b"an ancestor of c2\n"
755 b"an ancestor of c2\n"
756 )
756 )
757 return _fullcopytracing(repo, c1, c2, base)
757 return _fullcopytracing(repo, c1, c2, base)
758
758
759 ctx = c2
759 ctx = c2
760 while ctx != base:
760 while ctx != base:
761 if len(ctx.parents()) == 2:
761 if len(ctx.parents()) == 2:
762 # To keep things simple let's not handle merges
762 # To keep things simple let's not handle merges
763 repo.ui.debug(b"switching to full copytracing because of merges\n")
763 repo.ui.debug(b"switching to full copytracing because of merges\n")
764 return _fullcopytracing(repo, c1, c2, base)
764 return _fullcopytracing(repo, c1, c2, base)
765 changedfiles.update(ctx.files())
765 changedfiles.update(ctx.files())
766 ctx = ctx.p1()
766 ctx = ctx.p1()
767
767
768 cp = _forwardcopies(base, c2)
768 cp = _forwardcopies(base, c2)
769 for dst, src in pycompat.iteritems(cp):
769 for dst, src in pycompat.iteritems(cp):
770 if src in m1:
770 if src in m1:
771 copies[dst] = src
771 copies[dst] = src
772
772
773 # a file is missing if it isn't present in the destination, but is present
773 # a file is missing if it isn't present in the destination, but is present
774 # in the base and present in the source.
774 # in the base and present in the source.
775 # Presence in the base is important to exclude added files, presence in the
775 # Presence in the base is important to exclude added files, presence in the
776 # source is important to exclude removed files.
776 # source is important to exclude removed files.
777 filt = lambda f: f not in m1 and f in base and f in c2
777 filt = lambda f: f not in m1 and f in base and f in c2
778 missingfiles = [f for f in changedfiles if filt(f)]
778 missingfiles = [f for f in changedfiles if filt(f)]
779
779
780 if missingfiles:
780 if missingfiles:
781 basenametofilename = collections.defaultdict(list)
781 basenametofilename = collections.defaultdict(list)
782 dirnametofilename = collections.defaultdict(list)
782 dirnametofilename = collections.defaultdict(list)
783
783
784 for f in m1.filesnotin(base.manifest()):
784 for f in m1.filesnotin(base.manifest()):
785 basename = os.path.basename(f)
785 basename = os.path.basename(f)
786 dirname = os.path.dirname(f)
786 dirname = os.path.dirname(f)
787 basenametofilename[basename].append(f)
787 basenametofilename[basename].append(f)
788 dirnametofilename[dirname].append(f)
788 dirnametofilename[dirname].append(f)
789
789
790 for f in missingfiles:
790 for f in missingfiles:
791 basename = os.path.basename(f)
791 basename = os.path.basename(f)
792 dirname = os.path.dirname(f)
792 dirname = os.path.dirname(f)
793 samebasename = basenametofilename[basename]
793 samebasename = basenametofilename[basename]
794 samedirname = dirnametofilename[dirname]
794 samedirname = dirnametofilename[dirname]
795 movecandidates = samebasename + samedirname
795 movecandidates = samebasename + samedirname
796 # f is guaranteed to be present in c2, that's why
796 # f is guaranteed to be present in c2, that's why
797 # c2.filectx(f) won't fail
797 # c2.filectx(f) won't fail
798 f2 = c2.filectx(f)
798 f2 = c2.filectx(f)
799 # we can have a lot of candidates, which can slow down the heuristics;
799 # we can have a lot of candidates, which can slow down the heuristics;
800 # a config value limits the number of candidate moves to check
800 # a config value limits the number of candidate moves to check
801 maxcandidates = repo.ui.configint(
801 maxcandidates = repo.ui.configint(
802 b'experimental', b'copytrace.movecandidateslimit'
802 b'experimental', b'copytrace.movecandidateslimit'
803 )
803 )
804
804
805 if len(movecandidates) > maxcandidates:
805 if len(movecandidates) > maxcandidates:
806 repo.ui.status(
806 repo.ui.status(
807 _(
807 _(
808 b"skipping copytracing for '%s', more "
808 b"skipping copytracing for '%s', more "
809 b"candidates than the limit: %d\n"
809 b"candidates than the limit: %d\n"
810 )
810 )
811 % (f, len(movecandidates))
811 % (f, len(movecandidates))
812 )
812 )
813 continue
813 continue
814
814
815 for candidate in movecandidates:
815 for candidate in movecandidates:
816 f1 = c1.filectx(candidate)
816 f1 = c1.filectx(candidate)
817 if _related(f1, f2):
817 if _related(f1, f2):
818 # if there are a few related copies then we'll merge
818 # if there are a few related copies then we'll merge
819 # changes into all of them. This matches the behaviour
819 # changes into all of them. This matches the behaviour
820 # of upstream copytracing
820 # of upstream copytracing
821 copies[candidate] = f
821 copies[candidate] = f
822
822
823 return copies, {}, {}, {}, {}
823 return copies, {}, {}, {}, {}
824
824
825
825
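# A toy of the candidate lookup above, with made-up paths: files added on
# the c1 side are bucketed by basename and by dirname, and a missing file
# matches every bucket that shares either component:
#
#   import collections, os
#   bybase = collections.defaultdict(list)
#   bydir = collections.defaultdict(list)
#   for f in [b'src/util.py', b'doc/notes.txt']:
#       bybase[os.path.basename(f)].append(f)
#       bydir[os.path.dirname(f)].append(f)
#   missing = b'lib/util.py'
#   candidates = (bybase[os.path.basename(missing)]
#                 + bydir[os.path.dirname(missing)])
#   # -> [b'src/util.py']
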
826 def _related(f1, f2):
826 def _related(f1, f2):
827 """return True if f1 and f2 filectx have a common ancestor
827 """return True if f1 and f2 filectx have a common ancestor
828
828
829 Walk back to common ancestor to see if the two files originate
829 Walk back to common ancestor to see if the two files originate
830 from the same file. Since workingfilectx's rev() is None it messes
830 from the same file. Since workingfilectx's rev() is None it messes
831 up the integer comparison logic, hence the pre-step check for
831 up the integer comparison logic, hence the pre-step check for
832 None (f1 and f2 can only be workingfilectx's initially).
832 None (f1 and f2 can only be workingfilectx's initially).
833 """
833 """
834
834
835 if f1 == f2:
835 if f1 == f2:
836 return True # a match
836 return True # a match
837
837
838 g1, g2 = f1.ancestors(), f2.ancestors()
838 g1, g2 = f1.ancestors(), f2.ancestors()
839 try:
839 try:
840 f1r, f2r = f1.linkrev(), f2.linkrev()
840 f1r, f2r = f1.linkrev(), f2.linkrev()
841
841
842 if f1r is None:
842 if f1r is None:
843 f1 = next(g1)
843 f1 = next(g1)
844 if f2r is None:
844 if f2r is None:
845 f2 = next(g2)
845 f2 = next(g2)
846
846
847 while True:
847 while True:
848 f1r, f2r = f1.linkrev(), f2.linkrev()
848 f1r, f2r = f1.linkrev(), f2.linkrev()
849 if f1r > f2r:
849 if f1r > f2r:
850 f1 = next(g1)
850 f1 = next(g1)
851 elif f2r > f1r:
851 elif f2r > f1r:
852 f2 = next(g2)
852 f2 = next(g2)
853 else: # f1 and f2 point to files in the same linkrev
853 else: # f1 and f2 point to files in the same linkrev
854 return f1 == f2 # true if they point to the same file
854 return f1 == f2 # true if they point to the same file
855 except StopIteration:
855 except StopIteration:
856 return False
856 return False
857
857
858
858
859 def graftcopies(repo, wctx, ctx, base, skip=None):
859 def graftcopies(wctx, ctx, base):
860 """reproduce copies between base and ctx in the wctx
860 """reproduce copies between base and ctx in the wctx"""
861
862 If skip is specified, it's a revision that should be used to
863 filter copy records. Any copies that occur between base and
864 skip will not be duplicated, even if they appear in the set of
865 copies between base and ctx.
866 """
867 exclude = {}
868 ctraceconfig = repo.ui.config(b'experimental', b'copytrace')
869 bctrace = stringutil.parsebool(ctraceconfig)
870 if skip is not None and (
871 ctraceconfig == b'heuristics' or bctrace or bctrace is None
872 ):
873 # copytrace='off' skips this line, but not the entire function because
874 # the line below is O(size of the repo) during a rebase, while the rest
875 # of the function is much faster (and is required for carrying copy
876 # metadata across the rebase anyway).
877 exclude = pathcopies(base, skip)
878 new_copies = pathcopies(base, ctx)
861 new_copies = pathcopies(base, ctx)
879 _filter(wctx.p1(), wctx, new_copies)
862 _filter(wctx.p1(), wctx, new_copies)
880 for dst, src in pycompat.iteritems(new_copies):
863 for dst, src in pycompat.iteritems(new_copies):
881 if dst in exclude:
882 continue
883 wctx[dst].markcopied(src)
864 wctx[dst].markcopied(src)
884
865
885
866
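# A usage sketch of the new signature, assuming `ctx` is the changeset being
# grafted and its first parent serves as the base:
#
#   graftcopies(repo[None], ctx, ctx.p1())
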
886 def computechangesetfilesadded(ctx):
867 def computechangesetfilesadded(ctx):
887 """return the list of files added in a changeset
868 """return the list of files added in a changeset
888 """
869 """
889 added = []
870 added = []
890 for f in ctx.files():
871 for f in ctx.files():
891 if not any(f in p for p in ctx.parents()):
872 if not any(f in p for p in ctx.parents()):
892 added.append(f)
873 added.append(f)
893 return added
874 return added
894
875
895
876
896 def computechangesetfilesremoved(ctx):
877 def computechangesetfilesremoved(ctx):
897 """return the list of files removed in a changeset
878 """return the list of files removed in a changeset
898 """
879 """
899 removed = []
880 removed = []
900 for f in ctx.files():
881 for f in ctx.files():
901 if f not in ctx:
882 if f not in ctx:
902 removed.append(f)
883 removed.append(f)
903 return removed
884 return removed
904
885
905
886
906 def computechangesetcopies(ctx):
887 def computechangesetcopies(ctx):
907 """return the copies data for a changeset
888 """return the copies data for a changeset
908
889
909 The copies data are returned as a pair of dictionaries (p1copies, p2copies).
890 The copies data are returned as a pair of dictionaries (p1copies, p2copies).
910 
891 
911 Each dictionary is of the form: `{newname: oldname}`
892 Each dictionary is of the form: `{newname: oldname}`
912 """
893 """
913 p1copies = {}
894 p1copies = {}
914 p2copies = {}
895 p2copies = {}
915 p1 = ctx.p1()
896 p1 = ctx.p1()
916 p2 = ctx.p2()
897 p2 = ctx.p2()
917 narrowmatch = ctx._repo.narrowmatch()
898 narrowmatch = ctx._repo.narrowmatch()
918 for dst in ctx.files():
899 for dst in ctx.files():
919 if not narrowmatch(dst) or dst not in ctx:
900 if not narrowmatch(dst) or dst not in ctx:
920 continue
901 continue
921 copied = ctx[dst].renamed()
902 copied = ctx[dst].renamed()
922 if not copied:
903 if not copied:
923 continue
904 continue
924 src, srcnode = copied
905 src, srcnode = copied
925 if src in p1 and p1[src].filenode() == srcnode:
906 if src in p1 and p1[src].filenode() == srcnode:
926 p1copies[dst] = src
907 p1copies[dst] = src
927 elif src in p2 and p2[src].filenode() == srcnode:
908 elif src in p2 and p2[src].filenode() == srcnode:
928 p2copies[dst] = src
909 p2copies[dst] = src
929 return p1copies, p2copies
910 return p1copies, p2copies
930
911
931
912
932 def encodecopies(files, copies):
913 def encodecopies(files, copies):
933 items = []
914 items = []
934 for i, dst in enumerate(files):
915 for i, dst in enumerate(files):
935 if dst in copies:
916 if dst in copies:
936 items.append(b'%d\0%s' % (i, copies[dst]))
917 items.append(b'%d\0%s' % (i, copies[dst]))
937 if len(items) != len(copies):
918 if len(items) != len(copies):
938 raise error.ProgrammingError(
919 raise error.ProgrammingError(
939 b'some copy targets missing from file list'
920 b'some copy targets missing from file list'
940 )
921 )
941 return b"\n".join(items)
922 return b"\n".join(items)
942
923
943
924
944 def decodecopies(files, data):
925 def decodecopies(files, data):
945 try:
926 try:
946 copies = {}
927 copies = {}
947 if not data:
928 if not data:
948 return copies
929 return copies
949 for l in data.split(b'\n'):
930 for l in data.split(b'\n'):
950 strindex, src = l.split(b'\0')
931 strindex, src = l.split(b'\0')
951 i = int(strindex)
932 i = int(strindex)
952 dst = files[i]
933 dst = files[i]
953 copies[dst] = src
934 copies[dst] = src
954 return copies
935 return copies
955 except (ValueError, IndexError):
936 except (ValueError, IndexError):
956 # Perhaps someone had chosen the same key name (e.g. "p1copies") and
937 # Perhaps someone had chosen the same key name (e.g. "p1copies") and
957 # used different syntax for the value.
938 # used different syntax for the value.
958 return None
939 return None
959
940
960
941
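# A round-trip sketch of the index-based encoding above, with toy values:
#
#   files = [b'a', b'b', b'c']               # i.e. sorted(ctx.files())
#   data = encodecopies(files, {b'b': b'a'})
#   data == b'1\x00a'                        # "file #1 was copied from a"
#   decodecopies(files, data) == {b'b': b'a'}
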
961 def encodefileindices(files, subset):
942 def encodefileindices(files, subset):
962 subset = set(subset)
943 subset = set(subset)
963 indices = []
944 indices = []
964 for i, f in enumerate(files):
945 for i, f in enumerate(files):
965 if f in subset:
946 if f in subset:
966 indices.append(b'%d' % i)
947 indices.append(b'%d' % i)
967 return b'\n'.join(indices)
948 return b'\n'.join(indices)
968
949
969
950
970 def decodefileindices(files, data):
951 def decodefileindices(files, data):
971 try:
952 try:
972 subset = []
953 subset = []
973 if not data:
954 if not data:
974 return subset
955 return subset
975 for strindex in data.split(b'\n'):
956 for strindex in data.split(b'\n'):
976 i = int(strindex)
957 i = int(strindex)
977 if i < 0 or i >= len(files):
958 if i < 0 or i >= len(files):
978 return None
959 return None
979 subset.append(files[i])
960 subset.append(files[i])
980 return subset
961 return subset
981 except (ValueError, IndexError):
962 except (ValueError, IndexError):
982 # Perhaps someone had chosen the same key name (e.g. "added") and
963 # Perhaps someone had chosen the same key name (e.g. "added") and
983 # used different syntax for the value.
964 # used different syntax for the value.
984 return None
965 return None
985
966
986
967
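# The same idea for plain file subsets, with toy values:
#
#   files = [b'a', b'b', b'c']
#   encodefileindices(files, [b'c', b'a']) == b'0\n2'
#   decodefileindices(files, b'0\n2') == [b'a', b'c']
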
987 def _getsidedata(srcrepo, rev):
968 def _getsidedata(srcrepo, rev):
988 ctx = srcrepo[rev]
969 ctx = srcrepo[rev]
989 filescopies = computechangesetcopies(ctx)
970 filescopies = computechangesetcopies(ctx)
990 filesadded = computechangesetfilesadded(ctx)
971 filesadded = computechangesetfilesadded(ctx)
991 filesremoved = computechangesetfilesremoved(ctx)
972 filesremoved = computechangesetfilesremoved(ctx)
992 sidedata = {}
973 sidedata = {}
993 if any([filescopies, filesadded, filesremoved]):
974 if any([filescopies, filesadded, filesremoved]):
994 sortedfiles = sorted(ctx.files())
975 sortedfiles = sorted(ctx.files())
995 p1copies, p2copies = filescopies
976 p1copies, p2copies = filescopies
996 p1copies = encodecopies(sortedfiles, p1copies)
977 p1copies = encodecopies(sortedfiles, p1copies)
997 p2copies = encodecopies(sortedfiles, p2copies)
978 p2copies = encodecopies(sortedfiles, p2copies)
998 filesadded = encodefileindices(sortedfiles, filesadded)
979 filesadded = encodefileindices(sortedfiles, filesadded)
999 filesremoved = encodefileindices(sortedfiles, filesremoved)
980 filesremoved = encodefileindices(sortedfiles, filesremoved)
1000 if p1copies:
981 if p1copies:
1001 sidedata[sidedatamod.SD_P1COPIES] = p1copies
982 sidedata[sidedatamod.SD_P1COPIES] = p1copies
1002 if p2copies:
983 if p2copies:
1003 sidedata[sidedatamod.SD_P2COPIES] = p2copies
984 sidedata[sidedatamod.SD_P2COPIES] = p2copies
1004 if filesadded:
985 if filesadded:
1005 sidedata[sidedatamod.SD_FILESADDED] = filesadded
986 sidedata[sidedatamod.SD_FILESADDED] = filesadded
1006 if filesremoved:
987 if filesremoved:
1007 sidedata[sidedatamod.SD_FILESREMOVED] = filesremoved
988 sidedata[sidedatamod.SD_FILESREMOVED] = filesremoved
1008 return sidedata
989 return sidedata
1009
990
1010
991
1011 def getsidedataadder(srcrepo, destrepo):
992 def getsidedataadder(srcrepo, destrepo):
1012 use_w = srcrepo.ui.configbool(b'experimental', b'worker.repository-upgrade')
993 use_w = srcrepo.ui.configbool(b'experimental', b'worker.repository-upgrade')
1013 if pycompat.iswindows or not use_w:
994 if pycompat.iswindows or not use_w:
1014 return _get_simple_sidedata_adder(srcrepo, destrepo)
995 return _get_simple_sidedata_adder(srcrepo, destrepo)
1015 else:
996 else:
1016 return _get_worker_sidedata_adder(srcrepo, destrepo)
997 return _get_worker_sidedata_adder(srcrepo, destrepo)
1017
998
1018
999
1019 def _sidedata_worker(srcrepo, revs_queue, sidedata_queue, tokens):
1000 def _sidedata_worker(srcrepo, revs_queue, sidedata_queue, tokens):
1020 """The function used by worker precomputing sidedata
1001 """The function used by worker precomputing sidedata
1021
1002
1022 It read an input queue containing revision numbers
1003 It read an input queue containing revision numbers
1023 It write in an output queue containing (rev, <sidedata-map>)
1004 It write in an output queue containing (rev, <sidedata-map>)
1024
1005
1025 The `None` input value is used as a stop signal.
1006 The `None` input value is used as a stop signal.
1026
1007
1027 The `tokens` semaphore is user to avoid having too many unprocessed
1008 The `tokens` semaphore is user to avoid having too many unprocessed
1028 entries. The workers needs to acquire one token before fetching a task.
1009 entries. The workers needs to acquire one token before fetching a task.
1029 They will be released by the consumer of the produced data.
1010 They will be released by the consumer of the produced data.
1030 """
1011 """
1031 tokens.acquire()
1012 tokens.acquire()
1032 rev = revs_queue.get()
1013 rev = revs_queue.get()
1033 while rev is not None:
1014 while rev is not None:
1034 data = _getsidedata(srcrepo, rev)
1015 data = _getsidedata(srcrepo, rev)
1035 sidedata_queue.put((rev, data))
1016 sidedata_queue.put((rev, data))
1036 tokens.acquire()
1017 tokens.acquire()
1037 rev = revs_queue.get()
1018 rev = revs_queue.get()
1038 # processing of `None` is completed, release the token.
1019 # processing of `None` is completed, release the token.
1039 tokens.release()
1020 tokens.release()
1040
1021
1041
1022
1042 BUFF_PER_WORKER = 50
1023 BUFF_PER_WORKER = 50
1043
1024
1044
1025
1045 def _get_worker_sidedata_adder(srcrepo, destrepo):
1026 def _get_worker_sidedata_adder(srcrepo, destrepo):
1046 """The parallel version of the sidedata computation
1027 """The parallel version of the sidedata computation
1047
1028
1048 This code spawns a pool of workers that precompute a buffer of sidedata
1029 This code spawns a pool of workers that precompute a buffer of sidedata
1049 before we actually need it"""
1030 before we actually need it"""
1050 # avoid circular import copies -> scmutil -> worker -> copies
1031 # avoid circular import copies -> scmutil -> worker -> copies
1051 from . import worker
1032 from . import worker
1052
1033
1053 nbworkers = worker._numworkers(srcrepo.ui)
1034 nbworkers = worker._numworkers(srcrepo.ui)
1054
1035
1055 tokens = multiprocessing.BoundedSemaphore(nbworkers * BUFF_PER_WORKER)
1036 tokens = multiprocessing.BoundedSemaphore(nbworkers * BUFF_PER_WORKER)
1056 revsq = multiprocessing.Queue()
1037 revsq = multiprocessing.Queue()
1057 sidedataq = multiprocessing.Queue()
1038 sidedataq = multiprocessing.Queue()
1058
1039
1059 assert srcrepo.filtername is None
1040 assert srcrepo.filtername is None
1060 # queue all tasks beforehand, revision numbers are small and it makes
1041 # queue all tasks beforehand, revision numbers are small and it makes
1061 # synchronisation simpler
1042 # synchronisation simpler
1062 #
1043 #
1063 # Since the computation for each node can be quite expensive, the overhead
1044 # Since the computation for each node can be quite expensive, the overhead
1064 # of using a single queue is not relevant. In practice, most computations
1045 # of using a single queue is not relevant. In practice, most computations
1065 # are fast but some are very expensive and dominate all the other smaller
1046 # are fast but some are very expensive and dominate all the other smaller
1066 # costs.
1047 # costs.
1067 for r in srcrepo.changelog.revs():
1048 for r in srcrepo.changelog.revs():
1068 revsq.put(r)
1049 revsq.put(r)
1069 # queue the "no more tasks" markers
1050 # queue the "no more tasks" markers
1070 for i in range(nbworkers):
1051 for i in range(nbworkers):
1071 revsq.put(None)
1052 revsq.put(None)
1072
1053
1073 allworkers = []
1054 allworkers = []
1074 for i in range(nbworkers):
1055 for i in range(nbworkers):
1075 args = (srcrepo, revsq, sidedataq, tokens)
1056 args = (srcrepo, revsq, sidedataq, tokens)
1076 w = multiprocessing.Process(target=_sidedata_worker, args=args)
1057 w = multiprocessing.Process(target=_sidedata_worker, args=args)
1077 allworkers.append(w)
1058 allworkers.append(w)
1078 w.start()
1059 w.start()
1079
1060
1080 # dictionary to store results for revisions higher than the one we are
1061 # dictionary to store results for revisions higher than the one we are
1081 # looking for. For example, if we need the sidedatamap for 42 and 43 is
1062 # looking for. For example, if we need the sidedatamap for 42 and 43 is
1082 # received, we shelve 43 for later use.
1063 # received, we shelve 43 for later use.
1083 staging = {}
1064 staging = {}
1084
1065
1085 def sidedata_companion(revlog, rev):
1066 def sidedata_companion(revlog, rev):
1086 sidedata = {}
1067 sidedata = {}
1087 if util.safehasattr(revlog, b'filteredrevs'): # this is a changelog
1068 if util.safehasattr(revlog, b'filteredrevs'): # this is a changelog
1088 # Was the data previously shelved?
1069 # Was the data previously shelved?
1089 sidedata = staging.pop(rev, None)
1070 sidedata = staging.pop(rev, None)
1090 if sidedata is None:
1071 if sidedata is None:
1091 # look at the queued results until we find the one we are looking
1072 # look at the queued results until we find the one we are looking
1092 # for (shelve the other ones)
1073 # for (shelve the other ones)
1093 r, sidedata = sidedataq.get()
1074 r, sidedata = sidedataq.get()
1094 while r != rev:
1075 while r != rev:
1095 staging[r] = sidedata
1076 staging[r] = sidedata
1096 r, sidedata = sidedataq.get()
1077 r, sidedata = sidedataq.get()
1097 tokens.release()
1078 tokens.release()
1098 return False, (), sidedata
1079 return False, (), sidedata
1099
1080
1100 return sidedata_companion
1081 return sidedata_companion
1101
1082
1102
1083
1103 def _get_simple_sidedata_adder(srcrepo, destrepo):
1084 def _get_simple_sidedata_adder(srcrepo, destrepo):
1104 """The simple version of the sidedata computation
1085 """The simple version of the sidedata computation
1105
1086
1106 It just computes it in the same thread, on request"""
1087 It just computes it in the same thread, on request"""
1107
1088
1108 def sidedatacompanion(revlog, rev):
1089 def sidedatacompanion(revlog, rev):
1109 sidedata = {}
1090 sidedata = {}
1110 if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog
1091 if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog
1111 sidedata = _getsidedata(srcrepo, rev)
1092 sidedata = _getsidedata(srcrepo, rev)
1112 return False, (), sidedata
1093 return False, (), sidedata
1113
1094
1114 return sidedatacompanion
1095 return sidedatacompanion
1115
1096
1116
1097
1117 def getsidedataremover(srcrepo, destrepo):
1098 def getsidedataremover(srcrepo, destrepo):
1118 def sidedatacompanion(revlog, rev):
1099 def sidedatacompanion(revlog, rev):
1119 f = ()
1100 f = ()
1120 if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog
1101 if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog
1121 if revlog.flags(rev) & REVIDX_SIDEDATA:
1102 if revlog.flags(rev) & REVIDX_SIDEDATA:
1122 f = (
1103 f = (
1123 sidedatamod.SD_P1COPIES,
1104 sidedatamod.SD_P1COPIES,
1124 sidedatamod.SD_P2COPIES,
1105 sidedatamod.SD_P2COPIES,
1125 sidedatamod.SD_FILESADDED,
1106 sidedatamod.SD_FILESADDED,
1126 sidedatamod.SD_FILESREMOVED,
1107 sidedatamod.SD_FILESREMOVED,
1127 )
1108 )
1128 return False, f, {}
1109 return False, f, {}
1129
1110
1130 return sidedatacompanion
1111 return sidedatacompanion
@@ -1,2713 +1,2713 b''
1 # merge.py - directory-level update/merge handling for Mercurial
1 # merge.py - directory-level update/merge handling for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import shutil
11 import shutil
12 import stat
12 import stat
13 import struct
13 import struct
14
14
15 from .i18n import _
15 from .i18n import _
16 from .node import (
16 from .node import (
17 addednodeid,
17 addednodeid,
18 bin,
18 bin,
19 hex,
19 hex,
20 modifiednodeid,
20 modifiednodeid,
21 nullhex,
21 nullhex,
22 nullid,
22 nullid,
23 nullrev,
23 nullrev,
24 )
24 )
25 from .pycompat import delattr
25 from .pycompat import delattr
26 from .thirdparty import attr
26 from .thirdparty import attr
27 from . import (
27 from . import (
28 copies,
28 copies,
29 encoding,
29 encoding,
30 error,
30 error,
31 filemerge,
31 filemerge,
32 match as matchmod,
32 match as matchmod,
33 obsutil,
33 obsutil,
34 pathutil,
34 pathutil,
35 pycompat,
35 pycompat,
36 scmutil,
36 scmutil,
37 subrepoutil,
37 subrepoutil,
38 util,
38 util,
39 worker,
39 worker,
40 )
40 )
41 from .utils import hashutil
41 from .utils import hashutil
42
42
43 _pack = struct.pack
43 _pack = struct.pack
44 _unpack = struct.unpack
44 _unpack = struct.unpack
45
45
46
46
47 def _droponode(data):
47 def _droponode(data):
48 # used for compatibility for v1
48 # used for compatibility for v1
49 bits = data.split(b'\0')
49 bits = data.split(b'\0')
50 bits = bits[:-2] + bits[-1:]
50 bits = bits[:-2] + bits[-1:]
51 return b'\0'.join(bits)
51 return b'\0'.join(bits)
52
52
53
53
54 # Merge state record types. See ``mergestate`` docs for more.
54 # Merge state record types. See ``mergestate`` docs for more.
55 RECORD_LOCAL = b'L'
55 RECORD_LOCAL = b'L'
56 RECORD_OTHER = b'O'
56 RECORD_OTHER = b'O'
57 RECORD_MERGED = b'F'
57 RECORD_MERGED = b'F'
58 RECORD_CHANGEDELETE_CONFLICT = b'C'
58 RECORD_CHANGEDELETE_CONFLICT = b'C'
59 RECORD_MERGE_DRIVER_MERGE = b'D'
59 RECORD_MERGE_DRIVER_MERGE = b'D'
60 RECORD_PATH_CONFLICT = b'P'
60 RECORD_PATH_CONFLICT = b'P'
61 RECORD_MERGE_DRIVER_STATE = b'm'
61 RECORD_MERGE_DRIVER_STATE = b'm'
62 RECORD_FILE_VALUES = b'f'
62 RECORD_FILE_VALUES = b'f'
63 RECORD_LABELS = b'l'
63 RECORD_LABELS = b'l'
64 RECORD_OVERRIDE = b't'
64 RECORD_OVERRIDE = b't'
65 RECORD_UNSUPPORTED_MANDATORY = b'X'
65 RECORD_UNSUPPORTED_MANDATORY = b'X'
66 RECORD_UNSUPPORTED_ADVISORY = b'x'
66 RECORD_UNSUPPORTED_ADVISORY = b'x'
67
67
68 MERGE_DRIVER_STATE_UNMARKED = b'u'
68 MERGE_DRIVER_STATE_UNMARKED = b'u'
69 MERGE_DRIVER_STATE_MARKED = b'm'
69 MERGE_DRIVER_STATE_MARKED = b'm'
70 MERGE_DRIVER_STATE_SUCCESS = b's'
70 MERGE_DRIVER_STATE_SUCCESS = b's'
71
71
72 MERGE_RECORD_UNRESOLVED = b'u'
72 MERGE_RECORD_UNRESOLVED = b'u'
73 MERGE_RECORD_RESOLVED = b'r'
73 MERGE_RECORD_RESOLVED = b'r'
74 MERGE_RECORD_UNRESOLVED_PATH = b'pu'
74 MERGE_RECORD_UNRESOLVED_PATH = b'pu'
75 MERGE_RECORD_RESOLVED_PATH = b'pr'
75 MERGE_RECORD_RESOLVED_PATH = b'pr'
76 MERGE_RECORD_DRIVER_RESOLVED = b'd'
76 MERGE_RECORD_DRIVER_RESOLVED = b'd'
77
77
78 ACTION_FORGET = b'f'
78 ACTION_FORGET = b'f'
79 ACTION_REMOVE = b'r'
79 ACTION_REMOVE = b'r'
80 ACTION_ADD = b'a'
80 ACTION_ADD = b'a'
81 ACTION_GET = b'g'
81 ACTION_GET = b'g'
82 ACTION_PATH_CONFLICT = b'p'
82 ACTION_PATH_CONFLICT = b'p'
83 ACTION_PATH_CONFLICT_RESOLVE = b'pr'
83 ACTION_PATH_CONFLICT_RESOLVE = b'pr'
84 ACTION_ADD_MODIFIED = b'am'
84 ACTION_ADD_MODIFIED = b'am'
85 ACTION_CREATED = b'c'
85 ACTION_CREATED = b'c'
86 ACTION_DELETED_CHANGED = b'dc'
86 ACTION_DELETED_CHANGED = b'dc'
87 ACTION_CHANGED_DELETED = b'cd'
87 ACTION_CHANGED_DELETED = b'cd'
88 ACTION_MERGE = b'm'
88 ACTION_MERGE = b'm'
89 ACTION_LOCAL_DIR_RENAME_GET = b'dg'
89 ACTION_LOCAL_DIR_RENAME_GET = b'dg'
90 ACTION_DIR_RENAME_MOVE_LOCAL = b'dm'
90 ACTION_DIR_RENAME_MOVE_LOCAL = b'dm'
91 ACTION_KEEP = b'k'
91 ACTION_KEEP = b'k'
92 ACTION_EXEC = b'e'
92 ACTION_EXEC = b'e'
93 ACTION_CREATED_MERGE = b'cm'
93 ACTION_CREATED_MERGE = b'cm'
94
94
95
95
class mergestate(object):
    '''track 3-way merge state of individual files

    The merge state is stored on disk when needed. Two files are used: one with
    an old format (version 1), and one with a new format (version 2). Version 2
    stores a superset of the data in version 1, and may gain new kinds of
    records in the future. For more about the new format, see the documentation
    for `_readrecordsv2`.

    Each record can contain arbitrary content and has an associated type. This
    `type` should be a letter. If `type` is uppercase, the record is mandatory:
    versions of Mercurial that don't support it should abort. If `type` is
    lowercase, the record can be safely ignored.

    Currently known records:

    L: the node of the "local" part of the merge (hexified version)
    O: the node of the "other" part of the merge (hexified version)
    F: an entry for a file to be merged
    C: a change/delete or delete/change conflict
    D: a file that the external merge driver will merge internally
       (experimental)
    P: a path conflict (file vs directory)
    m: the external merge driver defined for this merge plus its run state
       (experimental)
    f: a (filename, dictionary) tuple of optional values for a given file
    X: unsupported mandatory record type (used in tests)
    x: unsupported advisory record type (used in tests)
    l: the labels for the parts of the merge.

    Merge driver run states (experimental):
    u: driver-resolved files unmarked -- needs to be run next time we're about
       to resolve or commit
    m: driver-resolved files marked -- only needs to be run before commit
    s: success/skipped -- does not need to be run any more

    Merge record states (stored in self._state, indexed by filename):
    u: unresolved conflict
    r: resolved conflict
    pu: unresolved path conflict (file conflicts with directory)
    pr: resolved path conflict
    d: driver-resolved conflict

    The resolve command transitions between 'u' and 'r' for conflicts and
    'pu' and 'pr' for path conflicts.
    '''
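
    # A minimal usage sketch (hypothetical, not part of the original
    # module): assumes a merge is already in progress in `repo` and that
    # `wctx` is the working context. It follows the resolve flow described
    # in the docstring above ('u' -> 'r'):
    #
    #   ms = mergestate.read(repo)
    #   for f in list(ms.unresolved()):
    #       complete, r = ms.preresolve(f, wctx)
    #       if not complete:
    #           ms.resolve(f, wctx)
    #   ms.commit()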

    statepathv1 = b'merge/state'
    statepathv2 = b'merge/state2'

    @staticmethod
    def clean(repo, node=None, other=None, labels=None):
        """Initialize a brand new merge state, removing any existing state on
        disk."""
        ms = mergestate(repo)
        ms.reset(node, other, labels)
        return ms

    @staticmethod
    def read(repo):
        """Initialize the merge state, reading it from disk."""
        ms = mergestate(repo)
        ms._read()
        return ms

    def __init__(self, repo):
        """Initialize the merge state.

        Do not use this directly! Instead call read() or clean()."""
        self._repo = repo
        self._dirty = False
        self._labels = None

    def reset(self, node=None, other=None, labels=None):
        self._state = {}
        self._stateextras = {}
        self._local = None
        self._other = None
        self._labels = labels
        for var in ('localctx', 'otherctx'):
            if var in vars(self):
                delattr(self, var)
        if node:
            self._local = node
            self._other = other
        self._readmergedriver = None
        if self.mergedriver:
            self._mdstate = MERGE_DRIVER_STATE_SUCCESS
        else:
            self._mdstate = MERGE_DRIVER_STATE_UNMARKED
        shutil.rmtree(self._repo.vfs.join(b'merge'), True)
        self._results = {}
        self._dirty = False

    def _read(self):
        """Analyse each record content to restore a serialized state from disk

        This function processes "record" entries produced by the
        de-serialization of the on-disk file.
        """
        self._state = {}
        self._stateextras = {}
        self._local = None
        self._other = None
        for var in ('localctx', 'otherctx'):
            if var in vars(self):
                delattr(self, var)
        self._readmergedriver = None
        self._mdstate = MERGE_DRIVER_STATE_SUCCESS
        unsupported = set()
        records = self._readrecords()
        for rtype, record in records:
            if rtype == RECORD_LOCAL:
                self._local = bin(record)
            elif rtype == RECORD_OTHER:
                self._other = bin(record)
            elif rtype == RECORD_MERGE_DRIVER_STATE:
                bits = record.split(b'\0', 1)
                mdstate = bits[1]
                if len(mdstate) != 1 or mdstate not in (
                    MERGE_DRIVER_STATE_UNMARKED,
                    MERGE_DRIVER_STATE_MARKED,
                    MERGE_DRIVER_STATE_SUCCESS,
                ):
                    # the merge driver should be idempotent, so just rerun it
                    mdstate = MERGE_DRIVER_STATE_UNMARKED

                self._readmergedriver = bits[0]
                self._mdstate = mdstate
            elif rtype in (
                RECORD_MERGED,
                RECORD_CHANGEDELETE_CONFLICT,
                RECORD_PATH_CONFLICT,
                RECORD_MERGE_DRIVER_MERGE,
            ):
                bits = record.split(b'\0')
                self._state[bits[0]] = bits[1:]
            elif rtype == RECORD_FILE_VALUES:
                filename, rawextras = record.split(b'\0', 1)
                extraparts = rawextras.split(b'\0')
                extras = {}
                i = 0
                while i < len(extraparts):
                    extras[extraparts[i]] = extraparts[i + 1]
                    i += 2

                self._stateextras[filename] = extras
            elif rtype == RECORD_LABELS:
                labels = record.split(b'\0', 2)
                self._labels = [l for l in labels if len(l) > 0]
            elif not rtype.islower():
                unsupported.add(rtype)
        self._results = {}
        self._dirty = False

        if unsupported:
            raise error.UnsupportedMergeRecords(unsupported)

    def _readrecords(self):
        """Read merge state from disk and return a list of record (TYPE, data)

        We read data from both v1 and v2 files and decide which one to use.

        V1 has been used by versions prior to 2.9.1 and contains less data
        than v2. We read both versions and check that no data in v2
        contradicts v1. If there is no contradiction we can safely assume
        that both v1 and v2 were written at the same time and use the extra
        data in v2. If there is a contradiction we ignore the v2 content, as
        we assume an old version of Mercurial has overwritten the mergestate
        file and left an old v2 file around.

        returns list of record [(TYPE, data), ...]"""
        v1records = self._readrecordsv1()
        v2records = self._readrecordsv2()
        if self._v1v2match(v1records, v2records):
            return v2records
        else:
            # v1 file is newer than v2 file, use it
            # we have to infer the "other" changeset of the merge
            # we cannot do better than that with v1 of the format
            mctx = self._repo[None].parents()[-1]
            v1records.append((RECORD_OTHER, mctx.hex()))
            # add placeholder "other" file node information
            # nobody is using it yet so we don't need to fetch the data
            # if mctx was wrong `mctx[bits[-2]]` may fail.
            for idx, r in enumerate(v1records):
                if r[0] == RECORD_MERGED:
                    bits = r[1].split(b'\0')
                    bits.insert(-2, b'')
                    v1records[idx] = (r[0], b'\0'.join(bits))
            return v1records

    def _v1v2match(self, v1records, v2records):
        oldv2 = set()  # old format version of v2 record
        for rec in v2records:
            if rec[0] == RECORD_LOCAL:
                oldv2.add(rec)
            elif rec[0] == RECORD_MERGED:
                # drop the onode data (not contained in v1)
                oldv2.add((RECORD_MERGED, _droponode(rec[1])))
        for rec in v1records:
            if rec not in oldv2:
                return False
        else:
            return True

    def _readrecordsv1(self):
        """read the on-disk merge state from the version 1 file

        returns list of record [(TYPE, data), ...]

        Note: the "F" data from this file are one entry short
        (no "other file node" entry)
        """
        records = []
        try:
            f = self._repo.vfs(self.statepathv1)
            for i, l in enumerate(f):
                if i == 0:
                    records.append((RECORD_LOCAL, l[:-1]))
                else:
                    records.append((RECORD_MERGED, l[:-1]))
            f.close()
        except IOError as err:
            if err.errno != errno.ENOENT:
                raise
        return records

    def _readrecordsv2(self):
        """read the on-disk merge state from the version 2 file

        This format is a list of arbitrary records of the form:

        [type][length][content]

        `type` is a single character, `length` is a 4 byte integer, and
        `content` is an arbitrary byte sequence of length `length`.

        Mercurial versions prior to 3.7 have a bug where if there are
        unsupported mandatory merge records, attempting to clear out the merge
        state with hg update --clean or similar aborts. The 't' record type
        works around that by writing out what those versions treat as an
        advisory record, but later versions interpret as special: the first
        character is the 'real' record type and everything onwards is the data.

        Returns list of records [(TYPE, data), ...]."""
        records = []
        try:
            f = self._repo.vfs(self.statepathv2)
            data = f.read()
            off = 0
            end = len(data)
            while off < end:
                rtype = data[off : off + 1]
                off += 1
                length = _unpack(b'>I', data[off : (off + 4)])[0]
                off += 4
                record = data[off : (off + length)]
                off += length
                if rtype == RECORD_OVERRIDE:
                    rtype, record = record[0:1], record[1:]
                records.append((rtype, record))
            f.close()
        except IOError as err:
            if err.errno != errno.ENOENT:
                raise
        return records
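
    # Illustrative sketch (not part of the original module): decoding one
    # frame of the v2 format by hand with the stdlib struct module. The
    # record bytes used here are hypothetical.
    #
    #   >>> import struct
    #   >>> frame = b'l' + struct.pack(b'>I', 11) + b'local\x00other'
    #   >>> rtype = frame[0:1]
    #   >>> length = struct.unpack(b'>I', frame[1:5])[0]
    #   >>> rtype, frame[5 : 5 + length]
    #   (b'l', b'local\x00other')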

    @util.propertycache
    def mergedriver(self):
        # protect against the following:
        # - A configures a malicious merge driver in their hgrc, then
        #   pauses the merge
        # - A edits their hgrc to remove references to the merge driver
        # - A gives a copy of their entire repo, including .hg, to B
        # - B inspects .hgrc and finds it to be clean
        # - B then continues the merge and the malicious merge driver
        #   gets invoked
        configmergedriver = self._repo.ui.config(
            b'experimental', b'mergedriver'
        )
        if (
            self._readmergedriver is not None
            and self._readmergedriver != configmergedriver
        ):
            raise error.ConfigError(
                _(b"merge driver changed since merge started"),
                hint=_(b"revert merge driver change or abort merge"),
            )

        return configmergedriver
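
    # For reference, the setting consulted above lives in hgrc. The value
    # shown is purely hypothetical -- the driver spec format is defined
    # outside this module:
    #
    #   [experimental]
    #   mergedriver = python:mydriver.py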

    @util.propertycache
    def localctx(self):
        if self._local is None:
            msg = b"localctx accessed but self._local isn't set"
            raise error.ProgrammingError(msg)
        return self._repo[self._local]

    @util.propertycache
    def otherctx(self):
        if self._other is None:
            msg = b"otherctx accessed but self._other isn't set"
            raise error.ProgrammingError(msg)
        return self._repo[self._other]

    def active(self):
        """Whether mergestate is active.

        Returns True if there appears to be mergestate. This is a rough proxy
        for "is a merge in progress."
        """
        # Check local variables before looking at filesystem for performance
        # reasons.
        return (
            bool(self._local)
            or bool(self._state)
            or self._repo.vfs.exists(self.statepathv1)
            or self._repo.vfs.exists(self.statepathv2)
        )

    def commit(self):
        """Write current state on disk (if necessary)"""
        if self._dirty:
            records = self._makerecords()
            self._writerecords(records)
            self._dirty = False

    def _makerecords(self):
        records = []
        records.append((RECORD_LOCAL, hex(self._local)))
        records.append((RECORD_OTHER, hex(self._other)))
        if self.mergedriver:
            records.append(
                (
                    RECORD_MERGE_DRIVER_STATE,
                    b'\0'.join([self.mergedriver, self._mdstate]),
                )
            )
        # Write out state items. In all cases, the value of the state map entry
        # is written as the contents of the record. The record type depends on
        # the type of state that is stored, and capital-letter records are used
        # to prevent older versions of Mercurial that do not support the feature
        # from loading them.
        for filename, v in pycompat.iteritems(self._state):
            if v[0] == MERGE_RECORD_DRIVER_RESOLVED:
                # Driver-resolved merge. These are stored in 'D' records.
                records.append(
                    (RECORD_MERGE_DRIVER_MERGE, b'\0'.join([filename] + v))
                )
            elif v[0] in (
                MERGE_RECORD_UNRESOLVED_PATH,
                MERGE_RECORD_RESOLVED_PATH,
            ):
                # Path conflicts. These are stored in 'P' records. The current
                # resolution state ('pu' or 'pr') is stored within the record.
                records.append(
                    (RECORD_PATH_CONFLICT, b'\0'.join([filename] + v))
                )
            elif v[1] == nullhex or v[6] == nullhex:
                # Change/Delete or Delete/Change conflicts. These are stored in
                # 'C' records. v[1] is the local file, and is nullhex when the
                # file is deleted locally ('dc'). v[6] is the remote file, and
                # is nullhex when the file is deleted remotely ('cd').
                records.append(
                    (RECORD_CHANGEDELETE_CONFLICT, b'\0'.join([filename] + v))
                )
            else:
                # Normal files. These are stored in 'F' records.
                records.append((RECORD_MERGED, b'\0'.join([filename] + v)))
        for filename, extras in sorted(pycompat.iteritems(self._stateextras)):
            rawextras = b'\0'.join(
                b'%s\0%s' % (k, v) for k, v in pycompat.iteritems(extras)
            )
            records.append(
                (RECORD_FILE_VALUES, b'%s\0%s' % (filename, rawextras))
            )
        if self._labels is not None:
            labels = b'\0'.join(self._labels)
            records.append((RECORD_LABELS, labels))
        return records

    def _writerecords(self, records):
        """Write current state on disk (both v1 and v2)"""
        self._writerecordsv1(records)
        self._writerecordsv2(records)

    def _writerecordsv1(self, records):
        """Write current state on disk in a version 1 file"""
        f = self._repo.vfs(self.statepathv1, b'wb')
        irecords = iter(records)
        lrecords = next(irecords)
        assert lrecords[0] == RECORD_LOCAL
        f.write(hex(self._local) + b'\n')
        for rtype, data in irecords:
            if rtype == RECORD_MERGED:
                f.write(b'%s\n' % _droponode(data))
        f.close()

    def _writerecordsv2(self, records):
        """Write current state on disk in a version 2 file

        See the docstring for _readrecordsv2 for why we use 't'."""
        # these are the records that all version 2 clients can read
        allowlist = (RECORD_LOCAL, RECORD_OTHER, RECORD_MERGED)
        f = self._repo.vfs(self.statepathv2, b'wb')
        for key, data in records:
            assert len(key) == 1
            if key not in allowlist:
                key, data = RECORD_OVERRIDE, b'%s%s' % (key, data)
            format = b'>sI%is' % len(data)
            f.write(_pack(format, key, len(data), data))
        f.close()

    @staticmethod
    def getlocalkey(path):
        """hash the path of a local file context for storage in the .hg/merge
        directory."""

        return hex(hashutil.sha1(path).digest())
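
    # Illustrative note: the key is simply the hex SHA-1 of the path bytes,
    # so the saved local version of a conflicting file b'foo/bar.txt'
    # (hypothetical name) would live at:
    #
    #   .hg/merge/<hex(sha1(b'foo/bar.txt'))>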

    def add(self, fcl, fco, fca, fd):
        """add a new (potentially?) conflicting file to the merge state
        fcl: file context for local,
        fco: file context for remote,
        fca: file context for ancestors,
        fd: file path of the resulting merge.

        note: also write the local version to the `.hg/merge` directory.
        """
        if fcl.isabsent():
            localkey = nullhex
        else:
            localkey = mergestate.getlocalkey(fcl.path())
            self._repo.vfs.write(b'merge/' + localkey, fcl.data())
        self._state[fd] = [
            MERGE_RECORD_UNRESOLVED,
            localkey,
            fcl.path(),
            fca.path(),
            hex(fca.filenode()),
            fco.path(),
            hex(fco.filenode()),
            fcl.flags(),
        ]
        self._stateextras[fd] = {b'ancestorlinknode': hex(fca.node())}
        self._dirty = True
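
    # For reference, the entry written into self._state above (and unpacked
    # in _resolve below) has this fixed layout:
    #
    #   [merge record state, local key, local path, ancestor path,
    #    ancestor filenode (hex), other path, other filenode (hex),
    #    local flags]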

    def addpath(self, path, frename, forigin):
        """add a new conflicting path to the merge state
        path: the path that conflicts
        frename: the filename the conflicting file was renamed to
        forigin: origin of the file ('l' or 'r' for local/remote)
        """
        self._state[path] = [MERGE_RECORD_UNRESOLVED_PATH, frename, forigin]
        self._dirty = True

    def __contains__(self, dfile):
        return dfile in self._state

    def __getitem__(self, dfile):
        return self._state[dfile][0]

    def __iter__(self):
        return iter(sorted(self._state))

    def files(self):
        return self._state.keys()

    def mark(self, dfile, state):
        self._state[dfile][0] = state
        self._dirty = True

    def mdstate(self):
        return self._mdstate

    def unresolved(self):
        """Obtain the paths of unresolved files."""

        for f, entry in pycompat.iteritems(self._state):
            if entry[0] in (
                MERGE_RECORD_UNRESOLVED,
                MERGE_RECORD_UNRESOLVED_PATH,
            ):
                yield f

    def driverresolved(self):
        """Obtain the paths of driver-resolved files."""

        for f, entry in self._state.items():
            if entry[0] == MERGE_RECORD_DRIVER_RESOLVED:
                yield f

    def extras(self, filename):
        return self._stateextras.setdefault(filename, {})

    def _resolve(self, preresolve, dfile, wctx):
        """rerun merge process for file path `dfile`"""
        if self[dfile] in (MERGE_RECORD_RESOLVED, MERGE_RECORD_DRIVER_RESOLVED):
            return True, 0
        stateentry = self._state[dfile]
        state, localkey, lfile, afile, anode, ofile, onode, flags = stateentry
        octx = self._repo[self._other]
        extras = self.extras(dfile)
        anccommitnode = extras.get(b'ancestorlinknode')
        if anccommitnode:
            actx = self._repo[anccommitnode]
        else:
            actx = None
        fcd = self._filectxorabsent(localkey, wctx, dfile)
        fco = self._filectxorabsent(onode, octx, ofile)
        # TODO: move this to filectxorabsent
        fca = self._repo.filectx(afile, fileid=anode, changectx=actx)
        # "premerge" x flags
        flo = fco.flags()
        fla = fca.flags()
        if b'x' in flags + flo + fla and b'l' not in flags + flo + fla:
            if fca.node() == nullid and flags != flo:
                if preresolve:
                    self._repo.ui.warn(
                        _(
                            b'warning: cannot merge flags for %s '
                            b'without common ancestor - keeping local flags\n'
                        )
                        % afile
                    )
            elif flags == fla:
                flags = flo
        if preresolve:
            # restore local
            if localkey != nullhex:
                f = self._repo.vfs(b'merge/' + localkey)
                wctx[dfile].write(f.read(), flags)
                f.close()
            else:
                wctx[dfile].remove(ignoremissing=True)
            complete, r, deleted = filemerge.premerge(
                self._repo,
                wctx,
                self._local,
                lfile,
                fcd,
                fco,
                fca,
                labels=self._labels,
            )
        else:
            complete, r, deleted = filemerge.filemerge(
                self._repo,
                wctx,
                self._local,
                lfile,
                fcd,
                fco,
                fca,
                labels=self._labels,
            )
        if r is None:
            # no real conflict
            del self._state[dfile]
            self._stateextras.pop(dfile, None)
            self._dirty = True
        elif not r:
            self.mark(dfile, MERGE_RECORD_RESOLVED)

        if complete:
            action = None
            if deleted:
                if fcd.isabsent():
                    # dc: local picked. Need to drop if present, which may
                    # happen on re-resolves.
                    action = ACTION_FORGET
                else:
                    # cd: remote picked (or otherwise deleted)
                    action = ACTION_REMOVE
            else:
                if fcd.isabsent():  # dc: remote picked
                    action = ACTION_GET
                elif fco.isabsent():  # cd: local picked
                    if dfile in self.localctx:
                        action = ACTION_ADD_MODIFIED
                    else:
                        action = ACTION_ADD
                # else: regular merges (no action necessary)
            self._results[dfile] = r, action

        return complete, r

    def _filectxorabsent(self, hexnode, ctx, f):
        if hexnode == nullhex:
            return filemerge.absentfilectx(ctx, f)
        else:
            return ctx[f]

    def preresolve(self, dfile, wctx):
        """run premerge process for dfile

        Returns whether the merge is complete, and the exit code."""
        return self._resolve(True, dfile, wctx)

    def resolve(self, dfile, wctx):
        """run merge process (assuming premerge was run) for dfile

        Returns the exit code of the merge."""
        return self._resolve(False, dfile, wctx)[1]

    def counts(self):
        """return counts for updated, merged and removed files in this
        session"""
        updated, merged, removed = 0, 0, 0
        for r, action in pycompat.itervalues(self._results):
            if r is None:
                updated += 1
            elif r == 0:
                if action == ACTION_REMOVE:
                    removed += 1
                else:
                    merged += 1
        return updated, merged, removed

    def unresolvedcount(self):
        """get unresolved count for this merge (persistent)"""
        return len(list(self.unresolved()))

    def actions(self):
        """return lists of actions to perform on the dirstate"""
        actions = {
            ACTION_REMOVE: [],
            ACTION_FORGET: [],
            ACTION_ADD: [],
            ACTION_ADD_MODIFIED: [],
            ACTION_GET: [],
        }
        for f, (r, action) in pycompat.iteritems(self._results):
            if action is not None:
                actions[action].append((f, None, b"merge result"))
        return actions

    def recordactions(self):
        """record remove/add/get actions in the dirstate"""
        branchmerge = self._repo.dirstate.p2() != nullid
        recordupdates(self._repo, self.actions(), branchmerge, None)

    def queueremove(self, f):
        """queues a file to be removed from the dirstate

        Meant for use by custom merge drivers."""
        self._results[f] = 0, ACTION_REMOVE

    def queueadd(self, f):
        """queues a file to be added to the dirstate

        Meant for use by custom merge drivers."""
        self._results[f] = 0, ACTION_ADD

    def queueget(self, f):
        """queues a file to be marked modified in the dirstate

        Meant for use by custom merge drivers."""
        self._results[f] = 0, ACTION_GET


def _getcheckunknownconfig(repo, section, name):
    config = repo.ui.config(section, name)
    valid = [b'abort', b'ignore', b'warn']
    if config not in valid:
        validstr = b', '.join([b"'" + v + b"'" for v in valid])
        raise error.ConfigError(
            _(b"%s.%s not valid ('%s' is none of %s)")
            % (section, name, config, validstr)
        )
    return config
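

# For reference, the settings validated above are read from hgrc by
# _checkunknownfiles() below; any value other than abort/ignore/warn
# raises the ConfigError. For example:
#
#   [merge]
#   checkunknown = warn
#   checkignored = abort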


def _checkunknownfile(repo, wctx, mctx, f, f2=None):
    if wctx.isinmemory():
        # Nothing to do in IMM because nothing in the "working copy" can be an
        # unknown file.
        #
        # Note that we should bail out here, not in ``_checkunknownfiles()``,
        # because that function does other useful work.
        return False

    if f2 is None:
        f2 = f
    return (
        repo.wvfs.audit.check(f)
        and repo.wvfs.isfileorlink(f)
        and repo.dirstate.normalize(f) not in repo.dirstate
        and mctx[f2].cmp(wctx[f])
    )


class _unknowndirschecker(object):
    """
    Look for any unknown files or directories that may have a path conflict
    with a file. If any path prefix of the file exists as a file or link,
    then it conflicts. If the file itself is a directory that contains any
    file that is not tracked, then it conflicts.

    Returns the shortest path at which a conflict occurs, or None if there is
    no conflict.
    """

    def __init__(self):
        # A set of paths known to be good. This prevents repeated checking of
        # dirs. It will be updated with any new dirs that are checked and found
        # to be safe.
        self._unknowndircache = set()

        # A set of paths that are known to be absent. This prevents repeated
        # checking of subdirectories that are known not to exist. It will be
        # updated with any new dirs that are checked and found to be absent.
        self._missingdircache = set()

    def __call__(self, repo, wctx, f):
        if wctx.isinmemory():
            # Nothing to do in IMM for the same reason as ``_checkunknownfile``.
            return False

        # Check for path prefixes that exist as unknown files.
        for p in reversed(list(pathutil.finddirs(f))):
            if p in self._missingdircache:
                return
            if p in self._unknowndircache:
                continue
            if repo.wvfs.audit.check(p):
                if (
                    repo.wvfs.isfileorlink(p)
                    and repo.dirstate.normalize(p) not in repo.dirstate
                ):
                    return p
                if not repo.wvfs.lexists(p):
                    self._missingdircache.add(p)
                    return
                self._unknowndircache.add(p)

        # Check if the file conflicts with a directory containing unknown files.
        if repo.wvfs.audit.check(f) and repo.wvfs.isdir(f):
            # Does the directory contain any files that are not in the dirstate?
            for p, dirs, files in repo.wvfs.walk(f):
                for fn in files:
                    relf = util.pconvert(repo.wvfs.reljoin(p, fn))
                    relf = repo.dirstate.normalize(relf, isknown=True)
                    if relf not in repo.dirstate:
                        return f
        return None


def _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce):
    """
    Considers any actions that care about the presence of conflicting unknown
    files. For some actions, the result is to abort; for others, it is to
    choose a different action.
    """
    fileconflicts = set()
    pathconflicts = set()
    warnconflicts = set()
    abortconflicts = set()
    unknownconfig = _getcheckunknownconfig(repo, b'merge', b'checkunknown')
    ignoredconfig = _getcheckunknownconfig(repo, b'merge', b'checkignored')
    pathconfig = repo.ui.configbool(
        b'experimental', b'merge.checkpathconflicts'
    )
    if not force:

        def collectconflicts(conflicts, config):
            if config == b'abort':
                abortconflicts.update(conflicts)
            elif config == b'warn':
                warnconflicts.update(conflicts)

        checkunknowndirs = _unknowndirschecker()
        for f, (m, args, msg) in pycompat.iteritems(actions):
            if m in (ACTION_CREATED, ACTION_DELETED_CHANGED):
                if _checkunknownfile(repo, wctx, mctx, f):
                    fileconflicts.add(f)
                elif pathconfig and f not in wctx:
                    path = checkunknowndirs(repo, wctx, f)
                    if path is not None:
                        pathconflicts.add(path)
            elif m == ACTION_LOCAL_DIR_RENAME_GET:
                if _checkunknownfile(repo, wctx, mctx, f, args[0]):
                    fileconflicts.add(f)

        allconflicts = fileconflicts | pathconflicts
        ignoredconflicts = {c for c in allconflicts if repo.dirstate._ignore(c)}
        unknownconflicts = allconflicts - ignoredconflicts
        collectconflicts(ignoredconflicts, ignoredconfig)
        collectconflicts(unknownconflicts, unknownconfig)
    else:
        for f, (m, args, msg) in pycompat.iteritems(actions):
            if m == ACTION_CREATED_MERGE:
                fl2, anc = args
                different = _checkunknownfile(repo, wctx, mctx, f)
                if repo.dirstate._ignore(f):
                    config = ignoredconfig
                else:
                    config = unknownconfig

                # The behavior when force is True is described by this table:
                #  config  different  mergeforce  |    action    backup
                #    *         n          *       |      get        n
                #    *         y          y       |     merge       -
                #   abort      y          n       |     merge       -   (1)
                #   warn       y          n       |  warn + get     y
                #  ignore      y          n       |      get        y
                #
                # (1) this is probably the wrong behavior here -- we should
                #     probably abort, but some actions like rebases currently
                #     don't like an abort happening in the middle of
                #     merge.update.
                if not different:
                    actions[f] = (ACTION_GET, (fl2, False), b'remote created')
                elif mergeforce or config == b'abort':
                    actions[f] = (
                        ACTION_MERGE,
                        (f, f, None, False, anc),
                        b'remote differs from untracked local',
                    )
                elif config == b'abort':
                    abortconflicts.add(f)
                else:
                    if config == b'warn':
                        warnconflicts.add(f)
                    actions[f] = (ACTION_GET, (fl2, True), b'remote created')

    for f in sorted(abortconflicts):
        warn = repo.ui.warn
        if f in pathconflicts:
            if repo.wvfs.isfileorlink(f):
                warn(_(b"%s: untracked file conflicts with directory\n") % f)
            else:
                warn(_(b"%s: untracked directory conflicts with file\n") % f)
        else:
            warn(_(b"%s: untracked file differs\n") % f)
    if abortconflicts:
        raise error.Abort(
            _(
                b"untracked files in working directory "
                b"differ from files in requested revision"
            )
        )

    for f in sorted(warnconflicts):
        if repo.wvfs.isfileorlink(f):
            repo.ui.warn(_(b"%s: replacing untracked file\n") % f)
        else:
            repo.ui.warn(_(b"%s: replacing untracked files in directory\n") % f)

    for f, (m, args, msg) in pycompat.iteritems(actions):
        if m == ACTION_CREATED:
            backup = (
                f in fileconflicts
                or f in pathconflicts
                or any(p in pathconflicts for p in pathutil.finddirs(f))
            )
            (flags,) = args
            actions[f] = (ACTION_GET, (flags, backup), msg)


def _forgetremoved(wctx, mctx, branchmerge):
    """
    Forget removed files

    If we're jumping between revisions (as opposed to merging), and if
    neither the working directory nor the target rev has the file,
    then we need to remove it from the dirstate, to prevent the
    dirstate from listing the file when it is no longer in the
    manifest.

    If we're merging, and the other revision has removed a file
    that is not present in the working directory, we need to mark it
    as removed.
    """

    actions = {}
    m = ACTION_FORGET
    if branchmerge:
        m = ACTION_REMOVE
    for f in wctx.deleted():
        if f not in mctx:
            actions[f] = m, None, b"forget deleted"

    if not branchmerge:
        for f in wctx.removed():
            if f not in mctx:
                actions[f] = ACTION_FORGET, None, b"forget removed"

    return actions


def _checkcollision(repo, wmf, actions):
    """
    Check for case-folding collisions.
    """

    # If the repo is narrowed, filter out files outside the narrowspec.
    narrowmatch = repo.narrowmatch()
    if not narrowmatch.always():
        wmf = wmf.matches(narrowmatch)
        if actions:
            narrowactions = {}
            for m, actionsfortype in pycompat.iteritems(actions):
                narrowactions[m] = []
                for (f, args, msg) in actionsfortype:
                    if narrowmatch(f):
                        narrowactions[m].append((f, args, msg))
            actions = narrowactions

    # build provisional merged manifest up
    pmmf = set(wmf)

    if actions:
        # KEEP and EXEC are no-op
        for m in (
            ACTION_ADD,
            ACTION_ADD_MODIFIED,
            ACTION_FORGET,
            ACTION_GET,
            ACTION_CHANGED_DELETED,
            ACTION_DELETED_CHANGED,
        ):
            for f, args, msg in actions[m]:
                pmmf.add(f)
        for f, args, msg in actions[ACTION_REMOVE]:
            pmmf.discard(f)
        for f, args, msg in actions[ACTION_DIR_RENAME_MOVE_LOCAL]:
            f2, flags = args
            pmmf.discard(f2)
            pmmf.add(f)
        for f, args, msg in actions[ACTION_LOCAL_DIR_RENAME_GET]:
            pmmf.add(f)
        for f, args, msg in actions[ACTION_MERGE]:
            f1, f2, fa, move, anc = args
            if move:
                pmmf.discard(f1)
            pmmf.add(f)

    # check case-folding collision in provisional merged manifest
    foldmap = {}
    for f in pmmf:
        fold = util.normcase(f)
        if fold in foldmap:
            raise error.Abort(
                _(b"case-folding collision between %s and %s")
                % (f, foldmap[fold])
            )
        foldmap[fold] = f

    # check case-folding of directories
    foldprefix = unfoldprefix = lastfull = b''
    for fold, f in sorted(foldmap.items()):
        if fold.startswith(foldprefix) and not f.startswith(unfoldprefix):
            # the folded prefix matches but actual casing is different
            raise error.Abort(
                _(b"case-folding collision between %s and directory of %s")
                % (lastfull, f)
            )
        foldprefix = fold + b'/'
        unfoldprefix = f + b'/'
        lastfull = f
1058
1058
1059
1059
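# An illustrative, standalone sketch of the collision check above
# (hypothetical helper, not part of merge.py): every path is reduced with
# util.normcase() and duplicates abort the update. Here str.lower() stands
# in for normcase, and collisions are yielded instead of raising.
def _sketch_casecollisions(paths):
    """Yield (path, earlier_path) pairs that collide case-insensitively."""
    foldmap = {}
    for f in paths:
        fold = f.lower()  # stand-in for util.normcase(f)
        if fold in foldmap:
            yield f, foldmap[fold]  # merge.py would error.Abort here
        else:
            foldmap[fold] = f

# Example: list(_sketch_casecollisions(['README', 'readme', 'src/a']))
# -> [('readme', 'README')]

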
def driverpreprocess(repo, ms, wctx, labels=None):
    """run the preprocess step of the merge driver, if any

    This is currently not implemented -- it's an extension point."""
    return True


def driverconclude(repo, ms, wctx, labels=None):
    """run the conclude step of the merge driver, if any

    This is currently not implemented -- it's an extension point."""
    return True


def _filesindirs(repo, manifest, dirs):
    """
    Generator that yields pairs of all the files in the manifest that are found
    inside the directories listed in dirs, and which directory they are found
    in.
    """
    for f in manifest:
        for p in pathutil.finddirs(f):
            if p in dirs:
                yield f, p
                break


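# An illustrative, pure-Python equivalent of the generator above
# (hypothetical helper, not part of merge.py), with pathutil.finddirs()
# approximated by successive directory prefixes from deepest to shallowest:
def _sketch_filesindirs(manifest, dirs):
    for f in manifest:
        parts = f.split('/')[:-1]
        for i in range(len(parts), 0, -1):
            p = '/'.join(parts[:i])
            if p in dirs:
                yield f, p
                break

# Example: list(_sketch_filesindirs(['a/b/c', 'x/y'], {'a/b'}))
# -> [('a/b/c', 'a/b')]

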
def checkpathconflicts(repo, wctx, mctx, actions):
    """
    Check if any actions introduce path conflicts in the repository, updating
    actions to record or handle the path conflict accordingly.
    """
    mf = wctx.manifest()

    # The set of local files that conflict with a remote directory.
    localconflicts = set()

    # The set of directories that conflict with a remote file, and so may cause
    # conflicts if they still contain any files after the merge.
    remoteconflicts = set()

    # The set of directories that appear as both a file and a directory in the
    # remote manifest. These indicate an invalid remote manifest, which
    # can't be updated to cleanly.
    invalidconflicts = set()

    # The set of directories that contain files that are being created.
    createdfiledirs = set()

    # The set of files deleted by all the actions.
    deletedfiles = set()

    for f, (m, args, msg) in actions.items():
        if m in (
            ACTION_CREATED,
            ACTION_DELETED_CHANGED,
            ACTION_MERGE,
            ACTION_CREATED_MERGE,
        ):
            # This action may create a new local file.
            createdfiledirs.update(pathutil.finddirs(f))
            if mf.hasdir(f):
                # The file aliases a local directory. This might be ok if all
                # the files in the local directory are being deleted. This
                # will be checked once we know what all the deleted files are.
                remoteconflicts.add(f)
        # Track the names of all deleted files.
        if m == ACTION_REMOVE:
            deletedfiles.add(f)
        if m == ACTION_MERGE:
            f1, f2, fa, move, anc = args
            if move:
                deletedfiles.add(f1)
        if m == ACTION_DIR_RENAME_MOVE_LOCAL:
            f2, flags = args
            deletedfiles.add(f2)

    # Check all directories that contain created files for path conflicts.
    for p in createdfiledirs:
        if p in mf:
            if p in mctx:
                # A file is in a directory which aliases both a local
                # and a remote file. This is an internal inconsistency
                # within the remote manifest.
                invalidconflicts.add(p)
            else:
                # A file is in a directory which aliases a local file.
                # We will need to rename the local file.
                localconflicts.add(p)
        if p in actions and actions[p][0] in (
            ACTION_CREATED,
            ACTION_DELETED_CHANGED,
            ACTION_MERGE,
            ACTION_CREATED_MERGE,
        ):
            # The file is in a directory which aliases a remote file.
            # This is an internal inconsistency within the remote
            # manifest.
            invalidconflicts.add(p)

    # Rename all local conflicting files that have not been deleted.
    for p in localconflicts:
        if p not in deletedfiles:
            ctxname = bytes(wctx).rstrip(b'+')
            pnew = util.safename(p, ctxname, wctx, set(actions.keys()))
            actions[pnew] = (
                ACTION_PATH_CONFLICT_RESOLVE,
                (p,),
                b'local path conflict',
            )
            actions[p] = (ACTION_PATH_CONFLICT, (pnew, b'l'), b'path conflict')

    if remoteconflicts:
        # Check if all files in the conflicting directories have been removed.
        ctxname = bytes(mctx).rstrip(b'+')
        for f, p in _filesindirs(repo, mf, remoteconflicts):
            if f not in deletedfiles:
                m, args, msg = actions[p]
                pnew = util.safename(p, ctxname, wctx, set(actions.keys()))
                if m in (ACTION_DELETED_CHANGED, ACTION_MERGE):
                    # Action was merge, just update target.
                    actions[pnew] = (m, args, msg)
                else:
                    # Action was create, change to renamed get action.
                    fl = args[0]
                    actions[pnew] = (
                        ACTION_LOCAL_DIR_RENAME_GET,
                        (p, fl),
                        b'remote path conflict',
                    )
                actions[p] = (
                    ACTION_PATH_CONFLICT,
                    (pnew, ACTION_REMOVE),
                    b'path conflict',
                )
                remoteconflicts.remove(p)
                break

    if invalidconflicts:
        for p in invalidconflicts:
            repo.ui.warn(_(b"%s: is both a file and a directory\n") % p)
        raise error.Abort(_(b"destination manifest contains path conflicts"))


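# An illustrative reduction of the check above (hypothetical helper, not
# part of merge.py): its core question is "does a file created on one side
# alias a file on the other side via one of its ancestor directories?".
# Here plain path sets stand in for manifests and contexts.
def _sketch_pathconflicts(local_files, created_files):
    """Return the local files that a created file would shadow as a dir,
    e.g. creating 'a/b' while 'a' is a tracked local file."""
    def dirs(f):
        parts = f.split('/')[:-1]
        return {'/'.join(parts[: i + 1]) for i in range(len(parts))}

    conflicts = set()
    for f in created_files:
        # every ancestor directory of the new file must not be a file
        conflicts |= dirs(f) & set(local_files)
    return conflicts

# Example: _sketch_pathconflicts({'a', 'x/y'}, {'a/b', 'x/y/z/w'})
# -> {'a', 'x/y'}; merge.py resolves these by renaming with util.safename().

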
def _filternarrowactions(narrowmatch, branchmerge, actions):
    """
    Filters out actions that can be ignored because the repo is narrowed.

    Raise an exception if the merge cannot be completed because the repo is
    narrowed.
    """
    nooptypes = {b'k'}  # TODO: handle with nonconflicttypes
    nonconflicttypes = set(b'a am c cm f g r e'.split())
    # We mutate the items in the dict during iteration, so iterate
    # over a copy.
    for f, action in list(actions.items()):
        if narrowmatch(f):
            pass
        elif not branchmerge:
            del actions[f]  # just updating, ignore changes outside clone
        elif action[0] in nooptypes:
            del actions[f]  # merge does not affect file
        elif action[0] in nonconflicttypes:
            raise error.Abort(
                _(
                    b'merge affects file \'%s\' outside narrow, '
                    b'which is not yet supported'
                )
                % f,
                hint=_(b'merging in the other direction may work'),
            )
        else:
            raise error.Abort(
                _(b'conflict in file \'%s\' is outside narrow clone') % f
            )


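# An illustrative, standalone model of the filter above (hypothetical
# helper, not part of merge.py): keep actions inside the narrowspec,
# silently drop out-of-narrow changes on a plain update, and refuse to
# merge them. A predicate stands in for the narrowmatcher, and ValueError
# stands in for error.Abort.
def _sketch_filternarrow(inside_narrow, branchmerge, actions):
    for f, (kind, args, msg) in list(actions.items()):
        if inside_narrow(f):
            continue
        if not branchmerge or kind == 'k':
            del actions[f]  # ignorable outside the narrow clone
        else:
            raise ValueError('merge touches %s outside narrow clone' % f)

# Example:
# acts = {'in/a': ('g', (), ''), 'out/b': ('g', (), '')}
# _sketch_filternarrow(lambda f: f.startswith('in/'), False, acts)
# leaves acts == {'in/a': ('g', (), '')}

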
def manifestmerge(
    repo,
    wctx,
    p2,
    pa,
    branchmerge,
    force,
    matcher,
    acceptremote,
    followcopies,
    forcefulldiff=False,
):
    """
    Merge wctx and p2 with ancestor pa and generate merge action list

    branchmerge and force are as passed in to update
    matcher = matcher to filter file lists
    acceptremote = accept the incoming changes without prompting
    """
    if matcher is not None and matcher.always():
        matcher = None

    copy, movewithdir, diverge, renamedelete, dirmove = {}, {}, {}, {}, {}

    # manifests fetched in order are going to be faster, so prime the caches
    [
        x.manifest()
        for x in sorted(wctx.parents() + [p2, pa], key=scmutil.intrev)
    ]

    if followcopies:
        ret = copies.mergecopies(repo, wctx, p2, pa)
        copy, movewithdir, diverge, renamedelete, dirmove = ret

    boolbm = pycompat.bytestr(bool(branchmerge))
    boolf = pycompat.bytestr(bool(force))
    boolm = pycompat.bytestr(bool(matcher))
    repo.ui.note(_(b"resolving manifests\n"))
    repo.ui.debug(
        b" branchmerge: %s, force: %s, partial: %s\n" % (boolbm, boolf, boolm)
    )
    repo.ui.debug(b" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2))

    m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest()
    copied = set(copy.values())
    copied.update(movewithdir.values())

    if b'.hgsubstate' in m1 and wctx.rev() is None:
        # Check whether sub state is modified, and overwrite the manifest
        # to flag the change. If wctx is a committed revision, we shouldn't
        # care for the dirty state of the working directory.
        if any(wctx.sub(s).dirty() for s in wctx.substate):
            m1[b'.hgsubstate'] = modifiednodeid

    # Don't use m2-vs-ma optimization if:
    # - ma is the same as m1 or m2, which we're just going to diff again later
    # - The caller specifically asks for a full diff, which is useful during bid
    #   merge.
    if pa not in ([wctx, p2] + wctx.parents()) and not forcefulldiff:
        # Identify which files are relevant to the merge, so we can limit the
        # total m1-vs-m2 diff to just those files. This has significant
        # performance benefits in large repositories.
        relevantfiles = set(ma.diff(m2).keys())

        # For copied and moved files, we need to add the source file too.
        for copykey, copyvalue in pycompat.iteritems(copy):
            if copyvalue in relevantfiles:
                relevantfiles.add(copykey)
        for movedirkey in movewithdir:
            relevantfiles.add(movedirkey)
        filesmatcher = scmutil.matchfiles(repo, relevantfiles)
        matcher = matchmod.intersectmatchers(matcher, filesmatcher)

    diff = m1.diff(m2, match=matcher)

    actions = {}
    for f, ((n1, fl1), (n2, fl2)) in pycompat.iteritems(diff):
        if n1 and n2:  # file exists on both local and remote side
            if f not in ma:
                fa = copy.get(f, None)
                if fa is not None:
                    actions[f] = (
                        ACTION_MERGE,
                        (f, f, fa, False, pa.node()),
                        b'both renamed from %s' % fa,
                    )
                else:
                    actions[f] = (
                        ACTION_MERGE,
                        (f, f, None, False, pa.node()),
                        b'both created',
                    )
            else:
                a = ma[f]
                fla = ma.flags(f)
                nol = b'l' not in fl1 + fl2 + fla
                if n2 == a and fl2 == fla:
                    actions[f] = (ACTION_KEEP, (), b'remote unchanged')
                elif n1 == a and fl1 == fla:  # local unchanged - use remote
                    if n1 == n2:  # optimization: keep local content
                        actions[f] = (
                            ACTION_EXEC,
                            (fl2,),
                            b'update permissions',
                        )
                    else:
                        actions[f] = (
                            ACTION_GET,
                            (fl2, False),
                            b'remote is newer',
                        )
                elif nol and n2 == a:  # remote only changed 'x'
                    actions[f] = (ACTION_EXEC, (fl2,), b'update permissions')
                elif nol and n1 == a:  # local only changed 'x'
                    actions[f] = (ACTION_GET, (fl1, False), b'remote is newer')
                else:  # both changed something
                    actions[f] = (
                        ACTION_MERGE,
                        (f, f, f, False, pa.node()),
                        b'versions differ',
                    )
        elif n1:  # file exists only on local side
            if f in copied:
                pass  # we'll deal with it on m2 side
            elif f in movewithdir:  # directory rename, move local
                f2 = movewithdir[f]
                if f2 in m2:
                    actions[f2] = (
                        ACTION_MERGE,
                        (f, f2, None, True, pa.node()),
                        b'remote directory rename, both created',
                    )
                else:
                    actions[f2] = (
                        ACTION_DIR_RENAME_MOVE_LOCAL,
                        (f, fl1),
                        b'remote directory rename - move from %s' % f,
                    )
            elif f in copy:
                f2 = copy[f]
                actions[f] = (
                    ACTION_MERGE,
                    (f, f2, f2, False, pa.node()),
                    b'local copied/moved from %s' % f2,
                )
            elif f in ma:  # clean, a different, no remote
                if n1 != ma[f]:
                    if acceptremote:
                        actions[f] = (ACTION_REMOVE, None, b'remote delete')
                    else:
                        actions[f] = (
                            ACTION_CHANGED_DELETED,
                            (f, None, f, False, pa.node()),
                            b'prompt changed/deleted',
                        )
                elif n1 == addednodeid:
                    # This extra 'a' is added by working copy manifest to mark
                    # the file as locally added. We should forget it instead of
                    # deleting it.
                    actions[f] = (ACTION_FORGET, None, b'remote deleted')
                else:
                    actions[f] = (ACTION_REMOVE, None, b'other deleted')
        elif n2:  # file exists only on remote side
            if f in copied:
                pass  # we'll deal with it on m1 side
            elif f in movewithdir:
                f2 = movewithdir[f]
                if f2 in m1:
                    actions[f2] = (
                        ACTION_MERGE,
                        (f2, f, None, False, pa.node()),
                        b'local directory rename, both created',
                    )
                else:
                    actions[f2] = (
                        ACTION_LOCAL_DIR_RENAME_GET,
                        (f, fl2),
                        b'local directory rename - get from %s' % f,
                    )
            elif f in copy:
                f2 = copy[f]
                if f2 in m2:
                    actions[f] = (
                        ACTION_MERGE,
                        (f2, f, f2, False, pa.node()),
                        b'remote copied from %s' % f2,
                    )
                else:
                    actions[f] = (
                        ACTION_MERGE,
                        (f2, f, f2, True, pa.node()),
                        b'remote moved from %s' % f2,
                    )
            elif f not in ma:
                # local unknown, remote created: the logic is described by the
                # following table:
                #
                # force  branchmerge  different  |  action
                #   n         *           *      |  create
                #   y         n           *      |  create
                #   y         y           n      |  create
                #   y         y           y      |  merge
                #
                # Checking whether the files are different is expensive, so we
                # don't do that when we can avoid it.
                if not force:
                    actions[f] = (ACTION_CREATED, (fl2,), b'remote created')
                elif not branchmerge:
                    actions[f] = (ACTION_CREATED, (fl2,), b'remote created')
                else:
                    actions[f] = (
                        ACTION_CREATED_MERGE,
                        (fl2, pa.node()),
                        b'remote created, get or merge',
                    )
            elif n2 != ma[f]:
                df = None
                for d in dirmove:
                    if f.startswith(d):
                        # new file added in a directory that was moved
                        df = dirmove[d] + f[len(d) :]
                        break
                if df is not None and df in m1:
                    actions[df] = (
                        ACTION_MERGE,
                        (df, f, f, False, pa.node()),
                        b'local directory rename - respect move '
                        b'from %s' % f,
                    )
                elif acceptremote:
                    actions[f] = (ACTION_CREATED, (fl2,), b'remote recreating')
                else:
                    actions[f] = (
                        ACTION_DELETED_CHANGED,
                        (None, f, f, False, pa.node()),
                        b'prompt deleted/changed',
                    )

    if repo.ui.configbool(b'experimental', b'merge.checkpathconflicts'):
        # If we are merging, look for path conflicts.
        checkpathconflicts(repo, wctx, p2, actions)

    narrowmatch = repo.narrowmatch()
    if not narrowmatch.always():
        # Updates "actions" in place
        _filternarrowactions(narrowmatch, branchmerge, actions)

    return actions, diverge, renamedelete


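# An illustrative, standalone reduction of manifestmerge() above
# (hypothetical helper, not part of merge.py): the heart of it is a
# per-file three-way comparison of node ids. Here plain dicts mapping
# filename -> content stand in for manifests, and short strings for the
# ACTION_* constants; copies, flags, and directory renames are omitted.
def _sketch_threeway(m1, m2, ma):
    """Return {filename: action} for local m1, remote m2, ancestor ma."""
    actions = {}
    for f in set(m1) | set(m2) | set(ma):
        n1, n2, a = m1.get(f), m2.get(f), ma.get(f)
        if n1 is not None and n2 is not None:
            if n2 == a:
                actions[f] = 'k'   # remote unchanged: keep local
            elif n1 == a:
                actions[f] = 'g'   # local unchanged: get remote
            elif n1 != n2:
                actions[f] = 'm'   # both changed: merge
            else:
                actions[f] = 'k'   # both made the same change
        elif n1 is not None:
            if a is None:
                actions[f] = 'k'   # added locally, remote never had it
            elif n1 != a:
                actions[f] = 'cd'  # changed locally, deleted remotely
            else:
                actions[f] = 'r'   # unchanged locally, deleted remotely
        elif n2 is not None:
            if a is None:
                actions[f] = 'c'   # created remotely
            elif n2 != a:
                actions[f] = 'dc'  # deleted locally, changed remotely
            # else: deleted locally, unchanged remotely -> stays deleted
    return actions

# Example: _sketch_threeway({'f': 1}, {'f': 2}, {'f': 1}) -> {'f': 'g'}

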
def _resolvetrivial(repo, wctx, mctx, ancestor, actions):
    """Resolves false conflicts where the nodeid changed but the content
    remained the same."""
    # We force a copy of actions.items() because we're going to mutate
    # actions as we resolve trivial conflicts.
    for f, (m, args, msg) in list(actions.items()):
        if (
            m == ACTION_CHANGED_DELETED
            and f in ancestor
            and not wctx[f].cmp(ancestor[f])
        ):
            # local did change but ended up with same content
            actions[f] = ACTION_REMOVE, None, b'prompt same'
        elif (
            m == ACTION_DELETED_CHANGED
            and f in ancestor
            and not mctx[f].cmp(ancestor[f])
        ):
            # remote did change but ended up with same content
            del actions[f]  # don't get = keep local deleted


def calculateupdates(
    repo,
    wctx,
    mctx,
    ancestors,
    branchmerge,
    force,
    acceptremote,
    followcopies,
    matcher=None,
    mergeforce=False,
):
    """Calculate the actions needed to merge mctx into wctx using ancestors"""
    # Avoid cycle.
    from . import sparse

    if len(ancestors) == 1:  # default
        actions, diverge, renamedelete = manifestmerge(
            repo,
            wctx,
            mctx,
            ancestors[0],
            branchmerge,
            force,
            matcher,
            acceptremote,
            followcopies,
        )
        _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce)

    else:  # only when merge.preferancestor=* - the default
        repo.ui.note(
            _(b"note: merging %s and %s using bids from ancestors %s\n")
            % (
                wctx,
                mctx,
                _(b' and ').join(pycompat.bytestr(anc) for anc in ancestors),
            )
        )

        # Call for bids
        fbids = (
            {}
        )  # mapping filename to bids (action method to list of actions)
        diverge, renamedelete = None, None
        for ancestor in ancestors:
            repo.ui.note(_(b'\ncalculating bids for ancestor %s\n') % ancestor)
            actions, diverge1, renamedelete1 = manifestmerge(
                repo,
                wctx,
                mctx,
                ancestor,
                branchmerge,
                force,
                matcher,
                acceptremote,
                followcopies,
                forcefulldiff=True,
            )
            _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce)

            # Track the shortest set of warnings on the theory that bid
            # merge will correctly incorporate more information
            if diverge is None or len(diverge1) < len(diverge):
                diverge = diverge1
            if renamedelete is None or len(renamedelete) < len(renamedelete1):
                renamedelete = renamedelete1

            for f, a in sorted(pycompat.iteritems(actions)):
                m, args, msg = a
                repo.ui.debug(b' %s: %s -> %s\n' % (f, msg, m))
                if f in fbids:
                    d = fbids[f]
                    if m in d:
                        d[m].append(a)
                    else:
                        d[m] = [a]
                else:
                    fbids[f] = {m: [a]}

        # Pick the best bid for each file
        repo.ui.note(_(b'\nauction for merging merge bids\n'))
        actions = {}
        for f, bids in sorted(fbids.items()):
            # bids is a mapping from action method to list of actions
            # Consensus?
            if len(bids) == 1:  # all bids are the same kind of method
                m, l = list(bids.items())[0]
                if all(a == l[0] for a in l[1:]):  # len(bids) is > 1
                    repo.ui.note(_(b" %s: consensus for %s\n") % (f, m))
                    actions[f] = l[0]
                    continue
            # If keep is an option, just do it.
            if ACTION_KEEP in bids:
                repo.ui.note(_(b" %s: picking 'keep' action\n") % f)
                actions[f] = bids[ACTION_KEEP][0]
                continue
            # If there are gets and they all agree [how could they not?], do it.
            if ACTION_GET in bids:
                ga0 = bids[ACTION_GET][0]
                if all(a == ga0 for a in bids[ACTION_GET][1:]):
                    repo.ui.note(_(b" %s: picking 'get' action\n") % f)
                    actions[f] = ga0
                    continue
            # TODO: Consider other simple actions such as mode changes
            # Handle inefficient democrazy.
            repo.ui.note(_(b' %s: multiple bids for merge action:\n') % f)
            for m, l in sorted(bids.items()):
                for _f, args, msg in l:
                    repo.ui.note(b' %s -> %s\n' % (msg, m))
            # Pick random action. TODO: Instead, prompt user when resolving
            m, l = list(bids.items())[0]
            repo.ui.warn(
                _(b' %s: ambiguous merge - picked %s action\n') % (f, m)
            )
            actions[f] = l[0]
            continue
        repo.ui.note(_(b'end of auction\n\n'))

    if wctx.rev() is None:
        fractions = _forgetremoved(wctx, mctx, branchmerge)
        actions.update(fractions)

    prunedactions = sparse.filterupdatesactions(
        repo, wctx, mctx, branchmerge, actions
    )
    _resolvetrivial(repo, wctx, mctx, ancestors[0], actions)

    return prunedactions, diverge, renamedelete


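# An illustrative, standalone model of the bid auction above (hypothetical
# helper, not part of merge.py): per file, pick a unanimous bid, else a
# safe 'keep', else an agreed 'get', else fall back to an arbitrary bid.
# Short strings stand in for the ACTION_* constants.
def _sketch_auction(fbids):
    """Resolve {file: {actiontype: [bid, ...]}} into {file: bid}."""
    actions = {}
    for f, bids in sorted(fbids.items()):
        if len(bids) == 1:  # consensus on the action type?
            kind, bidlist = next(iter(bids.items()))
            if all(b == bidlist[0] for b in bidlist[1:]):
                actions[f] = bidlist[0]
                continue
        if 'k' in bids:  # keeping the local file is always safe
            actions[f] = bids['k'][0]
            continue
        if 'g' in bids and all(b == bids['g'][0] for b in bids['g'][1:]):
            actions[f] = bids['g'][0]
            continue
        kind, bidlist = next(iter(bids.items()))  # ambiguous: pick one
        actions[f] = bidlist[0]
    return actions

# Example: ancestor A bids 'get', ancestor B bids 'keep' for file 'f':
# _sketch_auction({'f': {'g': [('g', (), 'from A')],
#                        'k': [('k', (), 'from B')]}})
# -> {'f': ('k', (), 'from B')}

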
def _getcwd():
    try:
        return encoding.getcwd()
    except OSError as err:
        if err.errno == errno.ENOENT:
            return None
        raise


def batchremove(repo, wctx, actions):
    """apply removes to the working directory

    yields tuples for progress updates
    """
    verbose = repo.ui.verbose
    cwd = _getcwd()
    i = 0
    for f, args, msg in actions:
        repo.ui.debug(b" %s: %s -> r\n" % (f, msg))
        if verbose:
            repo.ui.note(_(b"removing %s\n") % f)
        wctx[f].audit()
        try:
            wctx[f].remove(ignoremissing=True)
        except OSError as inst:
            repo.ui.warn(
                _(b"update failed to remove %s: %s!\n") % (f, inst.strerror)
            )
        if i == 100:
            yield i, f
            i = 0
        i += 1
    if i > 0:
        yield i, f

    if cwd and not _getcwd():
        # cwd was removed in the course of removing files; print a helpful
        # warning.
        repo.ui.warn(
            _(
                b"current directory was removed\n"
                b"(consider changing to repo root: %s)\n"
            )
            % repo.root
        )


def batchget(repo, mctx, wctx, wantfiledata, actions):
    """apply gets to the working directory

    mctx is the context to get from

    Yields arbitrarily many (False, tuple) for progress updates, followed by
    exactly one (True, filedata). When wantfiledata is false, filedata is an
    empty dict. When wantfiledata is true, filedata[f] is a triple (mode, size,
    mtime) of the file f written for each action.
    """
    filedata = {}
    verbose = repo.ui.verbose
    fctx = mctx.filectx
    ui = repo.ui
    i = 0
    with repo.wvfs.backgroundclosing(ui, expectedcount=len(actions)):
        for f, (flags, backup), msg in actions:
            repo.ui.debug(b" %s: %s -> g\n" % (f, msg))
            if verbose:
                repo.ui.note(_(b"getting %s\n") % f)

            if backup:
                # If a file or directory exists with the same name, back that
                # up. Otherwise, look to see if there is a file that conflicts
                # with a directory this file is in, and if so, back that up.
                conflicting = f
                if not repo.wvfs.lexists(f):
                    for p in pathutil.finddirs(f):
                        if repo.wvfs.isfileorlink(p):
                            conflicting = p
                            break
                if repo.wvfs.lexists(conflicting):
                    orig = scmutil.backuppath(ui, repo, conflicting)
                    util.rename(repo.wjoin(conflicting), orig)
            wfctx = wctx[f]
            wfctx.clearunknown()
            atomictemp = ui.configbool(b"experimental", b"update.atomic-file")
            size = wfctx.write(
                fctx(f).data(),
                flags,
                backgroundclose=True,
                atomictemp=atomictemp,
            )
            if wantfiledata:
                s = wfctx.lstat()
                mode = s.st_mode
                mtime = s[stat.ST_MTIME]
                filedata[f] = (mode, size, mtime)  # for dirstate.normal
            if i == 100:
                yield False, (i, f)
                i = 0
            i += 1
    if i > 0:
        yield False, (i, f)
    yield True, filedata


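# An illustrative consumer of the generator protocol documented above
# (hypothetical helper, not part of merge.py): batchremove()/batchget()
# yield many (False, (count, item)) progress updates and then exactly one
# (True, retval), so worker.worker() can relay progress while still
# returning a final value.
def _sketch_drainprogress(prog):
    """Print progress updates and return the generator's final value."""
    retval = None
    for final, res in prog:
        if final:
            retval = res  # the one (True, filedata) item
        else:
            i, item = res  # a (count, last-item) progress update
            print('advanced %d (last: %s)' % (i, item))
    return retval

# Example generator obeying the protocol:
# def fake():
#     yield False, (100, 'a.txt')
#     yield True, {'a.txt': (0o644, 12, 0)}
# _sketch_drainprogress(fake()) -> {'a.txt': (0o644, 12, 0)}

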
def _prefetchfiles(repo, ctx, actions):
    """Invoke ``scmutil.prefetchfiles()`` for the files relevant to the dict
    of merge actions. ``ctx`` is the context being merged in."""

    # Skipping 'a', 'am', 'f', 'r', 'dm', 'e', 'k', 'p' and 'pr', because they
    # don't touch the context to be merged in. 'cd' is skipped, because
    # changed/deleted never resolves to something from the remote side.
    oplist = [
        actions[a]
        for a in (
            ACTION_GET,
            ACTION_DELETED_CHANGED,
            ACTION_LOCAL_DIR_RENAME_GET,
            ACTION_MERGE,
        )
    ]
    prefetch = scmutil.prefetchfiles
    matchfiles = scmutil.matchfiles
    prefetch(
        repo,
        [ctx.rev()],
        matchfiles(repo, [f for sublist in oplist for f, args, msg in sublist]),
    )


@attr.s(frozen=True)
class updateresult(object):
    updatedcount = attr.ib()
    mergedcount = attr.ib()
    removedcount = attr.ib()
    unresolvedcount = attr.ib()

    def isempty(self):
        return not (
            self.updatedcount
            or self.mergedcount
            or self.removedcount
            or self.unresolvedcount
        )


def emptyactions():
    """create an actions dict, to be populated and passed to applyupdates()"""
    return dict(
        (m, [])
        for m in (
            ACTION_ADD,
            ACTION_ADD_MODIFIED,
            ACTION_FORGET,
            ACTION_GET,
            ACTION_CHANGED_DELETED,
            ACTION_DELETED_CHANGED,
            ACTION_REMOVE,
            ACTION_DIR_RENAME_MOVE_LOCAL,
            ACTION_LOCAL_DIR_RENAME_GET,
            ACTION_MERGE,
            ACTION_EXEC,
            ACTION_KEEP,
            ACTION_PATH_CONFLICT,
            ACTION_PATH_CONFLICT_RESOLVE,
        )
    )


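# An illustrative sketch of the data shape produced above (hypothetical
# helper, not part of merge.py): emptyactions() returns a dict keyed by
# action type, each mapping to a list of (file, args, msg) tuples. This
# toy shows how per-file actions are regrouped into that shape, as
# applyupdates() expects to consume them.
def _sketch_groupactions(byfile):
    """Convert {file: (actiontype, args, msg)} into {actiontype: [...]}."""
    grouped = {}
    for f, (kind, args, msg) in sorted(byfile.items()):
        grouped.setdefault(kind, []).append((f, args, msg))
    return grouped

# Example:
# _sketch_groupactions({'a': ('g', ('', False), 'remote is newer'),
#                       'b': ('r', None, 'other deleted')})
# -> {'g': [('a', ('', False), 'remote is newer')],
#     'r': [('b', None, 'other deleted')]}

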
def applyupdates(
    repo, actions, wctx, mctx, overwrite, wantfiledata, labels=None
):
    """apply the merge action list to the working directory

    wctx is the working copy context
    mctx is the context to be merged into the working copy

    Return a tuple of (counts, filedata), where counts is a tuple
    (updated, merged, removed, unresolved) that describes how many
    files were affected by the update, and filedata is as described in
    batchget.
    """

    _prefetchfiles(repo, mctx, actions)

    updated, merged, removed = 0, 0, 0
    ms = mergestate.clean(repo, wctx.p1().node(), mctx.node(), labels)
    moves = []
    for m, l in actions.items():
        l.sort()

    # 'cd' and 'dc' actions are treated like other merge conflicts
    mergeactions = sorted(actions[ACTION_CHANGED_DELETED])
    mergeactions.extend(sorted(actions[ACTION_DELETED_CHANGED]))
    mergeactions.extend(actions[ACTION_MERGE])
    for f, args, msg in mergeactions:
        f1, f2, fa, move, anc = args
        if f == b'.hgsubstate':  # merged internally
            continue
        if f1 is None:
            fcl = filemerge.absentfilectx(wctx, fa)
        else:
            repo.ui.debug(b" preserving %s for resolve of %s\n" % (f1, f))
            fcl = wctx[f1]
        if f2 is None:
            fco = filemerge.absentfilectx(mctx, fa)
        else:
            fco = mctx[f2]
        actx = repo[anc]
        if fa in actx:
            fca = actx[fa]
        else:
            # TODO: move to absentfilectx
            fca = repo.filectx(f1, fileid=nullrev)
        ms.add(fcl, fco, fca, f)
        if f1 != f and move:
            moves.append(f1)

    # remove renamed files after they are safely stored
    for f in moves:
        if wctx[f].lexists():
            repo.ui.debug(b"removing %s\n" % f)
            wctx[f].audit()
            wctx[f].remove()

    numupdates = sum(len(l) for m, l in actions.items() if m != ACTION_KEEP)
    progress = repo.ui.makeprogress(
        _(b'updating'), unit=_(b'files'), total=numupdates
    )

    if [a for a in actions[ACTION_REMOVE] if a[0] == b'.hgsubstate']:
        subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels)

    # record path conflicts
    for f, args, msg in actions[ACTION_PATH_CONFLICT]:
        f1, fo = args
        s = repo.ui.status
        s(
            _(
                b"%s: path conflict - a file or link has the same name as a "
                b"directory\n"
            )
            % f
        )
        if fo == b'l':
            s(_(b"the local file has been renamed to %s\n") % f1)
        else:
            s(_(b"the remote file has been renamed to %s\n") % f1)
        s(_(b"resolve manually then use 'hg resolve --mark %s'\n") % f)
        ms.addpath(f, f1, fo)
        progress.increment(item=f)

    # When merging in-memory, we can't support worker processes, so set the
    # per-item cost at 0 in that case.
    cost = 0 if wctx.isinmemory() else 0.001

    # remove in parallel (must come before resolving path conflicts and getting)
    prog = worker.worker(
        repo.ui, cost, batchremove, (repo, wctx), actions[ACTION_REMOVE]
    )
    for i, item in prog:
        progress.increment(step=i, item=item)
    removed = len(actions[ACTION_REMOVE])

    # resolve path conflicts (must come before getting)
    for f, args, msg in actions[ACTION_PATH_CONFLICT_RESOLVE]:
        repo.ui.debug(b" %s: %s -> pr\n" % (f, msg))
        (f0,) = args
        if wctx[f0].lexists():
            repo.ui.note(_(b"moving %s to %s\n") % (f0, f))
            wctx[f].audit()
            wctx[f].write(wctx.filectx(f0).data(), wctx.filectx(f0).flags())
            wctx[f0].remove()
        progress.increment(item=f)

    # get in parallel.
    threadsafe = repo.ui.configbool(
        b'experimental', b'worker.wdir-get-thread-safe'
    )
    prog = worker.worker(
        repo.ui,
        cost,
        batchget,
        (repo, mctx, wctx, wantfiledata),
        actions[ACTION_GET],
        threadsafe=threadsafe,
        hasretval=True,
    )
    getfiledata = {}
    for final, res in prog:
        if final:
            getfiledata = res
        else:
            i, item = res
            progress.increment(step=i, item=item)
    updated = len(actions[ACTION_GET])

    if [a for a in actions[ACTION_GET] if a[0] == b'.hgsubstate']:
        subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels)

    # forget (manifest only, just log it) (must come first)
    for f, args, msg in actions[ACTION_FORGET]:
        repo.ui.debug(b" %s: %s -> f\n" % (f, msg))
        progress.increment(item=f)

    # re-add (manifest only, just log it)
    for f, args, msg in actions[ACTION_ADD]:
        repo.ui.debug(b" %s: %s -> a\n" % (f, msg))
        progress.increment(item=f)

    # re-add/mark as modified (manifest only, just log it)
    for f, args, msg in actions[ACTION_ADD_MODIFIED]:
        repo.ui.debug(b" %s: %s -> am\n" % (f, msg))
        progress.increment(item=f)

    # keep (noop, just log it)
    for f, args, msg in actions[ACTION_KEEP]:
        repo.ui.debug(b" %s: %s -> k\n" % (f, msg))
        # no progress

    # directory rename, move local
    for f, args, msg in actions[ACTION_DIR_RENAME_MOVE_LOCAL]:
        repo.ui.debug(b" %s: %s -> dm\n" % (f, msg))
        progress.increment(item=f)
        f0, flags = args
        repo.ui.note(_(b"moving %s to %s\n") % (f0, f))
        wctx[f].audit()
        wctx[f].write(wctx.filectx(f0).data(), flags)
        wctx[f0].remove()
        updated += 1

    # local directory rename, get
    for f, args, msg in actions[ACTION_LOCAL_DIR_RENAME_GET]:
        repo.ui.debug(b" %s: %s -> dg\n" % (f, msg))
        progress.increment(item=f)
        f0, flags = args
        repo.ui.note(_(b"getting %s to %s\n") % (f0, f))
        wctx[f].write(mctx.filectx(f0).data(), flags)
        updated += 1

    # exec
    for f, args, msg in actions[ACTION_EXEC]:
        repo.ui.debug(b" %s: %s -> e\n" % (f, msg))
        progress.increment(item=f)
        (flags,) = args
        wctx[f].audit()
        wctx[f].setflags(b'l' in flags, b'x' in flags)
        updated += 1
1986 updated += 1
1987
1987
1988 # the ordering is important here -- ms.mergedriver will raise if the merge
1988 # the ordering is important here -- ms.mergedriver will raise if the merge
1989 # driver has changed, and we want to be able to bypass it when overwrite is
1989 # driver has changed, and we want to be able to bypass it when overwrite is
1990 # True
1990 # True
1991 usemergedriver = not overwrite and mergeactions and ms.mergedriver
1991 usemergedriver = not overwrite and mergeactions and ms.mergedriver
1992
1992
1993 if usemergedriver:
1993 if usemergedriver:
1994 if wctx.isinmemory():
1994 if wctx.isinmemory():
1995 raise error.InMemoryMergeConflictsError(
1995 raise error.InMemoryMergeConflictsError(
1996 b"in-memory merge does not support mergedriver"
1996 b"in-memory merge does not support mergedriver"
1997 )
1997 )
1998 ms.commit()
1998 ms.commit()
1999 proceed = driverpreprocess(repo, ms, wctx, labels=labels)
1999 proceed = driverpreprocess(repo, ms, wctx, labels=labels)
2000 # the driver might leave some files unresolved
2000 # the driver might leave some files unresolved
2001 unresolvedf = set(ms.unresolved())
2001 unresolvedf = set(ms.unresolved())
2002 if not proceed:
2002 if not proceed:
2003 # XXX setting unresolved to at least 1 is a hack to make sure we
2003 # XXX setting unresolved to at least 1 is a hack to make sure we
2004 # error out
2004 # error out
2005 return updateresult(
2005 return updateresult(
2006 updated, merged, removed, max(len(unresolvedf), 1)
2006 updated, merged, removed, max(len(unresolvedf), 1)
2007 )
2007 )
2008 newactions = []
2008 newactions = []
2009 for f, args, msg in mergeactions:
2009 for f, args, msg in mergeactions:
2010 if f in unresolvedf:
2010 if f in unresolvedf:
2011 newactions.append((f, args, msg))
2011 newactions.append((f, args, msg))
2012 mergeactions = newactions
2012 mergeactions = newactions
2013
2013
2014 try:
2014 try:
2015 # premerge
2015 # premerge
2016 tocomplete = []
2016 tocomplete = []
2017 for f, args, msg in mergeactions:
2017 for f, args, msg in mergeactions:
2018 repo.ui.debug(b" %s: %s -> m (premerge)\n" % (f, msg))
2018 repo.ui.debug(b" %s: %s -> m (premerge)\n" % (f, msg))
2019 progress.increment(item=f)
2019 progress.increment(item=f)
2020 if f == b'.hgsubstate': # subrepo states need updating
2020 if f == b'.hgsubstate': # subrepo states need updating
2021 subrepoutil.submerge(
2021 subrepoutil.submerge(
2022 repo, wctx, mctx, wctx.ancestor(mctx), overwrite, labels
2022 repo, wctx, mctx, wctx.ancestor(mctx), overwrite, labels
2023 )
2023 )
2024 continue
2024 continue
2025 wctx[f].audit()
2025 wctx[f].audit()
2026 complete, r = ms.preresolve(f, wctx)
2026 complete, r = ms.preresolve(f, wctx)
2027 if not complete:
2027 if not complete:
2028 numupdates += 1
2028 numupdates += 1
2029 tocomplete.append((f, args, msg))
2029 tocomplete.append((f, args, msg))
2030
2030
2031 # merge
2031 # merge
2032 for f, args, msg in tocomplete:
2032 for f, args, msg in tocomplete:
2033 repo.ui.debug(b" %s: %s -> m (merge)\n" % (f, msg))
2033 repo.ui.debug(b" %s: %s -> m (merge)\n" % (f, msg))
2034 progress.increment(item=f, total=numupdates)
2034 progress.increment(item=f, total=numupdates)
2035 ms.resolve(f, wctx)
2035 ms.resolve(f, wctx)
2036
2036
2037 finally:
2037 finally:
2038 ms.commit()
2038 ms.commit()
2039
2039
2040 unresolved = ms.unresolvedcount()
2040 unresolved = ms.unresolvedcount()
2041
2041
2042 if (
2042 if (
2043 usemergedriver
2043 usemergedriver
2044 and not unresolved
2044 and not unresolved
2045 and ms.mdstate() != MERGE_DRIVER_STATE_SUCCESS
2045 and ms.mdstate() != MERGE_DRIVER_STATE_SUCCESS
2046 ):
2046 ):
2047 if not driverconclude(repo, ms, wctx, labels=labels):
2047 if not driverconclude(repo, ms, wctx, labels=labels):
2048 # XXX setting unresolved to at least 1 is a hack to make sure we
2048 # XXX setting unresolved to at least 1 is a hack to make sure we
2049 # error out
2049 # error out
2050 unresolved = max(unresolved, 1)
2050 unresolved = max(unresolved, 1)
2051
2051
2052 ms.commit()
2052 ms.commit()
2053
2053
2054 msupdated, msmerged, msremoved = ms.counts()
2054 msupdated, msmerged, msremoved = ms.counts()
2055 updated += msupdated
2055 updated += msupdated
2056 merged += msmerged
2056 merged += msmerged
2057 removed += msremoved
2057 removed += msremoved
2058
2058
2059 extraactions = ms.actions()
2059 extraactions = ms.actions()
2060 if extraactions:
2060 if extraactions:
2061 mfiles = set(a[0] for a in actions[ACTION_MERGE])
2061 mfiles = set(a[0] for a in actions[ACTION_MERGE])
2062 for k, acts in pycompat.iteritems(extraactions):
2062 for k, acts in pycompat.iteritems(extraactions):
2063 actions[k].extend(acts)
2063 actions[k].extend(acts)
2064 if k == ACTION_GET and wantfiledata:
2064 if k == ACTION_GET and wantfiledata:
2065 # no filedata until mergestate is updated to provide it
2065 # no filedata until mergestate is updated to provide it
2066 for a in acts:
2066 for a in acts:
2067 getfiledata[a[0]] = None
2067 getfiledata[a[0]] = None
2068 # Remove these files from actions[ACTION_MERGE] as well. This is
2068 # Remove these files from actions[ACTION_MERGE] as well. This is
2069 # important because in recordupdates, files in actions[ACTION_MERGE]
2069 # important because in recordupdates, files in actions[ACTION_MERGE]
2070 # are processed after files in other actions, and the merge driver
2070 # are processed after files in other actions, and the merge driver
2071 # might add files to those actions via extraactions above. This can
2071 # might add files to those actions via extraactions above. This can
2072 # lead to a file being recorded twice, with poor results. This is
2072 # lead to a file being recorded twice, with poor results. This is
2073 # especially problematic for actions[ACTION_REMOVE] (currently only
2073 # especially problematic for actions[ACTION_REMOVE] (currently only
2074 # possible with the merge driver in the initial merge process;
2074 # possible with the merge driver in the initial merge process;
2075 # interrupted merges don't go through this flow).
2075 # interrupted merges don't go through this flow).
2076 #
2076 #
2077 # The real fix here is to have indexes by both file and action so
2077 # The real fix here is to have indexes by both file and action so
2078 # that when the action for a file is changed it is automatically
2078 # that when the action for a file is changed it is automatically
2079 # reflected in the other action lists. But that involves a more
2079 # reflected in the other action lists. But that involves a more
2080 # complex data structure, so this will do for now.
2080 # complex data structure, so this will do for now.
2081 #
2081 #
2082 # We don't need to do the same operation for 'dc' and 'cd' because
2082 # We don't need to do the same operation for 'dc' and 'cd' because
2083 # those lists aren't consulted again.
2083 # those lists aren't consulted again.
2084 mfiles.difference_update(a[0] for a in acts)
2084 mfiles.difference_update(a[0] for a in acts)
2085
2085
2086 actions[ACTION_MERGE] = [
2086 actions[ACTION_MERGE] = [
2087 a for a in actions[ACTION_MERGE] if a[0] in mfiles
2087 a for a in actions[ACTION_MERGE] if a[0] in mfiles
2088 ]
2088 ]
2089
2089
2090 progress.complete()
2090 progress.complete()
2091 assert len(getfiledata) == (len(actions[ACTION_GET]) if wantfiledata else 0)
2091 assert len(getfiledata) == (len(actions[ACTION_GET]) if wantfiledata else 0)
2092 return updateresult(updated, merged, removed, unresolved), getfiledata
2092 return updateresult(updated, merged, removed, unresolved), getfiledata
2093
2093
2094
2094
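(Editorial note: both ``applyupdates()`` above and ``recordupdates()`` below walk the same dictionary-of-lists structure, mapping an ``ACTION_*`` constant to a list of ``(file, args, msg)`` tuples, where the shape of ``args`` depends on the action. A minimal sketch of that table, with file names invented purely for illustration, following the shapes used elsewhere in this file::

    actions = emptyactions()
    # ACTION_GET args are (flags, backup); b'' means no special flags
    actions[ACTION_GET].append((b'a.txt', (b'', False), b'remote created'))
    # ACTION_REMOVE carries no args
    actions[ACTION_REMOVE].append((b'b.txt', None, b'other deleted'))
)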
2095 def recordupdates(repo, actions, branchmerge, getfiledata):
2095 def recordupdates(repo, actions, branchmerge, getfiledata):
2096 """record merge actions to the dirstate"""
2096 """record merge actions to the dirstate"""
2097 # remove (must come first)
2097 # remove (must come first)
2098 for f, args, msg in actions.get(ACTION_REMOVE, []):
2098 for f, args, msg in actions.get(ACTION_REMOVE, []):
2099 if branchmerge:
2099 if branchmerge:
2100 repo.dirstate.remove(f)
2100 repo.dirstate.remove(f)
2101 else:
2101 else:
2102 repo.dirstate.drop(f)
2102 repo.dirstate.drop(f)
2103
2103
2104 # forget (must come first)
2104 # forget (must come first)
2105 for f, args, msg in actions.get(ACTION_FORGET, []):
2105 for f, args, msg in actions.get(ACTION_FORGET, []):
2106 repo.dirstate.drop(f)
2106 repo.dirstate.drop(f)
2107
2107
2108 # resolve path conflicts
2108 # resolve path conflicts
2109 for f, args, msg in actions.get(ACTION_PATH_CONFLICT_RESOLVE, []):
2109 for f, args, msg in actions.get(ACTION_PATH_CONFLICT_RESOLVE, []):
2110 (f0,) = args
2110 (f0,) = args
2111 origf0 = repo.dirstate.copied(f0) or f0
2111 origf0 = repo.dirstate.copied(f0) or f0
2112 repo.dirstate.add(f)
2112 repo.dirstate.add(f)
2113 repo.dirstate.copy(origf0, f)
2113 repo.dirstate.copy(origf0, f)
2114 if f0 == origf0:
2114 if f0 == origf0:
2115 repo.dirstate.remove(f0)
2115 repo.dirstate.remove(f0)
2116 else:
2116 else:
2117 repo.dirstate.drop(f0)
2117 repo.dirstate.drop(f0)
2118
2118
2119 # re-add
2119 # re-add
2120 for f, args, msg in actions.get(ACTION_ADD, []):
2120 for f, args, msg in actions.get(ACTION_ADD, []):
2121 repo.dirstate.add(f)
2121 repo.dirstate.add(f)
2122
2122
2123 # re-add/mark as modified
2123 # re-add/mark as modified
2124 for f, args, msg in actions.get(ACTION_ADD_MODIFIED, []):
2124 for f, args, msg in actions.get(ACTION_ADD_MODIFIED, []):
2125 if branchmerge:
2125 if branchmerge:
2126 repo.dirstate.normallookup(f)
2126 repo.dirstate.normallookup(f)
2127 else:
2127 else:
2128 repo.dirstate.add(f)
2128 repo.dirstate.add(f)
2129
2129
2130 # exec change
2130 # exec change
2131 for f, args, msg in actions.get(ACTION_EXEC, []):
2131 for f, args, msg in actions.get(ACTION_EXEC, []):
2132 repo.dirstate.normallookup(f)
2132 repo.dirstate.normallookup(f)
2133
2133
2134 # keep
2134 # keep
2135 for f, args, msg in actions.get(ACTION_KEEP, []):
2135 for f, args, msg in actions.get(ACTION_KEEP, []):
2136 pass
2136 pass
2137
2137
2138 # get
2138 # get
2139 for f, args, msg in actions.get(ACTION_GET, []):
2139 for f, args, msg in actions.get(ACTION_GET, []):
2140 if branchmerge:
2140 if branchmerge:
2141 repo.dirstate.otherparent(f)
2141 repo.dirstate.otherparent(f)
2142 else:
2142 else:
2143 parentfiledata = getfiledata[f] if getfiledata else None
2143 parentfiledata = getfiledata[f] if getfiledata else None
2144 repo.dirstate.normal(f, parentfiledata=parentfiledata)
2144 repo.dirstate.normal(f, parentfiledata=parentfiledata)
2145
2145
2146 # merge
2146 # merge
2147 for f, args, msg in actions.get(ACTION_MERGE, []):
2147 for f, args, msg in actions.get(ACTION_MERGE, []):
2148 f1, f2, fa, move, anc = args
2148 f1, f2, fa, move, anc = args
2149 if branchmerge:
2149 if branchmerge:
2150 # We've done a branch merge, mark this file as merged
2150 # We've done a branch merge, mark this file as merged
2151 # so that we properly record the merger later
2151 # so that we properly record the merger later
2152 repo.dirstate.merge(f)
2152 repo.dirstate.merge(f)
2153 if f1 != f2: # copy/rename
2153 if f1 != f2: # copy/rename
2154 if move:
2154 if move:
2155 repo.dirstate.remove(f1)
2155 repo.dirstate.remove(f1)
2156 if f1 != f:
2156 if f1 != f:
2157 repo.dirstate.copy(f1, f)
2157 repo.dirstate.copy(f1, f)
2158 else:
2158 else:
2159 repo.dirstate.copy(f2, f)
2159 repo.dirstate.copy(f2, f)
2160 else:
2160 else:
2161 # We've update-merged a locally modified file, so
2161 # We've update-merged a locally modified file, so
2162 # we set the dirstate to emulate a normal checkout
2162 # we set the dirstate to emulate a normal checkout
2163 # of that file some time in the past. Thus our
2163 # of that file some time in the past. Thus our
2164 # merge will appear as a normal local file
2164 # merge will appear as a normal local file
2165 # modification.
2165 # modification.
2166 if f2 == f: # file not locally copied/moved
2166 if f2 == f: # file not locally copied/moved
2167 repo.dirstate.normallookup(f)
2167 repo.dirstate.normallookup(f)
2168 if move:
2168 if move:
2169 repo.dirstate.drop(f1)
2169 repo.dirstate.drop(f1)
2170
2170
2171 # directory rename, move local
2171 # directory rename, move local
2172 for f, args, msg in actions.get(ACTION_DIR_RENAME_MOVE_LOCAL, []):
2172 for f, args, msg in actions.get(ACTION_DIR_RENAME_MOVE_LOCAL, []):
2173 f0, flag = args
2173 f0, flag = args
2174 if branchmerge:
2174 if branchmerge:
2175 repo.dirstate.add(f)
2175 repo.dirstate.add(f)
2176 repo.dirstate.remove(f0)
2176 repo.dirstate.remove(f0)
2177 repo.dirstate.copy(f0, f)
2177 repo.dirstate.copy(f0, f)
2178 else:
2178 else:
2179 repo.dirstate.normal(f)
2179 repo.dirstate.normal(f)
2180 repo.dirstate.drop(f0)
2180 repo.dirstate.drop(f0)
2181
2181
2182 # directory rename, get
2182 # directory rename, get
2183 for f, args, msg in actions.get(ACTION_LOCAL_DIR_RENAME_GET, []):
2183 for f, args, msg in actions.get(ACTION_LOCAL_DIR_RENAME_GET, []):
2184 f0, flag = args
2184 f0, flag = args
2185 if branchmerge:
2185 if branchmerge:
2186 repo.dirstate.add(f)
2186 repo.dirstate.add(f)
2187 repo.dirstate.copy(f0, f)
2187 repo.dirstate.copy(f0, f)
2188 else:
2188 else:
2189 repo.dirstate.normal(f)
2189 repo.dirstate.normal(f)
2190
2190
2191
2191
2192 UPDATECHECK_ABORT = b'abort' # handled at higher layers
2192 UPDATECHECK_ABORT = b'abort' # handled at higher layers
2193 UPDATECHECK_NONE = b'none'
2193 UPDATECHECK_NONE = b'none'
2194 UPDATECHECK_LINEAR = b'linear'
2194 UPDATECHECK_LINEAR = b'linear'
2195 UPDATECHECK_NO_CONFLICT = b'noconflict'
2195 UPDATECHECK_NO_CONFLICT = b'noconflict'
2196
2196
2197
2197
2198 def update(
2198 def update(
2199 repo,
2199 repo,
2200 node,
2200 node,
2201 branchmerge,
2201 branchmerge,
2202 force,
2202 force,
2203 ancestor=None,
2203 ancestor=None,
2204 mergeancestor=False,
2204 mergeancestor=False,
2205 labels=None,
2205 labels=None,
2206 matcher=None,
2206 matcher=None,
2207 mergeforce=False,
2207 mergeforce=False,
2208 updatecheck=None,
2208 updatecheck=None,
2209 wc=None,
2209 wc=None,
2210 ):
2210 ):
2211 """
2211 """
2212 Perform a merge between the working directory and the given node
2212 Perform a merge between the working directory and the given node
2213
2213
2214 node = the node to update to
2214 node = the node to update to
2215 branchmerge = whether to merge between branches
2215 branchmerge = whether to merge between branches
2216 force = whether to force branch merging or file overwriting
2216 force = whether to force branch merging or file overwriting
2217 matcher = a matcher to filter file lists (dirstate not updated)
2217 matcher = a matcher to filter file lists (dirstate not updated)
2218 mergeancestor = whether it is merging with an ancestor. If true,
2218 mergeancestor = whether it is merging with an ancestor. If true,
2219 we should accept the incoming changes for any prompts that occur.
2219 we should accept the incoming changes for any prompts that occur.
2220 If false, merging with an ancestor (fast-forward) is only allowed
2220 If false, merging with an ancestor (fast-forward) is only allowed
2221 between different named branches. This flag is used by the rebase extension
2221 between different named branches. This flag is used by the rebase extension
2222 as a temporary fix and should be avoided in general.
2222 as a temporary fix and should be avoided in general.
2223 labels = labels to use for base, local and other
2223 labels = labels to use for base, local and other
2224 mergeforce = whether the merge was run with 'merge --force' (deprecated): if
2224 mergeforce = whether the merge was run with 'merge --force' (deprecated): if
2225 this is True, then 'force' should be True as well.
2225 this is True, then 'force' should be True as well.
2226
2226
2227 The table below shows all the behaviors of the update command given the
2227 The table below shows all the behaviors of the update command given the
2228 -c/--check and -C/--clean or no options, whether the working directory is
2228 -c/--check and -C/--clean or no options, whether the working directory is
2229 dirty, whether a revision is specified, and the relationship of the parent
2229 dirty, whether a revision is specified, and the relationship of the parent
2230 rev to the target rev (linear or not). Match from top first. The -n
2230 rev to the target rev (linear or not). Match from top first. The -n
2231 option doesn't exist on the command line, but represents the
2231 option doesn't exist on the command line, but represents the
2232 experimental.updatecheck=noconflict option.
2232 experimental.updatecheck=noconflict option.
2233
2233
2234 This logic is tested by test-update-branches.t.
2234 This logic is tested by test-update-branches.t.
2235
2235
2236 -c -C -n -m dirty rev linear | result
2236 -c -C -n -m dirty rev linear | result
2237 y y * * * * * | (1)
2237 y y * * * * * | (1)
2238 y * y * * * * | (1)
2238 y * y * * * * | (1)
2239 y * * y * * * | (1)
2239 y * * y * * * | (1)
2240 * y y * * * * | (1)
2240 * y y * * * * | (1)
2241 * y * y * * * | (1)
2241 * y * y * * * | (1)
2242 * * y y * * * | (1)
2242 * * y y * * * | (1)
2243 * * * * * n n | x
2243 * * * * * n n | x
2244 * * * * n * * | ok
2244 * * * * n * * | ok
2245 n n n n y * y | merge
2245 n n n n y * y | merge
2246 n n n n y y n | (2)
2246 n n n n y y n | (2)
2247 n n n y y * * | merge
2247 n n n y y * * | merge
2248 n n y n y * * | merge if no conflict
2248 n n y n y * * | merge if no conflict
2249 n y n n y * * | discard
2249 n y n n y * * | discard
2250 y n n n y * * | (3)
2250 y n n n y * * | (3)
2251
2251
2252 x = can't happen
2252 x = can't happen
2253 * = don't-care
2253 * = don't-care
2254 1 = incompatible options (checked in commands.py)
2254 1 = incompatible options (checked in commands.py)
2255 2 = abort: uncommitted changes (commit or update --clean to discard changes)
2255 2 = abort: uncommitted changes (commit or update --clean to discard changes)
2256 3 = abort: uncommitted changes (checked in commands.py)
2256 3 = abort: uncommitted changes (checked in commands.py)
2257
2257
2258 The merge is performed inside ``wc``, a workingctx-like object. It defaults
2258 The merge is performed inside ``wc``, a workingctx-like object. It defaults
2259 to repo[None] if None is passed.
2259 to repo[None] if None is passed.
2260
2260
2261 Return the same tuple as applyupdates().
2261 Return the same tuple as applyupdates().
2262 """
2262 """
2263 # Avoid cycle.
2263 # Avoid cycle.
2264 from . import sparse
2264 from . import sparse
2265
2265
2266 # This function used to find the default destination if node was None, but
2266 # This function used to find the default destination if node was None, but
2267 # that's now in destutil.py.
2267 # that's now in destutil.py.
2268 assert node is not None
2268 assert node is not None
2269 if not branchmerge and not force:
2269 if not branchmerge and not force:
2270 # TODO: remove the default once all callers that pass branchmerge=False
2270 # TODO: remove the default once all callers that pass branchmerge=False
2271 # and force=False pass a value for updatecheck. We may want to allow
2271 # and force=False pass a value for updatecheck. We may want to allow
2272 # updatecheck='abort' to better support some of these callers.
2272 # updatecheck='abort' to better support some of these callers.
2273 if updatecheck is None:
2273 if updatecheck is None:
2274 updatecheck = UPDATECHECK_LINEAR
2274 updatecheck = UPDATECHECK_LINEAR
2275 if updatecheck not in (
2275 if updatecheck not in (
2276 UPDATECHECK_NONE,
2276 UPDATECHECK_NONE,
2277 UPDATECHECK_LINEAR,
2277 UPDATECHECK_LINEAR,
2278 UPDATECHECK_NO_CONFLICT,
2278 UPDATECHECK_NO_CONFLICT,
2279 ):
2279 ):
2280 raise ValueError(
2280 raise ValueError(
2281 r'Invalid updatecheck %r (can accept %r)'
2281 r'Invalid updatecheck %r (can accept %r)'
2282 % (
2282 % (
2283 updatecheck,
2283 updatecheck,
2284 (
2284 (
2285 UPDATECHECK_NONE,
2285 UPDATECHECK_NONE,
2286 UPDATECHECK_LINEAR,
2286 UPDATECHECK_LINEAR,
2287 UPDATECHECK_NO_CONFLICT,
2287 UPDATECHECK_NO_CONFLICT,
2288 ),
2288 ),
2289 )
2289 )
2290 )
2290 )
2291 # If we're doing a partial update, we need to skip updating
2291 # If we're doing a partial update, we need to skip updating
2292 # the dirstate, so make a note of any partial-ness to the
2292 # the dirstate, so make a note of any partial-ness to the
2293 # update here.
2293 # update here.
2294 if matcher is None or matcher.always():
2294 if matcher is None or matcher.always():
2295 partial = False
2295 partial = False
2296 else:
2296 else:
2297 partial = True
2297 partial = True
2298 with repo.wlock():
2298 with repo.wlock():
2299 if wc is None:
2299 if wc is None:
2300 wc = repo[None]
2300 wc = repo[None]
2301 pl = wc.parents()
2301 pl = wc.parents()
2302 p1 = pl[0]
2302 p1 = pl[0]
2303 p2 = repo[node]
2303 p2 = repo[node]
2304 if ancestor is not None:
2304 if ancestor is not None:
2305 pas = [repo[ancestor]]
2305 pas = [repo[ancestor]]
2306 else:
2306 else:
2307 if repo.ui.configlist(b'merge', b'preferancestor') == [b'*']:
2307 if repo.ui.configlist(b'merge', b'preferancestor') == [b'*']:
2308 cahs = repo.changelog.commonancestorsheads(p1.node(), p2.node())
2308 cahs = repo.changelog.commonancestorsheads(p1.node(), p2.node())
2309 pas = [repo[anc] for anc in (sorted(cahs) or [nullid])]
2309 pas = [repo[anc] for anc in (sorted(cahs) or [nullid])]
2310 else:
2310 else:
2311 pas = [p1.ancestor(p2, warn=branchmerge)]
2311 pas = [p1.ancestor(p2, warn=branchmerge)]
2312
2312
2313 fp1, fp2, xp1, xp2 = p1.node(), p2.node(), bytes(p1), bytes(p2)
2313 fp1, fp2, xp1, xp2 = p1.node(), p2.node(), bytes(p1), bytes(p2)
2314
2314
2315 overwrite = force and not branchmerge
2315 overwrite = force and not branchmerge
2316 ### check phase
2316 ### check phase
2317 if not overwrite:
2317 if not overwrite:
2318 if len(pl) > 1:
2318 if len(pl) > 1:
2319 raise error.Abort(_(b"outstanding uncommitted merge"))
2319 raise error.Abort(_(b"outstanding uncommitted merge"))
2320 ms = mergestate.read(repo)
2320 ms = mergestate.read(repo)
2321 if list(ms.unresolved()):
2321 if list(ms.unresolved()):
2322 raise error.Abort(
2322 raise error.Abort(
2323 _(b"outstanding merge conflicts"),
2323 _(b"outstanding merge conflicts"),
2324 hint=_(b"use 'hg resolve' to resolve"),
2324 hint=_(b"use 'hg resolve' to resolve"),
2325 )
2325 )
2326 if branchmerge:
2326 if branchmerge:
2327 if pas == [p2]:
2327 if pas == [p2]:
2328 raise error.Abort(
2328 raise error.Abort(
2329 _(
2329 _(
2330 b"merging with a working directory ancestor"
2330 b"merging with a working directory ancestor"
2331 b" has no effect"
2331 b" has no effect"
2332 )
2332 )
2333 )
2333 )
2334 elif pas == [p1]:
2334 elif pas == [p1]:
2335 if not mergeancestor and wc.branch() == p2.branch():
2335 if not mergeancestor and wc.branch() == p2.branch():
2336 raise error.Abort(
2336 raise error.Abort(
2337 _(b"nothing to merge"),
2337 _(b"nothing to merge"),
2338 hint=_(b"use 'hg update' or check 'hg heads'"),
2338 hint=_(b"use 'hg update' or check 'hg heads'"),
2339 )
2339 )
2340 if not force and (wc.files() or wc.deleted()):
2340 if not force and (wc.files() or wc.deleted()):
2341 raise error.Abort(
2341 raise error.Abort(
2342 _(b"uncommitted changes"),
2342 _(b"uncommitted changes"),
2343 hint=_(b"use 'hg status' to list changes"),
2343 hint=_(b"use 'hg status' to list changes"),
2344 )
2344 )
2345 if not wc.isinmemory():
2345 if not wc.isinmemory():
2346 for s in sorted(wc.substate):
2346 for s in sorted(wc.substate):
2347 wc.sub(s).bailifchanged()
2347 wc.sub(s).bailifchanged()
2348
2348
2349 elif not overwrite:
2349 elif not overwrite:
2350 if p1 == p2: # no-op update
2350 if p1 == p2: # no-op update
2351 # call the hooks and exit early
2351 # call the hooks and exit early
2352 repo.hook(b'preupdate', throw=True, parent1=xp2, parent2=b'')
2352 repo.hook(b'preupdate', throw=True, parent1=xp2, parent2=b'')
2353 repo.hook(b'update', parent1=xp2, parent2=b'', error=0)
2353 repo.hook(b'update', parent1=xp2, parent2=b'', error=0)
2354 return updateresult(0, 0, 0, 0)
2354 return updateresult(0, 0, 0, 0)
2355
2355
2356 if updatecheck == UPDATECHECK_LINEAR and pas not in (
2356 if updatecheck == UPDATECHECK_LINEAR and pas not in (
2357 [p1],
2357 [p1],
2358 [p2],
2358 [p2],
2359 ): # nonlinear
2359 ): # nonlinear
2360 dirty = wc.dirty(missing=True)
2360 dirty = wc.dirty(missing=True)
2361 if dirty:
2361 if dirty:
2362 # Branching is a bit strange here, to minimize the number of
2362 # Branching is a bit strange here, to minimize the number of
2363 # calls to obsutil.foreground.
2363 # calls to obsutil.foreground.
2364 foreground = obsutil.foreground(repo, [p1.node()])
2364 foreground = obsutil.foreground(repo, [p1.node()])
2365 # note: the <node> variable contains a random identifier
2365 # note: the <node> variable contains a random identifier
2366 if repo[node].node() in foreground:
2366 if repo[node].node() in foreground:
2367 pass # allow updating to successors
2367 pass # allow updating to successors
2368 else:
2368 else:
2369 msg = _(b"uncommitted changes")
2369 msg = _(b"uncommitted changes")
2370 hint = _(b"commit or update --clean to discard changes")
2370 hint = _(b"commit or update --clean to discard changes")
2371 raise error.UpdateAbort(msg, hint=hint)
2371 raise error.UpdateAbort(msg, hint=hint)
2372 else:
2372 else:
2373 # Allow jumping branches if clean and specific rev given
2373 # Allow jumping branches if clean and specific rev given
2374 pass
2374 pass
2375
2375
2376 if overwrite:
2376 if overwrite:
2377 pas = [wc]
2377 pas = [wc]
2378 elif not branchmerge:
2378 elif not branchmerge:
2379 pas = [p1]
2379 pas = [p1]
2380
2380
2381 # deprecated config: merge.followcopies
2381 # deprecated config: merge.followcopies
2382 followcopies = repo.ui.configbool(b'merge', b'followcopies')
2382 followcopies = repo.ui.configbool(b'merge', b'followcopies')
2383 if overwrite:
2383 if overwrite:
2384 followcopies = False
2384 followcopies = False
2385 elif not pas[0]:
2385 elif not pas[0]:
2386 followcopies = False
2386 followcopies = False
2387 if not branchmerge and not wc.dirty(missing=True):
2387 if not branchmerge and not wc.dirty(missing=True):
2388 followcopies = False
2388 followcopies = False
2389
2389
2390 ### calculate phase
2390 ### calculate phase
2391 actionbyfile, diverge, renamedelete = calculateupdates(
2391 actionbyfile, diverge, renamedelete = calculateupdates(
2392 repo,
2392 repo,
2393 wc,
2393 wc,
2394 p2,
2394 p2,
2395 pas,
2395 pas,
2396 branchmerge,
2396 branchmerge,
2397 force,
2397 force,
2398 mergeancestor,
2398 mergeancestor,
2399 followcopies,
2399 followcopies,
2400 matcher=matcher,
2400 matcher=matcher,
2401 mergeforce=mergeforce,
2401 mergeforce=mergeforce,
2402 )
2402 )
2403
2403
2404 if updatecheck == UPDATECHECK_NO_CONFLICT:
2404 if updatecheck == UPDATECHECK_NO_CONFLICT:
2405 for f, (m, args, msg) in pycompat.iteritems(actionbyfile):
2405 for f, (m, args, msg) in pycompat.iteritems(actionbyfile):
2406 if m not in (
2406 if m not in (
2407 ACTION_GET,
2407 ACTION_GET,
2408 ACTION_KEEP,
2408 ACTION_KEEP,
2409 ACTION_EXEC,
2409 ACTION_EXEC,
2410 ACTION_REMOVE,
2410 ACTION_REMOVE,
2411 ACTION_PATH_CONFLICT_RESOLVE,
2411 ACTION_PATH_CONFLICT_RESOLVE,
2412 ):
2412 ):
2413 msg = _(b"conflicting changes")
2413 msg = _(b"conflicting changes")
2414 hint = _(b"commit or update --clean to discard changes")
2414 hint = _(b"commit or update --clean to discard changes")
2415 raise error.Abort(msg, hint=hint)
2415 raise error.Abort(msg, hint=hint)
2416
2416
2417 # Prompt and create actions. Most of this is in the resolve phase
2417 # Prompt and create actions. Most of this is in the resolve phase
2418 # already, but we can't handle .hgsubstate in filemerge or
2418 # already, but we can't handle .hgsubstate in filemerge or
2419 # subrepoutil.submerge yet so we have to keep prompting for it.
2419 # subrepoutil.submerge yet so we have to keep prompting for it.
2420 if b'.hgsubstate' in actionbyfile:
2420 if b'.hgsubstate' in actionbyfile:
2421 f = b'.hgsubstate'
2421 f = b'.hgsubstate'
2422 m, args, msg = actionbyfile[f]
2422 m, args, msg = actionbyfile[f]
2423 prompts = filemerge.partextras(labels)
2423 prompts = filemerge.partextras(labels)
2424 prompts[b'f'] = f
2424 prompts[b'f'] = f
2425 if m == ACTION_CHANGED_DELETED:
2425 if m == ACTION_CHANGED_DELETED:
2426 if repo.ui.promptchoice(
2426 if repo.ui.promptchoice(
2427 _(
2427 _(
2428 b"local%(l)s changed %(f)s which other%(o)s deleted\n"
2428 b"local%(l)s changed %(f)s which other%(o)s deleted\n"
2429 b"use (c)hanged version or (d)elete?"
2429 b"use (c)hanged version or (d)elete?"
2430 b"$$ &Changed $$ &Delete"
2430 b"$$ &Changed $$ &Delete"
2431 )
2431 )
2432 % prompts,
2432 % prompts,
2433 0,
2433 0,
2434 ):
2434 ):
2435 actionbyfile[f] = (ACTION_REMOVE, None, b'prompt delete')
2435 actionbyfile[f] = (ACTION_REMOVE, None, b'prompt delete')
2436 elif f in p1:
2436 elif f in p1:
2437 actionbyfile[f] = (
2437 actionbyfile[f] = (
2438 ACTION_ADD_MODIFIED,
2438 ACTION_ADD_MODIFIED,
2439 None,
2439 None,
2440 b'prompt keep',
2440 b'prompt keep',
2441 )
2441 )
2442 else:
2442 else:
2443 actionbyfile[f] = (ACTION_ADD, None, b'prompt keep')
2443 actionbyfile[f] = (ACTION_ADD, None, b'prompt keep')
2444 elif m == ACTION_DELETED_CHANGED:
2444 elif m == ACTION_DELETED_CHANGED:
2445 f1, f2, fa, move, anc = args
2445 f1, f2, fa, move, anc = args
2446 flags = p2[f2].flags()
2446 flags = p2[f2].flags()
2447 if (
2447 if (
2448 repo.ui.promptchoice(
2448 repo.ui.promptchoice(
2449 _(
2449 _(
2450 b"other%(o)s changed %(f)s which local%(l)s deleted\n"
2450 b"other%(o)s changed %(f)s which local%(l)s deleted\n"
2451 b"use (c)hanged version or leave (d)eleted?"
2451 b"use (c)hanged version or leave (d)eleted?"
2452 b"$$ &Changed $$ &Deleted"
2452 b"$$ &Changed $$ &Deleted"
2453 )
2453 )
2454 % prompts,
2454 % prompts,
2455 0,
2455 0,
2456 )
2456 )
2457 == 0
2457 == 0
2458 ):
2458 ):
2459 actionbyfile[f] = (
2459 actionbyfile[f] = (
2460 ACTION_GET,
2460 ACTION_GET,
2461 (flags, False),
2461 (flags, False),
2462 b'prompt recreating',
2462 b'prompt recreating',
2463 )
2463 )
2464 else:
2464 else:
2465 del actionbyfile[f]
2465 del actionbyfile[f]
2466
2466
2467 # Convert to dictionary-of-lists format
2467 # Convert to dictionary-of-lists format
2468 actions = emptyactions()
2468 actions = emptyactions()
2469 for f, (m, args, msg) in pycompat.iteritems(actionbyfile):
2469 for f, (m, args, msg) in pycompat.iteritems(actionbyfile):
2470 if m not in actions:
2470 if m not in actions:
2471 actions[m] = []
2471 actions[m] = []
2472 actions[m].append((f, args, msg))
2472 actions[m].append((f, args, msg))
2473
2473
2474 if not util.fscasesensitive(repo.path):
2474 if not util.fscasesensitive(repo.path):
2475 # check collision between files only in p2 for clean update
2475 # check collision between files only in p2 for clean update
2476 if not branchmerge and (
2476 if not branchmerge and (
2477 force or not wc.dirty(missing=True, branch=False)
2477 force or not wc.dirty(missing=True, branch=False)
2478 ):
2478 ):
2479 _checkcollision(repo, p2.manifest(), None)
2479 _checkcollision(repo, p2.manifest(), None)
2480 else:
2480 else:
2481 _checkcollision(repo, wc.manifest(), actions)
2481 _checkcollision(repo, wc.manifest(), actions)
2482
2482
2483 # divergent renames
2483 # divergent renames
2484 for f, fl in sorted(pycompat.iteritems(diverge)):
2484 for f, fl in sorted(pycompat.iteritems(diverge)):
2485 repo.ui.warn(
2485 repo.ui.warn(
2486 _(
2486 _(
2487 b"note: possible conflict - %s was renamed "
2487 b"note: possible conflict - %s was renamed "
2488 b"multiple times to:\n"
2488 b"multiple times to:\n"
2489 )
2489 )
2490 % f
2490 % f
2491 )
2491 )
2492 for nf in sorted(fl):
2492 for nf in sorted(fl):
2493 repo.ui.warn(b" %s\n" % nf)
2493 repo.ui.warn(b" %s\n" % nf)
2494
2494
2495 # rename and delete
2495 # rename and delete
2496 for f, fl in sorted(pycompat.iteritems(renamedelete)):
2496 for f, fl in sorted(pycompat.iteritems(renamedelete)):
2497 repo.ui.warn(
2497 repo.ui.warn(
2498 _(
2498 _(
2499 b"note: possible conflict - %s was deleted "
2499 b"note: possible conflict - %s was deleted "
2500 b"and renamed to:\n"
2500 b"and renamed to:\n"
2501 )
2501 )
2502 % f
2502 % f
2503 )
2503 )
2504 for nf in sorted(fl):
2504 for nf in sorted(fl):
2505 repo.ui.warn(b" %s\n" % nf)
2505 repo.ui.warn(b" %s\n" % nf)
2506
2506
2507 ### apply phase
2507 ### apply phase
2508 if not branchmerge: # just jump to the new rev
2508 if not branchmerge: # just jump to the new rev
2509 fp1, fp2, xp1, xp2 = fp2, nullid, xp2, b''
2509 fp1, fp2, xp1, xp2 = fp2, nullid, xp2, b''
2510 if not partial and not wc.isinmemory():
2510 if not partial and not wc.isinmemory():
2511 repo.hook(b'preupdate', throw=True, parent1=xp1, parent2=xp2)
2511 repo.hook(b'preupdate', throw=True, parent1=xp1, parent2=xp2)
2512 # note that we're in the middle of an update
2512 # note that we're in the middle of an update
2513 repo.vfs.write(b'updatestate', p2.hex())
2513 repo.vfs.write(b'updatestate', p2.hex())
2514
2514
2515 # Advertise fsmonitor when its presence could be useful.
2515 # Advertise fsmonitor when its presence could be useful.
2516 #
2516 #
2517 # We only advertise when performing an update from an empty working
2517 # We only advertise when performing an update from an empty working
2518 # directory. This typically only occurs during initial clone.
2518 # directory. This typically only occurs during initial clone.
2519 #
2519 #
2520 # We give users a mechanism to disable the warning in case it is
2520 # We give users a mechanism to disable the warning in case it is
2521 # annoying.
2521 # annoying.
2522 #
2522 #
2523 # We only allow on Linux and MacOS because that's where fsmonitor is
2523 # We only allow on Linux and MacOS because that's where fsmonitor is
2524 # considered stable.
2524 # considered stable.
2525 fsmonitorwarning = repo.ui.configbool(b'fsmonitor', b'warn_when_unused')
2525 fsmonitorwarning = repo.ui.configbool(b'fsmonitor', b'warn_when_unused')
2526 fsmonitorthreshold = repo.ui.configint(
2526 fsmonitorthreshold = repo.ui.configint(
2527 b'fsmonitor', b'warn_update_file_count'
2527 b'fsmonitor', b'warn_update_file_count'
2528 )
2528 )
2529 try:
2529 try:
2530 # avoid cycle: extensions -> cmdutil -> merge
2530 # avoid cycle: extensions -> cmdutil -> merge
2531 from . import extensions
2531 from . import extensions
2532
2532
2533 extensions.find(b'fsmonitor')
2533 extensions.find(b'fsmonitor')
2534 fsmonitorenabled = repo.ui.config(b'fsmonitor', b'mode') != b'off'
2534 fsmonitorenabled = repo.ui.config(b'fsmonitor', b'mode') != b'off'
2535 # We intentionally don't look at whether fsmonitor has disabled
2535 # We intentionally don't look at whether fsmonitor has disabled
2536 # itself because a) fsmonitor may have already printed a warning
2536 # itself because a) fsmonitor may have already printed a warning
2537 # b) we only care about the config state here.
2537 # b) we only care about the config state here.
2538 except KeyError:
2538 except KeyError:
2539 fsmonitorenabled = False
2539 fsmonitorenabled = False
2540
2540
2541 if (
2541 if (
2542 fsmonitorwarning
2542 fsmonitorwarning
2543 and not fsmonitorenabled
2543 and not fsmonitorenabled
2544 and p1.node() == nullid
2544 and p1.node() == nullid
2545 and len(actions[ACTION_GET]) >= fsmonitorthreshold
2545 and len(actions[ACTION_GET]) >= fsmonitorthreshold
2546 and pycompat.sysplatform.startswith((b'linux', b'darwin'))
2546 and pycompat.sysplatform.startswith((b'linux', b'darwin'))
2547 ):
2547 ):
2548 repo.ui.warn(
2548 repo.ui.warn(
2549 _(
2549 _(
2550 b'(warning: large working directory being used without '
2550 b'(warning: large working directory being used without '
2551 b'fsmonitor enabled; enable fsmonitor to improve performance; '
2551 b'fsmonitor enabled; enable fsmonitor to improve performance; '
2552 b'see "hg help -e fsmonitor")\n'
2552 b'see "hg help -e fsmonitor")\n'
2553 )
2553 )
2554 )
2554 )
2555
2555
2556 updatedirstate = not partial and not wc.isinmemory()
2556 updatedirstate = not partial and not wc.isinmemory()
2557 wantfiledata = updatedirstate and not branchmerge
2557 wantfiledata = updatedirstate and not branchmerge
2558 stats, getfiledata = applyupdates(
2558 stats, getfiledata = applyupdates(
2559 repo, actions, wc, p2, overwrite, wantfiledata, labels=labels
2559 repo, actions, wc, p2, overwrite, wantfiledata, labels=labels
2560 )
2560 )
2561
2561
2562 if updatedirstate:
2562 if updatedirstate:
2563 with repo.dirstate.parentchange():
2563 with repo.dirstate.parentchange():
2564 repo.setparents(fp1, fp2)
2564 repo.setparents(fp1, fp2)
2565 recordupdates(repo, actions, branchmerge, getfiledata)
2565 recordupdates(repo, actions, branchmerge, getfiledata)
2566 # update completed, clear state
2566 # update completed, clear state
2567 util.unlink(repo.vfs.join(b'updatestate'))
2567 util.unlink(repo.vfs.join(b'updatestate'))
2568
2568
2569 if not branchmerge:
2569 if not branchmerge:
2570 repo.dirstate.setbranch(p2.branch())
2570 repo.dirstate.setbranch(p2.branch())
2571
2571
2572 # If we're updating to a location, clean up any stale temporary includes
2572 # If we're updating to a location, clean up any stale temporary includes
2573 # (ex: this happens during hg rebase --abort).
2573 # (ex: this happens during hg rebase --abort).
2574 if not branchmerge:
2574 if not branchmerge:
2575 sparse.prunetemporaryincludes(repo)
2575 sparse.prunetemporaryincludes(repo)
2576
2576
2577 if not partial:
2577 if not partial:
2578 repo.hook(
2578 repo.hook(
2579 b'update', parent1=xp1, parent2=xp2, error=stats.unresolvedcount
2579 b'update', parent1=xp1, parent2=xp2, error=stats.unresolvedcount
2580 )
2580 )
2581 return stats
2581 return stats
2582
2582
2583
2583
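(Editorial note: as a rough usage sketch, not an authoritative recipe -- assuming ``repo`` is a ``localrepository`` and ``node`` the target changeset -- a linear update and a branch merge differ only in the flags documented in the table above::

    from mercurial import merge as mergemod

    # plain checkout of `node`; abort on a dirty, nonlinear update
    mergemod.update(repo, node, branchmerge=False, force=False,
                    updatecheck=mergemod.UPDATECHECK_LINEAR)

    # merge `node` into the working directory instead
    mergemod.update(repo, node, branchmerge=True, force=False,
                    labels=[b'working copy', b'merge rev'])
)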
2584 def graft(
2584 def graft(
2585 repo, ctx, base, labels=None, keepparent=False, keepconflictparent=False
2585 repo, ctx, base, labels=None, keepparent=False, keepconflictparent=False
2586 ):
2586 ):
2587 """Do a graft-like merge.
2587 """Do a graft-like merge.
2588
2588
2589 This is a merge where the merge ancestor is chosen such that one
2589 This is a merge where the merge ancestor is chosen such that one
2590 or more changesets are grafted onto the current changeset. In
2590 or more changesets are grafted onto the current changeset. In
2591 addition to the merge, this fixes up the dirstate to include only
2591 addition to the merge, this fixes up the dirstate to include only
2592 a single parent (if keepparent is False) and tries to duplicate any
2592 a single parent (if keepparent is False) and tries to duplicate any
2593 renames/copies appropriately.
2593 renames/copies appropriately.
2594
2594
2595 ctx - changeset to rebase
2595 ctx - changeset to rebase
2596 base - merge base, usually ctx.p1()
2596 base - merge base, usually ctx.p1()
2597 labels - merge labels, e.g. ['local', 'graft']
2597 labels - merge labels, e.g. ['local', 'graft']
2598 keepparent - keep second parent if any
2598 keepparent - keep second parent if any
2599 keepconflictparent - if unresolved, keep parent used for the merge
2599 keepconflictparent - if unresolved, keep parent used for the merge
2600
2600
2601 """
2601 """
2602 # If we're grafting a descendant onto an ancestor, be sure to pass
2602 # If we're grafting a descendant onto an ancestor, be sure to pass
2603 # mergeancestor=True to update. This does two things: 1) allows the merge if
2603 # mergeancestor=True to update. This does two things: 1) allows the merge if
2604 # the destination is the same as the parent of the ctx (so we can use graft
2604 # the destination is the same as the parent of the ctx (so we can use graft
2605 # to copy commits), and 2) informs update that the incoming changes are
2605 # to copy commits), and 2) informs update that the incoming changes are
2606 # newer than the destination so it doesn't prompt about "remote changed foo
2606 # newer than the destination so it doesn't prompt about "remote changed foo
2607 # which local deleted".
2607 # which local deleted".
2608 wctx = repo[None]
2608 wctx = repo[None]
2609 pctx = wctx.p1()
2609 pctx = wctx.p1()
2610 mergeancestor = repo.changelog.isancestor(pctx.node(), ctx.node())
2610 mergeancestor = repo.changelog.isancestor(pctx.node(), ctx.node())
2611
2611
2612 stats = update(
2612 stats = update(
2613 repo,
2613 repo,
2614 ctx.node(),
2614 ctx.node(),
2615 True,
2615 True,
2616 True,
2616 True,
2617 base.node(),
2617 base.node(),
2618 mergeancestor=mergeancestor,
2618 mergeancestor=mergeancestor,
2619 labels=labels,
2619 labels=labels,
2620 )
2620 )
2621
2621
2622 if keepconflictparent and stats.unresolvedcount:
2622 if keepconflictparent and stats.unresolvedcount:
2623 pother = ctx.node()
2623 pother = ctx.node()
2624 else:
2624 else:
2625 pother = nullid
2625 pother = nullid
2626 parents = ctx.parents()
2626 parents = ctx.parents()
2627 if keepparent and len(parents) == 2 and base in parents:
2627 if keepparent and len(parents) == 2 and base in parents:
2628 parents.remove(base)
2628 parents.remove(base)
2629 pother = parents[0].node()
2629 pother = parents[0].node()
2630 # Never set both parents equal to each other
2630 # Never set both parents equal to each other
2631 if pother == pctx.node():
2631 if pother == pctx.node():
2632 pother = nullid
2632 pother = nullid
2633
2633
2634 with repo.dirstate.parentchange():
2634 with repo.dirstate.parentchange():
2635 repo.setparents(pctx.node(), pother)
2635 repo.setparents(pctx.node(), pother)
2636 repo.dirstate.write(repo.currenttransaction())
2636 repo.dirstate.write(repo.currenttransaction())
2637 # fix up dirstate for copies and renames
2637 # fix up dirstate for copies and renames
2638 copies.graftcopies(repo, wctx, ctx, base)
2638 copies.graftcopies(wctx, ctx, base)
2639 return stats
2639 return stats
2640
2640
2641
2641
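(Editorial note: a sketch of how a caller might drive ``graft()``, roughly what the graft command does; ``ctx`` is the changeset being grafted and error handling is elided::

    stats = graft(repo, ctx, ctx.p1(), labels=[b'local', b'graft'])
    if stats.unresolvedcount:
        # stop here and let the user run 'hg resolve', as 'hg graft' does
        ...
)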
2642 def purge(
2642 def purge(
2643 repo,
2643 repo,
2644 matcher,
2644 matcher,
2645 ignored=False,
2645 ignored=False,
2646 removeemptydirs=True,
2646 removeemptydirs=True,
2647 removefiles=True,
2647 removefiles=True,
2648 abortonerror=False,
2648 abortonerror=False,
2649 noop=False,
2649 noop=False,
2650 ):
2650 ):
2651 """Purge the working directory of untracked files.
2651 """Purge the working directory of untracked files.
2652
2652
2653 ``matcher`` is a matcher configured to scan the working directory -
2653 ``matcher`` is a matcher configured to scan the working directory -
2654 potentially a subset.
2654 potentially a subset.
2655
2655
2656 ``ignored`` controls whether ignored files should also be purged.
2656 ``ignored`` controls whether ignored files should also be purged.
2657
2657
2658 ``removeemptydirs`` controls whether empty directories should be removed.
2658 ``removeemptydirs`` controls whether empty directories should be removed.
2659
2659
2660 ``removefiles`` controls whether files are removed.
2660 ``removefiles`` controls whether files are removed.
2661
2661
2662 ``abortonerror`` causes an exception to be raised if an error occurs
2662 ``abortonerror`` causes an exception to be raised if an error occurs
2663 deleting a file or directory.
2663 deleting a file or directory.
2664
2664
2665 ``noop`` controls whether to actually remove files. If set, files are
2665 ``noop`` controls whether to actually remove files. If set, files are
2666 only reported, not removed.
2666 only reported, not removed.
2667
2667
2668 Returns an iterable of relative paths in the working directory that were
2668 Returns an iterable of relative paths in the working directory that were
2669 or would be removed.
2669 or would be removed.
2670 """
2670 """
2671
2671
2672 def remove(removefn, path):
2672 def remove(removefn, path):
2673 try:
2673 try:
2674 removefn(path)
2674 removefn(path)
2675 except OSError:
2675 except OSError:
2676 m = _(b'%s cannot be removed') % path
2676 m = _(b'%s cannot be removed') % path
2677 if abortonerror:
2677 if abortonerror:
2678 raise error.Abort(m)
2678 raise error.Abort(m)
2679 else:
2679 else:
2680 repo.ui.warn(_(b'warning: %s\n') % m)
2680 repo.ui.warn(_(b'warning: %s\n') % m)
2681
2681
2682 # There's no API to copy a matcher. So mutate the passed matcher and
2682 # There's no API to copy a matcher. So mutate the passed matcher and
2683 # restore it when we're done.
2683 # restore it when we're done.
2684 oldtraversedir = matcher.traversedir
2684 oldtraversedir = matcher.traversedir
2685
2685
2686 res = []
2686 res = []
2687
2687
2688 try:
2688 try:
2689 if removeemptydirs:
2689 if removeemptydirs:
2690 directories = []
2690 directories = []
2691 matcher.traversedir = directories.append
2691 matcher.traversedir = directories.append
2692
2692
2693 status = repo.status(match=matcher, ignored=ignored, unknown=True)
2693 status = repo.status(match=matcher, ignored=ignored, unknown=True)
2694
2694
2695 if removefiles:
2695 if removefiles:
2696 for f in sorted(status.unknown + status.ignored):
2696 for f in sorted(status.unknown + status.ignored):
2697 if not noop:
2697 if not noop:
2698 repo.ui.note(_(b'removing file %s\n') % f)
2698 repo.ui.note(_(b'removing file %s\n') % f)
2699 remove(repo.wvfs.unlink, f)
2699 remove(repo.wvfs.unlink, f)
2700 res.append(f)
2700 res.append(f)
2701
2701
2702 if removeemptydirs:
2702 if removeemptydirs:
2703 for f in sorted(directories, reverse=True):
2703 for f in sorted(directories, reverse=True):
2704 if matcher(f) and not repo.wvfs.listdir(f):
2704 if matcher(f) and not repo.wvfs.listdir(f):
2705 if not noop:
2705 if not noop:
2706 repo.ui.note(_(b'removing directory %s\n') % f)
2706 repo.ui.note(_(b'removing directory %s\n') % f)
2707 remove(repo.wvfs.rmdir, f)
2707 remove(repo.wvfs.rmdir, f)
2708 res.append(f)
2708 res.append(f)
2709
2709
2710 return res
2710 return res
2711
2711
2712 finally:
2712 finally:
2713 matcher.traversedir = oldtraversedir
2713 matcher.traversedir = oldtraversedir
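(Editorial note: a hedged usage sketch for ``purge()``, assuming ``repo`` is an open repository; per the docstring, ``noop=True`` makes this a dry run that only reports what would be removed::

    from mercurial import scmutil

    m = scmutil.match(repo[None])  # match the whole working directory
    for path in purge(repo, m, ignored=False, noop=True):
        repo.ui.write(b'would remove %s\n' % path)
)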
@@ -1,34 +1,35 b''
1 == New Features ==
1 == New Features ==
2
2
3 * Windows will process hgrc files in %PROGRAMDATA%\Mercurial\hgrc.d.
3 * Windows will process hgrc files in %PROGRAMDATA%\Mercurial\hgrc.d.
4
4
5
5
6 == New Experimental Features ==
6 == New Experimental Features ==
7
7
8
8
9 == Bug Fixes ==
9 == Bug Fixes ==
10
10
11 * The `indent()` template function was documented to not indent empty lines,
11 * The `indent()` template function was documented to not indent empty lines,
12 but it still indented the first line even if it was empty. It no longer does
12 but it still indented the first line even if it was empty. It no longer does
13 that.
13 that.
14
14
15 == Backwards Compatibility Changes ==
15 == Backwards Compatibility Changes ==
16
16
17
17
18 == Internal API Changes ==
18 == Internal API Changes ==
19
19
20 * Matcher instances no longer have an `explicitdir` property. Consider
20 * Matcher instances no longer have an `explicitdir` property. Consider
21 rewriting your code to use `repo.wvfs.isdir()` and/or
21 rewriting your code to use `repo.wvfs.isdir()` and/or
22 `ctx.hasdir()` instead. Also, the `traversedir` property is now
22 `ctx.hasdir()` instead. Also, the `traversedir` property is now
23 also called when only `explicitdir` used to be called. That may
23 also called when only `explicitdir` used to be called. That may
24 mean that you can simply remove the use of `explicitdir` if you
24 mean that you can simply remove the use of `explicitdir` if you
25 were already using `traversedir`.
25 were already using `traversedir`.
26
26
27 * The `revlog.nodemap` object has been merged into the `revlog.index` object.
27 * The `revlog.nodemap` object has been merged into the `revlog.index` object.
28 * `n in revlog.nodemap` becomes `revlog.index.has_node(n)`,
28 * `n in revlog.nodemap` becomes `revlog.index.has_node(n)`,
29 * `revlog.nodemap[n]` becomes `revlog.index.rev(n)`,
29 * `revlog.nodemap[n]` becomes `revlog.index.rev(n)`,
30 * `revlog.nodemap.get(n)` becomes `revlog.index.get_rev(n)`.
30 * `revlog.nodemap.get(n)` becomes `revlog.index.get_rev(n)`.
31
31
32 * `copies.duplicatecopies()` was renamed to
32 * `copies.duplicatecopies()` was renamed to
33 `copies.graftcopies()`. Its arguments changed from revision numbers
33 `copies.graftcopies()`. Its arguments changed from revision numbers
34 to context objects.
34 to context objects. It also lost its `repo` and `skip` arguments
35 (they should no longer be needed).
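(Editorial note: for extension authors, the nodemap and copy-tracing migrations above are mechanical. A hedged before/after sketch, where ``repo``, ``wctx``, ``ctx``, ``base`` and ``node`` stand in for the usual objects::

    # nodemap lookups move to the index object
    cl = repo.changelog
    if cl.index.has_node(node):      # was: node in cl.nodemap
        rev = cl.index.rev(node)     # was: cl.nodemap[node]

    # duplicatecopies() becomes graftcopies(), taking context objects
    # and no longer taking `repo` or `skip`
    from mercurial import copies
    copies.graftcopies(wctx, ctx, base)
)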