fix: make the order of the work queue deterministic...
Danny Hooper
r42176:8f427f7c default
@@ -1,687 +1,687 b''
# fix - rewrite file content in changesets and working copy
#
# Copyright 2018 Google LLC.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""rewrite file content in changesets or working copy (EXPERIMENTAL)

Provides a command that runs configured tools on the contents of modified files,
writing back any fixes to the working copy or replacing changesets.

Here is an example configuration that causes :hg:`fix` to apply automatic
formatting fixes to modified lines in C++ code::

  [fix]
  clang-format:command=clang-format --assume-filename={rootpath}
  clang-format:linerange=--lines={first}:{last}
  clang-format:pattern=set:**.cpp or **.hpp

The :command suboption forms the first part of the shell command that will be
used to fix a file. The content of the file is passed on standard input, and the
fixed file content is expected on standard output. Any output on standard error
will be displayed as a warning. If the exit status is not zero, the file will
not be affected. A placeholder warning is displayed if there is a non-zero exit
status but no standard error output. Some values may be substituted into the
command::

  {rootpath} The path of the file being fixed, relative to the repo root
  {basename} The name of the file being fixed, without the directory path

If the :linerange suboption is set, the tool will only be run if there are
changed lines in a file. The value of this suboption is appended to the shell
command once for every range of changed lines in the file. Some values may be
substituted into the command::

  {first} The 1-based line number of the first line in the modified range
  {last} The 1-based line number of the last line in the modified range

The :pattern suboption determines which files will be passed through each
configured tool. See :hg:`help patterns` for possible values. If there are file
arguments to :hg:`fix`, the intersection of these patterns is used.

There is also a configurable limit for the maximum size of file that will be
processed by :hg:`fix`::

  [fix]
  maxfilesize = 2MB

Normally, execution of configured tools will continue after a failure (indicated
by a non-zero exit status). It can also be configured to abort after the first
such failure, so that no files will be affected if any tool fails. This abort
will also cause :hg:`fix` to exit with a non-zero status::

  [fix]
  failure = abort

When multiple tools are configured to affect a file, they execute in an order
defined by the :priority suboption. The priority suboption has a default value
of zero for each tool. Tools are executed in order of descending priority. The
execution order of tools with equal priority is unspecified. For example, you
could use the 'sort' and 'head' utilities to keep only the 10 smallest numbers
in a text file by ensuring that 'sort' runs before 'head'::

  [fix]
  sort:command = sort -n
  head:command = head -n 10
  sort:pattern = numbers.txt
  head:pattern = numbers.txt
  sort:priority = 2
  head:priority = 1

To account for changes made by each tool, the line numbers used for incremental
formatting are recomputed before executing the next tool. So, each tool may see
different values for the arguments added by the :linerange suboption.
"""

from __future__ import absolute_import

import collections
import itertools
import os
import re
import subprocess

from mercurial.i18n import _
from mercurial.node import nullrev
from mercurial.node import wdirrev

from mercurial.utils import (
    procutil,
)

from mercurial import (
    cmdutil,
    context,
    copies,
    error,
    mdiff,
    merge,
    obsolete,
    pycompat,
    registrar,
    scmutil,
    util,
    worker,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

# Register the suboptions allowed for each configured fixer.
FIXER_ATTRS = {
    'command': None,
    'linerange': None,
    'fileset': None,
    'pattern': None,
    'priority': 0,
}

for key, default in FIXER_ATTRS.items():
    configitem('fix', '.*(:%s)?' % key, default=default, generic=True)

# A good default size allows most source code files to be fixed, but avoids
# letting fixer tools choke on huge inputs, which could be surprising to the
# user.
configitem('fix', 'maxfilesize', default='2MB')

# Allow fix commands to exit non-zero if an executed fixer tool exits non-zero.
# This helps users do shell scripts that stop when a fixer tool signals a
# problem.
configitem('fix', 'failure', default='continue')

def checktoolfailureaction(ui, message, hint=None):
    """Abort with 'message' if fix.failure=abort"""
    action = ui.config('fix', 'failure')
    if action not in ('continue', 'abort'):
        raise error.Abort(_('unknown fix.failure action: %s') % (action,),
                          hint=_('use "continue" or "abort"'))
    if action == 'abort':
        raise error.Abort(message, hint=hint)

allopt = ('', 'all', False, _('fix all non-public non-obsolete revisions'))
baseopt = ('', 'base', [], _('revisions to diff against (overrides automatic '
                             'selection, and applies to every revision being '
                             'fixed)'), _('REV'))
revopt = ('r', 'rev', [], _('revisions to fix'), _('REV'))
wdiropt = ('w', 'working-dir', False, _('fix the working directory'))
wholeopt = ('', 'whole', False, _('always fix every line of a file'))
usage = _('[OPTION]... [FILE]...')

@command('fix', [allopt, baseopt, revopt, wdiropt, wholeopt], usage,
        helpcategory=command.CATEGORY_FILE_CONTENTS)
def fix(ui, repo, *pats, **opts):
    """rewrite file content in changesets or working directory

    Runs any configured tools to fix the content of files. Only affects files
    with changes, unless file arguments are provided. Only affects changed lines
    of files, unless the --whole flag is used. Some tools may always affect the
    whole file regardless of --whole.

    If revisions are specified with --rev, those revisions will be checked, and
    they may be replaced with new revisions that have fixed file content. It is
    desirable to specify all descendants of each specified revision, so that the
    fixes propagate to the descendants. If all descendants are fixed at the same
    time, no merging, rebasing, or evolution will be required.

    If --working-dir is used, files with uncommitted changes in the working copy
    will be fixed. If the checked-out revision is also fixed, the working
    directory will update to the replacement revision.

    When determining what lines of each file to fix at each revision, the whole
    set of revisions being fixed is considered, so that fixes to earlier
    revisions are not forgotten in later ones. The --base flag can be used to
    override this default behavior, though it is not usually desirable to do so.
    """
    opts = pycompat.byteskwargs(opts)
    if opts['all']:
        if opts['rev']:
            raise error.Abort(_('cannot specify both "--rev" and "--all"'))
        opts['rev'] = ['not public() and not obsolete()']
        opts['working_dir'] = True
    with repo.wlock(), repo.lock(), repo.transaction('fix'):
        revstofix = getrevstofix(ui, repo, opts)
        basectxs = getbasectxs(repo, opts, revstofix)
        workqueue, numitems = getworkqueue(ui, repo, pats, opts, revstofix,
                                           basectxs)
        fixers = getfixers(ui)

        # There are no data dependencies between the workers fixing each file
        # revision, so we can use all available parallelism.
        def getfixes(items):
            for rev, path in items:
                ctx = repo[rev]
                olddata = ctx[path].data()
                newdata = fixfile(ui, opts, fixers, ctx, path, basectxs[rev])
                # Don't waste memory/time passing unchanged content back, but
                # produce one result per item either way.
                yield (rev, path, newdata if newdata != olddata else None)
        results = worker.worker(ui, 1.0, getfixes, tuple(), workqueue,
                                threadsafe=False)

        # We have to hold on to the data for each successor revision in memory
        # until all its parents are committed. We ensure this by committing and
        # freeing memory for the revisions in some topological order. This
        # leaves a little bit of memory efficiency on the table, but also makes
        # the tests deterministic. It might also be considered a feature since
        # it makes the results more easily reproducible.
        filedata = collections.defaultdict(dict)
        replacements = {}
        wdirwritten = False
        commitorder = sorted(revstofix, reverse=True)
        with ui.makeprogress(topic=_('fixing'), unit=_('files'),
                             total=sum(numitems.values())) as progress:
            for rev, path, newdata in results:
                progress.increment(item=path)
                if newdata is not None:
                    filedata[rev][path] = newdata
                numitems[rev] -= 1
                # Apply the fixes for this and any other revisions that are
                # ready and sitting at the front of the queue. Using a loop here
                # prevents the queue from being blocked by the first revision to
                # be ready out of order.
                while commitorder and not numitems[commitorder[-1]]:
                    rev = commitorder.pop()
                    ctx = repo[rev]
                    if rev == wdirrev:
                        writeworkingdir(repo, ctx, filedata[rev], replacements)
                        wdirwritten = bool(filedata[rev])
                    else:
                        replacerev(ui, repo, ctx, filedata[rev], replacements)
                    del filedata[rev]

        cleanup(repo, replacements, wdirwritten)

def cleanup(repo, replacements, wdirwritten):
    """Calls scmutil.cleanupnodes() with the given replacements.

    "replacements" is a dict from nodeid to nodeid, with one key and one value
    for every revision that was affected by fixing. This is slightly different
    from cleanupnodes().

    "wdirwritten" is a bool which tells whether the working copy was affected by
    fixing, since it has no entry in "replacements".

    Useful as a hook point for extending "hg fix" with output summarizing the
    effects of the command, though we choose not to output anything here.
    """
    replacements = {prec: [succ] for prec, succ in replacements.iteritems()}
    scmutil.cleanupnodes(repo, replacements, 'fix', fixphase=True)

def getworkqueue(ui, repo, pats, opts, revstofix, basectxs):
    """Constructs the list of files to be fixed at specific revisions

    It is up to the caller how to consume the work items, and the only
    dependence between them is that replacement revisions must be committed in
    topological order. Each work item represents a file in the working copy or
    in some revision that should be fixed and written back to the working copy
    or into a replacement revision.

    Work items for the same revision are grouped together, so that a worker
    pool starting with the first N items in parallel is likely to finish the
    first revision's work before other revisions. This can allow us to write
    the result to disk and reduce memory footprint. At time of writing, the
    partition strategy in worker.py seems favorable to this. We also sort the
    items by ascending revision number to match the order in which we commit
    the fixes later.
    """
    workqueue = []
    numitems = collections.defaultdict(int)
    maxfilesize = ui.configbytes('fix', 'maxfilesize')
    for rev in sorted(revstofix):
        fixctx = repo[rev]
        match = scmutil.match(fixctx, pats, opts)
-        for path in pathstofix(ui, repo, pats, opts, match, basectxs[rev],
-                               fixctx):
+        for path in sorted(pathstofix(
+                ui, repo, pats, opts, match, basectxs[rev], fixctx)):
            fctx = fixctx[path]
            if fctx.islink():
                continue
            if fctx.size() > maxfilesize:
                ui.warn(_('ignoring file larger than %s: %s\n') %
                        (util.bytecount(maxfilesize), path))
                continue
            workqueue.append((rev, path))
            numitems[rev] += 1
    return workqueue, numitems

def getrevstofix(ui, repo, opts):
    """Returns the set of revision numbers that should be fixed"""
    revs = set(scmutil.revrange(repo, opts['rev']))
    for rev in revs:
        checkfixablectx(ui, repo, repo[rev])
    if revs:
        cmdutil.checkunfinished(repo)
        checknodescendants(repo, revs)
    if opts.get('working_dir'):
        revs.add(wdirrev)
        if list(merge.mergestate.read(repo).unresolved()):
            raise error.Abort('unresolved conflicts', hint="use 'hg resolve'")
    if not revs:
        raise error.Abort(
            'no changesets specified', hint='use --rev or --working-dir')
    return revs

def checknodescendants(repo, revs):
    if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
        repo.revs('(%ld::) - (%ld)', revs, revs)):
        raise error.Abort(_('can only fix a changeset together '
                            'with all its descendants'))

def checkfixablectx(ui, repo, ctx):
    """Aborts if the revision shouldn't be replaced with a fixed one."""
    if not ctx.mutable():
        raise error.Abort('can\'t fix immutable changeset %s' %
                          (scmutil.formatchangeid(ctx),))
    if ctx.obsolete():
        # It would be better to actually check if the revision has a successor.
        allowdivergence = ui.configbool('experimental',
                                        'evolution.allowdivergence')
        if not allowdivergence:
            raise error.Abort('fixing obsolete revision could cause divergence')

def pathstofix(ui, repo, pats, opts, match, basectxs, fixctx):
    """Returns the set of files that should be fixed in a context

    The result depends on the base contexts; we include any file that has
    changed relative to any of the base contexts. Base contexts should be
    ancestors of the context being fixed.
    """
    files = set()
    for basectx in basectxs:
        stat = basectx.status(fixctx, match=match, listclean=bool(pats),
                              listunknown=bool(pats))
        files.update(
            set(itertools.chain(stat.added, stat.modified, stat.clean,
                                stat.unknown)))
    return files

347 def lineranges(opts, path, basectxs, fixctx, content2):
347 def lineranges(opts, path, basectxs, fixctx, content2):
348 """Returns the set of line ranges that should be fixed in a file
348 """Returns the set of line ranges that should be fixed in a file
349
349
350 Of the form [(10, 20), (30, 40)].
350 Of the form [(10, 20), (30, 40)].
351
351
352 This depends on the given base contexts; we must consider lines that have
352 This depends on the given base contexts; we must consider lines that have
353 changed versus any of the base contexts, and whether the file has been
353 changed versus any of the base contexts, and whether the file has been
354 renamed versus any of them.
354 renamed versus any of them.
355
355
356 Another way to understand this is that we exclude line ranges that are
356 Another way to understand this is that we exclude line ranges that are
357 common to the file in all base contexts.
357 common to the file in all base contexts.
358 """
358 """
359 if opts.get('whole'):
359 if opts.get('whole'):
360 # Return a range containing all lines. Rely on the diff implementation's
        # Return a range containing all lines. Rely on the diff implementation's
        # idea of how many lines are in the file, instead of reimplementing it.
        return difflineranges('', content2)

    rangeslist = []
    for basectx in basectxs:
        basepath = copies.pathcopies(basectx, fixctx).get(path, path)
        if basepath in basectx:
            content1 = basectx[basepath].data()
        else:
            content1 = ''
        rangeslist.extend(difflineranges(content1, content2))
    return unionranges(rangeslist)

def unionranges(rangeslist):
    """Return the union of some closed intervals

    >>> unionranges([])
    []
    >>> unionranges([(1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (2, 100)])
    [(1, 100)]
    >>> unionranges([(1, 99), (1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (40, 60)])
    [(1, 100)]
    >>> unionranges([(1, 49), (50, 100)])
    [(1, 100)]
    >>> unionranges([(1, 48), (50, 100)])
    [(1, 48), (50, 100)]
    >>> unionranges([(1, 2), (3, 4), (5, 6)])
    [(1, 6)]
    """
    rangeslist = sorted(set(rangeslist))
    unioned = []
    if rangeslist:
        unioned, rangeslist = [rangeslist[0]], rangeslist[1:]
    for a, b in rangeslist:
        c, d = unioned[-1]
        if a > d + 1:
            unioned.append((a, b))
        else:
            unioned[-1] = (c, max(b, d))
    return unioned
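The interval merge above can be exercised on its own. The following is a minimal standalone sketch of the same closed-interval union (the function name is illustrative, not part of this extension); note the `a <= d + 1` test, which coalesces intervals that merely abut, because these are closed line ranges:

```python
def union_closed_intervals(intervals):
    """Merge closed integer intervals, coalescing overlapping or
    adjacent ones, mirroring the logic of unionranges() above."""
    result = []
    for a, b in sorted(set(intervals)):
        if result and a <= result[-1][1] + 1:
            # Overlaps or abuts the previous interval: extend it.
            c, d = result[-1]
            result[-1] = (c, max(b, d))
        else:
            result.append((a, b))
    return result

print(union_closed_intervals([(1, 2), (3, 4), (5, 6)]))  # [(1, 6)]
print(union_closed_intervals([(1, 48), (50, 100)]))      # [(1, 48), (50, 100)]
```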

def difflineranges(content1, content2):
    """Return list of line number ranges in content2 that differ from content1.

    Line numbers are 1-based. The numbers are the first and last line contained
    in the range. Single-line ranges have the same line number for the first and
    last line. Excludes any empty ranges that result from lines that are only
    present in content1. Relies on mdiff's idea of where the line endings are in
    the string.

    >>> from mercurial import pycompat
    >>> lines = lambda s: b'\\n'.join([c for c in pycompat.iterbytestr(s)])
    >>> difflineranges2 = lambda a, b: difflineranges(lines(a), lines(b))
    >>> difflineranges2(b'', b'')
    []
    >>> difflineranges2(b'a', b'')
    []
    >>> difflineranges2(b'', b'A')
    [(1, 1)]
    >>> difflineranges2(b'a', b'a')
    []
    >>> difflineranges2(b'a', b'A')
    [(1, 1)]
    >>> difflineranges2(b'ab', b'')
    []
    >>> difflineranges2(b'', b'AB')
    [(1, 2)]
    >>> difflineranges2(b'abc', b'ac')
    []
    >>> difflineranges2(b'ab', b'aCb')
    [(2, 2)]
    >>> difflineranges2(b'abc', b'aBc')
    [(2, 2)]
    >>> difflineranges2(b'ab', b'AB')
    [(1, 2)]
    >>> difflineranges2(b'abcde', b'aBcDe')
    [(2, 2), (4, 4)]
    >>> difflineranges2(b'abcde', b'aBCDe')
    [(2, 4)]
    """
    ranges = []
    for lines, kind in mdiff.allblocks(content1, content2):
        firstline, lastline = lines[2:4]
        if kind == '!' and firstline != lastline:
            ranges.append((firstline + 1, lastline))
    return ranges

def getbasectxs(repo, opts, revstofix):
    """Returns a map of the base contexts for each revision

    The base contexts determine which lines are considered modified when we
    attempt to fix just the modified lines in a file. It also determines which
    files we attempt to fix, so it is important to compute this even when
    --whole is used.
    """
    # The --base flag overrides the usual logic, and we give every revision
    # exactly the set of baserevs that the user specified.
    if opts.get('base'):
        baserevs = set(scmutil.revrange(repo, opts.get('base')))
        if not baserevs:
            baserevs = {nullrev}
        basectxs = {repo[rev] for rev in baserevs}
        return {rev: basectxs for rev in revstofix}

    # Proceed in topological order so that we can easily determine each
    # revision's baserevs by looking at its parents and their baserevs.
    basectxs = collections.defaultdict(set)
    for rev in sorted(revstofix):
        ctx = repo[rev]
        for pctx in ctx.parents():
            if pctx.rev() in basectxs:
                basectxs[rev].update(basectxs[pctx.rev()])
            else:
                basectxs[rev].add(pctx)
    return basectxs

def fixfile(ui, opts, fixers, fixctx, path, basectxs):
    """Run any configured fixers that should affect the file in this context

    Returns the file content that results from applying the fixers in some order
    starting with the file's content in the fixctx. Fixers that support line
    ranges will affect lines that have changed relative to any of the basectxs
    (i.e. they will only avoid lines that are common to all basectxs).

    A fixer tool's stdout will become the file's new content if and only if it
    exits with code zero.
    """
    newdata = fixctx[path].data()
    for fixername, fixer in fixers.iteritems():
        if fixer.affects(opts, fixctx, path):
            rangesfn = lambda: lineranges(opts, path, basectxs, fixctx, newdata)
            command = fixer.command(ui, path, rangesfn)
            if command is None:
                continue
            ui.debug('subprocess: %s\n' % (command,))
            proc = subprocess.Popen(
                procutil.tonativestr(command),
                shell=True,
                cwd=procutil.tonativestr(b'/'),
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE)
            newerdata, stderr = proc.communicate(newdata)
            if stderr:
                showstderr(ui, fixctx.rev(), fixername, stderr)
            if proc.returncode == 0:
                newdata = newerdata
            else:
                if not stderr:
                    message = _('exited with status %d\n') % (proc.returncode,)
                    showstderr(ui, fixctx.rev(), fixername, message)
                checktoolfailureaction(
                    ui, _('no fixes will be applied'),
                    hint=_('use --config fix.failure=continue to apply any '
                           'successful fixes anyway'))
    return newdata

def showstderr(ui, rev, fixername, stderr):
    """Writes the lines of the stderr string as warnings on the ui

    Uses the revision number and fixername to give more context to each line of
    the error message. Doesn't include file names, since those take up a lot of
    space and would tend to be included in the error message if they were
    relevant.
    """
    for line in re.split('[\r\n]+', stderr):
        if line:
            ui.warn(('['))
            if rev is None:
                ui.warn(_('wdir'), label='evolve.rev')
            else:
                ui.warn((str(rev)), label='evolve.rev')
            ui.warn(('] %s: %s\n') % (fixername, line))

def writeworkingdir(repo, ctx, filedata, replacements):
    """Write new content to the working copy and check out the new p1 if any

    We check out a new revision if and only if we fixed something in both the
    working directory and its parent revision. This avoids the need for a full
    update/merge, and means that the working directory simply isn't affected
    unless the --working-dir flag is given.

    Directly updates the dirstate for the affected files.
    """
    for path, data in filedata.iteritems():
        fctx = ctx[path]
        fctx.write(data, fctx.flags())
        if repo.dirstate[path] == 'n':
            repo.dirstate.normallookup(path)

    oldparentnodes = repo.dirstate.parents()
    newparentnodes = [replacements.get(n, n) for n in oldparentnodes]
    if newparentnodes != oldparentnodes:
        repo.setparents(*newparentnodes)

def replacerev(ui, repo, ctx, filedata, replacements):
    """Commit a new revision like the given one, but with file content changes

    "ctx" is the original revision to be replaced by a modified one.

    "filedata" is a dict that maps paths to their new file content. All other
    paths will be recreated from the original revision without changes.
    "filedata" may contain paths that didn't exist in the original revision;
    they will be added.

    "replacements" is a dict that maps a single node to a single node, and it is
    updated to indicate the original revision is replaced by the newly created
    one. No entry is added if the replacement's node already exists.

    The new revision has the same parents as the old one, unless those parents
    have already been replaced, in which case those replacements are the parents
    of this new revision. Thus, if revisions are replaced in topological order,
    there is no need to rebase them into the original topology later.
    """

    p1rev, p2rev = repo.changelog.parentrevs(ctx.rev())
    p1ctx, p2ctx = repo[p1rev], repo[p2rev]
    newp1node = replacements.get(p1ctx.node(), p1ctx.node())
    newp2node = replacements.get(p2ctx.node(), p2ctx.node())

    # We don't want to create a revision that has no changes from the original,
    # but we should if the original revision's parent has been replaced.
    # Otherwise, we would produce an orphan that needs no actual human
    # intervention to evolve. We can't rely on commit() to avoid creating the
    # un-needed revision because the extra field added below produces a new hash
    # regardless of file content changes.
    if (not filedata and
        p1ctx.node() not in replacements and
        p2ctx.node() not in replacements):
        return

    def filectxfn(repo, memctx, path):
        if path not in ctx:
            return None
        fctx = ctx[path]
        copysource = fctx.copysource()
        return context.memfilectx(
            repo,
            memctx,
            path=fctx.path(),
            data=filedata.get(path, fctx.data()),
            islink=fctx.islink(),
            isexec=fctx.isexec(),
            copysource=copysource)

    extra = ctx.extra().copy()
    extra['fix_source'] = ctx.hex()

    memctx = context.memctx(
        repo,
        parents=(newp1node, newp2node),
        text=ctx.description(),
        files=set(ctx.files()) | set(filedata.keys()),
        filectxfn=filectxfn,
        user=ctx.user(),
        date=ctx.date(),
        extra=extra,
        branch=ctx.branch(),
        editor=None)
    sucnode = memctx.commit()
    prenode = ctx.node()
    if prenode == sucnode:
        ui.debug('node %s already existed\n' % (ctx.hex()))
    else:
        replacements[ctx.node()] = sucnode

def getfixers(ui):
    """Returns a map of configured fixer tools indexed by their names

    Each value is a Fixer object with methods that implement the behavior of the
    fixer's config suboptions. Does not validate the config values.
    """
    fixers = {}
    for name in fixernames(ui):
        fixers[name] = Fixer()
        attrs = ui.configsuboptions('fix', name)[1]
        if 'fileset' in attrs and 'pattern' not in attrs:
            ui.warn(_('the fix.tool:fileset config name is deprecated; '
                      'please rename it to fix.tool:pattern\n'))
            attrs['pattern'] = attrs['fileset']
        for key, default in FIXER_ATTRS.items():
            setattr(fixers[name], pycompat.sysstr('_' + key),
                    attrs.get(key, default))
        fixers[name]._priority = int(fixers[name]._priority)
    return collections.OrderedDict(
        sorted(fixers.items(), key=lambda item: item[1]._priority,
               reverse=True))

def fixernames(ui):
    """Returns the names of [fix] config options that have suboptions"""
    names = set()
    for k, v in ui.configitems('fix'):
        if ':' in k:
            names.add(k.split(':', 1)[0])
    return names

class Fixer(object):
    """Wraps the raw config values for a fixer with methods"""

    def affects(self, opts, fixctx, path):
        """Should this fixer run on the file at the given path and context?"""
        return scmutil.match(fixctx, [self._pattern], opts)(path)

    def command(self, ui, path, rangesfn):
        """A shell command to use to invoke this fixer on the given file/lines

        May return None if there is no appropriate command to run for the given
        parameters.
        """
        expand = cmdutil.rendercommandtemplate
        parts = [expand(ui, self._command,
                        {'rootpath': path, 'basename': os.path.basename(path)})]
        if self._linerange:
            ranges = rangesfn()
            if not ranges:
                # No line ranges to fix, so don't run the fixer.
                return None
            for first, last in ranges:
                parts.append(expand(ui, self._linerange,
                                    {'first': first, 'last': last}))
        return ' '.join(parts)
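Fixer.command() above assembles the shell invocation from the `[fix]` config suboptions: one expansion of the `:command` template, then one `:linerange` expansion per changed range. A minimal standalone sketch of that assembly, substituting `str.format` for Mercurial's command template engine (an assumption for illustration; the function name is hypothetical):

```python
import os

def build_fixer_command(command_tmpl, linerange_tmpl, path, ranges):
    """Sketch of Fixer.command(): expand the :command template once,
    append one :linerange expansion per (first, last) range, and
    return None when a line-range-aware tool has nothing to fix."""
    parts = [command_tmpl.format(rootpath=path,
                                 basename=os.path.basename(path))]
    if linerange_tmpl:
        if not ranges:
            # No changed lines, so the tool should not run at all.
            return None
        for first, last in ranges:
            parts.append(linerange_tmpl.format(first=first, last=last))
    return ' '.join(parts)

cmd = build_fixer_command(
    'clang-format --assume-filename={rootpath}',
    '--lines={first}:{last}',
    'src/foo.cpp',
    [(1, 10), (30, 30)])
print(cmd)
# clang-format --assume-filename=src/foo.cpp --lines=1:10 --lines=30:30
```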