memctx: rename constructor argument "copied" to "copysource" (API)
Martin von Zweigbergk - r42161:550a172a default
# fix - rewrite file content in changesets and working copy
#
# Copyright 2018 Google LLC.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""rewrite file content in changesets or working copy (EXPERIMENTAL)

Provides a command that runs configured tools on the contents of modified files,
writing back any fixes to the working copy or replacing changesets.

Here is an example configuration that causes :hg:`fix` to apply automatic
formatting fixes to modified lines in C++ code::

  [fix]
  clang-format:command=clang-format --assume-filename={rootpath}
  clang-format:linerange=--lines={first}:{last}
  clang-format:pattern=set:**.cpp or **.hpp

The :command suboption forms the first part of the shell command that will be
used to fix a file. The content of the file is passed on standard input, and the
fixed file content is expected on standard output. Any output on standard error
will be displayed as a warning. If the exit status is not zero, the file will
not be affected. A placeholder warning is displayed if there is a non-zero exit
status but no standard error output. Some values may be substituted into the
command::

  {rootpath}  The path of the file being fixed, relative to the repo root
  {basename}  The name of the file being fixed, without the directory path

If the :linerange suboption is set, the tool will only be run if there are
changed lines in a file. The value of this suboption is appended to the shell
command once for every range of changed lines in the file. Some values may be
substituted into the command::

  {first}   The 1-based line number of the first line in the modified range
  {last}    The 1-based line number of the last line in the modified range

The :pattern suboption determines which files will be passed through each
configured tool. See :hg:`help patterns` for possible values. If there are file
arguments to :hg:`fix`, the intersection of these patterns is used.

There is also a configurable limit for the maximum size of file that will be
processed by :hg:`fix`::

  [fix]
  maxfilesize = 2MB

Normally, execution of configured tools will continue after a failure (indicated
by a non-zero exit status). It can also be configured to abort after the first
such failure, so that no files will be affected if any tool fails. This abort
will also cause :hg:`fix` to exit with a non-zero status::

  [fix]
  failure = abort

When multiple tools are configured to affect a file, they execute in an order
defined by the :priority suboption. The priority suboption has a default value
of zero for each tool. Tools are executed in order of descending priority. The
execution order of tools with equal priority is unspecified. For example, you
could use the 'sort' and 'head' utilities to keep only the 10 smallest numbers
in a text file by ensuring that 'sort' runs before 'head'::

  [fix]
  sort:command = sort -n
  head:command = head -n 10
  sort:pattern = numbers.txt
  head:pattern = numbers.txt
  sort:priority = 2
  head:priority = 1

To account for changes made by each tool, the line numbers used for incremental
formatting are recomputed before executing the next tool. So, each tool may see
different values for the arguments added by the :linerange suboption.
"""
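The {first}/{last} substitution described above can be sketched in plain Python. This is an illustration only: `buildcommand` and its arguments are hypothetical names, not part of this extension (the real code works on byte strings and shell-quotes paths before substituting).

```python
def buildcommand(command, linerange, ranges):
    """Append one formatted :linerange argument per changed line range."""
    parts = [command]
    for first, last in ranges:
        # Each changed range contributes one copy of the :linerange suboption.
        parts.append(linerange.format(first=first, last=last))
    return ' '.join(parts)

print(buildcommand('clang-format --assume-filename=src/main.cpp',
                   '--lines={first}:{last}', [(10, 20), (30, 40)]))
# clang-format --assume-filename=src/main.cpp --lines=10:20 --lines=30:40
```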

from __future__ import absolute_import

import collections
import itertools
import os
import re
import subprocess

from mercurial.i18n import _
from mercurial.node import nullrev
from mercurial.node import wdirrev

from mercurial.utils import (
    procutil,
)

from mercurial import (
    cmdutil,
    context,
    copies,
    error,
    mdiff,
    merge,
    obsolete,
    pycompat,
    registrar,
    scmutil,
    util,
    worker,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

# Register the suboptions allowed for each configured fixer.
FIXER_ATTRS = {
    'command': None,
    'linerange': None,
    'fileset': None,
    'pattern': None,
    'priority': 0,
}

for key, default in FIXER_ATTRS.items():
    configitem('fix', '.*(:%s)?' % key, default=default, generic=True)

# A good default size allows most source code files to be fixed, but avoids
# letting fixer tools choke on huge inputs, which could be surprising to the
# user.
configitem('fix', 'maxfilesize', default='2MB')

# Allow fix commands to exit non-zero if an executed fixer tool exits non-zero.
# This helps users do shell scripts that stop when a fixer tool signals a
# problem.
configitem('fix', 'failure', default='continue')

def checktoolfailureaction(ui, message, hint=None):
    """Abort with 'message' if fix.failure=abort"""
    action = ui.config('fix', 'failure')
    if action not in ('continue', 'abort'):
        raise error.Abort(_('unknown fix.failure action: %s') % (action,),
                          hint=_('use "continue" or "abort"'))
    if action == 'abort':
        raise error.Abort(message, hint=hint)

allopt = ('', 'all', False, _('fix all non-public non-obsolete revisions'))
baseopt = ('', 'base', [], _('revisions to diff against (overrides automatic '
                             'selection, and applies to every revision being '
                             'fixed)'), _('REV'))
revopt = ('r', 'rev', [], _('revisions to fix'), _('REV'))
wdiropt = ('w', 'working-dir', False, _('fix the working directory'))
wholeopt = ('', 'whole', False, _('always fix every line of a file'))
usage = _('[OPTION]... [FILE]...')

@command('fix', [allopt, baseopt, revopt, wdiropt, wholeopt], usage,
        helpcategory=command.CATEGORY_FILE_CONTENTS)
def fix(ui, repo, *pats, **opts):
    """rewrite file content in changesets or working directory

    Runs any configured tools to fix the content of files. Only affects files
    with changes, unless file arguments are provided. Only affects changed lines
    of files, unless the --whole flag is used. Some tools may always affect the
    whole file regardless of --whole.

    If revisions are specified with --rev, those revisions will be checked, and
    they may be replaced with new revisions that have fixed file content. It is
    desirable to specify all descendants of each specified revision, so that the
    fixes propagate to the descendants. If all descendants are fixed at the same
    time, no merging, rebasing, or evolution will be required.

    If --working-dir is used, files with uncommitted changes in the working copy
    will be fixed. If the checked-out revision is also fixed, the working
    directory will update to the replacement revision.

    When determining what lines of each file to fix at each revision, the whole
    set of revisions being fixed is considered, so that fixes to earlier
    revisions are not forgotten in later ones. The --base flag can be used to
    override this default behavior, though it is not usually desirable to do so.
    """
    opts = pycompat.byteskwargs(opts)
    if opts['all']:
        if opts['rev']:
            raise error.Abort(_('cannot specify both "--rev" and "--all"'))
        opts['rev'] = ['not public() and not obsolete()']
        opts['working_dir'] = True
    with repo.wlock(), repo.lock(), repo.transaction('fix'):
        revstofix = getrevstofix(ui, repo, opts)
        basectxs = getbasectxs(repo, opts, revstofix)
        workqueue, numitems = getworkqueue(ui, repo, pats, opts, revstofix,
                                           basectxs)
        fixers = getfixers(ui)

        # There are no data dependencies between the workers fixing each file
        # revision, so we can use all available parallelism.
        def getfixes(items):
            for rev, path in items:
                ctx = repo[rev]
                olddata = ctx[path].data()
                newdata = fixfile(ui, opts, fixers, ctx, path, basectxs[rev])
                # Don't waste memory/time passing unchanged content back, but
                # produce one result per item either way.
                yield (rev, path, newdata if newdata != olddata else None)
        results = worker.worker(ui, 1.0, getfixes, tuple(), workqueue,
                                threadsafe=False)

        # We have to hold on to the data for each successor revision in memory
        # until all its parents are committed. We ensure this by committing and
        # freeing memory for the revisions in some topological order. This
        # leaves a little bit of memory efficiency on the table, but also makes
        # the tests deterministic. It might also be considered a feature since
        # it makes the results more easily reproducible.
        filedata = collections.defaultdict(dict)
        replacements = {}
        wdirwritten = False
        commitorder = sorted(revstofix, reverse=True)
        with ui.makeprogress(topic=_('fixing'), unit=_('files'),
                             total=sum(numitems.values())) as progress:
            for rev, path, newdata in results:
                progress.increment(item=path)
                if newdata is not None:
                    filedata[rev][path] = newdata
                numitems[rev] -= 1
                # Apply the fixes for this and any other revisions that are
                # ready and sitting at the front of the queue. Using a loop here
                # prevents the queue from being blocked by the first revision to
                # be ready out of order.
                while commitorder and not numitems[commitorder[-1]]:
                    rev = commitorder.pop()
                    ctx = repo[rev]
                    if rev == wdirrev:
                        writeworkingdir(repo, ctx, filedata[rev], replacements)
                        wdirwritten = bool(filedata[rev])
                    else:
                        replacerev(ui, repo, ctx, filedata[rev], replacements)
                    del filedata[rev]

        cleanup(repo, replacements, wdirwritten)

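The commit-ordering loop above can be simulated standalone. In this sketch, `commitsequence` is a hypothetical helper (not part of this extension) that models the same mechanism: `commitorder` is sorted in reverse so `.pop()` yields the smallest pending revision, and a revision is committed only once its `numitems` counter reaches zero.

```python
import collections

def commitsequence(events):
    """events: revision numbers in the order their work items complete."""
    numitems = collections.Counter(events)
    commitorder = sorted(numitems, reverse=True)
    committed = []
    for rev in events:
        numitems[rev] -= 1
        # Drain every revision that is now ready at the front of the queue, so
        # one out-of-order completion cannot block the ones behind it.
        while commitorder and not numitems[commitorder[-1]]:
            committed.append(commitorder.pop())
    return committed

# Rev 2's items finish before rev 1's, yet commits still happen in ascending
# revision order.
print(commitsequence([2, 1, 2, 1]))  # [1, 2]
```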
def cleanup(repo, replacements, wdirwritten):
    """Calls scmutil.cleanupnodes() with the given replacements.

    "replacements" is a dict from nodeid to nodeid, with one key and one value
    for every revision that was affected by fixing. This is slightly different
    from cleanupnodes().

    "wdirwritten" is a bool which tells whether the working copy was affected by
    fixing, since it has no entry in "replacements".

    Useful as a hook point for extending "hg fix" with output summarizing the
    effects of the command, though we choose not to output anything here.
    """
    replacements = {prec: [succ] for prec, succ in replacements.iteritems()}
    scmutil.cleanupnodes(repo, replacements, 'fix', fixphase=True)

def getworkqueue(ui, repo, pats, opts, revstofix, basectxs):
    """Constructs the list of files to be fixed at specific revisions

    It is up to the caller how to consume the work items, and the only
    dependence between them is that replacement revisions must be committed in
    topological order. Each work item represents a file in the working copy or
    in some revision that should be fixed and written back to the working copy
    or into a replacement revision.

    Work items for the same revision are grouped together, so that a worker
    pool starting with the first N items in parallel is likely to finish the
    first revision's work before other revisions. This can allow us to write
    the result to disk and reduce memory footprint. At time of writing, the
    partition strategy in worker.py seems favorable to this. We also sort the
    items by ascending revision number to match the order in which we commit
    the fixes later.
    """
    workqueue = []
    numitems = collections.defaultdict(int)
    maxfilesize = ui.configbytes('fix', 'maxfilesize')
    for rev in sorted(revstofix):
        fixctx = repo[rev]
        match = scmutil.match(fixctx, pats, opts)
        for path in pathstofix(ui, repo, pats, opts, match, basectxs[rev],
                               fixctx):
            if path not in fixctx:
                continue
            fctx = fixctx[path]
            if fctx.islink():
                continue
            if fctx.size() > maxfilesize:
                ui.warn(_('ignoring file larger than %s: %s\n') %
                        (util.bytecount(maxfilesize), path))
                continue
            workqueue.append((rev, path))
            numitems[rev] += 1
    return workqueue, numitems

def getrevstofix(ui, repo, opts):
    """Returns the set of revision numbers that should be fixed"""
    revs = set(scmutil.revrange(repo, opts['rev']))
    for rev in revs:
        checkfixablectx(ui, repo, repo[rev])
    if revs:
        cmdutil.checkunfinished(repo)
        checknodescendants(repo, revs)
    if opts.get('working_dir'):
        revs.add(wdirrev)
        if list(merge.mergestate.read(repo).unresolved()):
            raise error.Abort('unresolved conflicts', hint="use 'hg resolve'")
    if not revs:
        raise error.Abort(
            'no changesets specified', hint='use --rev or --working-dir')
    return revs

def checknodescendants(repo, revs):
    if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
        repo.revs('(%ld::) - (%ld)', revs, revs)):
        raise error.Abort(_('can only fix a changeset together '
                            'with all its descendants'))

def checkfixablectx(ui, repo, ctx):
    """Aborts if the revision shouldn't be replaced with a fixed one."""
    if not ctx.mutable():
        raise error.Abort('can\'t fix immutable changeset %s' %
                          (scmutil.formatchangeid(ctx),))
    if ctx.obsolete():
        # It would be better to actually check if the revision has a successor.
        allowdivergence = ui.configbool('experimental',
                                        'evolution.allowdivergence')
        if not allowdivergence:
            raise error.Abort('fixing obsolete revision could cause divergence')

def pathstofix(ui, repo, pats, opts, match, basectxs, fixctx):
    """Returns the set of files that should be fixed in a context

    The result depends on the base contexts; we include any file that has
    changed relative to any of the base contexts. Base contexts should be
    ancestors of the context being fixed.
    """
    files = set()
    for basectx in basectxs:
        stat = basectx.status(fixctx, match=match, listclean=bool(pats),
                              listunknown=bool(pats))
        files.update(
            set(itertools.chain(stat.added, stat.modified, stat.clean,
                                stat.unknown)))
    return files

def lineranges(opts, path, basectxs, fixctx, content2):
    """Returns the set of line ranges that should be fixed in a file

    Of the form [(10, 20), (30, 40)].

    This depends on the given base contexts; we must consider lines that have
    changed versus any of the base contexts, and whether the file has been
    renamed versus any of them.

    Another way to understand this is that we exclude line ranges that are
    common to the file in all base contexts.
    """
    if opts.get('whole'):
        # Return a range containing all lines. Rely on the diff implementation's
        # idea of how many lines are in the file, instead of reimplementing it.
        return difflineranges('', content2)

    rangeslist = []
    for basectx in basectxs:
        basepath = copies.pathcopies(basectx, fixctx).get(path, path)
        if basepath in basectx:
            content1 = basectx[basepath].data()
        else:
            content1 = ''
        rangeslist.extend(difflineranges(content1, content2))
    return unionranges(rangeslist)

def unionranges(rangeslist):
    """Return the union of some closed intervals

    >>> unionranges([])
    []
    >>> unionranges([(1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (2, 100)])
    [(1, 100)]
    >>> unionranges([(1, 99), (1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (40, 60)])
    [(1, 100)]
    >>> unionranges([(1, 49), (50, 100)])
    [(1, 100)]
    >>> unionranges([(1, 48), (50, 100)])
    [(1, 48), (50, 100)]
    >>> unionranges([(1, 2), (3, 4), (5, 6)])
    [(1, 6)]
    """
    rangeslist = sorted(set(rangeslist))
    unioned = []
    if rangeslist:
        unioned, rangeslist = [rangeslist[0]], rangeslist[1:]
    for a, b in rangeslist:
        c, d = unioned[-1]
        if a > d + 1:
            unioned.append((a, b))
        else:
            unioned[-1] = (c, max(b, d))
    return unioned

410 def difflineranges(content1, content2):
410 def difflineranges(content1, content2):
411 """Return list of line number ranges in content2 that differ from content1.
411 """Return list of line number ranges in content2 that differ from content1.
412
412
413 Line numbers are 1-based. The numbers are the first and last line contained
413 Line numbers are 1-based. The numbers are the first and last line contained
414 in the range. Single-line ranges have the same line number for the first and
414 in the range. Single-line ranges have the same line number for the first and
415 last line. Excludes any empty ranges that result from lines that are only
415 last line. Excludes any empty ranges that result from lines that are only
416 present in content1. Relies on mdiff's idea of where the line endings are in
416 present in content1. Relies on mdiff's idea of where the line endings are in
417 the string.
417 the string.
418
418
419 >>> from mercurial import pycompat
419 >>> from mercurial import pycompat
420 >>> lines = lambda s: b'\\n'.join([c for c in pycompat.iterbytestr(s)])
420 >>> lines = lambda s: b'\\n'.join([c for c in pycompat.iterbytestr(s)])
421 >>> difflineranges2 = lambda a, b: difflineranges(lines(a), lines(b))
421 >>> difflineranges2 = lambda a, b: difflineranges(lines(a), lines(b))
422 >>> difflineranges2(b'', b'')
422 >>> difflineranges2(b'', b'')
423 []
423 []
424 >>> difflineranges2(b'a', b'')
424 >>> difflineranges2(b'a', b'')
425 []
425 []
426 >>> difflineranges2(b'', b'A')
426 >>> difflineranges2(b'', b'A')
427 [(1, 1)]
427 [(1, 1)]
428 >>> difflineranges2(b'a', b'a')
428 >>> difflineranges2(b'a', b'a')
429 []
429 []
430 >>> difflineranges2(b'a', b'A')
430 >>> difflineranges2(b'a', b'A')
431 [(1, 1)]
431 [(1, 1)]
432 >>> difflineranges2(b'ab', b'')
432 >>> difflineranges2(b'ab', b'')
433 []
433 []
434 >>> difflineranges2(b'', b'AB')
434 >>> difflineranges2(b'', b'AB')
435 [(1, 2)]
435 [(1, 2)]
436 >>> difflineranges2(b'abc', b'ac')
436 >>> difflineranges2(b'abc', b'ac')
437 []
437 []
438 >>> difflineranges2(b'ab', b'aCb')
438 >>> difflineranges2(b'ab', b'aCb')
439 [(2, 2)]
439 [(2, 2)]
440 >>> difflineranges2(b'abc', b'aBc')
440 >>> difflineranges2(b'abc', b'aBc')
441 [(2, 2)]
441 [(2, 2)]
442 >>> difflineranges2(b'ab', b'AB')
442 >>> difflineranges2(b'ab', b'AB')
443 [(1, 2)]
443 [(1, 2)]
444 >>> difflineranges2(b'abcde', b'aBcDe')
444 >>> difflineranges2(b'abcde', b'aBcDe')
445 [(2, 2), (4, 4)]
445 [(2, 2), (4, 4)]
446 >>> difflineranges2(b'abcde', b'aBCDe')
446 >>> difflineranges2(b'abcde', b'aBCDe')
447 [(2, 4)]
447 [(2, 4)]
448 """
448 """
449 ranges = []
449 ranges = []
450 for lines, kind in mdiff.allblocks(content1, content2):
450 for lines, kind in mdiff.allblocks(content1, content2):
451 firstline, lastline = lines[2:4]
451 firstline, lastline = lines[2:4]
452 if kind == '!' and firstline != lastline:
452 if kind == '!' and firstline != lastline:
453 ranges.append((firstline + 1, lastline))
453 ranges.append((firstline + 1, lastline))
454 return ranges
454 return ranges
455
455
456 def getbasectxs(repo, opts, revstofix):
456 def getbasectxs(repo, opts, revstofix):
457 """Returns a map of the base contexts for each revision
457 """Returns a map of the base contexts for each revision
458
458
459 The base contexts determine which lines are considered modified when we
459 The base contexts determine which lines are considered modified when we
460 attempt to fix just the modified lines in a file. It also determines which
460 attempt to fix just the modified lines in a file. It also determines which
461 files we attempt to fix, so it is important to compute this even when
461 files we attempt to fix, so it is important to compute this even when
462 --whole is used.
462 --whole is used.
463 """
463 """
464 # The --base flag overrides the usual logic, and we give every revision
464 # The --base flag overrides the usual logic, and we give every revision
465 # exactly the set of baserevs that the user specified.
465 # exactly the set of baserevs that the user specified.
466 if opts.get('base'):
466 if opts.get('base'):
467 baserevs = set(scmutil.revrange(repo, opts.get('base')))
467 baserevs = set(scmutil.revrange(repo, opts.get('base')))
468 if not baserevs:
468 if not baserevs:
469 baserevs = {nullrev}
469 baserevs = {nullrev}
470 basectxs = {repo[rev] for rev in baserevs}
470 basectxs = {repo[rev] for rev in baserevs}
471 return {rev: basectxs for rev in revstofix}
471 return {rev: basectxs for rev in revstofix}
472
472
473 # Proceed in topological order so that we can easily determine each
473 # Proceed in topological order so that we can easily determine each
474 # revision's baserevs by looking at its parents and their baserevs.
474 # revision's baserevs by looking at its parents and their baserevs.
475 basectxs = collections.defaultdict(set)
475 basectxs = collections.defaultdict(set)
476 for rev in sorted(revstofix):
476 for rev in sorted(revstofix):
477 ctx = repo[rev]
477 ctx = repo[rev]
478 for pctx in ctx.parents():
478 for pctx in ctx.parents():
479 if pctx.rev() in basectxs:
479 if pctx.rev() in basectxs:
480 basectxs[rev].update(basectxs[pctx.rev()])
480 basectxs[rev].update(basectxs[pctx.rev()])
481 else:
481 else:
482 basectxs[rev].add(pctx)
482 basectxs[rev].add(pctx)
483 return basectxs
483 return basectxs
484
484
485 def fixfile(ui, opts, fixers, fixctx, path, basectxs):
485 def fixfile(ui, opts, fixers, fixctx, path, basectxs):
486 """Run any configured fixers that should affect the file in this context
486 """Run any configured fixers that should affect the file in this context
487
487
488 Returns the file content that results from applying the fixers in some order
488 Returns the file content that results from applying the fixers in some order
489 starting with the file's content in the fixctx. Fixers that support line
489 starting with the file's content in the fixctx. Fixers that support line
490 ranges will affect lines that have changed relative to any of the basectxs
490 ranges will affect lines that have changed relative to any of the basectxs
491 (i.e. they will only avoid lines that are common to all basectxs).
491 (i.e. they will only avoid lines that are common to all basectxs).
492
492
493 A fixer tool's stdout will become the file's new content if and only if it
493 A fixer tool's stdout will become the file's new content if and only if it
494 exits with code zero.
494 exits with code zero.
495 """
495 """
496 newdata = fixctx[path].data()
496 newdata = fixctx[path].data()
497 for fixername, fixer in fixers.iteritems():
497 for fixername, fixer in fixers.iteritems():
498 if fixer.affects(opts, fixctx, path):
498 if fixer.affects(opts, fixctx, path):
499 rangesfn = lambda: lineranges(opts, path, basectxs, fixctx, newdata)
499 rangesfn = lambda: lineranges(opts, path, basectxs, fixctx, newdata)
500 command = fixer.command(ui, path, rangesfn)
500 command = fixer.command(ui, path, rangesfn)
501 if command is None:
501 if command is None:
502 continue
502 continue
503 ui.debug('subprocess: %s\n' % (command,))
503 ui.debug('subprocess: %s\n' % (command,))
504 proc = subprocess.Popen(
504 proc = subprocess.Popen(
505 procutil.tonativestr(command),
505 procutil.tonativestr(command),
506 shell=True,
506 shell=True,
507 cwd=procutil.tonativestr(b'/'),
507 cwd=procutil.tonativestr(b'/'),
508 stdin=subprocess.PIPE,
508 stdin=subprocess.PIPE,
509 stdout=subprocess.PIPE,
509 stdout=subprocess.PIPE,
510 stderr=subprocess.PIPE)
510 stderr=subprocess.PIPE)
511 newerdata, stderr = proc.communicate(newdata)
511 newerdata, stderr = proc.communicate(newdata)
512 if stderr:
512 if stderr:
513 showstderr(ui, fixctx.rev(), fixername, stderr)
513 showstderr(ui, fixctx.rev(), fixername, stderr)
514 if proc.returncode == 0:
514 if proc.returncode == 0:
515 newdata = newerdata
515 newdata = newerdata
516 else:
516 else:
517 if not stderr:
517 if not stderr:
518 message = _('exited with status %d\n') % (proc.returncode,)
518 message = _('exited with status %d\n') % (proc.returncode,)
519 showstderr(ui, fixctx.rev(), fixername, message)
519 showstderr(ui, fixctx.rev(), fixername, message)
520 checktoolfailureaction(
520 checktoolfailureaction(
521 ui, _('no fixes will be applied'),
521 ui, _('no fixes will be applied'),
522 hint=_('use --config fix.failure=continue to apply any '
522 hint=_('use --config fix.failure=continue to apply any '
523 'successful fixes anyway'))
523 'successful fixes anyway'))
524 return newdata
524 return newdata
525
525
526 def showstderr(ui, rev, fixername, stderr):
526 def showstderr(ui, rev, fixername, stderr):
527 """Writes the lines of the stderr string as warnings on the ui
527 """Writes the lines of the stderr string as warnings on the ui
528
528
529 Uses the revision number and fixername to give more context to each line of
529 Uses the revision number and fixername to give more context to each line of
530 the error message. Doesn't include file names, since those take up a lot of
530 the error message. Doesn't include file names, since those take up a lot of
531 space and would tend to be included in the error message if they were
531 space and would tend to be included in the error message if they were
532 relevant.
532 relevant.
533 """
533 """
534 for line in re.split('[\r\n]+', stderr):
534 for line in re.split('[\r\n]+', stderr):
535 if line:
535 if line:
536 ui.warn(('['))
536 ui.warn(('['))
537 if rev is None:
537 if rev is None:
538 ui.warn(_('wdir'), label='evolve.rev')
538 ui.warn(_('wdir'), label='evolve.rev')
539 else:
539 else:
540 ui.warn((str(rev)), label='evolve.rev')
540 ui.warn((str(rev)), label='evolve.rev')
541 ui.warn(('] %s: %s\n') % (fixername, line))
541 ui.warn(('] %s: %s\n') % (fixername, line))
542
542
543 def writeworkingdir(repo, ctx, filedata, replacements):
543 def writeworkingdir(repo, ctx, filedata, replacements):
544 """Write new content to the working copy and check out the new p1 if any
544 """Write new content to the working copy and check out the new p1 if any
545
545
546 We check out a new revision if and only if we fixed something in both the
546 We check out a new revision if and only if we fixed something in both the
547 working directory and its parent revision. This avoids the need for a full
547 working directory and its parent revision. This avoids the need for a full
548 update/merge, and means that the working directory simply isn't affected
548 update/merge, and means that the working directory simply isn't affected
549 unless the --working-dir flag is given.
549 unless the --working-dir flag is given.
550
550
551 Directly updates the dirstate for the affected files.
551 Directly updates the dirstate for the affected files.
552 """
552 """
553 for path, data in filedata.iteritems():
553 for path, data in filedata.iteritems():
554 fctx = ctx[path]
554 fctx = ctx[path]
555 fctx.write(data, fctx.flags())
555 fctx.write(data, fctx.flags())
556 if repo.dirstate[path] == 'n':
556 if repo.dirstate[path] == 'n':
557 repo.dirstate.normallookup(path)
557 repo.dirstate.normallookup(path)
558
558
559 oldparentnodes = repo.dirstate.parents()
559 oldparentnodes = repo.dirstate.parents()
560 newparentnodes = [replacements.get(n, n) for n in oldparentnodes]
560 newparentnodes = [replacements.get(n, n) for n in oldparentnodes]
561 if newparentnodes != oldparentnodes:
561 if newparentnodes != oldparentnodes:
562 repo.setparents(*newparentnodes)
562 repo.setparents(*newparentnodes)
563
563
564 def replacerev(ui, repo, ctx, filedata, replacements):
564 def replacerev(ui, repo, ctx, filedata, replacements):
565 """Commit a new revision like the given one, but with file content changes
565 """Commit a new revision like the given one, but with file content changes
566
566
567 "ctx" is the original revision to be replaced by a modified one.
567 "ctx" is the original revision to be replaced by a modified one.
568
568
569 "filedata" is a dict that maps paths to their new file content. All other
569 "filedata" is a dict that maps paths to their new file content. All other
570 paths will be recreated from the original revision without changes.
570 paths will be recreated from the original revision without changes.
571 "filedata" may contain paths that didn't exist in the original revision;
571 "filedata" may contain paths that didn't exist in the original revision;
572 they will be added.
572 they will be added.
573
573
574 "replacements" is a dict that maps a single node to a single node, and it is
574 "replacements" is a dict that maps a single node to a single node, and it is
575 updated to indicate the original revision is replaced by the newly created
575 updated to indicate the original revision is replaced by the newly created
576 one. No entry is added if the replacement's node already exists.
576 one. No entry is added if the replacement's node already exists.
577
577
578 The new revision has the same parents as the old one, unless those parents
578 The new revision has the same parents as the old one, unless those parents
579 have already been replaced, in which case those replacements are the parents
579 have already been replaced, in which case those replacements are the parents
580 of this new revision. Thus, if revisions are replaced in topological order,
580 of this new revision. Thus, if revisions are replaced in topological order,
581 there is no need to rebase them into the original topology later.
581 there is no need to rebase them into the original topology later.
582 """
582 """
583
583
584 p1rev, p2rev = repo.changelog.parentrevs(ctx.rev())
584 p1rev, p2rev = repo.changelog.parentrevs(ctx.rev())
585 p1ctx, p2ctx = repo[p1rev], repo[p2rev]
585 p1ctx, p2ctx = repo[p1rev], repo[p2rev]
586 newp1node = replacements.get(p1ctx.node(), p1ctx.node())
586 newp1node = replacements.get(p1ctx.node(), p1ctx.node())
587 newp2node = replacements.get(p2ctx.node(), p2ctx.node())
587 newp2node = replacements.get(p2ctx.node(), p2ctx.node())
588
588
589 # We don't want to create a revision that has no changes from the original,
589 # We don't want to create a revision that has no changes from the original,
590 # but we should if the original revision's parent has been replaced.
590 # but we should if the original revision's parent has been replaced.
591 # Otherwise, we would produce an orphan that needs no actual human
591 # Otherwise, we would produce an orphan that needs no actual human
592 # intervention to evolve. We can't rely on commit() to avoid creating the
592 # intervention to evolve. We can't rely on commit() to avoid creating the
593 # un-needed revision because the extra field added below produces a new hash
593 # un-needed revision because the extra field added below produces a new hash
594 # regardless of file content changes.
594 # regardless of file content changes.
595 if (not filedata and
595 if (not filedata and
596 p1ctx.node() not in replacements and
596 p1ctx.node() not in replacements and
597 p2ctx.node() not in replacements):
597 p2ctx.node() not in replacements):
598 return
598 return
599
599
600 def filectxfn(repo, memctx, path):
600 def filectxfn(repo, memctx, path):
601 if path not in ctx:
601 if path not in ctx:
602 return None
602 return None
603 fctx = ctx[path]
603 fctx = ctx[path]
604 copied = fctx.copysource()
604 copysource = fctx.copysource()
605 return context.memfilectx(
605 return context.memfilectx(
606 repo,
606 repo,
607 memctx,
607 memctx,
608 path=fctx.path(),
608 path=fctx.path(),
609 data=filedata.get(path, fctx.data()),
609 data=filedata.get(path, fctx.data()),
610 islink=fctx.islink(),
610 islink=fctx.islink(),
611 isexec=fctx.isexec(),
611 isexec=fctx.isexec(),
612 copied=copied)
612 copysource=copysource)
613
613
614 extra = ctx.extra().copy()
614 extra = ctx.extra().copy()
615 extra['fix_source'] = ctx.hex()
615 extra['fix_source'] = ctx.hex()
616
616
617 memctx = context.memctx(
617 memctx = context.memctx(
618 repo,
618 repo,
619 parents=(newp1node, newp2node),
619 parents=(newp1node, newp2node),
620 text=ctx.description(),
620 text=ctx.description(),
621 files=set(ctx.files()) | set(filedata.keys()),
621 files=set(ctx.files()) | set(filedata.keys()),
622 filectxfn=filectxfn,
622 filectxfn=filectxfn,
623 user=ctx.user(),
623 user=ctx.user(),
624 date=ctx.date(),
624 date=ctx.date(),
625 extra=extra,
625 extra=extra,
626 branch=ctx.branch(),
626 branch=ctx.branch(),
627 editor=None)
627 editor=None)
628 sucnode = memctx.commit()
628 sucnode = memctx.commit()
629 prenode = ctx.node()
629 prenode = ctx.node()
630 if prenode == sucnode:
630 if prenode == sucnode:
631 ui.debug('node %s already existed\n' % (ctx.hex()))
631 ui.debug('node %s already existed\n' % (ctx.hex()))
632 else:
632 else:
633 replacements[ctx.node()] = sucnode
633 replacements[ctx.node()] = sucnode
634
634
635 def getfixers(ui):
635 def getfixers(ui):
636 """Returns a map of configured fixer tools indexed by their names
636 """Returns a map of configured fixer tools indexed by their names
637
637
638 Each value is a Fixer object with methods that implement the behavior of the
638 Each value is a Fixer object with methods that implement the behavior of the
639 fixer's config suboptions. Does not validate the config values.
639 fixer's config suboptions. Does not validate the config values.
640 """
640 """
641 fixers = {}
641 fixers = {}
642 for name in fixernames(ui):
642 for name in fixernames(ui):
643 fixers[name] = Fixer()
643 fixers[name] = Fixer()
644 attrs = ui.configsuboptions('fix', name)[1]
644 attrs = ui.configsuboptions('fix', name)[1]
645 if 'fileset' in attrs and 'pattern' not in attrs:
645 if 'fileset' in attrs and 'pattern' not in attrs:
646 ui.warn(_('the fix.tool:fileset config name is deprecated; '
646 ui.warn(_('the fix.tool:fileset config name is deprecated; '
647 'please rename it to fix.tool:pattern\n'))
647 'please rename it to fix.tool:pattern\n'))
648 attrs['pattern'] = attrs['fileset']
648 attrs['pattern'] = attrs['fileset']
649 for key, default in FIXER_ATTRS.items():
649 for key, default in FIXER_ATTRS.items():
650 setattr(fixers[name], pycompat.sysstr('_' + key),
650 setattr(fixers[name], pycompat.sysstr('_' + key),
651 attrs.get(key, default))
651 attrs.get(key, default))
652 fixers[name]._priority = int(fixers[name]._priority)
652 fixers[name]._priority = int(fixers[name]._priority)
653 return collections.OrderedDict(
653 return collections.OrderedDict(
654 sorted(fixers.items(), key=lambda item: item[1]._priority,
654 sorted(fixers.items(), key=lambda item: item[1]._priority,
655 reverse=True))
655 reverse=True))
656
656
657 def fixernames(ui):
657 def fixernames(ui):
658 """Returns the names of [fix] config options that have suboptions"""
658 """Returns the names of [fix] config options that have suboptions"""
659 names = set()
659 names = set()
660 for k, v in ui.configitems('fix'):
660 for k, v in ui.configitems('fix'):
661 if ':' in k:
661 if ':' in k:
662 names.add(k.split(':', 1)[0])
662 names.add(k.split(':', 1)[0])
663 return names
663 return names
664
664
665 class Fixer(object):
665 class Fixer(object):
666 """Wraps the raw config values for a fixer with methods"""
666 """Wraps the raw config values for a fixer with methods"""
667
667
668 def affects(self, opts, fixctx, path):
668 def affects(self, opts, fixctx, path):
669 """Should this fixer run on the file at the given path and context?"""
669 """Should this fixer run on the file at the given path and context?"""
670 return scmutil.match(fixctx, [self._pattern], opts)(path)
670 return scmutil.match(fixctx, [self._pattern], opts)(path)
671
671
672 def command(self, ui, path, rangesfn):
672 def command(self, ui, path, rangesfn):
673 """A shell command to use to invoke this fixer on the given file/lines
673 """A shell command to use to invoke this fixer on the given file/lines
674
674
675 May return None if there is no appropriate command to run for the given
675 May return None if there is no appropriate command to run for the given
676 parameters.
676 parameters.
677 """
677 """
678 expand = cmdutil.rendercommandtemplate
678 expand = cmdutil.rendercommandtemplate
679 parts = [expand(ui, self._command,
679 parts = [expand(ui, self._command,
680 {'rootpath': path, 'basename': os.path.basename(path)})]
680 {'rootpath': path, 'basename': os.path.basename(path)})]
681 if self._linerange:
681 if self._linerange:
682 ranges = rangesfn()
682 ranges = rangesfn()
683 if not ranges:
683 if not ranges:
684 # No line ranges to fix, so don't run the fixer.
684 # No line ranges to fix, so don't run the fixer.
685 return None
685 return None
686 for first, last in ranges:
686 for first, last in ranges:
687 parts.append(expand(ui, self._linerange,
687 parts.append(expand(ui, self._linerange,
688 {'first': first, 'last': last}))
688 {'first': first, 'last': last}))
689 return ' '.join(parts)
689 return ' '.join(parts)
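The closed-interval merging done by `unionranges` above is self-contained and easy to verify outside Mercurial. The sketch below (the name `union_closed_ranges` is ours, not Mercurial's) mirrors its rule: because the intervals are closed line ranges, overlapping *or adjacent* intervals such as `(1, 49)` and `(50, 100)` collapse into one range, while a gap of at least one line keeps them apart:

```python
def union_closed_ranges(rangeslist):
    """Union of closed integer intervals; adjacent intervals are merged."""
    rangeslist = sorted(set(rangeslist))
    unioned = []
    if rangeslist:
        unioned, rangeslist = [rangeslist[0]], rangeslist[1:]
    for a, b in rangeslist:
        c, d = unioned[-1]
        if a > d + 1:
            # At least one whole line separates the intervals: keep both.
            unioned.append((a, b))
        else:
            # Overlapping or adjacent: extend the previous interval.
            unioned[-1] = (c, max(b, d))
    return unioned

print(union_closed_ranges([(1, 49), (50, 100)]))  # adjacent, merged: [(1, 100)]
print(union_closed_ranges([(1, 48), (50, 100)]))  # gap at line 49: [(1, 48), (50, 100)]
```

Sorting first means a single left-to-right pass suffices, since each new interval can only extend or follow the most recently emitted one.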
@@ -1,2272 +1,2272
1 # histedit.py - interactive history editing for mercurial
1 # histedit.py - interactive history editing for mercurial
2 #
2 #
3 # Copyright 2009 Augie Fackler <raf@durin42.com>
3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 """interactive history editing
7 """interactive history editing
8
8
9 With this extension installed, Mercurial gains one new command: histedit. Usage
9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 is as follows, assuming the following history::
10 is as follows, assuming the following history::
11
11
12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 | Add delta
13 | Add delta
14 |
14 |
15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 | Add gamma
16 | Add gamma
17 |
17 |
18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 | Add beta
19 | Add beta
20 |
20 |
21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 Add alpha
22 Add alpha
23
23
24 If you were to run ``hg histedit c561b4e977df``, you would see the following
24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 file open in your editor::
25 file open in your editor::
26
26
27 pick c561b4e977df Add beta
27 pick c561b4e977df Add beta
28 pick 030b686bedc4 Add gamma
28 pick 030b686bedc4 Add gamma
29 pick 7c2fd3b9020c Add delta
29 pick 7c2fd3b9020c Add delta
30
30
31 # Edit history between c561b4e977df and 7c2fd3b9020c
31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 #
32 #
33 # Commits are listed from least to most recent
33 # Commits are listed from least to most recent
34 #
34 #
35 # Commands:
35 # Commands:
36 # p, pick = use commit
36 # p, pick = use commit
37 # e, edit = use commit, but stop for amending
37 # e, edit = use commit, but stop for amending
38 # f, fold = use commit, but combine it with the one above
38 # f, fold = use commit, but combine it with the one above
39 # r, roll = like fold, but discard this commit's description and date
39 # r, roll = like fold, but discard this commit's description and date
40 # d, drop = remove commit from history
40 # d, drop = remove commit from history
41 # m, mess = edit commit message without changing commit content
41 # m, mess = edit commit message without changing commit content
42 # b, base = checkout changeset and apply further changesets from there
42 # b, base = checkout changeset and apply further changesets from there
43 #
43 #
44
44
45 In this file, lines beginning with ``#`` are ignored. You must specify a rule
45 In this file, lines beginning with ``#`` are ignored. You must specify a rule
46 for each revision in your history. For example, if you had meant to add gamma
46 for each revision in your history. For example, if you had meant to add gamma
47 before beta, and then wanted to add delta in the same revision as beta, you
47 before beta, and then wanted to add delta in the same revision as beta, you
48 would reorganize the file to look like this::
48 would reorganize the file to look like this::
49
49
50 pick 030b686bedc4 Add gamma
50 pick 030b686bedc4 Add gamma
51 pick c561b4e977df Add beta
51 pick c561b4e977df Add beta
52 fold 7c2fd3b9020c Add delta
52 fold 7c2fd3b9020c Add delta
53
53
54 # Edit history between c561b4e977df and 7c2fd3b9020c
54 # Edit history between c561b4e977df and 7c2fd3b9020c
55 #
55 #
56 # Commits are listed from least to most recent
56 # Commits are listed from least to most recent
57 #
57 #
58 # Commands:
58 # Commands:
59 # p, pick = use commit
59 # p, pick = use commit
60 # e, edit = use commit, but stop for amending
60 # e, edit = use commit, but stop for amending
61 # f, fold = use commit, but combine it with the one above
61 # f, fold = use commit, but combine it with the one above
62 # r, roll = like fold, but discard this commit's description and date
62 # r, roll = like fold, but discard this commit's description and date
63 # d, drop = remove commit from history
63 # d, drop = remove commit from history
64 # m, mess = edit commit message without changing commit content
64 # m, mess = edit commit message without changing commit content
65 # b, base = checkout changeset and apply further changesets from there
65 # b, base = checkout changeset and apply further changesets from there
66 #
66 #
67
67
68 At which point you close the editor and ``histedit`` starts working. When you
68 At which point you close the editor and ``histedit`` starts working. When you
69 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
69 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
70 those revisions together, offering you a chance to clean up the commit message::
70 those revisions together, offering you a chance to clean up the commit message::
71
71
72 Add beta
72 Add beta
73 ***
73 ***
74 Add delta
74 Add delta
75
75
76 Edit the commit message to your liking, then close the editor. The date used
76 Edit the commit message to your liking, then close the editor. The date used
77 for the commit will be the later of the two commits' dates. For this example,
77 for the commit will be the later of the two commits' dates. For this example,
78 let's assume that the commit message was changed to ``Add beta and delta.``
78 let's assume that the commit message was changed to ``Add beta and delta.``
79 After histedit has run and had a chance to remove any old or temporary
79 After histedit has run and had a chance to remove any old or temporary
80 revisions it needed, the history looks like this::
80 revisions it needed, the history looks like this::
81
81
82 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
82 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
83 | Add beta and delta.
83 | Add beta and delta.
84 |
84 |
85 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
85 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
86 | Add gamma
86 | Add gamma
87 |
87 |
88 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
88 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
89 Add alpha
89 Add alpha
90
90
91 Note that ``histedit`` does *not* remove any revisions (even its own temporary
91 Note that ``histedit`` does *not* remove any revisions (even its own temporary
92 ones) until after it has completed all the editing operations, so it will
92 ones) until after it has completed all the editing operations, so it will
93 probably perform several strip operations when it's done. For the above example,
93 probably perform several strip operations when it's done. For the above example,
94 it had to run strip twice. Strip can be slow depending on a variety of factors,
94 it had to run strip twice. Strip can be slow depending on a variety of factors,
95 so you might need to be a little patient. You can choose to keep the original
95 so you might need to be a little patient. You can choose to keep the original
96 revisions by passing the ``--keep`` flag.
96 revisions by passing the ``--keep`` flag.
97
97
98 The ``edit`` operation will drop you back to a command prompt,
98 The ``edit`` operation will drop you back to a command prompt,
99 allowing you to edit files freely, or even use ``hg record`` to commit
99 allowing you to edit files freely, or even use ``hg record`` to commit
100 some changes as a separate commit. When you're done, any remaining
100 some changes as a separate commit. When you're done, any remaining
101 uncommitted changes will be committed as well. When done, run ``hg
101 uncommitted changes will be committed as well. When done, run ``hg
102 histedit --continue`` to finish this step. If there are uncommitted
102 histedit --continue`` to finish this step. If there are uncommitted
103 changes, you'll be prompted for a new commit message, but the default
103 changes, you'll be prompted for a new commit message, but the default
104 commit message will be the original message for the ``edit`` ed
104 commit message will be the original message for the ``edit`` ed
revision, and the date of the original commit will be preserved.

The ``message`` operation will give you a chance to revise a commit
message without changing the contents. It's a shortcut for doing
``edit`` immediately followed by ``hg histedit --continue``.

If ``histedit`` encounters a conflict when moving a revision (while
handling ``pick`` or ``fold``), it'll stop in a similar manner to
``edit`` with the difference that it won't prompt you for a commit
message when done. If you decide at this point that you don't like how
much work it will be to rearrange history, or that you made a mistake,
you can use ``hg histedit --abort`` to abandon the new changes you
have made and return to the state before you attempted to edit your
history.

If we clone the histedit-ed example repository above and add four more
changes, such that we have the following history::

 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
 | Add theta
 |
 o 5 140988835471 2009-04-27 18:04 -0500 stefan
 | Add eta
 |
 o 4 122930637314 2009-04-27 18:04 -0500 stefan
 | Add zeta
 |
 o 3 836302820282 2009-04-27 18:04 -0500 stefan
 | Add epsilon
 |
 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
 | Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

If you run ``hg histedit --outgoing`` on the clone then it is the same
as running ``hg histedit 836302820282``. If you plan to push to a
repository that Mercurial does not detect to be related to the source
repo, you can add a ``--force`` option.

Config
------

Histedit rule lines are truncated to 80 characters by default. You
can customize this behavior by setting a different length in your
configuration file::

  [histedit]
  linelen = 120 # truncate rule lines at 120 characters

The summary of a change can be customized as well::

  [histedit]
  summary-template = '{rev} {bookmarks} {desc|firstline}'

The customized summary should be kept short enough that rule lines
will fit in the configured line length. See above if that requires
customization.
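With the template above, each rule line shown in the editor combines
the action verb, the changeset, and the rendered summary. A line might
look like this (hash and bookmark are illustrative)::

  pick 208f358de04e 6 mybookmark Add theta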

``hg histedit`` attempts to automatically choose an appropriate base
revision to use. To change which base revision is used, define a
revset in your configuration file::

  [histedit]
  defaultrev = only(.) & draft()

By default each edited revision needs to be present in histedit commands.
To remove a revision you need to use the ``drop`` operation. You can
configure the drop to be implicit for missing commits by adding::

  [histedit]
  dropmissing = True

By default, histedit will close the transaction after each action. For
performance purposes, you can configure histedit to use a single transaction
across the entire histedit. WARNING: This setting introduces a significant risk
of losing the work you've done in a histedit if the histedit aborts
unexpectedly::

  [histedit]
  singletransaction = True

"""

from __future__ import absolute_import

# chistedit dependencies that are not available everywhere
try:
    import fcntl
    import termios
except ImportError:
    fcntl = None
    termios = None

import functools
import os
import struct

from mercurial.i18n import _
from mercurial import (
    bundle2,
    cmdutil,
    context,
    copies,
    destutil,
    discovery,
    error,
    exchange,
    extensions,
    hg,
    logcmdutil,
    merge as mergemod,
    mergeutil,
    node,
    obsolete,
    pycompat,
    registrar,
    repair,
    scmutil,
    state as statemod,
    util,
)
from mercurial.utils import (
    dateutil,
    stringutil,
)

pickle = util.pickle
cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)
configitem('experimental', 'histedit.autoverb',
           default=False,
)
configitem('histedit', 'defaultrev',
           default=None,
)
configitem('histedit', 'dropmissing',
           default=False,
)
configitem('histedit', 'linelen',
           default=80,
)
configitem('histedit', 'singletransaction',
           default=False,
)
configitem('ui', 'interface.histedit',
           default=None,
)
configitem('histedit', 'summary-template',
           default='{rev} {desc|firstline}')

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

actiontable = {}
primaryactions = set()
secondaryactions = set()
tertiaryactions = set()
internalactions = set()

def geteditcomment(ui, first, last):
    """ construct the editor comment
    The comment includes::
     - an intro
     - sorted primary commands
     - sorted short commands
     - sorted long commands
     - additional hints

    Commands are only included once.
    """
    intro = _("""Edit history between %s and %s

Commits are listed from least to most recent

You can reorder changesets by reordering the lines

Commands:
""")
    actions = []
    def addverb(v):
        a = actiontable[v]
        lines = a.message.split("\n")
        if len(a.verbs):
            v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
        actions.append(" %s = %s" % (v, lines[0]))
        actions.extend(['  %s' % l for l in lines[1:]])

    for v in (
            sorted(primaryactions) +
            sorted(secondaryactions) +
            sorted(tertiaryactions)
    ):
        addverb(v)
    actions.append('')

    hints = []
    if ui.configbool('histedit', 'dropmissing'):
        hints.append("Deleting a changeset from the list "
                     "will DISCARD it from the edited history!")

    lines = (intro % (first, last)).split('\n') + actions + hints

    return ''.join(['# %s\n' % l if l else '#\n' for l in lines])
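
# For illustration only: with the standard actions registered, the comment
# returned above looks roughly like the sketch below (hashes and wording
# are illustrative; the exact text comes from the registered actions):
#
#   # Edit history between c561b4e977df and 7c2fd3b9020c
#   #
#   # Commits are listed from least to most recent
#   #
#   # You can reorder changesets by reordering the lines
#   #
#   # Commands:
#   #
#   #  p, pick = use commit
#   #  e, edit = use commit, but stop for amending
#   ...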

class histeditstate(object):
    def __init__(self, repo):
        self.repo = repo
        self.actions = None
        self.keep = None
        self.topmost = None
        self.parentctxnode = None
        self.lock = None
        self.wlock = None
        self.backupfile = None
        self.stateobj = statemod.cmdstate(repo, 'histedit-state')
        self.replacements = []

    def read(self):
        """Load histedit state from disk and set fields appropriately."""
        if not self.stateobj.exists():
            cmdutil.wrongtooltocontinue(self.repo, _('histedit'))

        data = self._read()

        self.parentctxnode = data['parentctxnode']
        actions = parserules(data['rules'], self)
        self.actions = actions
        self.keep = data['keep']
        self.topmost = data['topmost']
        self.replacements = data['replacements']
        self.backupfile = data['backupfile']

    def _read(self):
        fp = self.repo.vfs.read('histedit-state')
        if fp.startswith('v1\n'):
            data = self._load()
            parentctxnode, rules, keep, topmost, replacements, backupfile = data
        else:
            data = pickle.loads(fp)
            parentctxnode, rules, keep, topmost, replacements = data
            backupfile = None
        rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])

        return {'parentctxnode': parentctxnode, "rules": rules, "keep": keep,
                "topmost": topmost, "replacements": replacements,
                "backupfile": backupfile}

    def write(self, tr=None):
        if tr:
            tr.addfilegenerator('histedit-state', ('histedit-state',),
                                self._write, location='plain')
        else:
            with self.repo.vfs("histedit-state", "w") as f:
                self._write(f)

    def _write(self, fp):
        fp.write('v1\n')
        fp.write('%s\n' % node.hex(self.parentctxnode))
        fp.write('%s\n' % node.hex(self.topmost))
        fp.write('%s\n' % ('True' if self.keep else 'False'))
        fp.write('%d\n' % len(self.actions))
        for action in self.actions:
            fp.write('%s\n' % action.tostate())
        fp.write('%d\n' % len(self.replacements))
        for replacement in self.replacements:
            fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
                     for r in replacement[1])))
        backupfile = self.backupfile
        if not backupfile:
            backupfile = ''
        fp.write('%s\n' % backupfile)

    def _load(self):
        fp = self.repo.vfs('histedit-state', 'r')
        lines = [l[:-1] for l in fp.readlines()]

        index = 0
        lines[index] # version number
        index += 1

        parentctxnode = node.bin(lines[index])
        index += 1

        topmost = node.bin(lines[index])
        index += 1

        keep = lines[index] == 'True'
        index += 1

        # Rules
        rules = []
        rulelen = int(lines[index])
        index += 1
        for i in pycompat.xrange(rulelen):
            ruleaction = lines[index]
            index += 1
            rule = lines[index]
            index += 1
            rules.append((ruleaction, rule))

        # Replacements
        replacements = []
        replacementlen = int(lines[index])
        index += 1
        for i in pycompat.xrange(replacementlen):
            replacement = lines[index]
            original = node.bin(replacement[:40])
            succ = [node.bin(replacement[i:i + 40]) for i in
                    range(40, len(replacement), 40)]
            replacements.append((original, succ))
            index += 1

        backupfile = lines[index]
        index += 1

        fp.close()

        return parentctxnode, rules, keep, topmost, replacements, backupfile

    def clear(self):
        if self.inprogress():
            self.repo.vfs.unlink('histedit-state')

    def inprogress(self):
        return self.repo.vfs.exists('histedit-state')
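
# For reference, the v1 'histedit-state' file produced by _write above is
# line-oriented; its layout is sketched here (nodes are full 40-hex hashes):
#
#   v1
#   <parentctxnode hex>
#   <topmost hex>
#   True or False                 (the "keep" flag)
#   <number of actions>
#   <verb>                        (two lines per action:
#   <node hex>                     verb, then node)
#   <number of replacements>
#   <old node hex><successor hexes, concatenated>
#   <backup file name, possibly empty>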


class histeditaction(object):
    def __init__(self, state, node):
        self.state = state
        self.repo = state.repo
        self.node = node

    @classmethod
    def fromrule(cls, state, rule):
        """Parses the given rule, returning an instance of the histeditaction.
        """
        ruleid = rule.strip().split(' ', 1)[0]
        # ruleid can be anything from rev numbers, hashes, "bookmarks" etc
        # Check for validation of rule ids and get the rulehash
        try:
            rev = node.bin(ruleid)
        except TypeError:
            try:
                _ctx = scmutil.revsingle(state.repo, ruleid)
                rulehash = _ctx.hex()
                rev = node.bin(rulehash)
            except error.RepoLookupError:
                raise error.ParseError(_("invalid changeset %s") % ruleid)
        return cls(state, rev)

    def verify(self, prev, expected, seen):
        """ Verifies semantic correctness of the rule"""
        repo = self.repo
        ha = node.hex(self.node)
        self.node = scmutil.resolvehexnodeidprefix(repo, ha)
        if self.node is None:
            raise error.ParseError(_('unknown changeset %s listed') % ha[:12])
        self._verifynodeconstraints(prev, expected, seen)

    def _verifynodeconstraints(self, prev, expected, seen):
        # by default commands need a node in the edited list
        if self.node not in expected:
            raise error.ParseError(_('%s "%s" changeset was not a candidate')
                                   % (self.verb, node.short(self.node)),
                                   hint=_('only use listed changesets'))
        # and only one command per node
        if self.node in seen:
            raise error.ParseError(_('duplicated command for changeset %s') %
                                   node.short(self.node))

    def torule(self):
        """build a histedit rule line for an action

        by default lines are in the form:
          <hash> <rev> <summary>
        """
        ctx = self.repo[self.node]
        ui = self.repo.ui
        summary = cmdutil.rendertemplate(
            ctx, ui.config('histedit', 'summary-template')) or ''
        summary = summary.splitlines()[0]
        line = '%s %s %s' % (self.verb, ctx, summary)
        # trim to 75 columns by default so it's not stupidly wide in my editor
        # (the 5 more are left for verb)
        maxlen = self.repo.ui.configint('histedit', 'linelen')
        maxlen = max(maxlen, 22) # avoid truncating hash
        return stringutil.ellipsis(line, maxlen)

    def tostate(self):
        """Print an action in format used by histedit state files
        (the first line is a verb, the remainder is the second)
        """
        return "%s\n%s" % (self.verb, node.hex(self.node))

    def run(self):
        """Runs the action. The default behavior is simply apply the action's
        rulectx onto the current parentctx."""
        self.applychange()
        self.continuedirty()
        return self.continueclean()

    def applychange(self):
        """Applies the changes from this action's rulectx onto the current
        parentctx, but does not commit them."""
        repo = self.repo
        rulectx = repo[self.node]
        repo.ui.pushbuffer(error=True, labeled=True)
        hg.update(repo, self.state.parentctxnode, quietempty=True)
        stats = applychanges(repo.ui, repo, rulectx, {})
        repo.dirstate.setbranch(rulectx.branch())
        if stats.unresolvedcount:
            buf = repo.ui.popbuffer()
            repo.ui.write(buf)
            raise error.InterventionRequired(
                _('Fix up the change (%s %s)') %
                (self.verb, node.short(self.node)),
                hint=_('hg histedit --continue to resume'))
        else:
            repo.ui.popbuffer()

    def continuedirty(self):
        """Continues the action when changes have been applied to the working
        copy. The default behavior is to commit the dirty changes."""
        repo = self.repo
        rulectx = repo[self.node]

        editor = self.commiteditor()
        commit = commitfuncfor(repo, rulectx)
        if repo.ui.configbool('rewrite', 'update-timestamp'):
            date = dateutil.makedate()
        else:
            date = rulectx.date()
        commit(text=rulectx.description(), user=rulectx.user(),
               date=date, extra=rulectx.extra(), editor=editor)

    def commiteditor(self):
        """The editor to be used to edit the commit message."""
        return False

    def continueclean(self):
        """Continues the action when the working copy is clean. The default
        behavior is to accept the current commit as the new version of the
        rulectx."""
        ctx = self.repo['.']
        if ctx.node() == self.state.parentctxnode:
            self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
                              node.short(self.node))
            return ctx, [(self.node, tuple())]
        if ctx.node() == self.node:
            # Nothing changed
            return ctx, []
        return ctx, [(self.node, (ctx.node(),))]

def commitfuncfor(repo, src):
    """Build a commit function for the replacement of <src>

    This function ensures we apply the same treatment to all changesets.

    - Add a 'histedit_source' entry in extra.

    Note that fold has its own separate logic because its handling is a bit
    different and not easily factored out of the fold method.
    """
    phasemin = src.phase()
    def commitfunc(**kwargs):
        overrides = {('phases', 'new-commit'): phasemin}
        with repo.ui.configoverride(overrides, 'histedit'):
            extra = kwargs.get(r'extra', {}).copy()
            extra['histedit_source'] = src.hex()
            kwargs[r'extra'] = extra
            return repo.commit(**kwargs)
    return commitfunc

def applychanges(ui, repo, ctx, opts):
    """Merge changeset from ctx (only) in the current working directory"""
    wcpar = repo.dirstate.p1()
    if ctx.p1().node() == wcpar:
        # edits are "in place"; we do not need to make any merge,
        # just apply changes on the parent for editing
        cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
        stats = mergemod.updateresult(0, 0, 0, 0)
    else:
        try:
            # ui.forcemerge is an internal variable, do not document
            repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                              'histedit')
            stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
        finally:
            repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
    return stats
607
607
608 def collapse(repo, firstctx, lastctx, commitopts, skipprompt=False):
608 def collapse(repo, firstctx, lastctx, commitopts, skipprompt=False):
609 """collapse the set of revisions from first to last as new one.
609 """collapse the set of revisions from first to last as new one.
610
610
611 Expected commit options are:
611 Expected commit options are:
612 - message
612 - message
613 - date
613 - date
614 - username
614 - username
615 Commit message is edited in all cases.
615 Commit message is edited in all cases.
616
616
617 This function works in memory."""
617 This function works in memory."""
    ctxs = list(repo.set('%d::%d', firstctx.rev(), lastctx.rev()))
    if not ctxs:
        return None
    for c in ctxs:
        if not c.mutable():
            raise error.ParseError(
                _("cannot fold into public change %s") % node.short(c.node()))
    base = firstctx.p1()

    # commit a new version of the old changeset, including the update
    # collect all files which might be affected
    files = set()
    for ctx in ctxs:
        files.update(ctx.files())

    # Recompute copies (avoid recording a -> b -> a)
    copied = copies.pathcopies(base, lastctx)

    # prune files which were reverted by the updates
    files = [f for f in files if not cmdutil.samefile(f, lastctx, base)]
    # commit version of these files as defined by head
    headmf = lastctx.manifest()
    def filectxfn(repo, ctx, path):
        if path in headmf:
            fctx = lastctx[path]
            flags = fctx.flags()
            mctx = context.memfilectx(repo, ctx,
                                      fctx.path(), fctx.data(),
                                      islink='l' in flags,
                                      isexec='x' in flags,
                                      copysource=copied.get(path))
            return mctx
        return None

    if commitopts.get('message'):
        message = commitopts['message']
    else:
        message = firstctx.description()
    user = commitopts.get('user')
    date = commitopts.get('date')
    extra = commitopts.get('extra')

    parents = (firstctx.p1().node(), firstctx.p2().node())
    editor = None
    if not skipprompt:
        editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
    new = context.memctx(repo,
                         parents=parents,
                         text=message,
                         files=files,
                         filectxfn=filectxfn,
                         user=user,
                         date=date,
                         extra=extra,
                         editor=editor)
    return repo.commitctx(new)

def _isdirtywc(repo):
    return repo[None].dirty(missing=True)

def abortdirty():
    raise error.Abort(_('working copy has pending changes'),
                      hint=_('amend, commit, or revert them and run histedit '
                             '--continue, or abort with histedit --abort'))

def action(verbs, message, priority=False, internal=False):
    def wrap(cls):
        assert not priority or not internal
        verb = verbs[0]
        if priority:
            primaryactions.add(verb)
        elif internal:
            internalactions.add(verb)
        elif len(verbs) > 1:
            secondaryactions.add(verb)
        else:
            tertiaryactions.add(verb)

        cls.verb = verb
        cls.verbs = verbs
        cls.message = message
        for verb in verbs:
            actiontable[verb] = cls
        return cls
    return wrap

@action(['pick', 'p'],
        _('use commit'),
        priority=True)
class pick(histeditaction):
    def run(self):
        rulectx = self.repo[self.node]
        if rulectx.p1().node() == self.state.parentctxnode:
            self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
            return rulectx, []

        return super(pick, self).run()

@action(['edit', 'e'],
        _('use commit, but stop for amending'),
        priority=True)
class edit(histeditaction):
    def run(self):
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode, quietempty=True)
        applychanges(repo.ui, repo, rulectx, {})
        raise error.InterventionRequired(
            _('Editing (%s), you may commit or record as needed now.')
            % node.short(self.node),
            hint=_('hg histedit --continue to resume'))

    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')

@action(['fold', 'f'],
        _('use commit, but combine it with the one above'))
class fold(histeditaction):
    def verify(self, prev, expected, seen):
        """Verifies semantic correctness of the fold rule"""
        super(fold, self).verify(prev, expected, seen)
        repo = self.repo
        if not prev:
            c = repo[self.node].p1()
        elif prev.verb not in ('pick', 'base'):
            return
        else:
            c = repo[prev.node]
        if not c.mutable():
            raise error.ParseError(
                _("cannot fold into public change %s") % node.short(c.node()))

    def continuedirty(self):
        repo = self.repo
        rulectx = repo[self.node]

        commit = commitfuncfor(repo, rulectx)
        commit(text='fold-temp-revision %s' % node.short(self.node),
               user=rulectx.user(), date=rulectx.date(),
               extra=rulectx.extra())

    def continueclean(self):
        repo = self.repo
        ctx = repo['.']
        rulectx = repo[self.node]
        parentctxnode = self.state.parentctxnode
        if ctx.node() == parentctxnode:
            repo.ui.warn(_('%s: empty changeset\n') %
                         node.short(self.node))
            return ctx, [(self.node, (parentctxnode,))]

        parentctx = repo[parentctxnode]
        newcommits = set(c.node() for c in repo.set('(%d::. - %d)',
                                                    parentctx.rev(),
                                                    parentctx.rev()))
        if not newcommits:
            repo.ui.warn(_('%s: cannot fold - working copy is not a '
                           'descendant of previous commit %s\n') %
                         (node.short(self.node), node.short(parentctxnode)))
            return ctx, [(self.node, (ctx.node(),))]

        middlecommits = newcommits.copy()
        middlecommits.discard(ctx.node())

        return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
                               middlecommits)

    def skipprompt(self):
        """Returns true if the rule should skip the message editor.

        For example, 'fold' wants to show an editor, but 'rollup'
        doesn't want to.
        """
        return False

    def mergedescs(self):
        """Returns true if the rule should merge messages of multiple changes.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return True

    def firstdate(self):
        """Returns true if the rule should preserve the date of the first
        change.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return False

    def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
        parent = ctx.p1().node()
        hg.updaterepo(repo, parent, overwrite=False)
        ### prepare new commit data
        commitopts = {}
        commitopts['user'] = ctx.user()
        # commit message
        if not self.mergedescs():
            newmessage = ctx.description()
        else:
            newmessage = '\n***\n'.join(
                [ctx.description()] +
                [repo[r].description() for r in internalchanges] +
                [oldctx.description()]) + '\n'
        commitopts['message'] = newmessage
        # date
        if self.firstdate():
            commitopts['date'] = ctx.date()
        else:
            commitopts['date'] = max(ctx.date(), oldctx.date())
        # if date is to be updated to current
        if ui.configbool('rewrite', 'update-timestamp'):
            commitopts['date'] = dateutil.makedate()

        extra = ctx.extra().copy()
        # histedit_source
        # note: ctx is likely a temporary commit but that's the best we can
        # do here. This is sufficient to solve issue3681 anyway.
        extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
        commitopts['extra'] = extra
        phasemin = max(ctx.phase(), oldctx.phase())
        overrides = {('phases', 'new-commit'): phasemin}
        with repo.ui.configoverride(overrides, 'histedit'):
            n = collapse(repo, ctx, repo[newnode], commitopts,
                         skipprompt=self.skipprompt())
        if n is None:
            return ctx, []
        hg.updaterepo(repo, n, overwrite=False)
        replacements = [(oldctx.node(), (newnode,)),
                        (ctx.node(), (n,)),
                        (newnode, (n,)),
                        ]
        for ich in internalchanges:
            replacements.append((ich, (n,)))
        return repo[n], replacements

@action(['base', 'b'],
        _('checkout changeset and apply further changesets from there'))
class base(histeditaction):

    def run(self):
        if self.repo['.'].node() != self.node:
            mergemod.update(self.repo, self.node, branchmerge=False, force=True)
        return self.continueclean()

    def continuedirty(self):
        abortdirty()

    def continueclean(self):
        basectx = self.repo['.']
        return basectx, []

    def _verifynodeconstraints(self, prev, expected, seen):
        # base can only be used with a node not in the edited set
        if self.node in expected:
            msg = _('%s "%s" changeset was an edited list candidate')
            raise error.ParseError(
                msg % (self.verb, node.short(self.node)),
                hint=_('base must only use unlisted changesets'))

@action(['_multifold'],
        _(
    """fold subclass used for when multiple folds happen in a row

    We only want to fire the editor for the folded message once when
    (say) four changes are folded down into a single change. This is
    similar to rollup, but we should preserve both messages so that
    when the last fold operation runs we can show the user all the
    commit messages in their editor.
    """),
        internal=True)
class _multifold(fold):
    def skipprompt(self):
        return True

@action(["roll", "r"],
        _("like fold, but discard this commit's description and date"))
class rollup(fold):
    def mergedescs(self):
        return False

    def skipprompt(self):
        return True

    def firstdate(self):
        return True

@action(["drop", "d"],
        _('remove commit from history'))
class drop(histeditaction):
    def run(self):
        parentctx = self.repo[self.state.parentctxnode]
        return parentctx, [(self.node, tuple())]

@action(["mess", "m"],
        _('edit commit message without changing commit content'),
        priority=True)
class message(histeditaction):
    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')

def findoutgoing(ui, repo, remote=None, force=False, opts=None):
    """utility function to find the first outgoing changeset

    Used by initialization code"""
    if opts is None:
        opts = {}
    dest = ui.expandpath(remote or 'default-push', remote or 'default')
    dest, branches = hg.parseurl(dest, None)[:2]
    ui.status(_('comparing with %s\n') % util.hidepassword(dest))

    revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(rev) for rev in revs]

    outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
    if not outgoing.missing:
        raise error.Abort(_('no outgoing ancestors'))
    roots = list(repo.revs("roots(%ln)", outgoing.missing))
    if len(roots) > 1:
        msg = _('there are ambiguous outgoing revisions')
        hint = _("see 'hg help histedit' for more detail")
        raise error.Abort(msg, hint=hint)
    return repo[roots[0]].node()

# Curses Support
try:
    import curses

    # Curses requires setting the locale or it will default to the C
    # locale. This sets the locale to the user's default system
    # locale.
    import locale
    locale.setlocale(locale.LC_ALL, r'')
except ImportError:
    curses = None

KEY_LIST = ['pick', 'edit', 'fold', 'drop', 'mess', 'roll']
ACTION_LABELS = {
    'fold': '^fold',
    'roll': '^roll',
}

COLOR_HELP, COLOR_SELECTED, COLOR_OK, COLOR_WARN, COLOR_CURRENT = 1, 2, 3, 4, 5

E_QUIT, E_HISTEDIT = 1, 2
E_PAGEDOWN, E_PAGEUP, E_LINEUP, E_LINEDOWN, E_RESIZE = 3, 4, 5, 6, 7
MODE_INIT, MODE_PATCH, MODE_RULES, MODE_HELP = 0, 1, 2, 3

KEYTABLE = {
    'global': {
        'h': 'next-action',
        'KEY_RIGHT': 'next-action',
        'l': 'prev-action',
        'KEY_LEFT': 'prev-action',
        'q': 'quit',
        'c': 'histedit',
        'C': 'histedit',
        'v': 'showpatch',
        '?': 'help',
    },
    MODE_RULES: {
        'd': 'action-drop',
        'e': 'action-edit',
        'f': 'action-fold',
        'm': 'action-mess',
        'p': 'action-pick',
        'r': 'action-roll',
        ' ': 'select',
        'j': 'down',
        'k': 'up',
        'KEY_DOWN': 'down',
        'KEY_UP': 'up',
        'J': 'move-down',
        'K': 'move-up',
        'KEY_NPAGE': 'move-down',
        'KEY_PPAGE': 'move-up',
        '0': 'goto',  # Used for 0..9
    },
    MODE_PATCH: {
        ' ': 'page-down',
        'KEY_NPAGE': 'page-down',
        'KEY_PPAGE': 'page-up',
        'j': 'line-down',
        'k': 'line-up',
        'KEY_DOWN': 'line-down',
        'KEY_UP': 'line-up',
        'J': 'down',
        'K': 'up',
    },
    MODE_HELP: {
    },
}

def screen_size():
    # 'hh' unpacks two shorts, so the ioctl buffer must be four bytes wide.
    return struct.unpack('hh', fcntl.ioctl(1, termios.TIOCGWINSZ, '    '))

class histeditrule(object):
    def __init__(self, ctx, pos, action='pick'):
        self.ctx = ctx
        self.action = action
        self.origpos = pos
        self.pos = pos
        self.conflicts = []

    def __str__(self):
        # Some actions ('fold' and 'roll') combine a patch with a previous one.
        # Add a marker showing which patch they apply to, and also omit the
        # description for 'roll' (since it will get discarded). Example display:
        #
        #  #10 pick   316392:06a16c25c053   add option to skip tests
        #  #11 ^roll  316393:71313c964cc5
        #  #12 pick   316394:ab31f3973b0d   include mfbt for mozilla-config.h
        #  #13 ^fold  316395:14ce5803f4c3   fix warnings
        #
        # The carets point to the changeset being folded into ("roll this
        # changeset into the changeset above").
        action = ACTION_LABELS.get(self.action, self.action)
        h = self.ctx.hex()[0:12]
        r = self.ctx.rev()
        desc = self.ctx.description().splitlines()[0].strip()
        if self.action == 'roll':
            desc = ''
        return "#{0:<2} {1:<6} {2}:{3}   {4}".format(
            self.origpos, action, r, h, desc)

    def checkconflicts(self, other):
        if other.pos > self.pos and other.origpos <= self.origpos:
            if set(other.ctx.files()) & set(self.ctx.files()) != set():
                self.conflicts.append(other)
            return self.conflicts

        if other in self.conflicts:
            self.conflicts.remove(other)
        return self.conflicts

# ============ EVENTS ===============
def movecursor(state, oldpos, newpos):
    '''Change the rule/changeset that the cursor is pointing to, regardless of
    current mode (you can switch between patches from the view patch window).'''
    state['pos'] = newpos

    mode, _ = state['mode']
    if mode == MODE_RULES:
        # Scroll through the list by updating the view for MODE_RULES, so that
        # even if we are not currently viewing the rules, switching back will
        # result in the cursor's rule being visible.
        modestate = state['modes'][MODE_RULES]
        if newpos < modestate['line_offset']:
            modestate['line_offset'] = newpos
        elif newpos > modestate['line_offset'] + state['page_height'] - 1:
            modestate['line_offset'] = newpos - state['page_height'] + 1

    # Reset the patch view region to the top of the new patch.
    state['modes'][MODE_PATCH]['line_offset'] = 0

def changemode(state, mode):
    curmode, _ = state['mode']
    state['mode'] = (mode, curmode)

def makeselection(state, pos):
    state['selected'] = pos

def swap(state, oldpos, newpos):
    """Swap two positions and calculate necessary conflicts in
    O(|newpos-oldpos|) time"""

    rules = state['rules']
    assert 0 <= oldpos < len(rules) and 0 <= newpos < len(rules)

    rules[oldpos], rules[newpos] = rules[newpos], rules[oldpos]

    # TODO: swap should not know about histeditrule's internals
    rules[newpos].pos = newpos
    rules[oldpos].pos = oldpos

    start = min(oldpos, newpos)
    end = max(oldpos, newpos)
    for r in pycompat.xrange(start, end + 1):
        rules[newpos].checkconflicts(rules[r])
        rules[oldpos].checkconflicts(rules[r])

    if state['selected']:
        makeselection(state, newpos)

def changeaction(state, pos, action):
    """Change the action state on the given position to the new action"""
    rules = state['rules']
    assert 0 <= pos < len(rules)
    rules[pos].action = action

def cycleaction(state, pos, next=False):
    """Changes the action state to the next or the previous action from
    the action list"""
    rules = state['rules']
    assert 0 <= pos < len(rules)
    current = rules[pos].action

    assert current in KEY_LIST

    index = KEY_LIST.index(current)
    if next:
        index += 1
    else:
        index -= 1
    changeaction(state, pos, KEY_LIST[index % len(KEY_LIST)])

def changeview(state, delta, unit):
    '''Change the region of whatever is being viewed (a patch or the list of
    changesets). 'delta' is an amount (+/- 1) and 'unit' is 'page' or 'line'.'''
    mode, _ = state['mode']
    if mode != MODE_PATCH:
        return
    mode_state = state['modes'][mode]
    num_lines = len(patchcontents(state))
    page_height = state['page_height']
    unit = page_height if unit == 'page' else 1
    # use floor division so the page count stays an integer under Python 3
    num_pages = 1 + (num_lines - 1) // page_height
    max_offset = (num_pages - 1) * page_height
    newline = mode_state['line_offset'] + delta * unit
    mode_state['line_offset'] = max(0, min(max_offset, newline))

def event(state, ch):
    """Change state based on the current character input

    This takes the current state and, based on the current character input
    from the user, changes the state accordingly.
    """
    selected = state['selected']
    oldpos = state['pos']
    rules = state['rules']

    if ch in (curses.KEY_RESIZE, "KEY_RESIZE"):
        return E_RESIZE

    lookup_ch = ch
    if '0' <= ch <= '9':
        lookup_ch = '0'

    curmode, prevmode = state['mode']
    action = KEYTABLE[curmode].get(lookup_ch, KEYTABLE['global'].get(lookup_ch))
    if action is None:
        return
    if action in ('down', 'move-down'):
        newpos = min(oldpos + 1, len(rules) - 1)
        movecursor(state, oldpos, newpos)
        if selected is not None or action == 'move-down':
            swap(state, oldpos, newpos)
    elif action in ('up', 'move-up'):
        newpos = max(0, oldpos - 1)
        movecursor(state, oldpos, newpos)
        if selected is not None or action == 'move-up':
            swap(state, oldpos, newpos)
    elif action == 'next-action':
        cycleaction(state, oldpos, next=True)
    elif action == 'prev-action':
        cycleaction(state, oldpos, next=False)
    elif action == 'select':
        selected = oldpos if selected is None else None
        makeselection(state, selected)
    elif action == 'goto' and int(ch) < len(rules) and len(rules) <= 10:
        newrule = next((r for r in rules if r.origpos == int(ch)))
        movecursor(state, oldpos, newrule.pos)
        if selected is not None:
            swap(state, oldpos, newrule.pos)
    elif action.startswith('action-'):
        changeaction(state, oldpos, action[7:])
    elif action == 'showpatch':
        changemode(state, MODE_PATCH if curmode != MODE_PATCH else prevmode)
    elif action == 'help':
        changemode(state, MODE_HELP if curmode != MODE_HELP else prevmode)
    elif action == 'quit':
        return E_QUIT
    elif action == 'histedit':
        return E_HISTEDIT
    elif action == 'page-down':
        return E_PAGEDOWN
    elif action == 'page-up':
        return E_PAGEUP
    elif action == 'line-down':
        return E_LINEDOWN
    elif action == 'line-up':
        return E_LINEUP

def makecommands(rules):
    """Returns a list of commands consumable by histedit --commands based on
    our list of rules"""
    commands = []
    for rule in rules:
        commands.append("{0} {1}\n".format(rule.action, rule.ctx))
    return commands

def addln(win, y, x, line, color=None):
    """Add a line to the given window, left-padded but 100% filled with
    whitespace characters, so that the color appears on the whole line"""
    maxy, maxx = win.getmaxyx()
    length = maxx - 1 - x
    line = ("{0:<%d}" % length).format(str(line).strip())[:length]
    if y < 0:
        y = maxy + y
    if x < 0:
        x = maxx + x
    if color:
        win.addstr(y, x, line, color)
    else:
        win.addstr(y, x, line)

def patchcontents(state):
    repo = state['repo']
    rule = state['rules'][state['pos']]
    displayer = logcmdutil.changesetdisplayer(repo.ui, repo, {
        'patch': True, 'verbose': True
    }, buffered=True)
    displayer.show(rule.ctx)
    displayer.close()
    return displayer.hunk[rule.ctx.rev()].splitlines()

def _chisteditmain(repo, rules, stdscr):
    # initialize color pattern
    curses.init_pair(COLOR_HELP, curses.COLOR_WHITE, curses.COLOR_BLUE)
    curses.init_pair(COLOR_SELECTED, curses.COLOR_BLACK, curses.COLOR_WHITE)
    curses.init_pair(COLOR_WARN, curses.COLOR_BLACK, curses.COLOR_YELLOW)
    curses.init_pair(COLOR_OK, curses.COLOR_BLACK, curses.COLOR_GREEN)
    curses.init_pair(COLOR_CURRENT, curses.COLOR_WHITE, curses.COLOR_MAGENTA)

    # don't display the cursor
    try:
        curses.curs_set(0)
    except curses.error:
        pass

    def rendercommit(win, state):
        """Renders the commit window that shows the log of the currently
        selected commit"""
        pos = state['pos']
        rules = state['rules']
        rule = rules[pos]

        ctx = rule.ctx
        win.box()

        maxy, maxx = win.getmaxyx()
        length = maxx - 3

        line = "changeset: {0}:{1:<12}".format(ctx.rev(), ctx)
        win.addstr(1, 1, line[:length])

        line = "user: {0}".format(ctx.user())
        win.addstr(2, 1, line[:length])

        bms = repo.nodebookmarks(ctx.node())
        line = "bookmark: {0}".format(' '.join(bms))
        win.addstr(3, 1, line[:length])

        line = "files: {0}".format(','.join(ctx.files()))
        win.addstr(4, 1, line[:length])

        line = "summary: {0}".format(ctx.description().splitlines()[0])
        win.addstr(5, 1, line[:length])

        conflicts = rule.conflicts
        if len(conflicts) > 0:
            conflictstr = ','.join(map(lambda r: str(r.ctx), conflicts))
            conflictstr = "changed files overlap with {0}".format(conflictstr)
        else:
            conflictstr = 'no overlap'

        win.addstr(6, 1, conflictstr[:length])
        win.noutrefresh()

    def helplines(mode):
        if mode == MODE_PATCH:
            help = """\
?: help, k/up: line up, j/down: line down, v: stop viewing patch
pgup: prev page, space/pgdn: next page, c: commit, q: abort
"""
        else:
            help = """\
?: help, k/up: move up, j/down: move down, space: select, v: view patch
d: drop, e: edit, f: fold, m: mess, p: pick, r: roll
pgup/K: move patch up, pgdn/J: move patch down, c: commit, q: abort
"""
        return help.splitlines()

    def renderhelp(win, state):
        maxy, maxx = win.getmaxyx()
        mode, _ = state['mode']
        for y, line in enumerate(helplines(mode)):
            if y >= maxy:
                break
            addln(win, y, 0, line, curses.color_pair(COLOR_HELP))
        win.noutrefresh()

    def renderrules(rulesscr, state):
        rules = state['rules']
        pos = state['pos']
        selected = state['selected']
        start = state['modes'][MODE_RULES]['line_offset']

        conflicts = [r.ctx for r in rules if r.conflicts]
        if len(conflicts) > 0:
            line = "potential conflict in %s" % ','.join(map(str, conflicts))
            addln(rulesscr, -1, 0, line, curses.color_pair(COLOR_WARN))

        for y, rule in enumerate(rules[start:]):
            if y >= state['page_height']:
                break
            if len(rule.conflicts) > 0:
                rulesscr.addstr(y, 0, " ", curses.color_pair(COLOR_WARN))
            else:
                rulesscr.addstr(y, 0, " ", curses.COLOR_BLACK)
            if y + start == selected:
                addln(rulesscr, y, 2, rule, curses.color_pair(COLOR_SELECTED))
            elif y + start == pos:
                addln(rulesscr, y, 2, rule,
                      curses.color_pair(COLOR_CURRENT) | curses.A_BOLD)
            else:
                addln(rulesscr, y, 2, rule)
        rulesscr.noutrefresh()

    def renderstring(win, state, output):
        maxy, maxx = win.getmaxyx()
        length = min(maxy - 1, len(output))
        for y in range(0, length):
            win.addstr(y, 0, output[y])
        win.noutrefresh()

    def renderpatch(win, state):
        start = state['modes'][MODE_PATCH]['line_offset']
        renderstring(win, state, patchcontents(state)[start:])

    def layout(mode):
        maxy, maxx = stdscr.getmaxyx()
        helplen = len(helplines(mode))
        return {
            'commit': (8, maxx),
            'help': (helplen, maxx),
            'main': (maxy - helplen - 8, maxx),
        }

    def drawvertwin(size, y, x):
        win = curses.newwin(size[0], size[1], y, x)
        y += size[0]
        return win, y, x

    state = {
        'pos': 0,
        'rules': rules,
        'selected': None,
        'mode': (MODE_INIT, MODE_INIT),
        'page_height': None,
        'modes': {
            MODE_RULES: {
                'line_offset': 0,
            },
            MODE_PATCH: {
                'line_offset': 0,
            }
        },
        'repo': repo,
    }

    # eventloop
    ch = None
    stdscr.clear()
    stdscr.refresh()
    while True:
        try:
            oldmode, _ = state['mode']
            if oldmode == MODE_INIT:
                changemode(state, MODE_RULES)
            e = event(state, ch)

            if e == E_QUIT:
                return False
            if e == E_HISTEDIT:
                return state['rules']
            else:
                if e == E_RESIZE:
                    size = screen_size()
                    if size != stdscr.getmaxyx():
                        curses.resizeterm(*size)

                curmode, _ = state['mode']
                sizes = layout(curmode)
                if curmode != oldmode:
                    state['page_height'] = sizes['main'][0]
                    # Adjust the view to fit the current screen size.
                    movecursor(state, state['pos'], state['pos'])

                # Pack the windows against the top, each pane spread across the
                # full width of the screen.
                y, x = (0, 0)
                helpwin, y, x = drawvertwin(sizes['help'], y, x)
                mainwin, y, x = drawvertwin(sizes['main'], y, x)
                commitwin, y, x = drawvertwin(sizes['commit'], y, x)

                if e in (E_PAGEDOWN, E_PAGEUP, E_LINEDOWN, E_LINEUP):
                    if e == E_PAGEDOWN:
                        changeview(state, +1, 'page')
                    elif e == E_PAGEUP:
                        changeview(state, -1, 'page')
                    elif e == E_LINEDOWN:
                        changeview(state, +1, 'line')
                    elif e == E_LINEUP:
                        changeview(state, -1, 'line')

                # start rendering
                commitwin.erase()
                helpwin.erase()
                mainwin.erase()
                if curmode == MODE_PATCH:
                    renderpatch(mainwin, state)
                elif curmode == MODE_HELP:
                    renderstring(mainwin, state, __doc__.strip().splitlines())
                else:
                    renderrules(mainwin, state)
                    rendercommit(commitwin, state)
                renderhelp(helpwin, state)
                curses.doupdate()
                # done rendering
                ch = stdscr.getkey()
        except curses.error:
            pass

def _chistedit(ui, repo, *freeargs, **opts):
    """interactively edit changeset history via a curses interface

    Provides an ncurses interface to histedit. Press ? in chistedit mode
    to see extensive help. Requires python-curses to be installed."""

    if curses is None:
        raise error.Abort(_("Python curses library required"))

    # disable color
    ui._colormode = None

    try:
        keep = opts.get('keep')
        revs = opts.get('rev', [])[:]
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)

        if os.path.exists(os.path.join(repo.path, 'histedit-state')):
            raise error.Abort(_('history edit already in progress, try '
                                '--continue or --abort'))
        revs.extend(freeargs)
        if not revs:
            defaultrev = destutil.desthistedit(ui, repo)
            if defaultrev is not None:
                revs.append(defaultrev)
        if len(revs) != 1:
            raise error.Abort(
                _('histedit requires exactly one ancestor revision'))

        rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
        if len(rr) != 1:
            raise error.Abort(_('The specified revisions must have '
                                'exactly one common root'))
        root = rr[0].node()

        topmost = repo.dirstate.p1()
        revs = between(repo, root, topmost, keep)
        if not revs:
            raise error.Abort(_('%s is not an ancestor of working directory') %
                              node.short(root))

        ctxs = []
        for i, r in enumerate(revs):
            ctxs.append(histeditrule(repo[r], i))
        rc = curses.wrapper(functools.partial(_chisteditmain, repo, ctxs))
        curses.echo()
        curses.endwin()
        if rc is False:
            ui.write(_("histedit aborted\n"))
            return 0
        if type(rc) is list:
            ui.status(_("running histedit\n"))
            rules = makecommands(rc)
            filename = repo.vfs.join('chistedit')
            with open(filename, 'w+') as fp:
                for r in rules:
                    fp.write(r)
            opts['commands'] = filename
            return _texthistedit(ui, repo, *freeargs, **opts)
    except KeyboardInterrupt:
        pass
    return -1

@command('histedit',
    [('', 'commands', '',
      _('read history edits from the specified file'), _('FILE')),
     ('c', 'continue', False, _('continue an edit already in progress')),
     ('', 'edit-plan', False, _('edit remaining actions list')),
     ('k', 'keep', False,
      _("don't strip old nodes after edit is complete")),
     ('', 'abort', False, _('abort an edit in progress')),
     ('o', 'outgoing', False, _('changesets not found in destination')),
     ('f', 'force', False,
      _('force outgoing even for unrelated repositories')),
     ('r', 'rev', [], _('first revision to be edited'), _('REV'))] +
    cmdutil.formatteropts,
    _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"),
    helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
def histedit(ui, repo, *freeargs, **opts):
    """interactively edit changeset history

    This command lets you edit a linear series of changesets (up to
    and including the working directory, which should be clean).
    You can:

    - `pick` to [re]order a changeset

    - `drop` to omit a changeset

    - `mess` to reword the changeset commit message

    - `fold` to combine it with the preceding changeset (using the later date)

    - `roll` like fold, but discarding this commit's description and date

    - `edit` to edit this changeset (preserving date)

    - `base` to checkout changeset and apply further changesets from there

    There are a number of ways to select the root changeset:

    - Specify ANCESTOR directly

    - Use --outgoing -- it will be the first linear changeset not
      included in destination. (See :hg:`help config.paths.default-push`)

    - Otherwise, the value from the "histedit.defaultrev" config option
      is used as a revset to select the base revision when ANCESTOR is not
      specified. The first revision returned by the revset is used. By
      default, this selects the editable history that is unique to the
      ancestry of the working directory.

    .. container:: verbose

       If you use --outgoing, this command will abort if there are ambiguous
       outgoing revisions. For example, if there are multiple branches
       containing outgoing revisions.

       Use "min(outgoing() and ::.)" or a similar revset specification
       instead of --outgoing to specify the edit target revision exactly in
       such an ambiguous situation. See :hg:`help revsets` for details about
       selecting revisions.

    .. container:: verbose

       Examples:

       - A number of changes have been made.
         Revision 3 is no longer needed.

         Start history editing from revision 3::

           hg histedit -r 3

         An editor opens, containing the list of revisions,
         with specific actions specified::

           pick 5339bf82f0ca 3 Zworgle the foobar
           pick 8ef592ce7cc4 4 Bedazzle the zerlog
           pick 0a9639fcda9d 5 Morgify the cromulancy

         Additional information about the possible actions
         to take appears below the list of revisions.

         To remove revision 3 from the history,
         its action (at the beginning of the relevant line)
         is changed to 'drop'::

           drop 5339bf82f0ca 3 Zworgle the foobar
           pick 8ef592ce7cc4 4 Bedazzle the zerlog
           pick 0a9639fcda9d 5 Morgify the cromulancy

       - A number of changes have been made.
         Revisions 2 and 4 need to be swapped.

         Start history editing from revision 2::

           hg histedit -r 2

         An editor opens, containing the list of revisions,
         with specific actions specified::

           pick 252a1af424ad 2 Blorb a morgwazzle
           pick 5339bf82f0ca 3 Zworgle the foobar
           pick 8ef592ce7cc4 4 Bedazzle the zerlog

         To swap revisions 2 and 4, their lines are swapped
         in the editor::

           pick 8ef592ce7cc4 4 Bedazzle the zerlog
           pick 5339bf82f0ca 3 Zworgle the foobar
           pick 252a1af424ad 2 Blorb a morgwazzle

    Returns 0 on success, 1 if user intervention is required (not only
    for intentional "edit" command, but also for resolving unexpected
    conflicts).
    """
    # kludge: _chistedit only works for starting an edit, not aborting
    # or continuing, so fall back to regular _texthistedit for those
    # operations.
    if ui.interface('histedit') == 'curses' and _getgoal(
            pycompat.byteskwargs(opts)) == goalnew:
        return _chistedit(ui, repo, *freeargs, **opts)
    return _texthistedit(ui, repo, *freeargs, **opts)

def _texthistedit(ui, repo, *freeargs, **opts):
    state = histeditstate(repo)
    with repo.wlock() as wlock, repo.lock() as lock:
        state.wlock = wlock
        state.lock = lock
        _histedit(ui, repo, state, *freeargs, **opts)

goalcontinue = 'continue'
goalabort = 'abort'
goaleditplan = 'edit-plan'
goalnew = 'new'

def _getgoal(opts):
    if opts.get(b'continue'):
        return goalcontinue
    if opts.get(b'abort'):
        return goalabort
    if opts.get(b'edit_plan'):
        return goaleditplan
    return goalnew

1655 def _readfile(ui, path):
1655 def _readfile(ui, path):
1656 if path == '-':
1656 if path == '-':
1657 with ui.timeblockedsection('histedit'):
1657 with ui.timeblockedsection('histedit'):
1658 return ui.fin.read()
1658 return ui.fin.read()
1659 else:
1659 else:
1660 with open(path, 'rb') as f:
1660 with open(path, 'rb') as f:
1661 return f.read()
1661 return f.read()
1662
1662
def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
    # TODO only abort if we try to histedit mq patches, not just
    # blanket if mq patches are applied somewhere
    mq = getattr(repo, 'mq', None)
    if mq and mq.applied:
        raise error.Abort(_('source has mq patches applied'))

    # basic argument incompatibility processing
    outg = opts.get('outgoing')
    editplan = opts.get('edit_plan')
    abort = opts.get('abort')
    force = opts.get('force')
    if force and not outg:
        raise error.Abort(_('--force only allowed with --outgoing'))
    if goal == 'continue':
        if any((outg, abort, revs, freeargs, rules, editplan)):
            raise error.Abort(_('no arguments allowed with --continue'))
    elif goal == 'abort':
        if any((outg, revs, freeargs, rules, editplan)):
            raise error.Abort(_('no arguments allowed with --abort'))
    elif goal == 'edit-plan':
        if any((outg, revs, freeargs)):
            raise error.Abort(_('only --commands argument allowed with '
                                '--edit-plan'))
    else:
        if state.inprogress():
            raise error.Abort(_('history edit already in progress, try '
                                '--continue or --abort'))
        if outg:
            if revs:
                raise error.Abort(_('no revisions allowed with --outgoing'))
            if len(freeargs) > 1:
                raise error.Abort(
                    _('only one repo argument allowed with --outgoing'))
        else:
            revs.extend(freeargs)
            if len(revs) == 0:
                defaultrev = destutil.desthistedit(ui, repo)
                if defaultrev is not None:
                    revs.append(defaultrev)

            if len(revs) != 1:
                raise error.Abort(
                    _('histedit requires exactly one ancestor revision'))

def _histedit(ui, repo, state, *freeargs, **opts):
    opts = pycompat.byteskwargs(opts)
    fm = ui.formatter('histedit', opts)
    fm.startitem()
    goal = _getgoal(opts)
    revs = opts.get('rev', [])
    nobackup = not ui.configbool('rewrite', 'backup-bundle')
    rules = opts.get('commands', '')
    state.keep = opts.get('keep', False)

    _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)

    hastags = False
    if revs:
        revs = scmutil.revrange(repo, revs)
        ctxs = [repo[rev] for rev in revs]
        for ctx in ctxs:
            tags = [tag for tag in ctx.tags() if tag != 'tip']
            if not hastags:
                hastags = len(tags)
    if hastags:
        if ui.promptchoice(_('warning: tags associated with the given'
                             ' changeset will be lost after histedit.\n'
                             'do you want to continue (yN)? $$ &Yes $$ &No'),
                           default=1):
            raise error.Abort(_('histedit cancelled\n'))
    # rebuild state
    if goal == goalcontinue:
        state.read()
        state = bootstrapcontinue(ui, state, opts)
    elif goal == goaleditplan:
        _edithisteditplan(ui, repo, state, rules)
        return
    elif goal == goalabort:
        _aborthistedit(ui, repo, state, nobackup=nobackup)
        return
    else:
        # goal == goalnew
        _newhistedit(ui, repo, state, revs, freeargs, opts)

    _continuehistedit(ui, repo, state)
    _finishhistedit(ui, repo, state, fm)
    fm.end()

def _continuehistedit(ui, repo, state):
    """This function runs after either:
    - bootstrapcontinue (if the goal is 'continue')
    - _newhistedit (if the goal is 'new')
    """
    # preprocess rules so that we can hide inner folds from the user
    # and only show one editor
    actions = state.actions[:]
    for idx, (action, nextact) in enumerate(
            zip(actions, actions[1:] + [None])):
        if action.verb == 'fold' and nextact and nextact.verb == 'fold':
            state.actions[idx].__class__ = _multifold

    # Force an initial state file write, so the user can run --abort/continue
    # even if there's an exception before the first transaction serialize.
    state.write()

    tr = None
    # Don't use singletransaction by default since it rolls the entire
    # transaction back if an unexpected exception happens (like a
    # pretxncommit hook throws, or the user aborts the commit msg editor).
    if ui.configbool("histedit", "singletransaction"):
        # Don't use a 'with' for the transaction, since actions may close
        # and reopen a transaction. For example, if the action executes an
        # external process it may choose to commit the transaction first.
        tr = repo.transaction('histedit')
    progress = ui.makeprogress(_("editing"), unit=_('changes'),
                               total=len(state.actions))
    with progress, util.acceptintervention(tr):
        while state.actions:
            state.write(tr=tr)
            actobj = state.actions[0]
            progress.increment(item=actobj.torule())
            ui.debug('histedit: processing %s %s\n' % (actobj.verb,
                                                       actobj.torule()))
            parentctx, replacement_ = actobj.run()
            state.parentctxnode = parentctx.node()
            state.replacements.extend(replacement_)
            state.actions.pop(0)

    state.write()

def _finishhistedit(ui, repo, state, fm):
    """This action runs when histedit is finishing its session"""
    hg.updaterepo(repo, state.parentctxnode, overwrite=False)

    mapping, tmpnodes, created, ntm = processreplacement(state)
    if mapping:
        for prec, succs in mapping.iteritems():
            if not succs:
                ui.debug('histedit: %s is dropped\n' % node.short(prec))
            else:
                ui.debug('histedit: %s is replaced by %s\n' % (
                    node.short(prec), node.short(succs[0])))
                if len(succs) > 1:
                    m = 'histedit: %s'
                    for n in succs[1:]:
                        ui.debug(m % node.short(n))

    if not state.keep:
        if mapping:
            movetopmostbookmarks(repo, state.topmost, ntm)
            # TODO update mq state
    else:
        mapping = {}

    for n in tmpnodes:
        if n in repo:
            mapping[n] = ()

    # remove entries about unknown nodes
    nodemap = repo.unfiltered().changelog.nodemap
    mapping = {k: v for k, v in mapping.items()
               if k in nodemap and all(n in nodemap for n in v)}
    scmutil.cleanupnodes(repo, mapping, 'histedit')
    hf = fm.hexfunc
    fl = fm.formatlist
    fd = fm.formatdict
    nodechanges = fd({hf(oldn): fl([hf(n) for n in newn], name='node')
                      for oldn, newn in mapping.iteritems()},
                     key="oldnode", value="newnodes")
    fm.data(nodechanges=nodechanges)

    state.clear()
    if os.path.exists(repo.sjoin('undo')):
        os.unlink(repo.sjoin('undo'))
    if repo.vfs.exists('histedit-last-edit.txt'):
        repo.vfs.unlink('histedit-last-edit.txt')

def _aborthistedit(ui, repo, state, nobackup=False):
    try:
        state.read()
        __, leafs, tmpnodes, __ = processreplacement(state)
        ui.debug('restore wc to old parent %s\n'
                 % node.short(state.topmost))

        # Recover our old commits if necessary
        if not state.topmost in repo and state.backupfile:
            backupfile = repo.vfs.join(state.backupfile)
            f = hg.openpath(ui, backupfile)
            gen = exchange.readbundle(ui, f, backupfile)
            with repo.transaction('histedit.abort') as tr:
                bundle2.applybundle(repo, gen, tr, source='histedit',
                                    url='bundle:' + backupfile)

            os.remove(backupfile)

        # check whether we should update away
        if repo.unfiltered().revs('parents() and (%n or %ln::)',
                                  state.parentctxnode, leafs | tmpnodes):
            hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
        cleanupnode(ui, repo, tmpnodes, nobackup=nobackup)
        cleanupnode(ui, repo, leafs, nobackup=nobackup)
    except Exception:
        if state.inprogress():
            ui.warn(_('warning: encountered an exception during histedit '
                      '--abort; the repository may not have been completely '
                      'cleaned up\n'))
        raise
    finally:
        state.clear()

def _edithisteditplan(ui, repo, state, rules):
    state.read()
    if not rules:
        comment = geteditcomment(ui,
                                 node.short(state.parentctxnode),
                                 node.short(state.topmost))
        rules = ruleeditor(repo, ui, state.actions, comment)
    else:
        rules = _readfile(ui, rules)
    actions = parserules(rules, state)
    ctxs = [repo[act.node]
            for act in state.actions if act.node]
    warnverifyactions(ui, repo, actions, state, ctxs)
    state.actions = actions
    state.write()

def _newhistedit(ui, repo, state, revs, freeargs, opts):
    outg = opts.get('outgoing')
    rules = opts.get('commands', '')
    force = opts.get('force')

    cmdutil.checkunfinished(repo)
    cmdutil.bailifchanged(repo)

    topmost = repo.dirstate.p1()
    if outg:
        if freeargs:
            remote = freeargs[0]
        else:
            remote = None
        root = findoutgoing(ui, repo, remote, force, opts)
    else:
        rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
        if len(rr) != 1:
            raise error.Abort(_('The specified revisions must have '
                                'exactly one common root'))
        root = rr[0].node()

    revs = between(repo, root, topmost, state.keep)
    if not revs:
        raise error.Abort(_('%s is not an ancestor of working directory') %
                          node.short(root))

    ctxs = [repo[r] for r in revs]
    if not rules:
        comment = geteditcomment(ui, node.short(root), node.short(topmost))
        actions = [pick(state, r) for r in revs]
        rules = ruleeditor(repo, ui, actions, comment)
    else:
        rules = _readfile(ui, rules)
    actions = parserules(rules, state)
    warnverifyactions(ui, repo, actions, state, ctxs)

    parentctxnode = repo[root].p1().node()

    state.parentctxnode = parentctxnode
    state.actions = actions
    state.topmost = topmost
    state.replacements = []

    ui.log("histedit", "%d actions to histedit\n", len(actions),
           histedit_num_actions=len(actions))

    # Create a backup so we can always abort completely.
    backupfile = None
    if not obsolete.isenabled(repo, obsolete.createmarkersopt):
        backupfile = repair.backupbundle(repo, [parentctxnode],
                                         [topmost], root, 'histedit')
    state.backupfile = backupfile

def _getsummary(ctx):
    # a common pattern is to extract the summary but default to the empty
    # string
    summary = ctx.description() or ''
    if summary:
        summary = summary.splitlines()[0]
    return summary

def bootstrapcontinue(ui, state, opts):
    repo = state.repo

    ms = mergemod.mergestate.read(repo)
    mergeutil.checkunresolved(ms)

    if state.actions:
        actobj = state.actions.pop(0)

        if _isdirtywc(repo):
            actobj.continuedirty()
            if _isdirtywc(repo):
                abortdirty()

        parentctx, replacements = actobj.continueclean()

        state.parentctxnode = parentctx.node()
        state.replacements.extend(replacements)

    return state

def between(repo, old, new, keep):
    """select and validate the set of revision to edit

    When keep is false, the specified set can't have children."""
    revs = repo.revs('%n::%n', old, new)
    if revs and not keep:
        if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
                repo.revs('(%ld::) - (%ld)', revs, revs)):
            raise error.Abort(_('can only histedit a changeset together '
                                'with all its descendants'))
        if repo.revs('(%ld) and merge()', revs):
            raise error.Abort(_('cannot edit history that contains merges'))
        root = repo[revs.first()]  # list is already sorted by repo.revs()
        if not root.mutable():
            raise error.Abort(_('cannot edit public changeset: %s') % root,
                              hint=_("see 'hg help phases' for details"))
    return pycompat.maplist(repo.changelog.node, revs)

def ruleeditor(repo, ui, actions, editcomment=""):
    """open an editor to edit rules

    rules are in the format [ [act, ctx], ...] like in state.rules
    """
    if repo.ui.configbool("experimental", "histedit.autoverb"):
        newact = util.sortdict()
        for act in actions:
            ctx = repo[act.node]
            summary = _getsummary(ctx)
            fword = summary.split(' ', 1)[0].lower()
            added = False

            # if it doesn't end with the special character '!' just skip this
            if fword.endswith('!'):
                fword = fword[:-1]
                if fword in primaryactions | secondaryactions | tertiaryactions:
                    act.verb = fword
                    # get the target summary
                    tsum = summary[len(fword) + 1:].lstrip()
                    # safe but slow: reverse iterate over the actions so we
                    # don't clash on two commits having the same summary
                    for na, l in reversed(list(newact.iteritems())):
                        actx = repo[na.node]
                        asum = _getsummary(actx)
                        if asum == tsum:
                            added = True
                            l.append(act)
                            break

            if not added:
                newact[act] = []

        # copy over and flatten the new list
        actions = []
        for na, l in newact.iteritems():
            actions.append(na)
            actions += l

    rules = '\n'.join([act.torule() for act in actions])
    rules += '\n\n'
    rules += editcomment
    rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
                    repopath=repo.path, action='histedit')

    # Save edit rules in .hg/histedit-last-edit.txt in case
    # the user needs to ask for help after something
    # surprising happens.
    with repo.vfs('histedit-last-edit.txt', 'wb') as f:
        f.write(rules)

    return rules

def parserules(rules, state):
    """Read the histedit rules string and return list of action objects """
    rules = [l for l in (r.strip() for r in rules.splitlines())
             if l and not l.startswith('#')]
    actions = []
    for r in rules:
        if ' ' not in r:
            raise error.ParseError(_('malformed line "%s"') % r)
        verb, rest = r.split(' ', 1)

        if verb not in actiontable:
            raise error.ParseError(_('unknown action "%s"') % verb)

        action = actiontable[verb].fromrule(state, rest)
        actions.append(action)
    return actions

def warnverifyactions(ui, repo, actions, state, ctxs):
    try:
        verifyactions(actions, state, ctxs)
    except error.ParseError:
        if repo.vfs.exists('histedit-last-edit.txt'):
            ui.warn(_('warning: histedit rules saved '
                      'to: .hg/histedit-last-edit.txt\n'))
        raise

def verifyactions(actions, state, ctxs):
    """Verify that there exists exactly one action per given changeset and
    other constraints.

    Will abort if there are too many or too few rules, a malformed rule,
    or a rule on a changeset outside of the user-given range.
    """
    expected = set(c.node() for c in ctxs)
    seen = set()
    prev = None

    if actions and actions[0].verb in ['roll', 'fold']:
        raise error.ParseError(_('first changeset cannot use verb "%s"') %
                               actions[0].verb)

    for action in actions:
        action.verify(prev, expected, seen)
        prev = action
        if action.node is not None:
            seen.add(action.node)
    missing = sorted(expected - seen)  # sort to stabilize output

    if state.repo.ui.configbool('histedit', 'dropmissing'):
        if len(actions) == 0:
            raise error.ParseError(_('no rules provided'),
                                   hint=_('use strip extension to remove commits'))

        drops = [drop(state, n) for n in missing]
        # put them at the beginning so they execute immediately and
        # don't show up in the edit-plan in the future
        actions[:0] = drops
    elif missing:
        raise error.ParseError(_('missing rules for changeset %s') %
                               node.short(missing[0]),
                               hint=_('use "drop %s" to discard, see also: '
                                      "'hg help -e histedit.config'")
                               % node.short(missing[0]))

def adjustreplacementsfrommarkers(repo, oldreplacements):
    """Adjust replacements from obsolescence markers

    Replacements structure is originally generated based on
    histedit's state and does not account for changes that are
    not recorded there. This function fixes that by adding
    data read from obsolescence markers"""
    if not obsolete.isenabled(repo, obsolete.createmarkersopt):
        return oldreplacements

    unfi = repo.unfiltered()
    nm = unfi.changelog.nodemap
    obsstore = repo.obsstore
    newreplacements = list(oldreplacements)
    oldsuccs = [r[1] for r in oldreplacements]
    # successors that have already been added to succstocheck once
    seensuccs = set().union(*oldsuccs)  # create a set from an iterable of tuples
    succstocheck = list(seensuccs)
    while succstocheck:
        n = succstocheck.pop()
        missing = nm.get(n) is None
        markers = obsstore.successors.get(n, ())
        if missing and not markers:
            # dead end, mark it as such
            newreplacements.append((n, ()))
        for marker in markers:
            nsuccs = marker[1]
            newreplacements.append((n, nsuccs))
            for nsucc in nsuccs:
2136 for nsucc in nsuccs:
2136 for nsucc in nsuccs:
2137 if nsucc not in seensuccs:
2137 if nsucc not in seensuccs:
2138 seensuccs.add(nsucc)
2138 seensuccs.add(nsucc)
2139 succstocheck.append(nsucc)
2139 succstocheck.append(nsucc)
2140
2140
2141 return newreplacements
2141 return newreplacements
2142
2142
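The walk above can be sketched in isolation. This is a minimal, hypothetical model, not real Mercurial code: `nodemap` stands in for the changelog's nodemap and `successors` maps a node directly to its successor tuples (the real obsstore stores full marker tuples and the code reads `marker[1]`).

```python
# Standalone sketch of the breadth-first successor walk in
# adjustreplacementsfrommarkers. Plain dicts stand in for the repo's
# nodemap and obsstore (hypothetical data, not real Mercurial objects).

def extendreplacements(oldreplacements, nodemap, successors):
    """Follow obsolescence successors transitively, appending any
    replacement edges not already recorded in oldreplacements."""
    newreplacements = list(oldreplacements)
    # successors that have already been queued once
    seensuccs = set().union(*[r[1] for r in oldreplacements])
    succstocheck = list(seensuccs)
    while succstocheck:
        n = succstocheck.pop()
        missing = nodemap.get(n) is None
        markers = successors.get(n, ())
        if missing and not markers:
            newreplacements.append((n, ()))  # dead end: pruned with no successor
        for nsuccs in markers:
            newreplacements.append((n, nsuccs))
            for nsucc in nsuccs:
                if nsucc not in seensuccs:
                    seensuccs.add(nsucc)
                    succstocheck.append(nsucc)
    return newreplacements

# 'a' was replaced by 'b'; a later, unrecorded rewrite replaced 'b' by 'c'
repls = extendreplacements(
    [('a', ('b',))],
    nodemap={'a': 0, 'c': 2},    # 'b' is no longer visible in the repo
    successors={'b': [('c',)]},  # one marker: b -> (c,)
)
print(repls)  # [('a', ('b',)), ('b', ('c',))]
```

The chain `a -> b -> c` is thus recorded edge by edge; `processreplacement` below is what collapses it into a direct `a -> c` relation.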
def processreplacement(state):
    """process the list of replacements to return

    1) the final mapping between original and created nodes
    2) the list of temporary nodes created by histedit
    3) the list of new commits created by histedit"""
    replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
    allsuccs = set()
    replaced = set()
    fullmapping = {}
    # initialize basic set
    # fullmapping records all operations recorded in replacement
    for rep in replacements:
        allsuccs.update(rep[1])
        replaced.add(rep[0])
        fullmapping.setdefault(rep[0], set()).update(rep[1])
    new = allsuccs - replaced
    tmpnodes = allsuccs & replaced
    # Reduce fullmapping into a direct relation between original nodes
    # and the final nodes created during the history edit.
    # Dropped changesets are replaced by an empty list.
    toproceed = set(fullmapping)
    final = {}
    while toproceed:
        for x in list(toproceed):
            succs = fullmapping[x]
            for s in list(succs):
                if s in toproceed:
                    # non-final node with unknown closure
                    # We can't process this now
                    break
                elif s in final:
                    # non-final node, replace with closure
                    succs.remove(s)
                    succs.update(final[s])
            else:
                final[x] = succs
                toproceed.remove(x)
    # remove tmpnodes from final mapping
    for n in tmpnodes:
        del final[n]
    # we expect all changes involved in final to exist in the repo
    # turn `final` into list (topologically sorted)
    nm = state.repo.changelog.nodemap
    for prec, succs in final.items():
        final[prec] = sorted(succs, key=nm.get)

    # compute the topmost element (necessary for bookmarks)
    if new:
        newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
    elif not final:
        # Nothing was rewritten at all. We won't need `newtopmost`:
        # it is the same as `oldtopmost` and `processreplacement` knows it.
        newtopmost = None
    else:
        # Everything was dropped. The newtopmost is the parent of the root.
        r = state.repo.changelog.rev
        newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()

    return final, tmpnodes, new, newtopmost

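The fixed-point reduction at the heart of `processreplacement` can be shown on plain data. This is a sketch with single-letter strings standing in for node hashes (an assumption for illustration); the real function then drops the temporary nodes and sorts the survivors topologically.

```python
# Minimal sketch of the closure reduction: given {original: set(successors)},
# repeatedly substitute intermediate nodes until every original maps to
# final nodes only.

def reduceclosure(fullmapping):
    fullmapping = {k: set(v) for k, v in fullmapping.items()}  # don't mutate input
    toproceed = set(fullmapping)
    final = {}
    while toproceed:
        for x in list(toproceed):
            succs = fullmapping[x]
            for s in list(succs):
                if s in toproceed:
                    # successor not resolved yet; retry on a later pass
                    break
                elif s in final:
                    # intermediate node: splice in its closure
                    succs.remove(s)
                    succs.update(final[s])
            else:
                final[x] = succs
                toproceed.remove(x)
    return final

# histedit rewrote a -> b, then b -> c (b is a temporary node)
final = reduceclosure({'a': {'b'}, 'b': {'c'}})
final.pop('b')       # 'b' is both replaced and a successor, so it is temporary
print(final)  # {'a': {'c'}}
```

Each `while` pass resolves at least one node whose successors are all settled, so the loop terminates for any acyclic replacement graph.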
def movetopmostbookmarks(repo, oldtopmost, newtopmost):
    """Move bookmarks from oldtopmost to the newly created topmost

    This is arguably a feature and we may only want that for the active
    bookmark. But the behavior is kept compatible with the old version for now.
    """
    if not oldtopmost or not newtopmost:
        return
    oldbmarks = repo.nodebookmarks(oldtopmost)
    if oldbmarks:
        with repo.lock(), repo.transaction('histedit') as tr:
            marks = repo._bookmarks
            changes = []
            for name in oldbmarks:
                changes.append((name, newtopmost))
            marks.applychanges(repo, tr, changes)

def cleanupnode(ui, repo, nodes, nobackup=False):
    """strip a group of nodes from the repository

    The set of nodes to strip may contain unknown nodes."""
    with repo.lock():
        # do not let filtering get in the way of the cleanse
        # we should probably get rid of obsolescence markers created during the
        # histedit, but we currently do not have such information.
        repo = repo.unfiltered()
        # Find all nodes that need to be stripped
        # (we use %lr instead of %ln to silently ignore unknown items)
        nm = repo.changelog.nodemap
        nodes = sorted(n for n in nodes if n in nm)
        roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
        if roots:
            backup = not nobackup
            repair.strip(ui, repo, roots, backup=backup)

def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
    if isinstance(nodelist, str):
        nodelist = [nodelist]
    state = histeditstate(repo)
    if state.inprogress():
        state.read()
        histedit_nodes = {action.node for action
                          in state.actions if action.node}
        common_nodes = histedit_nodes & set(nodelist)
        if common_nodes:
            raise error.Abort(_("histedit in progress, can't strip %s")
                              % ', '.join(node.short(x) for x in common_nodes))
    return orig(ui, repo, nodelist, *args, **kwargs)

extensions.wrapfunction(repair, 'strip', stripwrapper)

def summaryhook(ui, repo):
    state = histeditstate(repo)
    if not state.inprogress():
        return
    state.read()
    if state.actions:
        # i18n: column positioning for "hg summary"
        ui.write(_('hist: %s (histedit --continue)\n') %
                 (ui.label(_('%d remaining'), 'histedit.remaining') %
                  len(state.actions)))

def extsetup(ui):
    cmdutil.summaryhooks.add('histedit', summaryhook)
    cmdutil.unfinishedstates.append(
        ['histedit-state', False, True, _('histedit in progress'),
         _("use 'hg histedit --continue' or 'hg histedit --abort'")])
    cmdutil.afterresolvedstates.append(
        ['histedit-state', _('hg histedit --continue')])
@@ -1,223 +1,223 @@
# uncommit - undo the actions of a commit
#
# Copyright 2011 Peter Arrenbrecht <peter.arrenbrecht@gmail.com>
#                Logilab SA        <contact@logilab.fr>
#                Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Patrick Mezard <patrick@mezard.eu>
# Copyright 2016 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""uncommit part or all of a local changeset (EXPERIMENTAL)

This command undoes the effect of a local commit, returning the affected
files to their uncommitted state. This means that files modified, added or
removed in the changeset will be left unchanged, and so will remain modified,
added and removed in the working directory.
"""

from __future__ import absolute_import

from mercurial.i18n import _

from mercurial import (
    cmdutil,
    commands,
    context,
    copies as copiesmod,
    error,
    node,
    obsutil,
    pycompat,
    registrar,
    rewriteutil,
    scmutil,
)

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

configitem('experimental', 'uncommitondirtywdir',
    default=False,
)
configitem('experimental', 'uncommit.keep',
    default=False,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

def _commitfiltered(repo, ctx, match, keepcommit):
    """Recommit ctx with changed files not in match. Return the new
    node identifier, or None if nothing changed.
    """
    base = ctx.p1()
    # ctx
    initialfiles = set(ctx.files())
    exclude = set(f for f in initialfiles if match(f))

    # No files matched commit, so nothing excluded
    if not exclude:
        return None

    # return the p1 so that we don't create an obsmarker later
    if not keepcommit:
        return ctx.p1().node()

    files = (initialfiles - exclude)
    # Filter copies
    copied = copiesmod.pathcopies(base, ctx)
    copied = dict((dst, src) for dst, src in copied.iteritems()
                  if dst in files)
    def filectxfn(repo, memctx, path, contentctx=ctx, redirect=()):
        if path not in contentctx:
            return None
        fctx = contentctx[path]
        mctx = context.memfilectx(repo, memctx, fctx.path(), fctx.data(),
                                  fctx.islink(),
                                  fctx.isexec(),
-                                 copied=copied.get(path))
+                                 copysource=copied.get(path))
        return mctx

    if not files:
        repo.ui.status(_("note: keeping empty commit\n"))

    new = context.memctx(repo,
                         parents=[base.node(), node.nullid],
                         text=ctx.description(),
                         files=files,
                         filectxfn=filectxfn,
                         user=ctx.user(),
                         date=ctx.date(),
                         extra=ctx.extra())
    return repo.commitctx(new)

@command('uncommit',
    [('', 'keep', None, _('allow an empty commit after uncommiting')),
     ('', 'allow-dirty-working-copy', False,
      _('allow uncommit with outstanding changes'))
    ] + commands.walkopts,
    _('[OPTION]... [FILE]...'),
    helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
def uncommit(ui, repo, *pats, **opts):
    """uncommit part or all of a local changeset

    This command undoes the effect of a local commit, returning the affected
    files to their uncommitted state. This means that files modified or
    deleted in the changeset will be left unchanged, and so will remain
    modified in the working directory.

    If no files are specified, the commit will be pruned, unless --keep is
    given.
    """
    opts = pycompat.byteskwargs(opts)

    with repo.wlock(), repo.lock():

        m, a, r, d = repo.status()[:4]
        isdirtypath = any(set(m + a + r + d) & set(pats))
        allowdirtywcopy = (opts['allow_dirty_working_copy'] or
                       repo.ui.configbool('experimental', 'uncommitondirtywdir'))
        if not allowdirtywcopy and (not pats or isdirtypath):
            cmdutil.bailifchanged(repo, hint=_('requires '
                                '--allow-dirty-working-copy to uncommit'))
        old = repo['.']
        rewriteutil.precheck(repo, [old.rev()], 'uncommit')
        if len(old.parents()) > 1:
            raise error.Abort(_("cannot uncommit merge changeset"))

        with repo.transaction('uncommit'):
            match = scmutil.match(old, pats, opts)
            keepcommit = pats
            if not keepcommit:
                if opts.get('keep') is not None:
                    keepcommit = opts.get('keep')
                else:
                    keepcommit = ui.configbool('experimental', 'uncommit.keep')
            newid = _commitfiltered(repo, old, match, keepcommit)
            if newid is None:
                ui.status(_("nothing to uncommit\n"))
                return 1

            mapping = {}
            if newid != old.p1().node():
                # Move local changes on filtered changeset
                mapping[old.node()] = (newid,)
            else:
                # Fully removed the old commit
                mapping[old.node()] = ()

            with repo.dirstate.parentchange():
                scmutil.movedirstate(repo, repo[newid], match)

            scmutil.cleanupnodes(repo, mapping, 'uncommit', fixphase=True)

def predecessormarkers(ctx):
    """yields the obsolete markers marking the given changeset as a successor"""
    for data in ctx.repo().obsstore.predecessors.get(ctx.node(), ()):
        yield obsutil.marker(ctx.repo(), data)

@command('unamend', [], helpcategory=command.CATEGORY_CHANGE_MANAGEMENT,
         helpbasic=True)
def unamend(ui, repo, **opts):
    """undo the most recent amend operation on the current changeset

    This command rolls back to the previous version of a changeset,
    leaving the working directory in the state it was in before running
    `hg amend` (e.g. files modified as part of an amend will be
    marked as modified by `hg status`).
    """

    unfi = repo.unfiltered()
    with repo.wlock(), repo.lock(), repo.transaction('unamend'):

        # identify the commit from which to unamend
        curctx = repo['.']

        rewriteutil.precheck(repo, [curctx.rev()], 'unamend')

        # identify the commit to which to unamend
        markers = list(predecessormarkers(curctx))
        if len(markers) != 1:
            e = _("changeset must have one predecessor, found %i predecessors")
            raise error.Abort(e % len(markers))

        prednode = markers[0].prednode()
        predctx = unfi[prednode]

        # add an extra so that we get a new hash
        # note: allowing unamend to undo an unamend is an intentional feature
        extras = predctx.extra()
        extras['unamend_source'] = curctx.hex()

        def filectxfn(repo, ctx_, path):
            try:
                return predctx.filectx(path)
            except KeyError:
                return None

        # Make a new commit the same as predctx
        newctx = context.memctx(repo,
                                parents=(predctx.p1(), predctx.p2()),
                                text=predctx.description(),
                                files=predctx.files(),
                                filectxfn=filectxfn,
                                user=predctx.user(),
                                date=predctx.date(),
                                extra=extras)
        newprednode = repo.commitctx(newctx)
        newpredctx = repo[newprednode]
        dirstate = repo.dirstate

        with dirstate.parentchange():
            scmutil.movedirstate(repo, newpredctx)

        mapping = {curctx.node(): (newprednode,)}
        scmutil.cleanupnodes(repo, mapping, 'unamend', fixphase=True)
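The set arithmetic `_commitfiltered` performs when uncommitting chosen files can be sketched with plain data: matched files are excluded from the recommit, and copy records survive only when their destination does. The helper name and the file names below are hypothetical; no real repo or matcher objects are involved.

```python
# Sketch of _commitfiltered's file selection: which files stay in the
# rewritten commit, and which copy records remain valid.

def filtercommit(changedfiles, matched, pathcopies):
    """Return (surviving files, surviving copies), or None if no file
    matched (i.e. there is nothing to uncommit)."""
    exclude = set(f for f in changedfiles if f in matched)
    if not exclude:
        return None
    files = set(changedfiles) - exclude
    # keep only copies whose destination is still part of the commit
    copied = {dst: src for dst, src in pathcopies.items() if dst in files}
    return files, copied

files, copied = filtercommit(
    changedfiles={'a.txt', 'b.txt', 'c.txt'},
    matched={'b.txt'},  # files passed to `hg uncommit`
    pathcopies={'c.txt': 'old.txt', 'b.txt': 'a.txt'},
)
```

Here `b.txt` leaves the commit, so its copy record is dropped, while `c.txt` and its copy source are recommitted; the real code then feeds the surviving copies to `memfilectx` via `copysource` (the rename this commit introduces).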
@@ -1,3382 +1,3382 @@
# cmdutil.py - help for command processing in mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import copy as copymod
import errno
import os
import re

from .i18n import _
from .node import (
    hex,
    nullid,
    nullrev,
    short,
)

from . import (
    bookmarks,
    changelog,
    copies,
    crecord as crecordmod,
    dirstateguard,
    encoding,
    error,
    formatter,
    logcmdutil,
    match as matchmod,
    merge as mergemod,
    mergeutil,
    obsolete,
    patch,
    pathutil,
    phases,
    pycompat,
    revlog,
    rewriteutil,
    scmutil,
    smartset,
    subrepoutil,
    templatekw,
    templater,
    util,
    vfs as vfsmod,
)

from .utils import (
    dateutil,
    stringutil,
)

stringio = util.stringio

# templates of common command options

dryrunopts = [
    ('n', 'dry-run', None,
     _('do not perform actions, just print output')),
]

confirmopts = [
    ('', 'confirm', None,
     _('ask before applying actions')),
]

remoteopts = [
    ('e', 'ssh', '',
     _('specify ssh command to use'), _('CMD')),
    ('', 'remotecmd', '',
     _('specify hg command to run on the remote side'), _('CMD')),
    ('', 'insecure', None,
     _('do not verify server certificate (ignoring web.cacerts config)')),
]

walkopts = [
    ('I', 'include', [],
     _('include names matching the given patterns'), _('PATTERN')),
    ('X', 'exclude', [],
     _('exclude names matching the given patterns'), _('PATTERN')),
]

commitopts = [
    ('m', 'message', '',
     _('use text as commit message'), _('TEXT')),
    ('l', 'logfile', '',
     _('read commit message from file'), _('FILE')),
]

commitopts2 = [
    ('d', 'date', '',
     _('record the specified date as commit date'), _('DATE')),
    ('u', 'user', '',
     _('record the specified user as committer'), _('USER')),
]

formatteropts = [
    ('T', 'template', '',
     _('display with template'), _('TEMPLATE')),
]

templateopts = [
    ('', 'style', '',
     _('display using template map file (DEPRECATED)'), _('STYLE')),
    ('T', 'template', '',
     _('display with template'), _('TEMPLATE')),
]

logopts = [
    ('p', 'patch', None, _('show patch')),
    ('g', 'git', None, _('use git extended diff format')),
    ('l', 'limit', '',
     _('limit number of changes displayed'), _('NUM')),
    ('M', 'no-merges', None, _('do not show merges')),
    ('', 'stat', None, _('output diffstat-style summary of changes')),
    ('G', 'graph', None, _("show the revision DAG")),
] + templateopts

diffopts = [
    ('a', 'text', None, _('treat all files as text')),
    ('g', 'git', None, _('use git extended diff format')),
    ('', 'binary', None, _('generate binary diffs in git mode (default)')),
    ('', 'nodates', None, _('omit dates from diff headers'))
]

diffwsopts = [
    ('w', 'ignore-all-space', None,
     _('ignore white space when comparing lines')),
    ('b', 'ignore-space-change', None,
     _('ignore changes in the amount of white space')),
    ('B', 'ignore-blank-lines', None,
     _('ignore changes whose lines are all blank')),
    ('Z', 'ignore-space-at-eol', None,
     _('ignore changes in whitespace at EOL')),
]

diffopts2 = [
    ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
    ('p', 'show-function', None, _('show which function each change is in')),
    ('', 'reverse', None, _('produce a diff that undoes the changes')),
] + diffwsopts + [
    ('U', 'unified', '',
     _('number of lines of context to show'), _('NUM')),
147 _('number of lines of context to show'), _('NUM')),
148 ('', 'stat', None, _('output diffstat-style summary of changes')),
148 ('', 'stat', None, _('output diffstat-style summary of changes')),
149 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
149 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
150 ]
150 ]
151
151
152 mergetoolopts = [
152 mergetoolopts = [
153 ('t', 'tool', '', _('specify merge tool'), _('TOOL')),
153 ('t', 'tool', '', _('specify merge tool'), _('TOOL')),
154 ]
154 ]
155
155
156 similarityopts = [
156 similarityopts = [
157 ('s', 'similarity', '',
157 ('s', 'similarity', '',
158 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
158 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
159 ]
159 ]
160
160
161 subrepoopts = [
161 subrepoopts = [
162 ('S', 'subrepos', None,
162 ('S', 'subrepos', None,
163 _('recurse into subrepositories'))
163 _('recurse into subrepositories'))
164 ]
164 ]
165
165
166 debugrevlogopts = [
166 debugrevlogopts = [
167 ('c', 'changelog', False, _('open changelog')),
167 ('c', 'changelog', False, _('open changelog')),
168 ('m', 'manifest', False, _('open manifest')),
168 ('m', 'manifest', False, _('open manifest')),
169 ('', 'dir', '', _('open directory manifest')),
169 ('', 'dir', '', _('open directory manifest')),
170 ]
170 ]
171
171
172 # special string such that everything below this line will be ingored in the
172 # special string such that everything below this line will be ingored in the
173 # editor text
173 # editor text
174 _linebelow = "^HG: ------------------------ >8 ------------------------$"
174 _linebelow = "^HG: ------------------------ >8 ------------------------$"
175
175
176 def ishunk(x):
176 def ishunk(x):
177 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
177 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
178 return isinstance(x, hunkclasses)
178 return isinstance(x, hunkclasses)
179
179
180 def newandmodified(chunks, originalchunks):
180 def newandmodified(chunks, originalchunks):
181 newlyaddedandmodifiedfiles = set()
181 newlyaddedandmodifiedfiles = set()
182 for chunk in chunks:
182 for chunk in chunks:
183 if (ishunk(chunk) and chunk.header.isnewfile() and chunk not in
183 if (ishunk(chunk) and chunk.header.isnewfile() and chunk not in
184 originalchunks):
184 originalchunks):
185 newlyaddedandmodifiedfiles.add(chunk.header.filename())
185 newlyaddedandmodifiedfiles.add(chunk.header.filename())
186 return newlyaddedandmodifiedfiles
186 return newlyaddedandmodifiedfiles
187
187
188 def parsealiases(cmd):
188 def parsealiases(cmd):
189 return cmd.split("|")
189 return cmd.split("|")
190
190
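As an illustration of `parsealiases` above, a minimal standalone sketch (not Mercurial's module, just the same one-line idea): a command declared as `"name|alias1|alias2"` is split on `|` into its list of aliases, with the canonical name first.

```python
# Standalone sketch of parsealiases() above: a command declared as
# "name|alias1|alias2" is split on '|' into its aliases.
def parsealiases(cmd):
    return cmd.split("|")

print(parsealiases("commit|ci"))  # ['commit', 'ci']
print(parsealiases("status"))     # ['status']
```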
def setupwrapcolorwrite(ui):
    # wrap ui.write so diff output can be labeled/colorized
    def wrapwrite(orig, *args, **kw):
        label = kw.pop(r'label', '')
        for chunk, l in patch.difflabel(lambda: args):
            orig(chunk, label=label + l)

    oldwrite = ui.write
    def wrap(*args, **kwargs):
        return wrapwrite(oldwrite, *args, **kwargs)
    setattr(ui, 'write', wrap)
    return oldwrite

def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
    try:
        if usecurses:
            if testfile:
                recordfn = crecordmod.testdecorator(
                    testfile, crecordmod.testchunkselector)
            else:
                recordfn = crecordmod.chunkselector

            return crecordmod.filterpatch(ui, originalhunks, recordfn,
                                          operation)
    except crecordmod.fallbackerror as e:
        ui.warn('%s\n' % e.message)
        ui.warn(_('falling back to text mode\n'))

    return patch.filterpatch(ui, originalhunks, operation)

def recordfilter(ui, originalhunks, operation=None):
    """ Prompts the user to filter the originalhunks and return a list of
    selected hunks.
    *operation* is used to build ui messages to indicate to the user what
    kind of filtering they are doing: reverting, committing, shelving, etc.
    (see patch.filterpatch).
    """
    usecurses = crecordmod.checkcurses(ui)
    testfile = ui.config('experimental', 'crecordtest')
    oldwrite = setupwrapcolorwrite(ui)
    try:
        newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
                                          testfile, operation)
    finally:
        ui.write = oldwrite
    return newchunks, newopts

def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
             filterfn, *pats, **opts):
    opts = pycompat.byteskwargs(opts)
    if not ui.interactive():
        if cmdsuggest:
            msg = _('running non-interactively, use %s instead') % cmdsuggest
        else:
            msg = _('running non-interactively')
        raise error.Abort(msg)

    # make sure username is set before going interactive
    if not opts.get('user'):
        ui.username() # raise exception, username not provided

    def recordfunc(ui, repo, message, match, opts):
        """This is the generic record driver.

        Its job is to interactively filter local changes, and
        accordingly prepare the working directory into a state in which the
        job can be delegated to a non-interactive commit command such as
        'commit' or 'qrefresh'.

        After the actual job is done by the non-interactive command, the
        working directory is restored to its original state.

        In the end we'll record interesting changes, and everything else
        will be left in place, so the user can continue working.
        """

        checkunfinished(repo, commit=True)
        wctx = repo[None]
        merge = len(wctx.parents()) > 1
        if merge:
            raise error.Abort(_('cannot partially commit a merge '
                                '(use "hg commit" instead)'))

        status = repo.status(match=match)

        overrides = {(b'ui', b'commitsubrepos'): True}

        with repo.ui.configoverride(overrides, b'record'):
            # subrepoutil.precommit() modifies the status
            tmpstatus = scmutil.status(copymod.copy(status[0]),
                                       copymod.copy(status[1]),
                                       copymod.copy(status[2]),
                                       copymod.copy(status[3]),
                                       copymod.copy(status[4]),
                                       copymod.copy(status[5]),
                                       copymod.copy(status[6]))

            # Force allows -X subrepo to skip the subrepo.
            subs, commitsubs, newstate = subrepoutil.precommit(
                repo.ui, wctx, tmpstatus, match, force=True)
            for s in subs:
                if s in commitsubs:
                    dirtyreason = wctx.sub(s).dirtyreason(True)
                    raise error.Abort(dirtyreason)

        def fail(f, msg):
            raise error.Abort('%s: %s' % (f, msg))

        force = opts.get('force')
        if not force:
            vdirs = []
            match.explicitdir = vdirs.append
            match.bad = fail

        if not force:
            repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
        diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True,
                                         section='commands',
                                         configprefix='commit.interactive.')
        diffopts.nodates = True
        diffopts.git = True
        diffopts.showfunc = True
        originaldiff = patch.diff(repo, changes=status, opts=diffopts)
        originalchunks = patch.parsepatch(originaldiff)

        # 1. filter patch, since we are intending to apply subset of it
        try:
            chunks, newopts = filterfn(ui, originalchunks)
        except error.PatchError as err:
            raise error.Abort(_('error parsing patch: %s') % err)
        opts.update(newopts)

        # We need to keep a backup of files that have been newly added and
        # modified during the recording process because there is a previous
        # version without the edit in the workdir
        newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
        contenders = set()
        for h in chunks:
            try:
                contenders.update(set(h.files()))
            except AttributeError:
                pass

        changed = status.modified + status.added + status.removed
        newfiles = [f for f in changed if f in contenders]
        if not newfiles:
            ui.status(_('no changes to record\n'))
            return 0

        modified = set(status.modified)

        # 2. backup changed files, so we can restore them in the end

        if backupall:
            tobackup = changed
        else:
            tobackup = [f for f in newfiles if f in modified or f in
                        newlyaddedandmodifiedfiles]
        backups = {}
        if tobackup:
            backupdir = repo.vfs.join('record-backups')
            try:
                os.mkdir(backupdir)
            except OSError as err:
                if err.errno != errno.EEXIST:
                    raise
        try:
            # backup continues
            for f in tobackup:
                fd, tmpname = pycompat.mkstemp(prefix=f.replace('/', '_') + '.',
                                               dir=backupdir)
                os.close(fd)
                ui.debug('backup %r as %r\n' % (f, tmpname))
                util.copyfile(repo.wjoin(f), tmpname, copystat=True)
                backups[f] = tmpname

            fp = stringio()
            for c in chunks:
                fname = c.filename()
                if fname in backups:
                    c.write(fp)
            dopatch = fp.tell()
            fp.seek(0)

            # 2.5 optionally review / modify patch in text editor
            if opts.get('review', False):
                patchtext = (crecordmod.diffhelptext
                             + crecordmod.patchhelptext
                             + fp.read())
                reviewedpatch = ui.edit(patchtext, "",
                                        action="diff",
                                        repopath=repo.path)
                fp.truncate(0)
                fp.write(reviewedpatch)
                fp.seek(0)

            [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
            # 3a. apply filtered patch to clean repo (clean)
            if backups:
                # Equivalent to hg.revert
                m = scmutil.matchfiles(repo, backups.keys())
                mergemod.update(repo, repo.dirstate.p1(), branchmerge=False,
                                force=True, matcher=m)

            # 3b. (apply)
            if dopatch:
                try:
                    ui.debug('applying patch\n')
                    ui.debug(fp.getvalue())
                    patch.internalpatch(ui, repo, fp, 1, eolmode=None)
                except error.PatchError as err:
                    raise error.Abort(pycompat.bytestr(err))
            del fp

            # 4. We prepared working directory according to filtered
            # patch. Now is the time to delegate the job to
            # commit/qrefresh or the like!

            # Make all of the pathnames absolute.
            newfiles = [repo.wjoin(nf) for nf in newfiles]
            return commitfunc(ui, repo, *newfiles, **pycompat.strkwargs(opts))
        finally:
            # 5. finally restore backed-up files
            try:
                dirstate = repo.dirstate
                for realname, tmpname in backups.iteritems():
                    ui.debug('restoring %r to %r\n' % (tmpname, realname))

                    if dirstate[realname] == 'n':
                        # without normallookup, restoring timestamp
                        # may cause partially committed files
                        # to be treated as unmodified
                        dirstate.normallookup(realname)

                    # copystat=True here and above are a hack to trick any
                    # editors that have f open that we haven't modified them.
                    #
                    # Also note that this is racy as an editor could notice the
                    # file's mtime before we've finished writing it.
                    util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
                    os.unlink(tmpname)
                if tobackup:
                    os.rmdir(backupdir)
            except OSError:
                pass

    def recordinwlock(ui, repo, message, match, opts):
        with repo.wlock():
            return recordfunc(ui, repo, message, match, opts)

    return commit(ui, repo, recordinwlock, pats, opts)

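The backup/restore dance in `recordfunc` (steps 2 and 5 above) can be sketched as a standalone pattern. This is a simplified, hypothetical version using only the standard library, not Mercurial's `vfs`/`util` helpers: copy each file aside, run the work, and restore the originals in a `finally` block even if the work raises.

```python
import os
import shutil
import tempfile

def with_backups(paths, work):
    """Copy each file aside, run work(), then restore the originals --
    a simplified mirror of steps 2 and 5 of recordfunc() above."""
    backups = {}
    backupdir = tempfile.mkdtemp(prefix='record-backups-')
    try:
        for f in paths:
            fd, tmp = tempfile.mkstemp(prefix=os.path.basename(f) + '.',
                                       dir=backupdir)
            os.close(fd)
            shutil.copyfile(f, tmp)
            backups[f] = tmp
        return work()
    finally:
        # restore backed-up files even if work() raised
        for f, tmp in backups.items():
            shutil.copyfile(tmp, f)
            os.unlink(tmp)
        os.rmdir(backupdir)
```

The real code additionally uses `copystat=True` and `dirstate.normallookup()` to keep timestamps and dirstate consistent, which this sketch omits.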
class dirnode(object):
    """
    Represents a directory in the user's working copy, with the information
    required for the purpose of tersing its status.

    path is the path to the directory, without a trailing '/'

    statuses is a set of statuses of all files in this directory (this includes
    all the files in all the subdirectories too)

    files is a list of files which are direct children of this directory

    subdirs is a dictionary with a sub-directory name as the key and its own
    dirnode object as the value
    """

    def __init__(self, dirpath):
        self.path = dirpath
        self.statuses = set([])
        self.files = []
        self.subdirs = {}

    def _addfileindir(self, filename, status):
        """Add a file in this directory as a direct child."""
        self.files.append((filename, status))

    def addfile(self, filename, status):
        """
        Add a file to this directory or to its direct parent directory.

        If the file is not a direct child of this directory, we traverse to
        the directory of which this file is a direct child and add the file
        there.
        """

        # if the filename contains a path separator, it's not a direct
        # child of this directory
        if '/' in filename:
            subdir, filep = filename.split('/', 1)

            # does the dirnode object for subdir exist
            if subdir not in self.subdirs:
                subdirpath = pathutil.join(self.path, subdir)
                self.subdirs[subdir] = dirnode(subdirpath)

            # try adding the file in subdir
            self.subdirs[subdir].addfile(filep, status)

        else:
            self._addfileindir(filename, status)

        if status not in self.statuses:
            self.statuses.add(status)

    def iterfilepaths(self):
        """Yield (status, path) for files directly under this directory."""
        for f, st in self.files:
            yield st, pathutil.join(self.path, f)

    def tersewalk(self, terseargs):
        """
        Yield (status, path) obtained by processing the status of this
        dirnode.

        terseargs is the string of arguments passed by the user with the
        `--terse` flag.

        Following are the cases which can happen:

        1) All the files in the directory (including all the files in its
        subdirectories) share the same status and the user has asked us to
        terse that status. -> yield (status, dirpath). dirpath will end in '/'.

        2) Otherwise, we do the following:

           a) Yield (status, filepath) for all the files which are in this
              directory (only the ones in this directory, not the subdirs)

           b) Recurse the function on all the subdirectories of this
              directory
        """

        if len(self.statuses) == 1:
            onlyst = self.statuses.pop()

            # Making sure we terse only when the status abbreviation is
            # passed as terse argument
            if onlyst in terseargs:
                yield onlyst, self.path + '/'
                return

        # add the files to status list
        for st, fpath in self.iterfilepaths():
            yield st, fpath

        # recurse on the subdirs
        for dirobj in self.subdirs.values():
            for st, fpath in dirobj.tersewalk(terseargs):
                yield st, fpath

def tersedir(statuslist, terseargs):
    """
    Terse the status if all the files in a directory share the same status.

    statuslist is a scmutil.status() object which contains a list of files for
    each status.
    terseargs is the string passed by the user as the argument to the `--terse`
    flag.

    The function makes a tree of objects of dirnode class, and at each node it
    stores the information required to know whether we can terse a certain
    directory or not.
    """
    # the order matters here as that is used to produce the final list
    allst = ('m', 'a', 'r', 'd', 'u', 'i', 'c')

    # checking the argument validity
    for s in pycompat.bytestr(terseargs):
        if s not in allst:
            raise error.Abort(_("'%s' not recognized") % s)

    # creating a dirnode object for the root of the repo
    rootobj = dirnode('')
    pstatus = ('modified', 'added', 'deleted', 'clean', 'unknown',
               'ignored', 'removed')

    tersedict = {}
    for attrname in pstatus:
        statuschar = attrname[0:1]
        for f in getattr(statuslist, attrname):
            rootobj.addfile(f, statuschar)
        tersedict[statuschar] = []

    # we won't be tersing the root dir, so add files in it
    for st, fpath in rootobj.iterfilepaths():
        tersedict[st].append(fpath)

    # process each sub-directory and build tersedict
    for subdir in rootobj.subdirs.values():
        for st, f in subdir.tersewalk(terseargs):
            tersedict[st].append(f)

    tersedlist = []
    for st in allst:
        tersedict[st].sort()
        tersedlist.append(tersedict[st])

    return tersedlist

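The tersing logic above can be illustrated with a small standalone sketch. This is hypothetical and simplified to a single directory level (unlike the recursive `dirnode` tree): a directory collapses to one `dir/` entry only when every file below it shares one status and that status was requested in the terse arguments.

```python
from collections import defaultdict

def terse_one_level(files, terseargs):
    """files: list of (status, path) pairs. Collapse a top-level directory
    to a single 'dir/' entry when all files under it share one status that
    appears in terseargs. A simplified sketch, not the real tersedir()."""
    bydir = defaultdict(list)
    for st, path in files:
        # group by top-level directory; None means the file is at the root
        top = path.split('/', 1)[0] if '/' in path else None
        bydir[top].append((st, path))
    out = []
    for top, entries in bydir.items():
        statuses = {st for st, _ in entries}
        if top is not None and len(statuses) == 1:
            only = statuses.pop()
            if only in terseargs:
                out.append((only, top + '/'))
                continue
        out.extend(entries)
    return out

# two added files under docs/ collapse; the modified file stays itself
print(terse_one_level([('a', 'docs/x'), ('a', 'docs/y'), ('m', 'README')],
                      'a'))
# [('a', 'docs/'), ('m', 'README')]
```

A directory with mixed statuses is left expanded, which matches case 2 of `tersewalk` above.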
def _commentlines(raw):
    '''Surround lines with a comment char and a new line'''
    lines = raw.splitlines()
    commentedlines = ['# %s' % line for line in lines]
    return '\n'.join(commentedlines) + '\n'

def _conflictsmsg(repo):
    mergestate = mergemod.mergestate.read(repo)
    if not mergestate.active():
        return

    m = scmutil.match(repo[None])
    unresolvedlist = [f for f in mergestate.unresolved() if m(f)]
    if unresolvedlist:
        mergeliststr = '\n'.join(
            ['    %s' % util.pathto(repo.root, encoding.getcwd(), path)
             for path in sorted(unresolvedlist)])
        msg = _('''Unresolved merge conflicts:

%s

To mark files as resolved: hg resolve --mark FILE''') % mergeliststr
    else:
        msg = _('No unresolved merge conflicts.')

    return _commentlines(msg)

def _helpmessage(continuecmd, abortcmd):
    msg = _('To continue: %s\n'
            'To abort: %s') % (continuecmd, abortcmd)
    return _commentlines(msg)

def _rebasemsg():
    return _helpmessage('hg rebase --continue', 'hg rebase --abort')

def _histeditmsg():
    return _helpmessage('hg histedit --continue', 'hg histedit --abort')

def _unshelvemsg():
    return _helpmessage('hg unshelve --continue', 'hg unshelve --abort')

def _graftmsg():
    return _helpmessage('hg graft --continue', 'hg graft --abort')

def _mergemsg():
    return _helpmessage('hg commit', 'hg merge --abort')

def _bisectmsg():
    msg = _('To mark the changeset good: hg bisect --good\n'
            'To mark the changeset bad: hg bisect --bad\n'
            'To abort: hg bisect --reset\n')
    return _commentlines(msg)

def fileexistspredicate(filename):
    return lambda repo: repo.vfs.exists(filename)

def _mergepredicate(repo):
    return len(repo[None].parents()) > 1

STATES = (
    # (state, predicate to detect state, helpful message function)
    ('histedit', fileexistspredicate('histedit-state'), _histeditmsg),
    ('bisect', fileexistspredicate('bisect.state'), _bisectmsg),
    ('graft', fileexistspredicate('graftstate'), _graftmsg),
    ('unshelve', fileexistspredicate('shelvedstate'), _unshelvemsg),
    ('rebase', fileexistspredicate('rebasestate'), _rebasemsg),
    # The merge state needs to be last because some of the other unfinished
    # states may also be in a merge or update state (e.g. rebase, histedit,
    # graft). We want those to have priority.
    ('merge', _mergepredicate, _mergemsg),
)

def _getrepostate(repo):
    # experimental config: commands.status.skipstates
    skip = set(repo.ui.configlist('commands', 'status.skipstates'))
    for state, statedetectionpredicate, msgfn in STATES:
        if state in skip:
            continue
        if statedetectionpredicate(repo):
            return (state, statedetectionpredicate, msgfn)

def morestatus(repo, fm):
    statetuple = _getrepostate(repo)
    label = 'status.morestatus'
    if statetuple:
        state, statedetectionpredicate, helpfulmsg = statetuple
        statemsg = _('The repository is in an unfinished *%s* state.') % state
        fm.plain('%s\n' % _commentlines(statemsg), label=label)
        conmsg = _conflictsmsg(repo)
        if conmsg:
            fm.plain('%s\n' % conmsg, label=label)
        if helpfulmsg:
            helpmsg = helpfulmsg()
            fm.plain('%s\n' % helpmsg, label=label)

def findpossible(cmd, table, strict=False):
    """
    Return cmd -> (aliases, command table entry)
    for each matching command.
    Return debug commands (or their aliases) only if no normal command matches.
    """
    choice = {}
    debugchoice = {}

    if cmd in table:
        # short-circuit exact matches, "log" alias beats "log|history"
        keys = [cmd]
    else:
        keys = table.keys()

    allcmds = []
    for e in keys:
        aliases = parsealiases(e)
        allcmds.extend(aliases)
        found = None
        if cmd in aliases:
            found = cmd
        elif not strict:
            for a in aliases:
                if a.startswith(cmd):
                    found = a
                    break
        if found is not None:
            if aliases[0].startswith("debug") or found.startswith("debug"):
                debugchoice[found] = (aliases, table[e])
            else:
                choice[found] = (aliases, table[e])

    if not choice and debugchoice:
        choice = debugchoice

    return choice, allcmds

def findcmd(cmd, table, strict=True):
    """Return (aliases, command table entry) for command string."""
    choice, allcmds = findpossible(cmd, table, strict)

    if cmd in choice:
        return choice[cmd]

    if len(choice) > 1:
        clist = sorted(choice)
        raise error.AmbiguousCommand(cmd, clist)

    if choice:
        return list(choice.values())[0]

    raise error.UnknownCommand(cmd, allcmds)

def changebranch(ui, repo, revs, label):
    """ Change the branch name of given revs to label """

    with repo.wlock(), repo.lock(), repo.transaction('branches'):
        # abort in case of uncommitted merge or dirty wdir
        bailifchanged(repo)
        revs = scmutil.revrange(repo, revs)
        if not revs:
            raise error.Abort("empty revision set")
        roots = repo.revs('roots(%ld)', revs)
        if len(roots) > 1:
            raise error.Abort(_("cannot change branch of non-linear revisions"))
        rewriteutil.precheck(repo, revs, 'change branch of')

        root = repo[roots.first()]
        rpb = {parent.branch() for parent in root.parents()}
        if label not in rpb and label in repo.branchmap():
            raise error.Abort(_("a branch of the same name already exists"))

        if repo.revs('obsolete() and %ld', revs):
            raise error.Abort(_("cannot change branch of an obsolete "
                                "changeset"))

        # make sure only topological heads
        if repo.revs('heads(%ld) - head()', revs):
            raise error.Abort(_("cannot change branch in middle of a stack"))

        replacements = {}
        # avoid import cycle mercurial.cmdutil -> mercurial.context ->
        # mercurial.subrepo -> mercurial.cmdutil
        from . import context
        for rev in revs:
            ctx = repo[rev]
            oldbranch = ctx.branch()
            # skip revs that are already on the target branch
            if oldbranch == label:
                continue

            def filectxfn(repo, newctx, path):
                try:
                    return ctx[path]
                except error.ManifestLookupError:
                    return None

            ui.debug("changing branch of '%s' from '%s' to '%s'\n"
                     % (hex(ctx.node()), oldbranch, label))
            extra = ctx.extra()
            extra['branch_change'] = hex(ctx.node())
            # While changing the branch of a set of linear commits, make sure
            # that we base our commits on the new parent rather than the old
            # parent, which was obsoleted while changing the branch
            p1 = ctx.p1().node()
            p2 = ctx.p2().node()
            if p1 in replacements:
                p1 = replacements[p1][0]
            if p2 in replacements:
                p2 = replacements[p2][0]

            mc = context.memctx(repo, (p1, p2),
                                ctx.description(),
                                ctx.files(),
                                filectxfn,
                                user=ctx.user(),
                                date=ctx.date(),
                                extra=extra,
                                branch=label)

            newnode = repo.commitctx(mc)
            replacements[ctx.node()] = (newnode,)
            ui.debug('new node id is %s\n' % hex(newnode))

        # create obsmarkers and move bookmarks
        scmutil.cleanupnodes(repo, replacements, 'branch-change', fixphase=True)

        # move the working copy too
        wctx = repo[None]
        # in-progress merge is a bit too complex for now.
        if len(wctx.parents()) == 1:
            newid = replacements.get(wctx.p1().node())
            if newid is not None:
                # avoid import cycle mercurial.cmdutil -> mercurial.hg ->
                # mercurial.cmdutil
                from . import hg
                hg.update(repo, newid[0], quietempty=True)

        ui.status(_("changed branch on %d changesets\n") % len(replacements))

def findrepo(p):
    while not os.path.isdir(os.path.join(p, ".hg")):
        oldp, p = p, os.path.dirname(p)
        if p == oldp:
            return None

    return p

def bailifchanged(repo, merge=True, hint=None):
    """ enforce the precondition that the working directory must be clean.

    'merge' can be set to false if a pending uncommitted merge should be
    ignored (such as when 'update --check' runs).

    'hint' is the usual hint given to the Abort exception.
    """

    if merge and repo.dirstate.p2() != nullid:
        raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
    modified, added, removed, deleted = repo.status()[:4]
    if modified or added or removed or deleted:
        raise error.Abort(_('uncommitted changes'), hint=hint)
    ctx = repo[None]
    for s in sorted(ctx.substate):
        ctx.sub(s).bailifchanged(hint=hint)

def logmessage(ui, opts):
    """ get the log message according to the -m and -l options """
    message = opts.get('message')
    logfile = opts.get('logfile')

    if message and logfile:
        raise error.Abort(_('options --message and --logfile are mutually '
                            'exclusive'))
    if not message and logfile:
        try:
            if isstdiofilename(logfile):
                message = ui.fin.read()
            else:
                message = '\n'.join(util.readfile(logfile).splitlines())
        except IOError as inst:
            raise error.Abort(_("can't read commit message '%s': %s") %
                              (logfile, encoding.strtolocal(inst.strerror)))
    return message

def mergeeditform(ctxorbool, baseformname):
    """return appropriate editform name (referencing a committemplate)

    'ctxorbool' is either a ctx to be committed, or a bool indicating whether
    a merge is being committed.

    This returns baseformname with '.merge' appended if it is a merge,
    otherwise '.normal' is appended.
    """
    if isinstance(ctxorbool, bool):
        if ctxorbool:
            return baseformname + ".merge"
    elif len(ctxorbool.parents()) > 1:
        return baseformname + ".merge"

    return baseformname + ".normal"

def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
                    editform='', **opts):
    """get appropriate commit message editor according to '--edit' option

    'finishdesc' is a function to be called with the edited commit message
    (= 'description' of the new changeset) just after editing, but
    before it is checked for emptiness. It should return the actual text
    to be stored into history. This allows the description to be changed
    before storing.

    'extramsg' is an extra message to be shown in the editor instead of the
    'Leave message empty to abort commit' line. The 'HG: ' prefix and EOL
    are added automatically.

    'editform' is a dot-separated list of names, to distinguish
    the purpose of commit text editing.

    'getcommiteditor' returns 'commitforceeditor' regardless of
    'edit' if one of 'finishdesc' or 'extramsg' is specified, because
    they are specific to MQ usage.
    """
    if edit or finishdesc or extramsg:
        return lambda r, c, s: commitforceeditor(r, c, s,
                                                 finishdesc=finishdesc,
                                                 extramsg=extramsg,
                                                 editform=editform)
    elif editform:
        return lambda r, c, s: commiteditor(r, c, s, editform=editform)
    else:
        return commiteditor

def _escapecommandtemplate(tmpl):
    parts = []
    for typ, start, end in templater.scantemplate(tmpl, raw=True):
        if typ == b'string':
            parts.append(stringutil.escapestr(tmpl[start:end]))
        else:
            parts.append(tmpl[start:end])
    return b''.join(parts)

def rendercommandtemplate(ui, tmpl, props):
    r"""Expand a literal template 'tmpl' in a way suitable for command line

    '\' in outermost string is not taken as an escape character because it
    is a directory separator on Windows.

    >>> from . import ui as uimod
    >>> ui = uimod.ui()
    >>> rendercommandtemplate(ui, b'c:\\{path}', {b'path': b'foo'})
    'c:\\foo'
    >>> rendercommandtemplate(ui, b'{"c:\\{path}"}', {'path': b'foo'})
    'c:{path}'
    """
    if not tmpl:
        return tmpl
    t = formatter.maketemplater(ui, _escapecommandtemplate(tmpl))
    return t.renderdefault(props)

def rendertemplate(ctx, tmpl, props=None):
    """Expand a literal template 'tmpl' byte-string against one changeset

    Each props item must be a stringify-able value or a callable returning
    such a value, i.e. no bare list nor dict should be passed.
    """
    repo = ctx.repo()
    tres = formatter.templateresources(repo.ui, repo)
    t = formatter.maketemplater(repo.ui, tmpl, defaults=templatekw.keywords,
                                resources=tres)
    mapping = {'ctx': ctx}
    if props:
        mapping.update(props)
    return t.renderdefault(mapping)

def _buildfntemplate(pat, total=None, seqno=None, revwidth=None, pathname=None):
    r"""Convert old-style filename format string to template string

    >>> _buildfntemplate(b'foo-%b-%n.patch', seqno=0)
    'foo-{reporoot|basename}-{seqno}.patch'
    >>> _buildfntemplate(b'%R{tags % "{tag}"}%H')
    '{rev}{tags % "{tag}"}{node}'

    '\' in outermost strings has to be escaped because it is a directory
    separator on Windows:

    >>> _buildfntemplate(b'c:\\tmp\\%R\\%n.patch', seqno=0)
    'c:\\\\tmp\\\\{rev}\\\\{seqno}.patch'
    >>> _buildfntemplate(b'\\\\foo\\bar.patch')
    '\\\\\\\\foo\\\\bar.patch'
    >>> _buildfntemplate(b'\\{tags % "{tag}"}')
    '\\\\{tags % "{tag}"}'

    but inner strings follow the template rules (i.e. '\' is taken as an
    escape character):

    >>> _buildfntemplate(br'{"c:\tmp"}', seqno=0)
    '{"c:\\tmp"}'
    """
    expander = {
        b'H': b'{node}',
        b'R': b'{rev}',
        b'h': b'{node|short}',
        b'm': br'{sub(r"[^\w]", "_", desc|firstline)}',
        b'r': b'{if(revwidth, pad(rev, revwidth, "0", left=True), rev)}',
        b'%': b'%',
        b'b': b'{reporoot|basename}',
    }
    if total is not None:
        expander[b'N'] = b'{total}'
    if seqno is not None:
        expander[b'n'] = b'{seqno}'
    if total is not None and seqno is not None:
        expander[b'n'] = b'{pad(seqno, total|stringify|count, "0", left=True)}'
    if pathname is not None:
        expander[b's'] = b'{pathname|basename}'
        expander[b'd'] = b'{if(pathname|dirname, pathname|dirname, ".")}'
        expander[b'p'] = b'{pathname}'

    newname = []
    for typ, start, end in templater.scantemplate(pat, raw=True):
        if typ != b'string':
            newname.append(pat[start:end])
            continue
        i = start
        while i < end:
            n = pat.find(b'%', i, end)
            if n < 0:
                newname.append(stringutil.escapestr(pat[i:end]))
                break
            newname.append(stringutil.escapestr(pat[i:n]))
            if n + 2 > end:
                raise error.Abort(_("incomplete format spec in output "
                                    "filename"))
            c = pat[n + 1:n + 2]
            i = n + 2
            try:
                newname.append(expander[c])
            except KeyError:
                raise error.Abort(_("invalid format spec '%%%s' in output "
                                    "filename") % c)
    return ''.join(newname)

def makefilename(ctx, pat, **props):
    if not pat:
        return pat
    tmpl = _buildfntemplate(pat, **props)
    # BUG: alias expansion shouldn't be made against template fragments
    # rewritten from %-format strings, but we have no easy way to partially
    # disable the expansion.
    return rendertemplate(ctx, tmpl, pycompat.byteskwargs(props))

def isstdiofilename(pat):
    """True if the given pat looks like a filename denoting stdin/stdout"""
    return not pat or pat == '-'

class _unclosablefile(object):
    def __init__(self, fp):
        self._fp = fp

    def close(self):
        pass

    def __iter__(self):
        return iter(self._fp)

    def __getattr__(self, attr):
        return getattr(self._fp, attr)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        pass

def makefileobj(ctx, pat, mode='wb', **props):
    writable = mode not in ('r', 'rb')

    if isstdiofilename(pat):
        repo = ctx.repo()
        if writable:
            fp = repo.ui.fout
        else:
            fp = repo.ui.fin
        return _unclosablefile(fp)
    fn = makefilename(ctx, pat, **props)
    return open(fn, mode)

def openstorage(repo, cmd, file_, opts, returnrevlog=False):
    """opens the changelog, manifest, a filelog or a given revlog"""
    cl = opts['changelog']
    mf = opts['manifest']
    dir = opts['dir']
    msg = None
    if cl and mf:
        msg = _('cannot specify --changelog and --manifest at the same time')
    elif cl and dir:
        msg = _('cannot specify --changelog and --dir at the same time')
    elif cl or mf or dir:
        if file_:
            msg = _('cannot specify filename with --changelog or --manifest')
        elif not repo:
            msg = _('cannot specify --changelog or --manifest or --dir '
                    'without a repository')
    if msg:
        raise error.Abort(msg)

    r = None
    if repo:
        if cl:
            r = repo.unfiltered().changelog
        elif dir:
            if 'treemanifest' not in repo.requirements:
                raise error.Abort(_("--dir can only be used on repos with "
                                    "treemanifest enabled"))
            if not dir.endswith('/'):
                dir = dir + '/'
            dirlog = repo.manifestlog.getstorage(dir)
            if len(dirlog):
                r = dirlog
        elif mf:
            r = repo.manifestlog.getstorage(b'')
        elif file_:
            filelog = repo.file(file_)
            if len(filelog):
                r = filelog

    # Not all storage may be revlogs. If requested, try to return an actual
    # revlog instance.
    if returnrevlog:
        if isinstance(r, revlog.revlog):
            pass
        elif util.safehasattr(r, '_revlog'):
            r = r._revlog
        elif r is not None:
            raise error.Abort(_('%r does not appear to be a revlog') % r)

    if not r:
        if not returnrevlog:
            raise error.Abort(_('cannot give path to non-revlog'))

        if not file_:
            raise error.CommandError(cmd, _('invalid arguments'))
        if not os.path.isfile(file_):
            raise error.Abort(_("revlog '%s' not found") % file_)
        r = revlog.revlog(vfsmod.vfs(encoding.getcwd(), audit=False),
                          file_[:-2] + ".i")
    return r

def openrevlog(repo, cmd, file_, opts):
    """Obtain a revlog backing storage of an item.
1139
1139
1140 This is similar to ``openstorage()`` except it always returns a revlog.
1140 This is similar to ``openstorage()`` except it always returns a revlog.
1141
1141
1142 In most cases, a caller cares about the main storage object - not the
1142 In most cases, a caller cares about the main storage object - not the
1143 revlog backing it. Therefore, this function should only be used by code
1143 revlog backing it. Therefore, this function should only be used by code
1144 that needs to examine low-level revlog implementation details. e.g. debug
1144 that needs to examine low-level revlog implementation details. e.g. debug
1145 commands.
1145 commands.
1146 """
1146 """
1147 return openstorage(repo, cmd, file_, opts, returnrevlog=True)
1147 return openstorage(repo, cmd, file_, opts, returnrevlog=True)

def copy(ui, repo, pats, opts, rename=False):
    # called with the repo lock held
    #
    # hgsep => pathname that uses "/" to separate directories
    # ossep => pathname that uses os.sep to separate directories
    cwd = repo.getcwd()
    targets = {}
    after = opts.get("after")
    dryrun = opts.get("dry_run")
    wctx = repo[None]

    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
    def walkpat(pat):
        srcs = []
        if after:
            badstates = '?'
        else:
            badstates = '?r'
        m = scmutil.match(wctx, [pat], opts, globbed=True)
        for abs in wctx.walk(m):
            state = repo.dirstate[abs]
            rel = uipathfn(abs)
            exact = m.exact(abs)
            if state in badstates:
                if exact and state == '?':
                    ui.warn(_('%s: not copying - file is not managed\n') % rel)
                if exact and state == 'r':
                    ui.warn(_('%s: not copying - file has been marked for'
                              ' remove\n') % rel)
                continue
            # abs: hgsep
            # rel: ossep
            srcs.append((abs, rel, exact))
        return srcs

    # abssrc: hgsep
    # relsrc: ossep
    # otarget: ossep
    def copyfile(abssrc, relsrc, otarget, exact):
        abstarget = pathutil.canonpath(repo.root, cwd, otarget)
        if '/' in abstarget:
            # We cannot normalize abstarget itself, this would prevent
            # case only renames, like a => A.
            abspath, absname = abstarget.rsplit('/', 1)
            abstarget = repo.dirstate.normalize(abspath) + '/' + absname
        reltarget = repo.pathto(abstarget, cwd)
        target = repo.wjoin(abstarget)
        src = repo.wjoin(abssrc)
        state = repo.dirstate[abstarget]

        scmutil.checkportable(ui, abstarget)

        # check for collisions
        prevsrc = targets.get(abstarget)
        if prevsrc is not None:
            ui.warn(_('%s: not overwriting - %s collides with %s\n') %
                    (reltarget, repo.pathto(abssrc, cwd),
                     repo.pathto(prevsrc, cwd)))
            return True # report a failure

        # check for overwrites
        exists = os.path.lexists(target)
        samefile = False
        if exists and abssrc != abstarget:
            if (repo.dirstate.normalize(abssrc) ==
                repo.dirstate.normalize(abstarget)):
                if not rename:
                    ui.warn(_("%s: can't copy - same file\n") % reltarget)
                    return True # report a failure
                exists = False
                samefile = True

        if not after and exists or after and state in 'mn':
            if not opts['force']:
                if state in 'mn':
                    msg = _('%s: not overwriting - file already committed\n')
                    if after:
                        flags = '--after --force'
                    else:
                        flags = '--force'
                    if rename:
                        hint = _("('hg rename %s' to replace the file by "
                                 'recording a rename)\n') % flags
                    else:
                        hint = _("('hg copy %s' to replace the file by "
                                 'recording a copy)\n') % flags
                else:
                    msg = _('%s: not overwriting - file exists\n')
                    if rename:
                        hint = _("('hg rename --after' to record the rename)\n")
                    else:
                        hint = _("('hg copy --after' to record the copy)\n")
                ui.warn(msg % reltarget)
                ui.warn(hint)
                return True # report a failure

        if after:
            if not exists:
                if rename:
                    ui.warn(_('%s: not recording move - %s does not exist\n') %
                            (relsrc, reltarget))
                else:
                    ui.warn(_('%s: not recording copy - %s does not exist\n') %
                            (relsrc, reltarget))
                return True # report a failure
        elif not dryrun:
            try:
                if exists:
                    os.unlink(target)
                targetdir = os.path.dirname(target) or '.'
                if not os.path.isdir(targetdir):
                    os.makedirs(targetdir)
                if samefile:
                    tmp = target + "~hgrename"
                    os.rename(src, tmp)
                    os.rename(tmp, target)
                else:
                    # Preserve stat info on renames, not on copies; this matches
                    # Linux CLI behavior.
                    util.copyfile(src, target, copystat=rename)
                srcexists = True
            except IOError as inst:
                if inst.errno == errno.ENOENT:
                    ui.warn(_('%s: deleted in working directory\n') % relsrc)
                    srcexists = False
                else:
                    ui.warn(_('%s: cannot copy - %s\n') %
                            (relsrc, encoding.strtolocal(inst.strerror)))
                    return True # report a failure

        if ui.verbose or not exact:
            if rename:
                ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
            else:
                ui.status(_('copying %s to %s\n') % (relsrc, reltarget))

        targets[abstarget] = abssrc

        # fix up dirstate
        scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
                             dryrun=dryrun, cwd=cwd)
        if rename and not dryrun:
            if not after and srcexists and not samefile:
                rmdir = repo.ui.configbool('experimental', 'removeemptydirs')
                repo.wvfs.unlinkpath(abssrc, rmdir=rmdir)
            wctx.forget([abssrc])

    # pat: ossep
    # dest: ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathfn(pat, dest, srcs):
        if os.path.isdir(pat):
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            abspfx = util.localpath(abspfx)
            if destdirexists:
                striplen = len(os.path.split(abspfx)[0])
            else:
                striplen = len(abspfx)
            if striplen:
                striplen += len(pycompat.ossep)
            res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
        elif destdirexists:
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            res = lambda p: dest
        return res

    # pat: ossep
    # dest: ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathafterfn(pat, dest, srcs):
        if matchmod.patkind(pat):
            # a mercurial pattern
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            if len(abspfx) < len(srcs[0][0]):
                # A directory. Either the target path contains the last
                # component of the source path or it does not.
                def evalpath(striplen):
                    score = 0
                    for s in srcs:
                        t = os.path.join(dest, util.localpath(s[0])[striplen:])
                        if os.path.lexists(t):
                            score += 1
                    return score

                abspfx = util.localpath(abspfx)
                striplen = len(abspfx)
                if striplen:
                    striplen += len(pycompat.ossep)
                if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
                    score = evalpath(striplen)
                    striplen1 = len(os.path.split(abspfx)[0])
                    if striplen1:
                        striplen1 += len(pycompat.ossep)
                    if evalpath(striplen1) > score:
                        striplen = striplen1
                res = lambda p: os.path.join(dest,
                                             util.localpath(p)[striplen:])
            else:
                # a file
                if destdirexists:
                    res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
                else:
                    res = lambda p: dest
        return res

    pats = scmutil.expandpats(pats)
    if not pats:
        raise error.Abort(_('no source or destination specified'))
    if len(pats) == 1:
        raise error.Abort(_('no destination specified'))
    dest = pats.pop()
    destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
    if not destdirexists:
        if len(pats) > 1 or matchmod.patkind(pats[0]):
            raise error.Abort(_('with multiple sources, destination must be an '
                                'existing directory'))
        if util.endswithsep(dest):
            raise error.Abort(_('destination %s is not a directory') % dest)

    tfn = targetpathfn
    if after:
        tfn = targetpathafterfn
    copylist = []
    for pat in pats:
        srcs = walkpat(pat)
        if not srcs:
            continue
        copylist.append((tfn(pat, dest, srcs), srcs))
    if not copylist:
        raise error.Abort(_('no files to copy'))

    errors = 0
    for targetpath, srcs in copylist:
        for abssrc, relsrc, exact in srcs:
            if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
                errors += 1

    return errors != 0
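# The badstates filtering inside walkpat above can be sketched in
# isolation: with --after only unknown files ('?') are skipped, otherwise
# both unknown and removed ('r') files are. A minimal sketch with
# hypothetical dirstate data (not part of this module):

```python
# Sketch of walkpat's badstates filtering. Dirstate codes:
# '?' = unknown, 'r' = marked for removal, 'n' = normal/tracked.
# The file names and states below are made-up illustration data.
def filter_sources(states, after):
    # with --after the source may already be gone, so removed
    # files are still acceptable copy/rename sources
    badstates = '?' if after else '?r'
    return [f for f, st in sorted(states.items()) if st not in badstates]

states = {'a.txt': 'n', 'b.txt': '?', 'c.txt': 'r'}
without_after = filter_sources(states, after=False)  # only tracked files
with_after = filter_sources(states, after=True)      # removed files allowed
```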

## facility to let extension process additional data into an import patch
# list of identifiers to be executed in order
extrapreimport = [] # run before commit
extrapostimport = [] # run after commit
# mapping from identifier to actual import function
#
# 'preimport' are run before the commit is made and are provided the following
# arguments:
# - repo: the localrepository instance,
# - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
# - extra: the future extra dictionary of the changeset, please mutate it,
# - opts: the import options.
# XXX ideally, we would just pass a ctx ready to be computed, that would allow
# mutation of the in-memory commit and more. Feel free to rework the code to
# get there.
extrapreimportmap = {}
# 'postimport' are run after the commit is made and are provided the following
# argument:
# - ctx: the changectx created by import.
extrapostimportmap = {}
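# The registration protocol described in the comments above can be
# sketched with plain dictionaries. The hook name 'recordsource' and its
# behavior are hypothetical illustrations, not part of Mercurial:

```python
# Minimal sketch of the preimport hook protocol: an extension appends an
# identifier to the list and maps it to a function taking
# (repo, patchdata, extra, opts); the hook mutates 'extra' in place.
extrapreimport = []
extrapreimportmap = {}

def recordsource(repo, patchdata, extra, opts):
    # hypothetical hook: copy the original node id from the patch
    # header into the new changeset's extra dict, if present
    nodeid = patchdata.get('nodeid')
    if nodeid:
        extra['source'] = nodeid

extrapreimport.append('recordsource')
extrapreimportmap['recordsource'] = recordsource

# at import time, each registered hook runs in order before the commit:
extra = {}
for idfunc in extrapreimport:
    extrapreimportmap[idfunc](None, {'nodeid': 'abc123'}, extra, {})
```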

def tryimportone(ui, repo, patchdata, parents, opts, msgs, updatefunc):
    """Utility function used by commands.import to import a single patch

    This function is explicitly defined here to help the evolve extension to
    wrap this part of the import logic.

    The API is currently a bit ugly because it is a simple code translation
    from the import command. Feel free to make it better.

    :patchdata: a dictionary containing parsed patch data (such as from
                ``patch.extract()``)
    :parents: nodes that will be parents of the created commit
    :opts: the full dict of options passed to the import command
    :msgs: list to save commit message to.
           (used in case we need to save it when failing)
    :updatefunc: a function that updates a repo to a given node
                 updatefunc(<repo>, <node>)
    """
    # avoid cycle context -> subrepo -> cmdutil
    from . import context

    tmpname = patchdata.get('filename')
    message = patchdata.get('message')
    user = opts.get('user') or patchdata.get('user')
    date = opts.get('date') or patchdata.get('date')
    branch = patchdata.get('branch')
    nodeid = patchdata.get('nodeid')
    p1 = patchdata.get('p1')
    p2 = patchdata.get('p2')

    nocommit = opts.get('no_commit')
    importbranch = opts.get('import_branch')
    update = not opts.get('bypass')
    strip = opts["strip"]
    prefix = opts["prefix"]
    sim = float(opts.get('similarity') or 0)

    if not tmpname:
        return None, None, False

    rejects = False

    cmdline_message = logmessage(ui, opts)
    if cmdline_message:
        # pickup the cmdline msg
        message = cmdline_message
    elif message:
        # pickup the patch msg
        message = message.strip()
    else:
        # launch the editor
        message = None
    ui.debug('message:\n%s\n' % (message or ''))

    if len(parents) == 1:
        parents.append(repo[nullid])
    if opts.get('exact'):
        if not nodeid or not p1:
            raise error.Abort(_('not a Mercurial patch'))
        p1 = repo[p1]
        p2 = repo[p2 or nullid]
    elif p2:
        try:
            p1 = repo[p1]
            p2 = repo[p2]
            # Without any options, consider p2 only if the
            # patch is being applied on top of the recorded
            # first parent.
            if p1 != parents[0]:
                p1 = parents[0]
                p2 = repo[nullid]
        except error.RepoError:
            p1, p2 = parents
        if p2.node() == nullid:
            ui.warn(_("warning: import the patch as a normal revision\n"
                      "(use --exact to import the patch as a merge)\n"))
    else:
        p1, p2 = parents

    n = None
    if update:
        if p1 != parents[0]:
            updatefunc(repo, p1.node())
        if p2 != parents[1]:
            repo.setparents(p1.node(), p2.node())

        if opts.get('exact') or importbranch:
            repo.dirstate.setbranch(branch or 'default')

        partial = opts.get('partial', False)
        files = set()
        try:
            patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
                        files=files, eolmode=None, similarity=sim / 100.0)
        except error.PatchError as e:
            if not partial:
                raise error.Abort(pycompat.bytestr(e))
            if partial:
                rejects = True

        files = list(files)
        if nocommit:
            if message:
                msgs.append(message)
        else:
            if opts.get('exact') or p2:
                # If you got here, you either use --force and know what
                # you are doing or used --exact or a merge patch while
                # being updated to its first parent.
                m = None
            else:
                m = scmutil.matchfiles(repo, files or [])
            editform = mergeeditform(repo[None], 'import.normal')
            if opts.get('exact'):
                editor = None
            else:
                editor = getcommiteditor(editform=editform,
                                         **pycompat.strkwargs(opts))
            extra = {}
            for idfunc in extrapreimport:
                extrapreimportmap[idfunc](repo, patchdata, extra, opts)
            overrides = {}
            if partial:
                overrides[('ui', 'allowemptycommit')] = True
            with repo.ui.configoverride(overrides, 'import'):
                n = repo.commit(message, user,
                                date, match=m,
                                editor=editor, extra=extra)
            for idfunc in extrapostimport:
                extrapostimportmap[idfunc](repo[n])
    else:
        if opts.get('exact') or importbranch:
            branch = branch or 'default'
        else:
            branch = p1.branch()
        store = patch.filestore()
        try:
            files = set()
            try:
                patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
                                files, eolmode=None)
            except error.PatchError as e:
                raise error.Abort(stringutil.forcebytestr(e))
            if opts.get('exact'):
                editor = None
            else:
                editor = getcommiteditor(editform='import.bypass')
            memctx = context.memctx(repo, (p1.node(), p2.node()),
                                    message,
                                    files=files,
                                    filectxfn=store,
                                    user=user,
                                    date=date,
                                    branch=branch,
                                    editor=editor)
            n = memctx.commit()
        finally:
            store.close()
    if opts.get('exact') and nocommit:
        # --exact with --no-commit is still useful in that it does merge
        # and branch bits
        ui.warn(_("warning: can't check exact import with --no-commit\n"))
    elif opts.get('exact') and (not n or hex(n) != nodeid):
        raise error.Abort(_('patch is damaged or loses information'))
    msg = _('applied to working directory')
    if n:
        # i18n: refers to a short changeset id
        msg = _('created %s') % short(n)
    return msg, n, rejects

# facility to let extensions include additional data in an exported patch
# list of identifiers to be executed in order
extraexport = []
# mapping from identifier to actual export function
# function has to return a string to be added to the header or None
# it is given two arguments (sequencenumber, changectx)
extraexportmap = {}
1594
1594
1595 def _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts):
1595 def _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts):
1596 node = scmutil.binnode(ctx)
1596 node = scmutil.binnode(ctx)
1597 parents = [p.node() for p in ctx.parents() if p]
1597 parents = [p.node() for p in ctx.parents() if p]
1598 branch = ctx.branch()
1598 branch = ctx.branch()
1599 if switch_parent:
1599 if switch_parent:
1600 parents.reverse()
1600 parents.reverse()
1601
1601
1602 if parents:
1602 if parents:
1603 prev = parents[0]
1603 prev = parents[0]
1604 else:
1604 else:
1605 prev = nullid
1605 prev = nullid
1606
1606
1607 fm.context(ctx=ctx)
1607 fm.context(ctx=ctx)
1608 fm.plain('# HG changeset patch\n')
1608 fm.plain('# HG changeset patch\n')
1609 fm.write('user', '# User %s\n', ctx.user())
1609 fm.write('user', '# User %s\n', ctx.user())
1610 fm.plain('# Date %d %d\n' % ctx.date())
1610 fm.plain('# Date %d %d\n' % ctx.date())
1611 fm.write('date', '# %s\n', fm.formatdate(ctx.date()))
1611 fm.write('date', '# %s\n', fm.formatdate(ctx.date()))
1612 fm.condwrite(branch and branch != 'default',
1612 fm.condwrite(branch and branch != 'default',
1613 'branch', '# Branch %s\n', branch)
1613 'branch', '# Branch %s\n', branch)
1614 fm.write('node', '# Node ID %s\n', hex(node))
1614 fm.write('node', '# Node ID %s\n', hex(node))
1615 fm.plain('# Parent %s\n' % hex(prev))
1615 fm.plain('# Parent %s\n' % hex(prev))
1616 if len(parents) > 1:
1616 if len(parents) > 1:
1617 fm.plain('# Parent %s\n' % hex(parents[1]))
1617 fm.plain('# Parent %s\n' % hex(parents[1]))
1618 fm.data(parents=fm.formatlist(pycompat.maplist(hex, parents), name='node'))
1618 fm.data(parents=fm.formatlist(pycompat.maplist(hex, parents), name='node'))
1619
1619
1620 # TODO: redesign extraexportmap function to support formatter
1620 # TODO: redesign extraexportmap function to support formatter
1621 for headerid in extraexport:
1621 for headerid in extraexport:
1622 header = extraexportmap[headerid](seqno, ctx)
1622 header = extraexportmap[headerid](seqno, ctx)
1623 if header is not None:
1623 if header is not None:
1624 fm.plain('# %s\n' % header)
1624 fm.plain('# %s\n' % header)
1625
1625
1626 fm.write('desc', '%s\n', ctx.description().rstrip())
1626 fm.write('desc', '%s\n', ctx.description().rstrip())
1627 fm.plain('\n')
1627 fm.plain('\n')
1628
1628
1629 if fm.isplain():
1629 if fm.isplain():
1630 chunkiter = patch.diffui(repo, prev, node, match, opts=diffopts)
1630 chunkiter = patch.diffui(repo, prev, node, match, opts=diffopts)
1631 for chunk, label in chunkiter:
1631 for chunk, label in chunkiter:
1632 fm.plain(chunk, label=label)
1632 fm.plain(chunk, label=label)
1633 else:
1633 else:
1634 chunkiter = patch.diff(repo, prev, node, match, opts=diffopts)
1634 chunkiter = patch.diff(repo, prev, node, match, opts=diffopts)
1635 # TODO: make it structured?
1635 # TODO: make it structured?
1636 fm.data(diff=b''.join(chunkiter))
1636 fm.data(diff=b''.join(chunkiter))

def _exportfile(repo, revs, fm, dest, switch_parent, diffopts, match):
    """Export changesets to stdout or a single file"""
    for seqno, rev in enumerate(revs, 1):
        ctx = repo[rev]
        if not dest.startswith('<'):
            repo.ui.note("%s\n" % dest)
        fm.startitem()
        _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts)

def _exportfntemplate(repo, revs, basefm, fntemplate, switch_parent, diffopts,
                      match):
    """Export changesets to possibly multiple files"""
    total = len(revs)
    revwidth = max(len(str(rev)) for rev in revs)
    filemap = util.sortdict()  # filename: [(seqno, rev), ...]

    for seqno, rev in enumerate(revs, 1):
        ctx = repo[rev]
        dest = makefilename(ctx, fntemplate,
                            total=total, seqno=seqno, revwidth=revwidth)
        filemap.setdefault(dest, []).append((seqno, rev))

    for dest in filemap:
        with formatter.maybereopen(basefm, dest) as fm:
            repo.ui.note("%s\n" % dest)
            for seqno, rev in filemap[dest]:
                fm.startitem()
                ctx = repo[rev]
                _exportsingle(repo, ctx, fm, match, switch_parent, seqno,
                              diffopts)

def export(repo, revs, basefm, fntemplate='hg-%h.patch', switch_parent=False,
           opts=None, match=None):
    '''export changesets as hg patches

    Args:
      repo: The repository from which we're exporting revisions.
      revs: A list of revisions to export as revision numbers.
      basefm: A formatter to which patches should be written.
      fntemplate: An optional string to use for generating patch file names.
      switch_parent: If True, show diffs against second parent when not nullid.
                     Default is False, which always shows diff against p1.
      opts: diff options to use for generating the patch.
      match: If specified, only export changes to files matching this matcher.

    Returns:
      Nothing.

    Side Effect:
      "HG Changeset Patch" data is emitted to one of the following
      destinations:
        fntemplate specified: Each rev is written to a unique file named using
                              the given template.
        Otherwise: All revs will be written to basefm.
    '''
    scmutil.prefetchfiles(repo, revs, match)

    if not fntemplate:
        _exportfile(repo, revs, basefm, '<unnamed>', switch_parent, opts, match)
    else:
        _exportfntemplate(repo, revs, basefm, fntemplate, switch_parent, opts,
                          match)

def exportfile(repo, revs, fp, switch_parent=False, opts=None, match=None):
    """Export changesets to the given file stream"""
    scmutil.prefetchfiles(repo, revs, match)

    dest = getattr(fp, 'name', '<unnamed>')
    with formatter.formatter(repo.ui, fp, 'export', {}) as fm:
        _exportfile(repo, revs, fm, dest, switch_parent, opts, match)

def showmarker(fm, marker, index=None):
    """utility function to display obsolescence marker in a readable way

    To be used by debug function."""
    if index is not None:
        fm.write('index', '%i ', index)
    fm.write('prednode', '%s ', hex(marker.prednode()))
    succs = marker.succnodes()
    fm.condwrite(succs, 'succnodes', '%s ',
                 fm.formatlist(map(hex, succs), name='node'))
    fm.write('flag', '%X ', marker.flags())
    parents = marker.parentnodes()
    if parents is not None:
        fm.write('parentnodes', '{%s} ',
                 fm.formatlist(map(hex, parents), name='node', sep=', '))
    fm.write('date', '(%s) ', fm.formatdate(marker.date()))
    meta = marker.metadata().copy()
    meta.pop('date', None)
    smeta = pycompat.rapply(pycompat.maybebytestr, meta)
    fm.write('metadata', '{%s}', fm.formatdict(smeta, fmt='%r: %r', sep=', '))
    fm.plain('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = dateutil.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, dateutil.datestr(results[rev])))
            return '%d' % rev

    raise error.Abort(_("revision matching date not found"))

def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

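# Illustrative note (not part of the upstream source): increasingwindows()
# above yields window sizes 8, 16, 32, ..., doubling until it reaches
# sizelimit (512 by default) and then repeating 512 forever, e.g.:
#
#   sizes = increasingwindows()
#   [next(sizes) for _ in range(8)]  # 8, 16, 32, 64, 128, 256, 512, 512
#
# walkchangerevs() below consumes these values to batch its revision walk,
# so early results appear quickly while later batches amortize the cost.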
def _walkrevs(repo, opts):
    # Default --rev value depends on --follow but --follow behavior
    # depends on revisions resolved from --rev...
    follow = opts.get('follow') or opts.get('follow_first')
    if opts.get('rev'):
        revs = scmutil.revrange(repo, opts['rev'])
    elif follow and repo.dirstate.p1() == nullid:
        revs = smartset.baseset()
    elif follow:
        revs = repo.revs('reverse(:.)')
    else:
        revs = smartset.spanset(repo)
        revs.reverse()
    return revs

class FileWalkError(Exception):
    pass

def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)
    def filerevs(filelog, last):
        """
        Only files, no patterns. Check the history of each file.

        Examines filelog entries within minrev, maxrev linkrev range
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order
        """
        cl_count = len(repo)
        revs = []
        for j in pycompat.xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield rev for which we have the changelog, it can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)
    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise error.Abort(_('cannot follow file not in parent '
                                        'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise error.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = {filelog.linkrev(last)}

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevs(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

class _followfilter(object):
    def __init__(self, repo, onlyfirst=False):
        self.repo = repo
        self.startrev = nullrev
        self.roots = set()
        self.onlyfirst = onlyfirst

    def match(self, rev):
        def realparents(rev):
            if self.onlyfirst:
                return self.repo.changelog.parentrevs(rev)[0:1]
            else:
                return filter(lambda x: x != nullrev,
                              self.repo.changelog.parentrevs(rev))

        if self.startrev == nullrev:
            self.startrev = rev
            return True

        if rev > self.startrev:
            # forward: all descendants
            if not self.roots:
                self.roots.add(self.startrev)
            for parent in realparents(rev):
                if parent in self.roots:
                    self.roots.add(rev)
                    return True
        else:
            # backwards: all parents
            if not self.roots:
                self.roots.update(realparents(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.update(realparents(rev))
                return True

        return False

def walkchangerevs(repo, match, opts, prepare):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an iterator yielding contexts. Before
    yielding each context, the iterator will first call the prepare
    function on each context in the window in forward order.'''

    allfiles = opts.get('all_files')
    follow = opts.get('follow') or opts.get('follow_first')
    revs = _walkrevs(repo, opts)
    if not revs:
        return []
    wanted = set()
    slowpath = match.anypats() or (not match.always() and opts.get('removed'))
    fncache = {}
    change = repo.__getitem__

    # First step is to fill wanted, the set of revisions that we want to yield.
    # When it does not induce extra cost, we also fill fncache for revisions in
    # wanted: a cache of filenames that were changed (ctx.files()) and that
    # match the file filtering conditions.

    if match.always() or allfiles:
        # No files, no patterns. Display all revs.
        wanted = revs
    elif not slowpath:
        # We only have to read through the filelog to find wanted revisions

        try:
            wanted = walkfilerevs(repo, match, follow, revs, fncache)
        except FileWalkError:
            slowpath = True

            # We decided to fall back to the slowpath because at least one
            # of the paths was not a file. Check to see if at least one of them
            # existed in history, otherwise simply return
            for path in match.files():
                if path == '.' or path in repo.store:
                    break
            else:
                return []

    if slowpath:
        # We have to read the changelog to match filenames against
        # changed files

        if follow:
            raise error.Abort(_('can only follow copies/renames for explicit '
                                'filenames'))

        # The slow path checks files modified in every changeset.
        # This is really slow on large repos, so compute the set lazily.
        class lazywantedset(object):
            def __init__(self):
                self.set = set()
                self.revs = set(revs)

            # No need to worry about locality here because it will be accessed
            # in the same order as the increasing window below.
            def __contains__(self, value):
                if value in self.set:
                    return True
                elif not value in self.revs:
                    return False
                else:
                    self.revs.discard(value)
                    ctx = change(value)
                    if allfiles:
                        matches = list(ctx.manifest().walk(match))
                    else:
                        matches = [f for f in ctx.files() if match(f)]
                    if matches:
                        fncache[value] = matches
                        self.set.add(value)
                        return True
                    return False

            def discard(self, value):
                self.revs.discard(value)
                self.set.discard(value)

        wanted = lazywantedset()

    # it might be worthwhile to do this in the iterator if the rev range
    # is descending and the prune args are all within that range
    for rev in opts.get('prune', ()):
        rev = repo[rev].rev()
        ff = _followfilter(repo)
        stop = min(revs[0], revs[-1])
        for x in pycompat.xrange(rev, stop - 1, -1):
            if ff.match(x):
                wanted = wanted - [x]

    # Now that wanted is correctly initialized, we can iterate over the
    # revision range, yielding only revisions in wanted.
    def iterate():
        if follow and match.always():
            ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
            def want(rev):
                return ff.match(rev) and rev in wanted
        else:
            def want(rev):
                return rev in wanted

        it = iter(revs)
        stopiteration = False
        for windowsize in increasingwindows():
            nrevs = []
            for i in pycompat.xrange(windowsize):
                rev = next(it, None)
                if rev is None:
                    stopiteration = True
                    break
                elif want(rev):
                    nrevs.append(rev)
            for rev in sorted(nrevs):
                fns = fncache.get(rev)
                ctx = change(rev)
                if not fns:
                    def fns_generator():
                        if allfiles:
                            fiter = iter(ctx)
                        else:
                            fiter = ctx.files()
                        for f in fiter:
                            if match(f):
                                yield f
                    fns = fns_generator()
                prepare(ctx, fns)
            for rev in nrevs:
                yield change(rev)

            if stopiteration:
                break

    return iterate()
2055
2055
2056 def add(ui, repo, match, prefix, uipathfn, explicitonly, **opts):
2056 def add(ui, repo, match, prefix, uipathfn, explicitonly, **opts):
2057 bad = []
2057 bad = []
2058
2058
2059 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2059 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2060 names = []
2060 names = []
2061 wctx = repo[None]
2061 wctx = repo[None]
2062 cca = None
2062 cca = None
2063 abort, warn = scmutil.checkportabilityalert(ui)
2063 abort, warn = scmutil.checkportabilityalert(ui)
2064 if abort or warn:
2064 if abort or warn:
2065 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2065 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2066
2066
2067 match = repo.narrowmatch(match, includeexact=True)
2067 match = repo.narrowmatch(match, includeexact=True)
2068 badmatch = matchmod.badmatch(match, badfn)
2068 badmatch = matchmod.badmatch(match, badfn)
2069 dirstate = repo.dirstate
2069 dirstate = repo.dirstate
2070 # We don't want to just call wctx.walk here, since it would return a lot of
2070 # We don't want to just call wctx.walk here, since it would return a lot of
2071 # clean files, which we aren't interested in and takes time.
2071 # clean files, which we aren't interested in and takes time.
2072 for f in sorted(dirstate.walk(badmatch, subrepos=sorted(wctx.substate),
2072 for f in sorted(dirstate.walk(badmatch, subrepos=sorted(wctx.substate),
2073 unknown=True, ignored=False, full=False)):
2073 unknown=True, ignored=False, full=False)):
2074 exact = match.exact(f)
2074 exact = match.exact(f)
2075 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2075 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2076 if cca:
2076 if cca:
2077 cca(f)
2077 cca(f)
2078 names.append(f)
2078 names.append(f)
2079 if ui.verbose or not exact:
2079 if ui.verbose or not exact:
2080 ui.status(_('adding %s\n') % uipathfn(f),
2080 ui.status(_('adding %s\n') % uipathfn(f),
2081 label='ui.addremove.added')
2081 label='ui.addremove.added')
2082
2082
2083 for subpath in sorted(wctx.substate):
2083 for subpath in sorted(wctx.substate):
2084 sub = wctx.sub(subpath)
2084 sub = wctx.sub(subpath)
2085 try:
2085 try:
2086 submatch = matchmod.subdirmatcher(subpath, match)
2086 submatch = matchmod.subdirmatcher(subpath, match)
2087 subprefix = repo.wvfs.reljoin(prefix, subpath)
2087 subprefix = repo.wvfs.reljoin(prefix, subpath)
2088 subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
2088 subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
2089 if opts.get(r'subrepos'):
2089 if opts.get(r'subrepos'):
2090 bad.extend(sub.add(ui, submatch, subprefix, subuipathfn, False,
2090 bad.extend(sub.add(ui, submatch, subprefix, subuipathfn, False,
2091 **opts))
2091 **opts))
2092 else:
2092 else:
2093 bad.extend(sub.add(ui, submatch, subprefix, subuipathfn, True,
2093 bad.extend(sub.add(ui, submatch, subprefix, subuipathfn, True,
2094 **opts))
2094 **opts))
2095 except error.LookupError:
2095 except error.LookupError:
2096 ui.status(_("skipping missing subrepository: %s\n")
2096 ui.status(_("skipping missing subrepository: %s\n")
2097 % uipathfn(subpath))
2097 % uipathfn(subpath))
2098
2098
2099 if not opts.get(r'dry_run'):
2099 if not opts.get(r'dry_run'):
2100 rejected = wctx.add(names, prefix)
2100 rejected = wctx.add(names, prefix)
2101 bad.extend(f for f in rejected if f in match.files())
2101 bad.extend(f for f in rejected if f in match.files())
2102 return bad
2102 return bad
2103
2103
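The `badfn` lambda above leans on a small idiom worth spelling out: `list.append()` returns `None`, so `bad.append(x) or match.bad(x, y)` first records the bad path and then always falls through to the original handler. A minimal standalone sketch (the `original_bad` handler is an illustrative stand-in, not a Mercurial API):

```python
# Sketch of the collecting-callback idiom used in add() above.
# list.append() returns None, so `bad.append(x) or fallback(x, y)`
# records the path and then always invokes the original handler.
bad = []

def original_bad(path, msg):
    # hypothetical fallback handler standing in for match.bad()
    return 'warned: %s (%s)' % (path, msg)

badfn = lambda x, y: bad.append(x) or original_bad(x, y)

result = badfn('missing.txt', 'no such file')
print(bad)     # ['missing.txt']
print(result)  # warned: missing.txt (no such file)
```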
def addwebdirpath(repo, serverpath, webconf):
    webconf[serverpath] = repo.root
    repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))

    for r in repo.revs('filelog("path:.hgsub")'):
        ctx = repo[r]
        for subpath in ctx.substate:
            ctx.sub(subpath).addwebdirpath(serverpath, webconf)

def forget(ui, repo, match, prefix, uipathfn, explicitonly, dryrun,
           interactive):
    if dryrun and interactive:
        raise error.Abort(_("cannot specify both --dry-run and --interactive"))
    bad = []
    badfn = lambda x, y: bad.append(x) or match.bad(x, y)
    wctx = repo[None]
    forgot = []

    s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
    forget = sorted(s.modified + s.added + s.deleted + s.clean)
    if explicitonly:
        forget = [f for f in forget if match.exact(f)]

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        submatch = matchmod.subdirmatcher(subpath, match)
        subprefix = repo.wvfs.reljoin(prefix, subpath)
        subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
        try:
            subbad, subforgot = sub.forget(submatch, subprefix, subuipathfn,
                                           dryrun=dryrun,
                                           interactive=interactive)
            bad.extend([subpath + '/' + f for f in subbad])
            forgot.extend([subpath + '/' + f for f in subforgot])
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % uipathfn(subpath))

    if not explicitonly:
        for f in match.files():
            if f not in repo.dirstate and not repo.wvfs.isdir(f):
                if f not in forgot:
                    if repo.wvfs.exists(f):
                        # Don't complain if the exact case match wasn't given.
                        # But don't do this until after checking 'forgot', so
                        # that subrepo files aren't normalized, and this op is
                        # purely from data cached by the status walk above.
                        if repo.dirstate.normalize(f) in repo.dirstate:
                            continue
                        ui.warn(_('not removing %s: '
                                  'file is already untracked\n')
                                % uipathfn(f))
                    bad.append(f)

    if interactive:
        responses = _('[Ynsa?]'
                      '$$ &Yes, forget this file'
                      '$$ &No, skip this file'
                      '$$ &Skip remaining files'
                      '$$ Include &all remaining files'
                      '$$ &? (display help)')
        for filename in forget[:]:
            r = ui.promptchoice(_('forget %s %s') %
                                (uipathfn(filename), responses))
            if r == 4: # ?
                while r == 4:
                    for c, t in ui.extractchoices(responses)[1]:
                        ui.write('%s - %s\n' % (c, encoding.lower(t)))
                    r = ui.promptchoice(_('forget %s %s') %
                                        (uipathfn(filename), responses))
            if r == 0: # yes
                continue
            elif r == 1: # no
                forget.remove(filename)
            elif r == 2: # Skip
                fnindex = forget.index(filename)
                del forget[fnindex:]
                break
            elif r == 3: # All
                break

    for f in forget:
        if ui.verbose or not match.exact(f) or interactive:
            ui.status(_('removing %s\n') % uipathfn(f),
                      label='ui.addremove.removed')

    if not dryrun:
        rejected = wctx.forget(forget, prefix)
        bad.extend(f for f in rejected if f in match.files())
        forgot.extend(f for f in forget if f not in rejected)
    return bad, forgot

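The `responses` string above uses Mercurial's `$$`-separated prompt format: the text before the first `$$` is the prompt, each following segment is a choice, and the character after `&` is its keyboard shortcut. The parser below is an illustrative sketch of that convention only; it mimics, but is not, the real `ui.extractchoices()`:

```python
# Illustrative parser for the '$$'-separated prompt format used by
# ui.promptchoice() in forget() above. This is a hypothetical
# re-implementation for explanation, not Mercurial's extractchoices().
def extractchoices(prompt):
    parts = prompt.split('$$')
    msg = parts[0].strip()
    choices = []
    for choice in parts[1:]:
        choice = choice.strip()
        # the character right after '&' is the keyboard shortcut
        key = choice[choice.index('&') + 1].lower()
        choices.append((key, choice.replace('&', '')))
    return msg, choices

responses = ('[Ynsa?]'
             '$$ &Yes, forget this file'
             '$$ &No, skip this file'
             '$$ &Skip remaining files'
             '$$ Include &all remaining files'
             '$$ &? (display help)')
msg, choices = extractchoices(responses)
print(msg)                      # [Ynsa?]
print([c for c, _ in choices])  # ['y', 'n', 's', 'a', '?']
```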
def files(ui, ctx, m, uipathfn, fm, fmt, subrepos):
    ret = 1

    needsfctx = ui.verbose or {'size', 'flags'} & fm.datahint()
    for f in ctx.matches(m):
        fm.startitem()
        fm.context(ctx=ctx)
        if needsfctx:
            fc = ctx[f]
            fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
        fm.data(path=f)
        fm.plain(fmt % uipathfn(f))
        ret = 0

    for subpath in sorted(ctx.substate):
        submatch = matchmod.subdirmatcher(subpath, m)
        subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
        if (subrepos or m.exact(subpath) or any(submatch.files())):
            sub = ctx.sub(subpath)
            try:
                recurse = m.exact(subpath) or subrepos
                if sub.printfiles(ui, submatch, subuipathfn, fm, fmt,
                                  recurse) == 0:
                    ret = 0
            except error.LookupError:
                ui.status(_("skipping missing subrepository: %s\n")
                          % uipathfn(subpath))

    return ret

def remove(ui, repo, m, prefix, uipathfn, after, force, subrepos, dryrun,
           warnings=None):
    ret = 0
    s = repo.status(match=m, clean=True)
    modified, added, deleted, clean = s[0], s[1], s[3], s[6]

    wctx = repo[None]

    if warnings is None:
        warnings = []
        warn = True
    else:
        warn = False

    subs = sorted(wctx.substate)
    progress = ui.makeprogress(_('searching'), total=len(subs),
                               unit=_('subrepos'))
    for subpath in subs:
        submatch = matchmod.subdirmatcher(subpath, m)
        subprefix = repo.wvfs.reljoin(prefix, subpath)
        subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
        if subrepos or m.exact(subpath) or any(submatch.files()):
            progress.increment()
            sub = wctx.sub(subpath)
            try:
                if sub.removefiles(submatch, subprefix, subuipathfn, after,
                                   force, subrepos, dryrun, warnings):
                    ret = 1
            except error.LookupError:
                warnings.append(_("skipping missing subrepository: %s\n")
                                % uipathfn(subpath))
    progress.complete()

    # warn about failure to delete explicit files/dirs
    deleteddirs = util.dirs(deleted)
    files = m.files()
    progress = ui.makeprogress(_('deleting'), total=len(files),
                               unit=_('files'))
    for f in files:
        def insubrepo():
            for subpath in wctx.substate:
                if f.startswith(subpath + '/'):
                    return True
            return False

        progress.increment()
        isdir = f in deleteddirs or wctx.hasdir(f)
        if (f in repo.dirstate or isdir or f == '.'
            or insubrepo() or f in subs):
            continue

        if repo.wvfs.exists(f):
            if repo.wvfs.isdir(f):
                warnings.append(_('not removing %s: no tracked files\n')
                                % uipathfn(f))
            else:
                warnings.append(_('not removing %s: file is untracked\n')
                                % uipathfn(f))
        # missing files will generate a warning elsewhere
        ret = 1
    progress.complete()

    if force:
        list = modified + deleted + clean + added
    elif after:
        list = deleted
        remaining = modified + added + clean
        progress = ui.makeprogress(_('skipping'), total=len(remaining),
                                   unit=_('files'))
        for f in remaining:
            progress.increment()
            if ui.verbose or (f in files):
                warnings.append(_('not removing %s: file still exists\n')
                                % uipathfn(f))
                ret = 1
        progress.complete()
    else:
        list = deleted + clean
        progress = ui.makeprogress(_('skipping'),
                                   total=(len(modified) + len(added)),
                                   unit=_('files'))
        for f in modified:
            progress.increment()
            warnings.append(_('not removing %s: file is modified (use -f'
                              ' to force removal)\n') % uipathfn(f))
            ret = 1
        for f in added:
            progress.increment()
            warnings.append(_("not removing %s: file has been marked for add"
                              " (use 'hg forget' to undo add)\n") % uipathfn(f))
            ret = 1
        progress.complete()

    list = sorted(list)
    progress = ui.makeprogress(_('deleting'), total=len(list),
                               unit=_('files'))
    for f in list:
        if ui.verbose or not m.exact(f):
            progress.increment()
            ui.status(_('removing %s\n') % uipathfn(f),
                      label='ui.addremove.removed')
    progress.complete()

    if not dryrun:
        with repo.wlock():
            if not after:
                for f in list:
                    if f in added:
                        continue # we never unlink added files on remove
                    rmdir = repo.ui.configbool('experimental',
                                               'removeemptydirs')
                    repo.wvfs.unlinkpath(f, ignoremissing=True, rmdir=rmdir)
            repo[None].forget(list)

    if warn:
        for warning in warnings:
            ui.warn(warning)

    return ret

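The `force`/`after`/default branching in `remove()` above boils down to a small selection rule over the status buckets. A minimal sketch of just that rule, with illustrative names (the real function also emits warnings and drives progress bars):

```python
# Minimal sketch of how remove() above chooses which tracked files to
# drop, given the status buckets. Names and signature are illustrative.
def select(modified, added, deleted, clean, force=False, after=False):
    if force:
        # -f: remove everything that matched, regardless of state
        return modified + deleted + clean + added
    if after:
        # -A: only files already missing from the working directory
        return deleted
    # default: refuse modified/added files, take deleted and clean ones
    return deleted + clean

print(select(['m'], ['a'], ['d'], ['c']))              # ['d', 'c']
print(select(['m'], ['a'], ['d'], ['c'], after=True))  # ['d']
print(select(['m'], ['a'], ['d'], ['c'], force=True))  # ['m', 'd', 'c', 'a']
```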
def _updatecatformatter(fm, ctx, matcher, path, decode):
    """Hook for adding data to the formatter used by ``hg cat``.

    Extensions (e.g., lfs) can wrap this to inject keywords/data, but must call
    this method first."""
    data = ctx[path].data()
    if decode:
        data = ctx.repo().wwritedata(path, data)
    fm.startitem()
    fm.context(ctx=ctx)
    fm.write('data', '%s', data)
    fm.data(path=path)

def cat(ui, repo, ctx, matcher, basefm, fntemplate, prefix, **opts):
    err = 1
    opts = pycompat.byteskwargs(opts)

    def write(path):
        filename = None
        if fntemplate:
            filename = makefilename(ctx, fntemplate,
                                    pathname=os.path.join(prefix, path))
            # attempt to create the directory if it does not already exist
            try:
                os.makedirs(os.path.dirname(filename))
            except OSError:
                pass
        with formatter.maybereopen(basefm, filename) as fm:
            _updatecatformatter(fm, ctx, matcher, path, opts.get('decode'))

    # Automation often uses hg cat on single files, so special case it
    # for performance to avoid the cost of parsing the manifest.
    if len(matcher.files()) == 1 and not matcher.anypats():
        file = matcher.files()[0]
        mfl = repo.manifestlog
        mfnode = ctx.manifestnode()
        try:
            if mfnode and mfl[mfnode].find(file)[0]:
                scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
                write(file)
                return 0
        except KeyError:
            pass

    scmutil.prefetchfiles(repo, [ctx.rev()], matcher)

    for abs in ctx.walk(matcher):
        write(abs)
        err = 0

    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
    for subpath in sorted(ctx.substate):
        sub = ctx.sub(subpath)
        try:
            submatch = matchmod.subdirmatcher(subpath, matcher)
            subprefix = os.path.join(prefix, subpath)
            if not sub.cat(submatch, basefm, fntemplate, subprefix,
                           **pycompat.strkwargs(opts)):
                err = 0
        except error.RepoLookupError:
            ui.status(_("skipping missing subrepository: %s\n") %
                      uipathfn(subpath))

    return err

def commit(ui, repo, commitfunc, pats, opts):
    '''commit the specified files or all outstanding changes'''
    date = opts.get('date')
    if date:
        opts['date'] = dateutil.parsedate(date)
    message = logmessage(ui, opts)
    matcher = scmutil.match(repo[None], pats, opts)

    dsguard = None
    # extract addremove carefully -- this function can be called from a command
    # that doesn't support addremove
    if opts.get('addremove'):
        dsguard = dirstateguard.dirstateguard(repo, 'commit')
    with dsguard or util.nullcontextmanager():
        if dsguard:
            relative = scmutil.anypats(pats, opts)
            uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=relative)
            if scmutil.addremove(repo, matcher, "", uipathfn, opts) != 0:
                raise error.Abort(
                    _("failed to mark all new/missing files as added/removed"))

        return commitfunc(ui, repo, message, matcher, opts)

def samefile(f, ctx1, ctx2):
    if f in ctx1.manifest():
        a = ctx1.filectx(f)
        if f in ctx2.manifest():
            b = ctx2.filectx(f)
            return (not a.cmp(b)
                    and a.flags() == b.flags())
        else:
            return False
    else:
        return f not in ctx2.manifest()

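The four-way comparison in `samefile()` above (present in both with equal content and flags, present in both but differing, present in only one, absent from both) can be exercised with tiny stand-in objects. `Ctx`/`Fctx` below are hypothetical stubs for illustration, not Mercurial's real context classes:

```python
# Illustrative stubs exercising samefile()'s decision tree.
# Ctx/Fctx are hypothetical stand-ins, not Mercurial context classes.
class Fctx:
    def __init__(self, data, flags=''):
        self._data, self._flags = data, flags
    def cmp(self, other):
        # like Mercurial's filectx.cmp(): True when contents DIFFER
        return self._data != other._data
    def flags(self):
        return self._flags

class Ctx:
    def __init__(self, files):
        self._files = files  # dict: path -> Fctx
    def manifest(self):
        return self._files
    def filectx(self, f):
        return self._files[f]

def samefile(f, ctx1, ctx2):
    # same decision tree as the function above
    if f in ctx1.manifest():
        a = ctx1.filectx(f)
        if f in ctx2.manifest():
            b = ctx2.filectx(f)
            return not a.cmp(b) and a.flags() == b.flags()
        return False
    return f not in ctx2.manifest()

c1 = Ctx({'a': Fctx('x'), 'b': Fctx('y', 'x')})
c2 = Ctx({'a': Fctx('x'), 'b': Fctx('y')})
print(samefile('a', c1, c2))  # same content, same flags -> True
print(samefile('b', c1, c2))  # same content, flags differ -> False
print(samefile('c', c1, c2))  # absent from both -> True
```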
def amend(ui, repo, old, extra, pats, opts):
    # avoid cycle context -> subrepo -> cmdutil
    from . import context

    # amend will reuse the existing user if not specified, but the obsolete
    # marker creation requires that the current user's name is specified.
    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        ui.username() # raise exception if username not set

    ui.note(_('amending changeset %s\n') % old)
    base = old.p1()

    with repo.wlock(), repo.lock(), repo.transaction('amend'):
        # Participating changesets:
        #
        # wctx     o - workingctx that contains changes from working copy
        #          |   to go into amending commit
        #          |
        # old      o - changeset to amend
        #          |
        # base     o - first parent of the changeset to amend
        wctx = repo[None]

        # Copy to avoid mutating input
        extra = extra.copy()
        # Update extra dict from amended commit (e.g. to preserve graft
        # source)
        extra.update(old.extra())

        # Also update it from the wctx
        extra.update(wctx.extra())

        user = opts.get('user') or old.user()

        datemaydiffer = False  # date-only change should be ignored?
        if opts.get('date') and opts.get('currentdate'):
            raise error.Abort(_('--date and --currentdate are mutually '
                                'exclusive'))
        if opts.get('date'):
            date = dateutil.parsedate(opts.get('date'))
        elif opts.get('currentdate'):
            date = dateutil.makedate()
        elif (ui.configbool('rewrite', 'update-timestamp')
              and opts.get('currentdate') is None):
            date = dateutil.makedate()
            datemaydiffer = True
        else:
            date = old.date()

        if len(old.parents()) > 1:
            # ctx.files() isn't reliable for merges, so fall back to the
            # slower repo.status() method
            files = set([fn for st in base.status(old)[:3]
                         for fn in st])
        else:
            files = set(old.files())

        # add/remove the files to the working copy if the "addremove" option
        # was specified.
        matcher = scmutil.match(wctx, pats, opts)
        relative = scmutil.anypats(pats, opts)
        uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=relative)
        if (opts.get('addremove')
            and scmutil.addremove(repo, matcher, "", uipathfn, opts)):
            raise error.Abort(
                _("failed to mark all new/missing files as added/removed"))

        # Check subrepos. This depends on in-place wctx._status update in
        # subrepo.precommit(). To minimize the risk of this hack, we do
        # nothing if .hgsub does not exist.
        if '.hgsub' in wctx or '.hgsub' in old:
            subs, commitsubs, newsubstate = subrepoutil.precommit(
                ui, wctx, wctx._status, matcher)
            # amend should abort if commitsubrepos is enabled
            assert not commitsubs
            if subs:
                subrepoutil.writestate(repo, newsubstate)

        ms = mergemod.mergestate.read(repo)
        mergeutil.checkunresolved(ms)

        filestoamend = set(f for f in wctx.files() if matcher(f))

        changes = (len(filestoamend) > 0)
        if changes:
            # Recompute copies (avoid recording a -> b -> a)
            copied = copies.pathcopies(base, wctx, matcher)
            if len(old.parents()) > 1:
                copied.update(copies.pathcopies(old.p2(), wctx, matcher))

2536 # Prune files which were reverted by the updates: if old
2536 # Prune files which were reverted by the updates: if old
2537 # introduced file X and the file was renamed in the working
2537 # introduced file X and the file was renamed in the working
2538 # copy, then those two files are the same and
2538 # copy, then those two files are the same and
2539 # we can discard X from our list of files. Likewise if X
2539 # we can discard X from our list of files. Likewise if X
2540 # was removed, it's no longer relevant. If X is missing (aka
2540 # was removed, it's no longer relevant. If X is missing (aka
2541 # deleted), old X must be preserved.
2541 # deleted), old X must be preserved.
2542 files.update(filestoamend)
2542 files.update(filestoamend)
2543 files = [f for f in files if (not samefile(f, wctx, base)
2543 files = [f for f in files if (not samefile(f, wctx, base)
2544 or f in wctx.deleted())]
2544 or f in wctx.deleted())]
2545
2545
2546 def filectxfn(repo, ctx_, path):
2546 def filectxfn(repo, ctx_, path):
2547 try:
2547 try:
2548 # If the file being considered is not amongst the files
2548 # If the file being considered is not amongst the files
2549 # to be amended, we should return the file context from the
2549 # to be amended, we should return the file context from the
2550 # old changeset. This avoids issues when only some files in
2550 # old changeset. This avoids issues when only some files in
2551 # the working copy are being amended but there are also
2551 # the working copy are being amended but there are also
2552 # changes to other files from the old changeset.
2552 # changes to other files from the old changeset.
2553 if path not in filestoamend:
2553 if path not in filestoamend:
2554 return old.filectx(path)
2554 return old.filectx(path)
2555
2555
2556 # Return None for removed files.
2556 # Return None for removed files.
2557 if path in wctx.removed():
2557 if path in wctx.removed():
2558 return None
2558 return None
2559
2559
2560 fctx = wctx[path]
2560 fctx = wctx[path]
2561 flags = fctx.flags()
2561 flags = fctx.flags()
2562 mctx = context.memfilectx(repo, ctx_,
2562 mctx = context.memfilectx(repo, ctx_,
2563 fctx.path(), fctx.data(),
2563 fctx.path(), fctx.data(),
2564 islink='l' in flags,
2564 islink='l' in flags,
2565 isexec='x' in flags,
2565 isexec='x' in flags,
2566 copied=copied.get(path))
2566 copysource=copied.get(path))
2567 return mctx
2567 return mctx
2568 except KeyError:
2568 except KeyError:
2569 return None
2569 return None
        else:
            ui.note(_('copying changeset %s to %s\n') % (old, base))

            # Use version of files as in the old cset
            def filectxfn(repo, ctx_, path):
                try:
                    return old.filectx(path)
                except KeyError:
                    return None

        # See if we got a message from -m or -l, if not, open the editor with
        # the message of the changeset to amend.
        message = logmessage(ui, opts)

        editform = mergeeditform(old, 'commit.amend')
        editor = getcommiteditor(editform=editform,
                                 **pycompat.strkwargs(opts))

        if not message:
            editor = getcommiteditor(edit=True, editform=editform)
            message = old.description()

        pureextra = extra.copy()
        extra['amend_source'] = old.hex()

        new = context.memctx(repo,
                             parents=[base.node(), old.p2().node()],
                             text=message,
                             files=files,
                             filectxfn=filectxfn,
                             user=user,
                             date=date,
                             extra=extra,
                             editor=editor)

        newdesc = changelog.stripdesc(new.description())
        if ((not changes)
            and newdesc == old.description()
            and user == old.user()
            and (date == old.date() or datemaydiffer)
            and pureextra == old.extra()):
            # nothing changed. continuing here would create a new node
            # anyway because of the amend_source noise.
            #
            # This is not what we expect from amend.
            return old.node()

        commitphase = None
        if opts.get('secret'):
            commitphase = phases.secret
        newid = repo.commitctx(new)

        # Reroute the working copy parent to the new changeset
        repo.setparents(newid, nullid)
        mapping = {old.node(): (newid,)}
        obsmetadata = None
        if opts.get('note'):
            obsmetadata = {'note': encoding.fromlocal(opts['note'])}
        backup = ui.configbool('rewrite', 'backup-bundle')
        scmutil.cleanupnodes(repo, mapping, 'amend', metadata=obsmetadata,
                             fixphase=True, targetphase=commitphase,
                             backup=backup)

        # Fixing the dirstate because localrepo.commitctx does not update
        # it. This is rather convenient because we did not need to update
        # the dirstate for all the files in the new commit which commitctx
        # could have done if it updated the dirstate. Now, we can
        # selectively update the dirstate only for the amended files.
        dirstate = repo.dirstate

        # Update the state of the files which were added and
        # modified in the amend to "normal" in the dirstate.
        normalfiles = set(wctx.modified() + wctx.added()) & filestoamend
        for f in normalfiles:
            dirstate.normal(f)

        # Update the state of files which were removed in the amend
        # to "removed" in the dirstate.
        removedfiles = set(wctx.removed()) & filestoamend
        for f in removedfiles:
            dirstate.drop(f)

        return newid

def commiteditor(repo, ctx, subs, editform=''):
    if ctx.description():
        return ctx.description()
    return commitforceeditor(repo, ctx, subs, editform=editform,
                             unchangedmessagedetection=True)

def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
                      editform='', unchangedmessagedetection=False):
    if not extramsg:
        extramsg = _("Leave message empty to abort commit.")

    forms = [e for e in editform.split('.') if e]
    forms.insert(0, 'changeset')
    templatetext = None
    while forms:
        ref = '.'.join(forms)
        if repo.ui.config('committemplate', ref):
            templatetext = committext = buildcommittemplate(
                repo, ctx, subs, extramsg, ref)
            break
        forms.pop()
    else:
        committext = buildcommittext(repo, ctx, subs, extramsg)

    # run editor in the repository root
    olddir = encoding.getcwd()
    os.chdir(repo.root)

    # make in-memory changes visible to external process
    tr = repo.currenttransaction()
    repo.dirstate.write(tr)
    pending = tr and tr.writepending() and repo.root

    editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
                              editform=editform, pending=pending,
                              repopath=repo.path, action='commit')
    text = editortext

    # strip away anything below this special string (used for editors that want
    # to display the diff)
    stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
    if stripbelow:
        text = text[:stripbelow.start()]

    text = re.sub("(?m)^HG:.*(\n|$)", "", text)
    os.chdir(olddir)

    if finishdesc:
        text = finishdesc(text)
    if not text.strip():
        raise error.Abort(_("empty commit message"))
    if unchangedmessagedetection and editortext == templatetext:
        raise error.Abort(_("commit message unchanged"))

    return text

def buildcommittemplate(repo, ctx, subs, extramsg, ref):
    ui = repo.ui
    spec = formatter.templatespec(ref, None, None)
    t = logcmdutil.changesettemplater(ui, repo, spec)
    t.t.cache.update((k, templater.unquotestring(v))
                     for k, v in repo.ui.configitems('committemplate'))

    if not extramsg:
        extramsg = '' # ensure that extramsg is string

    ui.pushbuffer()
    t.show(ctx, extramsg=extramsg)
    return ui.popbuffer()

def hgprefix(msg):
    return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])

def buildcommittext(repo, ctx, subs, extramsg):
    edittext = []
    modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
    if ctx.description():
        edittext.append(ctx.description())
    edittext.append("")
    edittext.append("") # Empty line between message and comments.
    edittext.append(hgprefix(_("Enter commit message."
                               " Lines beginning with 'HG:' are removed.")))
    edittext.append(hgprefix(extramsg))
    edittext.append("HG: --")
    edittext.append(hgprefix(_("user: %s") % ctx.user()))
    if ctx.p2():
        edittext.append(hgprefix(_("branch merge")))
    if ctx.branch():
        edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
    if bookmarks.isactivewdirparent(repo):
        edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
    edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
    edittext.extend([hgprefix(_("added %s") % f) for f in added])
    edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
    edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
    if not added and not modified and not removed:
        edittext.append(hgprefix(_("no files changed")))
    edittext.append("")

    return "\n".join(edittext)

def commitstatus(repo, node, branch, bheads=None, opts=None):
    if opts is None:
        opts = {}
    ctx = repo[node]
    parents = ctx.parents()

    if (not opts.get('amend') and bheads and node not in bheads and not
        [x for x in parents if x.node() in bheads and x.branch() == branch]):
        repo.ui.status(_('created new head\n'))
        # The message is not printed for initial roots. For the other
        # changesets, it is printed in the following situations:
        #
        # Par column: for the 2 parents with ...
        #   N: null or no parent
        #   B: parent is on another named branch
        #   C: parent is a regular non head changeset
        #   H: parent was a branch head of the current branch
        # Msg column: whether we print "created new head" message
        # In the following, it is assumed that there already exists some
        # initial branch heads of the current branch, otherwise nothing is
        # printed anyway.
        #
        # Par Msg Comment
        # N N y additional topo root
        #
        # B N y additional branch root
        # C N y additional topo head
        # H N n usual case
        #
        # B B y weird additional branch root
        # C B y branch merge
        # H B n merge with named branch
        #
        # C C y additional head from merge
        # C H n merge with a head
        #
        # H H n head merge: head count decreases

    if not opts.get('close_branch'):
        for r in parents:
            if r.closesbranch() and r.branch() == branch:
                repo.ui.status(_('reopening closed branch head %d\n') % r.rev())

    if repo.ui.debugflag:
        repo.ui.write(_('committed changeset %d:%s\n') % (ctx.rev(), ctx.hex()))
    elif repo.ui.verbose:
        repo.ui.write(_('committed changeset %d:%s\n') % (ctx.rev(), ctx))

def postcommitstatus(repo, pats, opts):
    return repo.status(match=scmutil.match(repo[None], pats, opts))

def revert(ui, repo, ctx, parents, *pats, **opts):
    opts = pycompat.byteskwargs(opts)
    parent, p2 = parents
    node = ctx.node()

    mf = ctx.manifest()
    if node == p2:
        parent = p2

    # need all matching names in dirstate and manifest of target rev,
    # so have to walk both. do not print errors if files exist in one
    # but not other. in both cases, filesets should be evaluated against
    # workingctx to get consistent result (issue4497). this means 'set:**'
    # cannot be used to select missing files from target rev.

    # `names` is a mapping for all elements in working copy and target revision
    # The mapping is in the form:
    # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
    names = {}
    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)

    with repo.wlock():
        ## filling of the `names` mapping
        # walk dirstate to fill `names`

        interactive = opts.get('interactive', False)
        wctx = repo[None]
        m = scmutil.match(wctx, pats, opts)

        # we'll need this later
        targetsubs = sorted(s for s in wctx.substate if m(s))

        if not m.always():
            matcher = matchmod.badmatch(m, lambda x, y: False)
            for abs in wctx.walk(matcher):
                names[abs] = m.exact(abs)

            # walk target manifest to fill `names`

            def badfn(path, msg):
                if path in names:
                    return
                if path in ctx.substate:
                    return
                path_ = path + '/'
                for f in names:
                    if f.startswith(path_):
                        return
                ui.warn("%s: %s\n" % (uipathfn(path), msg))

            for abs in ctx.walk(matchmod.badmatch(m, badfn)):
                if abs not in names:
                    names[abs] = m.exact(abs)

            # Find the status of all files in `names`.
            m = scmutil.matchfiles(repo, names)

            changes = repo.status(node1=node, match=m,
                                  unknown=True, ignored=True, clean=True)
        else:
            changes = repo.status(node1=node, match=m)
            for kind in changes:
                for abs in kind:
                    names[abs] = m.exact(abs)

            m = scmutil.matchfiles(repo, names)

        modified = set(changes.modified)
        added = set(changes.added)
        removed = set(changes.removed)
        _deleted = set(changes.deleted)
        unknown = set(changes.unknown)
        unknown.update(changes.ignored)
        clean = set(changes.clean)
        modadded = set()

        # We need to account for the state of the file in the dirstate,
        # even when we revert against something other than the parent. This
        # will slightly alter the behavior of revert (doing a backup or not,
        # delete or just forget, etc).
        if parent == node:
            dsmodified = modified
            dsadded = added
            dsremoved = removed
            # store all local modifications, useful later for rename detection
            localchanges = dsmodified | dsadded
            modified, added, removed = set(), set(), set()
        else:
            changes = repo.status(node1=parent, match=m)
            dsmodified = set(changes.modified)
            dsadded = set(changes.added)
            dsremoved = set(changes.removed)
            # store all local modifications, useful later for rename detection
            localchanges = dsmodified | dsadded

            # only take removes between the working copy and target into
            # account
            clean |= dsremoved - removed
            dsremoved &= removed
            # distinguish between dirstate removes and others
            removed -= dsremoved

            modadded = added & dsmodified
            added -= modadded

            # tell newly modified files apart
            dsmodified &= modified
            dsmodified |= modified & dsadded # dirstate added may need backup
            modified -= dsmodified

            # We need to wait for some post-processing to update this set
            # before making the distinction. The dirstate will be used for
            # that purpose.
            dsadded = added

        # in case of merge, files that are actually added can be reported as
        # modified, so we need to post-process the result
        if p2 != nullid:
            mergeadd = set(dsmodified)
            for path in dsmodified:
                if path in mf:
                    mergeadd.remove(path)
            dsadded |= mergeadd
            dsmodified -= mergeadd

        # if f is a rename, update `names` to also revert the source
        for f in localchanges:
            src = repo.dirstate.copied(f)
            # XXX should we check for rename down to target node?
            if src and src not in names and repo.dirstate[src] == 'r':
                dsremoved.add(src)
                names[src] = True

        # determine the exact nature of the deleted files
        deladded = set(_deleted)
        for path in _deleted:
            if path in mf:
                deladded.remove(path)
        deleted = _deleted - deladded

        # distinguish between files to forget and the others
        added = set()
        for abs in dsadded:
            if repo.dirstate[abs] != 'a':
                added.add(abs)
        dsadded -= added

        for abs in deladded:
            if repo.dirstate[abs] == 'a':
                dsadded.add(abs)
        deladded -= dsadded

        # For files marked as removed, we check if an unknown file is present
        # at the same path. If such a file exists it may need to be backed up.
        # Making the distinction at this stage helps have simpler backup
        # logic.
        removunk = set()
        for abs in removed:
            target = repo.wjoin(abs)
            if os.path.lexists(target):
                removunk.add(abs)
        removed -= removunk

        dsremovunk = set()
        for abs in dsremoved:
            target = repo.wjoin(abs)
            if os.path.lexists(target):
                dsremovunk.add(abs)
        dsremoved -= dsremovunk

        # actions to be actually performed by revert
        # (<list of files>, <message>) tuple
        actions = {'revert': ([], _('reverting %s\n')),
                   'add': ([], _('adding %s\n')),
                   'remove': ([], _('removing %s\n')),
                   'drop': ([], _('removing %s\n')),
                   'forget': ([], _('forgetting %s\n')),
                   'undelete': ([], _('undeleting %s\n')),
                   'noop': (None, _('no changes needed to %s\n')),
                   'unknown': (None, _('file not managed: %s\n')),
                  }

        # "constants" that convey the backup strategy.
        # All are set to `discard` if `no-backup` is set, to avoid checking
        # no_backup lower in the code.
        # These values are ordered for comparison purposes
        backupinteractive = 3 # do backup if interactively modified
        backup = 2 # unconditionally do backup
        check = 1 # check if the existing file differs from target
        discard = 0 # never do backup
        if opts.get('no_backup'):
            backupinteractive = backup = check = discard
        if interactive:
            dsmodifiedbackup = backupinteractive
        else:
            dsmodifiedbackup = backup
        tobackup = set()

        backupanddel = actions['remove']
        if not opts.get('no_backup'):
            backupanddel = actions['drop']

        disptable = (
            # dispatch table:
            #   file state
            #   action
            #   make backup

            ## Sets whose results will change files on disk
            # Modified compared to target, no local change
            (modified, actions['revert'], discard),
            # Modified compared to target, but local file is deleted
            (deleted, actions['revert'], discard),
            # Modified compared to target, local change
            (dsmodified, actions['revert'], dsmodifiedbackup),
            # Added since target
            (added, actions['remove'], discard),
            # Added in working directory
            (dsadded, actions['forget'], discard),
            # Added since target, with local modification
            (modadded, backupanddel, backup),
            # Added since target but file is missing in working directory
            (deladded, actions['drop'], discard),
            # Removed since target, before working copy parent
            (removed, actions['add'], discard),
            # Same as `removed` but an unknown file exists at the same path
            (removunk, actions['add'], check),
            # Removed since target, marked as such in working copy parent
            (dsremoved, actions['undelete'], discard),
            # Same as `dsremoved` but an unknown file exists at the same path
            (dsremovunk, actions['undelete'], check),
            ## the following sets do not result in any file changes
            # File with no modification
            (clean, actions['noop'], discard),
            # Existing file, not tracked anywhere
            (unknown, actions['unknown'], discard),
            )

        for abs, exact in sorted(names.items()):
            # target file to be touched on disk (relative to cwd)
3044 # target file to be touch on disk (relative to cwd)
3045 target = repo.wjoin(abs)
3045 target = repo.wjoin(abs)
3046 # search the entry in the dispatch table.
3046 # search the entry in the dispatch table.
3047 # if the file is in any of these sets, it was touched in the working
3047 # if the file is in any of these sets, it was touched in the working
3048 # directory parent and we are sure it needs to be reverted.
3048 # directory parent and we are sure it needs to be reverted.
3049 for table, (xlist, msg), dobackup in disptable:
3049 for table, (xlist, msg), dobackup in disptable:
3050 if abs not in table:
3050 if abs not in table:
3051 continue
3051 continue
3052 if xlist is not None:
3052 if xlist is not None:
3053 xlist.append(abs)
3053 xlist.append(abs)
3054 if dobackup:
3054 if dobackup:
3055 # If in interactive mode, don't automatically create
3055 # If in interactive mode, don't automatically create
3056 # .orig files (issue4793)
3056 # .orig files (issue4793)
3057 if dobackup == backupinteractive:
3057 if dobackup == backupinteractive:
3058 tobackup.add(abs)
3058 tobackup.add(abs)
3059 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
3059 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
3060 absbakname = scmutil.backuppath(ui, repo, abs)
3060 absbakname = scmutil.backuppath(ui, repo, abs)
3061 bakname = os.path.relpath(absbakname,
3061 bakname = os.path.relpath(absbakname,
3062 start=repo.root)
3062 start=repo.root)
3063 ui.note(_('saving current version of %s as %s\n') %
3063 ui.note(_('saving current version of %s as %s\n') %
3064 (uipathfn(abs), uipathfn(bakname)))
3064 (uipathfn(abs), uipathfn(bakname)))
3065 if not opts.get('dry_run'):
3065 if not opts.get('dry_run'):
3066 if interactive:
3066 if interactive:
3067 util.copyfile(target, absbakname)
3067 util.copyfile(target, absbakname)
3068 else:
3068 else:
3069 util.rename(target, absbakname)
3069 util.rename(target, absbakname)
3070 if opts.get('dry_run'):
3070 if opts.get('dry_run'):
3071 if ui.verbose or not exact:
3071 if ui.verbose or not exact:
3072 ui.status(msg % uipathfn(abs))
3072 ui.status(msg % uipathfn(abs))
3073 elif exact:
3073 elif exact:
3074 ui.warn(msg % uipathfn(abs))
3074 ui.warn(msg % uipathfn(abs))
3075 break
3075 break
3076
3076
3077 if not opts.get('dry_run'):
3077 if not opts.get('dry_run'):
3078 needdata = ('revert', 'add', 'undelete')
3078 needdata = ('revert', 'add', 'undelete')
3079 oplist = [actions[name][0] for name in needdata]
3079 oplist = [actions[name][0] for name in needdata]
3080 prefetch = scmutil.prefetchfiles
3080 prefetch = scmutil.prefetchfiles
3081 matchfiles = scmutil.matchfiles
3081 matchfiles = scmutil.matchfiles
3082 prefetch(repo, [ctx.rev()],
3082 prefetch(repo, [ctx.rev()],
3083 matchfiles(repo,
3083 matchfiles(repo,
3084 [f for sublist in oplist for f in sublist]))
3084 [f for sublist in oplist for f in sublist]))
3085 _performrevert(repo, parents, ctx, names, uipathfn, actions,
3085 _performrevert(repo, parents, ctx, names, uipathfn, actions,
3086 interactive, tobackup)
3086 interactive, tobackup)
3087
3087
3088 if targetsubs:
3088 if targetsubs:
3089 # Revert the subrepos on the revert list
3089 # Revert the subrepos on the revert list
3090 for sub in targetsubs:
3090 for sub in targetsubs:
3091 try:
3091 try:
3092 wctx.sub(sub).revert(ctx.substate[sub], *pats,
3092 wctx.sub(sub).revert(ctx.substate[sub], *pats,
3093 **pycompat.strkwargs(opts))
3093 **pycompat.strkwargs(opts))
3094 except KeyError:
3094 except KeyError:
3095 raise error.Abort("subrepository '%s' does not exist in %s!"
3095 raise error.Abort("subrepository '%s' does not exist in %s!"
3096 % (sub, short(ctx.node())))
3096 % (sub, short(ctx.node())))
3097
3097
3098 def _performrevert(repo, parents, ctx, names, uipathfn, actions,
3098 def _performrevert(repo, parents, ctx, names, uipathfn, actions,
3099 interactive=False, tobackup=None):
3099 interactive=False, tobackup=None):
3100 """function that actually perform all the actions computed for revert
3100 """function that actually perform all the actions computed for revert
3101
3101
3102 This is an independent function to let extension to plug in and react to
3102 This is an independent function to let extension to plug in and react to
3103 the imminent revert.
3103 the imminent revert.
3104
3104
3105 Make sure you have the working directory locked when calling this function.
3105 Make sure you have the working directory locked when calling this function.
3106 """
3106 """
3107 parent, p2 = parents
3107 parent, p2 = parents
3108 node = ctx.node()
3108 node = ctx.node()
3109 excluded_files = []
3109 excluded_files = []
3110
3110
3111 def checkout(f):
3111 def checkout(f):
3112 fc = ctx[f]
3112 fc = ctx[f]
3113 repo.wwrite(f, fc.data(), fc.flags())
3113 repo.wwrite(f, fc.data(), fc.flags())
3114
3114
3115 def doremove(f):
3115 def doremove(f):
3116 try:
3116 try:
3117 rmdir = repo.ui.configbool('experimental', 'removeemptydirs')
3117 rmdir = repo.ui.configbool('experimental', 'removeemptydirs')
3118 repo.wvfs.unlinkpath(f, rmdir=rmdir)
3118 repo.wvfs.unlinkpath(f, rmdir=rmdir)
3119 except OSError:
3119 except OSError:
3120 pass
3120 pass
3121 repo.dirstate.remove(f)
3121 repo.dirstate.remove(f)
3122
3122
3123 def prntstatusmsg(action, f):
3123 def prntstatusmsg(action, f):
3124 exact = names[f]
3124 exact = names[f]
3125 if repo.ui.verbose or not exact:
3125 if repo.ui.verbose or not exact:
3126 repo.ui.status(actions[action][1] % uipathfn(f))
3126 repo.ui.status(actions[action][1] % uipathfn(f))
3127
3127
3128 audit_path = pathutil.pathauditor(repo.root, cached=True)
3128 audit_path = pathutil.pathauditor(repo.root, cached=True)
3129 for f in actions['forget'][0]:
3129 for f in actions['forget'][0]:
3130 if interactive:
3130 if interactive:
3131 choice = repo.ui.promptchoice(
3131 choice = repo.ui.promptchoice(
3132 _("forget added file %s (Yn)?$$ &Yes $$ &No") % uipathfn(f))
3132 _("forget added file %s (Yn)?$$ &Yes $$ &No") % uipathfn(f))
3133 if choice == 0:
3133 if choice == 0:
3134 prntstatusmsg('forget', f)
3134 prntstatusmsg('forget', f)
3135 repo.dirstate.drop(f)
3135 repo.dirstate.drop(f)
3136 else:
3136 else:
3137 excluded_files.append(f)
3137 excluded_files.append(f)
3138 else:
3138 else:
3139 prntstatusmsg('forget', f)
3139 prntstatusmsg('forget', f)
3140 repo.dirstate.drop(f)
3140 repo.dirstate.drop(f)
3141 for f in actions['remove'][0]:
3141 for f in actions['remove'][0]:
3142 audit_path(f)
3142 audit_path(f)
3143 if interactive:
3143 if interactive:
3144 choice = repo.ui.promptchoice(
3144 choice = repo.ui.promptchoice(
3145 _("remove added file %s (Yn)?$$ &Yes $$ &No") % uipathfn(f))
3145 _("remove added file %s (Yn)?$$ &Yes $$ &No") % uipathfn(f))
3146 if choice == 0:
3146 if choice == 0:
3147 prntstatusmsg('remove', f)
3147 prntstatusmsg('remove', f)
3148 doremove(f)
3148 doremove(f)
3149 else:
3149 else:
3150 excluded_files.append(f)
3150 excluded_files.append(f)
3151 else:
3151 else:
3152 prntstatusmsg('remove', f)
3152 prntstatusmsg('remove', f)
3153 doremove(f)
3153 doremove(f)
3154 for f in actions['drop'][0]:
3154 for f in actions['drop'][0]:
3155 audit_path(f)
3155 audit_path(f)
3156 prntstatusmsg('drop', f)
3156 prntstatusmsg('drop', f)
3157 repo.dirstate.remove(f)
3157 repo.dirstate.remove(f)
3158
3158
3159 normal = None
3159 normal = None
3160 if node == parent:
3160 if node == parent:
3161 # We're reverting to our parent. If possible, we'd like status
3161 # We're reverting to our parent. If possible, we'd like status
3162 # to report the file as clean. We have to use normallookup for
3162 # to report the file as clean. We have to use normallookup for
3163 # merges to avoid losing information about merged/dirty files.
3163 # merges to avoid losing information about merged/dirty files.
3164 if p2 != nullid:
3164 if p2 != nullid:
3165 normal = repo.dirstate.normallookup
3165 normal = repo.dirstate.normallookup
3166 else:
3166 else:
3167 normal = repo.dirstate.normal
3167 normal = repo.dirstate.normal
3168
3168
3169 newlyaddedandmodifiedfiles = set()
3169 newlyaddedandmodifiedfiles = set()
3170 if interactive:
3170 if interactive:
3171 # Prompt the user for changes to revert
3171 # Prompt the user for changes to revert
3172 torevert = [f for f in actions['revert'][0] if f not in excluded_files]
3172 torevert = [f for f in actions['revert'][0] if f not in excluded_files]
3173 m = scmutil.matchfiles(repo, torevert)
3173 m = scmutil.matchfiles(repo, torevert)
3174 diffopts = patch.difffeatureopts(repo.ui, whitespace=True,
3174 diffopts = patch.difffeatureopts(repo.ui, whitespace=True,
3175 section='commands',
3175 section='commands',
3176 configprefix='revert.interactive.')
3176 configprefix='revert.interactive.')
3177 diffopts.nodates = True
3177 diffopts.nodates = True
3178 diffopts.git = True
3178 diffopts.git = True
3179 operation = 'apply'
3179 operation = 'apply'
3180 if node == parent:
3180 if node == parent:
3181 if repo.ui.configbool('experimental',
3181 if repo.ui.configbool('experimental',
3182 'revert.interactive.select-to-keep'):
3182 'revert.interactive.select-to-keep'):
3183 operation = 'keep'
3183 operation = 'keep'
3184 else:
3184 else:
3185 operation = 'discard'
3185 operation = 'discard'
3186
3186
3187 if operation == 'apply':
3187 if operation == 'apply':
3188 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3188 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3189 else:
3189 else:
3190 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3190 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3191 originalchunks = patch.parsepatch(diff)
3191 originalchunks = patch.parsepatch(diff)
3192
3192
3193 try:
3193 try:
3194
3194
3195 chunks, opts = recordfilter(repo.ui, originalchunks,
3195 chunks, opts = recordfilter(repo.ui, originalchunks,
3196 operation=operation)
3196 operation=operation)
3197 if operation == 'discard':
3197 if operation == 'discard':
3198 chunks = patch.reversehunks(chunks)
3198 chunks = patch.reversehunks(chunks)
3199
3199
3200 except error.PatchError as err:
3200 except error.PatchError as err:
3201 raise error.Abort(_('error parsing patch: %s') % err)
3201 raise error.Abort(_('error parsing patch: %s') % err)
3202
3202
3203 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3203 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3204 if tobackup is None:
3204 if tobackup is None:
3205 tobackup = set()
3205 tobackup = set()
3206 # Apply changes
3206 # Apply changes
3207 fp = stringio()
3207 fp = stringio()
3208 # chunks are serialized per file, but files aren't sorted
3208 # chunks are serialized per file, but files aren't sorted
3209 for f in sorted(set(c.header.filename() for c in chunks if ishunk(c))):
3209 for f in sorted(set(c.header.filename() for c in chunks if ishunk(c))):
3210 prntstatusmsg('revert', f)
3210 prntstatusmsg('revert', f)
3211 files = set()
3211 files = set()
3212 for c in chunks:
3212 for c in chunks:
3213 if ishunk(c):
3213 if ishunk(c):
3214 abs = c.header.filename()
3214 abs = c.header.filename()
3215 # Create a backup file only if this hunk should be backed up
3215 # Create a backup file only if this hunk should be backed up
3216 if c.header.filename() in tobackup:
3216 if c.header.filename() in tobackup:
3217 target = repo.wjoin(abs)
3217 target = repo.wjoin(abs)
3218 bakname = scmutil.backuppath(repo.ui, repo, abs)
3218 bakname = scmutil.backuppath(repo.ui, repo, abs)
3219 util.copyfile(target, bakname)
3219 util.copyfile(target, bakname)
3220 tobackup.remove(abs)
3220 tobackup.remove(abs)
3221 if abs not in files:
3221 if abs not in files:
3222 files.add(abs)
3222 files.add(abs)
3223 if operation == 'keep':
3223 if operation == 'keep':
3224 checkout(abs)
3224 checkout(abs)
3225 c.write(fp)
3225 c.write(fp)
3226 dopatch = fp.tell()
3226 dopatch = fp.tell()
3227 fp.seek(0)
3227 fp.seek(0)
3228 if dopatch:
3228 if dopatch:
3229 try:
3229 try:
3230 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3230 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3231 except error.PatchError as err:
3231 except error.PatchError as err:
3232 raise error.Abort(pycompat.bytestr(err))
3232 raise error.Abort(pycompat.bytestr(err))
3233 del fp
3233 del fp
3234 else:
3234 else:
3235 for f in actions['revert'][0]:
3235 for f in actions['revert'][0]:
3236 prntstatusmsg('revert', f)
3236 prntstatusmsg('revert', f)
3237 checkout(f)
3237 checkout(f)
3238 if normal:
3238 if normal:
3239 normal(f)
3239 normal(f)
3240
3240
3241 for f in actions['add'][0]:
3241 for f in actions['add'][0]:
3242 # Don't checkout modified files, they are already created by the diff
3242 # Don't checkout modified files, they are already created by the diff
3243 if f not in newlyaddedandmodifiedfiles:
3243 if f not in newlyaddedandmodifiedfiles:
3244 prntstatusmsg('add', f)
3244 prntstatusmsg('add', f)
3245 checkout(f)
3245 checkout(f)
3246 repo.dirstate.add(f)
3246 repo.dirstate.add(f)
3247
3247
3248 normal = repo.dirstate.normallookup
3248 normal = repo.dirstate.normallookup
3249 if node == parent and p2 == nullid:
3249 if node == parent and p2 == nullid:
3250 normal = repo.dirstate.normal
3250 normal = repo.dirstate.normal
3251 for f in actions['undelete'][0]:
3251 for f in actions['undelete'][0]:
3252 if interactive:
3252 if interactive:
3253 choice = repo.ui.promptchoice(
3253 choice = repo.ui.promptchoice(
3254 _("add back removed file %s (Yn)?$$ &Yes $$ &No") % f)
3254 _("add back removed file %s (Yn)?$$ &Yes $$ &No") % f)
3255 if choice == 0:
3255 if choice == 0:
3256 prntstatusmsg('undelete', f)
3256 prntstatusmsg('undelete', f)
3257 checkout(f)
3257 checkout(f)
3258 normal(f)
3258 normal(f)
3259 else:
3259 else:
3260 excluded_files.append(f)
3260 excluded_files.append(f)
3261 else:
3261 else:
3262 prntstatusmsg('undelete', f)
3262 prntstatusmsg('undelete', f)
3263 checkout(f)
3263 checkout(f)
3264 normal(f)
3264 normal(f)
3265
3265
3266 copied = copies.pathcopies(repo[parent], ctx)
3266 copied = copies.pathcopies(repo[parent], ctx)
3267
3267
3268 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3268 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3269 if f in copied:
3269 if f in copied:
3270 repo.dirstate.copy(copied[f], f)
3270 repo.dirstate.copy(copied[f], f)
3271
3271
3272 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3272 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3273 # commands.outgoing. "missing" is "missing" of the result of
3273 # commands.outgoing. "missing" is "missing" of the result of
3274 # "findcommonoutgoing()"
3274 # "findcommonoutgoing()"
3275 outgoinghooks = util.hooks()
3275 outgoinghooks = util.hooks()
3276
3276
3277 # a list of (ui, repo) functions called by commands.summary
3277 # a list of (ui, repo) functions called by commands.summary
3278 summaryhooks = util.hooks()
3278 summaryhooks = util.hooks()
3279
3279
3280 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3280 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3281 #
3281 #
3282 # functions should return tuple of booleans below, if 'changes' is None:
3282 # functions should return tuple of booleans below, if 'changes' is None:
3283 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3283 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3284 #
3284 #
3285 # otherwise, 'changes' is a tuple of tuples below:
3285 # otherwise, 'changes' is a tuple of tuples below:
3286 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3286 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3287 # - (desturl, destbranch, destpeer, outgoing)
3287 # - (desturl, destbranch, destpeer, outgoing)
3288 summaryremotehooks = util.hooks()
3288 summaryremotehooks = util.hooks()
3289
3289
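The three `util.hooks()` registries above are extension points: an extension registers a callback once, and core code later fans a single call out to every registered callback with the same arguments. A minimal standalone sketch of that registry pattern (the class and names here are illustrative, not Mercurial's actual `util.hooks` implementation):

```python
class Hooks:
    """Collect callbacks; calling the instance invokes every one."""

    def __init__(self):
        self._hooks = []

    def add(self, source, hook):
        # 'source' names the registering extension; hooks run in a
        # stable order sorted by that name.
        self._hooks.append((source, hook))

    def __call__(self, *args):
        # Invoke each registered hook with the same arguments and
        # collect the individual results.
        return [hook(*args)
                for _src, hook in sorted(self._hooks, key=lambda h: h[0])]

# Two "extensions" register summary callbacks; one call reaches both.
summary_hooks = Hooks()
summary_hooks.add('extB', lambda ui, repo: 'B saw %s' % repo)
summary_hooks.add('extA', lambda ui, repo: 'A saw %s' % repo)
results = summary_hooks(None, 'myrepo')
```

With no hooks registered, calling the instance is a cheap no-op that returns an empty list, which is why core code can call these registries unconditionally.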
# A list of state files kept by multistep operations like graft.
# Since graft cannot be aborted, it is considered 'clearable' by update.
# note: bisect is intentionally excluded
# (state file, clearable, allowcommit, error, hint)
unfinishedstates = [
    ('graftstate', True, False, _('graft in progress'),
     _("use 'hg graft --continue' or 'hg graft --stop' to stop")),
    ('updatestate', True, False, _('last update was interrupted'),
     _("use 'hg update' to get a consistent checkout"))
    ]

def checkunfinished(repo, commit=False):
    '''Look for an unfinished multistep operation, like graft, and abort
    if found. It's probably good to check this right before
    bailifchanged().
    '''
    # Check for non-clearable states first, so things like rebase will take
    # precedence over update.
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if clearable or (commit and allowcommit):
            continue
        if repo.vfs.exists(f):
            raise error.Abort(msg, hint=hint)

    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if not clearable or (commit and allowcommit):
            continue
        if repo.vfs.exists(f):
            raise error.Abort(msg, hint=hint)

def clearunfinished(repo):
    '''Check for unfinished operations (as above), and clear the ones
    that are clearable.
    '''
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if not clearable and repo.vfs.exists(f):
            raise error.Abort(msg, hint=hint)
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if clearable and repo.vfs.exists(f):
            util.unlink(repo.vfs.join(f))

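`checkunfinished` scans `unfinishedstates` in two passes, non-clearable states first, and aborts on the first state file found in `.hg/`. A self-contained sketch of that two-pass scan, with file existence faked by a set; the function and variable names here are illustrative, not Mercurial API:

```python
# (state file, clearable, allowcommit, error, hint) -- mirrors the
# tuple layout of cmdutil.unfinishedstates above.
unfinished_states = [
    ('graftstate', True, False, 'graft in progress',
     "use 'hg graft --continue' or 'hg graft --stop' to stop"),
    ('updatestate', True, False, 'last update was interrupted',
     "use 'hg update' to get a consistent checkout"),
]

def check_unfinished(existing_files, commit=False):
    """Return (error, hint) for the first blocking state file, else None.

    Non-clearable states are scanned first, mirroring checkunfinished(),
    so they take precedence over clearable ones like an interrupted update.
    """
    for clearable_pass in (False, True):
        for f, clearable, allowcommit, msg, hint in unfinished_states:
            if clearable != clearable_pass or (commit and allowcommit):
                continue
            if f in existing_files:
                return (msg, hint)
    return None

# A leftover graftstate file blocks the operation:
check_unfinished({'graftstate'})  # -> ('graft in progress', "use 'hg graft ...")
```

Real callers pass `commit=True` from commit paths so that states with `allowcommit` set do not block committing.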
afterresolvedstates = [
    ('graftstate',
     _('hg graft --continue')),
    ]

def howtocontinue(repo):
    '''Check for an unfinished operation and return the command to finish
    it.

    afterresolvedstates tuples define a .hg/{file} and the corresponding
    command needed to finish it.

    Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
    a boolean.
    '''
    contmsg = _("continue: %s")
    for f, msg in afterresolvedstates:
        if repo.vfs.exists(f):
            return contmsg % msg, True
    if repo[None].dirty(missing=True, merge=False, branch=False):
        return contmsg % _("hg commit"), False
    return None, None

def checkafterresolved(repo):
    '''Inform the user about the next action after completing hg resolve

    If there's a matching afterresolvedstates, howtocontinue will yield
    repo.ui.warn as the reporter.

    Otherwise, it will yield repo.ui.note.
    '''
    msg, warning = howtocontinue(repo)
    if msg is not None:
        if warning:
            repo.ui.warn("%s\n" % msg)
        else:
            repo.ui.note("%s\n" % msg)

def wrongtooltocontinue(repo, task):
    '''Raise an abort suggesting how to properly continue if there is an
    active task.

    Uses howtocontinue() to find the active task.

    If there's no task (repo.ui.note for 'hg commit'), it does not offer
    a hint.
    '''
    after = howtocontinue(repo)
    hint = None
    if after[1]:
        hint = after[0]
    raise error.Abort(_('no %s in progress') % task, hint=hint)
@@ -1,2550 +1,2550
1 # context.py - changeset and file context objects for mercurial
1 # context.py - changeset and file context objects for mercurial
2 #
2 #
3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import filecmp
11 import filecmp
12 import os
12 import os
13 import stat
13 import stat
14
14
15 from .i18n import _
15 from .i18n import _
16 from .node import (
16 from .node import (
17 addednodeid,
17 addednodeid,
18 hex,
18 hex,
19 modifiednodeid,
19 modifiednodeid,
20 nullid,
20 nullid,
21 nullrev,
21 nullrev,
22 short,
22 short,
23 wdirfilenodeids,
23 wdirfilenodeids,
24 wdirhex,
24 wdirhex,
25 )
25 )
26 from . import (
26 from . import (
27 dagop,
27 dagop,
28 encoding,
28 encoding,
29 error,
29 error,
30 fileset,
30 fileset,
31 match as matchmod,
31 match as matchmod,
32 obsolete as obsmod,
32 obsolete as obsmod,
33 patch,
33 patch,
34 pathutil,
34 pathutil,
35 phases,
35 phases,
36 pycompat,
36 pycompat,
37 repoview,
37 repoview,
38 scmutil,
38 scmutil,
39 sparse,
39 sparse,
40 subrepo,
40 subrepo,
41 subrepoutil,
41 subrepoutil,
42 util,
42 util,
43 )
43 )
44 from .utils import (
44 from .utils import (
45 dateutil,
45 dateutil,
46 stringutil,
46 stringutil,
47 )
47 )
48
48
49 propertycache = util.propertycache
49 propertycache = util.propertycache
50
50
51 class basectx(object):
51 class basectx(object):
52 """A basectx object represents the common logic for its children:
52 """A basectx object represents the common logic for its children:
53 changectx: read-only context that is already present in the repo,
53 changectx: read-only context that is already present in the repo,
54 workingctx: a context that represents the working directory and can
54 workingctx: a context that represents the working directory and can
55 be committed,
55 be committed,
56 memctx: a context that represents changes in-memory and can also
56 memctx: a context that represents changes in-memory and can also
57 be committed."""
57 be committed."""
58
58
59 def __init__(self, repo):
59 def __init__(self, repo):
60 self._repo = repo
60 self._repo = repo
61
61
62 def __bytes__(self):
62 def __bytes__(self):
63 return short(self.node())
63 return short(self.node())
64
64
65 __str__ = encoding.strmethod(__bytes__)
65 __str__ = encoding.strmethod(__bytes__)
66
66
67 def __repr__(self):
67 def __repr__(self):
68 return r"<%s %s>" % (type(self).__name__, str(self))
68 return r"<%s %s>" % (type(self).__name__, str(self))
69
69
70 def __eq__(self, other):
70 def __eq__(self, other):
71 try:
71 try:
72 return type(self) == type(other) and self._rev == other._rev
72 return type(self) == type(other) and self._rev == other._rev
73 except AttributeError:
73 except AttributeError:
74 return False
74 return False
75
75
76 def __ne__(self, other):
76 def __ne__(self, other):
77 return not (self == other)
77 return not (self == other)
78
78
79 def __contains__(self, key):
79 def __contains__(self, key):
80 return key in self._manifest
80 return key in self._manifest
81
81
82 def __getitem__(self, key):
82 def __getitem__(self, key):
83 return self.filectx(key)
83 return self.filectx(key)
84
84
85 def __iter__(self):
85 def __iter__(self):
86 return iter(self._manifest)
86 return iter(self._manifest)
87
87
88 def _buildstatusmanifest(self, status):
88 def _buildstatusmanifest(self, status):
89 """Builds a manifest that includes the given status results, if this is
89 """Builds a manifest that includes the given status results, if this is
90 a working copy context. For non-working copy contexts, it just returns
90 a working copy context. For non-working copy contexts, it just returns
91 the normal manifest."""
91 the normal manifest."""
92 return self.manifest()
92 return self.manifest()
93
93
94 def _matchstatus(self, other, match):
94 def _matchstatus(self, other, match):
95 """This internal method provides a way for child objects to override the
95 """This internal method provides a way for child objects to override the
96 match operator.
96 match operator.
97 """
97 """
98 return match
98 return match
99
99
100 def _buildstatus(self, other, s, match, listignored, listclean,
100 def _buildstatus(self, other, s, match, listignored, listclean,
101 listunknown):
101 listunknown):
102 """build a status with respect to another context"""
102 """build a status with respect to another context"""
103 # Load earliest manifest first for caching reasons. More specifically,
103 # Load earliest manifest first for caching reasons. More specifically,
104 # if you have revisions 1000 and 1001, 1001 is probably stored as a
104 # if you have revisions 1000 and 1001, 1001 is probably stored as a
105 # delta against 1000. Thus, if you read 1000 first, we'll reconstruct
105 # delta against 1000. Thus, if you read 1000 first, we'll reconstruct
106 # 1000 and cache it so that when you read 1001, we just need to apply a
106 # 1000 and cache it so that when you read 1001, we just need to apply a
107 # delta to what's in the cache. So that's one full reconstruction + one
        # delta to what's in the cache. So that's one full reconstruction + one
        # delta application.
        mf2 = None
        if self.rev() is not None and self.rev() < other.rev():
            mf2 = self._buildstatusmanifest(s)
        mf1 = other._buildstatusmanifest(s)
        if mf2 is None:
            mf2 = self._buildstatusmanifest(s)

        modified, added = [], []
        removed = []
        clean = []
        deleted, unknown, ignored = s.deleted, s.unknown, s.ignored
        deletedset = set(deleted)
        d = mf1.diff(mf2, match=match, clean=listclean)
        for fn, value in d.iteritems():
            if fn in deletedset:
                continue
            if value is None:
                clean.append(fn)
                continue
            (node1, flag1), (node2, flag2) = value
            if node1 is None:
                added.append(fn)
            elif node2 is None:
                removed.append(fn)
            elif flag1 != flag2:
                modified.append(fn)
            elif node2 not in wdirfilenodeids:
                # When comparing files between two commits, we save time by
                # not comparing the file contents when the nodeids differ.
                # Note that this means we incorrectly report a reverted change
                # to a file as a modification.
                modified.append(fn)
            elif self[fn].cmp(other[fn]):
                modified.append(fn)
            else:
                clean.append(fn)

        if removed:
            # need to filter files if they are already reported as removed
            unknown = [fn for fn in unknown if fn not in mf1 and
                       (not match or match(fn))]
            ignored = [fn for fn in ignored if fn not in mf1 and
                       (not match or match(fn))]
            # if they're deleted, don't report them as removed
            removed = [fn for fn in removed if fn not in deletedset]

        return scmutil.status(modified, added, removed, deleted, unknown,
                              ignored, clean)
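The classification loop above can be sketched in isolation. This is an editorial illustration, not Mercurial API: `classify`, its argument shapes, and the sample entries are hypothetical stand-ins for what `mf1.diff(mf2)` produces.

```python
# Hypothetical sketch of the manifest-diff classification above.
# diff maps filename -> ((node1, flag1), (node2, flag2)), or None when the
# file is present and identical on both sides (only emitted with clean=True).
def classify(diff, deletedset):
    modified, added, removed, clean = [], [], [], []
    for fn, value in diff.items():
        if fn in deletedset:
            continue                 # deleted files are reported separately
        if value is None:
            clean.append(fn)         # identical on both sides
            continue
        (node1, flag1), (node2, flag2) = value
        if node1 is None:
            added.append(fn)         # only exists in the newer manifest
        elif node2 is None:
            removed.append(fn)       # only exists in the older manifest
        else:
            modified.append(fn)      # nodeid or flags differ
    return modified, added, removed, clean
```

For example, an entry whose first nodeid is `None` is reported as added, mirroring the `node1 is None` branch in the real loop.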

    @propertycache
    def substate(self):
        return subrepoutil.state(self, self._repo.ui)

    def subrev(self, subpath):
        return self.substate[subpath][1]

    def rev(self):
        return self._rev
    def node(self):
        return self._node
    def hex(self):
        return hex(self.node())
    def manifest(self):
        return self._manifest
    def manifestctx(self):
        return self._manifestctx
    def repo(self):
        return self._repo
    def phasestr(self):
        return phases.phasenames[self.phase()]
    def mutable(self):
        return self.phase() > phases.public

    def matchfileset(self, expr, badfn=None):
        return fileset.match(self, expr, badfn=badfn)

    def obsolete(self):
        """True if the changeset is obsolete"""
        return self.rev() in obsmod.getrevs(self._repo, 'obsolete')

    def extinct(self):
        """True if the changeset is extinct"""
        return self.rev() in obsmod.getrevs(self._repo, 'extinct')

    def orphan(self):
        """True if the changeset is not obsolete, but its ancestor is"""
        return self.rev() in obsmod.getrevs(self._repo, 'orphan')

    def phasedivergent(self):
        """True if the changeset tries to be a successor of a public changeset

        Only non-public and non-obsolete changesets may be phase-divergent.
        """
        return self.rev() in obsmod.getrevs(self._repo, 'phasedivergent')

    def contentdivergent(self):
        """Is a successor of a changeset with multiple possible successor sets

        Only non-public and non-obsolete changesets may be content-divergent.
        """
        return self.rev() in obsmod.getrevs(self._repo, 'contentdivergent')

    def isunstable(self):
        """True if the changeset is either orphan, phase-divergent or
        content-divergent"""
        return self.orphan() or self.phasedivergent() or self.contentdivergent()

    def instabilities(self):
        """return the list of instabilities affecting this changeset.

        Instabilities are returned as strings. Possible values are:
        - orphan,
        - phase-divergent,
        - content-divergent.
        """
        instabilities = []
        if self.orphan():
            instabilities.append('orphan')
        if self.phasedivergent():
            instabilities.append('phase-divergent')
        if self.contentdivergent():
            instabilities.append('content-divergent')
        return instabilities

    def parents(self):
        """return contexts for each parent changeset"""
        return self._parents

    def p1(self):
        return self._parents[0]

    def p2(self):
        parents = self._parents
        if len(parents) == 2:
            return parents[1]
        return self._repo[nullrev]

    def _fileinfo(self, path):
        if r'_manifest' in self.__dict__:
            try:
                return self._manifest[path], self._manifest.flags(path)
            except KeyError:
                raise error.ManifestLookupError(self._node, path,
                                                _('not found in manifest'))
        if r'_manifestdelta' in self.__dict__ or path in self.files():
            if path in self._manifestdelta:
                return (self._manifestdelta[path],
                        self._manifestdelta.flags(path))
        mfl = self._repo.manifestlog
        try:
            node, flag = mfl[self._changeset.manifest].find(path)
        except KeyError:
            raise error.ManifestLookupError(self._node, path,
                                            _('not found in manifest'))

        return node, flag

    def filenode(self, path):
        return self._fileinfo(path)[0]

    def flags(self, path):
        try:
            return self._fileinfo(path)[1]
        except error.LookupError:
            return ''

    def sub(self, path, allowcreate=True):
        '''return a subrepo for the stored revision of path, never wdir()'''
        return subrepo.subrepo(self, path, allowcreate=allowcreate)

    def nullsub(self, path, pctx):
        return subrepo.nullsubrepo(self, path, pctx)

    def workingsub(self, path):
        '''return a subrepo for the stored revision, or wdir if this is a wdir
        context.
        '''
        return subrepo.subrepo(self, path, allowwdir=True)

    def match(self, pats=None, include=None, exclude=None, default='glob',
              listsubrepos=False, badfn=None):
        r = self._repo
        return matchmod.match(r.root, r.getcwd(), pats,
                              include, exclude, default,
                              auditor=r.nofsauditor, ctx=self,
                              listsubrepos=listsubrepos, badfn=badfn)

    def diff(self, ctx2=None, match=None, changes=None, opts=None,
             losedatafn=None, pathfn=None, copy=None,
             copysourcematch=None, hunksfilterfn=None):
        """Returns a diff generator for the given contexts and matcher"""
        if ctx2 is None:
            ctx2 = self.p1()
        if ctx2 is not None:
            ctx2 = self._repo[ctx2]
        return patch.diff(self._repo, ctx2, self, match=match, changes=changes,
                          opts=opts, losedatafn=losedatafn, pathfn=pathfn,
                          copy=copy, copysourcematch=copysourcematch,
                          hunksfilterfn=hunksfilterfn)

    def dirs(self):
        return self._manifest.dirs()

    def hasdir(self, dir):
        return self._manifest.hasdir(dir)

    def status(self, other=None, match=None, listignored=False,
               listclean=False, listunknown=False, listsubrepos=False):
        """return status of files between two nodes or node and working
        directory.

        If other is None, compare this node with working directory.

        returns (modified, added, removed, deleted, unknown, ignored, clean)
        """

        ctx1 = self
        ctx2 = self._repo[other]

        # This next code block is, admittedly, fragile logic that tests for
        # reversing the contexts and wouldn't need to exist if it weren't for
        # the fast (and common) code path of comparing the working directory
        # with its first parent.
        #
        # What we're aiming for here is the ability to call:
        #
        # workingctx.status(parentctx)
        #
        # If we always built the manifest for each context and compared those,
        # then we'd be done. But the special case of the above call means we
        # just copy the manifest of the parent.
        reversed = False
        if (not isinstance(ctx1, changectx)
            and isinstance(ctx2, changectx)):
            reversed = True
            ctx1, ctx2 = ctx2, ctx1

        match = self._repo.narrowmatch(match)
        match = ctx2._matchstatus(ctx1, match)
        r = scmutil.status([], [], [], [], [], [], [])
        r = ctx2._buildstatus(ctx1, r, match, listignored, listclean,
                              listunknown)

        if reversed:
            # Reverse added and removed. Clear deleted, unknown and ignored as
            # these make no sense to reverse.
            r = scmutil.status(r.modified, r.removed, r.added, [], [], [],
                               r.clean)

        if listsubrepos:
            for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
                try:
                    rev2 = ctx2.subrev(subpath)
                except KeyError:
                    # A subrepo that existed in node1 was deleted between
                    # node1 and node2 (inclusive). Thus, ctx2's substate
                    # won't contain that subpath. The best we can do is
                    # ignore it.
                    rev2 = None
                submatch = matchmod.subdirmatcher(subpath, match)
                s = sub.status(rev2, match=submatch, ignored=listignored,
                               clean=listclean, unknown=listunknown,
                               listsubrepos=True)
                for rfiles, sfiles in zip(r, s):
                    rfiles.extend("%s/%s" % (subpath, f) for f in sfiles)

        for l in r:
            l.sort()

        return r

class changectx(basectx):
    """A changecontext object makes access to data related to a particular
    changeset convenient. It represents a read-only context already present in
    the repo."""
    def __init__(self, repo, rev, node):
        super(changectx, self).__init__(repo)
        self._rev = rev
        self._node = node

    def __hash__(self):
        try:
            return hash(self._rev)
        except AttributeError:
            return id(self)

    def __nonzero__(self):
        return self._rev != nullrev

    __bool__ = __nonzero__

    @propertycache
    def _changeset(self):
        return self._repo.changelog.changelogrevision(self.rev())

    @propertycache
    def _manifest(self):
        return self._manifestctx.read()

    @property
    def _manifestctx(self):
        return self._repo.manifestlog[self._changeset.manifest]

    @propertycache
    def _manifestdelta(self):
        return self._manifestctx.readdelta()

    @propertycache
    def _parents(self):
        repo = self._repo
        p1, p2 = repo.changelog.parentrevs(self._rev)
        if p2 == nullrev:
            return [repo[p1]]
        return [repo[p1], repo[p2]]

    def changeset(self):
        c = self._changeset
        return (
            c.manifest,
            c.user,
            c.date,
            c.files,
            c.description,
            c.extra,
        )
    def manifestnode(self):
        return self._changeset.manifest

    def user(self):
        return self._changeset.user
    def date(self):
        return self._changeset.date
    def files(self):
        return self._changeset.files
    @propertycache
    def _copies(self):
        p1copies = {}
        p2copies = {}
        p1 = self.p1()
        p2 = self.p2()
        narrowmatch = self._repo.narrowmatch()
        for dst in self.files():
            if not narrowmatch(dst) or dst not in self:
                continue
            copied = self[dst].renamed()
            if not copied:
                continue
            src, srcnode = copied
            if src in p1 and p1[src].filenode() == srcnode:
                p1copies[dst] = src
            elif src in p2 and p2[src].filenode() == srcnode:
                p2copies[dst] = src
        return p1copies, p2copies
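`_copies` above credits each copy record to the parent that actually contains the source file at the recorded filenode. A minimal standalone sketch of that attribution, with hypothetical names and plain dicts standing in for the parent manifests:

```python
# Hypothetical sketch (not Mercurial API): records maps dst -> (src, srcnode);
# p1files/p2files map filename -> filenode for each parent. A copy is credited
# to whichever parent holds src at exactly the recorded node, mirroring the
# filenode() comparison in _copies above.
def split_copies(records, p1files, p2files):
    p1copies, p2copies = {}, {}
    for dst, (src, srcnode) in records.items():
        if p1files.get(src) == srcnode:
            p1copies[dst] = src
        elif p2files.get(src) == srcnode:
            p2copies[dst] = src
    return p1copies, p2copies
```

Records whose source matches neither parent's node are silently dropped, just as the real loop skips them.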
    def p1copies(self):
        return self._copies[0]
    def p2copies(self):
        return self._copies[1]
    def description(self):
        return self._changeset.description
    def branch(self):
        return encoding.tolocal(self._changeset.extra.get("branch"))
    def closesbranch(self):
        return 'close' in self._changeset.extra
    def extra(self):
        """Return a dict of extra information."""
        return self._changeset.extra
    def tags(self):
        """Return a list of byte tag names"""
        return self._repo.nodetags(self._node)
    def bookmarks(self):
        """Return a list of byte bookmark names."""
        return self._repo.nodebookmarks(self._node)
    def phase(self):
        return self._repo._phasecache.phase(self._repo, self._rev)
    def hidden(self):
        return self._rev in repoview.filterrevs(self._repo, 'visible')

    def isinmemory(self):
        return False

    def children(self):
        """return list of changectx contexts for each child changeset.

        This returns only the immediate child changesets. Use descendants() to
        recursively walk children.
        """
        c = self._repo.changelog.children(self._node)
        return [self._repo[x] for x in c]

    def ancestors(self):
        for a in self._repo.changelog.ancestors([self._rev]):
            yield self._repo[a]

    def descendants(self):
        """Recursively yield all children of the changeset.

        For just the immediate children, use children()
        """
        for d in self._repo.changelog.descendants([self._rev]):
            yield self._repo[d]

    def filectx(self, path, fileid=None, filelog=None):
        """get a file context from this changeset"""
        if fileid is None:
            fileid = self.filenode(path)
        return filectx(self._repo, path, fileid=fileid,
                       changectx=self, filelog=filelog)

    def ancestor(self, c2, warn=False):
        """return the "best" ancestor context of self and c2

        If there are multiple candidates, it will show a message and check
        merge.preferancestor configuration before falling back to the
        revlog ancestor."""
        # deal with workingctxs
        n2 = c2._node
        if n2 is None:
            n2 = c2._parents[0]._node
        cahs = self._repo.changelog.commonancestorsheads(self._node, n2)
        if not cahs:
            anc = nullid
        elif len(cahs) == 1:
            anc = cahs[0]
        else:
            # experimental config: merge.preferancestor
            for r in self._repo.ui.configlist('merge', 'preferancestor'):
                try:
                    ctx = scmutil.revsymbol(self._repo, r)
                except error.RepoLookupError:
                    continue
                anc = ctx.node()
                if anc in cahs:
                    break
            else:
                anc = self._repo.changelog.ancestor(self._node, n2)
            if warn:
                self._repo.ui.status(
                    (_("note: using %s as ancestor of %s and %s\n") %
                     (short(anc), short(self._node), short(n2))) +
                    ''.join(_(" alternatively, use --config "
                              "merge.preferancestor=%s\n") %
                            short(n) for n in sorted(cahs) if n != anc))
        return self._repo[anc]
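The multi-candidate branch of `ancestor()` boils down to a preference scan over the common-ancestor heads. A hedged standalone sketch (`pick_ancestor` is a hypothetical name; `cahs` stands in for the output of `commonancestorsheads()`):

```python
# Hypothetical sketch of the merge.preferancestor loop above: try each
# configured candidate in order, keep the first that is actually one of the
# common-ancestor heads, and otherwise fall back to a default (the revlog
# ancestor in the real code).
def pick_ancestor(cahs, preferred, fallback):
    for cand in preferred:
        if cand in cahs:
            return cand
    return fallback
```

This mirrors the for/else construct in the method: the `else` clause runs only when no configured candidate hit one of the heads.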

    def isancestorof(self, other):
        """True if this changeset is an ancestor of other"""
        return self._repo.changelog.isancestorrev(self._rev, other._rev)

    def walk(self, match):
        '''Generates matching file names.'''

        # Wrap match.bad method to have message with nodeid
        def bad(fn, msg):
            # The manifest doesn't know about subrepos, so don't complain about
            # paths into valid subrepos.
            if any(fn == s or fn.startswith(s + '/')
                   for s in self.substate):
                return
            match.bad(fn, _('no such file in rev %s') % self)

        m = matchmod.badmatch(self._repo.narrowmatch(match), bad)
        return self._manifest.walk(m)

    def matches(self, match):
        return self.walk(match)

class basefilectx(object):
    """A filecontext object represents the common logic for its children:
    filectx: read-only access to a filerevision that is already present
             in the repo,
    workingfilectx: a filecontext that represents files from the working
                    directory,
    memfilectx: a filecontext that represents files in-memory,
    """
582 @propertycache
582 @propertycache
583 def _filelog(self):
583 def _filelog(self):
584 return self._repo.file(self._path)
584 return self._repo.file(self._path)
585
585
586 @propertycache
586 @propertycache
587 def _changeid(self):
587 def _changeid(self):
588 if r'_changectx' in self.__dict__:
588 if r'_changectx' in self.__dict__:
589 return self._changectx.rev()
589 return self._changectx.rev()
590 elif r'_descendantrev' in self.__dict__:
590 elif r'_descendantrev' in self.__dict__:
591 # this file context was created from a revision with a known
591 # this file context was created from a revision with a known
592 # descendant, we can (lazily) correct for linkrev aliases
592 # descendant, we can (lazily) correct for linkrev aliases
593 return self._adjustlinkrev(self._descendantrev)
593 return self._adjustlinkrev(self._descendantrev)
594 else:
594 else:
595 return self._filelog.linkrev(self._filerev)
595 return self._filelog.linkrev(self._filerev)
596
596
597 @propertycache
597 @propertycache
598 def _filenode(self):
598 def _filenode(self):
599 if r'_fileid' in self.__dict__:
599 if r'_fileid' in self.__dict__:
600 return self._filelog.lookup(self._fileid)
600 return self._filelog.lookup(self._fileid)
601 else:
601 else:
602 return self._changectx.filenode(self._path)
602 return self._changectx.filenode(self._path)
603
603
604 @propertycache
604 @propertycache
605 def _filerev(self):
605 def _filerev(self):
606 return self._filelog.rev(self._filenode)
606 return self._filelog.rev(self._filenode)
607
607
608 @propertycache
608 @propertycache
609 def _repopath(self):
609 def _repopath(self):
610 return self._path
610 return self._path
611
611
612 def __nonzero__(self):
612 def __nonzero__(self):
613 try:
613 try:
614 self._filenode
614 self._filenode
615 return True
615 return True
616 except error.LookupError:
616 except error.LookupError:
617 # file is missing
617 # file is missing
618 return False
618 return False
619
619
620 __bool__ = __nonzero__
620 __bool__ = __nonzero__
621
621
622 def __bytes__(self):
622 def __bytes__(self):
623 try:
623 try:
624 return "%s@%s" % (self.path(), self._changectx)
624 return "%s@%s" % (self.path(), self._changectx)
625 except error.LookupError:
625 except error.LookupError:
626 return "%s@???" % self.path()
626 return "%s@???" % self.path()
627
627
628 __str__ = encoding.strmethod(__bytes__)
628 __str__ = encoding.strmethod(__bytes__)
629
629
630 def __repr__(self):
630 def __repr__(self):
        return r"<%s %s>" % (type(self).__name__, str(self))

    def __hash__(self):
        try:
            return hash((self._path, self._filenode))
        except AttributeError:
            return id(self)

    def __eq__(self, other):
        try:
            return (type(self) == type(other) and self._path == other._path
                    and self._filenode == other._filenode)
        except AttributeError:
            return False

    def __ne__(self, other):
        return not (self == other)

    def filerev(self):
        return self._filerev
    def filenode(self):
        return self._filenode
    @propertycache
    def _flags(self):
        return self._changectx.flags(self._path)
    def flags(self):
        return self._flags
    def filelog(self):
        return self._filelog
    def rev(self):
        return self._changeid
    def linkrev(self):
        return self._filelog.linkrev(self._filerev)
    def node(self):
        return self._changectx.node()
    def hex(self):
        return self._changectx.hex()
    def user(self):
        return self._changectx.user()
    def date(self):
        return self._changectx.date()
    def files(self):
        return self._changectx.files()
    def description(self):
        return self._changectx.description()
    def branch(self):
        return self._changectx.branch()
    def extra(self):
        return self._changectx.extra()
    def phase(self):
        return self._changectx.phase()
    def phasestr(self):
        return self._changectx.phasestr()
    def obsolete(self):
        return self._changectx.obsolete()
    def instabilities(self):
        return self._changectx.instabilities()
    def manifest(self):
        return self._changectx.manifest()
    def changectx(self):
        return self._changectx
    def renamed(self):
        return self._copied
    def copysource(self):
        return self._copied and self._copied[0]
    def repo(self):
        return self._repo
    def size(self):
        return len(self.data())

    def path(self):
        return self._path

    def isbinary(self):
        try:
            return stringutil.binary(self.data())
        except IOError:
            return False
    def isexec(self):
        return 'x' in self.flags()
    def islink(self):
        return 'l' in self.flags()

    def isabsent(self):
        """whether this filectx represents a file not in self._changectx

        This is mainly for merge code to detect change/delete conflicts. This is
        expected to be True for all subclasses of basectx."""
        return False

    _customcmp = False
    def cmp(self, fctx):
        """compare with other file context

        returns True if different than fctx.
        """
        if fctx._customcmp:
            return fctx.cmp(self)

        if self._filenode is None:
            raise error.ProgrammingError(
                'filectx.cmp() must be reimplemented if not backed by revlog')

        if fctx._filenode is None:
            if self._repo._encodefilterpats:
                # can't rely on size() because wdir content may be decoded
                return self._filelog.cmp(self._filenode, fctx.data())
            if self.size() - 4 == fctx.size():
                # size() can match:
                # if file data starts with '\1\n', empty metadata block is
                # prepended, which adds 4 bytes to filelog.size().
                return self._filelog.cmp(self._filenode, fctx.data())
            if self.size() == fctx.size():
                # size() matches: need to compare content
                return self._filelog.cmp(self._filenode, fctx.data())

        # size() differs
        return True

    def _adjustlinkrev(self, srcrev, inclusive=False, stoprev=None):
        """return the first ancestor of <srcrev> introducing <fnode>

        If the linkrev of the file revision does not point to an ancestor of
        srcrev, we'll walk down the ancestors until we find one introducing
        this file revision.

        :srcrev: the changeset revision we search ancestors from
        :inclusive: if true, the src revision will also be checked
        :stoprev: an optional revision to stop the walk at. If no introduction
                  of this file content could be found before this floor
                  revision, the function will return "None" and stop its
                  iteration.
        """
        repo = self._repo
        cl = repo.unfiltered().changelog
        mfl = repo.manifestlog
        # fetch the linkrev
        lkr = self.linkrev()
        if srcrev == lkr:
            return lkr
        # hack to reuse ancestor computation when searching for renames
        memberanc = getattr(self, '_ancestrycontext', None)
        iteranc = None
        if srcrev is None:
            # wctx case, used by workingfilectx during mergecopy
            revs = [p.rev() for p in self._repo[None].parents()]
            inclusive = True # we skipped the real (revless) source
        else:
            revs = [srcrev]
        if memberanc is None:
            memberanc = iteranc = cl.ancestors(revs, lkr,
                                               inclusive=inclusive)
        # check if this linkrev is an ancestor of srcrev
        if lkr not in memberanc:
            if iteranc is None:
                iteranc = cl.ancestors(revs, lkr, inclusive=inclusive)
            fnode = self._filenode
            path = self._path
            for a in iteranc:
                if stoprev is not None and a < stoprev:
                    return None
                ac = cl.read(a) # get changeset data (we avoid object creation)
                if path in ac[3]: # checking the 'files' field.
                    # The file has been touched, check if the content is
                    # similar to the one we search for.
                    if fnode == mfl[ac[0]].readfast().get(path):
                        return a
            # In theory, we should never get out of that loop without a
            # result. But if the manifest uses a buggy file revision (not a
            # child of the one it replaces), we could. Such a buggy situation
            # will likely result in a crash somewhere else at some point.
        return lkr

    def isintroducedafter(self, changelogrev):
        """True if a filectx has been introduced after a given floor revision
        """
        if self.linkrev() >= changelogrev:
            return True
        introrev = self._introrev(stoprev=changelogrev)
        if introrev is None:
            return False
        return introrev >= changelogrev

    def introrev(self):
        """return the rev of the changeset which introduced this file revision

        This method is different from linkrev because it takes into account the
        changeset the filectx was created from. It ensures the returned
        revision is one of its ancestors. This prevents bugs from
        'linkrev-shadowing' when a file revision is used by multiple
        changesets.
        """
        return self._introrev()

    def _introrev(self, stoprev=None):
        """
        Same as `introrev`, but with an extra argument to limit the changelog
        iteration range in some internal use cases.

        If `stoprev` is set, the `introrev` will not be searched past that
        `stoprev` revision and "None" might be returned. This is useful to
        limit the iteration range.
        """
        toprev = None
        attrs = vars(self)
        if r'_changeid' in attrs:
            # We have a cached value already
            toprev = self._changeid
        elif r'_changectx' in attrs:
            # We know which changelog entry we are coming from
            toprev = self._changectx.rev()

        if toprev is not None:
            return self._adjustlinkrev(toprev, inclusive=True, stoprev=stoprev)
        elif r'_descendantrev' in attrs:
            introrev = self._adjustlinkrev(self._descendantrev, stoprev=stoprev)
            # be nice and cache the result of the computation
            if introrev is not None:
                self._changeid = introrev
            return introrev
        else:
            return self.linkrev()

    def introfilectx(self):
        """Return filectx having identical contents, but pointing to the
        changeset revision where this filectx was introduced"""
        introrev = self.introrev()
        if self.rev() == introrev:
            return self
        return self.filectx(self.filenode(), changeid=introrev)

    def _parentfilectx(self, path, fileid, filelog):
        """create parent filectx keeping ancestry info for _adjustlinkrev()"""
        fctx = filectx(self._repo, path, fileid=fileid, filelog=filelog)
        if r'_changeid' in vars(self) or r'_changectx' in vars(self):
            # If self is associated with a changeset (probably explicitly
            # fed), ensure the created filectx is associated with a
            # changeset that is an ancestor of self.changectx.
            # This lets us later use _adjustlinkrev to get a correct link.
            fctx._descendantrev = self.rev()
            fctx._ancestrycontext = getattr(self, '_ancestrycontext', None)
        elif r'_descendantrev' in vars(self):
            # Otherwise propagate _descendantrev if we have one associated.
            fctx._descendantrev = self._descendantrev
            fctx._ancestrycontext = getattr(self, '_ancestrycontext', None)
        return fctx

    def parents(self):
        _path = self._path
        fl = self._filelog
        parents = self._filelog.parents(self._filenode)
        pl = [(_path, node, fl) for node in parents if node != nullid]

        r = fl.renamed(self._filenode)
        if r:
            # - In the simple rename case, both parents are nullid and pl is
            #   empty.
            # - In case of merge, only one of the parents is nullid and should
            #   be replaced with the rename information. This parent is
            #   -always- the first one.
            #
            # As nullid parents have always been filtered out in the list
            # comprehension above, inserting at index 0 will always result in
            # "replacing the first nullid parent with rename information".
            pl.insert(0, (r[0], r[1], self._repo.file(r[0])))

        return [self._parentfilectx(path, fnode, l) for path, fnode, l in pl]

    def p1(self):
        return self.parents()[0]

    def p2(self):
        p = self.parents()
        if len(p) == 2:
            return p[1]
        return filectx(self._repo, self._path, fileid=-1, filelog=self._filelog)

    def annotate(self, follow=False, skiprevs=None, diffopts=None):
        """Returns a list of annotateline objects for each line in the file

        - line.fctx is the filectx of the node where that line was last changed
        - line.lineno is the line number at the first appearance in the managed
          file
        - line.text is the data on that line (including newline character)
        """
        getlog = util.lrucachefunc(lambda x: self._repo.file(x))

        def parents(f):
            # Cut _descendantrev here to mitigate the penalty of lazy linkrev
            # adjustment. Otherwise, p._adjustlinkrev() would walk changelog
            # from the topmost introrev (= srcrev) down to p.linkrev() if it
            # isn't an ancestor of the srcrev.
            f._changeid
            pl = f.parents()

            # Don't return renamed parents if we aren't following.
            if not follow:
                pl = [p for p in pl if p.path() == f.path()]

            # renamed filectx won't have a filelog yet, so set it
            # from the cache to save time
            for p in pl:
                if not r'_filelog' in p.__dict__:
                    p._filelog = getlog(p.path())

            return pl

        # use linkrev to find the first changeset where self appeared
        base = self.introfilectx()
        if getattr(base, '_ancestrycontext', None) is None:
            cl = self._repo.changelog
            if base.rev() is None:
                # wctx is not inclusive, but works because _ancestrycontext
                # is used to test filelog revisions
                ac = cl.ancestors([p.rev() for p in base.parents()],
                                  inclusive=True)
            else:
                ac = cl.ancestors([base.rev()], inclusive=True)
            base._ancestrycontext = ac

        return dagop.annotate(base, parents, skiprevs=skiprevs,
                              diffopts=diffopts)

    def ancestors(self, followfirst=False):
        visit = {}
        c = self
        if followfirst:
            cut = 1
        else:
            cut = None

        while True:
            for parent in c.parents()[:cut]:
                visit[(parent.linkrev(), parent.filenode())] = parent
            if not visit:
                break
            c = visit.pop(max(visit))
            yield c

    def decodeddata(self):
        """Returns `data()` after running repository decoding filters.

        This is often equivalent to how the data would be expressed on disk.
        """
        return self._repo.wwritedata(self.path(), self.data())

class filectx(basefilectx):
    """A filecontext object makes access to data related to a particular
    filerevision convenient."""
    def __init__(self, repo, path, changeid=None, fileid=None,
                 filelog=None, changectx=None):
        """changeid must be a revision number, if specified.
        fileid can be a file revision or node."""
        self._repo = repo
        self._path = path

        assert (changeid is not None
                or fileid is not None
                or changectx is not None), (
                    "bad args: changeid=%r, fileid=%r, changectx=%r"
                    % (changeid, fileid, changectx))

        if filelog is not None:
            self._filelog = filelog

        if changeid is not None:
            self._changeid = changeid
        if changectx is not None:
            self._changectx = changectx
        if fileid is not None:
            self._fileid = fileid

    @propertycache
    def _changectx(self):
        try:
            return self._repo[self._changeid]
        except error.FilteredRepoLookupError:
            # Linkrev may point to any revision in the repository. When the
            # repository is filtered this may lead to `filectx` trying to
            # build `changectx` for a filtered revision. In such a case we
            # fall back to creating `changectx` on the unfiltered version of
            # the repository. This fallback should not be an issue because
            # `changectx` from `filectx` are not used in complex operations
            # that care about filtering.
            #
            # This fallback is a cheap and dirty fix that prevents several
            # crashes. It does not ensure the behavior is correct. However the
            # behavior was not correct before filtering either, and "incorrect
            # behavior" is seen as better than "crash".
            #
            # Linkrevs have several serious troubles with filtering that are
            # complicated to solve. Proper handling of the issue here should
            # be considered when solutions to the linkrev issue are on the
            # table.
            return self._repo.unfiltered()[self._changeid]

    def filectx(self, fileid, changeid=None):
        '''opens an arbitrary revision of the file without
        opening a new filelog'''
        return filectx(self._repo, self._path, fileid=fileid,
                       filelog=self._filelog, changeid=changeid)

    def rawdata(self):
        return self._filelog.revision(self._filenode, raw=True)

    def rawflags(self):
        """low-level revlog flags"""
        return self._filelog.flags(self._filerev)

    def data(self):
        try:
            return self._filelog.read(self._filenode)
        except error.CensoredNodeError:
            if self._repo.ui.config("censor", "policy") == "ignore":
                return ""
            raise error.Abort(_("censored node: %s") % short(self._filenode),
                              hint=_("set censor.policy to ignore errors"))

    def size(self):
        return self._filelog.size(self._filerev)

1049
1049
1050 @propertycache
1050 @propertycache
1051 def _copied(self):
1051 def _copied(self):
1052 """check if file was actually renamed in this changeset revision
1052 """check if file was actually renamed in this changeset revision
1053
1053
1054 If rename logged in file revision, we report copy for changeset only
1054 If rename logged in file revision, we report copy for changeset only
1055 if file revisions linkrev points back to the changeset in question
1055 if file revisions linkrev points back to the changeset in question
1056 or both changeset parents contain different file revisions.
1056 or both changeset parents contain different file revisions.
1057 """
1057 """
1058
1058
1059 renamed = self._filelog.renamed(self._filenode)
1059 renamed = self._filelog.renamed(self._filenode)
1060 if not renamed:
1060 if not renamed:
1061 return None
1061 return None
1062
1062
1063 if self.rev() == self.linkrev():
1063 if self.rev() == self.linkrev():
1064 return renamed
1064 return renamed
1065
1065
1066 name = self.path()
1066 name = self.path()
1067 fnode = self._filenode
1067 fnode = self._filenode
1068 for p in self._changectx.parents():
1068 for p in self._changectx.parents():
1069 try:
1069 try:
1070 if fnode == p.filenode(name):
1070 if fnode == p.filenode(name):
1071 return None
1071 return None
1072 except error.LookupError:
1072 except error.LookupError:
1073 pass
1073 pass
1074 return renamed
1074 return renamed
1075
1075
    def children(self):
        # hard for renames
        c = self._filelog.children(self._filenode)
        return [filectx(self._repo, self._path, fileid=x,
                        filelog=self._filelog) for x in c]

class committablectx(basectx):
    """A committablectx object provides common functionality for a context that
    wants the ability to commit, e.g. workingctx or memctx."""
    def __init__(self, repo, text="", user=None, date=None, extra=None,
                 changes=None):
        super(committablectx, self).__init__(repo)
        self._rev = None
        self._node = None
        self._text = text
        if date:
            self._date = dateutil.parsedate(date)
        if user:
            self._user = user
        if changes:
            self._status = changes

        self._extra = {}
        if extra:
            self._extra = extra.copy()
        if 'branch' not in self._extra:
            try:
                branch = encoding.fromlocal(self._repo.dirstate.branch())
            except UnicodeDecodeError:
                raise error.Abort(_('branch name not in UTF-8!'))
            self._extra['branch'] = branch
        if self._extra['branch'] == '':
            self._extra['branch'] = 'default'

    def __bytes__(self):
        return bytes(self._parents[0]) + "+"

    __str__ = encoding.strmethod(__bytes__)

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    def _buildflagfunc(self):
        # Create a fallback function for getting file flags when the
        # filesystem doesn't support them

        copiesget = self._repo.dirstate.copies().get
        parents = self.parents()
        if len(parents) < 2:
            # when we have one parent, it's easy: copy from parent
            man = parents[0].manifest()
            def func(f):
                f = copiesget(f, f)
                return man.flags(f)
        else:
            # merges are tricky: we try to reconstruct the unstored
            # result from the merge (issue1802)
            p1, p2 = parents
            pa = p1.ancestor(p2)
            m1, m2, ma = p1.manifest(), p2.manifest(), pa.manifest()

            def func(f):
                f = copiesget(f, f) # may be wrong for merges with copies
                fl1, fl2, fla = m1.flags(f), m2.flags(f), ma.flags(f)
                if fl1 == fl2:
                    return fl1
                if fl1 == fla:
                    return fl2
                if fl2 == fla:
                    return fl1
                return '' # punt for conflicts

        return func

    @propertycache
    def _flagfunc(self):
        return self._repo.dirstate.flagfunc(self._buildflagfunc)

    @propertycache
    def _status(self):
        return self._repo.status()

    @propertycache
    def _user(self):
        return self._repo.ui.username()

    @propertycache
    def _date(self):
        ui = self._repo.ui
        date = ui.configdate('devel', 'default-date')
        if date is None:
            date = dateutil.makedate()
        return date

    def subrev(self, subpath):
        return None

    def manifestnode(self):
        return None
    def user(self):
        return self._user or self._repo.ui.username()
    def date(self):
        return self._date
    def description(self):
        return self._text
    def files(self):
        return sorted(self._status.modified + self._status.added +
                      self._status.removed)
    def modified(self):
        return self._status.modified
    def added(self):
        return self._status.added
    def removed(self):
        return self._status.removed
    def deleted(self):
        return self._status.deleted
    @propertycache
    def _copies(self):
        p1copies = {}
        p2copies = {}
        parents = self._repo.dirstate.parents()
        p1manifest = self._repo[parents[0]].manifest()
        p2manifest = self._repo[parents[1]].manifest()
        narrowmatch = self._repo.narrowmatch()
        for dst, src in self._repo.dirstate.copies().items():
            if not narrowmatch(dst):
                continue
            if src in p1manifest:
                p1copies[dst] = src
            elif src in p2manifest:
                p2copies[dst] = src
        return p1copies, p2copies
    def p1copies(self):
        return self._copies[0]
    def p2copies(self):
        return self._copies[1]
    def branch(self):
        return encoding.tolocal(self._extra['branch'])
    def closesbranch(self):
        return 'close' in self._extra
    def extra(self):
        return self._extra

    def isinmemory(self):
        return False

    def tags(self):
        return []

    def bookmarks(self):
        b = []
        for p in self.parents():
            b.extend(p.bookmarks())
        return b

    def phase(self):
        phase = phases.draft # default phase to draft
        for p in self.parents():
            phase = max(phase, p.phase())
        return phase

    def hidden(self):
        return False

    def children(self):
        return []

    def flags(self, path):
        if r'_manifest' in self.__dict__:
            try:
                return self._manifest.flags(path)
            except KeyError:
                return ''

        try:
            return self._flagfunc(path)
        except OSError:
            return ''

    def ancestor(self, c2):
        """return the "best" ancestor context of self and c2"""
        return self._parents[0].ancestor(c2) # punt on two parents for now

    def walk(self, match):
        '''Generates matching file names.'''
        return sorted(self._repo.dirstate.walk(self._repo.narrowmatch(match),
                                               subrepos=sorted(self.substate),
                                               unknown=True, ignored=False))

    def matches(self, match):
        match = self._repo.narrowmatch(match)
        ds = self._repo.dirstate
        return sorted(f for f in ds.matches(match) if ds[f] != 'r')

    def ancestors(self):
        for p in self._parents:
            yield p
        for a in self._repo.changelog.ancestors(
                [p.rev() for p in self._parents]):
            yield self._repo[a]

    def markcommitted(self, node):
        """Perform post-commit cleanup necessary after committing this ctx

        Specifically, this updates backing stores this working context
        wraps to reflect the fact that the changes reflected by this
        workingctx have been committed. For example, it marks
        modified and added files as normal in the dirstate.

        """

        with self._repo.dirstate.parentchange():
            for f in self.modified() + self.added():
                self._repo.dirstate.normal(f)
            for f in self.removed():
                self._repo.dirstate.drop(f)
            self._repo.dirstate.setparents(node)

        # write changes out explicitly, because nesting wlock at
        # runtime may prevent 'wlock.release()' in 'repo.commit()'
        # from immediately doing so for subsequent changing files
        self._repo.dirstate.write(self._repo.currenttransaction())

    def dirty(self, missing=False, merge=True, branch=True):
        return False

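# Illustrative note (added for exposition; hypothetical usage, not part of
# the upstream module): callers normally reach a working-directory context
# through the repo's __getitem__ rather than by instantiating it directly:
#
#     wctx = repo[None]             # workingctx for the working directory
#     if wctx.dirty(missing=True):  # deleted files count as dirty too
#         raise error.Abort(_('uncommitted changes'))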
class workingctx(committablectx):
    """A workingctx object makes access to data related to
    the current working directory convenient.
    date - any valid date string or (unixtime, offset), or None.
    user - username string, or None.
    extra - a dictionary of extra values, or None.
    changes - a list of file lists as returned by localrepo.status()
              or None to use the repository status.
    """
    def __init__(self, repo, text="", user=None, date=None, extra=None,
                 changes=None):
        super(workingctx, self).__init__(repo, text, user, date, extra, changes)

    def __iter__(self):
        d = self._repo.dirstate
        for f in d:
            if d[f] != 'r':
                yield f

    def __contains__(self, key):
        return self._repo.dirstate[key] not in "?r"

    def hex(self):
        return wdirhex

    @propertycache
    def _parents(self):
        p = self._repo.dirstate.parents()
        if p[1] == nullid:
            p = p[:-1]
        # use unfiltered repo to delay/avoid loading obsmarkers
        unfi = self._repo.unfiltered()
        return [changectx(self._repo, unfi.changelog.rev(n), n) for n in p]

    def _fileinfo(self, path):
        # populate __dict__['_manifest'] as workingctx has no _manifestdelta
        self._manifest
        return super(workingctx, self)._fileinfo(path)

    def filectx(self, path, filelog=None):
        """get a file context from the working directory"""
        return workingfilectx(self._repo, path, workingctx=self,
                              filelog=filelog)

    def dirty(self, missing=False, merge=True, branch=True):
        "check whether a working directory is modified"
        # check subrepos first
        for s in sorted(self.substate):
            if self.sub(s).dirty(missing=missing):
                return True
        # check current working dir
        return ((merge and self.p2()) or
                (branch and self.branch() != self.p1().branch()) or
                self.modified() or self.added() or self.removed() or
                (missing and self.deleted()))

    def add(self, list, prefix=""):
        with self._repo.wlock():
            ui, ds = self._repo.ui, self._repo.dirstate
            uipath = lambda f: ds.pathto(pathutil.join(prefix, f))
            rejected = []
            lstat = self._repo.wvfs.lstat
            for f in list:
                # ds.pathto() returns an absolute file when this is invoked from
                # the keyword extension. That gets flagged as non-portable on
                # Windows, since it contains the drive letter and colon.
                scmutil.checkportable(ui, os.path.join(prefix, f))
                try:
                    st = lstat(f)
                except OSError:
                    ui.warn(_("%s does not exist!\n") % uipath(f))
                    rejected.append(f)
                    continue
                limit = ui.configbytes('ui', 'large-file-limit')
                if limit != 0 and st.st_size > limit:
                    ui.warn(_("%s: up to %d MB of RAM may be required "
                              "to manage this file\n"
                              "(use 'hg revert %s' to cancel the "
                              "pending addition)\n")
                            % (f, 3 * st.st_size // 1000000, uipath(f)))
                if not (stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode)):
                    ui.warn(_("%s not added: only files and symlinks "
                              "supported currently\n") % uipath(f))
                    rejected.append(f)
                elif ds[f] in 'amn':
                    ui.warn(_("%s already tracked!\n") % uipath(f))
                elif ds[f] == 'r':
                    ds.normallookup(f)
                else:
                    ds.add(f)
            return rejected

    def forget(self, files, prefix=""):
        with self._repo.wlock():
            ds = self._repo.dirstate
            uipath = lambda f: ds.pathto(pathutil.join(prefix, f))
            rejected = []
            for f in files:
                if f not in ds:
                    self._repo.ui.warn(_("%s not tracked!\n") % uipath(f))
                    rejected.append(f)
                elif ds[f] != 'a':
                    ds.remove(f)
                else:
                    ds.drop(f)
            return rejected

    def copy(self, source, dest):
        try:
            st = self._repo.wvfs.lstat(dest)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
            self._repo.ui.warn(_("%s does not exist!\n")
                               % self._repo.dirstate.pathto(dest))
            return
        if not (stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode)):
            self._repo.ui.warn(_("copy failed: %s is not a file or a "
                                 "symbolic link\n")
                               % self._repo.dirstate.pathto(dest))
        else:
            with self._repo.wlock():
                ds = self._repo.dirstate
                if ds[dest] in '?':
                    ds.add(dest)
                elif ds[dest] in 'r':
                    ds.normallookup(dest)
                ds.copy(source, dest)

    def match(self, pats=None, include=None, exclude=None, default='glob',
              listsubrepos=False, badfn=None):
        r = self._repo

        # Only a case insensitive filesystem needs magic to translate user input
        # to actual case in the filesystem.
        icasefs = not util.fscasesensitive(r.root)
        return matchmod.match(r.root, r.getcwd(), pats, include, exclude,
                              default, auditor=r.auditor, ctx=self,
                              listsubrepos=listsubrepos, badfn=badfn,
                              icasefs=icasefs)

    def _filtersuspectsymlink(self, files):
        if not files or self._repo.dirstate._checklink:
            return files

        # Symlink placeholders may get non-symlink-like contents
        # via user error or dereferencing by NFS or Samba servers,
        # so we filter out any placeholders that don't look like a
        # symlink
        sane = []
        for f in files:
            if self.flags(f) == 'l':
                d = self[f].data()
                if (d == '' or len(d) >= 1024 or '\n' in d
                    or stringutil.binary(d)):
                    self._repo.ui.debug('ignoring suspect symlink placeholder'
                                        ' "%s"\n' % f)
                    continue
            sane.append(f)
        return sane

    def _checklookup(self, files):
        # check for any possibly clean files
        if not files:
            return [], [], []

        modified = []
        deleted = []
        fixup = []
        pctx = self._parents[0]
        # do a full compare of any files that might have changed
        for f in sorted(files):
            try:
                # This will return True for a file that got replaced by a
                # directory in the interim, but fixing that is pretty hard.
                if (f not in pctx or self.flags(f) != pctx.flags(f)
                    or pctx[f].cmp(self[f])):
                    modified.append(f)
                else:
                    fixup.append(f)
            except (IOError, OSError):
                # A file became inaccessible in between? Mark it as deleted,
                # matching dirstate behavior (issue5584).
                # The dirstate has more complex behavior around whether a
                # missing file matches a directory, etc, but we don't need to
                # bother with that: if f has made it to this point, we're sure
                # it's in the dirstate.
                deleted.append(f)

        return modified, deleted, fixup

    def _poststatusfixup(self, status, fixup):
        """update dirstate for files that are actually clean"""
        poststatus = self._repo.postdsstatus()
        if fixup or poststatus:
            try:
                oldid = self._repo.dirstate.identity()

                # updating the dirstate is optional
                # so we don't wait on the lock
                # wlock can invalidate the dirstate, so cache normal _after_
                # taking the lock
                with self._repo.wlock(False):
                    if self._repo.dirstate.identity() == oldid:
                        if fixup:
                            normal = self._repo.dirstate.normal
                            for f in fixup:
                                normal(f)
                            # write changes out explicitly, because nesting
                            # wlock at runtime may prevent 'wlock.release()'
                            # after this block from doing so for subsequent
                            # changing files
                            tr = self._repo.currenttransaction()
                            self._repo.dirstate.write(tr)

                        if poststatus:
                            for ps in poststatus:
                                ps(self, status)
                    else:
                        # in this case, writing changes out breaks
                        # consistency, because .hg/dirstate was
                        # already changed simultaneously after last
                        # caching (see also issue5584 for detail)
                        self._repo.ui.debug('skip updating dirstate: '
                                            'identity mismatch\n')
            except error.LockError:
                pass
            finally:
                # Even if the wlock couldn't be grabbed, clear out the list.
                self._repo.clearpostdsstatus()

    def _dirstatestatus(self, match, ignored=False, clean=False, unknown=False):
        '''Gets the status from the dirstate -- internal use only.'''
        subrepos = []
        if '.hgsub' in self:
            subrepos = sorted(self.substate)
        cmp, s = self._repo.dirstate.status(match, subrepos, ignored=ignored,
                                            clean=clean, unknown=unknown)

        # check for any possibly clean files
        fixup = []
        if cmp:
            modified2, deleted2, fixup = self._checklookup(cmp)
            s.modified.extend(modified2)
            s.deleted.extend(deleted2)

            if fixup and clean:
                s.clean.extend(fixup)

        self._poststatusfixup(s, fixup)

        if match.always():
            # cache for performance
            if s.unknown or s.ignored or s.clean:
                # "_status" is cached with list*=False in the normal route
                self._status = scmutil.status(s.modified, s.added, s.removed,
                                              s.deleted, [], [], [])
            else:
                self._status = s

        return s

    @propertycache
    def _manifest(self):
        """generate a manifest corresponding to the values in self._status

        This reuses the file nodeids from the parent, but we use special node
        identifiers for added and modified files. This is used by manifests
        merge to see that files are different and by update logic to avoid
        deleting newly added files.
        """
        return self._buildstatusmanifest(self._status)

    def _buildstatusmanifest(self, status):
        """Builds a manifest that includes the given status results."""
        parents = self.parents()

        man = parents[0].manifest().copy()

        ff = self._flagfunc
        for i, l in ((addednodeid, status.added),
                     (modifiednodeid, status.modified)):
            for f in l:
                man[f] = i
                try:
                    man.setflag(f, ff(f))
                except OSError:
                    pass

        for f in status.deleted + status.removed:
            if f in man:
                del man[f]

        return man

    def _buildstatus(self, other, s, match, listignored, listclean,
                     listunknown):
        """build a status with respect to another context

        This includes logic for maintaining the fast path of status when
        comparing the working directory against its parent, which is to skip
        building a new manifest if self (working directory) is not comparing
        against its parent (repo['.']).
        """
        s = self._dirstatestatus(match, listignored, listclean, listunknown)
        # Filter out symlinks that, in the case of FAT32 and NTFS filesystems,
        # might have accidentally ended up with the entire contents of the file
        # they are supposed to be linking to.
        s.modified[:] = self._filtersuspectsymlink(s.modified)
        if other != self._repo['.']:
            s = super(workingctx, self)._buildstatus(other, s, match,
                                                     listignored, listclean,
                                                     listunknown)
        return s

    def _matchstatus(self, other, match):
        """override the match method with a filter for directory patterns

        We use inheritance to customize the match.bad method only in cases of
        workingctx since it belongs only to the working directory when
        comparing against the parent changeset.

        If we aren't comparing against the working directory's parent, then we
        just use the default match object sent to us.
        """
        if other != self._repo['.']:
            def bad(f, msg):
                # 'f' may be a directory pattern from 'match.files()',
                # so 'f not in ctx1' is not enough
                if f not in other and not other.hasdir(f):
                    self._repo.ui.warn('%s: %s\n' %
                                       (self._repo.dirstate.pathto(f), msg))
            match.bad = bad
        return match

1639 def markcommitted(self, node):
1639 def markcommitted(self, node):
1640 super(workingctx, self).markcommitted(node)
1640 super(workingctx, self).markcommitted(node)
1641
1641
1642 sparse.aftercommit(self._repo, node)
1642 sparse.aftercommit(self._repo, node)
1643
1643
class committablefilectx(basefilectx):
    """A committablefilectx provides common functionality for a file context
    that wants the ability to commit, e.g. workingfilectx or memfilectx."""
    def __init__(self, repo, path, filelog=None, ctx=None):
        self._repo = repo
        self._path = path
        self._changeid = None
        self._filerev = self._filenode = None

        if filelog is not None:
            self._filelog = filelog
        if ctx:
            self._changectx = ctx

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    def linkrev(self):
        # linked to self._changectx no matter if file is modified or not
        return self.rev()

    def renamed(self):
        path = self.copysource()
        if not path:
            return None
        return path, self._changectx._parents[0]._manifest.get(path, nullid)

    def parents(self):
        '''return parent filectxs, following copies if necessary'''
        def filenode(ctx, path):
            return ctx._manifest.get(path, nullid)

        path = self._path
        fl = self._filelog
        pcl = self._changectx._parents
        renamed = self.renamed()

        if renamed:
            pl = [renamed + (None,)]
        else:
            pl = [(path, filenode(pcl[0], path), fl)]

        for pc in pcl[1:]:
            pl.append((path, filenode(pc, path), fl))

        return [self._parentfilectx(p, fileid=n, filelog=l)
                for p, n, l in pl if n != nullid]

    def children(self):
        return []

class workingfilectx(committablefilectx):
    """A workingfilectx object makes access to data related to a particular
    file in the working directory convenient."""
    def __init__(self, repo, path, filelog=None, workingctx=None):
        super(workingfilectx, self).__init__(repo, path, filelog, workingctx)

    @propertycache
    def _changectx(self):
        return workingctx(self._repo)

    def data(self):
        return self._repo.wread(self._path)
    def copysource(self):
        return self._repo.dirstate.copied(self._path)

    def size(self):
        return self._repo.wvfs.lstat(self._path).st_size
    def date(self):
        t, tz = self._changectx.date()
        try:
            return (self._repo.wvfs.lstat(self._path)[stat.ST_MTIME], tz)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
            return (t, tz)

    def exists(self):
        return self._repo.wvfs.exists(self._path)

    def lexists(self):
        return self._repo.wvfs.lexists(self._path)

    def audit(self):
        return self._repo.wvfs.audit(self._path)

    def cmp(self, fctx):
        """compare with other file context

        returns True if different than fctx.
        """
        # fctx should be a filectx (not a workingfilectx)
        # invert comparison to reuse the same code path
        return fctx.cmp(self)

    def remove(self, ignoremissing=False):
        """wraps unlink for a repo's working directory"""
        rmdir = self._repo.ui.configbool('experimental', 'removeemptydirs')
        self._repo.wvfs.unlinkpath(self._path, ignoremissing=ignoremissing,
                                   rmdir=rmdir)

    def write(self, data, flags, backgroundclose=False, **kwargs):
        """wraps repo.wwrite"""
        self._repo.wwrite(self._path, data, flags,
                          backgroundclose=backgroundclose,
                          **kwargs)

    def markcopied(self, src):
        """marks this file a copy of `src`"""
        if self._repo.dirstate[self._path] in "nma":
            self._repo.dirstate.copy(src, self._path)

    def clearunknown(self):
        """Removes conflicting items in the working directory so that
        ``write()`` can be called successfully.
        """
        wvfs = self._repo.wvfs
        f = self._path
        wvfs.audit(f)
        if self._repo.ui.configbool('experimental', 'merge.checkpathconflicts'):
            # remove files under the directory as they should already be
            # warned and backed up
            if wvfs.isdir(f) and not wvfs.islink(f):
                wvfs.rmtree(f, forcibly=True)
            for p in reversed(list(util.finddirs(f))):
                if wvfs.isfileorlink(p):
                    wvfs.unlink(p)
                    break
        else:
            # don't remove files if path conflicts are not processed
            if wvfs.isdir(f) and not wvfs.islink(f):
                wvfs.removedirs(f)

    def setflags(self, l, x):
        self._repo.wvfs.setflags(self._path, l, x)

class overlayworkingctx(committablectx):
    """Wraps another mutable context with a write-back cache that can be
    converted into a commit context.

    self._cache[path] maps to a dict with keys: {
        'exists': bool?
        'date': date?
        'data': str?
        'flags': str?
        'copied': str? (path or None)
    }
    If `exists` is True, `flags` must be non-None and `date` is non-None. If it
    is `False`, the file was deleted.
    """

    def __init__(self, repo):
        super(overlayworkingctx, self).__init__(repo)
        self.clean()

    def setbase(self, wrappedctx):
        self._wrappedctx = wrappedctx
        self._parents = [wrappedctx]
        # Drop old manifest cache as it is now out of date.
        # This is necessary when, e.g., rebasing several nodes with one
        # ``overlayworkingctx`` (e.g. with --collapse).
        util.clearcachedproperty(self, '_manifest')

    def data(self, path):
        if self.isdirty(path):
            if self._cache[path]['exists']:
                if self._cache[path]['data']:
                    return self._cache[path]['data']
                else:
                    # Must fallback here, too, because we only set flags.
                    return self._wrappedctx[path].data()
            else:
                raise error.ProgrammingError("No such file or directory: %s" %
                                             path)
        else:
            return self._wrappedctx[path].data()

    @propertycache
    def _manifest(self):
        parents = self.parents()
        man = parents[0].manifest().copy()

        flag = self._flagfunc
        for path in self.added():
            man[path] = addednodeid
            man.setflag(path, flag(path))
        for path in self.modified():
            man[path] = modifiednodeid
            man.setflag(path, flag(path))
        for path in self.removed():
            del man[path]
        return man

    @propertycache
    def _flagfunc(self):
        def f(path):
            return self._cache[path]['flags']
        return f

    def files(self):
        return sorted(self.added() + self.modified() + self.removed())

    def modified(self):
        return [f for f in self._cache.keys() if self._cache[f]['exists'] and
                self._existsinparent(f)]

    def added(self):
        return [f for f in self._cache.keys() if self._cache[f]['exists'] and
                not self._existsinparent(f)]

    def removed(self):
        return [f for f in self._cache.keys() if
                not self._cache[f]['exists'] and self._existsinparent(f)]

    def p1copies(self):
        # Note: the original read ``self._repo._wrappedctx``; the wrapped
        # context lives on this object, not on the repo.
        copies = self._wrappedctx.p1copies().copy()
        narrowmatch = self._repo.narrowmatch()
        for f in self._cache.keys():
            if not narrowmatch(f):
                continue
            copies.pop(f, None) # delete if it exists
            source = self._cache[f]['copied']
            if source:
                copies[f] = source
        return copies

    def p2copies(self):
        copies = self._wrappedctx.p2copies().copy()
        narrowmatch = self._repo.narrowmatch()
        for f in self._cache.keys():
            if not narrowmatch(f):
                continue
            copies.pop(f, None) # delete if it exists
            source = self._cache[f]['copied']
            if source:
                copies[f] = source
        return copies

    def isinmemory(self):
        return True

    def filedate(self, path):
        if self.isdirty(path):
            return self._cache[path]['date']
        else:
            return self._wrappedctx[path].date()

    def markcopied(self, path, origin):
        self._markdirty(path, exists=True, date=self.filedate(path),
                        flags=self.flags(path), copied=origin)

    def copydata(self, path):
        if self.isdirty(path):
            return self._cache[path]['copied']
        else:
            raise error.ProgrammingError('copydata() called on clean context')

    def flags(self, path):
        if self.isdirty(path):
            if self._cache[path]['exists']:
                return self._cache[path]['flags']
            else:
                # use the ``path`` argument; this method has no ``self._path``
                raise error.ProgrammingError("No such file or directory: %s" %
                                             path)
        else:
            return self._wrappedctx[path].flags()

    def __contains__(self, key):
        if key in self._cache:
            return self._cache[key]['exists']
        return key in self.p1()

    def _existsinparent(self, path):
        try:
            # ``commitctx`` raises a ``ManifestLookupError`` if a path does not
            # exist, unlike ``workingctx``, which returns a ``workingfilectx``
            # with an ``exists()`` function.
            self._wrappedctx[path]
            return True
        except error.ManifestLookupError:
            return False

    def _auditconflicts(self, path):
        """Replicates conflict checks done by wvfs.write().

        Since we never write to the filesystem and never call `applyupdates` in
        IMM, we'll never check that a path is actually writable -- e.g., because
        it adds `a/foo`, but `a` is actually a file in the other commit.
        """
        def fail(path, component):
            # p1() is the base and we're receiving "writes" for p2()'s
            # files.
            if 'l' in self.p1()[component].flags():
                raise error.Abort("error: %s conflicts with symlink %s "
                                  "in %d." % (path, component,
                                              self.p1().rev()))
            else:
                raise error.Abort("error: '%s' conflicts with file '%s' in "
                                  "%d." % (path, component,
                                           self.p1().rev()))

        # Test that each new directory to be created to write this path from p2
        # is not a file in p1.
        components = path.split('/')
        for i in pycompat.xrange(len(components)):
            component = "/".join(components[0:i])
            if component in self:
                fail(path, component)

        # Test the other direction -- that this path from p2 isn't a directory
        # in p1 (test that p1 doesn't have any paths matching `path/*`).
        match = self.match(include=[path + '/'], default=b'path')
        matches = self.p1().manifest().matches(match)
        mfiles = matches.keys()
        if len(mfiles) > 0:
            if len(mfiles) == 1 and mfiles[0] == path:
                return
            # omit the files which are deleted in current IMM wctx
            mfiles = [m for m in mfiles if m in self]
            if not mfiles:
                return
            raise error.Abort("error: file '%s' cannot be written because "
                              " '%s/' is a folder in %s (containing %d "
                              "entries: %s)"
                              % (path, path, self.p1(), len(mfiles),
                                 ', '.join(mfiles)))

    def write(self, path, data, flags='', **kwargs):
        if data is None:
            raise error.ProgrammingError("data must be non-None")
        self._auditconflicts(path)
        self._markdirty(path, exists=True, data=data, date=dateutil.makedate(),
                        flags=flags)

    def setflags(self, path, l, x):
        flag = ''
        if l:
            flag = 'l'
        elif x:
            flag = 'x'
        self._markdirty(path, exists=True, date=dateutil.makedate(),
                        flags=flag)

    def remove(self, path):
        self._markdirty(path, exists=False)

    def exists(self, path):
        """exists behaves like `lexists`, but needs to follow symlinks and
        return False if they are broken.
        """
        if self.isdirty(path):
            # If this path exists and is a symlink, "follow" it by calling
            # exists on the destination path.
            if (self._cache[path]['exists'] and
                'l' in self._cache[path]['flags']):
                return self.exists(self._cache[path]['data'].strip())
            else:
                return self._cache[path]['exists']

        return self._existsinparent(path)

    def lexists(self, path):
        """lexists returns True if the path exists"""
        if self.isdirty(path):
            return self._cache[path]['exists']

        return self._existsinparent(path)

    def size(self, path):
        if self.isdirty(path):
            if self._cache[path]['exists']:
                return len(self._cache[path]['data'])
            else:
                # use the ``path`` argument; this method has no ``self._path``
                raise error.ProgrammingError("No such file or directory: %s" %
                                             path)
        return self._wrappedctx[path].size()

    def tomemctx(self, text, branch=None, extra=None, date=None, parents=None,
                 user=None, editor=None):
        """Converts this ``overlayworkingctx`` into a ``memctx`` ready to be
        committed.

        ``text`` is the commit message.
        ``parents`` (optional) are rev numbers.
        """
        # Default parents to the wrapped context's if not passed.
        if parents is None:
            parents = self._wrappedctx.parents()
            if len(parents) == 1:
                parents = (parents[0], None)

        # ``parents`` is passed as rev numbers; convert to ``commitctxs``.
        if parents[1] is None:
            parents = (self._repo[parents[0]], None)
        else:
            parents = (self._repo[parents[0]], self._repo[parents[1]])

        files = self._cache.keys()
        def getfile(repo, memctx, path):
            if self._cache[path]['exists']:
                return memfilectx(repo, memctx, path,
                                  self._cache[path]['data'],
                                  'l' in self._cache[path]['flags'],
                                  'x' in self._cache[path]['flags'],
                                  self._cache[path]['copied'])
            else:
                # Returning None, but including the path in `files`, is
                # necessary for memctx to register a deletion.
                return None
        return memctx(self._repo, parents, text, files, getfile, date=date,
                      extra=extra, user=user, branch=branch, editor=editor)

    def isdirty(self, path):
        return path in self._cache

    def isempty(self):
        # We need to discard any keys that are actually clean before the empty
        # commit check.
        self._compact()
        return len(self._cache) == 0

    def clean(self):
        self._cache = {}

    def _compact(self):
        """Removes keys from the cache that are actually clean, by comparing
        them with the underlying context.

        This can occur during the merge process, e.g. by passing --tool :local
        to resolve a conflict.
        """
        keys = []
        # This won't be perfect, but can help performance significantly when
        # using things like remotefilelog.
        scmutil.prefetchfiles(
            self.repo(), [self.p1().rev()],
            scmutil.matchfiles(self.repo(), self._cache.keys()))

        for path in self._cache.keys():
            cache = self._cache[path]
            try:
                underlying = self._wrappedctx[path]
                if (underlying.data() == cache['data'] and
                    underlying.flags() == cache['flags']):
                    keys.append(path)
            except error.ManifestLookupError:
                # Path not in the underlying manifest (created).
                continue

        for path in keys:
            del self._cache[path]
        return keys

    def _markdirty(self, path, exists, data=None, date=None, flags='',
                   copied=None):
        # data not provided, let's see if we already have some; if not, let's
        # grab it from our underlying context, so that we always have data if
        # the file is marked as existing.
        if exists and data is None:
            oldentry = self._cache.get(path) or {}
            data = oldentry.get('data') or self._wrappedctx[path].data()

        self._cache[path] = {
            'exists': exists,
            'data': data,
            'date': date,
            'flags': flags,
            'copied': copied,
        }

    def filectx(self, path, filelog=None):
        return overlayworkingfilectx(self._repo, path, parent=self,
                                     filelog=filelog)

class overlayworkingfilectx(committablefilectx):
    """Wraps a ``workingfilectx`` but intercepts all writes into an in-memory
    cache, which can be flushed through later by calling ``flush()``."""

    def __init__(self, repo, path, filelog=None, parent=None):
        super(overlayworkingfilectx, self).__init__(repo, path, filelog,
                                                    parent)
        self._repo = repo
        self._parent = parent
        self._path = path

    def cmp(self, fctx):
        return self.data() != fctx.data()

    def changectx(self):
        return self._parent

    def data(self):
        return self._parent.data(self._path)

    def date(self):
        return self._parent.filedate(self._path)

    def exists(self):
        return self.lexists()

    def lexists(self):
        return self._parent.exists(self._path)

    def copysource(self):
        return self._parent.copydata(self._path)

    def size(self):
        return self._parent.size(self._path)

    def markcopied(self, origin):
        self._parent.markcopied(self._path, origin)

    def audit(self):
        pass

    def flags(self):
        return self._parent.flags(self._path)

    def setflags(self, islink, isexec):
        return self._parent.setflags(self._path, islink, isexec)

    def write(self, data, flags, backgroundclose=False, **kwargs):
        return self._parent.write(self._path, data, flags, **kwargs)

    def remove(self, ignoremissing=False):
        return self._parent.remove(self._path)

    def clearunknown(self):
        pass

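Nearly every method above forwards to the parent context with the file's path prepended to the arguments. A minimal, self-contained sketch of that delegation pattern (the class and method names here are illustrative stand-ins, not Mercurial's API):

```python
class ParentCache(object):
    """Toy overlay parent: stores per-file state keyed by path."""
    def __init__(self):
        self._cache = {}

    def write(self, path, data, flags=''):
        self._cache[path] = {'exists': True, 'data': data, 'flags': flags}

    def remove(self, path):
        self._cache[path] = {'exists': False, 'data': None, 'flags': ''}

    def data(self, path):
        return self._cache[path]['data']

    def exists(self, path):
        entry = self._cache.get(path)
        return bool(entry and entry['exists'])


class OverlayFile(object):
    """Thin per-file view: every call forwards to the parent plus the
    path, mirroring how overlayworkingfilectx delegates to its parent."""
    def __init__(self, parent, path):
        self._parent = parent
        self._path = path

    def write(self, data, flags=''):
        self._parent.write(self._path, data, flags)

    def remove(self):
        self._parent.remove(self._path)

    def data(self):
        return self._parent.data(self._path)

    def exists(self):
        return self._parent.exists(self._path)
```

The per-file object holds no state of its own beyond the path, which is what lets all writes accumulate in a single flushable cache on the parent.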
class workingcommitctx(workingctx):
    """A workingcommitctx object makes access to data related to
    the revision being committed convenient.

    This hides changes in the working directory, if they aren't
    committed in this context.
    """
    def __init__(self, repo, changes,
                 text="", user=None, date=None, extra=None):
        super(workingcommitctx, self).__init__(repo, text, user, date, extra,
                                               changes)

    def _dirstatestatus(self, match, ignored=False, clean=False, unknown=False):
        """Return matched files only in ``self._status``

        Uncommitted files appear "clean" via this context, even if
        they aren't actually so in the working directory.
        """
        if clean:
            clean = [f for f in self._manifest if f not in self._changedset]
        else:
            clean = []
        return scmutil.status([f for f in self._status.modified if match(f)],
                              [f for f in self._status.added if match(f)],
                              [f for f in self._status.removed if match(f)],
                              [], [], [], clean)

    @propertycache
    def _changedset(self):
        """Return the set of files changed in this context
        """
        changed = set(self._status.modified)
        changed.update(self._status.added)
        changed.update(self._status.removed)
        return changed

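The ``_changedset`` property is the union of the modified, added, and removed lists, and ``_dirstatestatus`` uses it to decide which manifest files still count as clean. The same computation over plain lists, as a standalone sketch (function names are illustrative):

```python
def changedset(modified, added, removed):
    # Union of every file touched by the change, as in _changedset above.
    changed = set(modified)
    changed.update(added)
    changed.update(removed)
    return changed


def cleanfiles(manifest, modified, added, removed):
    # A file is "clean" only if the manifest tracks it and nothing touched it.
    changed = changedset(modified, added, removed)
    return [f for f in manifest if f not in changed]
```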
def makecachingfilectxfn(func):
    """Create a filectxfn that caches based on the path.

    We can't use util.cachefunc because it uses all arguments as the cache
    key and this creates a cycle since the arguments include the repo and
    memctx.
    """
    cache = {}

    def getfilectx(repo, memctx, path):
        if path not in cache:
            cache[path] = func(repo, memctx, path)
        return cache[path]

    return getfilectx

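The docstring explains why a general-purpose memoizer won't do here: keying the cache on all arguments would make the cache retain the repo and memctx, creating a reference cycle. Keying on the path alone avoids that. A self-contained demonstration of the pattern (names are illustrative):

```python
def make_caching_filectxfn(func):
    # Cache keyed on the path only; repo/memctx never become part of a
    # cache key, so the cache holds no reference back to them.
    cache = {}

    def getfilectx(repo, memctx, path):
        if path not in cache:
            cache[path] = func(repo, memctx, path)
        return cache[path]

    return getfilectx


calls = []

def expensive(repo, memctx, path):
    # Stand-in for a costly file lookup; records each real invocation.
    calls.append(path)
    return 'contents-of-%s' % path

fn = make_caching_filectxfn(expensive)
```

A second lookup of the same path is served from the cache, so ``expensive`` runs once per path regardless of how many times the commit machinery asks for the file.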
def memfilefromctx(ctx):
    """Given a context return a memfilectx for ctx[path]

    This is a convenience method for building a memctx based on another
    context.
    """
    def getfilectx(repo, memctx, path):
        fctx = ctx[path]
        copysource = fctx.copysource()
        return memfilectx(repo, memctx, path, fctx.data(),
                          islink=fctx.islink(), isexec=fctx.isexec(),
                          copysource=copysource)

    return getfilectx

def memfilefrompatch(patchstore):
    """Given a patch (e.g. patchstore object) return a memfilectx

    This is a convenience method for building a memctx based on a patchstore.
    """
    def getfilectx(repo, memctx, path):
        data, mode, copysource = patchstore.getfile(path)
        if data is None:
            return None
        islink, isexec = mode
        return memfilectx(repo, memctx, path, data, islink=islink,
                          isexec=isexec, copysource=copysource)

    return getfilectx

class memctx(committablectx):
    """Use memctx to perform in-memory commits via localrepo.commitctx().

    Revision information is supplied at initialization time, while the
    related files' data is made available through a callback
    mechanism. 'repo' is the current localrepo, 'parents' is a
    sequence of two parent revisions identifiers (pass None for every
    missing parent), 'text' is the commit message and 'files' lists
    names of files touched by the revision (normalized and relative to
    repository root).

    filectxfn(repo, memctx, path) is a callable receiving the
    repository, the current memctx object and the normalized path of
    the requested file, relative to repository root. It is fired by the
    commit function for every file in 'files', but the call order is
    undefined. If the file is available in the revision being
    committed (updated or added), filectxfn returns a memfilectx
    object. If the file was removed, filectxfn returns None for recent
    Mercurial. Moved files are represented by marking the source file
    removed and the new file added with copy information (see
    memfilectx).

    user receives the committer name and defaults to the current
    repository username, date is the commit date in any format
    supported by dateutil.parsedate() and defaults to the current date,
    extra is a dictionary of metadata or is left empty.
    """

    # Mercurial <= 3.1 expects the filectxfn to raise IOError for missing files.
    # Extensions that need to retain compatibility across Mercurial 3.1 can use
    # this field to determine what to do in filectxfn.
    _returnnoneformissingfiles = True

    def __init__(self, repo, parents, text, files, filectxfn, user=None,
                 date=None, extra=None, branch=None, editor=False):
        super(memctx, self).__init__(repo, text, user, date, extra)
        self._rev = None
        self._node = None
        parents = [(p or nullid) for p in parents]
        p1, p2 = parents
        self._parents = [self._repo[p] for p in (p1, p2)]
        files = sorted(set(files))
        self._files = files
        if branch is not None:
            self._extra['branch'] = encoding.fromlocal(branch)
        self.substate = {}

        if isinstance(filectxfn, patch.filestore):
            filectxfn = memfilefrompatch(filectxfn)
        elif not callable(filectxfn):
            # if store is not callable, wrap it in a function
            filectxfn = memfilefromctx(filectxfn)

        # memoizing increases performance for e.g. vcs convert scenarios.
        self._filectxfn = makecachingfilectxfn(filectxfn)

        if editor:
            self._text = editor(self._repo, self, [])
            self._repo.savecommitmessage(self._text)

    def filectx(self, path, filelog=None):
        """get a file context from the working directory

        Returns None if file doesn't exist and should be removed."""
        return self._filectxfn(self._repo, self, path)

    def commit(self):
        """commit context to the repo"""
        return self._repo.commitctx(self)

    @propertycache
    def _manifest(self):
        """generate a manifest based on the return values of filectxfn"""

        # keep this simple for now; just worry about p1
        pctx = self._parents[0]
        man = pctx.manifest().copy()

        for f in self._status.modified:
            man[f] = modifiednodeid

        for f in self._status.added:
            man[f] = addednodeid

        for f in self._status.removed:
            if f in man:
                del man[f]

        return man

    @propertycache
    def _status(self):
        """Calculate exact status from ``files`` specified at construction
        """
        man1 = self.p1().manifest()
        p2 = self._parents[1]
        # "1 < len(self._parents)" can't be used for checking
        # existence of the 2nd parent, because "memctx._parents" is
        # explicitly initialized from a list whose length is always 2.
        if p2.node() != nullid:
            man2 = p2.manifest()
            managing = lambda f: f in man1 or f in man2
        else:
            managing = lambda f: f in man1

        modified, added, removed = [], [], []
        for f in self._files:
            if not managing(f):
                added.append(f)
            elif self[f]:
                modified.append(f)
            else:
                removed.append(f)

        return scmutil.status(modified, added, removed, [], [], [], [])

class memfilectx(committablefilectx):
    """memfilectx represents an in-memory file to commit.

    See memctx and committablefilectx for more details.
    """
    def __init__(self, repo, changectx, path, data, islink=False,
                 isexec=False, copysource=None):
        """
        path is the normalized file path relative to repository root.
        data is the file content as a string.
        islink is True if the file is a symbolic link.
        isexec is True if the file is executable.
        copysource is the source file path if the current file was copied
        in the revision being committed, or None."""
        super(memfilectx, self).__init__(repo, path, None, changectx)
        self._data = data
        if islink:
            self._flags = 'l'
        elif isexec:
            self._flags = 'x'
        else:
            self._flags = ''
        self._copied = None
        if copysource:
            self._copied = (copysource, nullid)

    def cmp(self, fctx):
        return self.data() != fctx.data()

    def data(self):
        return self._data

    def remove(self, ignoremissing=False):
        """wraps unlink for a repo's working directory"""
        # need to figure out what to do here
        del self._changectx[self._path]

    def write(self, data, flags, **kwargs):
        """wraps repo.wwrite"""
        self._data = data


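The constructor above collapses ``islink``/``isexec`` into a single flags character ('l' wins over 'x') and stores the renamed ``copysource`` argument as a ``(source, nullid)`` pair. A toy reconstruction of just that logic, with no Mercurial imports (``NULLID`` stands in for ``mercurial.node.nullid``, and ``MemFile`` is an illustrative name):

```python
NULLID = b'\0' * 20  # stand-in for mercurial.node.nullid


class MemFile(object):
    def __init__(self, path, data, islink=False, isexec=False,
                 copysource=None):
        self.path = path
        self.data = data
        # 'l' (symlink) takes precedence over 'x' (executable),
        # matching the if/elif order in memfilectx.__init__.
        if islink:
            self.flags = 'l'
        elif isexec:
            self.flags = 'x'
        else:
            self.flags = ''
        # The renamed keyword: callers now pass copysource=..., and the
        # stored copy record pairs it with the null node id.
        self.copied = (copysource, NULLID) if copysource else None
```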
class metadataonlyctx(committablectx):
    """Like memctx, but it reuses the manifest of a different commit.
    Intended to be used by lightweight operations that are creating
    metadata-only changes.

    Revision information is supplied at initialization time. 'repo' is the
    current localrepo, 'originalctx' is the original revision whose manifest
    we're reusing, 'parents' is a sequence of two parent revisions identifiers
    (pass None for every missing parent), 'text' is the commit message.

    user receives the committer name and defaults to the current repository
    username, date is the commit date in any format supported by
    dateutil.parsedate() and defaults to the current date, extra is a
    dictionary of metadata or is left empty.
    """
    def __init__(self, repo, originalctx, parents=None, text=None, user=None,
                 date=None, extra=None, editor=False):
        if text is None:
            text = originalctx.description()
        super(metadataonlyctx, self).__init__(repo, text, user, date, extra)
        self._rev = None
        self._node = None
        self._originalctx = originalctx
        self._manifestnode = originalctx.manifestnode()
        if parents is None:
            parents = originalctx.parents()
        else:
            parents = [repo[p] for p in parents if p is not None]
        parents = parents[:]
        while len(parents) < 2:
            parents.append(repo[nullid])
        p1, p2 = self._parents = parents

        # sanity check to ensure that the reused manifest parents are
        # manifests of our commit parents
        mp1, mp2 = self.manifestctx().parents
        if p1 != nullid and p1.manifestnode() != mp1:
            raise RuntimeError(r"can't reuse the manifest: its p1 "
                               r"doesn't match the new ctx p1")
        if p2 != nullid and p2.manifestnode() != mp2:
            raise RuntimeError(r"can't reuse the manifest: "
                               r"its p2 doesn't match the new ctx p2")

        self._files = originalctx.files()
        self.substate = {}

        if editor:
            self._text = editor(self._repo, self, [])
            self._repo.savecommitmessage(self._text)

    def manifestnode(self):
        return self._manifestnode

    @property
    def _manifestctx(self):
        return self._repo.manifestlog[self._manifestnode]

    def filectx(self, path, filelog=None):
        return self._originalctx.filectx(path, filelog=filelog)

    def commit(self):
        """commit context to the repo"""
        return self._repo.commitctx(self)

    @property
    def _manifest(self):
        return self._originalctx.manifest()

    @propertycache
    def _status(self):
        """Calculate exact status from ``files`` specified in the ``origctx``
        and parents manifests.
        """
        man1 = self.p1().manifest()
        p2 = self._parents[1]
        # "1 < len(self._parents)" can't be used for checking
        # existence of the 2nd parent, because "metadataonlyctx._parents" is
        # explicitly initialized from a list whose length is always 2.
        if p2.node() != nullid:
            man2 = p2.manifest()
            managing = lambda f: f in man1 or f in man2
        else:
            managing = lambda f: f in man1

        modified, added, removed = [], [], []
        for f in self._files:
            if not managing(f):
                added.append(f)
            elif f in self:
                modified.append(f)
            else:
                removed.append(f)

        return scmutil.status(modified, added, removed, [], [], [], [])

class arbitraryfilectx(object):
    """Allows you to use filectx-like functions on a file in an arbitrary
    location on disk, possibly not in the working directory.
    """
    def __init__(self, path, repo=None):
        # Repo is optional because contrib/simplemerge uses this class.
        self._repo = repo
        self._path = path

    def cmp(self, fctx):
        # filecmp follows symlinks whereas `cmp` should not, so skip the fast
        # path if either side is a symlink.
        symlinks = ('l' in self.flags() or 'l' in fctx.flags())
        if not symlinks and isinstance(fctx, workingfilectx) and self._repo:
            # Add a fast-path for merge if both sides are disk-backed.
            # Note that filecmp uses the opposite return values (True if same)
            # from our cmp functions (True if different).
            return not filecmp.cmp(self.path(), self._repo.wjoin(fctx.path()))
        return self.data() != fctx.data()

    def path(self):
        return self._path

    def flags(self):
        return ''

    def data(self):
        return util.readfile(self._path)

    def decodeddata(self):
        with open(self._path, "rb") as f:
            return f.read()

    def remove(self):
        util.unlink(self._path)

    def write(self, data, flags, **kwargs):
        assert not flags
        with open(self._path, "wb") as f:
            f.write(data)
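As the comment in ``cmp`` notes, ``filecmp.cmp`` returns True when files match, while these ``cmp`` methods return True when they differ, hence the negation. A runnable check of that inversion on real temporary files (this sketch passes ``shallow=False`` to force a content comparison; the code above relies on filecmp's default shallow stat-based check):

```python
import filecmp
import os
import tempfile


def differs(path1, path2):
    # filecmp.cmp is True when the files MATCH; the cmp convention in this
    # module is True when they DIFFER, so the result is negated.
    return not filecmp.cmp(path1, path2, shallow=False)


# Two files with identical content and one with different content.
d = tempfile.mkdtemp()
a, b, c = (os.path.join(d, name) for name in ('a', 'b', 'c'))
for path, data in ((a, b'same'), (b, b'same'), (c, b'other')):
    with open(path, 'wb') as fobj:
        fobj.write(data)
```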