update: clarify update() call sites by specifying argument names...
Martin von Zweigbergk
r40721:b14fdf1f default

Note: the full diff is too big for this view; the content below is truncated.
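For orientation before the truncated diff: the only change visible in the portion shown is in the ``base`` action's ``run()`` method, where positional boolean arguments to ``mergemod.update()`` are replaced by explicit keyword arguments. A minimal before/after sketch of that hunk:

    # before: the reader must recall the parameter order to know what False/True mean
    mergemod.update(self.repo, self.node, False, True)  # branchmerge, force

    # after: the intent is explicit at the call site
    mergemod.update(self.repo, self.node, branchmerge=False, force=True)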

@@ -1,1659 +1,1658 @@
1 # histedit.py - interactive history editing for mercurial
1 # histedit.py - interactive history editing for mercurial
2 #
2 #
3 # Copyright 2009 Augie Fackler <raf@durin42.com>
3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 """interactive history editing
7 """interactive history editing
8
8
9 With this extension installed, Mercurial gains one new command: histedit. Usage
9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 is as follows, assuming the following history::
10 is as follows, assuming the following history::
11
11
12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 | Add delta
13 | Add delta
14 |
14 |
15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 | Add gamma
16 | Add gamma
17 |
17 |
18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 | Add beta
19 | Add beta
20 |
20 |
21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 Add alpha
22 Add alpha
23
23
24 If you were to run ``hg histedit c561b4e977df``, you would see the following
24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 file open in your editor::
25 file open in your editor::
26
26
27 pick c561b4e977df Add beta
27 pick c561b4e977df Add beta
28 pick 030b686bedc4 Add gamma
28 pick 030b686bedc4 Add gamma
29 pick 7c2fd3b9020c Add delta
29 pick 7c2fd3b9020c Add delta
30
30
31 # Edit history between c561b4e977df and 7c2fd3b9020c
31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 #
32 #
33 # Commits are listed from least to most recent
33 # Commits are listed from least to most recent
34 #
34 #
35 # Commands:
35 # Commands:
36 # p, pick = use commit
36 # p, pick = use commit
37 # e, edit = use commit, but stop for amending
37 # e, edit = use commit, but stop for amending
38 # f, fold = use commit, but combine it with the one above
38 # f, fold = use commit, but combine it with the one above
39 # r, roll = like fold, but discard this commit's description and date
39 # r, roll = like fold, but discard this commit's description and date
40 # d, drop = remove commit from history
40 # d, drop = remove commit from history
41 # m, mess = edit commit message without changing commit content
41 # m, mess = edit commit message without changing commit content
42 # b, base = checkout changeset and apply further changesets from there
42 # b, base = checkout changeset and apply further changesets from there
43 #
43 #
44
44
45 In this file, lines beginning with ``#`` are ignored. You must specify a rule
45 In this file, lines beginning with ``#`` are ignored. You must specify a rule
46 for each revision in your history. For example, if you had meant to add gamma
46 for each revision in your history. For example, if you had meant to add gamma
47 before beta, and then wanted to add delta in the same revision as beta, you
47 before beta, and then wanted to add delta in the same revision as beta, you
48 would reorganize the file to look like this::
48 would reorganize the file to look like this::
49
49
50 pick 030b686bedc4 Add gamma
50 pick 030b686bedc4 Add gamma
51 pick c561b4e977df Add beta
51 pick c561b4e977df Add beta
52 fold 7c2fd3b9020c Add delta
52 fold 7c2fd3b9020c Add delta
53
53
54 # Edit history between c561b4e977df and 7c2fd3b9020c
54 # Edit history between c561b4e977df and 7c2fd3b9020c
55 #
55 #
56 # Commits are listed from least to most recent
56 # Commits are listed from least to most recent
57 #
57 #
58 # Commands:
58 # Commands:
59 # p, pick = use commit
59 # p, pick = use commit
60 # e, edit = use commit, but stop for amending
60 # e, edit = use commit, but stop for amending
61 # f, fold = use commit, but combine it with the one above
61 # f, fold = use commit, but combine it with the one above
62 # r, roll = like fold, but discard this commit's description and date
62 # r, roll = like fold, but discard this commit's description and date
63 # d, drop = remove commit from history
63 # d, drop = remove commit from history
64 # m, mess = edit commit message without changing commit content
64 # m, mess = edit commit message without changing commit content
65 # b, base = checkout changeset and apply further changesets from there
65 # b, base = checkout changeset and apply further changesets from there
66 #
66 #
67
67
68 At which point you close the editor and ``histedit`` starts working. When you
68 At which point you close the editor and ``histedit`` starts working. When you
69 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
69 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
70 those revisions together, offering you a chance to clean up the commit message::
70 those revisions together, offering you a chance to clean up the commit message::
71
71
72 Add beta
72 Add beta
73 ***
73 ***
74 Add delta
74 Add delta
75
75
76 Edit the commit message to your liking, then close the editor. The date used
76 Edit the commit message to your liking, then close the editor. The date used
77 for the commit will be the later of the two commits' dates. For this example,
77 for the commit will be the later of the two commits' dates. For this example,
78 let's assume that the commit message was changed to ``Add beta and delta.``
78 let's assume that the commit message was changed to ``Add beta and delta.``
79 After histedit has run and had a chance to remove any old or temporary
79 After histedit has run and had a chance to remove any old or temporary
80 revisions it needed, the history looks like this::
80 revisions it needed, the history looks like this::
81
81
82 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
82 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
83 | Add beta and delta.
83 | Add beta and delta.
84 |
84 |
85 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
85 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
86 | Add gamma
86 | Add gamma
87 |
87 |
88 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
88 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
89 Add alpha
89 Add alpha
90
90
91 Note that ``histedit`` does *not* remove any revisions (even its own temporary
91 Note that ``histedit`` does *not* remove any revisions (even its own temporary
92 ones) until after it has completed all the editing operations, so it will
92 ones) until after it has completed all the editing operations, so it will
93 probably perform several strip operations when it's done. For the above example,
93 probably perform several strip operations when it's done. For the above example,
94 it had to run strip twice. Strip can be slow depending on a variety of factors,
94 it had to run strip twice. Strip can be slow depending on a variety of factors,
95 so you might need to be a little patient. You can choose to keep the original
95 so you might need to be a little patient. You can choose to keep the original
96 revisions by passing the ``--keep`` flag.
96 revisions by passing the ``--keep`` flag.
97
97
98 The ``edit`` operation will drop you back to a command prompt,
98 The ``edit`` operation will drop you back to a command prompt,
99 allowing you to edit files freely, or even use ``hg record`` to commit
99 allowing you to edit files freely, or even use ``hg record`` to commit
100 some changes as a separate commit. When you're done, any remaining
100 some changes as a separate commit. When you're done, any remaining
101 uncommitted changes will be committed as well. When done, run ``hg
101 uncommitted changes will be committed as well. When done, run ``hg
102 histedit --continue`` to finish this step. If there are uncommitted
102 histedit --continue`` to finish this step. If there are uncommitted
103 changes, you'll be prompted for a new commit message, but the default
103 changes, you'll be prompted for a new commit message, but the default
104 commit message will be the original message for the ``edit`` ed
104 commit message will be the original message for the ``edit`` ed
105 revision, and the date of the original commit will be preserved.
105 revision, and the date of the original commit will be preserved.
106
106
107 The ``message`` operation will give you a chance to revise a commit
107 The ``message`` operation will give you a chance to revise a commit
108 message without changing the contents. It's a shortcut for doing
108 message without changing the contents. It's a shortcut for doing
109 ``edit`` immediately followed by `hg histedit --continue``.
109 ``edit`` immediately followed by `hg histedit --continue``.
110
110
111 If ``histedit`` encounters a conflict when moving a revision (while
111 If ``histedit`` encounters a conflict when moving a revision (while
112 handling ``pick`` or ``fold``), it'll stop in a similar manner to
112 handling ``pick`` or ``fold``), it'll stop in a similar manner to
113 ``edit`` with the difference that it won't prompt you for a commit
113 ``edit`` with the difference that it won't prompt you for a commit
114 message when done. If you decide at this point that you don't like how
114 message when done. If you decide at this point that you don't like how
115 much work it will be to rearrange history, or that you made a mistake,
115 much work it will be to rearrange history, or that you made a mistake,
116 you can use ``hg histedit --abort`` to abandon the new changes you
116 you can use ``hg histedit --abort`` to abandon the new changes you
117 have made and return to the state before you attempted to edit your
117 have made and return to the state before you attempted to edit your
118 history.
118 history.
119
119
120 If we clone the histedit-ed example repository above and add four more
120 If we clone the histedit-ed example repository above and add four more
121 changes, such that we have the following history::
121 changes, such that we have the following history::
122
122
123 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
123 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
124 | Add theta
124 | Add theta
125 |
125 |
126 o 5 140988835471 2009-04-27 18:04 -0500 stefan
126 o 5 140988835471 2009-04-27 18:04 -0500 stefan
127 | Add eta
127 | Add eta
128 |
128 |
129 o 4 122930637314 2009-04-27 18:04 -0500 stefan
129 o 4 122930637314 2009-04-27 18:04 -0500 stefan
130 | Add zeta
130 | Add zeta
131 |
131 |
132 o 3 836302820282 2009-04-27 18:04 -0500 stefan
132 o 3 836302820282 2009-04-27 18:04 -0500 stefan
133 | Add epsilon
133 | Add epsilon
134 |
134 |
135 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
135 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
136 | Add beta and delta.
136 | Add beta and delta.
137 |
137 |
138 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
138 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
139 | Add gamma
139 | Add gamma
140 |
140 |
141 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
141 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
142 Add alpha
142 Add alpha
143
143
144 If you run ``hg histedit --outgoing`` on the clone then it is the same
144 If you run ``hg histedit --outgoing`` on the clone then it is the same
145 as running ``hg histedit 836302820282``. If you plan to push to a
145 as running ``hg histedit 836302820282``. If you plan to push to a
146 repository that Mercurial does not detect to be related to the source
146 repository that Mercurial does not detect to be related to the source
147 repo, you can add a ``--force`` option.
147 repo, you can add a ``--force`` option.
148
148
149 Config
149 Config
150 ------
150 ------
151
151
152 Histedit rule lines are truncated to 80 characters by default. You
152 Histedit rule lines are truncated to 80 characters by default. You
153 can customize this behavior by setting a different length in your
153 can customize this behavior by setting a different length in your
154 configuration file::
154 configuration file::
155
155
156 [histedit]
156 [histedit]
157 linelen = 120 # truncate rule lines at 120 characters
157 linelen = 120 # truncate rule lines at 120 characters
158
158
159 ``hg histedit`` attempts to automatically choose an appropriate base
159 ``hg histedit`` attempts to automatically choose an appropriate base
160 revision to use. To change which base revision is used, define a
160 revision to use. To change which base revision is used, define a
161 revset in your configuration file::
161 revset in your configuration file::
162
162
163 [histedit]
163 [histedit]
164 defaultrev = only(.) & draft()
164 defaultrev = only(.) & draft()
165
165
166 By default each edited revision needs to be present in histedit commands.
166 By default each edited revision needs to be present in histedit commands.
167 To remove a revision you need to use the ``drop`` operation. You can configure
167 To remove a revision you need to use the ``drop`` operation. You can configure
168 the drop to be implicit for missing commits by adding::
168 the drop to be implicit for missing commits by adding::
169
169
170 [histedit]
170 [histedit]
171 dropmissing = True
171 dropmissing = True
172
172
173 By default, histedit will close the transaction after each action. For
173 By default, histedit will close the transaction after each action. For
174 performance purposes, you can configure histedit to use a single transaction
174 performance purposes, you can configure histedit to use a single transaction
175 across the entire histedit. WARNING: This setting introduces a significant risk
175 across the entire histedit. WARNING: This setting introduces a significant risk
176 of losing the work you've done in a histedit if the histedit aborts
176 of losing the work you've done in a histedit if the histedit aborts
177 unexpectedly::
177 unexpectedly::
178
178
179 [histedit]
179 [histedit]
180 singletransaction = True
180 singletransaction = True
181
181
182 """
182 """
183
183
184 from __future__ import absolute_import
184 from __future__ import absolute_import
185
185
186 import os
186 import os
187
187
188 from mercurial.i18n import _
188 from mercurial.i18n import _
189 from mercurial import (
189 from mercurial import (
190 bundle2,
190 bundle2,
191 cmdutil,
191 cmdutil,
192 context,
192 context,
193 copies,
193 copies,
194 destutil,
194 destutil,
195 discovery,
195 discovery,
196 error,
196 error,
197 exchange,
197 exchange,
198 extensions,
198 extensions,
199 hg,
199 hg,
200 lock,
200 lock,
201 merge as mergemod,
201 merge as mergemod,
202 mergeutil,
202 mergeutil,
203 node,
203 node,
204 obsolete,
204 obsolete,
205 pycompat,
205 pycompat,
206 registrar,
206 registrar,
207 repair,
207 repair,
208 scmutil,
208 scmutil,
209 state as statemod,
209 state as statemod,
210 util,
210 util,
211 )
211 )
212 from mercurial.utils import (
212 from mercurial.utils import (
213 stringutil,
213 stringutil,
214 )
214 )
215
215
216 pickle = util.pickle
216 pickle = util.pickle
217 release = lock.release
217 release = lock.release
218 cmdtable = {}
218 cmdtable = {}
219 command = registrar.command(cmdtable)
219 command = registrar.command(cmdtable)
220
220
221 configtable = {}
221 configtable = {}
222 configitem = registrar.configitem(configtable)
222 configitem = registrar.configitem(configtable)
223 configitem('experimental', 'histedit.autoverb',
223 configitem('experimental', 'histedit.autoverb',
224 default=False,
224 default=False,
225 )
225 )
226 configitem('histedit', 'defaultrev',
226 configitem('histedit', 'defaultrev',
227 default=None,
227 default=None,
228 )
228 )
229 configitem('histedit', 'dropmissing',
229 configitem('histedit', 'dropmissing',
230 default=False,
230 default=False,
231 )
231 )
232 configitem('histedit', 'linelen',
232 configitem('histedit', 'linelen',
233 default=80,
233 default=80,
234 )
234 )
235 configitem('histedit', 'singletransaction',
235 configitem('histedit', 'singletransaction',
236 default=False,
236 default=False,
237 )
237 )
238
238
239 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
239 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
240 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
240 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
241 # be specifying the version(s) of Mercurial they are tested with, or
241 # be specifying the version(s) of Mercurial they are tested with, or
242 # leave the attribute unspecified.
242 # leave the attribute unspecified.
243 testedwith = 'ships-with-hg-core'
243 testedwith = 'ships-with-hg-core'
244
244
245 actiontable = {}
245 actiontable = {}
246 primaryactions = set()
246 primaryactions = set()
247 secondaryactions = set()
247 secondaryactions = set()
248 tertiaryactions = set()
248 tertiaryactions = set()
249 internalactions = set()
249 internalactions = set()
250
250
251 def geteditcomment(ui, first, last):
251 def geteditcomment(ui, first, last):
252 """ construct the editor comment
252 """ construct the editor comment
253 The comment includes::
253 The comment includes::
254 - an intro
254 - an intro
255 - sorted primary commands
255 - sorted primary commands
256 - sorted short commands
256 - sorted short commands
257 - sorted long commands
257 - sorted long commands
258 - additional hints
258 - additional hints
259
259
260 Commands are only included once.
260 Commands are only included once.
261 """
261 """
262 intro = _("""Edit history between %s and %s
262 intro = _("""Edit history between %s and %s
263
263
264 Commits are listed from least to most recent
264 Commits are listed from least to most recent
265
265
266 You can reorder changesets by reordering the lines
266 You can reorder changesets by reordering the lines
267
267
268 Commands:
268 Commands:
269 """)
269 """)
270 actions = []
270 actions = []
271 def addverb(v):
271 def addverb(v):
272 a = actiontable[v]
272 a = actiontable[v]
273 lines = a.message.split("\n")
273 lines = a.message.split("\n")
274 if len(a.verbs):
274 if len(a.verbs):
275 v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
275 v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
276 actions.append(" %s = %s" % (v, lines[0]))
276 actions.append(" %s = %s" % (v, lines[0]))
277 actions.extend([' %s' for l in lines[1:]])
277 actions.extend([' %s' for l in lines[1:]])
278
278
279 for v in (
279 for v in (
280 sorted(primaryactions) +
280 sorted(primaryactions) +
281 sorted(secondaryactions) +
281 sorted(secondaryactions) +
282 sorted(tertiaryactions)
282 sorted(tertiaryactions)
283 ):
283 ):
284 addverb(v)
284 addverb(v)
285 actions.append('')
285 actions.append('')
286
286
287 hints = []
287 hints = []
288 if ui.configbool('histedit', 'dropmissing'):
288 if ui.configbool('histedit', 'dropmissing'):
289 hints.append("Deleting a changeset from the list "
289 hints.append("Deleting a changeset from the list "
290 "will DISCARD it from the edited history!")
290 "will DISCARD it from the edited history!")
291
291
292 lines = (intro % (first, last)).split('\n') + actions + hints
292 lines = (intro % (first, last)).split('\n') + actions + hints
293
293
294 return ''.join(['# %s\n' % l if l else '#\n' for l in lines])
294 return ''.join(['# %s\n' % l if l else '#\n' for l in lines])
295
295
296 class histeditstate(object):
296 class histeditstate(object):
297 def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
297 def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
298 topmost=None, replacements=None, lock=None, wlock=None):
298 topmost=None, replacements=None, lock=None, wlock=None):
299 self.repo = repo
299 self.repo = repo
300 self.actions = actions
300 self.actions = actions
301 self.keep = keep
301 self.keep = keep
302 self.topmost = topmost
302 self.topmost = topmost
303 self.parentctxnode = parentctxnode
303 self.parentctxnode = parentctxnode
304 self.lock = lock
304 self.lock = lock
305 self.wlock = wlock
305 self.wlock = wlock
306 self.backupfile = None
306 self.backupfile = None
307 self.stateobj = statemod.cmdstate(repo, 'histedit-state')
307 self.stateobj = statemod.cmdstate(repo, 'histedit-state')
308 if replacements is None:
308 if replacements is None:
309 self.replacements = []
309 self.replacements = []
310 else:
310 else:
311 self.replacements = replacements
311 self.replacements = replacements
312
312
313 def read(self):
313 def read(self):
314 """Load histedit state from disk and set fields appropriately."""
314 """Load histedit state from disk and set fields appropriately."""
315 if not self.stateobj.exists():
315 if not self.stateobj.exists():
316 cmdutil.wrongtooltocontinue(self.repo, _('histedit'))
316 cmdutil.wrongtooltocontinue(self.repo, _('histedit'))
317
317
318 data = self._read()
318 data = self._read()
319
319
320 self.parentctxnode = data['parentctxnode']
320 self.parentctxnode = data['parentctxnode']
321 actions = parserules(data['rules'], self)
321 actions = parserules(data['rules'], self)
322 self.actions = actions
322 self.actions = actions
323 self.keep = data['keep']
323 self.keep = data['keep']
324 self.topmost = data['topmost']
324 self.topmost = data['topmost']
325 self.replacements = data['replacements']
325 self.replacements = data['replacements']
326 self.backupfile = data['backupfile']
326 self.backupfile = data['backupfile']
327
327
328 def _read(self):
328 def _read(self):
329 fp = self.repo.vfs.read('histedit-state')
329 fp = self.repo.vfs.read('histedit-state')
330 if fp.startswith('v1\n'):
330 if fp.startswith('v1\n'):
331 data = self._load()
331 data = self._load()
332 parentctxnode, rules, keep, topmost, replacements, backupfile = data
332 parentctxnode, rules, keep, topmost, replacements, backupfile = data
333 else:
333 else:
334 data = pickle.loads(fp)
334 data = pickle.loads(fp)
335 parentctxnode, rules, keep, topmost, replacements = data
335 parentctxnode, rules, keep, topmost, replacements = data
336 backupfile = None
336 backupfile = None
337 rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
337 rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
338
338
339 return {'parentctxnode': parentctxnode, "rules": rules, "keep": keep,
339 return {'parentctxnode': parentctxnode, "rules": rules, "keep": keep,
340 "topmost": topmost, "replacements": replacements,
340 "topmost": topmost, "replacements": replacements,
341 "backupfile": backupfile}
341 "backupfile": backupfile}
342
342
343 def write(self, tr=None):
343 def write(self, tr=None):
344 if tr:
344 if tr:
345 tr.addfilegenerator('histedit-state', ('histedit-state',),
345 tr.addfilegenerator('histedit-state', ('histedit-state',),
346 self._write, location='plain')
346 self._write, location='plain')
347 else:
347 else:
348 with self.repo.vfs("histedit-state", "w") as f:
348 with self.repo.vfs("histedit-state", "w") as f:
349 self._write(f)
349 self._write(f)
350
350
351 def _write(self, fp):
351 def _write(self, fp):
352 fp.write('v1\n')
352 fp.write('v1\n')
353 fp.write('%s\n' % node.hex(self.parentctxnode))
353 fp.write('%s\n' % node.hex(self.parentctxnode))
354 fp.write('%s\n' % node.hex(self.topmost))
354 fp.write('%s\n' % node.hex(self.topmost))
355 fp.write('%s\n' % ('True' if self.keep else 'False'))
355 fp.write('%s\n' % ('True' if self.keep else 'False'))
356 fp.write('%d\n' % len(self.actions))
356 fp.write('%d\n' % len(self.actions))
357 for action in self.actions:
357 for action in self.actions:
358 fp.write('%s\n' % action.tostate())
358 fp.write('%s\n' % action.tostate())
359 fp.write('%d\n' % len(self.replacements))
359 fp.write('%d\n' % len(self.replacements))
360 for replacement in self.replacements:
360 for replacement in self.replacements:
361 fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
361 fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
362 for r in replacement[1])))
362 for r in replacement[1])))
363 backupfile = self.backupfile
363 backupfile = self.backupfile
364 if not backupfile:
364 if not backupfile:
365 backupfile = ''
365 backupfile = ''
366 fp.write('%s\n' % backupfile)
366 fp.write('%s\n' % backupfile)
367
367
368 def _load(self):
368 def _load(self):
369 fp = self.repo.vfs('histedit-state', 'r')
369 fp = self.repo.vfs('histedit-state', 'r')
370 lines = [l[:-1] for l in fp.readlines()]
370 lines = [l[:-1] for l in fp.readlines()]
371
371
372 index = 0
372 index = 0
373 lines[index] # version number
373 lines[index] # version number
374 index += 1
374 index += 1
375
375
376 parentctxnode = node.bin(lines[index])
376 parentctxnode = node.bin(lines[index])
377 index += 1
377 index += 1
378
378
379 topmost = node.bin(lines[index])
379 topmost = node.bin(lines[index])
380 index += 1
380 index += 1
381
381
382 keep = lines[index] == 'True'
382 keep = lines[index] == 'True'
383 index += 1
383 index += 1
384
384
385 # Rules
385 # Rules
386 rules = []
386 rules = []
387 rulelen = int(lines[index])
387 rulelen = int(lines[index])
388 index += 1
388 index += 1
389 for i in pycompat.xrange(rulelen):
389 for i in pycompat.xrange(rulelen):
390 ruleaction = lines[index]
390 ruleaction = lines[index]
391 index += 1
391 index += 1
392 rule = lines[index]
392 rule = lines[index]
393 index += 1
393 index += 1
394 rules.append((ruleaction, rule))
394 rules.append((ruleaction, rule))
395
395
396 # Replacements
396 # Replacements
397 replacements = []
397 replacements = []
398 replacementlen = int(lines[index])
398 replacementlen = int(lines[index])
399 index += 1
399 index += 1
400 for i in pycompat.xrange(replacementlen):
400 for i in pycompat.xrange(replacementlen):
401 replacement = lines[index]
401 replacement = lines[index]
402 original = node.bin(replacement[:40])
402 original = node.bin(replacement[:40])
403 succ = [node.bin(replacement[i:i + 40]) for i in
403 succ = [node.bin(replacement[i:i + 40]) for i in
404 range(40, len(replacement), 40)]
404 range(40, len(replacement), 40)]
405 replacements.append((original, succ))
405 replacements.append((original, succ))
406 index += 1
406 index += 1
407
407
408 backupfile = lines[index]
408 backupfile = lines[index]
409 index += 1
409 index += 1
410
410
411 fp.close()
411 fp.close()
412
412
413 return parentctxnode, rules, keep, topmost, replacements, backupfile
413 return parentctxnode, rules, keep, topmost, replacements, backupfile
414
414
415 def clear(self):
415 def clear(self):
416 if self.inprogress():
416 if self.inprogress():
417 self.repo.vfs.unlink('histedit-state')
417 self.repo.vfs.unlink('histedit-state')
418
418
419 def inprogress(self):
419 def inprogress(self):
420 return self.repo.vfs.exists('histedit-state')
420 return self.repo.vfs.exists('histedit-state')
421
421
422
422
423 class histeditaction(object):
423 class histeditaction(object):
424 def __init__(self, state, node):
424 def __init__(self, state, node):
425 self.state = state
425 self.state = state
426 self.repo = state.repo
426 self.repo = state.repo
427 self.node = node
427 self.node = node
428
428
429 @classmethod
429 @classmethod
430 def fromrule(cls, state, rule):
430 def fromrule(cls, state, rule):
431 """Parses the given rule, returning an instance of the histeditaction.
431 """Parses the given rule, returning an instance of the histeditaction.
432 """
432 """
433 ruleid = rule.strip().split(' ', 1)[0]
433 ruleid = rule.strip().split(' ', 1)[0]
434 # ruleid can be anything from rev numbers, hashes, "bookmarks" etc
434 # ruleid can be anything from rev numbers, hashes, "bookmarks" etc
435 # Check for validation of rule ids and get the rulehash
435 # Check for validation of rule ids and get the rulehash
436 try:
436 try:
437 rev = node.bin(ruleid)
437 rev = node.bin(ruleid)
438 except TypeError:
438 except TypeError:
439 try:
439 try:
440 _ctx = scmutil.revsingle(state.repo, ruleid)
440 _ctx = scmutil.revsingle(state.repo, ruleid)
441 rulehash = _ctx.hex()
441 rulehash = _ctx.hex()
442 rev = node.bin(rulehash)
442 rev = node.bin(rulehash)
443 except error.RepoLookupError:
443 except error.RepoLookupError:
444 raise error.ParseError(_("invalid changeset %s") % ruleid)
444 raise error.ParseError(_("invalid changeset %s") % ruleid)
445 return cls(state, rev)
445 return cls(state, rev)
446
446
447 def verify(self, prev, expected, seen):
447 def verify(self, prev, expected, seen):
448 """ Verifies semantic correctness of the rule"""
448 """ Verifies semantic correctness of the rule"""
449 repo = self.repo
449 repo = self.repo
450 ha = node.hex(self.node)
450 ha = node.hex(self.node)
451 self.node = scmutil.resolvehexnodeidprefix(repo, ha)
451 self.node = scmutil.resolvehexnodeidprefix(repo, ha)
452 if self.node is None:
452 if self.node is None:
453 raise error.ParseError(_('unknown changeset %s listed') % ha[:12])
453 raise error.ParseError(_('unknown changeset %s listed') % ha[:12])
454 self._verifynodeconstraints(prev, expected, seen)
454 self._verifynodeconstraints(prev, expected, seen)
455
455
456 def _verifynodeconstraints(self, prev, expected, seen):
456 def _verifynodeconstraints(self, prev, expected, seen):
457 # by default commands need a node in the edited list
457 # by default commands need a node in the edited list
458 if self.node not in expected:
458 if self.node not in expected:
459 raise error.ParseError(_('%s "%s" changeset was not a candidate')
459 raise error.ParseError(_('%s "%s" changeset was not a candidate')
460 % (self.verb, node.short(self.node)),
460 % (self.verb, node.short(self.node)),
461 hint=_('only use listed changesets'))
461 hint=_('only use listed changesets'))
462 # and only one command per node
462 # and only one command per node
463 if self.node in seen:
463 if self.node in seen:
464 raise error.ParseError(_('duplicated command for changeset %s') %
464 raise error.ParseError(_('duplicated command for changeset %s') %
465 node.short(self.node))
465 node.short(self.node))
466
466
467 def torule(self):
467 def torule(self):
468 """build a histedit rule line for an action
468 """build a histedit rule line for an action
469
469
470 by default lines are in the form:
470 by default lines are in the form:
471 <hash> <rev> <summary>
471 <hash> <rev> <summary>
472 """
472 """
473 ctx = self.repo[self.node]
473 ctx = self.repo[self.node]
474 summary = _getsummary(ctx)
474 summary = _getsummary(ctx)
475 line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
475 line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
476 # trim to 75 columns by default so it's not stupidly wide in my editor
476 # trim to 75 columns by default so it's not stupidly wide in my editor
477 # (the 5 more are left for verb)
477 # (the 5 more are left for verb)
478 maxlen = self.repo.ui.configint('histedit', 'linelen')
478 maxlen = self.repo.ui.configint('histedit', 'linelen')
479 maxlen = max(maxlen, 22) # avoid truncating hash
479 maxlen = max(maxlen, 22) # avoid truncating hash
480 return stringutil.ellipsis(line, maxlen)
480 return stringutil.ellipsis(line, maxlen)
481
481
482 def tostate(self):
482 def tostate(self):
483 """Print an action in format used by histedit state files
483 """Print an action in format used by histedit state files
484 (the first line is a verb, the remainder is the second)
484 (the first line is a verb, the remainder is the second)
485 """
485 """
486 return "%s\n%s" % (self.verb, node.hex(self.node))
486 return "%s\n%s" % (self.verb, node.hex(self.node))
487
487
488 def run(self):
488 def run(self):
489 """Runs the action. The default behavior is simply apply the action's
489 """Runs the action. The default behavior is simply apply the action's
490 rulectx onto the current parentctx."""
490 rulectx onto the current parentctx."""
491 self.applychange()
491 self.applychange()
492 self.continuedirty()
492 self.continuedirty()
493 return self.continueclean()
493 return self.continueclean()
494
494
495 def applychange(self):
495 def applychange(self):
496 """Applies the changes from this action's rulectx onto the current
496 """Applies the changes from this action's rulectx onto the current
497 parentctx, but does not commit them."""
497 parentctx, but does not commit them."""
498 repo = self.repo
498 repo = self.repo
499 rulectx = repo[self.node]
499 rulectx = repo[self.node]
500 repo.ui.pushbuffer(error=True, labeled=True)
500 repo.ui.pushbuffer(error=True, labeled=True)
501 hg.update(repo, self.state.parentctxnode, quietempty=True)
501 hg.update(repo, self.state.parentctxnode, quietempty=True)
502 stats = applychanges(repo.ui, repo, rulectx, {})
502 stats = applychanges(repo.ui, repo, rulectx, {})
503 repo.dirstate.setbranch(rulectx.branch())
503 repo.dirstate.setbranch(rulectx.branch())
504 if stats.unresolvedcount:
504 if stats.unresolvedcount:
505 buf = repo.ui.popbuffer()
505 buf = repo.ui.popbuffer()
506 repo.ui.write(buf)
506 repo.ui.write(buf)
507 raise error.InterventionRequired(
507 raise error.InterventionRequired(
508 _('Fix up the change (%s %s)') %
508 _('Fix up the change (%s %s)') %
509 (self.verb, node.short(self.node)),
509 (self.verb, node.short(self.node)),
510 hint=_('hg histedit --continue to resume'))
510 hint=_('hg histedit --continue to resume'))
511 else:
511 else:
512 repo.ui.popbuffer()
512 repo.ui.popbuffer()
513
513
514 def continuedirty(self):
514 def continuedirty(self):
515 """Continues the action when changes have been applied to the working
515 """Continues the action when changes have been applied to the working
516 copy. The default behavior is to commit the dirty changes."""
516 copy. The default behavior is to commit the dirty changes."""
517 repo = self.repo
517 repo = self.repo
518 rulectx = repo[self.node]
518 rulectx = repo[self.node]
519
519
520 editor = self.commiteditor()
520 editor = self.commiteditor()
521 commit = commitfuncfor(repo, rulectx)
521 commit = commitfuncfor(repo, rulectx)
522
522
523 commit(text=rulectx.description(), user=rulectx.user(),
523 commit(text=rulectx.description(), user=rulectx.user(),
524 date=rulectx.date(), extra=rulectx.extra(), editor=editor)
524 date=rulectx.date(), extra=rulectx.extra(), editor=editor)
525
525
526 def commiteditor(self):
526 def commiteditor(self):
527 """The editor to be used to edit the commit message."""
527 """The editor to be used to edit the commit message."""
528 return False
528 return False
529
529
530 def continueclean(self):
530 def continueclean(self):
531 """Continues the action when the working copy is clean. The default
531 """Continues the action when the working copy is clean. The default
532 behavior is to accept the current commit as the new version of the
532 behavior is to accept the current commit as the new version of the
533 rulectx."""
533 rulectx."""
534 ctx = self.repo['.']
534 ctx = self.repo['.']
535 if ctx.node() == self.state.parentctxnode:
535 if ctx.node() == self.state.parentctxnode:
536 self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
536 self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
537 node.short(self.node))
537 node.short(self.node))
538 return ctx, [(self.node, tuple())]
538 return ctx, [(self.node, tuple())]
539 if ctx.node() == self.node:
539 if ctx.node() == self.node:
540 # Nothing changed
540 # Nothing changed
541 return ctx, []
541 return ctx, []
542 return ctx, [(self.node, (ctx.node(),))]
542 return ctx, [(self.node, (ctx.node(),))]
543
543
544 def commitfuncfor(repo, src):
544 def commitfuncfor(repo, src):
545 """Build a commit function for the replacement of <src>
545 """Build a commit function for the replacement of <src>
546
546
547 This function ensures we apply the same treatment to all changesets.
547 This function ensures we apply the same treatment to all changesets.
548
548
549 - Add a 'histedit_source' entry in extra.
549 - Add a 'histedit_source' entry in extra.
550
550
551 Note that fold has its own separated logic because its handling is a bit
551 Note that fold has its own separated logic because its handling is a bit
552 different and not easily factored out of the fold method.
552 different and not easily factored out of the fold method.
553 """
553 """
554 phasemin = src.phase()
554 phasemin = src.phase()
555 def commitfunc(**kwargs):
555 def commitfunc(**kwargs):
556 overrides = {('phases', 'new-commit'): phasemin}
556 overrides = {('phases', 'new-commit'): phasemin}
557 with repo.ui.configoverride(overrides, 'histedit'):
557 with repo.ui.configoverride(overrides, 'histedit'):
558 extra = kwargs.get(r'extra', {}).copy()
558 extra = kwargs.get(r'extra', {}).copy()
559 extra['histedit_source'] = src.hex()
559 extra['histedit_source'] = src.hex()
560 kwargs[r'extra'] = extra
560 kwargs[r'extra'] = extra
561 return repo.commit(**kwargs)
561 return repo.commit(**kwargs)
562 return commitfunc
562 return commitfunc
563
563
564 def applychanges(ui, repo, ctx, opts):
564 def applychanges(ui, repo, ctx, opts):
565 """Merge changeset from ctx (only) in the current working directory"""
565 """Merge changeset from ctx (only) in the current working directory"""
566 wcpar = repo.dirstate.parents()[0]
566 wcpar = repo.dirstate.parents()[0]
567 if ctx.p1().node() == wcpar:
567 if ctx.p1().node() == wcpar:
568 # edits are "in place": we do not need to make any merge,
568 # edits are "in place": we do not need to make any merge,
569 # just apply changes on the parent for editing
569 # just apply changes on the parent for editing
570 cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
570 cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
571 stats = mergemod.updateresult(0, 0, 0, 0)
571 stats = mergemod.updateresult(0, 0, 0, 0)
572 else:
572 else:
573 try:
573 try:
574 # ui.forcemerge is an internal variable, do not document
574 # ui.forcemerge is an internal variable, do not document
575 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
575 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
576 'histedit')
576 'histedit')
577 stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
577 stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
578 finally:
578 finally:
579 repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
579 repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
580 return stats
580 return stats
581
581
582 def collapse(repo, firstctx, lastctx, commitopts, skipprompt=False):
582 def collapse(repo, firstctx, lastctx, commitopts, skipprompt=False):
583 """collapse the set of revisions from first to last as new one.
583 """collapse the set of revisions from first to last as new one.
584
584
585 Expected commit options are:
585 Expected commit options are:
586 - message
586 - message
587 - date
587 - date
588 - username
588 - username
589 Commit message is edited in all cases.
589 Commit message is edited in all cases.
590
590
591 This function works in memory."""
591 This function works in memory."""
592 ctxs = list(repo.set('%d::%d', firstctx.rev(), lastctx.rev()))
592 ctxs = list(repo.set('%d::%d', firstctx.rev(), lastctx.rev()))
593 if not ctxs:
593 if not ctxs:
594 return None
594 return None
595 for c in ctxs:
595 for c in ctxs:
596 if not c.mutable():
596 if not c.mutable():
597 raise error.ParseError(
597 raise error.ParseError(
598 _("cannot fold into public change %s") % node.short(c.node()))
598 _("cannot fold into public change %s") % node.short(c.node()))
599 base = firstctx.parents()[0]
599 base = firstctx.parents()[0]
600
600
601 # commit a new version of the old changeset, including the update
601 # commit a new version of the old changeset, including the update
602 # collect all files which might be affected
602 # collect all files which might be affected
603 files = set()
603 files = set()
604 for ctx in ctxs:
604 for ctx in ctxs:
605 files.update(ctx.files())
605 files.update(ctx.files())
606
606
607 # Recompute copies (avoid recording a -> b -> a)
607 # Recompute copies (avoid recording a -> b -> a)
608 copied = copies.pathcopies(base, lastctx)
608 copied = copies.pathcopies(base, lastctx)
609
609
610 # prune files which were reverted by the updates
610 # prune files which were reverted by the updates
611 files = [f for f in files if not cmdutil.samefile(f, lastctx, base)]
611 files = [f for f in files if not cmdutil.samefile(f, lastctx, base)]
612 # commit version of these files as defined by head
612 # commit version of these files as defined by head
613 headmf = lastctx.manifest()
613 headmf = lastctx.manifest()
614 def filectxfn(repo, ctx, path):
614 def filectxfn(repo, ctx, path):
615 if path in headmf:
615 if path in headmf:
616 fctx = lastctx[path]
616 fctx = lastctx[path]
617 flags = fctx.flags()
617 flags = fctx.flags()
618 mctx = context.memfilectx(repo, ctx,
618 mctx = context.memfilectx(repo, ctx,
619 fctx.path(), fctx.data(),
619 fctx.path(), fctx.data(),
620 islink='l' in flags,
620 islink='l' in flags,
621 isexec='x' in flags,
621 isexec='x' in flags,
622 copied=copied.get(path))
622 copied=copied.get(path))
623 return mctx
623 return mctx
624 return None
624 return None
625
625
626 if commitopts.get('message'):
626 if commitopts.get('message'):
627 message = commitopts['message']
627 message = commitopts['message']
628 else:
628 else:
629 message = firstctx.description()
629 message = firstctx.description()
630 user = commitopts.get('user')
630 user = commitopts.get('user')
631 date = commitopts.get('date')
631 date = commitopts.get('date')
632 extra = commitopts.get('extra')
632 extra = commitopts.get('extra')
633
633
634 parents = (firstctx.p1().node(), firstctx.p2().node())
634 parents = (firstctx.p1().node(), firstctx.p2().node())
635 editor = None
635 editor = None
636 if not skipprompt:
636 if not skipprompt:
637 editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
637 editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
638 new = context.memctx(repo,
638 new = context.memctx(repo,
639 parents=parents,
639 parents=parents,
640 text=message,
640 text=message,
641 files=files,
641 files=files,
642 filectxfn=filectxfn,
642 filectxfn=filectxfn,
643 user=user,
643 user=user,
644 date=date,
644 date=date,
645 extra=extra,
645 extra=extra,
646 editor=editor)
646 editor=editor)
647 return repo.commitctx(new)
647 return repo.commitctx(new)
648
648
649 def _isdirtywc(repo):
649 def _isdirtywc(repo):
650 return repo[None].dirty(missing=True)
650 return repo[None].dirty(missing=True)
651
651
652 def abortdirty():
652 def abortdirty():
653 raise error.Abort(_('working copy has pending changes'),
653 raise error.Abort(_('working copy has pending changes'),
654 hint=_('amend, commit, or revert them and run histedit '
654 hint=_('amend, commit, or revert them and run histedit '
655 '--continue, or abort with histedit --abort'))
655 '--continue, or abort with histedit --abort'))
656
656
657 def action(verbs, message, priority=False, internal=False):
657 def action(verbs, message, priority=False, internal=False):
658 def wrap(cls):
658 def wrap(cls):
659 assert not priority or not internal
659 assert not priority or not internal
660 verb = verbs[0]
660 verb = verbs[0]
661 if priority:
661 if priority:
662 primaryactions.add(verb)
662 primaryactions.add(verb)
663 elif internal:
663 elif internal:
664 internalactions.add(verb)
664 internalactions.add(verb)
665 elif len(verbs) > 1:
665 elif len(verbs) > 1:
666 secondaryactions.add(verb)
666 secondaryactions.add(verb)
667 else:
667 else:
668 tertiaryactions.add(verb)
668 tertiaryactions.add(verb)
669
669
670 cls.verb = verb
670 cls.verb = verb
671 cls.verbs = verbs
671 cls.verbs = verbs
672 cls.message = message
672 cls.message = message
673 for verb in verbs:
673 for verb in verbs:
674 actiontable[verb] = cls
674 actiontable[verb] = cls
675 return cls
675 return cls
676 return wrap
676 return wrap
677
677
678 @action(['pick', 'p'],
678 @action(['pick', 'p'],
679 _('use commit'),
679 _('use commit'),
680 priority=True)
680 priority=True)
681 class pick(histeditaction):
681 class pick(histeditaction):
682 def run(self):
682 def run(self):
683 rulectx = self.repo[self.node]
683 rulectx = self.repo[self.node]
684 if rulectx.parents()[0].node() == self.state.parentctxnode:
684 if rulectx.parents()[0].node() == self.state.parentctxnode:
685 self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
685 self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
686 return rulectx, []
686 return rulectx, []
687
687
688 return super(pick, self).run()
688 return super(pick, self).run()
689
689
690 @action(['edit', 'e'],
690 @action(['edit', 'e'],
691 _('use commit, but stop for amending'),
691 _('use commit, but stop for amending'),
692 priority=True)
692 priority=True)
693 class edit(histeditaction):
693 class edit(histeditaction):
694 def run(self):
694 def run(self):
695 repo = self.repo
695 repo = self.repo
696 rulectx = repo[self.node]
696 rulectx = repo[self.node]
697 hg.update(repo, self.state.parentctxnode, quietempty=True)
697 hg.update(repo, self.state.parentctxnode, quietempty=True)
698 applychanges(repo.ui, repo, rulectx, {})
698 applychanges(repo.ui, repo, rulectx, {})
699 raise error.InterventionRequired(
699 raise error.InterventionRequired(
700 _('Editing (%s), you may commit or record as needed now.')
700 _('Editing (%s), you may commit or record as needed now.')
701 % node.short(self.node),
701 % node.short(self.node),
702 hint=_('hg histedit --continue to resume'))
702 hint=_('hg histedit --continue to resume'))
703
703
704 def commiteditor(self):
704 def commiteditor(self):
705 return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')
705 return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')
706
706
707 @action(['fold', 'f'],
707 @action(['fold', 'f'],
708 _('use commit, but combine it with the one above'))
708 _('use commit, but combine it with the one above'))
709 class fold(histeditaction):
709 class fold(histeditaction):
710 def verify(self, prev, expected, seen):
710 def verify(self, prev, expected, seen):
711 """ Verifies semantic correctness of the fold rule"""
711 """ Verifies semantic correctness of the fold rule"""
712 super(fold, self).verify(prev, expected, seen)
712 super(fold, self).verify(prev, expected, seen)
713 repo = self.repo
713 repo = self.repo
714 if not prev:
714 if not prev:
715 c = repo[self.node].parents()[0]
715 c = repo[self.node].parents()[0]
716 elif not prev.verb in ('pick', 'base'):
716 elif not prev.verb in ('pick', 'base'):
717 return
717 return
718 else:
718 else:
719 c = repo[prev.node]
719 c = repo[prev.node]
720 if not c.mutable():
720 if not c.mutable():
721 raise error.ParseError(
721 raise error.ParseError(
722 _("cannot fold into public change %s") % node.short(c.node()))
722 _("cannot fold into public change %s") % node.short(c.node()))
723
723
724
724
725 def continuedirty(self):
725 def continuedirty(self):
726 repo = self.repo
726 repo = self.repo
727 rulectx = repo[self.node]
727 rulectx = repo[self.node]
728
728
729 commit = commitfuncfor(repo, rulectx)
729 commit = commitfuncfor(repo, rulectx)
730 commit(text='fold-temp-revision %s' % node.short(self.node),
730 commit(text='fold-temp-revision %s' % node.short(self.node),
731 user=rulectx.user(), date=rulectx.date(),
731 user=rulectx.user(), date=rulectx.date(),
732 extra=rulectx.extra())
732 extra=rulectx.extra())
733
733
734 def continueclean(self):
734 def continueclean(self):
735 repo = self.repo
735 repo = self.repo
736 ctx = repo['.']
736 ctx = repo['.']
737 rulectx = repo[self.node]
737 rulectx = repo[self.node]
738 parentctxnode = self.state.parentctxnode
738 parentctxnode = self.state.parentctxnode
739 if ctx.node() == parentctxnode:
739 if ctx.node() == parentctxnode:
740 repo.ui.warn(_('%s: empty changeset\n') %
740 repo.ui.warn(_('%s: empty changeset\n') %
741 node.short(self.node))
741 node.short(self.node))
742 return ctx, [(self.node, (parentctxnode,))]
742 return ctx, [(self.node, (parentctxnode,))]
743
743
744 parentctx = repo[parentctxnode]
744 parentctx = repo[parentctxnode]
745 newcommits = set(c.node() for c in repo.set('(%d::. - %d)',
745 newcommits = set(c.node() for c in repo.set('(%d::. - %d)',
746 parentctx.rev(),
746 parentctx.rev(),
747 parentctx.rev()))
747 parentctx.rev()))
748 if not newcommits:
748 if not newcommits:
749 repo.ui.warn(_('%s: cannot fold - working copy is not a '
749 repo.ui.warn(_('%s: cannot fold - working copy is not a '
750 'descendant of previous commit %s\n') %
750 'descendant of previous commit %s\n') %
751 (node.short(self.node), node.short(parentctxnode)))
751 (node.short(self.node), node.short(parentctxnode)))
752 return ctx, [(self.node, (ctx.node(),))]
752 return ctx, [(self.node, (ctx.node(),))]
753
753
754 middlecommits = newcommits.copy()
754 middlecommits = newcommits.copy()
755 middlecommits.discard(ctx.node())
755 middlecommits.discard(ctx.node())
756
756
757 return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
757 return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
758 middlecommits)
758 middlecommits)
759
759
760 def skipprompt(self):
760 def skipprompt(self):
761 """Returns true if the rule should skip the message editor.
761 """Returns true if the rule should skip the message editor.
762
762
763 For example, 'fold' wants to show an editor, but 'rollup'
763 For example, 'fold' wants to show an editor, but 'rollup'
764 doesn't want to.
764 doesn't want to.
765 """
765 """
766 return False
766 return False
767
767
768 def mergedescs(self):
768 def mergedescs(self):
769 """Returns true if the rule should merge messages of multiple changes.
769 """Returns true if the rule should merge messages of multiple changes.
770
770
771 This exists mainly so that 'rollup' rules can be a subclass of
771 This exists mainly so that 'rollup' rules can be a subclass of
772 'fold'.
772 'fold'.
773 """
773 """
774 return True
774 return True
775
775
776 def firstdate(self):
776 def firstdate(self):
777 """Returns true if the rule should preserve the date of the first
777 """Returns true if the rule should preserve the date of the first
778 change.
778 change.
779
779
780 This exists mainly so that 'rollup' rules can be a subclass of
780 This exists mainly so that 'rollup' rules can be a subclass of
781 'fold'.
781 'fold'.
782 """
782 """
783 return False
783 return False
784
784
785 def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
785 def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
786 parent = ctx.parents()[0].node()
786 parent = ctx.parents()[0].node()
787 hg.updaterepo(repo, parent, overwrite=False)
787 hg.updaterepo(repo, parent, overwrite=False)
788 ### prepare new commit data
788 ### prepare new commit data
789 commitopts = {}
789 commitopts = {}
790 commitopts['user'] = ctx.user()
790 commitopts['user'] = ctx.user()
791 # commit message
791 # commit message
792 if not self.mergedescs():
792 if not self.mergedescs():
793 newmessage = ctx.description()
793 newmessage = ctx.description()
794 else:
794 else:
795 newmessage = '\n***\n'.join(
795 newmessage = '\n***\n'.join(
796 [ctx.description()] +
796 [ctx.description()] +
797 [repo[r].description() for r in internalchanges] +
797 [repo[r].description() for r in internalchanges] +
798 [oldctx.description()]) + '\n'
798 [oldctx.description()]) + '\n'
799 commitopts['message'] = newmessage
799 commitopts['message'] = newmessage
800 # date
800 # date
801 if self.firstdate():
801 if self.firstdate():
802 commitopts['date'] = ctx.date()
802 commitopts['date'] = ctx.date()
803 else:
803 else:
804 commitopts['date'] = max(ctx.date(), oldctx.date())
804 commitopts['date'] = max(ctx.date(), oldctx.date())
805 extra = ctx.extra().copy()
805 extra = ctx.extra().copy()
806 # histedit_source
806 # histedit_source
807 # note: ctx is likely a temporary commit but that's the best we can do
807 # note: ctx is likely a temporary commit but that's the best we can do
808 # here. This is sufficient to solve issue3681 anyway.
808 # here. This is sufficient to solve issue3681 anyway.
809 extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
809 extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
810 commitopts['extra'] = extra
810 commitopts['extra'] = extra
811 phasemin = max(ctx.phase(), oldctx.phase())
811 phasemin = max(ctx.phase(), oldctx.phase())
812 overrides = {('phases', 'new-commit'): phasemin}
812 overrides = {('phases', 'new-commit'): phasemin}
813 with repo.ui.configoverride(overrides, 'histedit'):
813 with repo.ui.configoverride(overrides, 'histedit'):
814 n = collapse(repo, ctx, repo[newnode], commitopts,
814 n = collapse(repo, ctx, repo[newnode], commitopts,
815 skipprompt=self.skipprompt())
815 skipprompt=self.skipprompt())
816 if n is None:
816 if n is None:
817 return ctx, []
817 return ctx, []
818 hg.updaterepo(repo, n, overwrite=False)
818 hg.updaterepo(repo, n, overwrite=False)
819 replacements = [(oldctx.node(), (newnode,)),
819 replacements = [(oldctx.node(), (newnode,)),
820 (ctx.node(), (n,)),
820 (ctx.node(), (n,)),
821 (newnode, (n,)),
821 (newnode, (n,)),
822 ]
822 ]
823 for ich in internalchanges:
823 for ich in internalchanges:
824 replacements.append((ich, (n,)))
824 replacements.append((ich, (n,)))
825 return repo[n], replacements
825 return repo[n], replacements
826
826
827 @action(['base', 'b'],
827 @action(['base', 'b'],
828 _('checkout changeset and apply further changesets from there'))
828 _('checkout changeset and apply further changesets from there'))
829 class base(histeditaction):
829 class base(histeditaction):
830
830
831 def run(self):
831 def run(self):
832 if self.repo['.'].node() != self.node:
832 if self.repo['.'].node() != self.node:
- 833 mergemod.update(self.repo, self.node, False, True)
- 834 # branchmerge, force)
+ 833 mergemod.update(self.repo, self.node, branchmerge=False, force=True)
835 return self.continueclean()
834 return self.continueclean()
836
835
837 def continuedirty(self):
836 def continuedirty(self):
838 abortdirty()
837 abortdirty()
839
838
840 def continueclean(self):
839 def continueclean(self):
841 basectx = self.repo['.']
840 basectx = self.repo['.']
842 return basectx, []
841 return basectx, []
843
842
844 def _verifynodeconstraints(self, prev, expected, seen):
843 def _verifynodeconstraints(self, prev, expected, seen):
845 # base can only be used with a node not in the edited set
844 # base can only be used with a node not in the edited set
846 if self.node in expected:
845 if self.node in expected:
847 msg = _('%s "%s" changeset was an edited list candidate')
846 msg = _('%s "%s" changeset was an edited list candidate')
848 raise error.ParseError(
847 raise error.ParseError(
849 msg % (self.verb, node.short(self.node)),
848 msg % (self.verb, node.short(self.node)),
850 hint=_('base must only use unlisted changesets'))
849 hint=_('base must only use unlisted changesets'))
851
850
852 @action(['_multifold'],
851 @action(['_multifold'],
853 _(
852 _(
854 """fold subclass used for when multiple folds happen in a row
853 """fold subclass used for when multiple folds happen in a row
855
854
856 We only want to fire the editor for the folded message once when
855 We only want to fire the editor for the folded message once when
857 (say) four changes are folded down into a single change. This is
856 (say) four changes are folded down into a single change. This is
858 similar to rollup, but we should preserve both messages so that
857 similar to rollup, but we should preserve both messages so that
859 when the last fold operation runs we can show the user all the
858 when the last fold operation runs we can show the user all the
860 commit messages in their editor.
859 commit messages in their editor.
861 """),
860 """),
862 internal=True)
861 internal=True)
863 class _multifold(fold):
862 class _multifold(fold):
864 def skipprompt(self):
863 def skipprompt(self):
865 return True
864 return True
866
865
867 @action(["roll", "r"],
866 @action(["roll", "r"],
868 _("like fold, but discard this commit's description and date"))
867 _("like fold, but discard this commit's description and date"))
869 class rollup(fold):
868 class rollup(fold):
870 def mergedescs(self):
869 def mergedescs(self):
871 return False
870 return False
872
871
873 def skipprompt(self):
872 def skipprompt(self):
874 return True
873 return True
875
874
876 def firstdate(self):
875 def firstdate(self):
877 return True
876 return True
878
877
879 @action(["drop", "d"],
878 @action(["drop", "d"],
880 _('remove commit from history'))
879 _('remove commit from history'))
881 class drop(histeditaction):
880 class drop(histeditaction):
882 def run(self):
881 def run(self):
883 parentctx = self.repo[self.state.parentctxnode]
882 parentctx = self.repo[self.state.parentctxnode]
884 return parentctx, [(self.node, tuple())]
883 return parentctx, [(self.node, tuple())]
885
884
886 @action(["mess", "m"],
885 @action(["mess", "m"],
887 _('edit commit message without changing commit content'),
886 _('edit commit message without changing commit content'),
888 priority=True)
887 priority=True)
889 class message(histeditaction):
888 class message(histeditaction):
890 def commiteditor(self):
889 def commiteditor(self):
891 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
890 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
892
891
893 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
892 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
894 """utility function to find the first outgoing changeset
893 """utility function to find the first outgoing changeset
895
894
896 Used by initialization code"""
895 Used by initialization code"""
897 if opts is None:
896 if opts is None:
898 opts = {}
897 opts = {}
899 dest = ui.expandpath(remote or 'default-push', remote or 'default')
898 dest = ui.expandpath(remote or 'default-push', remote or 'default')
900 dest, branches = hg.parseurl(dest, None)[:2]
899 dest, branches = hg.parseurl(dest, None)[:2]
901 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
900 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
902
901
903 revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
902 revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
904 other = hg.peer(repo, opts, dest)
903 other = hg.peer(repo, opts, dest)
905
904
906 if revs:
905 if revs:
907 revs = [repo.lookup(rev) for rev in revs]
906 revs = [repo.lookup(rev) for rev in revs]
908
907
909 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
908 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
910 if not outgoing.missing:
909 if not outgoing.missing:
911 raise error.Abort(_('no outgoing ancestors'))
910 raise error.Abort(_('no outgoing ancestors'))
912 roots = list(repo.revs("roots(%ln)", outgoing.missing))
911 roots = list(repo.revs("roots(%ln)", outgoing.missing))
913 if len(roots) > 1:
912 if len(roots) > 1:
914 msg = _('there are ambiguous outgoing revisions')
913 msg = _('there are ambiguous outgoing revisions')
915 hint = _("see 'hg help histedit' for more detail")
914 hint = _("see 'hg help histedit' for more detail")
916 raise error.Abort(msg, hint=hint)
915 raise error.Abort(msg, hint=hint)
917 return repo[roots[0]].node()
916 return repo[roots[0]].node()
918
917
919 @command('histedit',
918 @command('histedit',
920 [('', 'commands', '',
919 [('', 'commands', '',
921 _('read history edits from the specified file'), _('FILE')),
920 _('read history edits from the specified file'), _('FILE')),
922 ('c', 'continue', False, _('continue an edit already in progress')),
921 ('c', 'continue', False, _('continue an edit already in progress')),
923 ('', 'edit-plan', False, _('edit remaining actions list')),
922 ('', 'edit-plan', False, _('edit remaining actions list')),
924 ('k', 'keep', False,
923 ('k', 'keep', False,
925 _("don't strip old nodes after edit is complete")),
924 _("don't strip old nodes after edit is complete")),
926 ('', 'abort', False, _('abort an edit in progress')),
925 ('', 'abort', False, _('abort an edit in progress')),
927 ('o', 'outgoing', False, _('changesets not found in destination')),
926 ('o', 'outgoing', False, _('changesets not found in destination')),
928 ('f', 'force', False,
927 ('f', 'force', False,
929 _('force outgoing even for unrelated repositories')),
928 _('force outgoing even for unrelated repositories')),
930 ('r', 'rev', [], _('first revision to be edited'), _('REV'))] +
929 ('r', 'rev', [], _('first revision to be edited'), _('REV'))] +
931 cmdutil.formatteropts,
930 cmdutil.formatteropts,
932 _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"),
931 _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"),
933 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
932 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
934 def histedit(ui, repo, *freeargs, **opts):
933 def histedit(ui, repo, *freeargs, **opts):
935 """interactively edit changeset history
934 """interactively edit changeset history
936
935
937 This command lets you edit a linear series of changesets (up to
936 This command lets you edit a linear series of changesets (up to
938 and including the working directory, which should be clean).
937 and including the working directory, which should be clean).
939 You can:
938 You can:
940
939
941 - `pick` to [re]order a changeset
940 - `pick` to [re]order a changeset
942
941
943 - `drop` to omit changeset
942 - `drop` to omit changeset
944
943
945 - `mess` to reword the changeset commit message
944 - `mess` to reword the changeset commit message
946
945
947 - `fold` to combine it with the preceding changeset (using the later date)
946 - `fold` to combine it with the preceding changeset (using the later date)
948
947
949 - `roll` like fold, but discarding this commit's description and date
948 - `roll` like fold, but discarding this commit's description and date
950
949
951 - `edit` to edit this changeset (preserving date)
950 - `edit` to edit this changeset (preserving date)
952
951
953 - `base` to checkout changeset and apply further changesets from there
952 - `base` to checkout changeset and apply further changesets from there
954
953
955 There are a number of ways to select the root changeset:
954 There are a number of ways to select the root changeset:
956
955
957 - Specify ANCESTOR directly
956 - Specify ANCESTOR directly
958
957
959 - Use --outgoing -- it will be the first linear changeset not
958 - Use --outgoing -- it will be the first linear changeset not
960 included in destination. (See :hg:`help config.paths.default-push`)
959 included in destination. (See :hg:`help config.paths.default-push`)
961
960
962 - Otherwise, the value from the "histedit.defaultrev" config option
961 - Otherwise, the value from the "histedit.defaultrev" config option
963 is used as a revset to select the base revision when ANCESTOR is not
962 is used as a revset to select the base revision when ANCESTOR is not
964 specified. The first revision returned by the revset is used. By
963 specified. The first revision returned by the revset is used. By
965 default, this selects the editable history that is unique to the
964 default, this selects the editable history that is unique to the
966 ancestry of the working directory.
965 ancestry of the working directory.
967
966
968 .. container:: verbose
967 .. container:: verbose
969
968
970 If you use --outgoing, this command will abort if there are ambiguous
969 If you use --outgoing, this command will abort if there are ambiguous
971 outgoing revisions, for example if there are multiple branches
970 outgoing revisions, for example if there are multiple branches
972 containing outgoing revisions.
971 containing outgoing revisions.
973
972
974 Use "min(outgoing() and ::.)" or similar revset specification
973 Use "min(outgoing() and ::.)" or similar revset specification
975 instead of --outgoing to specify the edit target revision exactly in
974 instead of --outgoing to specify the edit target revision exactly in
976 such an ambiguous situation. See :hg:`help revsets` for details about
975 such an ambiguous situation. See :hg:`help revsets` for details about
977 selecting revisions.
976 selecting revisions.
978
977
979 .. container:: verbose
978 .. container:: verbose
980
979
981 Examples:
980 Examples:
982
981
983 - A number of changes have been made.
982 - A number of changes have been made.
984 Revision 3 is no longer needed.
983 Revision 3 is no longer needed.
985
984
986 Start history editing from revision 3::
985 Start history editing from revision 3::
987
986
988 hg histedit -r 3
987 hg histedit -r 3
989
988
990 An editor opens, containing the list of revisions,
989 An editor opens, containing the list of revisions,
991 with specific actions specified::
990 with specific actions specified::
992
991
993 pick 5339bf82f0ca 3 Zworgle the foobar
992 pick 5339bf82f0ca 3 Zworgle the foobar
994 pick 8ef592ce7cc4 4 Bedazzle the zerlog
993 pick 8ef592ce7cc4 4 Bedazzle the zerlog
995 pick 0a9639fcda9d 5 Morgify the cromulancy
994 pick 0a9639fcda9d 5 Morgify the cromulancy
996
995
997 Additional information about the possible actions
996 Additional information about the possible actions
998 to take appears below the list of revisions.
997 to take appears below the list of revisions.
999
998
1000 To remove revision 3 from the history,
999 To remove revision 3 from the history,
1001 its action (at the beginning of the relevant line)
1000 its action (at the beginning of the relevant line)
1002 is changed to 'drop'::
1001 is changed to 'drop'::
1003
1002
1004 drop 5339bf82f0ca 3 Zworgle the foobar
1003 drop 5339bf82f0ca 3 Zworgle the foobar
1005 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1004 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1006 pick 0a9639fcda9d 5 Morgify the cromulancy
1005 pick 0a9639fcda9d 5 Morgify the cromulancy
1007
1006
1008 - A number of changes have been made.
1007 - A number of changes have been made.
1009 Revision 2 and 4 need to be swapped.
1008 Revision 2 and 4 need to be swapped.
1010
1009
1011 Start history editing from revision 2::
1010 Start history editing from revision 2::
1012
1011
1013 hg histedit -r 2
1012 hg histedit -r 2
1014
1013
1015 An editor opens, containing the list of revisions,
1014 An editor opens, containing the list of revisions,
1016 with specific actions specified::
1015 with specific actions specified::
1017
1016
1018 pick 252a1af424ad 2 Blorb a morgwazzle
1017 pick 252a1af424ad 2 Blorb a morgwazzle
1019 pick 5339bf82f0ca 3 Zworgle the foobar
1018 pick 5339bf82f0ca 3 Zworgle the foobar
1020 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1019 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1021
1020
1022 To swap revision 2 and 4, its lines are swapped
1021 To swap revision 2 and 4, its lines are swapped
1023 in the editor::
1022 in the editor::
1024
1023
1025 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1024 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1026 pick 5339bf82f0ca 3 Zworgle the foobar
1025 pick 5339bf82f0ca 3 Zworgle the foobar
1027 pick 252a1af424ad 2 Blorb a morgwazzle
1026 pick 252a1af424ad 2 Blorb a morgwazzle
1028
1027
1029 Returns 0 on success, 1 if user intervention is required (not only
1028 Returns 0 on success, 1 if user intervention is required (not only
1030 for intentional "edit" command, but also for resolving unexpected
1029 for intentional "edit" command, but also for resolving unexpected
1031 conflicts).
1030 conflicts).
1032 """
1031 """
1033 state = histeditstate(repo)
1032 state = histeditstate(repo)
1034 try:
1033 try:
1035 state.wlock = repo.wlock()
1034 state.wlock = repo.wlock()
1036 state.lock = repo.lock()
1035 state.lock = repo.lock()
1037 _histedit(ui, repo, state, *freeargs, **opts)
1036 _histedit(ui, repo, state, *freeargs, **opts)
1038 finally:
1037 finally:
1039 release(state.lock, state.wlock)
1038 release(state.lock, state.wlock)
1040
1039
1041 goalcontinue = 'continue'
1040 goalcontinue = 'continue'
1042 goalabort = 'abort'
1041 goalabort = 'abort'
1043 goaleditplan = 'edit-plan'
1042 goaleditplan = 'edit-plan'
1044 goalnew = 'new'
1043 goalnew = 'new'
1045
1044
1046 def _getgoal(opts):
1045 def _getgoal(opts):
1047 if opts.get('continue'):
1046 if opts.get('continue'):
1048 return goalcontinue
1047 return goalcontinue
1049 if opts.get('abort'):
1048 if opts.get('abort'):
1050 return goalabort
1049 return goalabort
1051 if opts.get('edit_plan'):
1050 if opts.get('edit_plan'):
1052 return goaleditplan
1051 return goaleditplan
1053 return goalnew
1052 return goalnew
1054
1053
1055 def _readfile(ui, path):
1054 def _readfile(ui, path):
1056 if path == '-':
1055 if path == '-':
1057 with ui.timeblockedsection('histedit'):
1056 with ui.timeblockedsection('histedit'):
1058 return ui.fin.read()
1057 return ui.fin.read()
1059 else:
1058 else:
1060 with open(path, 'rb') as f:
1059 with open(path, 'rb') as f:
1061 return f.read()
1060 return f.read()
1062
1061
1063 def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
1062 def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
1064 # TODO only abort if we try to histedit mq patches, not just
1063 # TODO only abort if we try to histedit mq patches, not just
1065 # blanket if mq patches are applied somewhere
1064 # blanket if mq patches are applied somewhere
1066 mq = getattr(repo, 'mq', None)
1065 mq = getattr(repo, 'mq', None)
1067 if mq and mq.applied:
1066 if mq and mq.applied:
1068 raise error.Abort(_('source has mq patches applied'))
1067 raise error.Abort(_('source has mq patches applied'))
1069
1068
1070 # basic argument incompatibility processing
1069 # basic argument incompatibility processing
1071 outg = opts.get('outgoing')
1070 outg = opts.get('outgoing')
1072 editplan = opts.get('edit_plan')
1071 editplan = opts.get('edit_plan')
1073 abort = opts.get('abort')
1072 abort = opts.get('abort')
1074 force = opts.get('force')
1073 force = opts.get('force')
1075 if force and not outg:
1074 if force and not outg:
1076 raise error.Abort(_('--force only allowed with --outgoing'))
1075 raise error.Abort(_('--force only allowed with --outgoing'))
1077 if goal == 'continue':
1076 if goal == 'continue':
1078 if any((outg, abort, revs, freeargs, rules, editplan)):
1077 if any((outg, abort, revs, freeargs, rules, editplan)):
1079 raise error.Abort(_('no arguments allowed with --continue'))
1078 raise error.Abort(_('no arguments allowed with --continue'))
1080 elif goal == 'abort':
1079 elif goal == 'abort':
1081 if any((outg, revs, freeargs, rules, editplan)):
1080 if any((outg, revs, freeargs, rules, editplan)):
1082 raise error.Abort(_('no arguments allowed with --abort'))
1081 raise error.Abort(_('no arguments allowed with --abort'))
1083 elif goal == 'edit-plan':
1082 elif goal == 'edit-plan':
1084 if any((outg, revs, freeargs)):
1083 if any((outg, revs, freeargs)):
1085 raise error.Abort(_('only --commands argument allowed with '
1084 raise error.Abort(_('only --commands argument allowed with '
1086 '--edit-plan'))
1085 '--edit-plan'))
1087 else:
1086 else:
1088 if state.inprogress():
1087 if state.inprogress():
1089 raise error.Abort(_('history edit already in progress, try '
1088 raise error.Abort(_('history edit already in progress, try '
1090 '--continue or --abort'))
1089 '--continue or --abort'))
1091 if outg:
1090 if outg:
1092 if revs:
1091 if revs:
1093 raise error.Abort(_('no revisions allowed with --outgoing'))
1092 raise error.Abort(_('no revisions allowed with --outgoing'))
1094 if len(freeargs) > 1:
1093 if len(freeargs) > 1:
1095 raise error.Abort(
1094 raise error.Abort(
1096 _('only one repo argument allowed with --outgoing'))
1095 _('only one repo argument allowed with --outgoing'))
1097 else:
1096 else:
1098 revs.extend(freeargs)
1097 revs.extend(freeargs)
1099 if len(revs) == 0:
1098 if len(revs) == 0:
1100 defaultrev = destutil.desthistedit(ui, repo)
1099 defaultrev = destutil.desthistedit(ui, repo)
1101 if defaultrev is not None:
1100 if defaultrev is not None:
1102 revs.append(defaultrev)
1101 revs.append(defaultrev)
1103
1102
1104 if len(revs) != 1:
1103 if len(revs) != 1:
1105 raise error.Abort(
1104 raise error.Abort(
1106 _('histedit requires exactly one ancestor revision'))
1105 _('histedit requires exactly one ancestor revision'))
1107
1106
1108 def _histedit(ui, repo, state, *freeargs, **opts):
1107 def _histedit(ui, repo, state, *freeargs, **opts):
1109 opts = pycompat.byteskwargs(opts)
1108 opts = pycompat.byteskwargs(opts)
1110 fm = ui.formatter('histedit', opts)
1109 fm = ui.formatter('histedit', opts)
1111 fm.startitem()
1110 fm.startitem()
1112 goal = _getgoal(opts)
1111 goal = _getgoal(opts)
1113 revs = opts.get('rev', [])
1112 revs = opts.get('rev', [])
1114 # experimental config: ui.history-editing-backup
1113 # experimental config: ui.history-editing-backup
1115 nobackup = not ui.configbool('ui', 'history-editing-backup')
1114 nobackup = not ui.configbool('ui', 'history-editing-backup')
1116 rules = opts.get('commands', '')
1115 rules = opts.get('commands', '')
1117 state.keep = opts.get('keep', False)
1116 state.keep = opts.get('keep', False)
1118
1117
1119 _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)
1118 _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)
1120
1119
1121 # rebuild state
1120 # rebuild state
1122 if goal == goalcontinue:
1121 if goal == goalcontinue:
1123 state.read()
1122 state.read()
1124 state = bootstrapcontinue(ui, state, opts)
1123 state = bootstrapcontinue(ui, state, opts)
1125 elif goal == goaleditplan:
1124 elif goal == goaleditplan:
1126 _edithisteditplan(ui, repo, state, rules)
1125 _edithisteditplan(ui, repo, state, rules)
1127 return
1126 return
1128 elif goal == goalabort:
1127 elif goal == goalabort:
1129 _aborthistedit(ui, repo, state, nobackup=nobackup)
1128 _aborthistedit(ui, repo, state, nobackup=nobackup)
1130 return
1129 return
1131 else:
1130 else:
1132 # goal == goalnew
1131 # goal == goalnew
1133 _newhistedit(ui, repo, state, revs, freeargs, opts)
1132 _newhistedit(ui, repo, state, revs, freeargs, opts)
1134
1133
1135 _continuehistedit(ui, repo, state)
1134 _continuehistedit(ui, repo, state)
1136 _finishhistedit(ui, repo, state, fm)
1135 _finishhistedit(ui, repo, state, fm)
1137 fm.end()
1136 fm.end()
1138
1137
1139 def _continuehistedit(ui, repo, state):
1138 def _continuehistedit(ui, repo, state):
1140 """This function runs after either:
1139 """This function runs after either:
1141 - bootstrapcontinue (if the goal is 'continue')
1140 - bootstrapcontinue (if the goal is 'continue')
1142 - _newhistedit (if the goal is 'new')
1141 - _newhistedit (if the goal is 'new')
1143 """
1142 """
1144 # preprocess rules so that we can hide inner folds from the user
1143 # preprocess rules so that we can hide inner folds from the user
1145 # and only show one editor
1144 # and only show one editor
1146 actions = state.actions[:]
1145 actions = state.actions[:]
1147 for idx, (action, nextact) in enumerate(
1146 for idx, (action, nextact) in enumerate(
1148 zip(actions, actions[1:] + [None])):
1147 zip(actions, actions[1:] + [None])):
1149 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1148 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1150 state.actions[idx].__class__ = _multifold
1149 state.actions[idx].__class__ = _multifold
1151
1150
1152 # Force an initial state file write, so the user can run --abort/continue
1151 # Force an initial state file write, so the user can run --abort/continue
1153 # even if there's an exception before the first transaction serialize.
1152 # even if there's an exception before the first transaction serialize.
1154 state.write()
1153 state.write()
1155
1154
1156 tr = None
1155 tr = None
1157 # Don't use singletransaction by default since it rolls the entire
1156 # Don't use singletransaction by default since it rolls the entire
1158 # transaction back if an unexpected exception happens (like a
1157 # transaction back if an unexpected exception happens (like a
1159 # pretxncommit hook throws, or the user aborts the commit msg editor).
1158 # pretxncommit hook throws, or the user aborts the commit msg editor).
1160 if ui.configbool("histedit", "singletransaction"):
1159 if ui.configbool("histedit", "singletransaction"):
1161 # Don't use a 'with' for the transaction, since actions may close
1160 # Don't use a 'with' for the transaction, since actions may close
1162 # and reopen a transaction. For example, if the action executes an
1161 # and reopen a transaction. For example, if the action executes an
1163 # external process it may choose to commit the transaction first.
1162 # external process it may choose to commit the transaction first.
1164 tr = repo.transaction('histedit')
1163 tr = repo.transaction('histedit')
1165 progress = ui.makeprogress(_("editing"), unit=_('changes'),
1164 progress = ui.makeprogress(_("editing"), unit=_('changes'),
1166 total=len(state.actions))
1165 total=len(state.actions))
1167 with progress, util.acceptintervention(tr):
1166 with progress, util.acceptintervention(tr):
1168 while state.actions:
1167 while state.actions:
1169 state.write(tr=tr)
1168 state.write(tr=tr)
1170 actobj = state.actions[0]
1169 actobj = state.actions[0]
1171 progress.increment(item=actobj.torule())
1170 progress.increment(item=actobj.torule())
1172 ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
1171 ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
1173 actobj.torule()))
1172 actobj.torule()))
1174 parentctx, replacement_ = actobj.run()
1173 parentctx, replacement_ = actobj.run()
1175 state.parentctxnode = parentctx.node()
1174 state.parentctxnode = parentctx.node()
1176 state.replacements.extend(replacement_)
1175 state.replacements.extend(replacement_)
1177 state.actions.pop(0)
1176 state.actions.pop(0)
1178
1177
1179 state.write()
1178 state.write()
1180
1179
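The single-transaction mode read above is off by default; a minimal configuration sketch (hgrc syntax, key taken from the configbool call above):

    [histedit]
    singletransaction = True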
1181 def _finishhistedit(ui, repo, state, fm):
1180 def _finishhistedit(ui, repo, state, fm):
1182 """This action runs when histedit is finishing its session"""
1181 """This action runs when histedit is finishing its session"""
1183 hg.updaterepo(repo, state.parentctxnode, overwrite=False)
1182 hg.updaterepo(repo, state.parentctxnode, overwrite=False)
1184
1183
1185 mapping, tmpnodes, created, ntm = processreplacement(state)
1184 mapping, tmpnodes, created, ntm = processreplacement(state)
1186 if mapping:
1185 if mapping:
1187 for prec, succs in mapping.iteritems():
1186 for prec, succs in mapping.iteritems():
1188 if not succs:
1187 if not succs:
1189 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1188 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1190 else:
1189 else:
1191 ui.debug('histedit: %s is replaced by %s\n' % (
1190 ui.debug('histedit: %s is replaced by %s\n' % (
1192 node.short(prec), node.short(succs[0])))
1191 node.short(prec), node.short(succs[0])))
1193 if len(succs) > 1:
1192 if len(succs) > 1:
1194 m = 'histedit: %s'
1193 m = 'histedit: %s'
1195 for n in succs[1:]:
1194 for n in succs[1:]:
1196 ui.debug(m % node.short(n))
1195 ui.debug(m % node.short(n))
1197
1196
1198 if not state.keep:
1197 if not state.keep:
1199 if mapping:
1198 if mapping:
1200 movetopmostbookmarks(repo, state.topmost, ntm)
1199 movetopmostbookmarks(repo, state.topmost, ntm)
1201 # TODO update mq state
1200 # TODO update mq state
1202 else:
1201 else:
1203 mapping = {}
1202 mapping = {}
1204
1203
1205 for n in tmpnodes:
1204 for n in tmpnodes:
1206 if n in repo:
1205 if n in repo:
1207 mapping[n] = ()
1206 mapping[n] = ()
1208
1207
1209 # remove entries about unknown nodes
1208 # remove entries about unknown nodes
1210 nodemap = repo.unfiltered().changelog.nodemap
1209 nodemap = repo.unfiltered().changelog.nodemap
1211 mapping = {k: v for k, v in mapping.items()
1210 mapping = {k: v for k, v in mapping.items()
1212 if k in nodemap and all(n in nodemap for n in v)}
1211 if k in nodemap and all(n in nodemap for n in v)}
1213 scmutil.cleanupnodes(repo, mapping, 'histedit')
1212 scmutil.cleanupnodes(repo, mapping, 'histedit')
1214 hf = fm.hexfunc
1213 hf = fm.hexfunc
1215 fl = fm.formatlist
1214 fl = fm.formatlist
1216 fd = fm.formatdict
1215 fd = fm.formatdict
1217 nodechanges = fd({hf(oldn): fl([hf(n) for n in newn], name='node')
1216 nodechanges = fd({hf(oldn): fl([hf(n) for n in newn], name='node')
1218 for oldn, newn in mapping.iteritems()},
1217 for oldn, newn in mapping.iteritems()},
1219 key="oldnode", value="newnodes")
1218 key="oldnode", value="newnodes")
1220 fm.data(nodechanges=nodechanges)
1219 fm.data(nodechanges=nodechanges)
1221
1220
1222 state.clear()
1221 state.clear()
1223 if os.path.exists(repo.sjoin('undo')):
1222 if os.path.exists(repo.sjoin('undo')):
1224 os.unlink(repo.sjoin('undo'))
1223 os.unlink(repo.sjoin('undo'))
1225 if repo.vfs.exists('histedit-last-edit.txt'):
1224 if repo.vfs.exists('histedit-last-edit.txt'):
1226 repo.vfs.unlink('histedit-last-edit.txt')
1225 repo.vfs.unlink('histedit-last-edit.txt')
1227
1226
1228 def _aborthistedit(ui, repo, state, nobackup=False):
1227 def _aborthistedit(ui, repo, state, nobackup=False):
1229 try:
1228 try:
1230 state.read()
1229 state.read()
1231 __, leafs, tmpnodes, __ = processreplacement(state)
1230 __, leafs, tmpnodes, __ = processreplacement(state)
1232 ui.debug('restore wc to old parent %s\n'
1231 ui.debug('restore wc to old parent %s\n'
1233 % node.short(state.topmost))
1232 % node.short(state.topmost))
1234
1233
1235 # Recover our old commits if necessary
1234 # Recover our old commits if necessary
1236 if not state.topmost in repo and state.backupfile:
1235 if not state.topmost in repo and state.backupfile:
1237 backupfile = repo.vfs.join(state.backupfile)
1236 backupfile = repo.vfs.join(state.backupfile)
1238 f = hg.openpath(ui, backupfile)
1237 f = hg.openpath(ui, backupfile)
1239 gen = exchange.readbundle(ui, f, backupfile)
1238 gen = exchange.readbundle(ui, f, backupfile)
1240 with repo.transaction('histedit.abort') as tr:
1239 with repo.transaction('histedit.abort') as tr:
1241 bundle2.applybundle(repo, gen, tr, source='histedit',
1240 bundle2.applybundle(repo, gen, tr, source='histedit',
1242 url='bundle:' + backupfile)
1241 url='bundle:' + backupfile)
1243
1242
1244 os.remove(backupfile)
1243 os.remove(backupfile)
1245
1244
1246 # check whether we should update away
1245 # check whether we should update away
1247 if repo.unfiltered().revs('parents() and (%n or %ln::)',
1246 if repo.unfiltered().revs('parents() and (%n or %ln::)',
1248 state.parentctxnode, leafs | tmpnodes):
1247 state.parentctxnode, leafs | tmpnodes):
1249 hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
1248 hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
1250 cleanupnode(ui, repo, tmpnodes, nobackup=nobackup)
1249 cleanupnode(ui, repo, tmpnodes, nobackup=nobackup)
1251 cleanupnode(ui, repo, leafs, nobackup=nobackup)
1250 cleanupnode(ui, repo, leafs, nobackup=nobackup)
1252 except Exception:
1251 except Exception:
1253 if state.inprogress():
1252 if state.inprogress():
1254 ui.warn(_('warning: encountered an exception during histedit '
1253 ui.warn(_('warning: encountered an exception during histedit '
1255 '--abort; the repository may not have been completely '
1254 '--abort; the repository may not have been completely '
1256 'cleaned up\n'))
1255 'cleaned up\n'))
1257 raise
1256 raise
1258 finally:
1257 finally:
1259 state.clear()
1258 state.clear()
1260
1259
1261 def _edithisteditplan(ui, repo, state, rules):
1260 def _edithisteditplan(ui, repo, state, rules):
1262 state.read()
1261 state.read()
1263 if not rules:
1262 if not rules:
1264 comment = geteditcomment(ui,
1263 comment = geteditcomment(ui,
1265 node.short(state.parentctxnode),
1264 node.short(state.parentctxnode),
1266 node.short(state.topmost))
1265 node.short(state.topmost))
1267 rules = ruleeditor(repo, ui, state.actions, comment)
1266 rules = ruleeditor(repo, ui, state.actions, comment)
1268 else:
1267 else:
1269 rules = _readfile(ui, rules)
1268 rules = _readfile(ui, rules)
1270 actions = parserules(rules, state)
1269 actions = parserules(rules, state)
1271 ctxs = [repo[act.node] \
1270 ctxs = [repo[act.node] \
1272 for act in state.actions if act.node]
1271 for act in state.actions if act.node]
1273 warnverifyactions(ui, repo, actions, state, ctxs)
1272 warnverifyactions(ui, repo, actions, state, ctxs)
1274 state.actions = actions
1273 state.actions = actions
1275 state.write()
1274 state.write()
1276
1275
1277 def _newhistedit(ui, repo, state, revs, freeargs, opts):
1276 def _newhistedit(ui, repo, state, revs, freeargs, opts):
1278 outg = opts.get('outgoing')
1277 outg = opts.get('outgoing')
1279 rules = opts.get('commands', '')
1278 rules = opts.get('commands', '')
1280 force = opts.get('force')
1279 force = opts.get('force')
1281
1280
1282 cmdutil.checkunfinished(repo)
1281 cmdutil.checkunfinished(repo)
1283 cmdutil.bailifchanged(repo)
1282 cmdutil.bailifchanged(repo)
1284
1283
1285 topmost, empty = repo.dirstate.parents()
1284 topmost, empty = repo.dirstate.parents()
1286 if outg:
1285 if outg:
1287 if freeargs:
1286 if freeargs:
1288 remote = freeargs[0]
1287 remote = freeargs[0]
1289 else:
1288 else:
1290 remote = None
1289 remote = None
1291 root = findoutgoing(ui, repo, remote, force, opts)
1290 root = findoutgoing(ui, repo, remote, force, opts)
1292 else:
1291 else:
1293 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1292 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1294 if len(rr) != 1:
1293 if len(rr) != 1:
1295 raise error.Abort(_('The specified revisions must have '
1294 raise error.Abort(_('The specified revisions must have '
1296 'exactly one common root'))
1295 'exactly one common root'))
1297 root = rr[0].node()
1296 root = rr[0].node()
1298
1297
1299 revs = between(repo, root, topmost, state.keep)
1298 revs = between(repo, root, topmost, state.keep)
1300 if not revs:
1299 if not revs:
1301 raise error.Abort(_('%s is not an ancestor of working directory') %
1300 raise error.Abort(_('%s is not an ancestor of working directory') %
1302 node.short(root))
1301 node.short(root))
1303
1302
1304 ctxs = [repo[r] for r in revs]
1303 ctxs = [repo[r] for r in revs]
1305 if not rules:
1304 if not rules:
1306 comment = geteditcomment(ui, node.short(root), node.short(topmost))
1305 comment = geteditcomment(ui, node.short(root), node.short(topmost))
1307 actions = [pick(state, r) for r in revs]
1306 actions = [pick(state, r) for r in revs]
1308 rules = ruleeditor(repo, ui, actions, comment)
1307 rules = ruleeditor(repo, ui, actions, comment)
1309 else:
1308 else:
1310 rules = _readfile(ui, rules)
1309 rules = _readfile(ui, rules)
1311 actions = parserules(rules, state)
1310 actions = parserules(rules, state)
1312 warnverifyactions(ui, repo, actions, state, ctxs)
1311 warnverifyactions(ui, repo, actions, state, ctxs)
1313
1312
1314 parentctxnode = repo[root].parents()[0].node()
1313 parentctxnode = repo[root].parents()[0].node()
1315
1314
1316 state.parentctxnode = parentctxnode
1315 state.parentctxnode = parentctxnode
1317 state.actions = actions
1316 state.actions = actions
1318 state.topmost = topmost
1317 state.topmost = topmost
1319 state.replacements = []
1318 state.replacements = []
1320
1319
1321 ui.log("histedit", "%d actions to histedit", len(actions),
1320 ui.log("histedit", "%d actions to histedit", len(actions),
1322 histedit_num_actions=len(actions))
1321 histedit_num_actions=len(actions))
1323
1322
1324 # Create a backup so we can always abort completely.
1323 # Create a backup so we can always abort completely.
1325 backupfile = None
1324 backupfile = None
1326 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1325 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1327 backupfile = repair.backupbundle(repo, [parentctxnode],
1326 backupfile = repair.backupbundle(repo, [parentctxnode],
1328 [topmost], root, 'histedit')
1327 [topmost], root, 'histedit')
1329 state.backupfile = backupfile
1328 state.backupfile = backupfile
1330
1329
1331 def _getsummary(ctx):
1330 def _getsummary(ctx):
1332 # a common pattern is to extract the summary but default to the empty
1331 # a common pattern is to extract the summary but default to the empty
1333 # string
1332 # string
1334 summary = ctx.description() or ''
1333 summary = ctx.description() or ''
1335 if summary:
1334 if summary:
1336 summary = summary.splitlines()[0]
1335 summary = summary.splitlines()[0]
1337 return summary
1336 return summary
1338
1337
1339 def bootstrapcontinue(ui, state, opts):
1338 def bootstrapcontinue(ui, state, opts):
1340 repo = state.repo
1339 repo = state.repo
1341
1340
1342 ms = mergemod.mergestate.read(repo)
1341 ms = mergemod.mergestate.read(repo)
1343 mergeutil.checkunresolved(ms)
1342 mergeutil.checkunresolved(ms)
1344
1343
1345 if state.actions:
1344 if state.actions:
1346 actobj = state.actions.pop(0)
1345 actobj = state.actions.pop(0)
1347
1346
1348 if _isdirtywc(repo):
1347 if _isdirtywc(repo):
1349 actobj.continuedirty()
1348 actobj.continuedirty()
1350 if _isdirtywc(repo):
1349 if _isdirtywc(repo):
1351 abortdirty()
1350 abortdirty()
1352
1351
1353 parentctx, replacements = actobj.continueclean()
1352 parentctx, replacements = actobj.continueclean()
1354
1353
1355 state.parentctxnode = parentctx.node()
1354 state.parentctxnode = parentctx.node()
1356 state.replacements.extend(replacements)
1355 state.replacements.extend(replacements)
1357
1356
1358 return state
1357 return state
1359
1358
1360 def between(repo, old, new, keep):
1359 def between(repo, old, new, keep):
1361 """select and validate the set of revision to edit
1360 """select and validate the set of revision to edit
1362
1361
1363 When keep is false, the specified set can't have children."""
1362 When keep is false, the specified set can't have children."""
1364 revs = repo.revs('%n::%n', old, new)
1363 revs = repo.revs('%n::%n', old, new)
1365 if revs and not keep:
1364 if revs and not keep:
1366 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1365 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1367 repo.revs('(%ld::) - (%ld)', revs, revs)):
1366 repo.revs('(%ld::) - (%ld)', revs, revs)):
1368 raise error.Abort(_('can only histedit a changeset together '
1367 raise error.Abort(_('can only histedit a changeset together '
1369 'with all its descendants'))
1368 'with all its descendants'))
1370 if repo.revs('(%ld) and merge()', revs):
1369 if repo.revs('(%ld) and merge()', revs):
1371 raise error.Abort(_('cannot edit history that contains merges'))
1370 raise error.Abort(_('cannot edit history that contains merges'))
1372 root = repo[revs.first()] # list is already sorted by repo.revs()
1371 root = repo[revs.first()] # list is already sorted by repo.revs()
1373 if not root.mutable():
1372 if not root.mutable():
1374 raise error.Abort(_('cannot edit public changeset: %s') % root,
1373 raise error.Abort(_('cannot edit public changeset: %s') % root,
1375 hint=_("see 'hg help phases' for details"))
1374 hint=_("see 'hg help phases' for details"))
1376 return pycompat.maplist(repo.changelog.node, revs)
1375 return pycompat.maplist(repo.changelog.node, revs)
1377
1376
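A minimal sketch of what between() returns, assuming a linear history numbered 0..3 with the working copy at revision 3 (the revision numbers are hypothetical):

    # between(repo, node_of_1, node_of_3, keep=False)
    #   -> nodes of revisions 1, 2 and 3   ('%n::%n')
    # aborts instead if a descendant of 1 falls outside that range (unless
    # unstable commits are allowed), if the range contains a merge, or if
    # revision 1 is public.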
1378 def ruleeditor(repo, ui, actions, editcomment=""):
1377 def ruleeditor(repo, ui, actions, editcomment=""):
1379 """open an editor to edit rules
1378 """open an editor to edit rules
1380
1379
1381 rules are in the format [ [act, ctx], ...] like in state.rules
1380 rules are in the format [ [act, ctx], ...] like in state.rules
1382 """
1381 """
1383 if repo.ui.configbool("experimental", "histedit.autoverb"):
1382 if repo.ui.configbool("experimental", "histedit.autoverb"):
1384 newact = util.sortdict()
1383 newact = util.sortdict()
1385 for act in actions:
1384 for act in actions:
1386 ctx = repo[act.node]
1385 ctx = repo[act.node]
1387 summary = _getsummary(ctx)
1386 summary = _getsummary(ctx)
1388 fword = summary.split(' ', 1)[0].lower()
1387 fword = summary.split(' ', 1)[0].lower()
1389 added = False
1388 added = False
1390
1389
1391 # if it doesn't end with the special character '!' just skip this
1390 # if it doesn't end with the special character '!' just skip this
1392 if fword.endswith('!'):
1391 if fword.endswith('!'):
1393 fword = fword[:-1]
1392 fword = fword[:-1]
1394 if fword in primaryactions | secondaryactions | tertiaryactions:
1393 if fword in primaryactions | secondaryactions | tertiaryactions:
1395 act.verb = fword
1394 act.verb = fword
1396 # get the target summary
1395 # get the target summary
1397 tsum = summary[len(fword) + 1:].lstrip()
1396 tsum = summary[len(fword) + 1:].lstrip()
1398 # safe but slow: reverse iterate over the actions so we
1397 # safe but slow: reverse iterate over the actions so we
1399 # don't clash on two commits having the same summary
1398 # don't clash on two commits having the same summary
1400 for na, l in reversed(list(newact.iteritems())):
1399 for na, l in reversed(list(newact.iteritems())):
1401 actx = repo[na.node]
1400 actx = repo[na.node]
1402 asum = _getsummary(actx)
1401 asum = _getsummary(actx)
1403 if asum == tsum:
1402 if asum == tsum:
1404 added = True
1403 added = True
1405 l.append(act)
1404 l.append(act)
1406 break
1405 break
1407
1406
1408 if not added:
1407 if not added:
1409 newact[act] = []
1408 newact[act] = []
1410
1409
1411 # copy over and flatten the new list
1410 # copy over and flatten the new list
1412 actions = []
1411 actions = []
1413 for na, l in newact.iteritems():
1412 for na, l in newact.iteritems():
1414 actions.append(na)
1413 actions.append(na)
1415 actions += l
1414 actions += l
1416
1415
1417 rules = '\n'.join([act.torule() for act in actions])
1416 rules = '\n'.join([act.torule() for act in actions])
1418 rules += '\n\n'
1417 rules += '\n\n'
1419 rules += editcomment
1418 rules += editcomment
1420 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
1419 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
1421 repopath=repo.path, action='histedit')
1420 repopath=repo.path, action='histedit')
1422
1421
1423 # Save edit rules in .hg/histedit-last-edit.txt in case
1422 # Save edit rules in .hg/histedit-last-edit.txt in case
1424 # the user needs to ask for help after something
1423 # the user needs to ask for help after something
1425 # surprising happens.
1424 # surprising happens.
1426 with repo.vfs('histedit-last-edit.txt', 'wb') as f:
1425 with repo.vfs('histedit-last-edit.txt', 'wb') as f:
1427 f.write(rules)
1426 f.write(rules)
1428
1427
1429 return rules
1428 return rules
1430
1429
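A minimal sketch of the experimental histedit.autoverb rewrite implemented above (the draft commit summary is hypothetical; the target summary reuses one from the help text examples):

    # a draft commit summarized as
    #   fold! Zworgle the foobar
    # gets its verb rewritten to 'fold' and its rule moved to sit directly
    # after the action whose target summary is 'Zworgle the foobar'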
1431 def parserules(rules, state):
1430 def parserules(rules, state):
1432 """Read the histedit rules string and return list of action objects """
1431 """Read the histedit rules string and return list of action objects """
1433 rules = [l for l in (r.strip() for r in rules.splitlines())
1432 rules = [l for l in (r.strip() for r in rules.splitlines())
1434 if l and not l.startswith('#')]
1433 if l and not l.startswith('#')]
1435 actions = []
1434 actions = []
1436 for r in rules:
1435 for r in rules:
1437 if ' ' not in r:
1436 if ' ' not in r:
1438 raise error.ParseError(_('malformed line "%s"') % r)
1437 raise error.ParseError(_('malformed line "%s"') % r)
1439 verb, rest = r.split(' ', 1)
1438 verb, rest = r.split(' ', 1)
1440
1439
1441 if verb not in actiontable:
1440 if verb not in actiontable:
1442 raise error.ParseError(_('unknown action "%s"') % verb)
1441 raise error.ParseError(_('unknown action "%s"') % verb)
1443
1442
1444 action = actiontable[verb].fromrule(state, rest)
1443 action = actiontable[verb].fromrule(state, rest)
1445 actions.append(action)
1444 actions.append(action)
1446 return actions
1445 return actions
1447
1446
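A minimal sketch of the parse step above, reusing hashes from the help text examples (the concrete action classes come from the registered action table):

    # parserules("pick 5339bf82f0ca 3 Zworgle the foobar\n"
    #            "drop 8ef592ce7cc4 4 Bedazzle the zerlog\n", state)
    # is roughly
    #   [pick.fromrule(state, '5339bf82f0ca 3 Zworgle the foobar'),
    #    drop.fromrule(state, '8ef592ce7cc4 4 Bedazzle the zerlog')]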
1448 def warnverifyactions(ui, repo, actions, state, ctxs):
1447 def warnverifyactions(ui, repo, actions, state, ctxs):
1449 try:
1448 try:
1450 verifyactions(actions, state, ctxs)
1449 verifyactions(actions, state, ctxs)
1451 except error.ParseError:
1450 except error.ParseError:
1452 if repo.vfs.exists('histedit-last-edit.txt'):
1451 if repo.vfs.exists('histedit-last-edit.txt'):
1453 ui.warn(_('warning: histedit rules saved '
1452 ui.warn(_('warning: histedit rules saved '
1454 'to: .hg/histedit-last-edit.txt\n'))
1453 'to: .hg/histedit-last-edit.txt\n'))
1455 raise
1454 raise
1456
1455
1457 def verifyactions(actions, state, ctxs):
1456 def verifyactions(actions, state, ctxs):
1458 """Verify that there exists exactly one action per given changeset and
1457 """Verify that there exists exactly one action per given changeset and
1459 other constraints.
1458 other constraints.
1460
1459
1461 Will abort if there are too many or too few rules, a malformed rule,
1460 Will abort if there are too many or too few rules, a malformed rule,
1462 or a rule on a changeset outside of the user-given range.
1461 or a rule on a changeset outside of the user-given range.
1463 """
1462 """
1464 expected = set(c.node() for c in ctxs)
1463 expected = set(c.node() for c in ctxs)
1465 seen = set()
1464 seen = set()
1466 prev = None
1465 prev = None
1467
1466
1468 if actions and actions[0].verb in ['roll', 'fold']:
1467 if actions and actions[0].verb in ['roll', 'fold']:
1469 raise error.ParseError(_('first changeset cannot use verb "%s"') %
1468 raise error.ParseError(_('first changeset cannot use verb "%s"') %
1470 actions[0].verb)
1469 actions[0].verb)
1471
1470
1472 for action in actions:
1471 for action in actions:
1473 action.verify(prev, expected, seen)
1472 action.verify(prev, expected, seen)
1474 prev = action
1473 prev = action
1475 if action.node is not None:
1474 if action.node is not None:
1476 seen.add(action.node)
1475 seen.add(action.node)
1477 missing = sorted(expected - seen) # sort to stabilize output
1476 missing = sorted(expected - seen) # sort to stabilize output
1478
1477
1479 if state.repo.ui.configbool('histedit', 'dropmissing'):
1478 if state.repo.ui.configbool('histedit', 'dropmissing'):
1480 if len(actions) == 0:
1479 if len(actions) == 0:
1481 raise error.ParseError(_('no rules provided'),
1480 raise error.ParseError(_('no rules provided'),
1482 hint=_('use strip extension to remove commits'))
1481 hint=_('use strip extension to remove commits'))
1483
1482
1484 drops = [drop(state, n) for n in missing]
1483 drops = [drop(state, n) for n in missing]
1485 # put them at the beginning so they execute immediately and
1484 # put them at the beginning so they execute immediately and
1486 # don't show in the edit-plan in the future
1485 # don't show in the edit-plan in the future
1487 actions[:0] = drops
1486 actions[:0] = drops
1488 elif missing:
1487 elif missing:
1489 raise error.ParseError(_('missing rules for changeset %s') %
1488 raise error.ParseError(_('missing rules for changeset %s') %
1490 node.short(missing[0]),
1489 node.short(missing[0]),
1491 hint=_('use "drop %s" to discard, see also: '
1490 hint=_('use "drop %s" to discard, see also: '
1492 "'hg help -e histedit.config'")
1491 "'hg help -e histedit.config'")
1493 % node.short(missing[0]))
1492 % node.short(missing[0]))
1494
1493
1495 def adjustreplacementsfrommarkers(repo, oldreplacements):
1494 def adjustreplacementsfrommarkers(repo, oldreplacements):
1496 """Adjust replacements from obsolescence markers
1495 """Adjust replacements from obsolescence markers
1497
1496
1498 Replacements structure is originally generated based on
1497 Replacements structure is originally generated based on
1499 histedit's state and does not account for changes that are
1498 histedit's state and does not account for changes that are
1500 not recorded there. This function fixes that by adding
1499 not recorded there. This function fixes that by adding
1501 data read from obsolescence markers"""
1500 data read from obsolescence markers"""
1502 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1501 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1503 return oldreplacements
1502 return oldreplacements
1504
1503
1505 unfi = repo.unfiltered()
1504 unfi = repo.unfiltered()
1506 nm = unfi.changelog.nodemap
1505 nm = unfi.changelog.nodemap
1507 obsstore = repo.obsstore
1506 obsstore = repo.obsstore
1508 newreplacements = list(oldreplacements)
1507 newreplacements = list(oldreplacements)
1509 oldsuccs = [r[1] for r in oldreplacements]
1508 oldsuccs = [r[1] for r in oldreplacements]
1510 # successors that have already been added to succstocheck once
1509 # successors that have already been added to succstocheck once
1511 seensuccs = set().union(*oldsuccs) # create a set from an iterable of tuples
1510 seensuccs = set().union(*oldsuccs) # create a set from an iterable of tuples
1512 succstocheck = list(seensuccs)
1511 succstocheck = list(seensuccs)
1513 while succstocheck:
1512 while succstocheck:
1514 n = succstocheck.pop()
1513 n = succstocheck.pop()
1515 missing = nm.get(n) is None
1514 missing = nm.get(n) is None
1516 markers = obsstore.successors.get(n, ())
1515 markers = obsstore.successors.get(n, ())
1517 if missing and not markers:
1516 if missing and not markers:
1518 # dead end, mark it as such
1517 # dead end, mark it as such
1519 newreplacements.append((n, ()))
1518 newreplacements.append((n, ()))
1520 for marker in markers:
1519 for marker in markers:
1521 nsuccs = marker[1]
1520 nsuccs = marker[1]
1522 newreplacements.append((n, nsuccs))
1521 newreplacements.append((n, nsuccs))
1523 for nsucc in nsuccs:
1522 for nsucc in nsuccs:
1524 if nsucc not in seensuccs:
1523 if nsucc not in seensuccs:
1525 seensuccs.add(nsucc)
1524 seensuccs.add(nsucc)
1526 succstocheck.append(nsucc)
1525 succstocheck.append(nsucc)
1527
1526
1528 return newreplacements
1527 return newreplacements
1529
1528
1530 def processreplacement(state):
1529 def processreplacement(state):
1531 """process the list of replacements to return
1530 """process the list of replacements to return
1532
1531
1533 1) the final mapping between original and created nodes
1532 1) the final mapping between original and created nodes
1534 2) the list of temporary nodes created by histedit
1533 2) the list of temporary nodes created by histedit
1535 3) the list of new commits created by histedit"""
1534 3) the list of new commits created by histedit"""
1536 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
1535 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
1537 allsuccs = set()
1536 allsuccs = set()
1538 replaced = set()
1537 replaced = set()
1539 fullmapping = {}
1538 fullmapping = {}
1540 # initialize basic set
1539 # initialize basic set
1541 # fullmapping records all operations recorded in replacement
1540 # fullmapping records all operations recorded in replacement
1542 for rep in replacements:
1541 for rep in replacements:
1543 allsuccs.update(rep[1])
1542 allsuccs.update(rep[1])
1544 replaced.add(rep[0])
1543 replaced.add(rep[0])
1545 fullmapping.setdefault(rep[0], set()).update(rep[1])
1544 fullmapping.setdefault(rep[0], set()).update(rep[1])
1546 new = allsuccs - replaced
1545 new = allsuccs - replaced
1547 tmpnodes = allsuccs & replaced
1546 tmpnodes = allsuccs & replaced
1548 # Reduce content fullmapping into direct relation between original nodes
1547 # Reduce content fullmapping into direct relation between original nodes
1549 # and final nodes created during history editing
1548 # and final nodes created during history editing
1550 # Dropped changesets are replaced by an empty list
1549 # Dropped changesets are replaced by an empty list
1551 toproceed = set(fullmapping)
1550 toproceed = set(fullmapping)
1552 final = {}
1551 final = {}
1553 while toproceed:
1552 while toproceed:
1554 for x in list(toproceed):
1553 for x in list(toproceed):
1555 succs = fullmapping[x]
1554 succs = fullmapping[x]
1556 for s in list(succs):
1555 for s in list(succs):
1557 if s in toproceed:
1556 if s in toproceed:
1558 # non final node with unknown closure
1557 # non final node with unknown closure
1559 # We can't process this now
1558 # We can't process this now
1560 break
1559 break
1561 elif s in final:
1560 elif s in final:
1562 # non final node, replace with closure
1561 # non final node, replace with closure
1563 succs.remove(s)
1562 succs.remove(s)
1564 succs.update(final[s])
1563 succs.update(final[s])
1565 else:
1564 else:
1566 final[x] = succs
1565 final[x] = succs
1567 toproceed.remove(x)
1566 toproceed.remove(x)
1568 # remove tmpnodes from final mapping
1567 # remove tmpnodes from final mapping
1569 for n in tmpnodes:
1568 for n in tmpnodes:
1570 del final[n]
1569 del final[n]
1571 # we expect all changes involved in final to exist in the repo
1570 # we expect all changes involved in final to exist in the repo
1572 # turn `final` into list (topologically sorted)
1571 # turn `final` into list (topologically sorted)
1573 nm = state.repo.changelog.nodemap
1572 nm = state.repo.changelog.nodemap
1574 for prec, succs in final.items():
1573 for prec, succs in final.items():
1575 final[prec] = sorted(succs, key=nm.get)
1574 final[prec] = sorted(succs, key=nm.get)
1576
1575
1577 # computed topmost element (necessary for bookmark)
1576 # computed topmost element (necessary for bookmark)
1578 if new:
1577 if new:
1579 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1578 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1580 elif not final:
1579 elif not final:
1581 # Nothing rewritten at all. We won't need `newtopmost`
1580 # Nothing rewritten at all. We won't need `newtopmost`
1582 # It is the same as `oldtopmost` and `processreplacement` knows it
1581 # It is the same as `oldtopmost` and `processreplacement` knows it
1583 newtopmost = None
1582 newtopmost = None
1584 else:
1583 else:
1585 # everybody died. The newtopmost is the parent of the root.
1584 # everybody died. The newtopmost is the parent of the root.
1586 r = state.repo.changelog.rev
1585 r = state.repo.changelog.rev
1587 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1586 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1588
1587
1589 return final, tmpnodes, new, newtopmost
1588 return final, tmpnodes, new, newtopmost
1590
1589
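A small worked example of the reduction above, using the replacement triples a fold produces (A, B, T and N are hypothetical nodes; T is the temporary fold commit, N the final one):

    # replacements = [(A, (T,)), (B, (N,)), (T, (N,))]
    # allsuccs = {T, N}   replaced = {A, B, T}
    # new      = {N}      (allsuccs - replaced)
    # tmpnodes = {T}      (allsuccs & replaced)
    # final    = {A: [N], B: [N]}   after collapsing T and deleting tmpnodes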
1591 def movetopmostbookmarks(repo, oldtopmost, newtopmost):
1590 def movetopmostbookmarks(repo, oldtopmost, newtopmost):
1592 """Move bookmark from oldtopmost to newly created topmost
1591 """Move bookmark from oldtopmost to newly created topmost
1593
1592
1594 This is arguably a feature and we may only want that for the active
1593 This is arguably a feature and we may only want that for the active
1595 bookmark. But the behavior is kept compatible with the old version for now.
1594 bookmark. But the behavior is kept compatible with the old version for now.
1596 """
1595 """
1597 if not oldtopmost or not newtopmost:
1596 if not oldtopmost or not newtopmost:
1598 return
1597 return
1599 oldbmarks = repo.nodebookmarks(oldtopmost)
1598 oldbmarks = repo.nodebookmarks(oldtopmost)
1600 if oldbmarks:
1599 if oldbmarks:
1601 with repo.lock(), repo.transaction('histedit') as tr:
1600 with repo.lock(), repo.transaction('histedit') as tr:
1602 marks = repo._bookmarks
1601 marks = repo._bookmarks
1603 changes = []
1602 changes = []
1604 for name in oldbmarks:
1603 for name in oldbmarks:
1605 changes.append((name, newtopmost))
1604 changes.append((name, newtopmost))
1606 marks.applychanges(repo, tr, changes)
1605 marks.applychanges(repo, tr, changes)
1607
1606
1608 def cleanupnode(ui, repo, nodes, nobackup=False):
1607 def cleanupnode(ui, repo, nodes, nobackup=False):
1609 """strip a group of nodes from the repository
1608 """strip a group of nodes from the repository
1610
1609
1611 The set of nodes to strip may contain unknown nodes."""
1610 The set of nodes to strip may contain unknown nodes."""
1612 with repo.lock():
1611 with repo.lock():
1613 # do not let filtering get in the way of the cleanse
1612 # do not let filtering get in the way of the cleanse
1614 # we should probably get rid of obsolescence marker created during the
1613 # we should probably get rid of obsolescence marker created during the
1615 # histedit, but we currently do not have such information.
1614 # histedit, but we currently do not have such information.
1616 repo = repo.unfiltered()
1615 repo = repo.unfiltered()
1617 # Find all nodes that need to be stripped
1616 # Find all nodes that need to be stripped
1618 # (we use %lr instead of %ln to silently ignore unknown items)
1617 # (we use %lr instead of %ln to silently ignore unknown items)
1619 nm = repo.changelog.nodemap
1618 nm = repo.changelog.nodemap
1620 nodes = sorted(n for n in nodes if n in nm)
1619 nodes = sorted(n for n in nodes if n in nm)
1621 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1620 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1622 if roots:
1621 if roots:
1623 backup = not nobackup
1622 backup = not nobackup
1624 repair.strip(ui, repo, roots, backup=backup)
1623 repair.strip(ui, repo, roots, backup=backup)
1625
1624
1626 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1625 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1627 if isinstance(nodelist, str):
1626 if isinstance(nodelist, str):
1628 nodelist = [nodelist]
1627 nodelist = [nodelist]
1629 state = histeditstate(repo)
1628 state = histeditstate(repo)
1630 if state.inprogress():
1629 if state.inprogress():
1631 state.read()
1630 state.read()
1632 histedit_nodes = {action.node for action
1631 histedit_nodes = {action.node for action
1633 in state.actions if action.node}
1632 in state.actions if action.node}
1634 common_nodes = histedit_nodes & set(nodelist)
1633 common_nodes = histedit_nodes & set(nodelist)
1635 if common_nodes:
1634 if common_nodes:
1636 raise error.Abort(_("histedit in progress, can't strip %s")
1635 raise error.Abort(_("histedit in progress, can't strip %s")
1637 % ', '.join(node.short(x) for x in common_nodes))
1636 % ', '.join(node.short(x) for x in common_nodes))
1638 return orig(ui, repo, nodelist, *args, **kwargs)
1637 return orig(ui, repo, nodelist, *args, **kwargs)
1639
1638
1640 extensions.wrapfunction(repair, 'strip', stripwrapper)
1639 extensions.wrapfunction(repair, 'strip', stripwrapper)
1641
1640
1642 def summaryhook(ui, repo):
1641 def summaryhook(ui, repo):
1643 state = histeditstate(repo)
1642 state = histeditstate(repo)
1644 if not state.inprogress():
1643 if not state.inprogress():
1645 return
1644 return
1646 state.read()
1645 state.read()
1647 if state.actions:
1646 if state.actions:
1648 # i18n: column positioning for "hg summary"
1647 # i18n: column positioning for "hg summary"
1649 ui.write(_('hist: %s (histedit --continue)\n') %
1648 ui.write(_('hist: %s (histedit --continue)\n') %
1650 (ui.label(_('%d remaining'), 'histedit.remaining') %
1649 (ui.label(_('%d remaining'), 'histedit.remaining') %
1651 len(state.actions)))
1650 len(state.actions)))
1652
1651
1653 def extsetup(ui):
1652 def extsetup(ui):
1654 cmdutil.summaryhooks.add('histedit', summaryhook)
1653 cmdutil.summaryhooks.add('histedit', summaryhook)
1655 cmdutil.unfinishedstates.append(
1654 cmdutil.unfinishedstates.append(
1656 ['histedit-state', False, True, _('histedit in progress'),
1655 ['histedit-state', False, True, _('histedit in progress'),
1657 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1656 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1658 cmdutil.afterresolvedstates.append(
1657 cmdutil.afterresolvedstates.append(
1659 ['histedit-state', _('hg histedit --continue')])
1658 ['histedit-state', _('hg histedit --continue')])
@@ -1,1947 +1,1948 @@
1 # rebase.py - rebasing feature for mercurial
1 # rebase.py - rebasing feature for mercurial
2 #
2 #
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to move sets of revisions to a different ancestor
8 '''command to move sets of revisions to a different ancestor
9
9
10 This extension lets you rebase changesets in an existing Mercurial
10 This extension lets you rebase changesets in an existing Mercurial
11 repository.
11 repository.
12
12
13 For more information:
13 For more information:
14 https://mercurial-scm.org/wiki/RebaseExtension
14 https://mercurial-scm.org/wiki/RebaseExtension
15 '''
15 '''
16
16
17 from __future__ import absolute_import
17 from __future__ import absolute_import
18
18
19 import errno
19 import errno
20 import os
20 import os
21
21
22 from mercurial.i18n import _
22 from mercurial.i18n import _
23 from mercurial.node import (
23 from mercurial.node import (
24 nullrev,
24 nullrev,
25 short,
25 short,
26 )
26 )
27 from mercurial import (
27 from mercurial import (
28 bookmarks,
28 bookmarks,
29 cmdutil,
29 cmdutil,
30 commands,
30 commands,
31 copies,
31 copies,
32 destutil,
32 destutil,
33 dirstateguard,
33 dirstateguard,
34 error,
34 error,
35 extensions,
35 extensions,
36 hg,
36 hg,
37 merge as mergemod,
37 merge as mergemod,
38 mergeutil,
38 mergeutil,
39 obsolete,
39 obsolete,
40 obsutil,
40 obsutil,
41 patch,
41 patch,
42 phases,
42 phases,
43 pycompat,
43 pycompat,
44 registrar,
44 registrar,
45 repair,
45 repair,
46 revset,
46 revset,
47 revsetlang,
47 revsetlang,
48 scmutil,
48 scmutil,
49 smartset,
49 smartset,
50 state as statemod,
50 state as statemod,
51 util,
51 util,
52 )
52 )
53
53
54 # The following constants are used throughout the rebase module. The ordering of
54 # The following constants are used throughout the rebase module. The ordering of
55 # their values must be maintained.
55 # their values must be maintained.
56
56
57 # Indicates that a revision needs to be rebased
57 # Indicates that a revision needs to be rebased
58 revtodo = -1
58 revtodo = -1
59 revtodostr = '-1'
59 revtodostr = '-1'
60
60
61 # legacy revstates no longer needed in current code
61 # legacy revstates no longer needed in current code
62 # -2: nullmerge, -3: revignored, -4: revprecursor, -5: revpruned
62 # -2: nullmerge, -3: revignored, -4: revprecursor, -5: revpruned
63 legacystates = {'-2', '-3', '-4', '-5'}
63 legacystates = {'-2', '-3', '-4', '-5'}
64
64
65 cmdtable = {}
65 cmdtable = {}
66 command = registrar.command(cmdtable)
66 command = registrar.command(cmdtable)
67 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
67 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
68 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
68 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
69 # specify the version(s) of Mercurial they are tested with, or
69 # specify the version(s) of Mercurial they are tested with, or
70 # leave the attribute unspecified.
70 # leave the attribute unspecified.
71 testedwith = 'ships-with-hg-core'
71 testedwith = 'ships-with-hg-core'
72
72
73 def _nothingtorebase():
73 def _nothingtorebase():
74 return 1
74 return 1
75
75
76 def _savegraft(ctx, extra):
76 def _savegraft(ctx, extra):
77 s = ctx.extra().get('source', None)
77 s = ctx.extra().get('source', None)
78 if s is not None:
78 if s is not None:
79 extra['source'] = s
79 extra['source'] = s
80 s = ctx.extra().get('intermediate-source', None)
80 s = ctx.extra().get('intermediate-source', None)
81 if s is not None:
81 if s is not None:
82 extra['intermediate-source'] = s
82 extra['intermediate-source'] = s
83
83
84 def _savebranch(ctx, extra):
84 def _savebranch(ctx, extra):
85 extra['branch'] = ctx.branch()
85 extra['branch'] = ctx.branch()
86
86
87 def _destrebase(repo, sourceset, destspace=None):
87 def _destrebase(repo, sourceset, destspace=None):
88 """small wrapper around destmerge to pass the right extra args
88 """small wrapper around destmerge to pass the right extra args
89
89
90 Please wrap destutil.destmerge instead."""
90 Please wrap destutil.destmerge instead."""
91 return destutil.destmerge(repo, action='rebase', sourceset=sourceset,
91 return destutil.destmerge(repo, action='rebase', sourceset=sourceset,
92 onheadcheck=False, destspace=destspace)
92 onheadcheck=False, destspace=destspace)
93
93
94 revsetpredicate = registrar.revsetpredicate()
94 revsetpredicate = registrar.revsetpredicate()
95
95
96 @revsetpredicate('_destrebase')
96 @revsetpredicate('_destrebase')
97 def _revsetdestrebase(repo, subset, x):
97 def _revsetdestrebase(repo, subset, x):
98 # ``_rebasedefaultdest()``
98 # ``_rebasedefaultdest()``
99
99
100 # default destination for rebase.
100 # default destination for rebase.
101 # # XXX: Currently private because I expect the signature to change.
101 # # XXX: Currently private because I expect the signature to change.
102 # # XXX: - bailing out in case of ambiguity vs returning all data.
102 # # XXX: - bailing out in case of ambiguity vs returning all data.
103 # i18n: "_rebasedefaultdest" is a keyword
103 # i18n: "_rebasedefaultdest" is a keyword
104 sourceset = None
104 sourceset = None
105 if x is not None:
105 if x is not None:
106 sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
106 sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
107 return subset & smartset.baseset([_destrebase(repo, sourceset)])
107 return subset & smartset.baseset([_destrebase(repo, sourceset)])
108
108
109 @revsetpredicate('_destautoorphanrebase')
109 @revsetpredicate('_destautoorphanrebase')
110 def _revsetdestautoorphanrebase(repo, subset, x):
110 def _revsetdestautoorphanrebase(repo, subset, x):
111 """automatic rebase destination for a single orphan revision"""
111 """automatic rebase destination for a single orphan revision"""
112 unfi = repo.unfiltered()
112 unfi = repo.unfiltered()
113 obsoleted = unfi.revs('obsolete()')
113 obsoleted = unfi.revs('obsolete()')
114
114
115 src = revset.getset(repo, subset, x).first()
115 src = revset.getset(repo, subset, x).first()
116
116
117 # Empty src or already obsoleted - Do not return a destination
117 # Empty src or already obsoleted - Do not return a destination
118 if not src or src in obsoleted:
118 if not src or src in obsoleted:
119 return smartset.baseset()
119 return smartset.baseset()
120 dests = destutil.orphanpossibledestination(repo, src)
120 dests = destutil.orphanpossibledestination(repo, src)
121 if len(dests) > 1:
121 if len(dests) > 1:
122 raise error.Abort(
122 raise error.Abort(
123 _("ambiguous automatic rebase: %r could end up on any of %r") % (
123 _("ambiguous automatic rebase: %r could end up on any of %r") % (
124 src, dests))
124 src, dests))
125 # We have zero or one destination, so we can just return here.
125 # We have zero or one destination, so we can just return here.
126 return smartset.baseset(dests)
126 return smartset.baseset(dests)
127
127
128 def _ctxdesc(ctx):
128 def _ctxdesc(ctx):
129 """short description for a context"""
129 """short description for a context"""
130 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
130 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
131 ctx.description().split('\n', 1)[0])
131 ctx.description().split('\n', 1)[0])
132 repo = ctx.repo()
132 repo = ctx.repo()
133 names = []
133 names = []
134 for nsname, ns in repo.names.iteritems():
134 for nsname, ns in repo.names.iteritems():
135 if nsname == 'branches':
135 if nsname == 'branches':
136 continue
136 continue
137 names.extend(ns.names(repo, ctx.node()))
137 names.extend(ns.names(repo, ctx.node()))
138 if names:
138 if names:
139 desc += ' (%s)' % ' '.join(names)
139 desc += ' (%s)' % ' '.join(names)
140 return desc
140 return desc
141
141
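# _ctxdesc() yields strings of the form '<rev>:<short hash> "<first line of
# description>" (<names>)'; for example (revision, hash, and bookmark below
# are purely hypothetical):
#   5:9f8e7d6c5b4a "fix frobnicator edge case" (feature-bookmark)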
142 class rebaseruntime(object):
142 class rebaseruntime(object):
143 """This class is a container for rebase runtime state"""
143 """This class is a container for rebase runtime state"""
144 def __init__(self, repo, ui, inmemory=False, opts=None):
144 def __init__(self, repo, ui, inmemory=False, opts=None):
145 if opts is None:
145 if opts is None:
146 opts = {}
146 opts = {}
147
147
148 # prepared: whether we have rebasestate prepared or not. Currently it
148 # prepared: whether we have rebasestate prepared or not. Currently it
149 # decides whether "self.repo" is unfiltered or not.
149 # decides whether "self.repo" is unfiltered or not.
150 # The rebasestate has explicit hash to hash instructions not depending
150 # The rebasestate has explicit hash to hash instructions not depending
151 # on visibility. If rebasestate exists (in-memory or on-disk), use
151 # on visibility. If rebasestate exists (in-memory or on-disk), use
152 # unfiltered repo to avoid visibility issues.
152 # unfiltered repo to avoid visibility issues.
153 # Before knowing rebasestate (i.e. when starting a new rebase (not
153 # Before knowing rebasestate (i.e. when starting a new rebase (not
154 # --continue or --abort)), the original repo should be used so
154 # --continue or --abort)), the original repo should be used so
155 # visibility-dependent revsets are correct.
155 # visibility-dependent revsets are correct.
156 self.prepared = False
156 self.prepared = False
157 self._repo = repo
157 self._repo = repo
158
158
159 self.ui = ui
159 self.ui = ui
160 self.opts = opts
160 self.opts = opts
161 self.originalwd = None
161 self.originalwd = None
162 self.external = nullrev
162 self.external = nullrev
163 # Mapping from each old revision id to either its new rebased revision
163 # Mapping from each old revision id to either its new rebased revision
164 # or what needs to be done with that old revision. This state dict
164 # or what needs to be done with that old revision. This state dict
165 # holds most of the rebase progress state.
165 # holds most of the rebase progress state.
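# A hypothetical snapshot of this mapping part-way through a rebase (revision
# numbers invented for illustration): {12: -1, 14: 27}, i.e. rev 12 is still
# marked revtodo (-1) while rev 14 has already been rebased as rev 27.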
166 self.state = {}
166 self.state = {}
167 self.activebookmark = None
167 self.activebookmark = None
168 self.destmap = {}
168 self.destmap = {}
169 self.skipped = set()
169 self.skipped = set()
170
170
171 self.collapsef = opts.get('collapse', False)
171 self.collapsef = opts.get('collapse', False)
172 self.collapsemsg = cmdutil.logmessage(ui, opts)
172 self.collapsemsg = cmdutil.logmessage(ui, opts)
173 self.date = opts.get('date', None)
173 self.date = opts.get('date', None)
174
174
175 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
175 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
176 self.extrafns = [_savegraft]
176 self.extrafns = [_savegraft]
177 if e:
177 if e:
178 self.extrafns = [e]
178 self.extrafns = [e]
179
179
180 self.backupf = ui.configbool('ui', 'history-editing-backup')
180 self.backupf = ui.configbool('ui', 'history-editing-backup')
181 self.keepf = opts.get('keep', False)
181 self.keepf = opts.get('keep', False)
182 self.keepbranchesf = opts.get('keepbranches', False)
182 self.keepbranchesf = opts.get('keepbranches', False)
183 self.obsoletenotrebased = {}
183 self.obsoletenotrebased = {}
184 self.obsoletewithoutsuccessorindestination = set()
184 self.obsoletewithoutsuccessorindestination = set()
185 self.inmemory = inmemory
185 self.inmemory = inmemory
186 self.stateobj = statemod.cmdstate(repo, 'rebasestate')
186 self.stateobj = statemod.cmdstate(repo, 'rebasestate')
187
187
188 @property
188 @property
189 def repo(self):
189 def repo(self):
190 if self.prepared:
190 if self.prepared:
191 return self._repo.unfiltered()
191 return self._repo.unfiltered()
192 else:
192 else:
193 return self._repo
193 return self._repo
194
194
195 def storestatus(self, tr=None):
195 def storestatus(self, tr=None):
196 """Store the current status to allow recovery"""
196 """Store the current status to allow recovery"""
197 if tr:
197 if tr:
198 tr.addfilegenerator('rebasestate', ('rebasestate',),
198 tr.addfilegenerator('rebasestate', ('rebasestate',),
199 self._writestatus, location='plain')
199 self._writestatus, location='plain')
200 else:
200 else:
201 with self.repo.vfs("rebasestate", "w") as f:
201 with self.repo.vfs("rebasestate", "w") as f:
202 self._writestatus(f)
202 self._writestatus(f)
203
203
204 def _writestatus(self, f):
204 def _writestatus(self, f):
205 repo = self.repo
205 repo = self.repo
206 assert repo.filtername is None
206 assert repo.filtername is None
207 f.write(repo[self.originalwd].hex() + '\n')
207 f.write(repo[self.originalwd].hex() + '\n')
208 # was "dest". we now write dest per src root below.
208 # was "dest". we now write dest per src root below.
209 f.write('\n')
209 f.write('\n')
210 f.write(repo[self.external].hex() + '\n')
210 f.write(repo[self.external].hex() + '\n')
211 f.write('%d\n' % int(self.collapsef))
211 f.write('%d\n' % int(self.collapsef))
212 f.write('%d\n' % int(self.keepf))
212 f.write('%d\n' % int(self.keepf))
213 f.write('%d\n' % int(self.keepbranchesf))
213 f.write('%d\n' % int(self.keepbranchesf))
214 f.write('%s\n' % (self.activebookmark or ''))
214 f.write('%s\n' % (self.activebookmark or ''))
215 destmap = self.destmap
215 destmap = self.destmap
216 for d, v in self.state.iteritems():
216 for d, v in self.state.iteritems():
217 oldrev = repo[d].hex()
217 oldrev = repo[d].hex()
218 if v >= 0:
218 if v >= 0:
219 newrev = repo[v].hex()
219 newrev = repo[v].hex()
220 else:
220 else:
221 newrev = "%d" % v
221 newrev = "%d" % v
222 destnode = repo[destmap[d]].hex()
222 destnode = repo[destmap[d]].hex()
223 f.write("%s:%s:%s\n" % (oldrev, newrev, destnode))
223 f.write("%s:%s:%s\n" % (oldrev, newrev, destnode))
224 repo.ui.debug('rebase status stored\n')
224 repo.ui.debug('rebase status stored\n')
225
225
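# A minimal sketch of the resulting .hg/rebasestate contents as written by
# _writestatus() above (placeholders stand in for full 40-character hex
# nodes; flag values are hypothetical):
#
#   <originalwd hex>
#                                  (legacy "dest" line, always left empty now)
#   <external parent hex, the null node if there is none>
#   0                              (collapse flag)
#   0                              (keep flag)
#   0                              (keepbranches flag)
#   <active bookmark name, possibly empty>
#   <oldrev hex>:<newrev hex, or -1 for revtodo>:<destnode hex>  (one per rev)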
226 def restorestatus(self):
226 def restorestatus(self):
227 """Restore a previously stored status"""
227 """Restore a previously stored status"""
228 if not self.stateobj.exists():
228 if not self.stateobj.exists():
229 cmdutil.wrongtooltocontinue(self.repo, _('rebase'))
229 cmdutil.wrongtooltocontinue(self.repo, _('rebase'))
230
230
231 data = self._read()
231 data = self._read()
232 self.repo.ui.debug('rebase status resumed\n')
232 self.repo.ui.debug('rebase status resumed\n')
233
233
234 self.originalwd = data['originalwd']
234 self.originalwd = data['originalwd']
235 self.destmap = data['destmap']
235 self.destmap = data['destmap']
236 self.state = data['state']
236 self.state = data['state']
237 self.skipped = data['skipped']
237 self.skipped = data['skipped']
238 self.collapsef = data['collapse']
238 self.collapsef = data['collapse']
239 self.keepf = data['keep']
239 self.keepf = data['keep']
240 self.keepbranchesf = data['keepbranches']
240 self.keepbranchesf = data['keepbranches']
241 self.external = data['external']
241 self.external = data['external']
242 self.activebookmark = data['activebookmark']
242 self.activebookmark = data['activebookmark']
243
243
244 def _read(self):
244 def _read(self):
245 self.prepared = True
245 self.prepared = True
246 repo = self.repo
246 repo = self.repo
247 assert repo.filtername is None
247 assert repo.filtername is None
248 data = {'keepbranches': None, 'collapse': None, 'activebookmark': None,
248 data = {'keepbranches': None, 'collapse': None, 'activebookmark': None,
249 'external': nullrev, 'keep': None, 'originalwd': None}
249 'external': nullrev, 'keep': None, 'originalwd': None}
250 legacydest = None
250 legacydest = None
251 state = {}
251 state = {}
252 destmap = {}
252 destmap = {}
253
253
254 if True:
254 if True:
255 f = repo.vfs("rebasestate")
255 f = repo.vfs("rebasestate")
256 for i, l in enumerate(f.read().splitlines()):
256 for i, l in enumerate(f.read().splitlines()):
257 if i == 0:
257 if i == 0:
258 data['originalwd'] = repo[l].rev()
258 data['originalwd'] = repo[l].rev()
259 elif i == 1:
259 elif i == 1:
260 # this line should be empty in newer versions, but legacy
260 # this line should be empty in newer versions, but legacy
261 # clients may still use it
261 # clients may still use it
262 if l:
262 if l:
263 legacydest = repo[l].rev()
263 legacydest = repo[l].rev()
264 elif i == 2:
264 elif i == 2:
265 data['external'] = repo[l].rev()
265 data['external'] = repo[l].rev()
266 elif i == 3:
266 elif i == 3:
267 data['collapse'] = bool(int(l))
267 data['collapse'] = bool(int(l))
268 elif i == 4:
268 elif i == 4:
269 data['keep'] = bool(int(l))
269 data['keep'] = bool(int(l))
270 elif i == 5:
270 elif i == 5:
271 data['keepbranches'] = bool(int(l))
271 data['keepbranches'] = bool(int(l))
272 elif i == 6 and not (len(l) == 81 and ':' in l):
272 elif i == 6 and not (len(l) == 81 and ':' in l):
273 # line 6 is a recent addition, so for backwards
273 # line 6 is a recent addition, so for backwards
274 # compatibility check that the line doesn't look like the
274 # compatibility check that the line doesn't look like the
275 # oldrev:newrev lines
275 # oldrev:newrev lines
276 data['activebookmark'] = l
276 data['activebookmark'] = l
277 else:
277 else:
278 args = l.split(':')
278 args = l.split(':')
279 oldrev = repo[args[0]].rev()
279 oldrev = repo[args[0]].rev()
280 newrev = args[1]
280 newrev = args[1]
281 if newrev in legacystates:
281 if newrev in legacystates:
282 continue
282 continue
283 if len(args) > 2:
283 if len(args) > 2:
284 destrev = repo[args[2]].rev()
284 destrev = repo[args[2]].rev()
285 else:
285 else:
286 destrev = legacydest
286 destrev = legacydest
287 destmap[oldrev] = destrev
287 destmap[oldrev] = destrev
288 if newrev == revtodostr:
288 if newrev == revtodostr:
289 state[oldrev] = revtodo
289 state[oldrev] = revtodo
290 # Legacy compat special case
290 # Legacy compat special case
291 else:
291 else:
292 state[oldrev] = repo[newrev].rev()
292 state[oldrev] = repo[newrev].rev()
293
293
294 if data['keepbranches'] is None:
294 if data['keepbranches'] is None:
295 raise error.Abort(_('.hg/rebasestate is incomplete'))
295 raise error.Abort(_('.hg/rebasestate is incomplete'))
296
296
297 data['destmap'] = destmap
297 data['destmap'] = destmap
298 data['state'] = state
298 data['state'] = state
299 skipped = set()
299 skipped = set()
300 # recompute the set of skipped revs
300 # recompute the set of skipped revs
301 if not data['collapse']:
301 if not data['collapse']:
302 seen = set(destmap.values())
302 seen = set(destmap.values())
303 for old, new in sorted(state.items()):
303 for old, new in sorted(state.items()):
304 if new != revtodo and new in seen:
304 if new != revtodo and new in seen:
305 skipped.add(old)
305 skipped.add(old)
306 seen.add(new)
306 seen.add(new)
307 data['skipped'] = skipped
307 data['skipped'] = skipped
308 repo.ui.debug('computed skipped revs: %s\n' %
308 repo.ui.debug('computed skipped revs: %s\n' %
309 (' '.join('%d' % r for r in sorted(skipped)) or ''))
309 (' '.join('%d' % r for r in sorted(skipped)) or ''))
310
310
311 return data
311 return data
312
312
313 def _handleskippingobsolete(self, obsoleterevs, destmap):
313 def _handleskippingobsolete(self, obsoleterevs, destmap):
314 """Compute structures necessary for skipping obsolete revisions
314 """Compute structures necessary for skipping obsolete revisions
315
315
316 obsoleterevs: iterable of all obsolete revisions in rebaseset
316 obsoleterevs: iterable of all obsolete revisions in rebaseset
317 destmap: {srcrev: destrev} destination revisions
317 destmap: {srcrev: destrev} destination revisions
318 """
318 """
319 self.obsoletenotrebased = {}
319 self.obsoletenotrebased = {}
320 if not self.ui.configbool('experimental', 'rebaseskipobsolete'):
320 if not self.ui.configbool('experimental', 'rebaseskipobsolete'):
321 return
321 return
322 obsoleteset = set(obsoleterevs)
322 obsoleteset = set(obsoleterevs)
323 (self.obsoletenotrebased,
323 (self.obsoletenotrebased,
324 self.obsoletewithoutsuccessorindestination,
324 self.obsoletewithoutsuccessorindestination,
325 obsoleteextinctsuccessors) = _computeobsoletenotrebased(
325 obsoleteextinctsuccessors) = _computeobsoletenotrebased(
326 self.repo, obsoleteset, destmap)
326 self.repo, obsoleteset, destmap)
327 skippedset = set(self.obsoletenotrebased)
327 skippedset = set(self.obsoletenotrebased)
328 skippedset.update(self.obsoletewithoutsuccessorindestination)
328 skippedset.update(self.obsoletewithoutsuccessorindestination)
329 skippedset.update(obsoleteextinctsuccessors)
329 skippedset.update(obsoleteextinctsuccessors)
330 _checkobsrebase(self.repo, self.ui, obsoleteset, skippedset)
330 _checkobsrebase(self.repo, self.ui, obsoleteset, skippedset)
331
331
332 def _prepareabortorcontinue(self, isabort, backup=True, suppwarns=False):
332 def _prepareabortorcontinue(self, isabort, backup=True, suppwarns=False):
333 try:
333 try:
334 self.restorestatus()
334 self.restorestatus()
335 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
335 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
336 except error.RepoLookupError:
336 except error.RepoLookupError:
337 if isabort:
337 if isabort:
338 clearstatus(self.repo)
338 clearstatus(self.repo)
339 clearcollapsemsg(self.repo)
339 clearcollapsemsg(self.repo)
340 self.repo.ui.warn(_('rebase aborted (no revision is removed,'
340 self.repo.ui.warn(_('rebase aborted (no revision is removed,'
341 ' only broken state is cleared)\n'))
341 ' only broken state is cleared)\n'))
342 return 0
342 return 0
343 else:
343 else:
344 msg = _('cannot continue inconsistent rebase')
344 msg = _('cannot continue inconsistent rebase')
345 hint = _('use "hg rebase --abort" to clear broken state')
345 hint = _('use "hg rebase --abort" to clear broken state')
346 raise error.Abort(msg, hint=hint)
346 raise error.Abort(msg, hint=hint)
347
347
348 if isabort:
348 if isabort:
349 backup = backup and self.backupf
349 backup = backup and self.backupf
350 return abort(self.repo, self.originalwd, self.destmap, self.state,
350 return abort(self.repo, self.originalwd, self.destmap, self.state,
351 activebookmark=self.activebookmark, backup=backup,
351 activebookmark=self.activebookmark, backup=backup,
352 suppwarns=suppwarns)
352 suppwarns=suppwarns)
353
353
354 def _preparenewrebase(self, destmap):
354 def _preparenewrebase(self, destmap):
355 if not destmap:
355 if not destmap:
356 return _nothingtorebase()
356 return _nothingtorebase()
357
357
358 rebaseset = destmap.keys()
358 rebaseset = destmap.keys()
359 allowunstable = obsolete.isenabled(self.repo, obsolete.allowunstableopt)
359 allowunstable = obsolete.isenabled(self.repo, obsolete.allowunstableopt)
360 if (not (self.keepf or allowunstable)
360 if (not (self.keepf or allowunstable)
361 and self.repo.revs('first(children(%ld) - %ld)',
361 and self.repo.revs('first(children(%ld) - %ld)',
362 rebaseset, rebaseset)):
362 rebaseset, rebaseset)):
363 raise error.Abort(
363 raise error.Abort(
364 _("can't remove original changesets with"
364 _("can't remove original changesets with"
365 " unrebased descendants"),
365 " unrebased descendants"),
366 hint=_('use --keep to keep original changesets'))
366 hint=_('use --keep to keep original changesets'))
367
367
368 result = buildstate(self.repo, destmap, self.collapsef)
368 result = buildstate(self.repo, destmap, self.collapsef)
369
369
370 if not result:
370 if not result:
371 # Empty state built, nothing to rebase
371 # Empty state built, nothing to rebase
372 self.ui.status(_('nothing to rebase\n'))
372 self.ui.status(_('nothing to rebase\n'))
373 return _nothingtorebase()
373 return _nothingtorebase()
374
374
375 for root in self.repo.set('roots(%ld)', rebaseset):
375 for root in self.repo.set('roots(%ld)', rebaseset):
376 if not self.keepf and not root.mutable():
376 if not self.keepf and not root.mutable():
377 raise error.Abort(_("can't rebase public changeset %s")
377 raise error.Abort(_("can't rebase public changeset %s")
378 % root,
378 % root,
379 hint=_("see 'hg help phases' for details"))
379 hint=_("see 'hg help phases' for details"))
380
380
381 (self.originalwd, self.destmap, self.state) = result
381 (self.originalwd, self.destmap, self.state) = result
382 if self.collapsef:
382 if self.collapsef:
383 dests = set(self.destmap.values())
383 dests = set(self.destmap.values())
384 if len(dests) != 1:
384 if len(dests) != 1:
385 raise error.Abort(
385 raise error.Abort(
386 _('--collapse does not work with multiple destinations'))
386 _('--collapse does not work with multiple destinations'))
387 destrev = next(iter(dests))
387 destrev = next(iter(dests))
388 destancestors = self.repo.changelog.ancestors([destrev],
388 destancestors = self.repo.changelog.ancestors([destrev],
389 inclusive=True)
389 inclusive=True)
390 self.external = externalparent(self.repo, self.state, destancestors)
390 self.external = externalparent(self.repo, self.state, destancestors)
391
391
392 for destrev in sorted(set(destmap.values())):
392 for destrev in sorted(set(destmap.values())):
393 dest = self.repo[destrev]
393 dest = self.repo[destrev]
394 if dest.closesbranch() and not self.keepbranchesf:
394 if dest.closesbranch() and not self.keepbranchesf:
395 self.ui.status(_('reopening closed branch head %s\n') % dest)
395 self.ui.status(_('reopening closed branch head %s\n') % dest)
396
396
397 self.prepared = True
397 self.prepared = True
398
398
399 def _assignworkingcopy(self):
399 def _assignworkingcopy(self):
400 if self.inmemory:
400 if self.inmemory:
401 from mercurial.context import overlayworkingctx
401 from mercurial.context import overlayworkingctx
402 self.wctx = overlayworkingctx(self.repo)
402 self.wctx = overlayworkingctx(self.repo)
403 self.repo.ui.debug("rebasing in-memory\n")
403 self.repo.ui.debug("rebasing in-memory\n")
404 else:
404 else:
405 self.wctx = self.repo[None]
405 self.wctx = self.repo[None]
406 self.repo.ui.debug("rebasing on disk\n")
406 self.repo.ui.debug("rebasing on disk\n")
407 self.repo.ui.log("rebase", "", rebase_imm_used=self.inmemory)
407 self.repo.ui.log("rebase", "", rebase_imm_used=self.inmemory)
408
408
409 def _performrebase(self, tr):
409 def _performrebase(self, tr):
410 self._assignworkingcopy()
410 self._assignworkingcopy()
411 repo, ui = self.repo, self.ui
411 repo, ui = self.repo, self.ui
412 if self.keepbranchesf:
412 if self.keepbranchesf:
413 # insert _savebranch at the start of extrafns so if
413 # insert _savebranch at the start of extrafns so if
414 # there's a user-provided extrafn it can clobber branch if
414 # there's a user-provided extrafn it can clobber branch if
415 # desired
415 # desired
416 self.extrafns.insert(0, _savebranch)
416 self.extrafns.insert(0, _savebranch)
417 if self.collapsef:
417 if self.collapsef:
418 branches = set()
418 branches = set()
419 for rev in self.state:
419 for rev in self.state:
420 branches.add(repo[rev].branch())
420 branches.add(repo[rev].branch())
421 if len(branches) > 1:
421 if len(branches) > 1:
422 raise error.Abort(_('cannot collapse multiple named '
422 raise error.Abort(_('cannot collapse multiple named '
423 'branches'))
423 'branches'))
424
424
425 # Calculate self.obsoletenotrebased
425 # Calculate self.obsoletenotrebased
426 obsrevs = _filterobsoleterevs(self.repo, self.state)
426 obsrevs = _filterobsoleterevs(self.repo, self.state)
427 self._handleskippingobsolete(obsrevs, self.destmap)
427 self._handleskippingobsolete(obsrevs, self.destmap)
428
428
429 # Keep track of the active bookmarks in order to reset them later
429 # Keep track of the active bookmarks in order to reset them later
430 self.activebookmark = self.activebookmark or repo._activebookmark
430 self.activebookmark = self.activebookmark or repo._activebookmark
431 if self.activebookmark:
431 if self.activebookmark:
432 bookmarks.deactivate(repo)
432 bookmarks.deactivate(repo)
433
433
434 # Store the state before we begin so users can run 'hg rebase --abort'
434 # Store the state before we begin so users can run 'hg rebase --abort'
435 # if we fail before the transaction closes.
435 # if we fail before the transaction closes.
436 self.storestatus()
436 self.storestatus()
437 if tr:
437 if tr:
438 # When using single transaction, store state when transaction
438 # When using single transaction, store state when transaction
439 # commits.
439 # commits.
440 self.storestatus(tr)
440 self.storestatus(tr)
441
441
442 cands = [k for k, v in self.state.iteritems() if v == revtodo]
442 cands = [k for k, v in self.state.iteritems() if v == revtodo]
443 p = repo.ui.makeprogress(_("rebasing"), unit=_('changesets'),
443 p = repo.ui.makeprogress(_("rebasing"), unit=_('changesets'),
444 total=len(cands))
444 total=len(cands))
445 def progress(ctx):
445 def progress(ctx):
446 p.increment(item=("%d:%s" % (ctx.rev(), ctx)))
446 p.increment(item=("%d:%s" % (ctx.rev(), ctx)))
447 allowdivergence = self.ui.configbool(
447 allowdivergence = self.ui.configbool(
448 'experimental', 'evolution.allowdivergence')
448 'experimental', 'evolution.allowdivergence')
449 for subset in sortsource(self.destmap):
449 for subset in sortsource(self.destmap):
450 sortedrevs = self.repo.revs('sort(%ld, -topo)', subset)
450 sortedrevs = self.repo.revs('sort(%ld, -topo)', subset)
451 if not allowdivergence:
451 if not allowdivergence:
452 sortedrevs -= self.repo.revs(
452 sortedrevs -= self.repo.revs(
453 'descendants(%ld) and not %ld',
453 'descendants(%ld) and not %ld',
454 self.obsoletewithoutsuccessorindestination,
454 self.obsoletewithoutsuccessorindestination,
455 self.obsoletewithoutsuccessorindestination,
455 self.obsoletewithoutsuccessorindestination,
456 )
456 )
457 for rev in sortedrevs:
457 for rev in sortedrevs:
458 self._rebasenode(tr, rev, allowdivergence, progress)
458 self._rebasenode(tr, rev, allowdivergence, progress)
459 p.complete()
459 p.complete()
460 ui.note(_('rebase merging completed\n'))
460 ui.note(_('rebase merging completed\n'))
461
461
462 def _concludenode(self, rev, p1, p2, editor, commitmsg=None):
462 def _concludenode(self, rev, p1, p2, editor, commitmsg=None):
463 '''Commit the wd changes with parents p1 and p2.
463 '''Commit the wd changes with parents p1 and p2.
464
464
465 Reuse commit info from rev but also store useful information in extra.
465 Reuse commit info from rev but also store useful information in extra.
466 Return node of committed revision.'''
466 Return node of committed revision.'''
467 repo = self.repo
467 repo = self.repo
468 ctx = repo[rev]
468 ctx = repo[rev]
469 if commitmsg is None:
469 if commitmsg is None:
470 commitmsg = ctx.description()
470 commitmsg = ctx.description()
471 date = self.date
471 date = self.date
472 if date is None:
472 if date is None:
473 date = ctx.date()
473 date = ctx.date()
474 extra = {'rebase_source': ctx.hex()}
474 extra = {'rebase_source': ctx.hex()}
475 for c in self.extrafns:
475 for c in self.extrafns:
476 c(ctx, extra)
476 c(ctx, extra)
477 keepbranch = self.keepbranchesf and repo[p1].branch() != ctx.branch()
477 keepbranch = self.keepbranchesf and repo[p1].branch() != ctx.branch()
478 destphase = max(ctx.phase(), phases.draft)
478 destphase = max(ctx.phase(), phases.draft)
479 overrides = {('phases', 'new-commit'): destphase}
479 overrides = {('phases', 'new-commit'): destphase}
480 if keepbranch:
480 if keepbranch:
481 overrides[('ui', 'allowemptycommit')] = True
481 overrides[('ui', 'allowemptycommit')] = True
482 with repo.ui.configoverride(overrides, 'rebase'):
482 with repo.ui.configoverride(overrides, 'rebase'):
483 if self.inmemory:
483 if self.inmemory:
484 newnode = commitmemorynode(repo, p1, p2,
484 newnode = commitmemorynode(repo, p1, p2,
485 wctx=self.wctx,
485 wctx=self.wctx,
486 extra=extra,
486 extra=extra,
487 commitmsg=commitmsg,
487 commitmsg=commitmsg,
488 editor=editor,
488 editor=editor,
489 user=ctx.user(),
489 user=ctx.user(),
490 date=date)
490 date=date)
491 mergemod.mergestate.clean(repo)
491 mergemod.mergestate.clean(repo)
492 else:
492 else:
493 newnode = commitnode(repo, p1, p2,
493 newnode = commitnode(repo, p1, p2,
494 extra=extra,
494 extra=extra,
495 commitmsg=commitmsg,
495 commitmsg=commitmsg,
496 editor=editor,
496 editor=editor,
497 user=ctx.user(),
497 user=ctx.user(),
498 date=date)
498 date=date)
499
499
500 if newnode is None:
500 if newnode is None:
501 # If it ended up being a no-op commit, then the normal
501 # If it ended up being a no-op commit, then the normal
502 # merge state clean-up path doesn't happen, so do it
502 # merge state clean-up path doesn't happen, so do it
503 # here. Fix issue5494
503 # here. Fix issue5494
504 mergemod.mergestate.clean(repo)
504 mergemod.mergestate.clean(repo)
505 return newnode
505 return newnode
506
506
507 def _rebasenode(self, tr, rev, allowdivergence, progressfn):
507 def _rebasenode(self, tr, rev, allowdivergence, progressfn):
508 repo, ui, opts = self.repo, self.ui, self.opts
508 repo, ui, opts = self.repo, self.ui, self.opts
509 dest = self.destmap[rev]
509 dest = self.destmap[rev]
510 ctx = repo[rev]
510 ctx = repo[rev]
511 desc = _ctxdesc(ctx)
511 desc = _ctxdesc(ctx)
512 if self.state[rev] == rev:
512 if self.state[rev] == rev:
513 ui.status(_('already rebased %s\n') % desc)
513 ui.status(_('already rebased %s\n') % desc)
514 elif (not allowdivergence
514 elif (not allowdivergence
515 and rev in self.obsoletewithoutsuccessorindestination):
515 and rev in self.obsoletewithoutsuccessorindestination):
516 msg = _('note: not rebasing %s and its descendants as '
516 msg = _('note: not rebasing %s and its descendants as '
517 'this would cause divergence\n') % desc
517 'this would cause divergence\n') % desc
518 repo.ui.status(msg)
518 repo.ui.status(msg)
519 self.skipped.add(rev)
519 self.skipped.add(rev)
520 elif rev in self.obsoletenotrebased:
520 elif rev in self.obsoletenotrebased:
521 succ = self.obsoletenotrebased[rev]
521 succ = self.obsoletenotrebased[rev]
522 if succ is None:
522 if succ is None:
523 msg = _('note: not rebasing %s, it has no '
523 msg = _('note: not rebasing %s, it has no '
524 'successor\n') % desc
524 'successor\n') % desc
525 else:
525 else:
526 succdesc = _ctxdesc(repo[succ])
526 succdesc = _ctxdesc(repo[succ])
527 msg = (_('note: not rebasing %s, already in '
527 msg = (_('note: not rebasing %s, already in '
528 'destination as %s\n') % (desc, succdesc))
528 'destination as %s\n') % (desc, succdesc))
529 repo.ui.status(msg)
529 repo.ui.status(msg)
530 # Make clearrebased aware that state[rev] is not a true successor
530 # Make clearrebased aware that state[rev] is not a true successor
531 self.skipped.add(rev)
531 self.skipped.add(rev)
532 # Record rev as moved to its desired destination in self.state.
532 # Record rev as moved to its desired destination in self.state.
533 # This helps bookmark and working parent movement.
533 # This helps bookmark and working parent movement.
534 dest = max(adjustdest(repo, rev, self.destmap, self.state,
534 dest = max(adjustdest(repo, rev, self.destmap, self.state,
535 self.skipped))
535 self.skipped))
536 self.state[rev] = dest
536 self.state[rev] = dest
537 elif self.state[rev] == revtodo:
537 elif self.state[rev] == revtodo:
538 ui.status(_('rebasing %s\n') % desc)
538 ui.status(_('rebasing %s\n') % desc)
539 progressfn(ctx)
539 progressfn(ctx)
540 p1, p2, base = defineparents(repo, rev, self.destmap,
540 p1, p2, base = defineparents(repo, rev, self.destmap,
541 self.state, self.skipped,
541 self.state, self.skipped,
542 self.obsoletenotrebased)
542 self.obsoletenotrebased)
543 if len(repo[None].parents()) == 2:
543 if len(repo[None].parents()) == 2:
544 repo.ui.debug('resuming interrupted rebase\n')
544 repo.ui.debug('resuming interrupted rebase\n')
545 else:
545 else:
546 overrides = {('ui', 'forcemerge'): opts.get('tool', '')}
546 overrides = {('ui', 'forcemerge'): opts.get('tool', '')}
547 with ui.configoverride(overrides, 'rebase'):
547 with ui.configoverride(overrides, 'rebase'):
548 stats = rebasenode(repo, rev, p1, base, self.collapsef,
548 stats = rebasenode(repo, rev, p1, base, self.collapsef,
549 dest, wctx=self.wctx)
549 dest, wctx=self.wctx)
550 if stats.unresolvedcount > 0:
550 if stats.unresolvedcount > 0:
551 if self.inmemory:
551 if self.inmemory:
552 raise error.InMemoryMergeConflictsError()
552 raise error.InMemoryMergeConflictsError()
553 else:
553 else:
554 raise error.InterventionRequired(
554 raise error.InterventionRequired(
555 _('unresolved conflicts (see hg '
555 _('unresolved conflicts (see hg '
556 'resolve, then hg rebase --continue)'))
556 'resolve, then hg rebase --continue)'))
557 if not self.collapsef:
557 if not self.collapsef:
558 merging = p2 != nullrev
558 merging = p2 != nullrev
559 editform = cmdutil.mergeeditform(merging, 'rebase')
559 editform = cmdutil.mergeeditform(merging, 'rebase')
560 editor = cmdutil.getcommiteditor(editform=editform,
560 editor = cmdutil.getcommiteditor(editform=editform,
561 **pycompat.strkwargs(opts))
561 **pycompat.strkwargs(opts))
562 newnode = self._concludenode(rev, p1, p2, editor)
562 newnode = self._concludenode(rev, p1, p2, editor)
563 else:
563 else:
564 # Skip commit if we are collapsing
564 # Skip commit if we are collapsing
565 if self.inmemory:
565 if self.inmemory:
566 self.wctx.setbase(repo[p1])
566 self.wctx.setbase(repo[p1])
567 else:
567 else:
568 repo.setparents(repo[p1].node())
568 repo.setparents(repo[p1].node())
569 newnode = None
569 newnode = None
570 # Update the state
570 # Update the state
571 if newnode is not None:
571 if newnode is not None:
572 self.state[rev] = repo[newnode].rev()
572 self.state[rev] = repo[newnode].rev()
573 ui.debug('rebased as %s\n' % short(newnode))
573 ui.debug('rebased as %s\n' % short(newnode))
574 else:
574 else:
575 if not self.collapsef:
575 if not self.collapsef:
576 ui.warn(_('note: rebase of %d:%s created no changes '
576 ui.warn(_('note: rebase of %d:%s created no changes '
577 'to commit\n') % (rev, ctx))
577 'to commit\n') % (rev, ctx))
578 self.skipped.add(rev)
578 self.skipped.add(rev)
579 self.state[rev] = p1
579 self.state[rev] = p1
580 ui.debug('next revision set to %d\n' % p1)
580 ui.debug('next revision set to %d\n' % p1)
581 else:
581 else:
582 ui.status(_('already rebased %s as %s\n') %
582 ui.status(_('already rebased %s as %s\n') %
583 (desc, repo[self.state[rev]]))
583 (desc, repo[self.state[rev]]))
584 if not tr:
584 if not tr:
585 # When not using single transaction, store state after each
585 # When not using single transaction, store state after each
586 # commit is completely done. On InterventionRequired, we thus
586 # commit is completely done. On InterventionRequired, we thus
587 # won't store the status. Instead, we'll hit the "len(parents) == 2"
587 # won't store the status. Instead, we'll hit the "len(parents) == 2"
588 # case and realize that the commit was in progress.
588 # case and realize that the commit was in progress.
589 self.storestatus()
589 self.storestatus()
590
590
591 def _finishrebase(self):
591 def _finishrebase(self):
592 repo, ui, opts = self.repo, self.ui, self.opts
592 repo, ui, opts = self.repo, self.ui, self.opts
593 fm = ui.formatter('rebase', opts)
593 fm = ui.formatter('rebase', opts)
594 fm.startitem()
594 fm.startitem()
595 if self.collapsef:
595 if self.collapsef:
596 p1, p2, _base = defineparents(repo, min(self.state), self.destmap,
596 p1, p2, _base = defineparents(repo, min(self.state), self.destmap,
597 self.state, self.skipped,
597 self.state, self.skipped,
598 self.obsoletenotrebased)
598 self.obsoletenotrebased)
599 editopt = opts.get('edit')
599 editopt = opts.get('edit')
600 editform = 'rebase.collapse'
600 editform = 'rebase.collapse'
601 if self.collapsemsg:
601 if self.collapsemsg:
602 commitmsg = self.collapsemsg
602 commitmsg = self.collapsemsg
603 else:
603 else:
604 commitmsg = 'Collapsed revision'
604 commitmsg = 'Collapsed revision'
605 for rebased in sorted(self.state):
605 for rebased in sorted(self.state):
606 if rebased not in self.skipped:
606 if rebased not in self.skipped:
607 commitmsg += '\n* %s' % repo[rebased].description()
607 commitmsg += '\n* %s' % repo[rebased].description()
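# Without --message/--logfile, the collapse commit message assembled above
# ends up looking roughly like this (the bullet descriptions are
# hypothetical):
#   Collapsed revision
#   * add frobnicator helper
#   * teach frobnicator about widgets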
608 editopt = True
608 editopt = True
609 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
609 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
610 revtoreuse = max(self.state)
610 revtoreuse = max(self.state)
611
611
612 newnode = self._concludenode(revtoreuse, p1, self.external,
612 newnode = self._concludenode(revtoreuse, p1, self.external,
613 editor, commitmsg=commitmsg)
613 editor, commitmsg=commitmsg)
614
614
615 if newnode is not None:
615 if newnode is not None:
616 newrev = repo[newnode].rev()
616 newrev = repo[newnode].rev()
617 for oldrev in self.state:
617 for oldrev in self.state:
618 self.state[oldrev] = newrev
618 self.state[oldrev] = newrev
619
619
620 if 'qtip' in repo.tags():
620 if 'qtip' in repo.tags():
621 updatemq(repo, self.state, self.skipped,
621 updatemq(repo, self.state, self.skipped,
622 **pycompat.strkwargs(opts))
622 **pycompat.strkwargs(opts))
623
623
624 # restore original working directory
624 # restore original working directory
625 # (we do this before stripping)
625 # (we do this before stripping)
626 newwd = self.state.get(self.originalwd, self.originalwd)
626 newwd = self.state.get(self.originalwd, self.originalwd)
627 if newwd < 0:
627 if newwd < 0:
628 # original directory is a parent of rebase set root or ignored
628 # original directory is a parent of rebase set root or ignored
629 newwd = self.originalwd
629 newwd = self.originalwd
630 if newwd not in [c.rev() for c in repo[None].parents()]:
630 if newwd not in [c.rev() for c in repo[None].parents()]:
631 ui.note(_("update back to initial working directory parent\n"))
631 ui.note(_("update back to initial working directory parent\n"))
632 hg.updaterepo(repo, newwd, overwrite=False)
632 hg.updaterepo(repo, newwd, overwrite=False)
633
633
634 collapsedas = None
634 collapsedas = None
635 if self.collapsef and not self.keepf:
635 if self.collapsef and not self.keepf:
636 collapsedas = newnode
636 collapsedas = newnode
637 clearrebased(ui, repo, self.destmap, self.state, self.skipped,
637 clearrebased(ui, repo, self.destmap, self.state, self.skipped,
638 collapsedas, self.keepf, fm=fm, backup=self.backupf)
638 collapsedas, self.keepf, fm=fm, backup=self.backupf)
639
639
640 clearstatus(repo)
640 clearstatus(repo)
641 clearcollapsemsg(repo)
641 clearcollapsemsg(repo)
642
642
643 ui.note(_("rebase completed\n"))
643 ui.note(_("rebase completed\n"))
644 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
644 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
645 if self.skipped:
645 if self.skipped:
646 skippedlen = len(self.skipped)
646 skippedlen = len(self.skipped)
647 ui.note(_("%d revisions have been skipped\n") % skippedlen)
647 ui.note(_("%d revisions have been skipped\n") % skippedlen)
648 fm.end()
648 fm.end()
649
649
650 if (self.activebookmark and self.activebookmark in repo._bookmarks and
650 if (self.activebookmark and self.activebookmark in repo._bookmarks and
651 repo['.'].node() == repo._bookmarks[self.activebookmark]):
651 repo['.'].node() == repo._bookmarks[self.activebookmark]):
652 bookmarks.activate(repo, self.activebookmark)
652 bookmarks.activate(repo, self.activebookmark)
653
653
654 @command('rebase',
654 @command('rebase',
655 [('s', 'source', '',
655 [('s', 'source', '',
656 _('rebase the specified changeset and descendants'), _('REV')),
656 _('rebase the specified changeset and descendants'), _('REV')),
657 ('b', 'base', '',
657 ('b', 'base', '',
658 _('rebase everything from branching point of specified changeset'),
658 _('rebase everything from branching point of specified changeset'),
659 _('REV')),
659 _('REV')),
660 ('r', 'rev', [],
660 ('r', 'rev', [],
661 _('rebase these revisions'),
661 _('rebase these revisions'),
662 _('REV')),
662 _('REV')),
663 ('d', 'dest', '',
663 ('d', 'dest', '',
664 _('rebase onto the specified changeset'), _('REV')),
664 _('rebase onto the specified changeset'), _('REV')),
665 ('', 'collapse', False, _('collapse the rebased changesets')),
665 ('', 'collapse', False, _('collapse the rebased changesets')),
666 ('m', 'message', '',
666 ('m', 'message', '',
667 _('use text as collapse commit message'), _('TEXT')),
667 _('use text as collapse commit message'), _('TEXT')),
668 ('e', 'edit', False, _('invoke editor on commit messages')),
668 ('e', 'edit', False, _('invoke editor on commit messages')),
669 ('l', 'logfile', '',
669 ('l', 'logfile', '',
670 _('read collapse commit message from file'), _('FILE')),
670 _('read collapse commit message from file'), _('FILE')),
671 ('k', 'keep', False, _('keep original changesets')),
671 ('k', 'keep', False, _('keep original changesets')),
672 ('', 'keepbranches', False, _('keep original branch names')),
672 ('', 'keepbranches', False, _('keep original branch names')),
673 ('D', 'detach', False, _('(DEPRECATED)')),
673 ('D', 'detach', False, _('(DEPRECATED)')),
674 ('i', 'interactive', False, _('(DEPRECATED)')),
674 ('i', 'interactive', False, _('(DEPRECATED)')),
675 ('t', 'tool', '', _('specify merge tool')),
675 ('t', 'tool', '', _('specify merge tool')),
676 ('', 'stop', False, _('stop interrupted rebase')),
676 ('', 'stop', False, _('stop interrupted rebase')),
677 ('c', 'continue', False, _('continue an interrupted rebase')),
677 ('c', 'continue', False, _('continue an interrupted rebase')),
678 ('a', 'abort', False, _('abort an interrupted rebase')),
678 ('a', 'abort', False, _('abort an interrupted rebase')),
679 ('', 'auto-orphans', '', _('automatically rebase orphan revisions '
679 ('', 'auto-orphans', '', _('automatically rebase orphan revisions '
680 'in the specified revset (EXPERIMENTAL)')),
680 'in the specified revset (EXPERIMENTAL)')),
681 ] + cmdutil.dryrunopts + cmdutil.formatteropts + cmdutil.confirmopts,
681 ] + cmdutil.dryrunopts + cmdutil.formatteropts + cmdutil.confirmopts,
682 _('[-s REV | -b REV] [-d REV] [OPTION]'),
682 _('[-s REV | -b REV] [-d REV] [OPTION]'),
683 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
683 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
684 def rebase(ui, repo, **opts):
684 def rebase(ui, repo, **opts):
685 """move changeset (and descendants) to a different branch
685 """move changeset (and descendants) to a different branch
686
686
687 Rebase uses repeated merging to graft changesets from one part of
687 Rebase uses repeated merging to graft changesets from one part of
688 history (the source) onto another (the destination). This can be
688 history (the source) onto another (the destination). This can be
689 useful for linearizing *local* changes relative to a master
689 useful for linearizing *local* changes relative to a master
690 development tree.
690 development tree.
691
691
692 Published commits cannot be rebased (see :hg:`help phases`).
692 Published commits cannot be rebased (see :hg:`help phases`).
693 To copy commits, see :hg:`help graft`.
693 To copy commits, see :hg:`help graft`.
694
694
695 If you don't specify a destination changeset (``-d/--dest``), rebase
695 If you don't specify a destination changeset (``-d/--dest``), rebase
696 will use the same logic as :hg:`merge` to pick a destination. If
696 will use the same logic as :hg:`merge` to pick a destination. If
697 the current branch contains exactly one other head, the other head
697 the current branch contains exactly one other head, the other head
698 is merged with by default. Otherwise, an explicit revision with
698 is merged with by default. Otherwise, an explicit revision with
699 which to merge must be provided. (The destination changeset is not
699 which to merge must be provided. (The destination changeset is not
700 modified by rebasing, but new changesets are added as its
700 modified by rebasing, but new changesets are added as its
701 descendants.)
701 descendants.)
702
702
703 Here are the ways to select changesets:
703 Here are the ways to select changesets:
704
704
705 1. Explicitly select them using ``--rev``.
705 1. Explicitly select them using ``--rev``.
706
706
707 2. Use ``--source`` to select a root changeset and include all of its
707 2. Use ``--source`` to select a root changeset and include all of its
708 descendants.
708 descendants.
709
709
710 3. Use ``--base`` to select a changeset; rebase will find ancestors
710 3. Use ``--base`` to select a changeset; rebase will find ancestors
711 and their descendants which are not also ancestors of the destination.
711 and their descendants which are not also ancestors of the destination.
712
712
713 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
713 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
714 rebase will use ``--base .`` as above.
714 rebase will use ``--base .`` as above.
715
715
716 If ``--source`` or ``--rev`` is used, special names ``SRC`` and ``ALLSRC``
716 If ``--source`` or ``--rev`` is used, special names ``SRC`` and ``ALLSRC``
717 can be used in ``--dest``. The destination is calculated per source
717 can be used in ``--dest``. The destination is calculated per source
718 revision with ``SRC`` substituted by that single source revision and
718 revision with ``SRC`` substituted by that single source revision and
719 ``ALLSRC`` substituted by all source revisions.
719 ``ALLSRC`` substituted by all source revisions.
720
720
721 Rebase will destroy original changesets unless you use ``--keep``.
721 Rebase will destroy original changesets unless you use ``--keep``.
722 It will also move your bookmarks (even if you do).
722 It will also move your bookmarks (even if you do).
723
723
724 Some changesets may be dropped if they do not contribute changes
724 Some changesets may be dropped if they do not contribute changes
725 (e.g. merges from the destination branch).
725 (e.g. merges from the destination branch).
726
726
727 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
727 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
728 a named branch with two heads. You will need to explicitly specify source
728 a named branch with two heads. You will need to explicitly specify source
729 and/or destination.
729 and/or destination.
730
730
731 If you need to use a tool to automate merge/conflict decisions, you
731 If you need to use a tool to automate merge/conflict decisions, you
732 can specify one with ``--tool``, see :hg:`help merge-tools`.
732 can specify one with ``--tool``, see :hg:`help merge-tools`.
733 As a caveat: the tool will not be used to mediate when a file was
733 As a caveat: the tool will not be used to mediate when a file was
734 deleted; there is no hook presently available for this.
734 deleted; there is no hook presently available for this.
735
735
736 If a rebase is interrupted to manually resolve a conflict, it can be
736 If a rebase is interrupted to manually resolve a conflict, it can be
737 continued with --continue/-c, aborted with --abort/-a, or stopped with
737 continued with --continue/-c, aborted with --abort/-a, or stopped with
738 --stop.
738 --stop.
739
739
740 .. container:: verbose
740 .. container:: verbose
741
741
742 Examples:
742 Examples:
743
743
744 - move "local changes" (current commit back to branching point)
744 - move "local changes" (current commit back to branching point)
745 to the current branch tip after a pull::
745 to the current branch tip after a pull::
746
746
747 hg rebase
747 hg rebase
748
748
749 - move a single changeset to the stable branch::
749 - move a single changeset to the stable branch::
750
750
751 hg rebase -r 5f493448 -d stable
751 hg rebase -r 5f493448 -d stable
752
752
753 - splice a commit and all its descendants onto another part of history::
753 - splice a commit and all its descendants onto another part of history::
754
754
755 hg rebase --source c0c3 --dest 4cf9
755 hg rebase --source c0c3 --dest 4cf9
756
756
757 - rebase everything on a branch marked by a bookmark onto the
757 - rebase everything on a branch marked by a bookmark onto the
758 default branch::
758 default branch::
759
759
760 hg rebase --base myfeature --dest default
760 hg rebase --base myfeature --dest default
761
761
762 - collapse a sequence of changes into a single commit::
762 - collapse a sequence of changes into a single commit::
763
763
764 hg rebase --collapse -r 1520:1525 -d .
764 hg rebase --collapse -r 1520:1525 -d .
765
765
766 - move a named branch while preserving its name::
766 - move a named branch while preserving its name::
767
767
768 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
768 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
769
769
770 - stabilize orphaned changesets so history looks linear::
770 - stabilize orphaned changesets so history looks linear::
771
771
772 hg rebase -r 'orphan()-obsolete()'\
772 hg rebase -r 'orphan()-obsolete()'\
773 -d 'first(max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::) +\
773 -d 'first(max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::) +\
774 max(::((roots(ALLSRC) & ::SRC)^)-obsolete()))'
774 max(::((roots(ALLSRC) & ::SRC)^)-obsolete()))'
775
775
776 Configuration Options:
776 Configuration Options:
777
777
778 You can make rebase require a destination if you set the following config
778 You can make rebase require a destination if you set the following config
779 option::
779 option::
780
780
781 [commands]
781 [commands]
782 rebase.requiredest = True
782 rebase.requiredest = True
783
783
784 By default, rebase will close the transaction after each commit. For
784 By default, rebase will close the transaction after each commit. For
785 performance purposes, you can configure rebase to use a single transaction
785 performance purposes, you can configure rebase to use a single transaction
786 across the entire rebase. WARNING: This setting introduces a significant
786 across the entire rebase. WARNING: This setting introduces a significant
787 risk of losing the work you've done in a rebase if the rebase aborts
787 risk of losing the work you've done in a rebase if the rebase aborts
788 unexpectedly::
788 unexpectedly::
789
789
790 [rebase]
790 [rebase]
791 singletransaction = True
791 singletransaction = True
792
792
793 By default, rebase writes to the working copy, but you can configure it to
793 By default, rebase writes to the working copy, but you can configure it to
794 run in-memory for better performance, and to allow it to run if the
794 run in-memory for better performance, and to allow it to run if the
795 working copy is dirty::
795 working copy is dirty::
796
796
797 [rebase]
797 [rebase]
798 experimental.inmemory = True
798 experimental.inmemory = True
799
799
800 Return Values:
800 Return Values:
801
801
802 Returns 0 on success, 1 if nothing to rebase or there are
802 Returns 0 on success, 1 if nothing to rebase or there are
803 unresolved conflicts.
803 unresolved conflicts.
804
804
805 """
805 """
806 opts = pycompat.byteskwargs(opts)
806 opts = pycompat.byteskwargs(opts)
807 inmemory = ui.configbool('rebase', 'experimental.inmemory')
807 inmemory = ui.configbool('rebase', 'experimental.inmemory')
808 dryrun = opts.get('dry_run')
808 dryrun = opts.get('dry_run')
809 confirm = opts.get('confirm')
809 confirm = opts.get('confirm')
810 selactions = [k for k in ['abort', 'stop', 'continue'] if opts.get(k)]
810 selactions = [k for k in ['abort', 'stop', 'continue'] if opts.get(k)]
811 if len(selactions) > 1:
811 if len(selactions) > 1:
812 raise error.Abort(_('cannot use --%s with --%s')
812 raise error.Abort(_('cannot use --%s with --%s')
813 % tuple(selactions[:2]))
813 % tuple(selactions[:2]))
814 action = selactions[0] if selactions else None
814 action = selactions[0] if selactions else None
815 if dryrun and action:
815 if dryrun and action:
816 raise error.Abort(_('cannot specify both --dry-run and --%s') % action)
816 raise error.Abort(_('cannot specify both --dry-run and --%s') % action)
817 if confirm and action:
817 if confirm and action:
818 raise error.Abort(_('cannot specify both --confirm and --%s') % action)
818 raise error.Abort(_('cannot specify both --confirm and --%s') % action)
819 if dryrun and confirm:
819 if dryrun and confirm:
820 raise error.Abort(_('cannot specify both --confirm and --dry-run'))
820 raise error.Abort(_('cannot specify both --confirm and --dry-run'))
821
821
822 if action or repo.currenttransaction() is not None:
822 if action or repo.currenttransaction() is not None:
823 # in-memory rebase is not compatible with resuming rebases.
823 # in-memory rebase is not compatible with resuming rebases.
824 # (Or if it is run within a transaction, since the restart logic can
824 # (Or if it is run within a transaction, since the restart logic can
825 # fail the entire transaction.)
825 # fail the entire transaction.)
826 inmemory = False
826 inmemory = False
827
827
828 if opts.get('auto_orphans'):
828 if opts.get('auto_orphans'):
829 for key in opts:
829 for key in opts:
830 if key != 'auto_orphans' and opts.get(key):
830 if key != 'auto_orphans' and opts.get(key):
831 raise error.Abort(_('--auto-orphans is incompatible with %s') %
831 raise error.Abort(_('--auto-orphans is incompatible with %s') %
832 ('--' + key))
832 ('--' + key))
833 userrevs = list(repo.revs(opts.get('auto_orphans')))
833 userrevs = list(repo.revs(opts.get('auto_orphans')))
834 opts['rev'] = [revsetlang.formatspec('%ld and orphan()', userrevs)]
834 opts['rev'] = [revsetlang.formatspec('%ld and orphan()', userrevs)]
835 opts['dest'] = '_destautoorphanrebase(SRC)'
835 opts['dest'] = '_destautoorphanrebase(SRC)'
836
836
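# Put differently, a hypothetical invocation such as
#   hg rebase --auto-orphans 'all()'
# is rewritten by the block above into the equivalent of
#   hg rebase -r 'all() and orphan()' -d '_destautoorphanrebase(SRC)'
# so each orphan revision gets its own automatically computed destination.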
837 if dryrun or confirm:
837 if dryrun or confirm:
838 return _dryrunrebase(ui, repo, action, opts)
838 return _dryrunrebase(ui, repo, action, opts)
839 elif action == 'stop':
839 elif action == 'stop':
840 rbsrt = rebaseruntime(repo, ui)
840 rbsrt = rebaseruntime(repo, ui)
841 with repo.wlock(), repo.lock():
841 with repo.wlock(), repo.lock():
842 rbsrt.restorestatus()
842 rbsrt.restorestatus()
843 if rbsrt.collapsef:
843 if rbsrt.collapsef:
844 raise error.Abort(_("cannot stop in --collapse session"))
844 raise error.Abort(_("cannot stop in --collapse session"))
845 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
845 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
846 if not (rbsrt.keepf or allowunstable):
846 if not (rbsrt.keepf or allowunstable):
847 raise error.Abort(_("cannot remove original changesets with"
847 raise error.Abort(_("cannot remove original changesets with"
848 " unrebased descendants"),
848 " unrebased descendants"),
849 hint=_('either enable obsmarkers to allow unstable '
849 hint=_('either enable obsmarkers to allow unstable '
850 'revisions or use --keep to keep original '
850 'revisions or use --keep to keep original '
851 'changesets'))
851 'changesets'))
852 if needupdate(repo, rbsrt.state):
852 if needupdate(repo, rbsrt.state):
853 # update back to the original working directory parent
853 # update back to the original working directory parent
854 # to clear the interrupted merge
854 # to clear the interrupted merge
855 hg.updaterepo(repo, rbsrt.originalwd, overwrite=True)
855 hg.updaterepo(repo, rbsrt.originalwd, overwrite=True)
856 rbsrt._finishrebase()
856 rbsrt._finishrebase()
857 return 0
857 return 0
858 elif inmemory:
858 elif inmemory:
859 try:
859 try:
860 # in-memory merge doesn't support conflicts, so if we hit any, abort
860 # in-memory merge doesn't support conflicts, so if we hit any, abort
861 # and re-run as an on-disk merge.
861 # and re-run as an on-disk merge.
862 overrides = {('rebase', 'singletransaction'): True}
862 overrides = {('rebase', 'singletransaction'): True}
863 with ui.configoverride(overrides, 'rebase'):
863 with ui.configoverride(overrides, 'rebase'):
864 return _dorebase(ui, repo, action, opts, inmemory=inmemory)
864 return _dorebase(ui, repo, action, opts, inmemory=inmemory)
865 except error.InMemoryMergeConflictsError:
865 except error.InMemoryMergeConflictsError:
866 ui.warn(_('hit merge conflicts; re-running rebase without in-memory'
866 ui.warn(_('hit merge conflicts; re-running rebase without in-memory'
867 ' merge\n'))
867 ' merge\n'))
868 _dorebase(ui, repo, action='abort', opts={})
868 _dorebase(ui, repo, action='abort', opts={})
869 return _dorebase(ui, repo, action, opts, inmemory=False)
869 return _dorebase(ui, repo, action, opts, inmemory=False)
870 else:
870 else:
871 return _dorebase(ui, repo, action, opts)
871 return _dorebase(ui, repo, action, opts)
872
872
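# A minimal sketch (not part of the original source) of how the flags handled
# above combine on the command line; the revision identifiers are hypothetical:
#
#   hg rebase -s 5c1a -d default --dry-run   # runs in memory, repository unchanged
#   hg rebase -s 5c1a -d default --confirm   # in-memory run, then a prompt to apply
#   hg rebase --stop                         # roughly: stop here, keep what was rebased
#
# As the checks above enforce, --dry-run and --confirm cannot be combined with
# each other or with --abort/--stop/--continue.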
873 def _dryrunrebase(ui, repo, action, opts):
873 def _dryrunrebase(ui, repo, action, opts):
874 rbsrt = rebaseruntime(repo, ui, inmemory=True, opts=opts)
874 rbsrt = rebaseruntime(repo, ui, inmemory=True, opts=opts)
875 confirm = opts.get('confirm')
875 confirm = opts.get('confirm')
876 if confirm:
876 if confirm:
877 ui.status(_('starting in-memory rebase\n'))
877 ui.status(_('starting in-memory rebase\n'))
878 else:
878 else:
879 ui.status(_('starting dry-run rebase; repository will not be '
879 ui.status(_('starting dry-run rebase; repository will not be '
880 'changed\n'))
880 'changed\n'))
881 with repo.wlock(), repo.lock():
881 with repo.wlock(), repo.lock():
882 needsabort = True
882 needsabort = True
883 try:
883 try:
884 overrides = {('rebase', 'singletransaction'): True}
884 overrides = {('rebase', 'singletransaction'): True}
885 with ui.configoverride(overrides, 'rebase'):
885 with ui.configoverride(overrides, 'rebase'):
886 _origrebase(ui, repo, action, opts, rbsrt, inmemory=True,
886 _origrebase(ui, repo, action, opts, rbsrt, inmemory=True,
887 leaveunfinished=True)
887 leaveunfinished=True)
888 except error.InMemoryMergeConflictsError:
888 except error.InMemoryMergeConflictsError:
889 ui.status(_('hit a merge conflict\n'))
889 ui.status(_('hit a merge conflict\n'))
890 return 1
890 return 1
891 else:
891 else:
892 if confirm:
892 if confirm:
893 ui.status(_('rebase completed successfully\n'))
893 ui.status(_('rebase completed successfully\n'))
894 if not ui.promptchoice(_(b'apply changes (yn)?'
894 if not ui.promptchoice(_(b'apply changes (yn)?'
895 b'$$ &Yes $$ &No')):
895 b'$$ &Yes $$ &No')):
896 # finish unfinished rebase
896 # finish unfinished rebase
897 rbsrt._finishrebase()
897 rbsrt._finishrebase()
898 else:
898 else:
899 rbsrt._prepareabortorcontinue(isabort=True, backup=False,
899 rbsrt._prepareabortorcontinue(isabort=True, backup=False,
900 suppwarns=True)
900 suppwarns=True)
901 needsabort = False
901 needsabort = False
902 else:
902 else:
903 ui.status(_('dry-run rebase completed successfully; run without'
903 ui.status(_('dry-run rebase completed successfully; run without'
904 ' -n/--dry-run to perform this rebase\n'))
904 ' -n/--dry-run to perform this rebase\n'))
905 return 0
905 return 0
906 finally:
906 finally:
907 if needsabort:
907 if needsabort:
908 # no need to store backup in case of dryrun
908 # no need to store backup in case of dryrun
909 rbsrt._prepareabortorcontinue(isabort=True, backup=False,
909 rbsrt._prepareabortorcontinue(isabort=True, backup=False,
910 suppwarns=True)
910 suppwarns=True)
911
911
912 def _dorebase(ui, repo, action, opts, inmemory=False):
912 def _dorebase(ui, repo, action, opts, inmemory=False):
913 rbsrt = rebaseruntime(repo, ui, inmemory, opts)
913 rbsrt = rebaseruntime(repo, ui, inmemory, opts)
914 return _origrebase(ui, repo, action, opts, rbsrt, inmemory=inmemory)
914 return _origrebase(ui, repo, action, opts, rbsrt, inmemory=inmemory)
915
915
916 def _origrebase(ui, repo, action, opts, rbsrt, inmemory=False,
916 def _origrebase(ui, repo, action, opts, rbsrt, inmemory=False,
917 leaveunfinished=False):
917 leaveunfinished=False):
918 assert action != 'stop'
918 assert action != 'stop'
919 with repo.wlock(), repo.lock():
919 with repo.wlock(), repo.lock():
920 # Validate input and define rebasing points
920 # Validate input and define rebasing points
921 destf = opts.get('dest', None)
921 destf = opts.get('dest', None)
922 srcf = opts.get('source', None)
922 srcf = opts.get('source', None)
923 basef = opts.get('base', None)
923 basef = opts.get('base', None)
924 revf = opts.get('rev', [])
924 revf = opts.get('rev', [])
925 # search default destination in this space
925 # search default destination in this space
926 # used in the 'hg pull --rebase' case, see issue 5214.
926 # used in the 'hg pull --rebase' case, see issue 5214.
927 destspace = opts.get('_destspace')
927 destspace = opts.get('_destspace')
928 if opts.get('interactive'):
928 if opts.get('interactive'):
929 try:
929 try:
930 if extensions.find('histedit'):
930 if extensions.find('histedit'):
931 enablehistedit = ''
931 enablehistedit = ''
932 except KeyError:
932 except KeyError:
933 enablehistedit = " --config extensions.histedit="
933 enablehistedit = " --config extensions.histedit="
934 help = "hg%s help -e histedit" % enablehistedit
934 help = "hg%s help -e histedit" % enablehistedit
935 msg = _("interactive history editing is supported by the "
935 msg = _("interactive history editing is supported by the "
936 "'histedit' extension (see \"%s\")") % help
936 "'histedit' extension (see \"%s\")") % help
937 raise error.Abort(msg)
937 raise error.Abort(msg)
938
938
939 if rbsrt.collapsemsg and not rbsrt.collapsef:
939 if rbsrt.collapsemsg and not rbsrt.collapsef:
940 raise error.Abort(
940 raise error.Abort(
941 _('message can only be specified with collapse'))
941 _('message can only be specified with collapse'))
942
942
943 if action:
943 if action:
944 if rbsrt.collapsef:
944 if rbsrt.collapsef:
945 raise error.Abort(
945 raise error.Abort(
946 _('cannot use collapse with continue or abort'))
946 _('cannot use collapse with continue or abort'))
947 if srcf or basef or destf:
947 if srcf or basef or destf:
948 raise error.Abort(
948 raise error.Abort(
949 _('abort and continue do not allow specifying revisions'))
949 _('abort and continue do not allow specifying revisions'))
950 if action == 'abort' and opts.get('tool', False):
950 if action == 'abort' and opts.get('tool', False):
951 ui.warn(_('tool option will be ignored\n'))
951 ui.warn(_('tool option will be ignored\n'))
952 if action == 'continue':
952 if action == 'continue':
953 ms = mergemod.mergestate.read(repo)
953 ms = mergemod.mergestate.read(repo)
954 mergeutil.checkunresolved(ms)
954 mergeutil.checkunresolved(ms)
955
955
956 retcode = rbsrt._prepareabortorcontinue(isabort=(action == 'abort'))
956 retcode = rbsrt._prepareabortorcontinue(isabort=(action == 'abort'))
957 if retcode is not None:
957 if retcode is not None:
958 return retcode
958 return retcode
959 else:
959 else:
960 destmap = _definedestmap(ui, repo, inmemory, destf, srcf, basef,
960 destmap = _definedestmap(ui, repo, inmemory, destf, srcf, basef,
961 revf, destspace=destspace)
961 revf, destspace=destspace)
962 retcode = rbsrt._preparenewrebase(destmap)
962 retcode = rbsrt._preparenewrebase(destmap)
963 if retcode is not None:
963 if retcode is not None:
964 return retcode
964 return retcode
965 storecollapsemsg(repo, rbsrt.collapsemsg)
965 storecollapsemsg(repo, rbsrt.collapsemsg)
966
966
967 tr = None
967 tr = None
968
968
969 singletr = ui.configbool('rebase', 'singletransaction')
969 singletr = ui.configbool('rebase', 'singletransaction')
970 if singletr:
970 if singletr:
971 tr = repo.transaction('rebase')
971 tr = repo.transaction('rebase')
972
972
973 # If `rebase.singletransaction` is enabled, wrap the entire operation in
973 # If `rebase.singletransaction` is enabled, wrap the entire operation in
974 # one transaction here. Otherwise, transactions are obtained when
974 # one transaction here. Otherwise, transactions are obtained when
975 # committing each node, which is slower but allows partial success.
975 # committing each node, which is slower but allows partial success.
976 with util.acceptintervention(tr):
976 with util.acceptintervention(tr):
977 # Same logic for the dirstate guard, except we don't create one when
977 # Same logic for the dirstate guard, except we don't create one when
978 # rebasing in-memory (it's not needed).
978 # rebasing in-memory (it's not needed).
979 dsguard = None
979 dsguard = None
980 if singletr and not inmemory:
980 if singletr and not inmemory:
981 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
981 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
982 with util.acceptintervention(dsguard):
982 with util.acceptintervention(dsguard):
983 rbsrt._performrebase(tr)
983 rbsrt._performrebase(tr)
984 if not leaveunfinished:
984 if not leaveunfinished:
985 rbsrt._finishrebase()
985 rbsrt._finishrebase()
986
986
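# A hedged illustration of the configuration consulted by the code above; the
# section and key names mirror the ui.configbool() calls earlier in this file,
# and the values are examples only (hgrc syntax):
#
#   [rebase]
#   singletransaction = True        # wrap the whole rebase in one transaction
#   experimental.inmemory = True    # try rebasing without touching the working copy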
987 def _definedestmap(ui, repo, inmemory, destf=None, srcf=None, basef=None,
987 def _definedestmap(ui, repo, inmemory, destf=None, srcf=None, basef=None,
988 revf=None, destspace=None):
988 revf=None, destspace=None):
989 """use revisions argument to define destmap {srcrev: destrev}"""
989 """use revisions argument to define destmap {srcrev: destrev}"""
990 if revf is None:
990 if revf is None:
991 revf = []
991 revf = []
992
992
993 # destspace is here to work around issues with `hg pull --rebase`; see
993 # destspace is here to work around issues with `hg pull --rebase`; see
994 # issue5214 for details
994 # issue5214 for details
995 if srcf and basef:
995 if srcf and basef:
996 raise error.Abort(_('cannot specify both a source and a base'))
996 raise error.Abort(_('cannot specify both a source and a base'))
997 if revf and basef:
997 if revf and basef:
998 raise error.Abort(_('cannot specify both a revision and a base'))
998 raise error.Abort(_('cannot specify both a revision and a base'))
999 if revf and srcf:
999 if revf and srcf:
1000 raise error.Abort(_('cannot specify both a revision and a source'))
1000 raise error.Abort(_('cannot specify both a revision and a source'))
1001
1001
1002 if not inmemory:
1002 if not inmemory:
1003 cmdutil.checkunfinished(repo)
1003 cmdutil.checkunfinished(repo)
1004 cmdutil.bailifchanged(repo)
1004 cmdutil.bailifchanged(repo)
1005
1005
1006 if ui.configbool('commands', 'rebase.requiredest') and not destf:
1006 if ui.configbool('commands', 'rebase.requiredest') and not destf:
1007 raise error.Abort(_('you must specify a destination'),
1007 raise error.Abort(_('you must specify a destination'),
1008 hint=_('use: hg rebase -d REV'))
1008 hint=_('use: hg rebase -d REV'))
1009
1009
1010 dest = None
1010 dest = None
1011
1011
1012 if revf:
1012 if revf:
1013 rebaseset = scmutil.revrange(repo, revf)
1013 rebaseset = scmutil.revrange(repo, revf)
1014 if not rebaseset:
1014 if not rebaseset:
1015 ui.status(_('empty "rev" revision set - nothing to rebase\n'))
1015 ui.status(_('empty "rev" revision set - nothing to rebase\n'))
1016 return None
1016 return None
1017 elif srcf:
1017 elif srcf:
1018 src = scmutil.revrange(repo, [srcf])
1018 src = scmutil.revrange(repo, [srcf])
1019 if not src:
1019 if not src:
1020 ui.status(_('empty "source" revision set - nothing to rebase\n'))
1020 ui.status(_('empty "source" revision set - nothing to rebase\n'))
1021 return None
1021 return None
1022 rebaseset = repo.revs('(%ld)::', src)
1022 rebaseset = repo.revs('(%ld)::', src)
1023 assert rebaseset
1023 assert rebaseset
1024 else:
1024 else:
1025 base = scmutil.revrange(repo, [basef or '.'])
1025 base = scmutil.revrange(repo, [basef or '.'])
1026 if not base:
1026 if not base:
1027 ui.status(_('empty "base" revision set - '
1027 ui.status(_('empty "base" revision set - '
1028 "can't compute rebase set\n"))
1028 "can't compute rebase set\n"))
1029 return None
1029 return None
1030 if destf:
1030 if destf:
1031 # --base does not support multiple destinations
1031 # --base does not support multiple destinations
1032 dest = scmutil.revsingle(repo, destf)
1032 dest = scmutil.revsingle(repo, destf)
1033 else:
1033 else:
1034 dest = repo[_destrebase(repo, base, destspace=destspace)]
1034 dest = repo[_destrebase(repo, base, destspace=destspace)]
1035 destf = bytes(dest)
1035 destf = bytes(dest)
1036
1036
1037 roots = [] # selected children of branching points
1037 roots = [] # selected children of branching points
1038 bpbase = {} # {branchingpoint: [origbase]}
1038 bpbase = {} # {branchingpoint: [origbase]}
1039 for b in base: # group bases by branching points
1039 for b in base: # group bases by branching points
1040 bp = repo.revs('ancestor(%d, %d)', b, dest.rev()).first()
1040 bp = repo.revs('ancestor(%d, %d)', b, dest.rev()).first()
1041 bpbase[bp] = bpbase.get(bp, []) + [b]
1041 bpbase[bp] = bpbase.get(bp, []) + [b]
1042 if None in bpbase:
1042 if None in bpbase:
1043 # emulate the old behavior, showing "nothing to rebase" (a better
1043 # emulate the old behavior, showing "nothing to rebase" (a better
1044 # behavior may be to abort with a "cannot find branching point" error)
1044 # behavior may be to abort with a "cannot find branching point" error)
1045 bpbase.clear()
1045 bpbase.clear()
1046 for bp, bs in bpbase.iteritems(): # calculate roots
1046 for bp, bs in bpbase.iteritems(): # calculate roots
1047 roots += list(repo.revs('children(%d) & ancestors(%ld)', bp, bs))
1047 roots += list(repo.revs('children(%d) & ancestors(%ld)', bp, bs))
1048
1048
1049 rebaseset = repo.revs('%ld::', roots)
1049 rebaseset = repo.revs('%ld::', roots)
1050
1050
1051 if not rebaseset:
1051 if not rebaseset:
1052 # transform to list because smartsets are not comparable to
1052 # transform to list because smartsets are not comparable to
1053 # lists. This should be improved to honor the laziness of
1053 # lists. This should be improved to honor the laziness of
1054 # smartsets.
1054 # smartsets.
1055 if list(base) == [dest.rev()]:
1055 if list(base) == [dest.rev()]:
1056 if basef:
1056 if basef:
1057 ui.status(_('nothing to rebase - %s is both "base"'
1057 ui.status(_('nothing to rebase - %s is both "base"'
1058 ' and destination\n') % dest)
1058 ' and destination\n') % dest)
1059 else:
1059 else:
1060 ui.status(_('nothing to rebase - working directory '
1060 ui.status(_('nothing to rebase - working directory '
1061 'parent is also destination\n'))
1061 'parent is also destination\n'))
1062 elif not repo.revs('%ld - ::%d', base, dest.rev()):
1062 elif not repo.revs('%ld - ::%d', base, dest.rev()):
1063 if basef:
1063 if basef:
1064 ui.status(_('nothing to rebase - "base" %s is '
1064 ui.status(_('nothing to rebase - "base" %s is '
1065 'already an ancestor of destination '
1065 'already an ancestor of destination '
1066 '%s\n') %
1066 '%s\n') %
1067 ('+'.join(bytes(repo[r]) for r in base),
1067 ('+'.join(bytes(repo[r]) for r in base),
1068 dest))
1068 dest))
1069 else:
1069 else:
1070 ui.status(_('nothing to rebase - working '
1070 ui.status(_('nothing to rebase - working '
1071 'directory parent is already an '
1071 'directory parent is already an '
1072 'ancestor of destination %s\n') % dest)
1072 'ancestor of destination %s\n') % dest)
1073 else: # can it happen?
1073 else: # can it happen?
1074 ui.status(_('nothing to rebase from %s to %s\n') %
1074 ui.status(_('nothing to rebase from %s to %s\n') %
1075 ('+'.join(bytes(repo[r]) for r in base), dest))
1075 ('+'.join(bytes(repo[r]) for r in base), dest))
1076 return None
1076 return None
1077
1077
1078 rebasingwcp = repo['.'].rev() in rebaseset
1078 rebasingwcp = repo['.'].rev() in rebaseset
1079 ui.log("rebase", "", rebase_rebasing_wcp=rebasingwcp)
1079 ui.log("rebase", "", rebase_rebasing_wcp=rebasingwcp)
1080 if inmemory and rebasingwcp:
1080 if inmemory and rebasingwcp:
1081 # Check these since we did not before.
1081 # Check these since we did not before.
1082 cmdutil.checkunfinished(repo)
1082 cmdutil.checkunfinished(repo)
1083 cmdutil.bailifchanged(repo)
1083 cmdutil.bailifchanged(repo)
1084
1084
1085 if not destf:
1085 if not destf:
1086 dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
1086 dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
1087 destf = bytes(dest)
1087 destf = bytes(dest)
1088
1088
1089 allsrc = revsetlang.formatspec('%ld', rebaseset)
1089 allsrc = revsetlang.formatspec('%ld', rebaseset)
1090 alias = {'ALLSRC': allsrc}
1090 alias = {'ALLSRC': allsrc}
1091
1091
1092 if dest is None:
1092 if dest is None:
1093 try:
1093 try:
1094 # fast path: try to resolve dest without SRC alias
1094 # fast path: try to resolve dest without SRC alias
1095 dest = scmutil.revsingle(repo, destf, localalias=alias)
1095 dest = scmutil.revsingle(repo, destf, localalias=alias)
1096 except error.RepoLookupError:
1096 except error.RepoLookupError:
1097 # multi-dest path: resolve dest for each SRC separately
1097 # multi-dest path: resolve dest for each SRC separately
1098 destmap = {}
1098 destmap = {}
1099 for r in rebaseset:
1099 for r in rebaseset:
1100 alias['SRC'] = revsetlang.formatspec('%d', r)
1100 alias['SRC'] = revsetlang.formatspec('%d', r)
1101 # use repo.anyrevs instead of scmutil.revsingle because we
1101 # use repo.anyrevs instead of scmutil.revsingle because we
1102 # don't want to abort if destset is empty.
1102 # don't want to abort if destset is empty.
1103 destset = repo.anyrevs([destf], user=True, localalias=alias)
1103 destset = repo.anyrevs([destf], user=True, localalias=alias)
1104 size = len(destset)
1104 size = len(destset)
1105 if size == 1:
1105 if size == 1:
1106 destmap[r] = destset.first()
1106 destmap[r] = destset.first()
1107 elif size == 0:
1107 elif size == 0:
1108 ui.note(_('skipping %s - empty destination\n') % repo[r])
1108 ui.note(_('skipping %s - empty destination\n') % repo[r])
1109 else:
1109 else:
1110 raise error.Abort(_('rebase destination for %s is not '
1110 raise error.Abort(_('rebase destination for %s is not '
1111 'unique') % repo[r])
1111 'unique') % repo[r])
1112
1112
1113 if dest is not None:
1113 if dest is not None:
1114 # single-dest case: assign dest to each rev in rebaseset
1114 # single-dest case: assign dest to each rev in rebaseset
1115 destrev = dest.rev()
1115 destrev = dest.rev()
1116 destmap = {r: destrev for r in rebaseset} # {srcrev: destrev}
1116 destmap = {r: destrev for r in rebaseset} # {srcrev: destrev}
1117
1117
1118 if not destmap:
1118 if not destmap:
1119 ui.status(_('nothing to rebase - empty destination\n'))
1119 ui.status(_('nothing to rebase - empty destination\n'))
1120 return None
1120 return None
1121
1121
1122 return destmap
1122 return destmap
1123
1123
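# An illustrative sketch of the {srcrev: destrev} mapping this function returns;
# the revision numbers are hypothetical:
#
#   hg rebase -r '5::7' -d 2    ->  destmap == {5: 2, 6: 2, 7: 2}
#
# When the destination revset uses the SRC alias (the except branch above), each
# source revision may resolve to a different destination revision.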
1124 def externalparent(repo, state, destancestors):
1124 def externalparent(repo, state, destancestors):
1125 """Return the revision that should be used as the second parent
1125 """Return the revision that should be used as the second parent
1126 when the revisions in state are collapsed on top of destancestors.
1126 when the revisions in state are collapsed on top of destancestors.
1127 Abort if there is more than one parent.
1127 Abort if there is more than one parent.
1128 """
1128 """
1129 parents = set()
1129 parents = set()
1130 source = min(state)
1130 source = min(state)
1131 for rev in state:
1131 for rev in state:
1132 if rev == source:
1132 if rev == source:
1133 continue
1133 continue
1134 for p in repo[rev].parents():
1134 for p in repo[rev].parents():
1135 if (p.rev() not in state
1135 if (p.rev() not in state
1136 and p.rev() not in destancestors):
1136 and p.rev() not in destancestors):
1137 parents.add(p.rev())
1137 parents.add(p.rev())
1138 if not parents:
1138 if not parents:
1139 return nullrev
1139 return nullrev
1140 if len(parents) == 1:
1140 if len(parents) == 1:
1141 return parents.pop()
1141 return parents.pop()
1142 raise error.Abort(_('unable to collapse on top of %d, there is more '
1142 raise error.Abort(_('unable to collapse on top of %d, there is more '
1143 'than one external parent: %s') %
1143 'than one external parent: %s') %
1144 (max(destancestors),
1144 (max(destancestors),
1145 ', '.join("%d" % p for p in sorted(parents))))
1145 ', '.join("%d" % p for p in sorted(parents))))
1146
1146
1147 def commitmemorynode(repo, p1, p2, wctx, editor, extra, user, date, commitmsg):
1147 def commitmemorynode(repo, p1, p2, wctx, editor, extra, user, date, commitmsg):
1148 '''Commit the memory changes with parents p1 and p2.
1148 '''Commit the memory changes with parents p1 and p2.
1149 Return node of committed revision.'''
1149 Return node of committed revision.'''
1150 # Replicates the empty check in ``repo.commit``.
1150 # Replicates the empty check in ``repo.commit``.
1151 if wctx.isempty() and not repo.ui.configbool('ui', 'allowemptycommit'):
1151 if wctx.isempty() and not repo.ui.configbool('ui', 'allowemptycommit'):
1152 return None
1152 return None
1153
1153
1154 # By convention, ``extra['branch']`` (set by extrafn) clobbers
1154 # By convention, ``extra['branch']`` (set by extrafn) clobbers
1155 # ``branch`` (used when passing ``--keepbranches``).
1155 # ``branch`` (used when passing ``--keepbranches``).
1156 branch = repo[p1].branch()
1156 branch = repo[p1].branch()
1157 if 'branch' in extra:
1157 if 'branch' in extra:
1158 branch = extra['branch']
1158 branch = extra['branch']
1159
1159
1160 memctx = wctx.tomemctx(commitmsg, parents=(p1, p2), date=date,
1160 memctx = wctx.tomemctx(commitmsg, parents=(p1, p2), date=date,
1161 extra=extra, user=user, branch=branch, editor=editor)
1161 extra=extra, user=user, branch=branch, editor=editor)
1162 commitres = repo.commitctx(memctx)
1162 commitres = repo.commitctx(memctx)
1163 wctx.clean() # Might be reused
1163 wctx.clean() # Might be reused
1164 return commitres
1164 return commitres
1165
1165
1166 def commitnode(repo, p1, p2, editor, extra, user, date, commitmsg):
1166 def commitnode(repo, p1, p2, editor, extra, user, date, commitmsg):
1167 '''Commit the wd changes with parents p1 and p2.
1167 '''Commit the wd changes with parents p1 and p2.
1168 Return node of committed revision.'''
1168 Return node of committed revision.'''
1169 dsguard = util.nullcontextmanager()
1169 dsguard = util.nullcontextmanager()
1170 if not repo.ui.configbool('rebase', 'singletransaction'):
1170 if not repo.ui.configbool('rebase', 'singletransaction'):
1171 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
1171 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
1172 with dsguard:
1172 with dsguard:
1173 repo.setparents(repo[p1].node(), repo[p2].node())
1173 repo.setparents(repo[p1].node(), repo[p2].node())
1174
1174
1175 # Commit might fail if unresolved files exist
1175 # Commit might fail if unresolved files exist
1176 newnode = repo.commit(text=commitmsg, user=user, date=date,
1176 newnode = repo.commit(text=commitmsg, user=user, date=date,
1177 extra=extra, editor=editor)
1177 extra=extra, editor=editor)
1178
1178
1179 repo.dirstate.setbranch(repo[newnode].branch())
1179 repo.dirstate.setbranch(repo[newnode].branch())
1180 return newnode
1180 return newnode
1181
1181
1182 def rebasenode(repo, rev, p1, base, collapse, dest, wctx):
1182 def rebasenode(repo, rev, p1, base, collapse, dest, wctx):
1183 'Rebase a single revision rev on top of p1 using base as merge ancestor'
1183 'Rebase a single revision rev on top of p1 using base as merge ancestor'
1184 # Merge phase
1184 # Merge phase
1185 # Update to destination and merge it with local
1185 # Update to destination and merge it with local
1186 if wctx.isinmemory():
1186 if wctx.isinmemory():
1187 wctx.setbase(repo[p1])
1187 wctx.setbase(repo[p1])
1188 else:
1188 else:
1189 if repo['.'].rev() != p1:
1189 if repo['.'].rev() != p1:
1190 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
1190 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
1191 mergemod.update(repo, p1, False, True)
1191 mergemod.update(repo, p1, branchmerge=False, force=True)
1192 else:
1192 else:
1193 repo.ui.debug(" already in destination\n")
1193 repo.ui.debug(" already in destination\n")
1194 # This is, alas, necessary to invalidate workingctx's manifest cache,
1194 # This is, alas, necessary to invalidate workingctx's manifest cache,
1195 # as well as other data we litter on it in other places.
1195 # as well as other data we litter on it in other places.
1196 wctx = repo[None]
1196 wctx = repo[None]
1197 repo.dirstate.write(repo.currenttransaction())
1197 repo.dirstate.write(repo.currenttransaction())
1198 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
1198 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
1199 if base is not None:
1199 if base is not None:
1200 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
1200 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
1201 # When collapsing in-place, the parent is the common ancestor, so we
1201 # When collapsing in-place, the parent is the common ancestor, so we
1202 # have to allow merging with it.
1202 # have to allow merging with it.
1203 stats = mergemod.update(repo, rev, True, True, base, collapse,
1203 stats = mergemod.update(repo, rev, branchmerge=True, force=True,
1204 ancestor=base, mergeancestor=collapse,
1204 labels=['dest', 'source'], wc=wctx)
1205 labels=['dest', 'source'], wc=wctx)
1205 if collapse:
1206 if collapse:
1206 copies.duplicatecopies(repo, wctx, rev, dest)
1207 copies.duplicatecopies(repo, wctx, rev, dest)
1207 else:
1208 else:
1208 # If we're not using --collapse, we need to
1209 # If we're not using --collapse, we need to
1209 # duplicate copies between the revision we're
1210 # duplicate copies between the revision we're
1210 # rebasing and its first parent, but *not*
1211 # rebasing and its first parent, but *not*
1211 # duplicate any copies that have already been
1212 # duplicate any copies that have already been
1212 # performed in the destination.
1213 # performed in the destination.
1213 p1rev = repo[rev].p1().rev()
1214 p1rev = repo[rev].p1().rev()
1214 copies.duplicatecopies(repo, wctx, rev, p1rev, skiprev=dest)
1215 copies.duplicatecopies(repo, wctx, rev, p1rev, skiprev=dest)
1215 return stats
1216 return stats
1216
1217
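# The update() calls in rebasenode() above show the clarification this change
# applies to mergemod.update() call sites: positional arguments are spelled out
# as keywords, with no change in behavior. For the merge step, the two forms are:
#
#   mergemod.update(repo, rev, True, True, base, collapse,
#                   labels=['dest', 'source'], wc=wctx)
#   mergemod.update(repo, rev, branchmerge=True, force=True,
#                   ancestor=base, mergeancestor=collapse,
#                   labels=['dest', 'source'], wc=wctx)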
1217 def adjustdest(repo, rev, destmap, state, skipped):
1218 def adjustdest(repo, rev, destmap, state, skipped):
1218 """adjust rebase destination given the current rebase state
1219 """adjust rebase destination given the current rebase state
1219
1220
1220 rev is what is being rebased. Return a list of two revs, which are the
1221 rev is what is being rebased. Return a list of two revs, which are the
1221 adjusted destinations for rev's p1 and p2, respectively. If a parent is
1222 adjusted destinations for rev's p1 and p2, respectively. If a parent is
1222 nullrev, return dest without adjustment for it.
1223 nullrev, return dest without adjustment for it.
1223
1224
1224 For example, when rebasing B+E to F and C to G, rebase will first move B
1225 For example, when rebasing B+E to F and C to G, rebase will first move B
1225 to B1, and E's destination will be adjusted from F to B1.
1226 to B1, and E's destination will be adjusted from F to B1.
1226
1227
1227 B1 <- written during rebasing B
1228 B1 <- written during rebasing B
1228 |
1229 |
1229 F <- original destination of B, E
1230 F <- original destination of B, E
1230 |
1231 |
1231 | E <- rev, which is being rebased
1232 | E <- rev, which is being rebased
1232 | |
1233 | |
1233 | D <- prev, one parent of rev being checked
1234 | D <- prev, one parent of rev being checked
1234 | |
1235 | |
1235 | x <- skipped, ex. no successor or successor in (::dest)
1236 | x <- skipped, ex. no successor or successor in (::dest)
1236 | |
1237 | |
1237 | C <- rebased as C', different destination
1238 | C <- rebased as C', different destination
1238 | |
1239 | |
1239 | B <- rebased as B1 C'
1240 | B <- rebased as B1 C'
1240 |/ |
1241 |/ |
1241 A G <- destination of C, different
1242 A G <- destination of C, different
1242
1243
1243 Another example involves a merge changeset: for rebase -r C+G+H -d K, rebase
1244 Another example involves a merge changeset: for rebase -r C+G+H -d K, rebase
1244 will first move C to C1 and G to G1, and when it's checking H, the adjusted
1245 will first move C to C1 and G to G1, and when it's checking H, the adjusted
1245 destinations will be [C1, G1].
1246 destinations will be [C1, G1].
1246
1247
1247 H C1 G1
1248 H C1 G1
1248 /| | /
1249 /| | /
1249 F G |/
1250 F G |/
1250 K | | -> K
1251 K | | -> K
1251 | C D |
1252 | C D |
1252 | |/ |
1253 | |/ |
1253 | B | ...
1254 | B | ...
1254 |/ |/
1255 |/ |/
1255 A A
1256 A A
1256
1257
1257 Besides, adjust dest according to existing rebase information. For example,
1258 Besides, adjust dest according to existing rebase information. For example,
1258
1259
1259 B C D B needs to be rebased on top of C, C needs to be rebased on top
1260 B C D B needs to be rebased on top of C, C needs to be rebased on top
1260 \|/ of D. We will rebase C first.
1261 \|/ of D. We will rebase C first.
1261 A
1262 A
1262
1263
1263 C' After rebasing C, when considering B's destination, use C'
1264 C' After rebasing C, when considering B's destination, use C'
1264 | instead of the original C.
1265 | instead of the original C.
1265 B D
1266 B D
1266 \ /
1267 \ /
1267 A
1268 A
1268 """
1269 """
1269 # pick already rebased revs with same dest from state as interesting source
1270 # pick already rebased revs with same dest from state as interesting source
1270 dest = destmap[rev]
1271 dest = destmap[rev]
1271 source = [s for s, d in state.items()
1272 source = [s for s, d in state.items()
1272 if d > 0 and destmap[s] == dest and s not in skipped]
1273 if d > 0 and destmap[s] == dest and s not in skipped]
1273
1274
1274 result = []
1275 result = []
1275 for prev in repo.changelog.parentrevs(rev):
1276 for prev in repo.changelog.parentrevs(rev):
1276 adjusted = dest
1277 adjusted = dest
1277 if prev != nullrev:
1278 if prev != nullrev:
1278 candidate = repo.revs('max(%ld and (::%d))', source, prev).first()
1279 candidate = repo.revs('max(%ld and (::%d))', source, prev).first()
1279 if candidate is not None:
1280 if candidate is not None:
1280 adjusted = state[candidate]
1281 adjusted = state[candidate]
1281 if adjusted == dest and dest in state:
1282 if adjusted == dest and dest in state:
1282 adjusted = state[dest]
1283 adjusted = state[dest]
1283 if adjusted == revtodo:
1284 if adjusted == revtodo:
1284 # sortsource should produce an order that makes this impossible
1285 # sortsource should produce an order that makes this impossible
1285 raise error.ProgrammingError(
1286 raise error.ProgrammingError(
1286 'rev %d should be rebased already at this time' % dest)
1287 'rev %d should be rebased already at this time' % dest)
1287 result.append(adjusted)
1288 result.append(adjusted)
1288 return result
1289 return result
1289
1290
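# A small worked example of adjustdest(), following the first scenario in its
# docstring (letters stand for revisions, not real rev numbers): while rebasing
# B+E to F and C to G, once B has been rebased to B1, a call such as
# adjustdest(repo, E, destmap, state, skipped) would return roughly [B1, F]:
# B1 for E's first parent, because B1 is the already-rebased revision reachable
# from it, and the unadjusted destination F for the null second parent.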
1290 def _checkobsrebase(repo, ui, rebaseobsrevs, rebaseobsskipped):
1291 def _checkobsrebase(repo, ui, rebaseobsrevs, rebaseobsskipped):
1291 """
1292 """
1292 Abort if the rebase will create divergence or is a no-op because of markers
1293 Abort if the rebase will create divergence or is a no-op because of markers
1293
1294
1294 `rebaseobsrevs`: set of obsolete revisions in source
1295 `rebaseobsrevs`: set of obsolete revisions in source
1295 `rebaseobsskipped`: set of revisions from source skipped because they have
1296 `rebaseobsskipped`: set of revisions from source skipped because they have
1296 successors in destination or no non-obsolete successor.
1297 successors in destination or no non-obsolete successor.
1297 """
1298 """
1298 # Obsolete node with successors not in dest leads to divergence
1299 # Obsolete node with successors not in dest leads to divergence
1299 divergenceok = ui.configbool('experimental',
1300 divergenceok = ui.configbool('experimental',
1300 'evolution.allowdivergence')
1301 'evolution.allowdivergence')
1301 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
1302 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
1302
1303
1303 if divergencebasecandidates and not divergenceok:
1304 if divergencebasecandidates and not divergenceok:
1304 divhashes = (bytes(repo[r])
1305 divhashes = (bytes(repo[r])
1305 for r in divergencebasecandidates)
1306 for r in divergencebasecandidates)
1306 msg = _("this rebase will cause "
1307 msg = _("this rebase will cause "
1307 "divergences from: %s")
1308 "divergences from: %s")
1308 h = _("to force the rebase please set "
1309 h = _("to force the rebase please set "
1309 "experimental.evolution.allowdivergence=True")
1310 "experimental.evolution.allowdivergence=True")
1310 raise error.Abort(msg % (",".join(divhashes),), hint=h)
1311 raise error.Abort(msg % (",".join(divhashes),), hint=h)
1311
1312
1312 def successorrevs(unfi, rev):
1313 def successorrevs(unfi, rev):
1313 """yield revision numbers for successors of rev"""
1314 """yield revision numbers for successors of rev"""
1314 assert unfi.filtername is None
1315 assert unfi.filtername is None
1315 nodemap = unfi.changelog.nodemap
1316 nodemap = unfi.changelog.nodemap
1316 for s in obsutil.allsuccessors(unfi.obsstore, [unfi[rev].node()]):
1317 for s in obsutil.allsuccessors(unfi.obsstore, [unfi[rev].node()]):
1317 if s in nodemap:
1318 if s in nodemap:
1318 yield nodemap[s]
1319 yield nodemap[s]
1319
1320
1320 def defineparents(repo, rev, destmap, state, skipped, obsskipped):
1321 def defineparents(repo, rev, destmap, state, skipped, obsskipped):
1321 """Return new parents and optionally a merge base for rev being rebased
1322 """Return new parents and optionally a merge base for rev being rebased
1322
1323
1323 The destination specified by "dest" cannot always be used directly because
1324 The destination specified by "dest" cannot always be used directly because
1324 a previous rebase result could affect the destination. For example,
1325 a previous rebase result could affect the destination. For example,
1325
1326
1326 D E rebase -r C+D+E -d B
1327 D E rebase -r C+D+E -d B
1327 |/ C will be rebased to C'
1328 |/ C will be rebased to C'
1328 B C D's new destination will be C' instead of B
1329 B C D's new destination will be C' instead of B
1329 |/ E's new destination will be C' instead of B
1330 |/ E's new destination will be C' instead of B
1330 A
1331 A
1331
1332
1332 Finding the new parents of a merge is slightly more complicated. See the comment
1333 Finding the new parents of a merge is slightly more complicated. See the comment
1333 block below.
1334 block below.
1334 """
1335 """
1335 # use unfiltered changelog since successorrevs may return filtered nodes
1336 # use unfiltered changelog since successorrevs may return filtered nodes
1336 assert repo.filtername is None
1337 assert repo.filtername is None
1337 cl = repo.changelog
1338 cl = repo.changelog
1338 isancestor = cl.isancestorrev
1339 isancestor = cl.isancestorrev
1339
1340
1340 dest = destmap[rev]
1341 dest = destmap[rev]
1341 oldps = repo.changelog.parentrevs(rev) # old parents
1342 oldps = repo.changelog.parentrevs(rev) # old parents
1342 newps = [nullrev, nullrev] # new parents
1343 newps = [nullrev, nullrev] # new parents
1343 dests = adjustdest(repo, rev, destmap, state, skipped)
1344 dests = adjustdest(repo, rev, destmap, state, skipped)
1344 bases = list(oldps) # merge base candidates, initially just old parents
1345 bases = list(oldps) # merge base candidates, initially just old parents
1345
1346
1346 if all(r == nullrev for r in oldps[1:]):
1347 if all(r == nullrev for r in oldps[1:]):
1347 # For non-merge changeset, just move p to adjusted dest as requested.
1348 # For non-merge changeset, just move p to adjusted dest as requested.
1348 newps[0] = dests[0]
1349 newps[0] = dests[0]
1349 else:
1350 else:
1350 # For a merge changeset, if we move p to dests[i] unconditionally, both
1351 # For a merge changeset, if we move p to dests[i] unconditionally, both
1351 # parents may change and the end result looks like "the merge loses a
1352 # parents may change and the end result looks like "the merge loses a
1352 # parent", which is a surprise. This is a limitation because "--dest" only
1353 # parent", which is a surprise. This is a limitation because "--dest" only
1353 # accepts one dest per src.
1354 # accepts one dest per src.
1354 #
1355 #
1355 # Therefore, only move p with reasonable conditions (in this order):
1356 # Therefore, only move p with reasonable conditions (in this order):
1356 # 1. use dest, if dest is a descendant of (p or one of p's successors)
1357 # 1. use dest, if dest is a descendant of (p or one of p's successors)
1357 # 2. use p's rebased result, if p is rebased (state[p] > 0)
1358 # 2. use p's rebased result, if p is rebased (state[p] > 0)
1358 #
1359 #
1359 # Comparing with adjustdest, the logic here does some additional work:
1360 # Comparing with adjustdest, the logic here does some additional work:
1360 # 1. decide which parents will not be moved towards dest
1361 # 1. decide which parents will not be moved towards dest
1361 # 2. if the above decision is "no", should a parent still be moved
1362 # 2. if the above decision is "no", should a parent still be moved
1362 # because it was rebased?
1363 # because it was rebased?
1363 #
1364 #
1364 # For example:
1365 # For example:
1365 #
1366 #
1366 # C # "rebase -r C -d D" is an error since none of the parents
1367 # C # "rebase -r C -d D" is an error since none of the parents
1367 # /| # can be moved. "rebase -r B+C -d D" will move C's parent
1368 # /| # can be moved. "rebase -r B+C -d D" will move C's parent
1368 # A B D # B (using rule "2."), since B will be rebased.
1369 # A B D # B (using rule "2."), since B will be rebased.
1369 #
1370 #
1370 # The loop tries not to rely on the fact that a Mercurial node has
1371 # The loop tries not to rely on the fact that a Mercurial node has
1371 # at most 2 parents.
1372 # at most 2 parents.
1372 for i, p in enumerate(oldps):
1373 for i, p in enumerate(oldps):
1373 np = p # new parent
1374 np = p # new parent
1374 if any(isancestor(x, dests[i]) for x in successorrevs(repo, p)):
1375 if any(isancestor(x, dests[i]) for x in successorrevs(repo, p)):
1375 np = dests[i]
1376 np = dests[i]
1376 elif p in state and state[p] > 0:
1377 elif p in state and state[p] > 0:
1377 np = state[p]
1378 np = state[p]
1378
1379
1379 # "bases" only record "special" merge bases that cannot be
1380 # "bases" only record "special" merge bases that cannot be
1380 # calculated from changelog DAG (i.e. isancestor(p, np) is False).
1381 # calculated from changelog DAG (i.e. isancestor(p, np) is False).
1381 # For example:
1382 # For example:
1382 #
1383 #
1383 # B' # rebase -s B -d D, when B was rebased to B'. dest for C
1384 # B' # rebase -s B -d D, when B was rebased to B'. dest for C
1384 # | C # is B', but merge base for C is B, instead of
1385 # | C # is B', but merge base for C is B, instead of
1385 # D | # changelog.ancestor(C, B') == A. If changelog DAG and
1386 # D | # changelog.ancestor(C, B') == A. If changelog DAG and
1386 # | B # "state" edges are merged (so there will be an edge from
1387 # | B # "state" edges are merged (so there will be an edge from
1387 # |/ # B to B'), the merge base is still ancestor(C, B') in
1388 # |/ # B to B'), the merge base is still ancestor(C, B') in
1388 # A # the merged graph.
1389 # A # the merged graph.
1389 #
1390 #
1390 # Also see https://bz.mercurial-scm.org/show_bug.cgi?id=1950#c8
1391 # Also see https://bz.mercurial-scm.org/show_bug.cgi?id=1950#c8
1391 # which uses "virtual null merge" to explain this situation.
1392 # which uses "virtual null merge" to explain this situation.
1392 if isancestor(p, np):
1393 if isancestor(p, np):
1393 bases[i] = nullrev
1394 bases[i] = nullrev
1394
1395
1395 # If one parent becomes an ancestor of the other, drop the ancestor
1396 # If one parent becomes an ancestor of the other, drop the ancestor
1396 for j, x in enumerate(newps[:i]):
1397 for j, x in enumerate(newps[:i]):
1397 if x == nullrev:
1398 if x == nullrev:
1398 continue
1399 continue
1399 if isancestor(np, x): # CASE-1
1400 if isancestor(np, x): # CASE-1
1400 np = nullrev
1401 np = nullrev
1401 elif isancestor(x, np): # CASE-2
1402 elif isancestor(x, np): # CASE-2
1402 newps[j] = np
1403 newps[j] = np
1403 np = nullrev
1404 np = nullrev
1404 # New parents forming an ancestor relationship does not
1405 # New parents forming an ancestor relationship does not
1405 # mean the old parents have a similar relationship. Do not
1406 # mean the old parents have a similar relationship. Do not
1406 # set bases[x] to nullrev.
1407 # set bases[x] to nullrev.
1407 bases[j], bases[i] = bases[i], bases[j]
1408 bases[j], bases[i] = bases[i], bases[j]
1408
1409
1409 newps[i] = np
1410 newps[i] = np
1410
1411
1411 # "rebasenode" updates to new p1, and the old p1 will be used as merge
1412 # "rebasenode" updates to new p1, and the old p1 will be used as merge
1412 # base. If only p2 changes, merging using unchanged p1 as merge base is
1413 # base. If only p2 changes, merging using unchanged p1 as merge base is
1413 # suboptimal. Therefore swap parents to make the merge sane.
1414 # suboptimal. Therefore swap parents to make the merge sane.
1414 if newps[1] != nullrev and oldps[0] == newps[0]:
1415 if newps[1] != nullrev and oldps[0] == newps[0]:
1415 assert len(newps) == 2 and len(oldps) == 2
1416 assert len(newps) == 2 and len(oldps) == 2
1416 newps.reverse()
1417 newps.reverse()
1417 bases.reverse()
1418 bases.reverse()
1418
1419
1419 # No parent change might be an error because we fail to make rev a
1420 # No parent change might be an error because we fail to make rev a
1420 # descendant of the requested dest. This can happen, for example:
1421 # descendant of the requested dest. This can happen, for example:
1421 #
1422 #
1422 # C # rebase -r C -d D
1423 # C # rebase -r C -d D
1423 # /| # None of A and B will be changed to D and rebase fails.
1424 # /| # None of A and B will be changed to D and rebase fails.
1424 # A B D
1425 # A B D
1425 if set(newps) == set(oldps) and dest not in newps:
1426 if set(newps) == set(oldps) and dest not in newps:
1426 raise error.Abort(_('cannot rebase %d:%s without '
1427 raise error.Abort(_('cannot rebase %d:%s without '
1427 'moving at least one of its parents')
1428 'moving at least one of its parents')
1428 % (rev, repo[rev]))
1429 % (rev, repo[rev]))
1429
1430
1430 # Source should not be an ancestor of dest. The check here guarantees it's
1431 # Source should not be an ancestor of dest. The check here guarantees it's
1431 # impossible. With multi-dest, the initial check does not cover complex
1432 # impossible. With multi-dest, the initial check does not cover complex
1432 # cases since we don't have abstractions to dry-run rebase cheaply.
1433 # cases since we don't have abstractions to dry-run rebase cheaply.
1433 if any(p != nullrev and isancestor(rev, p) for p in newps):
1434 if any(p != nullrev and isancestor(rev, p) for p in newps):
1434 raise error.Abort(_('source is ancestor of destination'))
1435 raise error.Abort(_('source is ancestor of destination'))
1435
1436
1436 # "rebasenode" updates to new p1, use the corresponding merge base.
1437 # "rebasenode" updates to new p1, use the corresponding merge base.
1437 if bases[0] != nullrev:
1438 if bases[0] != nullrev:
1438 base = bases[0]
1439 base = bases[0]
1439 else:
1440 else:
1440 base = None
1441 base = None
1441
1442
1442 # Check if the merge will contain unwanted changes. That may happen if
1443 # Check if the merge will contain unwanted changes. That may happen if
1443 # there are multiple special (non-changelog ancestor) merge bases, which
1444 # there are multiple special (non-changelog ancestor) merge bases, which
1444 # cannot be handled well by the 3-way merge algorithm. For example:
1445 # cannot be handled well by the 3-way merge algorithm. For example:
1445 #
1446 #
1446 # F
1447 # F
1447 # /|
1448 # /|
1448 # D E # "rebase -r D+E+F -d Z", when rebasing F, if "D" was chosen
1449 # D E # "rebase -r D+E+F -d Z", when rebasing F, if "D" was chosen
1449 # | | # as merge base, the difference between D and F will include
1450 # | | # as merge base, the difference between D and F will include
1450 # B C # C, so the rebased F will contain C surprisingly. If "E" was
1451 # B C # C, so the rebased F will contain C surprisingly. If "E" was
1451 # |/ # chosen, the rebased F will contain B.
1452 # |/ # chosen, the rebased F will contain B.
1452 # A Z
1453 # A Z
1453 #
1454 #
1454 # But our merge base candidates (D and E in the above case) could still be
1455 # But our merge base candidates (D and E in the above case) could still be
1455 # better than the default (ancestor(F, Z) == null). Therefore still
1456 # better than the default (ancestor(F, Z) == null). Therefore still
1456 # pick one (so choose p1 above).
1457 # pick one (so choose p1 above).
1457 if sum(1 for b in bases if b != nullrev) > 1:
1458 if sum(1 for b in bases if b != nullrev) > 1:
1458 unwanted = [None, None] # unwanted[i]: unwanted revs if choosing bases[i]
1459 unwanted = [None, None] # unwanted[i]: unwanted revs if choosing bases[i]
1459 for i, base in enumerate(bases):
1460 for i, base in enumerate(bases):
1460 if base == nullrev:
1461 if base == nullrev:
1461 continue
1462 continue
1462 # Revisions in the side (not chosen as merge base) branch that
1463 # Revisions in the side (not chosen as merge base) branch that
1463 # might contain "surprising" contents
1464 # might contain "surprising" contents
1464 siderevs = list(repo.revs('((%ld-%d) %% (%d+%d))',
1465 siderevs = list(repo.revs('((%ld-%d) %% (%d+%d))',
1465 bases, base, base, dest))
1466 bases, base, base, dest))
1466
1467
1467 # If those revisions are covered by rebaseset, the result is good.
1468 # If those revisions are covered by rebaseset, the result is good.
1468 # A merge in rebaseset would be considered to cover its ancestors.
1469 # A merge in rebaseset would be considered to cover its ancestors.
1469 if siderevs:
1470 if siderevs:
1470 rebaseset = [r for r, d in state.items()
1471 rebaseset = [r for r, d in state.items()
1471 if d > 0 and r not in obsskipped]
1472 if d > 0 and r not in obsskipped]
1472 merges = [r for r in rebaseset
1473 merges = [r for r in rebaseset
1473 if cl.parentrevs(r)[1] != nullrev]
1474 if cl.parentrevs(r)[1] != nullrev]
1474 unwanted[i] = list(repo.revs('%ld - (::%ld) - %ld',
1475 unwanted[i] = list(repo.revs('%ld - (::%ld) - %ld',
1475 siderevs, merges, rebaseset))
1476 siderevs, merges, rebaseset))
1476
1477
1477 # Choose a merge base that has a minimal number of unwanted revs.
1478 # Choose a merge base that has a minimal number of unwanted revs.
1478 l, i = min((len(revs), i)
1479 l, i = min((len(revs), i)
1479 for i, revs in enumerate(unwanted) if revs is not None)
1480 for i, revs in enumerate(unwanted) if revs is not None)
1480 base = bases[i]
1481 base = bases[i]
1481
1482
1482 # newps[0] should match merge base if possible. Currently, if newps[i]
1483 # newps[0] should match merge base if possible. Currently, if newps[i]
1483 # is nullrev, the only case is newps[i] and newps[j] (j < i), one is
1484 # is nullrev, the only case is newps[i] and newps[j] (j < i), one is
1484 # the other's ancestor. In that case, it's fine to not swap newps here.
1485 # the other's ancestor. In that case, it's fine to not swap newps here.
1485 # (see CASE-1 and CASE-2 above)
1486 # (see CASE-1 and CASE-2 above)
1486 if i != 0 and newps[i] != nullrev:
1487 if i != 0 and newps[i] != nullrev:
1487 newps[0], newps[i] = newps[i], newps[0]
1488 newps[0], newps[i] = newps[i], newps[0]
1488
1489
1489 # The merge will include unwanted revisions. Abort now. Revisit this if
1490 # The merge will include unwanted revisions. Abort now. Revisit this if
1490 # we have a more advanced merge algorithm that handles multiple bases.
1491 # we have a more advanced merge algorithm that handles multiple bases.
1491 if l > 0:
1492 if l > 0:
1492 unwanteddesc = _(' or ').join(
1493 unwanteddesc = _(' or ').join(
1493 (', '.join('%d:%s' % (r, repo[r]) for r in revs)
1494 (', '.join('%d:%s' % (r, repo[r]) for r in revs)
1494 for revs in unwanted if revs is not None))
1495 for revs in unwanted if revs is not None))
1495 raise error.Abort(
1496 raise error.Abort(
1496 _('rebasing %d:%s will include unwanted changes from %s')
1497 _('rebasing %d:%s will include unwanted changes from %s')
1497 % (rev, repo[rev], unwanteddesc))
1498 % (rev, repo[rev], unwanteddesc))
1498
1499
1499 repo.ui.debug(" future parents are %d and %d\n" % tuple(newps))
1500 repo.ui.debug(" future parents are %d and %d\n" % tuple(newps))
1500
1501
1501 return newps[0], newps[1], base
1502 return newps[0], newps[1], base
1502
1503
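# A hedged walk-through of defineparents() using its docstring's own example
# (symbolic revisions): with rebase -r C+D+E -d B, once C has been rebased to
# C', defineparents(repo, D, destmap, state, skipped, obsskipped) would return
# roughly (C', nullrev, C) -- the new p1 is C' because C was already rebased,
# p2 stays null for a non-merge changeset, and D's old parent C is kept as the
# merge base for rebasenode().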
1503 def isagitpatch(repo, patchname):
1504 def isagitpatch(repo, patchname):
1504 'Return true if the given patch is in git format'
1505 'Return true if the given patch is in git format'
1505 mqpatch = os.path.join(repo.mq.path, patchname)
1506 mqpatch = os.path.join(repo.mq.path, patchname)
1506 for line in patch.linereader(open(mqpatch, 'rb')):
1507 for line in patch.linereader(open(mqpatch, 'rb')):
1507 if line.startswith('diff --git'):
1508 if line.startswith('diff --git'):
1508 return True
1509 return True
1509 return False
1510 return False
1510
1511
1511 def updatemq(repo, state, skipped, **opts):
1512 def updatemq(repo, state, skipped, **opts):
1512 'Update rebased mq patches - finalize and then import them'
1513 'Update rebased mq patches - finalize and then import them'
1513 mqrebase = {}
1514 mqrebase = {}
1514 mq = repo.mq
1515 mq = repo.mq
1515 original_series = mq.fullseries[:]
1516 original_series = mq.fullseries[:]
1516 skippedpatches = set()
1517 skippedpatches = set()
1517
1518
1518 for p in mq.applied:
1519 for p in mq.applied:
1519 rev = repo[p.node].rev()
1520 rev = repo[p.node].rev()
1520 if rev in state:
1521 if rev in state:
1521 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
1522 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
1522 (rev, p.name))
1523 (rev, p.name))
1523 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
1524 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
1524 else:
1525 else:
1525 # Applied but not rebased, not sure this should happen
1526 # Applied but not rebased, not sure this should happen
1526 skippedpatches.add(p.name)
1527 skippedpatches.add(p.name)
1527
1528
1528 if mqrebase:
1529 if mqrebase:
1529 mq.finish(repo, mqrebase.keys())
1530 mq.finish(repo, mqrebase.keys())
1530
1531
1531 # We must start import from the newest revision
1532 # We must start import from the newest revision
1532 for rev in sorted(mqrebase, reverse=True):
1533 for rev in sorted(mqrebase, reverse=True):
1533 if rev not in skipped:
1534 if rev not in skipped:
1534 name, isgit = mqrebase[rev]
1535 name, isgit = mqrebase[rev]
1535 repo.ui.note(_('updating mq patch %s to %d:%s\n') %
1536 repo.ui.note(_('updating mq patch %s to %d:%s\n') %
1536 (name, state[rev], repo[state[rev]]))
1537 (name, state[rev], repo[state[rev]]))
1537 mq.qimport(repo, (), patchname=name, git=isgit,
1538 mq.qimport(repo, (), patchname=name, git=isgit,
1538 rev=["%d" % state[rev]])
1539 rev=["%d" % state[rev]])
1539 else:
1540 else:
1540 # Rebased and skipped
1541 # Rebased and skipped
1541 skippedpatches.add(mqrebase[rev][0])
1542 skippedpatches.add(mqrebase[rev][0])
1542
1543
1543 # Patches were either applied, rebased, and imported in
1544 # Patches were either applied, rebased, and imported in
1544 # order; applied and removed; or left unapplied. Discard the removed
1545 # order; applied and removed; or left unapplied. Discard the removed
1545 # ones while preserving the original series order and guards.
1546 # ones while preserving the original series order and guards.
1546 newseries = [s for s in original_series
1547 newseries = [s for s in original_series
1547 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
1548 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
1548 mq.fullseries[:] = newseries
1549 mq.fullseries[:] = newseries
1549 mq.seriesdirty = True
1550 mq.seriesdirty = True
1550 mq.savedirty()
1551 mq.savedirty()
1551
1552
1552 def storecollapsemsg(repo, collapsemsg):
1553 def storecollapsemsg(repo, collapsemsg):
1553 'Store the collapse message to allow recovery'
1554 'Store the collapse message to allow recovery'
1554 collapsemsg = collapsemsg or ''
1555 collapsemsg = collapsemsg or ''
1555 f = repo.vfs("last-message.txt", "w")
1556 f = repo.vfs("last-message.txt", "w")
1556 f.write("%s\n" % collapsemsg)
1557 f.write("%s\n" % collapsemsg)
1557 f.close()
1558 f.close()
1558
1559
1559 def clearcollapsemsg(repo):
1560 def clearcollapsemsg(repo):
1560 'Remove collapse message file'
1561 'Remove collapse message file'
1561 repo.vfs.unlinkpath("last-message.txt", ignoremissing=True)
1562 repo.vfs.unlinkpath("last-message.txt", ignoremissing=True)
1562
1563
1563 def restorecollapsemsg(repo, isabort):
1564 def restorecollapsemsg(repo, isabort):
1564 'Restore previously stored collapse message'
1565 'Restore previously stored collapse message'
1565 try:
1566 try:
1566 f = repo.vfs("last-message.txt")
1567 f = repo.vfs("last-message.txt")
1567 collapsemsg = f.readline().strip()
1568 collapsemsg = f.readline().strip()
1568 f.close()
1569 f.close()
1569 except IOError as err:
1570 except IOError as err:
1570 if err.errno != errno.ENOENT:
1571 if err.errno != errno.ENOENT:
1571 raise
1572 raise
1572 if isabort:
1573 if isabort:
1573 # Oh well, just abort like normal
1574 # Oh well, just abort like normal
1574 collapsemsg = ''
1575 collapsemsg = ''
1575 else:
1576 else:
1576 raise error.Abort(_('missing .hg/last-message.txt for rebase'))
1577 raise error.Abort(_('missing .hg/last-message.txt for rebase'))
1577 return collapsemsg
1578 return collapsemsg
1578
1579
1579 def clearstatus(repo):
1580 def clearstatus(repo):
1580 'Remove the status files'
1581 'Remove the status files'
1581 # Make sure the active transaction won't write the state file
1582 # Make sure the active transaction won't write the state file
1582 tr = repo.currenttransaction()
1583 tr = repo.currenttransaction()
1583 if tr:
1584 if tr:
1584 tr.removefilegenerator('rebasestate')
1585 tr.removefilegenerator('rebasestate')
1585 repo.vfs.unlinkpath("rebasestate", ignoremissing=True)
1586 repo.vfs.unlinkpath("rebasestate", ignoremissing=True)
1586
1587
1587 def needupdate(repo, state):
1588 def needupdate(repo, state):
1588 '''check whether we should `update --clean` away from a merge, or if
1589 '''check whether we should `update --clean` away from a merge, or if
1589 somehow the working dir got forcibly updated, e.g. by older hg'''
1590 somehow the working dir got forcibly updated, e.g. by older hg'''
1590 parents = [p.rev() for p in repo[None].parents()]
1591 parents = [p.rev() for p in repo[None].parents()]
1591
1592
1592 # Are we in a merge state at all?
1593 # Are we in a merge state at all?
1593 if len(parents) < 2:
1594 if len(parents) < 2:
1594 return False
1595 return False
1595
1596
1596 # We should be standing on the first as-of-yet unrebased commit.
1597 # We should be standing on the first as-of-yet unrebased commit.
1597 firstunrebased = min([old for old, new in state.iteritems()
1598 firstunrebased = min([old for old, new in state.iteritems()
1598 if new == nullrev])
1599 if new == nullrev])
1599 if firstunrebased in parents:
1600 if firstunrebased in parents:
1600 return True
1601 return True
1601
1602
1602 return False
1603 return False
1603
1604
1604 def abort(repo, originalwd, destmap, state, activebookmark=None, backup=True,
1605 def abort(repo, originalwd, destmap, state, activebookmark=None, backup=True,
1605 suppwarns=False):
1606 suppwarns=False):
1606 '''Restore the repository to its original state. Additional args:
1607 '''Restore the repository to its original state. Additional args:
1607
1608
1608 activebookmark: the name of the bookmark that should be active after the
1609 activebookmark: the name of the bookmark that should be active after the
1609 restore'''
1610 restore'''
1610
1611
1611 try:
1612 try:
1612 # If the first commits in the rebased set get skipped during the rebase,
1613 # If the first commits in the rebased set get skipped during the rebase,
1613 # their values within the state mapping will be the dest rev id. The
1614 # their values within the state mapping will be the dest rev id. The
1614 # rebased list must not contain the dest rev (issue4896)
1615 # rebased list must not contain the dest rev (issue4896)
1615 rebased = [s for r, s in state.items()
1616 rebased = [s for r, s in state.items()
1616 if s >= 0 and s != r and s != destmap[r]]
1617 if s >= 0 and s != r and s != destmap[r]]
1617 immutable = [d for d in rebased if not repo[d].mutable()]
1618 immutable = [d for d in rebased if not repo[d].mutable()]
1618 cleanup = True
1619 cleanup = True
1619 if immutable:
1620 if immutable:
1620 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1621 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1621 % ', '.join(bytes(repo[r]) for r in immutable),
1622 % ', '.join(bytes(repo[r]) for r in immutable),
1622 hint=_("see 'hg help phases' for details"))
1623 hint=_("see 'hg help phases' for details"))
1623 cleanup = False
1624 cleanup = False
1624
1625
1625 descendants = set()
1626 descendants = set()
1626 if rebased:
1627 if rebased:
1627 descendants = set(repo.changelog.descendants(rebased))
1628 descendants = set(repo.changelog.descendants(rebased))
1628 if descendants - set(rebased):
1629 if descendants - set(rebased):
1629 repo.ui.warn(_("warning: new changesets detected on destination "
1630 repo.ui.warn(_("warning: new changesets detected on destination "
1630 "branch, can't strip\n"))
1631 "branch, can't strip\n"))
1631 cleanup = False
1632 cleanup = False
1632
1633
1633 if cleanup:
1634 if cleanup:
1634 shouldupdate = False
1635 shouldupdate = False
1635 if rebased:
1636 if rebased:
1636 strippoints = [
1637 strippoints = [
1637 c.node() for c in repo.set('roots(%ld)', rebased)]
1638 c.node() for c in repo.set('roots(%ld)', rebased)]
1638
1639
1639 updateifonnodes = set(rebased)
1640 updateifonnodes = set(rebased)
1640 updateifonnodes.update(destmap.values())
1641 updateifonnodes.update(destmap.values())
1641 updateifonnodes.add(originalwd)
1642 updateifonnodes.add(originalwd)
1642 shouldupdate = repo['.'].rev() in updateifonnodes
1643 shouldupdate = repo['.'].rev() in updateifonnodes
1643
1644
1644 # Update away from the rebase if necessary
1645 # Update away from the rebase if necessary
1645 if shouldupdate or needupdate(repo, state):
1646 if shouldupdate or needupdate(repo, state):
1646 mergemod.update(repo, originalwd, False, True)
1647 mergemod.update(repo, originalwd, branchmerge=False, force=True)
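# A plain, non-merging checkout of originalwd: branchmerge=False with
# force=True is the `update --clean`-style move away from the interrupted
# rebase that needupdate() above refers to.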
1647
1648
1648 # Strip from the first rebased revision
1649 # Strip from the first rebased revision
1649 if rebased:
1650 if rebased:
1650 repair.strip(repo.ui, repo, strippoints, backup=backup)
1651 repair.strip(repo.ui, repo, strippoints, backup=backup)
1651
1652
1652 if activebookmark and activebookmark in repo._bookmarks:
1653 if activebookmark and activebookmark in repo._bookmarks:
1653 bookmarks.activate(repo, activebookmark)
1654 bookmarks.activate(repo, activebookmark)
1654
1655
1655 finally:
1656 finally:
1656 clearstatus(repo)
1657 clearstatus(repo)
1657 clearcollapsemsg(repo)
1658 clearcollapsemsg(repo)
1658 if not suppwarns:
1659 if not suppwarns:
1659 repo.ui.warn(_('rebase aborted\n'))
1660 repo.ui.warn(_('rebase aborted\n'))
1660 return 0
1661 return 0
1661
1662
1662 def sortsource(destmap):
1663 def sortsource(destmap):
1663 """yield source revisions in an order that we only rebase things once
1664 """yield source revisions in an order that we only rebase things once
1664
1665
1665 If source and destination overlap, we should filter out revisions
1666 If source and destination overlap, we should filter out revisions
1666 depending on other revisions which haven't been rebased yet.
1667 depending on other revisions which haven't been rebased yet.
1667
1668
1668 Yield a sorted list of revisions each time.
1669 Yield a sorted list of revisions each time.
1669
1670
1670 For example, when rebasing A onto B and B onto C, this function yields
1671 For example, when rebasing A onto B and B onto C, this function yields
1671 [B], then [A], indicating that B needs to be rebased first.
1672 [B], then [A], indicating that B needs to be rebased first.
1672
1673
1673 Raise if there is a cycle so the rebase is impossible.
1674 Raise if there is a cycle so the rebase is impossible.
1674 """
1675 """
1675 srcset = set(destmap)
1676 srcset = set(destmap)
1676 while srcset:
1677 while srcset:
1677 srclist = sorted(srcset)
1678 srclist = sorted(srcset)
1678 result = []
1679 result = []
1679 for r in srclist:
1680 for r in srclist:
1680 if destmap[r] not in srcset:
1681 if destmap[r] not in srcset:
1681 result.append(r)
1682 result.append(r)
1682 if not result:
1683 if not result:
1683 raise error.Abort(_('source and destination form a cycle'))
1684 raise error.Abort(_('source and destination form a cycle'))
1684 srcset -= set(result)
1685 srcset -= set(result)
1685 yield result
1686 yield result
1686
1687
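# A minimal sketch of how sortsource() batches its sources (hypothetical
# revision numbers; only the cycle case aborts):
#
#   destmap = {1: 2, 2: 3}      # rebase rev 1 onto rev 2, and rev 2 onto rev 3
#   list(sortsource(destmap))   # -> [[2], [1]]: rev 2 has to move first
#
#   destmap = {1: 2, 2: 1}      # 1 -> 2 and 2 -> 1 form a cycle
#   list(sortsource(destmap))   # raises error.Abort while being consumed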
1687 def buildstate(repo, destmap, collapse):
1688 def buildstate(repo, destmap, collapse):
1688 '''Define which revisions are going to be rebased and where
1689 '''Define which revisions are going to be rebased and where
1689
1690
1690 repo: repo
1691 repo: repo
1691 destmap: {srcrev: destrev}
1692 destmap: {srcrev: destrev}
1692 '''
1693 '''
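# Hypothetical example: destmap = {6: 3, 7: 3} asks for revisions 6 and 7 to
# be rebased onto revision 3; the returned state initially maps each of them
# to revtodo, unless the code below can already mark them as done.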
1693 rebaseset = destmap.keys()
1694 rebaseset = destmap.keys()
1694 originalwd = repo['.'].rev()
1695 originalwd = repo['.'].rev()
1695
1696
1696 # This check isn't strictly necessary, since mq detects commits over an
1697 # This check isn't strictly necessary, since mq detects commits over an
1697 # applied patch. But it prevents messing up the working directory when
1698 # applied patch. But it prevents messing up the working directory when
1698 # a partially completed rebase is blocked by mq.
1699 # a partially completed rebase is blocked by mq.
1699 if 'qtip' in repo.tags():
1700 if 'qtip' in repo.tags():
1700 mqapplied = set(repo[s.node].rev() for s in repo.mq.applied)
1701 mqapplied = set(repo[s.node].rev() for s in repo.mq.applied)
1701 if set(destmap.values()) & mqapplied:
1702 if set(destmap.values()) & mqapplied:
1702 raise error.Abort(_('cannot rebase onto an applied mq patch'))
1703 raise error.Abort(_('cannot rebase onto an applied mq patch'))
1703
1704
1704 # Get "cycle" error early by exhausting the generator.
1705 # Get "cycle" error early by exhausting the generator.
1705 sortedsrc = list(sortsource(destmap)) # a list of sorted revs
1706 sortedsrc = list(sortsource(destmap)) # a list of sorted revs
1706 if not sortedsrc:
1707 if not sortedsrc:
1707 raise error.Abort(_('no matching revisions'))
1708 raise error.Abort(_('no matching revisions'))
1708
1709
1709 # Only check the first batch of revisions to rebase not depending on other
1710 # Only check the first batch of revisions to rebase not depending on other
1710 # rebaseset. This means "source is ancestor of destination" for the second
1711 # rebaseset. This means "source is ancestor of destination" for the second
1711 # (and following) batches of revisions are not checked here. We rely on
1712 # (and following) batches of revisions are not checked here. We rely on
1712 # "defineparents" to do that check.
1713 # "defineparents" to do that check.
1713 roots = list(repo.set('roots(%ld)', sortedsrc[0]))
1714 roots = list(repo.set('roots(%ld)', sortedsrc[0]))
1714 if not roots:
1715 if not roots:
1715 raise error.Abort(_('no matching revisions'))
1716 raise error.Abort(_('no matching revisions'))
1716 def revof(r):
1717 def revof(r):
1717 return r.rev()
1718 return r.rev()
1718 roots = sorted(roots, key=revof)
1719 roots = sorted(roots, key=revof)
1719 state = dict.fromkeys(rebaseset, revtodo)
1720 state = dict.fromkeys(rebaseset, revtodo)
1720 emptyrebase = (len(sortedsrc) == 1)
1721 emptyrebase = (len(sortedsrc) == 1)
1721 for root in roots:
1722 for root in roots:
1722 dest = repo[destmap[root.rev()]]
1723 dest = repo[destmap[root.rev()]]
1723 commonbase = root.ancestor(dest)
1724 commonbase = root.ancestor(dest)
1724 if commonbase == root:
1725 if commonbase == root:
1725 raise error.Abort(_('source is ancestor of destination'))
1726 raise error.Abort(_('source is ancestor of destination'))
1726 if commonbase == dest:
1727 if commonbase == dest:
1727 wctx = repo[None]
1728 wctx = repo[None]
1728 if dest == wctx.p1():
1729 if dest == wctx.p1():
1729 # when rebasing to '.', it will use the current wd branch name
1730 # when rebasing to '.', it will use the current wd branch name
1730 samebranch = root.branch() == wctx.branch()
1731 samebranch = root.branch() == wctx.branch()
1731 else:
1732 else:
1732 samebranch = root.branch() == dest.branch()
1733 samebranch = root.branch() == dest.branch()
1733 if not collapse and samebranch and dest in root.parents():
1734 if not collapse and samebranch and dest in root.parents():
1734 # mark the revision as done by setting its new revision
1735 # mark the revision as done by setting its new revision
1735 # equal to its old (current) revision
1736 # equal to its old (current) revision
1736 state[root.rev()] = root.rev()
1737 state[root.rev()] = root.rev()
1737 repo.ui.debug('source is a child of destination\n')
1738 repo.ui.debug('source is a child of destination\n')
1738 continue
1739 continue
1739
1740
1740 emptyrebase = False
1741 emptyrebase = False
1741 repo.ui.debug('rebase onto %s starting from %s\n' % (dest, root))
1742 repo.ui.debug('rebase onto %s starting from %s\n' % (dest, root))
1742 if emptyrebase:
1743 if emptyrebase:
1743 return None
1744 return None
1744 for rev in sorted(state):
1745 for rev in sorted(state):
1745 parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
1746 parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
1746 # if all parents of this revision are done, then so is this revision
1747 # if all parents of this revision are done, then so is this revision
1747 if parents and all((state.get(p) == p for p in parents)):
1748 if parents and all((state.get(p) == p for p in parents)):
1748 state[rev] = rev
1749 state[rev] = rev
1749 return originalwd, destmap, state
1750 return originalwd, destmap, state
1750
1751
1751 def clearrebased(ui, repo, destmap, state, skipped, collapsedas=None,
1752 def clearrebased(ui, repo, destmap, state, skipped, collapsedas=None,
1752 keepf=False, fm=None, backup=True):
1753 keepf=False, fm=None, backup=True):
1753 """dispose of rebased revision at the end of the rebase
1754 """dispose of rebased revision at the end of the rebase
1754
1755
1755 If `collapsedas` is not None, the rebase was a collapse whose result is the
1756 If `collapsedas` is not None, the rebase was a collapse whose result is the
1756 `collapsedas` node.
1757 `collapsedas` node.
1757
1758
1758 If `keepf` is True, the rebase has --keep set and no nodes should be
1759 If `keepf` is True, the rebase has --keep set and no nodes should be
1759 removed (but bookmarks still need to be moved).
1760 removed (but bookmarks still need to be moved).
1760
1761
1761 If `backup` is False, no backup will be stored when stripping rebased
1762 If `backup` is False, no backup will be stored when stripping rebased
1762 revisions.
1763 revisions.
1763 """
1764 """
1764 tonode = repo.changelog.node
1765 tonode = repo.changelog.node
1765 replacements = {}
1766 replacements = {}
1766 moves = {}
1767 moves = {}
1767 stripcleanup = not obsolete.isenabled(repo, obsolete.createmarkersopt)
1768 stripcleanup = not obsolete.isenabled(repo, obsolete.createmarkersopt)
1768
1769
1769 collapsednodes = []
1770 collapsednodes = []
1770 for rev, newrev in sorted(state.items()):
1771 for rev, newrev in sorted(state.items()):
1771 if newrev >= 0 and newrev != rev:
1772 if newrev >= 0 and newrev != rev:
1772 oldnode = tonode(rev)
1773 oldnode = tonode(rev)
1773 newnode = collapsedas or tonode(newrev)
1774 newnode = collapsedas or tonode(newrev)
1774 moves[oldnode] = newnode
1775 moves[oldnode] = newnode
1775 if not keepf:
1776 if not keepf:
1776 succs = None
1777 succs = None
1777 if rev in skipped:
1778 if rev in skipped:
1778 if stripcleanup or not repo[rev].obsolete():
1779 if stripcleanup or not repo[rev].obsolete():
1779 succs = ()
1780 succs = ()
1780 elif collapsedas:
1781 elif collapsedas:
1781 collapsednodes.append(oldnode)
1782 collapsednodes.append(oldnode)
1782 else:
1783 else:
1783 succs = (newnode,)
1784 succs = (newnode,)
1784 if succs is not None:
1785 if succs is not None:
1785 replacements[(oldnode,)] = succs
1786 replacements[(oldnode,)] = succs
1786 if collapsednodes:
1787 if collapsednodes:
1787 replacements[tuple(collapsednodes)] = (collapsedas,)
1788 replacements[tuple(collapsednodes)] = (collapsedas,)
1788 scmutil.cleanupnodes(repo, replacements, 'rebase', moves, backup=backup)
1789 scmutil.cleanupnodes(repo, replacements, 'rebase', moves, backup=backup)
1789 if fm:
1790 if fm:
1790 hf = fm.hexfunc
1791 hf = fm.hexfunc
1791 fl = fm.formatlist
1792 fl = fm.formatlist
1792 fd = fm.formatdict
1793 fd = fm.formatdict
1793 changes = {}
1794 changes = {}
1794 for oldns, newn in replacements.iteritems():
1795 for oldns, newn in replacements.iteritems():
1795 for oldn in oldns:
1796 for oldn in oldns:
1796 changes[hf(oldn)] = fl([hf(n) for n in newn], name='node')
1797 changes[hf(oldn)] = fl([hf(n) for n in newn], name='node')
1797 nodechanges = fd(changes, key="oldnode", value="newnodes")
1798 nodechanges = fd(changes, key="oldnode", value="newnodes")
1798 fm.data(nodechanges=nodechanges)
1799 fm.data(nodechanges=nodechanges)
1799
1800
1800 def pullrebase(orig, ui, repo, *args, **opts):
1801 def pullrebase(orig, ui, repo, *args, **opts):
1801 'Call rebase after pull if the latter has been invoked with --rebase'
1802 'Call rebase after pull if the latter has been invoked with --rebase'
1802 ret = None
1803 ret = None
1803 if opts.get(r'rebase'):
1804 if opts.get(r'rebase'):
1804 if ui.configbool('commands', 'rebase.requiredest'):
1805 if ui.configbool('commands', 'rebase.requiredest'):
1805 msg = _('rebase destination required by configuration')
1806 msg = _('rebase destination required by configuration')
1806 hint = _('use hg pull followed by hg rebase -d DEST')
1807 hint = _('use hg pull followed by hg rebase -d DEST')
1807 raise error.Abort(msg, hint=hint)
1808 raise error.Abort(msg, hint=hint)
1808
1809
1809 with repo.wlock(), repo.lock():
1810 with repo.wlock(), repo.lock():
1810 if opts.get(r'update'):
1811 if opts.get(r'update'):
1811 del opts[r'update']
1812 del opts[r'update']
1812 ui.debug('--update and --rebase are not compatible, ignoring '
1813 ui.debug('--update and --rebase are not compatible, ignoring '
1813 'the update flag\n')
1814 'the update flag\n')
1814
1815
1815 cmdutil.checkunfinished(repo)
1816 cmdutil.checkunfinished(repo)
1816 cmdutil.bailifchanged(repo, hint=_('cannot pull with rebase: '
1817 cmdutil.bailifchanged(repo, hint=_('cannot pull with rebase: '
1817 'please commit or shelve your changes first'))
1818 'please commit or shelve your changes first'))
1818
1819
1819 revsprepull = len(repo)
1820 revsprepull = len(repo)
1820 origpostincoming = commands.postincoming
1821 origpostincoming = commands.postincoming
1821 def _dummy(*args, **kwargs):
1822 def _dummy(*args, **kwargs):
1822 pass
1823 pass
1823 commands.postincoming = _dummy
1824 commands.postincoming = _dummy
1824 try:
1825 try:
1825 ret = orig(ui, repo, *args, **opts)
1826 ret = orig(ui, repo, *args, **opts)
1826 finally:
1827 finally:
1827 commands.postincoming = origpostincoming
1828 commands.postincoming = origpostincoming
1828 revspostpull = len(repo)
1829 revspostpull = len(repo)
1829 if revspostpull > revsprepull:
1830 if revspostpull > revsprepull:
1830 # --rev option from pull conflicts with rebase's own --rev
1831 # --rev option from pull conflicts with rebase's own --rev
1831 # dropping it
1832 # dropping it
1832 if r'rev' in opts:
1833 if r'rev' in opts:
1833 del opts[r'rev']
1834 del opts[r'rev']
1834 # positional argument from pull conflicts with rebase's own
1835 # positional argument from pull conflicts with rebase's own
1835 # --source.
1836 # --source.
1836 if r'source' in opts:
1837 if r'source' in opts:
1837 del opts[r'source']
1838 del opts[r'source']
1838 # revsprepull is the len of the repo, not revnum of tip.
1839 # revsprepull is the len of the repo, not revnum of tip.
1839 destspace = list(repo.changelog.revs(start=revsprepull))
1840 destspace = list(repo.changelog.revs(start=revsprepull))
1840 opts[r'_destspace'] = destspace
1841 opts[r'_destspace'] = destspace
1841 try:
1842 try:
1842 rebase(ui, repo, **opts)
1843 rebase(ui, repo, **opts)
1843 except error.NoMergeDestAbort:
1844 except error.NoMergeDestAbort:
1844 # we can maybe update instead
1845 # we can maybe update instead
1845 rev, _a, _b = destutil.destupdate(repo)
1846 rev, _a, _b = destutil.destupdate(repo)
1846 if rev == repo['.'].rev():
1847 if rev == repo['.'].rev():
1847 ui.status(_('nothing to rebase\n'))
1848 ui.status(_('nothing to rebase\n'))
1848 else:
1849 else:
1849 ui.status(_('nothing to rebase - updating instead\n'))
1850 ui.status(_('nothing to rebase - updating instead\n'))
1850 # not passing argument to get the bare update behavior
1851 # not passing argument to get the bare update behavior
1851 # with warning and trumpets
1852 # with warning and trumpets
1852 commands.update(ui, repo)
1853 commands.update(ui, repo)
1853 else:
1854 else:
1854 if opts.get(r'tool'):
1855 if opts.get(r'tool'):
1855 raise error.Abort(_('--tool can only be used with --rebase'))
1856 raise error.Abort(_('--tool can only be used with --rebase'))
1856 ret = orig(ui, repo, *args, **opts)
1857 ret = orig(ui, repo, *args, **opts)
1857
1858
1858 return ret
1859 return ret
1859
1860
1860 def _filterobsoleterevs(repo, revs):
1861 def _filterobsoleterevs(repo, revs):
1861 """returns a set of the obsolete revisions in revs"""
1862 """returns a set of the obsolete revisions in revs"""
1862 return set(r for r in revs if repo[r].obsolete())
1863 return set(r for r in revs if repo[r].obsolete())
1863
1864
1864 def _computeobsoletenotrebased(repo, rebaseobsrevs, destmap):
1865 def _computeobsoletenotrebased(repo, rebaseobsrevs, destmap):
1865 """Return (obsoletenotrebased, obsoletewithoutsuccessorindestination).
1866 """Return (obsoletenotrebased, obsoletewithoutsuccessorindestination).
1866
1867
1867 `obsoletenotrebased` is a mapping of obsolete => successor for all
1868 `obsoletenotrebased` is a mapping of obsolete => successor for all
1868 obsolete nodes to be rebased given in `rebaseobsrevs`.
1869 obsolete nodes to be rebased given in `rebaseobsrevs`.
1869
1870
1870 `obsoletewithoutsuccessorindestination` is a set with obsolete revisions
1871 `obsoletewithoutsuccessorindestination` is a set with obsolete revisions
1871 without a successor in destination.
1872 without a successor in destination.
1872
1873
1873 `obsoleteextinctsuccessors` is a set of obsolete revisions with only
1874 `obsoleteextinctsuccessors` is a set of obsolete revisions with only
1874 obsolete successors.
1875 obsolete successors.
1875 """
1876 """
1876 obsoletenotrebased = {}
1877 obsoletenotrebased = {}
1877 obsoletewithoutsuccessorindestination = set([])
1878 obsoletewithoutsuccessorindestination = set([])
1878 obsoleteextinctsuccessors = set([])
1879 obsoleteextinctsuccessors = set([])
1879
1880
1880 assert repo.filtername is None
1881 assert repo.filtername is None
1881 cl = repo.changelog
1882 cl = repo.changelog
1882 nodemap = cl.nodemap
1883 nodemap = cl.nodemap
1883 extinctrevs = set(repo.revs('extinct()'))
1884 extinctrevs = set(repo.revs('extinct()'))
1884 for srcrev in rebaseobsrevs:
1885 for srcrev in rebaseobsrevs:
1885 srcnode = cl.node(srcrev)
1886 srcnode = cl.node(srcrev)
1886 # XXX: more advanced APIs are required to handle split correctly
1887 # XXX: more advanced APIs are required to handle split correctly
1887 successors = set(obsutil.allsuccessors(repo.obsstore, [srcnode]))
1888 successors = set(obsutil.allsuccessors(repo.obsstore, [srcnode]))
1888 # obsutil.allsuccessors includes node itself
1889 # obsutil.allsuccessors includes node itself
1889 successors.remove(srcnode)
1890 successors.remove(srcnode)
1890 succrevs = {nodemap[s] for s in successors if s in nodemap}
1891 succrevs = {nodemap[s] for s in successors if s in nodemap}
1891 if succrevs.issubset(extinctrevs):
1892 if succrevs.issubset(extinctrevs):
1892 # all successors are extinct
1893 # all successors are extinct
1893 obsoleteextinctsuccessors.add(srcrev)
1894 obsoleteextinctsuccessors.add(srcrev)
1894 if not successors:
1895 if not successors:
1895 # no successor
1896 # no successor
1896 obsoletenotrebased[srcrev] = None
1897 obsoletenotrebased[srcrev] = None
1897 else:
1898 else:
1898 dstrev = destmap[srcrev]
1899 dstrev = destmap[srcrev]
1899 for succrev in succrevs:
1900 for succrev in succrevs:
1900 if cl.isancestorrev(succrev, dstrev):
1901 if cl.isancestorrev(succrev, dstrev):
1901 obsoletenotrebased[srcrev] = succrev
1902 obsoletenotrebased[srcrev] = succrev
1902 break
1903 break
1903 else:
1904 else:
1904 # If 'srcrev' has a successor in rebase set but none in
1905 # If 'srcrev' has a successor in rebase set but none in
1905 # destination (which would be caught above), we shall skip it
1906 # destination (which would be caught above), we shall skip it
1906 # and its descendants to avoid divergence.
1907 # and its descendants to avoid divergence.
1907 if srcrev in extinctrevs or any(s in destmap for s in succrevs):
1908 if srcrev in extinctrevs or any(s in destmap for s in succrevs):
1908 obsoletewithoutsuccessorindestination.add(srcrev)
1909 obsoletewithoutsuccessorindestination.add(srcrev)
1909
1910
1910 return (
1911 return (
1911 obsoletenotrebased,
1912 obsoletenotrebased,
1912 obsoletewithoutsuccessorindestination,
1913 obsoletewithoutsuccessorindestination,
1913 obsoleteextinctsuccessors,
1914 obsoleteextinctsuccessors,
1914 )
1915 )
1915
1916
1916 def summaryhook(ui, repo):
1917 def summaryhook(ui, repo):
1917 if not repo.vfs.exists('rebasestate'):
1918 if not repo.vfs.exists('rebasestate'):
1918 return
1919 return
1919 try:
1920 try:
1920 rbsrt = rebaseruntime(repo, ui, {})
1921 rbsrt = rebaseruntime(repo, ui, {})
1921 rbsrt.restorestatus()
1922 rbsrt.restorestatus()
1922 state = rbsrt.state
1923 state = rbsrt.state
1923 except error.RepoLookupError:
1924 except error.RepoLookupError:
1924 # i18n: column positioning for "hg summary"
1925 # i18n: column positioning for "hg summary"
1925 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1926 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1926 ui.write(msg)
1927 ui.write(msg)
1927 return
1928 return
1928 numrebased = len([i for i in state.itervalues() if i >= 0])
1929 numrebased = len([i for i in state.itervalues() if i >= 0])
1929 # i18n: column positioning for "hg summary"
1930 # i18n: column positioning for "hg summary"
1930 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1931 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1931 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1932 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1932 ui.label(_('%d remaining'), 'rebase.remaining') %
1933 ui.label(_('%d remaining'), 'rebase.remaining') %
1933 (len(state) - numrebased)))
1934 (len(state) - numrebased)))
1934
1935
1935 def uisetup(ui):
1936 def uisetup(ui):
1936 # Replace pull with a decorator to provide --rebase option
1937 # Replace pull with a decorator to provide --rebase option
1937 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1938 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1938 entry[1].append(('', 'rebase', None,
1939 entry[1].append(('', 'rebase', None,
1939 _("rebase working directory to branch head")))
1940 _("rebase working directory to branch head")))
1940 entry[1].append(('t', 'tool', '',
1941 entry[1].append(('t', 'tool', '',
1941 _("specify merge tool for rebase")))
1942 _("specify merge tool for rebase")))
1942 cmdutil.summaryhooks.add('rebase', summaryhook)
1943 cmdutil.summaryhooks.add('rebase', summaryhook)
1943 cmdutil.unfinishedstates.append(
1944 cmdutil.unfinishedstates.append(
1944 ['rebasestate', False, False, _('rebase in progress'),
1945 ['rebasestate', False, False, _('rebase in progress'),
1945 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1946 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1946 cmdutil.afterresolvedstates.append(
1947 cmdutil.afterresolvedstates.append(
1947 ['rebasestate', _('hg rebase --continue')])
1948 ['rebasestate', _('hg rebase --continue')])
@@ -1,1152 +1,1152 b''
1 # shelve.py - save/restore working directory state
1 # shelve.py - save/restore working directory state
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 """save and restore changes to the working directory
8 """save and restore changes to the working directory
9
9
10 The "hg shelve" command saves changes made to the working directory
10 The "hg shelve" command saves changes made to the working directory
11 and reverts those changes, resetting the working directory to a clean
11 and reverts those changes, resetting the working directory to a clean
12 state.
12 state.
13
13
14 Later on, the "hg unshelve" command restores the changes saved by "hg
14 Later on, the "hg unshelve" command restores the changes saved by "hg
15 shelve". Changes can be restored even after updating to a different
15 shelve". Changes can be restored even after updating to a different
16 parent, in which case Mercurial's merge machinery will resolve any
16 parent, in which case Mercurial's merge machinery will resolve any
17 conflicts if necessary.
17 conflicts if necessary.
18
18
19 You can have more than one shelved change outstanding at a time; each
19 You can have more than one shelved change outstanding at a time; each
20 shelved change has a distinct name. For details, see the help for "hg
20 shelved change has a distinct name. For details, see the help for "hg
21 shelve".
21 shelve".
22 """
22 """
23 from __future__ import absolute_import
23 from __future__ import absolute_import
24
24
25 import collections
25 import collections
26 import errno
26 import errno
27 import itertools
27 import itertools
28 import stat
28 import stat
29
29
30 from mercurial.i18n import _
30 from mercurial.i18n import _
31 from mercurial import (
31 from mercurial import (
32 bookmarks,
32 bookmarks,
33 bundle2,
33 bundle2,
34 bundlerepo,
34 bundlerepo,
35 changegroup,
35 changegroup,
36 cmdutil,
36 cmdutil,
37 discovery,
37 discovery,
38 error,
38 error,
39 exchange,
39 exchange,
40 hg,
40 hg,
41 lock as lockmod,
41 lock as lockmod,
42 mdiff,
42 mdiff,
43 merge,
43 merge,
44 narrowspec,
44 narrowspec,
45 node as nodemod,
45 node as nodemod,
46 patch,
46 patch,
47 phases,
47 phases,
48 pycompat,
48 pycompat,
49 registrar,
49 registrar,
50 repair,
50 repair,
51 scmutil,
51 scmutil,
52 templatefilters,
52 templatefilters,
53 util,
53 util,
54 vfs as vfsmod,
54 vfs as vfsmod,
55 )
55 )
56
56
57 from . import (
57 from . import (
58 rebase,
58 rebase,
59 )
59 )
60 from mercurial.utils import (
60 from mercurial.utils import (
61 dateutil,
61 dateutil,
62 stringutil,
62 stringutil,
63 )
63 )
64
64
65 cmdtable = {}
65 cmdtable = {}
66 command = registrar.command(cmdtable)
66 command = registrar.command(cmdtable)
67 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
67 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
68 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
68 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
69 # be specifying the version(s) of Mercurial they are tested with, or
69 # be specifying the version(s) of Mercurial they are tested with, or
70 # leave the attribute unspecified.
70 # leave the attribute unspecified.
71 testedwith = 'ships-with-hg-core'
71 testedwith = 'ships-with-hg-core'
72
72
73 configtable = {}
73 configtable = {}
74 configitem = registrar.configitem(configtable)
74 configitem = registrar.configitem(configtable)
75
75
76 configitem('shelve', 'maxbackups',
76 configitem('shelve', 'maxbackups',
77 default=10,
77 default=10,
78 )
78 )
79
79
80 backupdir = 'shelve-backup'
80 backupdir = 'shelve-backup'
81 shelvedir = 'shelved'
81 shelvedir = 'shelved'
82 shelvefileextensions = ['hg', 'patch', 'shelve']
82 shelvefileextensions = ['hg', 'patch', 'shelve']
83 # universal extension is present in all types of shelves
83 # universal extension is present in all types of shelves
84 patchextension = 'patch'
84 patchextension = 'patch'
85
85
86 # we never need the user, so we use a
86 # we never need the user, so we use a
87 # generic user for all shelve operations
87 # generic user for all shelve operations
88 shelveuser = 'shelve@localhost'
88 shelveuser = 'shelve@localhost'
89
89
90 class shelvedfile(object):
90 class shelvedfile(object):
91 """Helper for the file storing a single shelve
91 """Helper for the file storing a single shelve
92
92
93 Handles common functions on shelve files (.hg/.patch) using
93 Handles common functions on shelve files (.hg/.patch) using
94 the vfs layer"""
94 the vfs layer"""
95 def __init__(self, repo, name, filetype=None):
95 def __init__(self, repo, name, filetype=None):
96 self.repo = repo
96 self.repo = repo
97 self.name = name
97 self.name = name
98 self.vfs = vfsmod.vfs(repo.vfs.join(shelvedir))
98 self.vfs = vfsmod.vfs(repo.vfs.join(shelvedir))
99 self.backupvfs = vfsmod.vfs(repo.vfs.join(backupdir))
99 self.backupvfs = vfsmod.vfs(repo.vfs.join(backupdir))
100 self.ui = self.repo.ui
100 self.ui = self.repo.ui
101 if filetype:
101 if filetype:
102 self.fname = name + '.' + filetype
102 self.fname = name + '.' + filetype
103 else:
103 else:
104 self.fname = name
104 self.fname = name
105
105
106 def exists(self):
106 def exists(self):
107 return self.vfs.exists(self.fname)
107 return self.vfs.exists(self.fname)
108
108
109 def filename(self):
109 def filename(self):
110 return self.vfs.join(self.fname)
110 return self.vfs.join(self.fname)
111
111
112 def backupfilename(self):
112 def backupfilename(self):
113 def gennames(base):
113 def gennames(base):
114 yield base
114 yield base
115 base, ext = base.rsplit('.', 1)
115 base, ext = base.rsplit('.', 1)
116 for i in itertools.count(1):
116 for i in itertools.count(1):
117 yield '%s-%d.%s' % (base, i, ext)
117 yield '%s-%d.%s' % (base, i, ext)
118
118
119 name = self.backupvfs.join(self.fname)
119 name = self.backupvfs.join(self.fname)
120 for n in gennames(name):
120 for n in gennames(name):
121 if not self.backupvfs.exists(n):
121 if not self.backupvfs.exists(n):
122 return n
122 return n
123
123
124 def movetobackup(self):
124 def movetobackup(self):
125 if not self.backupvfs.isdir():
125 if not self.backupvfs.isdir():
126 self.backupvfs.makedir()
126 self.backupvfs.makedir()
127 util.rename(self.filename(), self.backupfilename())
127 util.rename(self.filename(), self.backupfilename())
128
128
129 def stat(self):
129 def stat(self):
130 return self.vfs.stat(self.fname)
130 return self.vfs.stat(self.fname)
131
131
132 def opener(self, mode='rb'):
132 def opener(self, mode='rb'):
133 try:
133 try:
134 return self.vfs(self.fname, mode)
134 return self.vfs(self.fname, mode)
135 except IOError as err:
135 except IOError as err:
136 if err.errno != errno.ENOENT:
136 if err.errno != errno.ENOENT:
137 raise
137 raise
138 raise error.Abort(_("shelved change '%s' not found") % self.name)
138 raise error.Abort(_("shelved change '%s' not found") % self.name)
139
139
140 def applybundle(self):
140 def applybundle(self):
141 fp = self.opener()
141 fp = self.opener()
142 try:
142 try:
143 targetphase = phases.internal
143 targetphase = phases.internal
144 if not phases.supportinternal(self.repo):
144 if not phases.supportinternal(self.repo):
145 targetphase = phases.secret
145 targetphase = phases.secret
146 gen = exchange.readbundle(self.repo.ui, fp, self.fname, self.vfs)
146 gen = exchange.readbundle(self.repo.ui, fp, self.fname, self.vfs)
147 pretip = self.repo['tip']
147 pretip = self.repo['tip']
148 tr = self.repo.currenttransaction()
148 tr = self.repo.currenttransaction()
149 bundle2.applybundle(self.repo, gen, tr,
149 bundle2.applybundle(self.repo, gen, tr,
150 source='unshelve',
150 source='unshelve',
151 url='bundle:' + self.vfs.join(self.fname),
151 url='bundle:' + self.vfs.join(self.fname),
152 targetphase=targetphase)
152 targetphase=targetphase)
153 shelvectx = self.repo['tip']
153 shelvectx = self.repo['tip']
154 if pretip == shelvectx:
154 if pretip == shelvectx:
155 shelverev = tr.changes['revduplicates'][-1]
155 shelverev = tr.changes['revduplicates'][-1]
156 shelvectx = self.repo[shelverev]
156 shelvectx = self.repo[shelverev]
157 return shelvectx
157 return shelvectx
158 finally:
158 finally:
159 fp.close()
159 fp.close()
160
160
161 def bundlerepo(self):
161 def bundlerepo(self):
162 path = self.vfs.join(self.fname)
162 path = self.vfs.join(self.fname)
163 return bundlerepo.instance(self.repo.baseui,
163 return bundlerepo.instance(self.repo.baseui,
164 'bundle://%s+%s' % (self.repo.root, path))
164 'bundle://%s+%s' % (self.repo.root, path))
165
165
166 def writebundle(self, bases, node):
166 def writebundle(self, bases, node):
167 cgversion = changegroup.safeversion(self.repo)
167 cgversion = changegroup.safeversion(self.repo)
168 if cgversion == '01':
168 if cgversion == '01':
169 btype = 'HG10BZ'
169 btype = 'HG10BZ'
170 compression = None
170 compression = None
171 else:
171 else:
172 btype = 'HG20'
172 btype = 'HG20'
173 compression = 'BZ'
173 compression = 'BZ'
174
174
175 repo = self.repo.unfiltered()
175 repo = self.repo.unfiltered()
176
176
177 outgoing = discovery.outgoing(repo, missingroots=bases,
177 outgoing = discovery.outgoing(repo, missingroots=bases,
178 missingheads=[node])
178 missingheads=[node])
179 cg = changegroup.makechangegroup(repo, outgoing, cgversion, 'shelve')
179 cg = changegroup.makechangegroup(repo, outgoing, cgversion, 'shelve')
180
180
181 bundle2.writebundle(self.ui, cg, self.fname, btype, self.vfs,
181 bundle2.writebundle(self.ui, cg, self.fname, btype, self.vfs,
182 compression=compression)
182 compression=compression)
183
183
184 def writeinfo(self, info):
184 def writeinfo(self, info):
185 scmutil.simplekeyvaluefile(self.vfs, self.fname).write(info)
185 scmutil.simplekeyvaluefile(self.vfs, self.fname).write(info)
186
186
187 def readinfo(self):
187 def readinfo(self):
188 return scmutil.simplekeyvaluefile(self.vfs, self.fname).read()
188 return scmutil.simplekeyvaluefile(self.vfs, self.fname).read()
189
189
190 class shelvedstate(object):
190 class shelvedstate(object):
191 """Handle persistence during unshelving operations.
191 """Handle persistence during unshelving operations.
192
192
193 Handles saving and restoring a shelved state. Ensures that different
193 Handles saving and restoring a shelved state. Ensures that different
194 versions of a shelved state are possible and handles them appropriately.
194 versions of a shelved state are possible and handles them appropriately.
195 """
195 """
196 _version = 2
196 _version = 2
197 _filename = 'shelvedstate'
197 _filename = 'shelvedstate'
198 _keep = 'keep'
198 _keep = 'keep'
199 _nokeep = 'nokeep'
199 _nokeep = 'nokeep'
200 # colon is essential to differentiate from a real bookmark name
200 # colon is essential to differentiate from a real bookmark name
201 _noactivebook = ':no-active-bookmark'
201 _noactivebook = ':no-active-bookmark'
202
202
203 @classmethod
203 @classmethod
204 def _verifyandtransform(cls, d):
204 def _verifyandtransform(cls, d):
205 """Some basic shelvestate syntactic verification and transformation"""
205 """Some basic shelvestate syntactic verification and transformation"""
206 try:
206 try:
207 d['originalwctx'] = nodemod.bin(d['originalwctx'])
207 d['originalwctx'] = nodemod.bin(d['originalwctx'])
208 d['pendingctx'] = nodemod.bin(d['pendingctx'])
208 d['pendingctx'] = nodemod.bin(d['pendingctx'])
209 d['parents'] = [nodemod.bin(h)
209 d['parents'] = [nodemod.bin(h)
210 for h in d['parents'].split(' ')]
210 for h in d['parents'].split(' ')]
211 d['nodestoremove'] = [nodemod.bin(h)
211 d['nodestoremove'] = [nodemod.bin(h)
212 for h in d['nodestoremove'].split(' ')]
212 for h in d['nodestoremove'].split(' ')]
213 except (ValueError, TypeError, KeyError) as err:
213 except (ValueError, TypeError, KeyError) as err:
214 raise error.CorruptedState(pycompat.bytestr(err))
214 raise error.CorruptedState(pycompat.bytestr(err))
215
215
216 @classmethod
216 @classmethod
217 def _getversion(cls, repo):
217 def _getversion(cls, repo):
218 """Read version information from shelvestate file"""
218 """Read version information from shelvestate file"""
219 fp = repo.vfs(cls._filename)
219 fp = repo.vfs(cls._filename)
220 try:
220 try:
221 version = int(fp.readline().strip())
221 version = int(fp.readline().strip())
222 except ValueError as err:
222 except ValueError as err:
223 raise error.CorruptedState(pycompat.bytestr(err))
223 raise error.CorruptedState(pycompat.bytestr(err))
224 finally:
224 finally:
225 fp.close()
225 fp.close()
226 return version
226 return version
227
227
228 @classmethod
228 @classmethod
229 def _readold(cls, repo):
229 def _readold(cls, repo):
230 """Read the old position-based version of a shelvestate file"""
230 """Read the old position-based version of a shelvestate file"""
231 # Order is important, because the old shelvestate file uses it
231 # Order is important, because the old shelvestate file uses it
232 # to determine values of fields (e.g. name is on the second line,
232 # to determine values of fields (e.g. name is on the second line,
233 # originalwctx is on the third and so forth). Please do not change.
233 # originalwctx is on the third and so forth). Please do not change.
234 keys = ['version', 'name', 'originalwctx', 'pendingctx', 'parents',
234 keys = ['version', 'name', 'originalwctx', 'pendingctx', 'parents',
235 'nodestoremove', 'branchtorestore', 'keep', 'activebook']
235 'nodestoremove', 'branchtorestore', 'keep', 'activebook']
236 # this is executed only rarely, so it is not a big deal
236 # this is executed only rarely, so it is not a big deal
237 # that we open this file twice
237 # that we open this file twice
238 fp = repo.vfs(cls._filename)
238 fp = repo.vfs(cls._filename)
239 d = {}
239 d = {}
240 try:
240 try:
241 for key in keys:
241 for key in keys:
242 d[key] = fp.readline().strip()
242 d[key] = fp.readline().strip()
243 finally:
243 finally:
244 fp.close()
244 fp.close()
245 return d
245 return d
246
246
247 @classmethod
247 @classmethod
248 def load(cls, repo):
248 def load(cls, repo):
249 version = cls._getversion(repo)
249 version = cls._getversion(repo)
250 if version < cls._version:
250 if version < cls._version:
251 d = cls._readold(repo)
251 d = cls._readold(repo)
252 elif version == cls._version:
252 elif version == cls._version:
253 d = scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
253 d = scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
254 .read(firstlinenonkeyval=True)
254 .read(firstlinenonkeyval=True)
255 else:
255 else:
256 raise error.Abort(_('this version of shelve is incompatible '
256 raise error.Abort(_('this version of shelve is incompatible '
257 'with the version used in this repo'))
257 'with the version used in this repo'))
258
258
259 cls._verifyandtransform(d)
259 cls._verifyandtransform(d)
260 try:
260 try:
261 obj = cls()
261 obj = cls()
262 obj.name = d['name']
262 obj.name = d['name']
263 obj.wctx = repo[d['originalwctx']]
263 obj.wctx = repo[d['originalwctx']]
264 obj.pendingctx = repo[d['pendingctx']]
264 obj.pendingctx = repo[d['pendingctx']]
265 obj.parents = d['parents']
265 obj.parents = d['parents']
266 obj.nodestoremove = d['nodestoremove']
266 obj.nodestoremove = d['nodestoremove']
267 obj.branchtorestore = d.get('branchtorestore', '')
267 obj.branchtorestore = d.get('branchtorestore', '')
268 obj.keep = d.get('keep') == cls._keep
268 obj.keep = d.get('keep') == cls._keep
269 obj.activebookmark = ''
269 obj.activebookmark = ''
270 if d.get('activebook', '') != cls._noactivebook:
270 if d.get('activebook', '') != cls._noactivebook:
271 obj.activebookmark = d.get('activebook', '')
271 obj.activebookmark = d.get('activebook', '')
272 except (error.RepoLookupError, KeyError) as err:
272 except (error.RepoLookupError, KeyError) as err:
273 raise error.CorruptedState(pycompat.bytestr(err))
273 raise error.CorruptedState(pycompat.bytestr(err))
274
274
275 return obj
275 return obj
276
276
277 @classmethod
277 @classmethod
278 def save(cls, repo, name, originalwctx, pendingctx, nodestoremove,
278 def save(cls, repo, name, originalwctx, pendingctx, nodestoremove,
279 branchtorestore, keep=False, activebook=''):
279 branchtorestore, keep=False, activebook=''):
280 info = {
280 info = {
281 "name": name,
281 "name": name,
282 "originalwctx": nodemod.hex(originalwctx.node()),
282 "originalwctx": nodemod.hex(originalwctx.node()),
283 "pendingctx": nodemod.hex(pendingctx.node()),
283 "pendingctx": nodemod.hex(pendingctx.node()),
284 "parents": ' '.join([nodemod.hex(p)
284 "parents": ' '.join([nodemod.hex(p)
285 for p in repo.dirstate.parents()]),
285 for p in repo.dirstate.parents()]),
286 "nodestoremove": ' '.join([nodemod.hex(n)
286 "nodestoremove": ' '.join([nodemod.hex(n)
287 for n in nodestoremove]),
287 for n in nodestoremove]),
288 "branchtorestore": branchtorestore,
288 "branchtorestore": branchtorestore,
289 "keep": cls._keep if keep else cls._nokeep,
289 "keep": cls._keep if keep else cls._nokeep,
290 "activebook": activebook or cls._noactivebook
290 "activebook": activebook or cls._noactivebook
291 }
291 }
292 scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
292 scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
293 .write(info, firstline=("%d" % cls._version))
293 .write(info, firstline=("%d" % cls._version))
294
294
295 @classmethod
295 @classmethod
296 def clear(cls, repo):
296 def clear(cls, repo):
297 repo.vfs.unlinkpath(cls._filename, ignoremissing=True)
297 repo.vfs.unlinkpath(cls._filename, ignoremissing=True)
298
298
299 def cleanupoldbackups(repo):
299 def cleanupoldbackups(repo):
300 vfs = vfsmod.vfs(repo.vfs.join(backupdir))
300 vfs = vfsmod.vfs(repo.vfs.join(backupdir))
301 maxbackups = repo.ui.configint('shelve', 'maxbackups')
301 maxbackups = repo.ui.configint('shelve', 'maxbackups')
302 hgfiles = [f for f in vfs.listdir()
302 hgfiles = [f for f in vfs.listdir()
303 if f.endswith('.' + patchextension)]
303 if f.endswith('.' + patchextension)]
304 hgfiles = sorted([(vfs.stat(f)[stat.ST_MTIME], f) for f in hgfiles])
304 hgfiles = sorted([(vfs.stat(f)[stat.ST_MTIME], f) for f in hgfiles])
305 if maxbackups > 0 and maxbackups < len(hgfiles):
305 if maxbackups > 0 and maxbackups < len(hgfiles):
306 bordermtime = hgfiles[-maxbackups][0]
306 bordermtime = hgfiles[-maxbackups][0]
307 else:
307 else:
308 bordermtime = None
308 bordermtime = None
309 for mtime, f in hgfiles[:len(hgfiles) - maxbackups]:
309 for mtime, f in hgfiles[:len(hgfiles) - maxbackups]:
310 if mtime == bordermtime:
310 if mtime == bordermtime:
311 # keep it, because timestamp can't decide exact order of backups
311 # keep it, because timestamp can't decide exact order of backups
312 continue
312 continue
313 base = f[:-(1 + len(patchextension))]
313 base = f[:-(1 + len(patchextension))]
314 for ext in shelvefileextensions:
314 for ext in shelvefileextensions:
315 vfs.tryunlink(base + '.' + ext)
315 vfs.tryunlink(base + '.' + ext)
316
316
317 def _backupactivebookmark(repo):
317 def _backupactivebookmark(repo):
318 activebookmark = repo._activebookmark
318 activebookmark = repo._activebookmark
319 if activebookmark:
319 if activebookmark:
320 bookmarks.deactivate(repo)
320 bookmarks.deactivate(repo)
321 return activebookmark
321 return activebookmark
322
322
323 def _restoreactivebookmark(repo, mark):
323 def _restoreactivebookmark(repo, mark):
324 if mark:
324 if mark:
325 bookmarks.activate(repo, mark)
325 bookmarks.activate(repo, mark)
326
326
327 def _aborttransaction(repo):
327 def _aborttransaction(repo):
328 '''Abort current transaction for shelve/unshelve, but keep dirstate
328 '''Abort current transaction for shelve/unshelve, but keep dirstate
329 '''
329 '''
330 tr = repo.currenttransaction()
330 tr = repo.currenttransaction()
331 dirstatebackupname = 'dirstate.shelve'
331 dirstatebackupname = 'dirstate.shelve'
332 narrowspecbackupname = 'narrowspec.shelve'
332 narrowspecbackupname = 'narrowspec.shelve'
333 repo.dirstate.savebackup(tr, dirstatebackupname)
333 repo.dirstate.savebackup(tr, dirstatebackupname)
334 narrowspec.savebackup(repo, narrowspecbackupname)
334 narrowspec.savebackup(repo, narrowspecbackupname)
335 tr.abort()
335 tr.abort()
336 narrowspec.restorebackup(repo, narrowspecbackupname)
336 narrowspec.restorebackup(repo, narrowspecbackupname)
337 repo.dirstate.restorebackup(None, dirstatebackupname)
337 repo.dirstate.restorebackup(None, dirstatebackupname)
338
338
339 def getshelvename(repo, parent, opts):
339 def getshelvename(repo, parent, opts):
340 """Decide on the name this shelve is going to have"""
340 """Decide on the name this shelve is going to have"""
341 def gennames():
341 def gennames():
342 yield label
342 yield label
343 for i in itertools.count(1):
343 for i in itertools.count(1):
344 yield '%s-%02d' % (label, i)
344 yield '%s-%02d' % (label, i)
345 name = opts.get('name')
345 name = opts.get('name')
346 label = repo._activebookmark or parent.branch() or 'default'
346 label = repo._activebookmark or parent.branch() or 'default'
347 # slashes aren't allowed in filenames, therefore we rename it
347 # slashes aren't allowed in filenames, therefore we rename it
348 label = label.replace('/', '_')
348 label = label.replace('/', '_')
349 label = label.replace('\\', '_')
349 label = label.replace('\\', '_')
350 # filenames must not start with '.' as they should not be hidden
350 # filenames must not start with '.' as they should not be hidden
351 if label.startswith('.'):
351 if label.startswith('.'):
352 label = label.replace('.', '_', 1)
352 label = label.replace('.', '_', 1)
353
353
354 if name:
354 if name:
355 if shelvedfile(repo, name, patchextension).exists():
355 if shelvedfile(repo, name, patchextension).exists():
356 e = _("a shelved change named '%s' already exists") % name
356 e = _("a shelved change named '%s' already exists") % name
357 raise error.Abort(e)
357 raise error.Abort(e)
358
358
359 # ensure we are not creating a subdirectory or a hidden file
359 # ensure we are not creating a subdirectory or a hidden file
360 if '/' in name or '\\' in name:
360 if '/' in name or '\\' in name:
361 raise error.Abort(_('shelved change names can not contain slashes'))
361 raise error.Abort(_('shelved change names can not contain slashes'))
362 if name.startswith('.'):
362 if name.startswith('.'):
363 raise error.Abort(_("shelved change names can not start with '.'"))
363 raise error.Abort(_("shelved change names can not start with '.'"))
364
364
365 else:
365 else:
366 for n in gennames():
366 for n in gennames():
367 if not shelvedfile(repo, n, patchextension).exists():
367 if not shelvedfile(repo, n, patchextension).exists():
368 name = n
368 name = n
369 break
369 break
370
370
371 return name
371 return name
372
372
373 def mutableancestors(ctx):
373 def mutableancestors(ctx):
374 """return all mutable ancestors for ctx (included)
374 """return all mutable ancestors for ctx (included)
375
375
376 Much faster than the revset ancestors(ctx) & draft()"""
376 Much faster than the revset ancestors(ctx) & draft()"""
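# The loop below is a breadth-first walk over ctx's ancestry that only
# descends into parents that are still mutable, so public history is not
# traversed and each node is yielded at most once.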
377 seen = {nodemod.nullrev}
377 seen = {nodemod.nullrev}
378 visit = collections.deque()
378 visit = collections.deque()
379 visit.append(ctx)
379 visit.append(ctx)
380 while visit:
380 while visit:
381 ctx = visit.popleft()
381 ctx = visit.popleft()
382 yield ctx.node()
382 yield ctx.node()
383 for parent in ctx.parents():
383 for parent in ctx.parents():
384 rev = parent.rev()
384 rev = parent.rev()
385 if rev not in seen:
385 if rev not in seen:
386 seen.add(rev)
386 seen.add(rev)
387 if parent.mutable():
387 if parent.mutable():
388 visit.append(parent)
388 visit.append(parent)
389
389
390 def getcommitfunc(extra, interactive, editor=False):
390 def getcommitfunc(extra, interactive, editor=False):
391 def commitfunc(ui, repo, message, match, opts):
391 def commitfunc(ui, repo, message, match, opts):
392 hasmq = util.safehasattr(repo, 'mq')
392 hasmq = util.safehasattr(repo, 'mq')
393 if hasmq:
393 if hasmq:
394 saved, repo.mq.checkapplied = repo.mq.checkapplied, False
394 saved, repo.mq.checkapplied = repo.mq.checkapplied, False
395
395
396 targetphase = phases.internal
396 targetphase = phases.internal
397 if not phases.supportinternal(repo):
397 if not phases.supportinternal(repo):
398 targetphase = phases.secret
398 targetphase = phases.secret
399 overrides = {('phases', 'new-commit'): targetphase}
399 overrides = {('phases', 'new-commit'): targetphase}
400 try:
400 try:
401 editor_ = False
401 editor_ = False
402 if editor:
402 if editor:
403 editor_ = cmdutil.getcommiteditor(editform='shelve.shelve',
403 editor_ = cmdutil.getcommiteditor(editform='shelve.shelve',
404 **pycompat.strkwargs(opts))
404 **pycompat.strkwargs(opts))
405 with repo.ui.configoverride(overrides):
405 with repo.ui.configoverride(overrides):
406 return repo.commit(message, shelveuser, opts.get('date'),
406 return repo.commit(message, shelveuser, opts.get('date'),
407 match, editor=editor_, extra=extra)
407 match, editor=editor_, extra=extra)
408 finally:
408 finally:
409 if hasmq:
409 if hasmq:
410 repo.mq.checkapplied = saved
410 repo.mq.checkapplied = saved
411
411
412 def interactivecommitfunc(ui, repo, *pats, **opts):
412 def interactivecommitfunc(ui, repo, *pats, **opts):
413 opts = pycompat.byteskwargs(opts)
413 opts = pycompat.byteskwargs(opts)
414 match = scmutil.match(repo['.'], pats, {})
414 match = scmutil.match(repo['.'], pats, {})
415 message = opts['message']
415 message = opts['message']
416 return commitfunc(ui, repo, message, match, opts)
416 return commitfunc(ui, repo, message, match, opts)
417
417
418 return interactivecommitfunc if interactive else commitfunc
418 return interactivecommitfunc if interactive else commitfunc
419
419
420 def _nothingtoshelvemessaging(ui, repo, pats, opts):
420 def _nothingtoshelvemessaging(ui, repo, pats, opts):
421 stat = repo.status(match=scmutil.match(repo[None], pats, opts))
421 stat = repo.status(match=scmutil.match(repo[None], pats, opts))
422 if stat.deleted:
422 if stat.deleted:
423 ui.status(_("nothing changed (%d missing files, see "
423 ui.status(_("nothing changed (%d missing files, see "
424 "'hg status')\n") % len(stat.deleted))
424 "'hg status')\n") % len(stat.deleted))
425 else:
425 else:
426 ui.status(_("nothing changed\n"))
426 ui.status(_("nothing changed\n"))
427
427
428 def _shelvecreatedcommit(repo, node, name):
428 def _shelvecreatedcommit(repo, node, name):
429 info = {'node': nodemod.hex(node)}
429 info = {'node': nodemod.hex(node)}
430 shelvedfile(repo, name, 'shelve').writeinfo(info)
430 shelvedfile(repo, name, 'shelve').writeinfo(info)
431 bases = list(mutableancestors(repo[node]))
431 bases = list(mutableancestors(repo[node]))
432 shelvedfile(repo, name, 'hg').writebundle(bases, node)
432 shelvedfile(repo, name, 'hg').writebundle(bases, node)
433 with shelvedfile(repo, name, patchextension).opener('wb') as fp:
433 with shelvedfile(repo, name, patchextension).opener('wb') as fp:
434 cmdutil.exportfile(repo, [node], fp, opts=mdiff.diffopts(git=True))
434 cmdutil.exportfile(repo, [node], fp, opts=mdiff.diffopts(git=True))
435
435
436 def _includeunknownfiles(repo, pats, opts, extra):
436 def _includeunknownfiles(repo, pats, opts, extra):
437 s = repo.status(match=scmutil.match(repo[None], pats, opts),
437 s = repo.status(match=scmutil.match(repo[None], pats, opts),
438 unknown=True)
438 unknown=True)
439 if s.unknown:
439 if s.unknown:
440 extra['shelve_unknown'] = '\0'.join(s.unknown)
440 extra['shelve_unknown'] = '\0'.join(s.unknown)
441 repo[None].add(s.unknown)
441 repo[None].add(s.unknown)
442
442
443 def _finishshelve(repo):
443 def _finishshelve(repo):
444 if phases.supportinternal(repo):
444 if phases.supportinternal(repo):
445 backupname = 'dirstate.shelve'
445 backupname = 'dirstate.shelve'
446 tr = repo.currenttransaction()
446 tr = repo.currenttransaction()
447 repo.dirstate.savebackup(tr, backupname)
447 repo.dirstate.savebackup(tr, backupname)
448 tr.close()
448 tr.close()
449 repo.dirstate.restorebackup(None, backupname)
449 repo.dirstate.restorebackup(None, backupname)
450 else:
450 else:
451 _aborttransaction(repo)
451 _aborttransaction(repo)
452
452
453 def createcmd(ui, repo, pats, opts):
453 def createcmd(ui, repo, pats, opts):
454 """subcommand that creates a new shelve"""
454 """subcommand that creates a new shelve"""
455 with repo.wlock():
455 with repo.wlock():
456 cmdutil.checkunfinished(repo)
456 cmdutil.checkunfinished(repo)
457 return _docreatecmd(ui, repo, pats, opts)
457 return _docreatecmd(ui, repo, pats, opts)
458
458
459 def _docreatecmd(ui, repo, pats, opts):
459 def _docreatecmd(ui, repo, pats, opts):
460 wctx = repo[None]
460 wctx = repo[None]
461 parents = wctx.parents()
461 parents = wctx.parents()
462 if len(parents) > 1:
462 if len(parents) > 1:
463 raise error.Abort(_('cannot shelve while merging'))
463 raise error.Abort(_('cannot shelve while merging'))
464 parent = parents[0]
464 parent = parents[0]
465 origbranch = wctx.branch()
465 origbranch = wctx.branch()
466
466
467 if parent.node() != nodemod.nullid:
467 if parent.node() != nodemod.nullid:
468 desc = "changes to: %s" % parent.description().split('\n', 1)[0]
468 desc = "changes to: %s" % parent.description().split('\n', 1)[0]
469 else:
469 else:
470 desc = '(changes in empty repository)'
470 desc = '(changes in empty repository)'
471
471
472 if not opts.get('message'):
472 if not opts.get('message'):
473 opts['message'] = desc
473 opts['message'] = desc
474
474
475 lock = tr = activebookmark = None
475 lock = tr = activebookmark = None
476 try:
476 try:
477 lock = repo.lock()
477 lock = repo.lock()
478
478
479 # use an uncommitted transaction to generate the bundle to avoid
479 # use an uncommitted transaction to generate the bundle to avoid
480 # pull races. ensure we don't print the abort message to stderr.
480 # pull races. ensure we don't print the abort message to stderr.
481 tr = repo.transaction('commit', report=lambda x: None)
481 tr = repo.transaction('commit', report=lambda x: None)
482
482
483 interactive = opts.get('interactive', False)
483 interactive = opts.get('interactive', False)
484 includeunknown = (opts.get('unknown', False) and
484 includeunknown = (opts.get('unknown', False) and
485 not opts.get('addremove', False))
485 not opts.get('addremove', False))
486
486
487 name = getshelvename(repo, parent, opts)
487 name = getshelvename(repo, parent, opts)
488 activebookmark = _backupactivebookmark(repo)
488 activebookmark = _backupactivebookmark(repo)
489 extra = {'internal': 'shelve'}
489 extra = {'internal': 'shelve'}
490 if includeunknown:
490 if includeunknown:
491 _includeunknownfiles(repo, pats, opts, extra)
491 _includeunknownfiles(repo, pats, opts, extra)
492
492
493 if _iswctxonnewbranch(repo) and not _isbareshelve(pats, opts):
493 if _iswctxonnewbranch(repo) and not _isbareshelve(pats, opts):
494 # In a non-bare shelve we don't store the newly created branch
494 # In a non-bare shelve we don't store the newly created branch
495 # at the bundled commit
495 # at the bundled commit
496 repo.dirstate.setbranch(repo['.'].branch())
496 repo.dirstate.setbranch(repo['.'].branch())
497
497
498 commitfunc = getcommitfunc(extra, interactive, editor=True)
498 commitfunc = getcommitfunc(extra, interactive, editor=True)
499 if not interactive:
499 if not interactive:
500 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
500 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
501 else:
501 else:
502 node = cmdutil.dorecord(ui, repo, commitfunc, None,
502 node = cmdutil.dorecord(ui, repo, commitfunc, None,
503 False, cmdutil.recordfilter, *pats,
503 False, cmdutil.recordfilter, *pats,
504 **pycompat.strkwargs(opts))
504 **pycompat.strkwargs(opts))
505 if not node:
505 if not node:
506 _nothingtoshelvemessaging(ui, repo, pats, opts)
506 _nothingtoshelvemessaging(ui, repo, pats, opts)
507 return 1
507 return 1
508
508
509 _shelvecreatedcommit(repo, node, name)
509 _shelvecreatedcommit(repo, node, name)
510
510
511 if ui.formatted():
511 if ui.formatted():
512 desc = stringutil.ellipsis(desc, ui.termwidth())
512 desc = stringutil.ellipsis(desc, ui.termwidth())
513 ui.status(_('shelved as %s\n') % name)
513 ui.status(_('shelved as %s\n') % name)
514 hg.update(repo, parent.node())
514 hg.update(repo, parent.node())
515 if origbranch != repo['.'].branch() and not _isbareshelve(pats, opts):
515 if origbranch != repo['.'].branch() and not _isbareshelve(pats, opts):
516 repo.dirstate.setbranch(origbranch)
516 repo.dirstate.setbranch(origbranch)
517
517
518 _finishshelve(repo)
518 _finishshelve(repo)
519 finally:
519 finally:
520 _restoreactivebookmark(repo, activebookmark)
520 _restoreactivebookmark(repo, activebookmark)
521 lockmod.release(tr, lock)
521 lockmod.release(tr, lock)
522
522
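# In short, _docreatecmd() above commits the pending changes to a temporary
# changeset (via cmdutil.commit, or cmdutil.dorecord when --interactive),
# stores it as a shelve with _shelvecreatedcommit(), updates the working
# copy back to the original parent so it is clean again, and relies on the
# deliberately uncommitted transaction being rolled back so the temporary
# commit does not remain in the repository.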
523 def _isbareshelve(pats, opts):
523 def _isbareshelve(pats, opts):
524 return (not pats
524 return (not pats
525 and not opts.get('interactive', False)
525 and not opts.get('interactive', False)
526 and not opts.get('include', False)
526 and not opts.get('include', False)
527 and not opts.get('exclude', False))
527 and not opts.get('exclude', False))
528
528
529 def _iswctxonnewbranch(repo):
529 def _iswctxonnewbranch(repo):
530 return repo[None].branch() != repo['.'].branch()
530 return repo[None].branch() != repo['.'].branch()
531
531
532 def cleanupcmd(ui, repo):
532 def cleanupcmd(ui, repo):
533 """subcommand that deletes all shelves"""
533 """subcommand that deletes all shelves"""
534
534
535 with repo.wlock():
535 with repo.wlock():
536 for (name, _type) in repo.vfs.readdir(shelvedir):
536 for (name, _type) in repo.vfs.readdir(shelvedir):
537 suffix = name.rsplit('.', 1)[-1]
537 suffix = name.rsplit('.', 1)[-1]
538 if suffix in shelvefileextensions:
538 if suffix in shelvefileextensions:
539 shelvedfile(repo, name).movetobackup()
539 shelvedfile(repo, name).movetobackup()
540 cleanupoldbackups(repo)
540 cleanupoldbackups(repo)
541
541
542 def deletecmd(ui, repo, pats):
542 def deletecmd(ui, repo, pats):
543 """subcommand that deletes a specific shelve"""
543 """subcommand that deletes a specific shelve"""
544 if not pats:
544 if not pats:
545 raise error.Abort(_('no shelved changes specified!'))
545 raise error.Abort(_('no shelved changes specified!'))
546 with repo.wlock():
546 with repo.wlock():
547 try:
547 try:
548 for name in pats:
548 for name in pats:
549 for suffix in shelvefileextensions:
549 for suffix in shelvefileextensions:
550 shfile = shelvedfile(repo, name, suffix)
550 shfile = shelvedfile(repo, name, suffix)
551 # patch file is necessary, as it should
551 # patch file is necessary, as it should
552 # be present for any kind of shelve,
552 # be present for any kind of shelve,
553 # but the .hg file is optional as in future we
553 # but the .hg file is optional as in future we
554 # will add obsolete shelve which does not create a
554 # will add obsolete shelve which does not create a
555 # bundle
555 # bundle
556 if shfile.exists() or suffix == patchextension:
556 if shfile.exists() or suffix == patchextension:
557 shfile.movetobackup()
557 shfile.movetobackup()
558 cleanupoldbackups(repo)
558 cleanupoldbackups(repo)
559 except OSError as err:
559 except OSError as err:
560 if err.errno != errno.ENOENT:
560 if err.errno != errno.ENOENT:
561 raise
561 raise
562 raise error.Abort(_("shelved change '%s' not found") % name)
562 raise error.Abort(_("shelved change '%s' not found") % name)
563
563
564 def listshelves(repo):
564 def listshelves(repo):
565 """return all shelves in repo as list of (time, filename)"""
565 """return all shelves in repo as list of (time, filename)"""
566 try:
566 try:
567 names = repo.vfs.readdir(shelvedir)
567 names = repo.vfs.readdir(shelvedir)
568 except OSError as err:
568 except OSError as err:
569 if err.errno != errno.ENOENT:
569 if err.errno != errno.ENOENT:
570 raise
570 raise
571 return []
571 return []
572 info = []
572 info = []
573 for (name, _type) in names:
573 for (name, _type) in names:
574 pfx, sfx = name.rsplit('.', 1)
574 pfx, sfx = name.rsplit('.', 1)
575 if not pfx or sfx != patchextension:
575 if not pfx or sfx != patchextension:
576 continue
576 continue
577 st = shelvedfile(repo, name).stat()
577 st = shelvedfile(repo, name).stat()
578 info.append((st[stat.ST_MTIME], shelvedfile(repo, pfx).filename()))
578 info.append((st[stat.ST_MTIME], shelvedfile(repo, pfx).filename()))
579 return sorted(info, reverse=True)
579 return sorted(info, reverse=True)
580
580
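# listshelves() returns (mtime, filename) pairs sorted newest-first, where
# filename is whatever shelvedfile(...).filename() produces for the shelve's
# base name.  A hypothetical result might look like:
#
#   [(1540000000, '/repo/.hg/shelve/default-01'),
#    (1539990000, '/repo/.hg/shelve/default')]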
581 def listcmd(ui, repo, pats, opts):
581 def listcmd(ui, repo, pats, opts):
582 """subcommand that displays the list of shelves"""
582 """subcommand that displays the list of shelves"""
583 pats = set(pats)
583 pats = set(pats)
584 width = 80
584 width = 80
585 if not ui.plain():
585 if not ui.plain():
586 width = ui.termwidth()
586 width = ui.termwidth()
587 namelabel = 'shelve.newest'
587 namelabel = 'shelve.newest'
588 ui.pager('shelve')
588 ui.pager('shelve')
589 for mtime, name in listshelves(repo):
589 for mtime, name in listshelves(repo):
590 sname = util.split(name)[1]
590 sname = util.split(name)[1]
591 if pats and sname not in pats:
591 if pats and sname not in pats:
592 continue
592 continue
593 ui.write(sname, label=namelabel)
593 ui.write(sname, label=namelabel)
594 namelabel = 'shelve.name'
594 namelabel = 'shelve.name'
595 if ui.quiet:
595 if ui.quiet:
596 ui.write('\n')
596 ui.write('\n')
597 continue
597 continue
598 ui.write(' ' * (16 - len(sname)))
598 ui.write(' ' * (16 - len(sname)))
599 used = 16
599 used = 16
600 date = dateutil.makedate(mtime)
600 date = dateutil.makedate(mtime)
601 age = '(%s)' % templatefilters.age(date, abbrev=True)
601 age = '(%s)' % templatefilters.age(date, abbrev=True)
602 ui.write(age, label='shelve.age')
602 ui.write(age, label='shelve.age')
603 ui.write(' ' * (12 - len(age)))
603 ui.write(' ' * (12 - len(age)))
604 used += 12
604 used += 12
605 with open(name + '.' + patchextension, 'rb') as fp:
605 with open(name + '.' + patchextension, 'rb') as fp:
606 while True:
606 while True:
607 line = fp.readline()
607 line = fp.readline()
608 if not line:
608 if not line:
609 break
609 break
610 if not line.startswith('#'):
610 if not line.startswith('#'):
611 desc = line.rstrip()
611 desc = line.rstrip()
612 if ui.formatted():
612 if ui.formatted():
613 desc = stringutil.ellipsis(desc, width - used)
613 desc = stringutil.ellipsis(desc, width - used)
614 ui.write(desc)
614 ui.write(desc)
615 break
615 break
616 ui.write('\n')
616 ui.write('\n')
617 if not (opts['patch'] or opts['stat']):
617 if not (opts['patch'] or opts['stat']):
618 continue
618 continue
619 difflines = fp.readlines()
619 difflines = fp.readlines()
620 if opts['patch']:
620 if opts['patch']:
621 for chunk, label in patch.difflabel(iter, difflines):
621 for chunk, label in patch.difflabel(iter, difflines):
622 ui.write(chunk, label=label)
622 ui.write(chunk, label=label)
623 if opts['stat']:
623 if opts['stat']:
624 for chunk, label in patch.diffstatui(difflines, width=width):
624 for chunk, label in patch.diffstatui(difflines, width=width):
625 ui.write(chunk, label=label)
625 ui.write(chunk, label=label)
626
626
627 def patchcmds(ui, repo, pats, opts):
627 def patchcmds(ui, repo, pats, opts):
628 """subcommand that displays shelves"""
628 """subcommand that displays shelves"""
629 if len(pats) == 0:
629 if len(pats) == 0:
630 shelves = listshelves(repo)
630 shelves = listshelves(repo)
631 if not shelves:
631 if not shelves:
632 raise error.Abort(_("there are no shelves to show"))
632 raise error.Abort(_("there are no shelves to show"))
633 mtime, name = shelves[0]
633 mtime, name = shelves[0]
634 sname = util.split(name)[1]
634 sname = util.split(name)[1]
635 pats = [sname]
635 pats = [sname]
636
636
637 for shelfname in pats:
637 for shelfname in pats:
638 if not shelvedfile(repo, shelfname, patchextension).exists():
638 if not shelvedfile(repo, shelfname, patchextension).exists():
639 raise error.Abort(_("cannot find shelf %s") % shelfname)
639 raise error.Abort(_("cannot find shelf %s") % shelfname)
640
640
641 listcmd(ui, repo, pats, opts)
641 listcmd(ui, repo, pats, opts)
642
642
643 def checkparents(repo, state):
643 def checkparents(repo, state):
644 """check parent while resuming an unshelve"""
644 """check parent while resuming an unshelve"""
645 if state.parents != repo.dirstate.parents():
645 if state.parents != repo.dirstate.parents():
646 raise error.Abort(_('working directory parents do not match unshelve '
646 raise error.Abort(_('working directory parents do not match unshelve '
647 'state'))
647 'state'))
648
648
649 def pathtofiles(repo, files):
649 def pathtofiles(repo, files):
650 cwd = repo.getcwd()
650 cwd = repo.getcwd()
651 return [repo.pathto(f, cwd) for f in files]
651 return [repo.pathto(f, cwd) for f in files]
652
652
653 def unshelveabort(ui, repo, state, opts):
653 def unshelveabort(ui, repo, state, opts):
654 """subcommand that abort an in-progress unshelve"""
654 """subcommand that abort an in-progress unshelve"""
655 with repo.lock():
655 with repo.lock():
656 try:
656 try:
657 checkparents(repo, state)
657 checkparents(repo, state)
658
658
659 merge.update(repo, state.pendingctx, False, True)
659 merge.update(repo, state.pendingctx, branchmerge=False, force=True)
660 if (state.activebookmark
660 if (state.activebookmark
661 and state.activebookmark in repo._bookmarks):
661 and state.activebookmark in repo._bookmarks):
662 bookmarks.activate(repo, state.activebookmark)
662 bookmarks.activate(repo, state.activebookmark)
663
663
664 if repo.vfs.exists('unshelverebasestate'):
664 if repo.vfs.exists('unshelverebasestate'):
665 repo.vfs.rename('unshelverebasestate', 'rebasestate')
665 repo.vfs.rename('unshelverebasestate', 'rebasestate')
666 rebase.clearstatus(repo)
666 rebase.clearstatus(repo)
667
667
668 mergefiles(ui, repo, state.wctx, state.pendingctx)
668 mergefiles(ui, repo, state.wctx, state.pendingctx)
669 if not phases.supportinternal(repo):
669 if not phases.supportinternal(repo):
670 repair.strip(ui, repo, state.nodestoremove, backup=False,
670 repair.strip(ui, repo, state.nodestoremove, backup=False,
671 topic='shelve')
671 topic='shelve')
672 finally:
672 finally:
673 shelvedstate.clear(repo)
673 shelvedstate.clear(repo)
674 ui.warn(_("unshelve of '%s' aborted\n") % state.name)
674 ui.warn(_("unshelve of '%s' aborted\n") % state.name)
675
675
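# The merge.update() call above is what this change is about: update()
# takes several positional booleans, and naming them makes the intent
# visible at the call site.  Compare (both lines taken from the hunk above):
#
#   merge.update(repo, state.pendingctx, False, True)
#   merge.update(repo, state.pendingctx, branchmerge=False, force=True)
#
# i.e. a forced plain update back to the pending context, not a branch merge.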
676 def mergefiles(ui, repo, wctx, shelvectx):
676 def mergefiles(ui, repo, wctx, shelvectx):
677 """updates to wctx and merges the changes from shelvectx into the
677 """updates to wctx and merges the changes from shelvectx into the
678 dirstate."""
678 dirstate."""
679 with ui.configoverride({('ui', 'quiet'): True}):
679 with ui.configoverride({('ui', 'quiet'): True}):
680 hg.update(repo, wctx.node())
680 hg.update(repo, wctx.node())
681 files = []
681 files = []
682 files.extend(shelvectx.files())
682 files.extend(shelvectx.files())
683 files.extend(shelvectx.parents()[0].files())
683 files.extend(shelvectx.parents()[0].files())
684
684
685 # revert will overwrite unknown files, so move them out of the way
685 # revert will overwrite unknown files, so move them out of the way
686 for file in repo.status(unknown=True).unknown:
686 for file in repo.status(unknown=True).unknown:
687 if file in files:
687 if file in files:
688 util.rename(file, scmutil.origpath(ui, repo, file))
688 util.rename(file, scmutil.origpath(ui, repo, file))
689 ui.pushbuffer(True)
689 ui.pushbuffer(True)
690 cmdutil.revert(ui, repo, shelvectx, repo.dirstate.parents(),
690 cmdutil.revert(ui, repo, shelvectx, repo.dirstate.parents(),
691 *pathtofiles(repo, files),
691 *pathtofiles(repo, files),
692 **{r'no_backup': True})
692 **{r'no_backup': True})
693 ui.popbuffer()
693 ui.popbuffer()
694
694
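# mergefiles() above works by quietly updating back to wctx, moving any
# unknown files that would be overwritten aside (to the path given by
# scmutil.origpath(), typically FILE.orig), and then using cmdutil.revert
# with no_backup=True to apply shelvectx's changes to the dirstate without
# leaving extra backup files behind.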
695 def restorebranch(ui, repo, branchtorestore):
695 def restorebranch(ui, repo, branchtorestore):
696 if branchtorestore and branchtorestore != repo.dirstate.branch():
696 if branchtorestore and branchtorestore != repo.dirstate.branch():
697 repo.dirstate.setbranch(branchtorestore)
697 repo.dirstate.setbranch(branchtorestore)
698 ui.status(_('marked working directory as branch %s\n')
698 ui.status(_('marked working directory as branch %s\n')
699 % branchtorestore)
699 % branchtorestore)
700
700
701 def unshelvecleanup(ui, repo, name, opts):
701 def unshelvecleanup(ui, repo, name, opts):
702 """remove related files after an unshelve"""
702 """remove related files after an unshelve"""
703 if not opts.get('keep'):
703 if not opts.get('keep'):
704 for filetype in shelvefileextensions:
704 for filetype in shelvefileextensions:
705 shfile = shelvedfile(repo, name, filetype)
705 shfile = shelvedfile(repo, name, filetype)
706 if shfile.exists():
706 if shfile.exists():
707 shfile.movetobackup()
707 shfile.movetobackup()
708 cleanupoldbackups(repo)
708 cleanupoldbackups(repo)
709
709
710 def unshelvecontinue(ui, repo, state, opts):
710 def unshelvecontinue(ui, repo, state, opts):
711 """subcommand to continue an in-progress unshelve"""
711 """subcommand to continue an in-progress unshelve"""
712 # We're finishing off a merge. First parent is our original
712 # We're finishing off a merge. First parent is our original
713 # parent, second is the temporary "fake" commit we're unshelving.
713 # parent, second is the temporary "fake" commit we're unshelving.
714 with repo.lock():
714 with repo.lock():
715 checkparents(repo, state)
715 checkparents(repo, state)
716 ms = merge.mergestate.read(repo)
716 ms = merge.mergestate.read(repo)
717 if list(ms.unresolved()):
717 if list(ms.unresolved()):
718 raise error.Abort(
718 raise error.Abort(
719 _("unresolved conflicts, can't continue"),
719 _("unresolved conflicts, can't continue"),
720 hint=_("see 'hg resolve', then 'hg unshelve --continue'"))
720 hint=_("see 'hg resolve', then 'hg unshelve --continue'"))
721
721
722 shelvectx = repo[state.parents[1]]
722 shelvectx = repo[state.parents[1]]
723 pendingctx = state.pendingctx
723 pendingctx = state.pendingctx
724
724
725 with repo.dirstate.parentchange():
725 with repo.dirstate.parentchange():
726 repo.setparents(state.pendingctx.node(), nodemod.nullid)
726 repo.setparents(state.pendingctx.node(), nodemod.nullid)
727 repo.dirstate.write(repo.currenttransaction())
727 repo.dirstate.write(repo.currenttransaction())
728
728
729 targetphase = phases.internal
729 targetphase = phases.internal
730 if not phases.supportinternal(repo):
730 if not phases.supportinternal(repo):
731 targetphase = phases.secret
731 targetphase = phases.secret
732 overrides = {('phases', 'new-commit'): targetphase}
732 overrides = {('phases', 'new-commit'): targetphase}
733 with repo.ui.configoverride(overrides, 'unshelve'):
733 with repo.ui.configoverride(overrides, 'unshelve'):
734 with repo.dirstate.parentchange():
734 with repo.dirstate.parentchange():
735 repo.setparents(state.parents[0], nodemod.nullid)
735 repo.setparents(state.parents[0], nodemod.nullid)
736 newnode = repo.commit(text=shelvectx.description(),
736 newnode = repo.commit(text=shelvectx.description(),
737 extra=shelvectx.extra(),
737 extra=shelvectx.extra(),
738 user=shelvectx.user(),
738 user=shelvectx.user(),
739 date=shelvectx.date())
739 date=shelvectx.date())
740
740
741 if newnode is None:
741 if newnode is None:
742 # If it ended up being a no-op commit, then the normal
742 # If it ended up being a no-op commit, then the normal
743 # merge state clean-up path doesn't happen, so do it
743 # merge state clean-up path doesn't happen, so do it
744 # here. Fix issue5494
744 # here. Fix issue5494
745 merge.mergestate.clean(repo)
745 merge.mergestate.clean(repo)
746 shelvectx = state.pendingctx
746 shelvectx = state.pendingctx
747 msg = _('note: unshelved changes already existed '
747 msg = _('note: unshelved changes already existed '
748 'in the working copy\n')
748 'in the working copy\n')
749 ui.status(msg)
749 ui.status(msg)
750 else:
750 else:
751 # only strip the shelvectx if we produced one
751 # only strip the shelvectx if we produced one
752 state.nodestoremove.append(newnode)
752 state.nodestoremove.append(newnode)
753 shelvectx = repo[newnode]
753 shelvectx = repo[newnode]
754
754
755 hg.updaterepo(repo, pendingctx.node(), overwrite=False)
755 hg.updaterepo(repo, pendingctx.node(), overwrite=False)
756
756
757 if repo.vfs.exists('unshelverebasestate'):
757 if repo.vfs.exists('unshelverebasestate'):
758 repo.vfs.rename('unshelverebasestate', 'rebasestate')
758 repo.vfs.rename('unshelverebasestate', 'rebasestate')
759 rebase.clearstatus(repo)
759 rebase.clearstatus(repo)
760
760
761 mergefiles(ui, repo, state.wctx, shelvectx)
761 mergefiles(ui, repo, state.wctx, shelvectx)
762 restorebranch(ui, repo, state.branchtorestore)
762 restorebranch(ui, repo, state.branchtorestore)
763
763
764 if not phases.supportinternal(repo):
764 if not phases.supportinternal(repo):
765 repair.strip(ui, repo, state.nodestoremove, backup=False,
765 repair.strip(ui, repo, state.nodestoremove, backup=False,
766 topic='shelve')
766 topic='shelve')
767 _restoreactivebookmark(repo, state.activebookmark)
767 _restoreactivebookmark(repo, state.activebookmark)
768 shelvedstate.clear(repo)
768 shelvedstate.clear(repo)
769 unshelvecleanup(ui, repo, state.name, opts)
769 unshelvecleanup(ui, repo, state.name, opts)
770 ui.status(_("unshelve of '%s' complete\n") % state.name)
770 ui.status(_("unshelve of '%s' complete\n") % state.name)
771
771
772 def _commitworkingcopychanges(ui, repo, opts, tmpwctx):
772 def _commitworkingcopychanges(ui, repo, opts, tmpwctx):
773 """Temporarily commit working copy changes before moving unshelve commit"""
773 """Temporarily commit working copy changes before moving unshelve commit"""
774 # Store pending changes in a commit and remember added in case a shelve
774 # Store pending changes in a commit and remember added in case a shelve
775 # contains unknown files that are part of the pending change
775 # contains unknown files that are part of the pending change
776 s = repo.status()
776 s = repo.status()
777 addedbefore = frozenset(s.added)
777 addedbefore = frozenset(s.added)
778 if not (s.modified or s.added or s.removed):
778 if not (s.modified or s.added or s.removed):
779 return tmpwctx, addedbefore
779 return tmpwctx, addedbefore
780 ui.status(_("temporarily committing pending changes "
780 ui.status(_("temporarily committing pending changes "
781 "(restore with 'hg unshelve --abort')\n"))
781 "(restore with 'hg unshelve --abort')\n"))
782 extra = {'internal': 'shelve'}
782 extra = {'internal': 'shelve'}
783 commitfunc = getcommitfunc(extra=extra, interactive=False,
783 commitfunc = getcommitfunc(extra=extra, interactive=False,
784 editor=False)
784 editor=False)
785 tempopts = {}
785 tempopts = {}
786 tempopts['message'] = "pending changes temporary commit"
786 tempopts['message'] = "pending changes temporary commit"
787 tempopts['date'] = opts.get('date')
787 tempopts['date'] = opts.get('date')
788 with ui.configoverride({('ui', 'quiet'): True}):
788 with ui.configoverride({('ui', 'quiet'): True}):
789 node = cmdutil.commit(ui, repo, commitfunc, [], tempopts)
789 node = cmdutil.commit(ui, repo, commitfunc, [], tempopts)
790 tmpwctx = repo[node]
790 tmpwctx = repo[node]
791 return tmpwctx, addedbefore
791 return tmpwctx, addedbefore
792
792
793 def _unshelverestorecommit(ui, repo, basename):
793 def _unshelverestorecommit(ui, repo, basename):
794 """Recreate commit in the repository during the unshelve"""
794 """Recreate commit in the repository during the unshelve"""
795 repo = repo.unfiltered()
795 repo = repo.unfiltered()
796 node = None
796 node = None
797 if shelvedfile(repo, basename, 'shelve').exists():
797 if shelvedfile(repo, basename, 'shelve').exists():
798 node = shelvedfile(repo, basename, 'shelve').readinfo()['node']
798 node = shelvedfile(repo, basename, 'shelve').readinfo()['node']
799 if node is None or node not in repo:
799 if node is None or node not in repo:
800 with ui.configoverride({('ui', 'quiet'): True}):
800 with ui.configoverride({('ui', 'quiet'): True}):
801 shelvectx = shelvedfile(repo, basename, 'hg').applybundle()
801 shelvectx = shelvedfile(repo, basename, 'hg').applybundle()
802 # We might not strip the unbundled changeset, so we should keep track of
802 # We might not strip the unbundled changeset, so we should keep track of
803 # the unshelve node in case we need to reuse it (eg: unshelve --keep)
803 # the unshelve node in case we need to reuse it (eg: unshelve --keep)
804 if node is None:
804 if node is None:
805 info = {'node': nodemod.hex(shelvectx.node())}
805 info = {'node': nodemod.hex(shelvectx.node())}
806 shelvedfile(repo, basename, 'shelve').writeinfo(info)
806 shelvedfile(repo, basename, 'shelve').writeinfo(info)
807 else:
807 else:
808 shelvectx = repo[node]
808 shelvectx = repo[node]
809
809
810 return repo, shelvectx
810 return repo, shelvectx
811
811
812 def _rebaserestoredcommit(ui, repo, opts, tr, oldtiprev, basename, pctx,
812 def _rebaserestoredcommit(ui, repo, opts, tr, oldtiprev, basename, pctx,
813 tmpwctx, shelvectx, branchtorestore,
813 tmpwctx, shelvectx, branchtorestore,
814 activebookmark):
814 activebookmark):
815 """Rebase restored commit from its original location to a destination"""
815 """Rebase restored commit from its original location to a destination"""
816 # If the shelve is not immediately on top of the commit
816 # If the shelve is not immediately on top of the commit
817 # we'll be merging with, rebase it to be on top.
817 # we'll be merging with, rebase it to be on top.
818 if tmpwctx.node() == shelvectx.parents()[0].node():
818 if tmpwctx.node() == shelvectx.parents()[0].node():
819 return shelvectx
819 return shelvectx
820
820
821 overrides = {
821 overrides = {
822 ('ui', 'forcemerge'): opts.get('tool', ''),
822 ('ui', 'forcemerge'): opts.get('tool', ''),
823 ('phases', 'new-commit'): phases.secret,
823 ('phases', 'new-commit'): phases.secret,
824 }
824 }
825 with repo.ui.configoverride(overrides, 'unshelve'):
825 with repo.ui.configoverride(overrides, 'unshelve'):
826 ui.status(_('rebasing shelved changes\n'))
826 ui.status(_('rebasing shelved changes\n'))
827 stats = merge.graft(repo, shelvectx, shelvectx.p1(),
827 stats = merge.graft(repo, shelvectx, shelvectx.p1(),
828 labels=['shelve', 'working-copy'],
828 labels=['shelve', 'working-copy'],
829 keepconflictparent=True)
829 keepconflictparent=True)
830 if stats.unresolvedcount:
830 if stats.unresolvedcount:
831 tr.close()
831 tr.close()
832
832
833 nodestoremove = [repo.changelog.node(rev)
833 nodestoremove = [repo.changelog.node(rev)
834 for rev in pycompat.xrange(oldtiprev, len(repo))]
834 for rev in pycompat.xrange(oldtiprev, len(repo))]
835 shelvedstate.save(repo, basename, pctx, tmpwctx, nodestoremove,
835 shelvedstate.save(repo, basename, pctx, tmpwctx, nodestoremove,
836 branchtorestore, opts.get('keep'), activebookmark)
836 branchtorestore, opts.get('keep'), activebookmark)
837 raise error.InterventionRequired(
837 raise error.InterventionRequired(
838 _("unresolved conflicts (see 'hg resolve', then "
838 _("unresolved conflicts (see 'hg resolve', then "
839 "'hg unshelve --continue')"))
839 "'hg unshelve --continue')"))
840
840
841 with repo.dirstate.parentchange():
841 with repo.dirstate.parentchange():
842 repo.setparents(tmpwctx.node(), nodemod.nullid)
842 repo.setparents(tmpwctx.node(), nodemod.nullid)
843 newnode = repo.commit(text=shelvectx.description(),
843 newnode = repo.commit(text=shelvectx.description(),
844 extra=shelvectx.extra(),
844 extra=shelvectx.extra(),
845 user=shelvectx.user(),
845 user=shelvectx.user(),
846 date=shelvectx.date())
846 date=shelvectx.date())
847
847
848 if newnode is None:
848 if newnode is None:
849 # If it ended up being a no-op commit, then the normal
849 # If it ended up being a no-op commit, then the normal
850 # merge state clean-up path doesn't happen, so do it
850 # merge state clean-up path doesn't happen, so do it
851 # here. Fix issue5494
851 # here. Fix issue5494
852 merge.mergestate.clean(repo)
852 merge.mergestate.clean(repo)
853 shelvectx = tmpwctx
853 shelvectx = tmpwctx
854 msg = _('note: unshelved changes already existed '
854 msg = _('note: unshelved changes already existed '
855 'in the working copy\n')
855 'in the working copy\n')
856 ui.status(msg)
856 ui.status(msg)
857 else:
857 else:
858 shelvectx = repo[newnode]
858 shelvectx = repo[newnode]
859 hg.updaterepo(repo, tmpwctx.node(), False)
859 hg.updaterepo(repo, tmpwctx.node(), False)
860
860
861 return shelvectx
861 return shelvectx
862
862
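# Conflict handling in _rebaserestoredcommit() above: when merge.graft()
# reports unresolved files, the transaction is closed so the unbundled
# commits survive, shelvedstate.save() records everything needed to resume
# (nodes to strip, branch to restore, keep flag, active bookmark), and
# InterventionRequired tells the user to run 'hg unshelve --continue'.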
863 def _forgetunknownfiles(repo, shelvectx, addedbefore):
863 def _forgetunknownfiles(repo, shelvectx, addedbefore):
864 # Forget any files that were unknown before the shelve, unknown before
864 # Forget any files that were unknown before the shelve, unknown before
865 # unshelve started, but are now added.
865 # unshelve started, but are now added.
866 shelveunknown = shelvectx.extra().get('shelve_unknown')
866 shelveunknown = shelvectx.extra().get('shelve_unknown')
867 if not shelveunknown:
867 if not shelveunknown:
868 return
868 return
869 shelveunknown = frozenset(shelveunknown.split('\0'))
869 shelveunknown = frozenset(shelveunknown.split('\0'))
870 addedafter = frozenset(repo.status().added)
870 addedafter = frozenset(repo.status().added)
871 toforget = (addedafter & shelveunknown) - addedbefore
871 toforget = (addedafter & shelveunknown) - addedbefore
872 repo[None].forget(toforget)
872 repo[None].forget(toforget)
873
873
874 def _finishunshelve(repo, oldtiprev, tr, activebookmark):
874 def _finishunshelve(repo, oldtiprev, tr, activebookmark):
875 _restoreactivebookmark(repo, activebookmark)
875 _restoreactivebookmark(repo, activebookmark)
876 # The transaction aborting will strip all the commits for us,
876 # The transaction aborting will strip all the commits for us,
877 # but it doesn't update the inmemory structures, so addchangegroup
877 # but it doesn't update the inmemory structures, so addchangegroup
878 # hooks still fire and try to operate on the missing commits.
878 # hooks still fire and try to operate on the missing commits.
879 # Clean up manually to prevent this.
879 # Clean up manually to prevent this.
880 repo.unfiltered().changelog.strip(oldtiprev, tr)
880 repo.unfiltered().changelog.strip(oldtiprev, tr)
881 _aborttransaction(repo)
881 _aborttransaction(repo)
882
882
883 def _checkunshelveuntrackedproblems(ui, repo, shelvectx):
883 def _checkunshelveuntrackedproblems(ui, repo, shelvectx):
884 """Check potential problems which may result from working
884 """Check potential problems which may result from working
885 copy having untracked changes."""
885 copy having untracked changes."""
886 wcdeleted = set(repo.status().deleted)
886 wcdeleted = set(repo.status().deleted)
887 shelvetouched = set(shelvectx.files())
887 shelvetouched = set(shelvectx.files())
888 intersection = wcdeleted.intersection(shelvetouched)
888 intersection = wcdeleted.intersection(shelvetouched)
889 if intersection:
889 if intersection:
890 m = _("shelved change touches missing files")
890 m = _("shelved change touches missing files")
891 hint = _("run hg status to see which files are missing")
891 hint = _("run hg status to see which files are missing")
892 raise error.Abort(m, hint=hint)
892 raise error.Abort(m, hint=hint)
893
893
894 @command('unshelve',
894 @command('unshelve',
895 [('a', 'abort', None,
895 [('a', 'abort', None,
896 _('abort an incomplete unshelve operation')),
896 _('abort an incomplete unshelve operation')),
897 ('c', 'continue', None,
897 ('c', 'continue', None,
898 _('continue an incomplete unshelve operation')),
898 _('continue an incomplete unshelve operation')),
899 ('k', 'keep', None,
899 ('k', 'keep', None,
900 _('keep shelve after unshelving')),
900 _('keep shelve after unshelving')),
901 ('n', 'name', '',
901 ('n', 'name', '',
902 _('restore shelved change with given name'), _('NAME')),
902 _('restore shelved change with given name'), _('NAME')),
903 ('t', 'tool', '', _('specify merge tool')),
903 ('t', 'tool', '', _('specify merge tool')),
904 ('', 'date', '',
904 ('', 'date', '',
905 _('set date for temporary commits (DEPRECATED)'), _('DATE'))],
905 _('set date for temporary commits (DEPRECATED)'), _('DATE'))],
906 _('hg unshelve [[-n] SHELVED]'),
906 _('hg unshelve [[-n] SHELVED]'),
907 helpcategory=command.CATEGORY_WORKING_DIRECTORY)
907 helpcategory=command.CATEGORY_WORKING_DIRECTORY)
908 def unshelve(ui, repo, *shelved, **opts):
908 def unshelve(ui, repo, *shelved, **opts):
909 """restore a shelved change to the working directory
909 """restore a shelved change to the working directory
910
910
911 This command accepts an optional name of a shelved change to
911 This command accepts an optional name of a shelved change to
912 restore. If none is given, the most recent shelved change is used.
912 restore. If none is given, the most recent shelved change is used.
913
913
914 If a shelved change is applied successfully, the bundle that
914 If a shelved change is applied successfully, the bundle that
915 contains the shelved changes is moved to a backup location
915 contains the shelved changes is moved to a backup location
916 (.hg/shelve-backup).
916 (.hg/shelve-backup).
917
917
918 Since you can restore a shelved change on top of an arbitrary
918 Since you can restore a shelved change on top of an arbitrary
919 commit, it is possible that unshelving will result in a conflict
919 commit, it is possible that unshelving will result in a conflict
920 between your changes and the commits you are unshelving onto. If
920 between your changes and the commits you are unshelving onto. If
921 this occurs, you must resolve the conflict, then use
921 this occurs, you must resolve the conflict, then use
922 ``--continue`` to complete the unshelve operation. (The bundle
922 ``--continue`` to complete the unshelve operation. (The bundle
923 will not be moved until you successfully complete the unshelve.)
923 will not be moved until you successfully complete the unshelve.)
924
924
925 (Alternatively, you can use ``--abort`` to abandon an unshelve
925 (Alternatively, you can use ``--abort`` to abandon an unshelve
926 that causes a conflict. This reverts the unshelved changes, and
926 that causes a conflict. This reverts the unshelved changes, and
927 leaves the bundle in place.)
927 leaves the bundle in place.)
928
928
929 If a bare shelved change (when no files are specified, without the
929 If a bare shelved change (when no files are specified, without the
930 interactive, include and exclude options) was done on a newly created
930 interactive, include and exclude options) was done on a newly created
931 branch, it restores that branch information to the working directory.
931 branch, it restores that branch information to the working directory.
932
932
933 After a successful unshelve, the shelved changes are stored in a
933 After a successful unshelve, the shelved changes are stored in a
934 backup directory. Only the N most recent backups are kept. N
934 backup directory. Only the N most recent backups are kept. N
935 defaults to 10 but can be overridden using the ``shelve.maxbackups``
935 defaults to 10 but can be overridden using the ``shelve.maxbackups``
936 configuration option.
936 configuration option.
937
937
938 .. container:: verbose
938 .. container:: verbose
939
939
940 The timestamp in seconds is used to decide the order of backups.
940 The timestamp in seconds is used to decide the order of backups.
941 More than ``maxbackups`` backups may be kept if identical timestamps
941 More than ``maxbackups`` backups may be kept if identical timestamps
942 prevent deciding their exact order, for safety.
942 prevent deciding their exact order, for safety.
943 """
943 """
944 with repo.wlock():
944 with repo.wlock():
945 return _dounshelve(ui, repo, *shelved, **opts)
945 return _dounshelve(ui, repo, *shelved, **opts)
946
946
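# A typical conflict workflow, following the docstring above (the shelve
# name 'mywork' is hypothetical):
#
#   $ hg unshelve mywork
#   ... resolve the reported conflicts in the listed files ...
#   $ hg resolve --mark <files>
#   $ hg unshelve --continue      # or 'hg unshelve --abort' to back out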
947 def _dounshelve(ui, repo, *shelved, **opts):
947 def _dounshelve(ui, repo, *shelved, **opts):
948 opts = pycompat.byteskwargs(opts)
948 opts = pycompat.byteskwargs(opts)
949 abortf = opts.get('abort')
949 abortf = opts.get('abort')
950 continuef = opts.get('continue')
950 continuef = opts.get('continue')
951 if not abortf and not continuef:
951 if not abortf and not continuef:
952 cmdutil.checkunfinished(repo)
952 cmdutil.checkunfinished(repo)
953 shelved = list(shelved)
953 shelved = list(shelved)
954 if opts.get("name"):
954 if opts.get("name"):
955 shelved.append(opts["name"])
955 shelved.append(opts["name"])
956
956
957 if abortf or continuef:
957 if abortf or continuef:
958 if abortf and continuef:
958 if abortf and continuef:
959 raise error.Abort(_('cannot use both abort and continue'))
959 raise error.Abort(_('cannot use both abort and continue'))
960 if shelved:
960 if shelved:
961 raise error.Abort(_('cannot combine abort/continue with '
961 raise error.Abort(_('cannot combine abort/continue with '
962 'naming a shelved change'))
962 'naming a shelved change'))
963 if abortf and opts.get('tool', False):
963 if abortf and opts.get('tool', False):
964 ui.warn(_('tool option will be ignored\n'))
964 ui.warn(_('tool option will be ignored\n'))
965
965
966 try:
966 try:
967 state = shelvedstate.load(repo)
967 state = shelvedstate.load(repo)
968 if opts.get('keep') is None:
968 if opts.get('keep') is None:
969 opts['keep'] = state.keep
969 opts['keep'] = state.keep
970 except IOError as err:
970 except IOError as err:
971 if err.errno != errno.ENOENT:
971 if err.errno != errno.ENOENT:
972 raise
972 raise
973 cmdutil.wrongtooltocontinue(repo, _('unshelve'))
973 cmdutil.wrongtooltocontinue(repo, _('unshelve'))
974 except error.CorruptedState as err:
974 except error.CorruptedState as err:
975 ui.debug(pycompat.bytestr(err) + '\n')
975 ui.debug(pycompat.bytestr(err) + '\n')
976 if continuef:
976 if continuef:
977 msg = _('corrupted shelved state file')
977 msg = _('corrupted shelved state file')
978 hint = _('please run hg unshelve --abort to abort unshelve '
978 hint = _('please run hg unshelve --abort to abort unshelve '
979 'operation')
979 'operation')
980 raise error.Abort(msg, hint=hint)
980 raise error.Abort(msg, hint=hint)
981 elif abortf:
981 elif abortf:
982 msg = _('could not read shelved state file, your working copy '
982 msg = _('could not read shelved state file, your working copy '
983 'may be in an unexpected state\nplease update to some '
983 'may be in an unexpected state\nplease update to some '
984 'commit\n')
984 'commit\n')
985 ui.warn(msg)
985 ui.warn(msg)
986 shelvedstate.clear(repo)
986 shelvedstate.clear(repo)
987 return
987 return
988
988
989 if abortf:
989 if abortf:
990 return unshelveabort(ui, repo, state, opts)
990 return unshelveabort(ui, repo, state, opts)
991 elif continuef:
991 elif continuef:
992 return unshelvecontinue(ui, repo, state, opts)
992 return unshelvecontinue(ui, repo, state, opts)
993 elif len(shelved) > 1:
993 elif len(shelved) > 1:
994 raise error.Abort(_('can only unshelve one change at a time'))
994 raise error.Abort(_('can only unshelve one change at a time'))
995 elif not shelved:
995 elif not shelved:
996 shelved = listshelves(repo)
996 shelved = listshelves(repo)
997 if not shelved:
997 if not shelved:
998 raise error.Abort(_('no shelved changes to apply!'))
998 raise error.Abort(_('no shelved changes to apply!'))
999 basename = util.split(shelved[0][1])[1]
999 basename = util.split(shelved[0][1])[1]
1000 ui.status(_("unshelving change '%s'\n") % basename)
1000 ui.status(_("unshelving change '%s'\n") % basename)
1001 else:
1001 else:
1002 basename = shelved[0]
1002 basename = shelved[0]
1003
1003
1004 if not shelvedfile(repo, basename, patchextension).exists():
1004 if not shelvedfile(repo, basename, patchextension).exists():
1005 raise error.Abort(_("shelved change '%s' not found") % basename)
1005 raise error.Abort(_("shelved change '%s' not found") % basename)
1006
1006
1007 repo = repo.unfiltered()
1007 repo = repo.unfiltered()
1008 lock = tr = None
1008 lock = tr = None
1009 try:
1009 try:
1010 lock = repo.lock()
1010 lock = repo.lock()
1011 tr = repo.transaction('unshelve', report=lambda x: None)
1011 tr = repo.transaction('unshelve', report=lambda x: None)
1012 oldtiprev = len(repo)
1012 oldtiprev = len(repo)
1013
1013
1014 pctx = repo['.']
1014 pctx = repo['.']
1015 tmpwctx = pctx
1015 tmpwctx = pctx
1016 # The goal is to have a commit structure like so:
1016 # The goal is to have a commit structure like so:
1017 # ...-> pctx -> tmpwctx -> shelvectx
1017 # ...-> pctx -> tmpwctx -> shelvectx
1018 # where tmpwctx is an optional commit with the user's pending changes
1018 # where tmpwctx is an optional commit with the user's pending changes
1019 # and shelvectx is the unshelved changes. Then we merge it all down
1019 # and shelvectx is the unshelved changes. Then we merge it all down
1020 # to the original pctx.
1020 # to the original pctx.
1021
1021
1022 activebookmark = _backupactivebookmark(repo)
1022 activebookmark = _backupactivebookmark(repo)
1023 tmpwctx, addedbefore = _commitworkingcopychanges(ui, repo, opts,
1023 tmpwctx, addedbefore = _commitworkingcopychanges(ui, repo, opts,
1024 tmpwctx)
1024 tmpwctx)
1025 repo, shelvectx = _unshelverestorecommit(ui, repo, basename)
1025 repo, shelvectx = _unshelverestorecommit(ui, repo, basename)
1026 _checkunshelveuntrackedproblems(ui, repo, shelvectx)
1026 _checkunshelveuntrackedproblems(ui, repo, shelvectx)
1027 branchtorestore = ''
1027 branchtorestore = ''
1028 if shelvectx.branch() != shelvectx.p1().branch():
1028 if shelvectx.branch() != shelvectx.p1().branch():
1029 branchtorestore = shelvectx.branch()
1029 branchtorestore = shelvectx.branch()
1030
1030
1031 shelvectx = _rebaserestoredcommit(ui, repo, opts, tr, oldtiprev,
1031 shelvectx = _rebaserestoredcommit(ui, repo, opts, tr, oldtiprev,
1032 basename, pctx, tmpwctx,
1032 basename, pctx, tmpwctx,
1033 shelvectx, branchtorestore,
1033 shelvectx, branchtorestore,
1034 activebookmark)
1034 activebookmark)
1035 overrides = {('ui', 'forcemerge'): opts.get('tool', '')}
1035 overrides = {('ui', 'forcemerge'): opts.get('tool', '')}
1036 with ui.configoverride(overrides, 'unshelve'):
1036 with ui.configoverride(overrides, 'unshelve'):
1037 mergefiles(ui, repo, pctx, shelvectx)
1037 mergefiles(ui, repo, pctx, shelvectx)
1038 restorebranch(ui, repo, branchtorestore)
1038 restorebranch(ui, repo, branchtorestore)
1039 _forgetunknownfiles(repo, shelvectx, addedbefore)
1039 _forgetunknownfiles(repo, shelvectx, addedbefore)
1040
1040
1041 shelvedstate.clear(repo)
1041 shelvedstate.clear(repo)
1042 _finishunshelve(repo, oldtiprev, tr, activebookmark)
1042 _finishunshelve(repo, oldtiprev, tr, activebookmark)
1043 unshelvecleanup(ui, repo, basename, opts)
1043 unshelvecleanup(ui, repo, basename, opts)
1044 finally:
1044 finally:
1045 if tr:
1045 if tr:
1046 tr.release()
1046 tr.release()
1047 lockmod.release(lock)
1047 lockmod.release(lock)
1048
1048
1049 @command('shelve',
1049 @command('shelve',
1050 [('A', 'addremove', None,
1050 [('A', 'addremove', None,
1051 _('mark new/missing files as added/removed before shelving')),
1051 _('mark new/missing files as added/removed before shelving')),
1052 ('u', 'unknown', None,
1052 ('u', 'unknown', None,
1053 _('store unknown files in the shelve')),
1053 _('store unknown files in the shelve')),
1054 ('', 'cleanup', None,
1054 ('', 'cleanup', None,
1055 _('delete all shelved changes')),
1055 _('delete all shelved changes')),
1056 ('', 'date', '',
1056 ('', 'date', '',
1057 _('shelve with the specified commit date'), _('DATE')),
1057 _('shelve with the specified commit date'), _('DATE')),
1058 ('d', 'delete', None,
1058 ('d', 'delete', None,
1059 _('delete the named shelved change(s)')),
1059 _('delete the named shelved change(s)')),
1060 ('e', 'edit', False,
1060 ('e', 'edit', False,
1061 _('invoke editor on commit messages')),
1061 _('invoke editor on commit messages')),
1062 ('l', 'list', None,
1062 ('l', 'list', None,
1063 _('list current shelves')),
1063 _('list current shelves')),
1064 ('m', 'message', '',
1064 ('m', 'message', '',
1065 _('use text as shelve message'), _('TEXT')),
1065 _('use text as shelve message'), _('TEXT')),
1066 ('n', 'name', '',
1066 ('n', 'name', '',
1067 _('use the given name for the shelved commit'), _('NAME')),
1067 _('use the given name for the shelved commit'), _('NAME')),
1068 ('p', 'patch', None,
1068 ('p', 'patch', None,
1069 _('output patches for changes (provide the names of the shelved '
1069 _('output patches for changes (provide the names of the shelved '
1070 'changes as positional arguments)')),
1070 'changes as positional arguments)')),
1071 ('i', 'interactive', None,
1071 ('i', 'interactive', None,
1072 _('interactive mode, only works while creating a shelve')),
1072 _('interactive mode, only works while creating a shelve')),
1073 ('', 'stat', None,
1073 ('', 'stat', None,
1074 _('output diffstat-style summary of changes (provide the names of '
1074 _('output diffstat-style summary of changes (provide the names of '
1075 'the shelved changes as positional arguments)')
1075 'the shelved changes as positional arguments)')
1076 )] + cmdutil.walkopts,
1076 )] + cmdutil.walkopts,
1077 _('hg shelve [OPTION]... [FILE]...'),
1077 _('hg shelve [OPTION]... [FILE]...'),
1078 helpcategory=command.CATEGORY_WORKING_DIRECTORY)
1078 helpcategory=command.CATEGORY_WORKING_DIRECTORY)
1079 def shelvecmd(ui, repo, *pats, **opts):
1079 def shelvecmd(ui, repo, *pats, **opts):
1080 '''save and set aside changes from the working directory
1080 '''save and set aside changes from the working directory
1081
1081
1082 Shelving takes files that "hg status" reports as not clean, saves
1082 Shelving takes files that "hg status" reports as not clean, saves
1083 the modifications to a bundle (a shelved change), and reverts the
1083 the modifications to a bundle (a shelved change), and reverts the
1084 files so that their state in the working directory becomes clean.
1084 files so that their state in the working directory becomes clean.
1085
1085
1086 To restore these changes to the working directory, using "hg
1086 To restore these changes to the working directory, using "hg
1087 unshelve"; this will work even if you switch to a different
1087 unshelve"; this will work even if you switch to a different
1088 commit.
1088 commit.
1089
1089
1090 When no files are specified, "hg shelve" saves all not-clean
1090 When no files are specified, "hg shelve" saves all not-clean
1091 files. If specific files or directories are named, only changes to
1091 files. If specific files or directories are named, only changes to
1092 those files are shelved.
1092 those files are shelved.
1093
1093
1094 In a bare shelve (when no files are specified, without the interactive,
1094 In a bare shelve (when no files are specified, without the interactive,
1095 include and exclude options), shelving remembers whether the working
1095 include and exclude options), shelving remembers whether the working
1096 directory was on a newly created branch, in other words on a different
1096 directory was on a newly created branch, in other words on a different
1097 branch than its first parent. In this situation, unshelving restores
1097 branch than its first parent. In this situation, unshelving restores
1098 that branch information to the working directory.
1098 that branch information to the working directory.
1099
1099
1100 Each shelved change has a name that makes it easier to find later.
1100 Each shelved change has a name that makes it easier to find later.
1101 The name of a shelved change defaults to being based on the active
1101 The name of a shelved change defaults to being based on the active
1102 bookmark, or if there is no active bookmark, the current named
1102 bookmark, or if there is no active bookmark, the current named
1103 branch. To specify a different name, use ``--name``.
1103 branch. To specify a different name, use ``--name``.
1104
1104
1105 To see a list of existing shelved changes, use the ``--list``
1105 To see a list of existing shelved changes, use the ``--list``
1106 option. For each shelved change, this will print its name, age,
1106 option. For each shelved change, this will print its name, age,
1107 and description; use ``--patch`` or ``--stat`` for more details.
1107 and description; use ``--patch`` or ``--stat`` for more details.
1108
1108
1109 To delete specific shelved changes, use ``--delete``. To delete
1109 To delete specific shelved changes, use ``--delete``. To delete
1110 all shelved changes, use ``--cleanup``.
1110 all shelved changes, use ``--cleanup``.
1111 '''
1111 '''
1112 opts = pycompat.byteskwargs(opts)
1112 opts = pycompat.byteskwargs(opts)
1113 allowables = [
1113 allowables = [
1114 ('addremove', {'create'}), # 'create' is pseudo action
1114 ('addremove', {'create'}), # 'create' is pseudo action
1115 ('unknown', {'create'}),
1115 ('unknown', {'create'}),
1116 ('cleanup', {'cleanup'}),
1116 ('cleanup', {'cleanup'}),
1117 # ('date', {'create'}), # ignored for passing '--date "0 0"' in tests
1117 # ('date', {'create'}), # ignored for passing '--date "0 0"' in tests
1118 ('delete', {'delete'}),
1118 ('delete', {'delete'}),
1119 ('edit', {'create'}),
1119 ('edit', {'create'}),
1120 ('list', {'list'}),
1120 ('list', {'list'}),
1121 ('message', {'create'}),
1121 ('message', {'create'}),
1122 ('name', {'create'}),
1122 ('name', {'create'}),
1123 ('patch', {'patch', 'list'}),
1123 ('patch', {'patch', 'list'}),
1124 ('stat', {'stat', 'list'}),
1124 ('stat', {'stat', 'list'}),
1125 ]
1125 ]
1126 def checkopt(opt):
1126 def checkopt(opt):
1127 if opts.get(opt):
1127 if opts.get(opt):
1128 for i, allowable in allowables:
1128 for i, allowable in allowables:
1129 if opts[i] and opt not in allowable:
1129 if opts[i] and opt not in allowable:
1130 raise error.Abort(_("options '--%s' and '--%s' may not be "
1130 raise error.Abort(_("options '--%s' and '--%s' may not be "
1131 "used together") % (opt, i))
1131 "used together") % (opt, i))
1132 return True
1132 return True
1133 if checkopt('cleanup'):
1133 if checkopt('cleanup'):
1134 if pats:
1134 if pats:
1135 raise error.Abort(_("cannot specify names when using '--cleanup'"))
1135 raise error.Abort(_("cannot specify names when using '--cleanup'"))
1136 return cleanupcmd(ui, repo)
1136 return cleanupcmd(ui, repo)
1137 elif checkopt('delete'):
1137 elif checkopt('delete'):
1138 return deletecmd(ui, repo, pats)
1138 return deletecmd(ui, repo, pats)
1139 elif checkopt('list'):
1139 elif checkopt('list'):
1140 return listcmd(ui, repo, pats, opts)
1140 return listcmd(ui, repo, pats, opts)
1141 elif checkopt('patch') or checkopt('stat'):
1141 elif checkopt('patch') or checkopt('stat'):
1142 return patchcmds(ui, repo, pats, opts)
1142 return patchcmds(ui, repo, pats, opts)
1143 else:
1143 else:
1144 return createcmd(ui, repo, pats, opts)
1144 return createcmd(ui, repo, pats, opts)
1145
1145
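# checkopt() above enforces that options from different sub-actions are not
# mixed: for every option the user set, the requested action must appear in
# that option's allowable set.  For example, 'hg shelve --delete --patch X'
# aborts because the 'delete' action is not in --patch's allowable set
# ({'patch', 'list'}).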
1146 def extsetup(ui):
1146 def extsetup(ui):
1147 cmdutil.unfinishedstates.append(
1147 cmdutil.unfinishedstates.append(
1148 [shelvedstate._filename, False, False,
1148 [shelvedstate._filename, False, False,
1149 _('unshelve already in progress'),
1149 _('unshelve already in progress'),
1150 _("use 'hg unshelve --continue' or 'hg unshelve --abort'")])
1150 _("use 'hg unshelve --continue' or 'hg unshelve --abort'")])
1151 cmdutil.afterresolvedstates.append(
1151 cmdutil.afterresolvedstates.append(
1152 [shelvedstate._filename, _('hg unshelve --continue')])
1152 [shelvedstate._filename, _('hg unshelve --continue')])
@@ -1,768 +1,769 b''
1 # Patch transplanting extension for Mercurial
1 # Patch transplanting extension for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to transplant changesets from another branch
8 '''command to transplant changesets from another branch
9
9
10 This extension allows you to transplant changes to another parent revision,
10 This extension allows you to transplant changes to another parent revision,
11 possibly in another repository. The transplant is done using 'diff' patches.
11 possibly in another repository. The transplant is done using 'diff' patches.
12
12
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 map from a changeset hash to its hash in the source repository.
14 map from a changeset hash to its hash in the source repository.
15 '''
15 '''
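# A minimal sketch of enabling this bundled extension in an hgrc:
#
#   [extensions]
#   transplant =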
16 from __future__ import absolute_import
16 from __future__ import absolute_import
17
17
18 import os
18 import os
19
19
20 from mercurial.i18n import _
20 from mercurial.i18n import _
21 from mercurial import (
21 from mercurial import (
22 bundlerepo,
22 bundlerepo,
23 cmdutil,
23 cmdutil,
24 error,
24 error,
25 exchange,
25 exchange,
26 hg,
26 hg,
27 logcmdutil,
27 logcmdutil,
28 match,
28 match,
29 merge,
29 merge,
30 node as nodemod,
30 node as nodemod,
31 patch,
31 patch,
32 pycompat,
32 pycompat,
33 registrar,
33 registrar,
34 revlog,
34 revlog,
35 revset,
35 revset,
36 scmutil,
36 scmutil,
37 smartset,
37 smartset,
38 util,
38 util,
39 vfs as vfsmod,
39 vfs as vfsmod,
40 )
40 )
41 from mercurial.utils import (
41 from mercurial.utils import (
42 procutil,
42 procutil,
43 stringutil,
43 stringutil,
44 )
44 )
45
45
46 class TransplantError(error.Abort):
46 class TransplantError(error.Abort):
47 pass
47 pass
48
48
49 cmdtable = {}
49 cmdtable = {}
50 command = registrar.command(cmdtable)
50 command = registrar.command(cmdtable)
51 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
51 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
52 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
52 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
53 # be specifying the version(s) of Mercurial they are tested with, or
53 # be specifying the version(s) of Mercurial they are tested with, or
54 # leave the attribute unspecified.
54 # leave the attribute unspecified.
55 testedwith = 'ships-with-hg-core'
55 testedwith = 'ships-with-hg-core'
56
56
57 configtable = {}
57 configtable = {}
58 configitem = registrar.configitem(configtable)
58 configitem = registrar.configitem(configtable)
59
59
60 configitem('transplant', 'filter',
60 configitem('transplant', 'filter',
61 default=None,
61 default=None,
62 )
62 )
63 configitem('transplant', 'log',
63 configitem('transplant', 'log',
64 default=None,
64 default=None,
65 )
65 )
66
66
67 class transplantentry(object):
67 class transplantentry(object):
68 def __init__(self, lnode, rnode):
68 def __init__(self, lnode, rnode):
69 self.lnode = lnode
69 self.lnode = lnode
70 self.rnode = rnode
70 self.rnode = rnode
71
71
72 class transplants(object):
72 class transplants(object):
73 def __init__(self, path=None, transplantfile=None, opener=None):
73 def __init__(self, path=None, transplantfile=None, opener=None):
74 self.path = path
74 self.path = path
75 self.transplantfile = transplantfile
75 self.transplantfile = transplantfile
76 self.opener = opener
76 self.opener = opener
77
77
78 if not opener:
78 if not opener:
79 self.opener = vfsmod.vfs(self.path)
79 self.opener = vfsmod.vfs(self.path)
80 self.transplants = {}
80 self.transplants = {}
81 self.dirty = False
81 self.dirty = False
82 self.read()
82 self.read()
83
83
84 def read(self):
84 def read(self):
85 abspath = os.path.join(self.path, self.transplantfile)
85 abspath = os.path.join(self.path, self.transplantfile)
86 if self.transplantfile and os.path.exists(abspath):
86 if self.transplantfile and os.path.exists(abspath):
87 for line in self.opener.read(self.transplantfile).splitlines():
87 for line in self.opener.read(self.transplantfile).splitlines():
88 lnode, rnode = map(revlog.bin, line.split(':'))
88 lnode, rnode = map(revlog.bin, line.split(':'))
89 list = self.transplants.setdefault(rnode, [])
89 list = self.transplants.setdefault(rnode, [])
90 list.append(transplantentry(lnode, rnode))
90 list.append(transplantentry(lnode, rnode))
91
91
92 def write(self):
92 def write(self):
93 if self.dirty and self.transplantfile:
93 if self.dirty and self.transplantfile:
94 if not os.path.isdir(self.path):
94 if not os.path.isdir(self.path):
95 os.mkdir(self.path)
95 os.mkdir(self.path)
96 fp = self.opener(self.transplantfile, 'w')
96 fp = self.opener(self.transplantfile, 'w')
97 for list in self.transplants.itervalues():
97 for list in self.transplants.itervalues():
98 for t in list:
98 for t in list:
99 l, r = map(nodemod.hex, (t.lnode, t.rnode))
99 l, r = map(nodemod.hex, (t.lnode, t.rnode))
100 fp.write(l + ':' + r + '\n')
100 fp.write(l + ':' + r + '\n')
101 fp.close()
101 fp.close()
102 self.dirty = False
102 self.dirty = False
103
103
104 def get(self, rnode):
104 def get(self, rnode):
105 return self.transplants.get(rnode) or []
105 return self.transplants.get(rnode) or []
106
106
107 def set(self, lnode, rnode):
107 def set(self, lnode, rnode):
108 list = self.transplants.setdefault(rnode, [])
108 list = self.transplants.setdefault(rnode, [])
109 list.append(transplantentry(lnode, rnode))
109 list.append(transplantentry(lnode, rnode))
110 self.dirty = True
110 self.dirty = True
111
111
112 def remove(self, transplant):
112 def remove(self, transplant):
113 list = self.transplants.get(transplant.rnode)
113 list = self.transplants.get(transplant.rnode)
114 if list:
114 if list:
115 del list[list.index(transplant)]
115 del list[list.index(transplant)]
116 self.dirty = True
116 self.dirty = True
117
117
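# On-disk format written by transplants.write() above: one line per
# transplant, 'hex(lnode):hex(rnode)' -- the local node first, then the
# node it came from in the source repository (hypothetical example):
#
#   8580ff50825a...:1f0dee641bb7...
#
# read() rebuilds the mapping, keyed by the source-repository node, from
# these lines.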
118 class transplanter(object):
118 class transplanter(object):
119 def __init__(self, ui, repo, opts):
119 def __init__(self, ui, repo, opts):
120 self.ui = ui
120 self.ui = ui
121 self.path = repo.vfs.join('transplant')
121 self.path = repo.vfs.join('transplant')
122 self.opener = vfsmod.vfs(self.path)
122 self.opener = vfsmod.vfs(self.path)
123 self.transplants = transplants(self.path, 'transplants',
123 self.transplants = transplants(self.path, 'transplants',
124 opener=self.opener)
124 opener=self.opener)
125 def getcommiteditor():
125 def getcommiteditor():
126 editform = cmdutil.mergeeditform(repo[None], 'transplant')
126 editform = cmdutil.mergeeditform(repo[None], 'transplant')
127 return cmdutil.getcommiteditor(editform=editform,
127 return cmdutil.getcommiteditor(editform=editform,
128 **pycompat.strkwargs(opts))
128 **pycompat.strkwargs(opts))
129 self.getcommiteditor = getcommiteditor
129 self.getcommiteditor = getcommiteditor
130
130
131 def applied(self, repo, node, parent):
131 def applied(self, repo, node, parent):
132 '''returns True if a node is already an ancestor of parent
132 '''returns True if a node is already an ancestor of parent
133 or is parent or has already been transplanted'''
133 or is parent or has already been transplanted'''
134 if hasnode(repo, parent):
134 if hasnode(repo, parent):
135 parentrev = repo.changelog.rev(parent)
135 parentrev = repo.changelog.rev(parent)
136 if hasnode(repo, node):
136 if hasnode(repo, node):
137 rev = repo.changelog.rev(node)
137 rev = repo.changelog.rev(node)
138 reachable = repo.changelog.ancestors([parentrev], rev,
138 reachable = repo.changelog.ancestors([parentrev], rev,
139 inclusive=True)
139 inclusive=True)
140 if rev in reachable:
140 if rev in reachable:
141 return True
141 return True
142 for t in self.transplants.get(node):
142 for t in self.transplants.get(node):
143 # it might have been stripped
143 # it might have been stripped
144 if not hasnode(repo, t.lnode):
144 if not hasnode(repo, t.lnode):
145 self.transplants.remove(t)
145 self.transplants.remove(t)
146 return False
146 return False
147 lnoderev = repo.changelog.rev(t.lnode)
147 lnoderev = repo.changelog.rev(t.lnode)
148 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
148 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
149 inclusive=True):
149 inclusive=True):
150 return True
150 return True
151 return False
151 return False
152
152
153 def apply(self, repo, source, revmap, merges, opts=None):
153 def apply(self, repo, source, revmap, merges, opts=None):
154 '''apply the revisions in revmap one by one in revision order'''
154 '''apply the revisions in revmap one by one in revision order'''
155 if opts is None:
155 if opts is None:
156 opts = {}
156 opts = {}
157 revs = sorted(revmap)
157 revs = sorted(revmap)
158 p1, p2 = repo.dirstate.parents()
158 p1, p2 = repo.dirstate.parents()
159 pulls = []
159 pulls = []
160 diffopts = patch.difffeatureopts(self.ui, opts)
160 diffopts = patch.difffeatureopts(self.ui, opts)
161 diffopts.git = True
161 diffopts.git = True
162
162
163 lock = tr = None
163 lock = tr = None
164 try:
164 try:
165 lock = repo.lock()
165 lock = repo.lock()
166 tr = repo.transaction('transplant')
166 tr = repo.transaction('transplant')
167 for rev in revs:
167 for rev in revs:
168 node = revmap[rev]
168 node = revmap[rev]
169 revstr = '%d:%s' % (rev, nodemod.short(node))
169 revstr = '%d:%s' % (rev, nodemod.short(node))
170
170
171 if self.applied(repo, node, p1):
171 if self.applied(repo, node, p1):
172 self.ui.warn(_('skipping already applied revision %s\n') %
172 self.ui.warn(_('skipping already applied revision %s\n') %
173 revstr)
173 revstr)
174 continue
174 continue
175
175
176 parents = source.changelog.parents(node)
176 parents = source.changelog.parents(node)
177 if not (opts.get('filter') or opts.get('log')):
177 if not (opts.get('filter') or opts.get('log')):
178 # If the changeset parent is the same as the
178 # If the changeset parent is the same as the
179 # wdir's parent, just pull it.
179 # wdir's parent, just pull it.
180 if parents[0] == p1:
180 if parents[0] == p1:
181 pulls.append(node)
181 pulls.append(node)
182 p1 = node
182 p1 = node
183 continue
183 continue
184 if pulls:
184 if pulls:
185 if source != repo:
185 if source != repo:
186 exchange.pull(repo, source.peer(), heads=pulls)
186 exchange.pull(repo, source.peer(), heads=pulls)
187 merge.update(repo, pulls[-1], False, False)
187 merge.update(repo, pulls[-1], branchmerge=False,
188 force=False)
188 p1, p2 = repo.dirstate.parents()
189 p1, p2 = repo.dirstate.parents()
189 pulls = []
190 pulls = []
190
191
191 domerge = False
192 domerge = False
192 if node in merges:
193 if node in merges:
193 # pulling all the merge revs at once would mean we
194 # pulling all the merge revs at once would mean we
194 # couldn't transplant after the latest even if
195 # couldn't transplant after the latest even if
195 # transplants before them fail.
196 # transplants before them fail.
196 domerge = True
197 domerge = True
197 if not hasnode(repo, node):
198 if not hasnode(repo, node):
198 exchange.pull(repo, source.peer(), heads=[node])
199 exchange.pull(repo, source.peer(), heads=[node])
199
200
200 skipmerge = False
201 skipmerge = False
201 if parents[1] != revlog.nullid:
202 if parents[1] != revlog.nullid:
202 if not opts.get('parent'):
203 if not opts.get('parent'):
203 self.ui.note(_('skipping merge changeset %d:%s\n')
204 self.ui.note(_('skipping merge changeset %d:%s\n')
204 % (rev, nodemod.short(node)))
205 % (rev, nodemod.short(node)))
205 skipmerge = True
206 skipmerge = True
206 else:
207 else:
207 parent = source.lookup(opts['parent'])
208 parent = source.lookup(opts['parent'])
208 if parent not in parents:
209 if parent not in parents:
209 raise error.Abort(_('%s is not a parent of %s') %
210 raise error.Abort(_('%s is not a parent of %s') %
210 (nodemod.short(parent),
211 (nodemod.short(parent),
211 nodemod.short(node)))
212 nodemod.short(node)))
212 else:
213 else:
213 parent = parents[0]
214 parent = parents[0]
214
215
215 if skipmerge:
216 if skipmerge:
216 patchfile = None
217 patchfile = None
217 else:
218 else:
218 fd, patchfile = pycompat.mkstemp(prefix='hg-transplant-')
219 fd, patchfile = pycompat.mkstemp(prefix='hg-transplant-')
219 fp = os.fdopen(fd, r'wb')
220 fp = os.fdopen(fd, r'wb')
220 gen = patch.diff(source, parent, node, opts=diffopts)
221 gen = patch.diff(source, parent, node, opts=diffopts)
221 for chunk in gen:
222 for chunk in gen:
222 fp.write(chunk)
223 fp.write(chunk)
223 fp.close()
224 fp.close()
224
225
225 del revmap[rev]
226 del revmap[rev]
226 if patchfile or domerge:
227 if patchfile or domerge:
227 try:
228 try:
228 try:
229 try:
229 n = self.applyone(repo, node,
230 n = self.applyone(repo, node,
230 source.changelog.read(node),
231 source.changelog.read(node),
231 patchfile, merge=domerge,
232 patchfile, merge=domerge,
232 log=opts.get('log'),
233 log=opts.get('log'),
233 filter=opts.get('filter'))
234 filter=opts.get('filter'))
234 except TransplantError:
235 except TransplantError:
235 # Do not rollback, it is up to the user to
236 # Do not rollback, it is up to the user to
236 # fix the merge or cancel everything
237 # fix the merge or cancel everything
237 tr.close()
238 tr.close()
238 raise
239 raise
239 if n and domerge:
240 if n and domerge:
240 self.ui.status(_('%s merged at %s\n') % (revstr,
241 self.ui.status(_('%s merged at %s\n') % (revstr,
241 nodemod.short(n)))
242 nodemod.short(n)))
242 elif n:
243 elif n:
243 self.ui.status(_('%s transplanted to %s\n')
244 self.ui.status(_('%s transplanted to %s\n')
244 % (nodemod.short(node),
245 % (nodemod.short(node),
245 nodemod.short(n)))
246 nodemod.short(n)))
246 finally:
247 finally:
247 if patchfile:
248 if patchfile:
248 os.unlink(patchfile)
249 os.unlink(patchfile)
249 tr.close()
250 tr.close()
250 if pulls:
251 if pulls:
251 exchange.pull(repo, source.peer(), heads=pulls)
252 exchange.pull(repo, source.peer(), heads=pulls)
252 merge.update(repo, pulls[-1], False, False)
253 merge.update(repo, pulls[-1], branchmerge=False, force=False)
253 finally:
254 finally:
254 self.saveseries(revmap, merges)
255 self.saveseries(revmap, merges)
255 self.transplants.write()
256 self.transplants.write()
256 if tr:
257 if tr:
257 tr.release()
258 tr.release()
258 if lock:
259 if lock:
259 lock.release()
260 lock.release()
260
261
261 def filter(self, filter, node, changelog, patchfile):
262 def filter(self, filter, node, changelog, patchfile):
262 '''arbitrarily rewrite changeset before applying it'''
263 '''arbitrarily rewrite changeset before applying it'''
263
264
264 self.ui.status(_('filtering %s\n') % patchfile)
265 self.ui.status(_('filtering %s\n') % patchfile)
265 user, date, msg = (changelog[1], changelog[2], changelog[4])
266 user, date, msg = (changelog[1], changelog[2], changelog[4])
266 fd, headerfile = pycompat.mkstemp(prefix='hg-transplant-')
267 fd, headerfile = pycompat.mkstemp(prefix='hg-transplant-')
267 fp = os.fdopen(fd, r'wb')
268 fp = os.fdopen(fd, r'wb')
268 fp.write("# HG changeset patch\n")
269 fp.write("# HG changeset patch\n")
269 fp.write("# User %s\n" % user)
270 fp.write("# User %s\n" % user)
270 fp.write("# Date %d %d\n" % date)
271 fp.write("# Date %d %d\n" % date)
271 fp.write(msg + '\n')
272 fp.write(msg + '\n')
272 fp.close()
273 fp.close()
273
274
274 try:
275 try:
275 self.ui.system('%s %s %s' % (filter,
276 self.ui.system('%s %s %s' % (filter,
276 procutil.shellquote(headerfile),
277 procutil.shellquote(headerfile),
277 procutil.shellquote(patchfile)),
278 procutil.shellquote(patchfile)),
278 environ={'HGUSER': changelog[1],
279 environ={'HGUSER': changelog[1],
279 'HGREVISION': nodemod.hex(node),
280 'HGREVISION': nodemod.hex(node),
280 },
281 },
281 onerr=error.Abort, errprefix=_('filter failed'),
282 onerr=error.Abort, errprefix=_('filter failed'),
282 blockedtag='transplant_filter')
283 blockedtag='transplant_filter')
283 user, date, msg = self.parselog(open(headerfile, 'rb'))[1:4]
284 user, date, msg = self.parselog(open(headerfile, 'rb'))[1:4]
284 finally:
285 finally:
285 os.unlink(headerfile)
286 os.unlink(headerfile)
286
287
287 return (user, date, msg)
288 return (user, date, msg)
288
289
289 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
290 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
290 filter=None):
291 filter=None):
291 '''apply the patch in patchfile to the repository as a transplant'''
292 '''apply the patch in patchfile to the repository as a transplant'''
292 (manifest, user, (time, timezone), files, message) = cl[:5]
293 (manifest, user, (time, timezone), files, message) = cl[:5]
293 date = "%d %d" % (time, timezone)
294 date = "%d %d" % (time, timezone)
294 extra = {'transplant_source': node}
295 extra = {'transplant_source': node}
295 if filter:
296 if filter:
296 (user, date, message) = self.filter(filter, node, cl, patchfile)
297 (user, date, message) = self.filter(filter, node, cl, patchfile)
297
298
298 if log:
299 if log:
299 # we don't translate messages inserted into commits
300 # we don't translate messages inserted into commits
300 message += '\n(transplanted from %s)' % nodemod.hex(node)
301 message += '\n(transplanted from %s)' % nodemod.hex(node)
301
302
302 self.ui.status(_('applying %s\n') % nodemod.short(node))
303 self.ui.status(_('applying %s\n') % nodemod.short(node))
303 self.ui.note('%s %s\n%s\n' % (user, date, message))
304 self.ui.note('%s %s\n%s\n' % (user, date, message))
304
305
305 if not patchfile and not merge:
306 if not patchfile and not merge:
306 raise error.Abort(_('can only omit patchfile if merging'))
307 raise error.Abort(_('can only omit patchfile if merging'))
307 if patchfile:
308 if patchfile:
308 try:
309 try:
309 files = set()
310 files = set()
310 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
311 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
311 files = list(files)
312 files = list(files)
312 except Exception as inst:
313 except Exception as inst:
313 seriespath = os.path.join(self.path, 'series')
314 seriespath = os.path.join(self.path, 'series')
314 if os.path.exists(seriespath):
315 if os.path.exists(seriespath):
315 os.unlink(seriespath)
316 os.unlink(seriespath)
316 p1 = repo.dirstate.p1()
317 p1 = repo.dirstate.p1()
317 p2 = node
318 p2 = node
318 self.log(user, date, message, p1, p2, merge=merge)
319 self.log(user, date, message, p1, p2, merge=merge)
319 self.ui.write(stringutil.forcebytestr(inst) + '\n')
320 self.ui.write(stringutil.forcebytestr(inst) + '\n')
320 raise TransplantError(_('fix up the working directory and run '
321 raise TransplantError(_('fix up the working directory and run '
321 'hg transplant --continue'))
322 'hg transplant --continue'))
322 else:
323 else:
323 files = None
324 files = None
324 if merge:
325 if merge:
325 p1, p2 = repo.dirstate.parents()
326 p1, p2 = repo.dirstate.parents()
326 repo.setparents(p1, node)
327 repo.setparents(p1, node)
327 m = match.always(repo.root, '')
328 m = match.always(repo.root, '')
328 else:
329 else:
329 m = match.exact(repo.root, '', files)
330 m = match.exact(repo.root, '', files)
330
331
331 n = repo.commit(message, user, date, extra=extra, match=m,
332 n = repo.commit(message, user, date, extra=extra, match=m,
332 editor=self.getcommiteditor())
333 editor=self.getcommiteditor())
333 if not n:
334 if not n:
334 self.ui.warn(_('skipping emptied changeset %s\n') %
335 self.ui.warn(_('skipping emptied changeset %s\n') %
335 nodemod.short(node))
336 nodemod.short(node))
336 return None
337 return None
337 if not merge:
338 if not merge:
338 self.transplants.set(n, node)
339 self.transplants.set(n, node)
339
340
340 return n
341 return n
341
342
342 def canresume(self):
343 def canresume(self):
343 return os.path.exists(os.path.join(self.path, 'journal'))
344 return os.path.exists(os.path.join(self.path, 'journal'))
344
345
345 def resume(self, repo, source, opts):
346 def resume(self, repo, source, opts):
346 '''recover last transaction and apply remaining changesets'''
347 '''recover last transaction and apply remaining changesets'''
347 if os.path.exists(os.path.join(self.path, 'journal')):
348 if os.path.exists(os.path.join(self.path, 'journal')):
348 n, node = self.recover(repo, source, opts)
349 n, node = self.recover(repo, source, opts)
349 if n:
350 if n:
350 self.ui.status(_('%s transplanted as %s\n') %
351 self.ui.status(_('%s transplanted as %s\n') %
351 (nodemod.short(node),
352 (nodemod.short(node),
352 nodemod.short(n)))
353 nodemod.short(n)))
353 else:
354 else:
354 self.ui.status(_('%s skipped due to empty diff\n')
355 self.ui.status(_('%s skipped due to empty diff\n')
355 % (nodemod.short(node),))
356 % (nodemod.short(node),))
356 seriespath = os.path.join(self.path, 'series')
357 seriespath = os.path.join(self.path, 'series')
357 if not os.path.exists(seriespath):
358 if not os.path.exists(seriespath):
358 self.transplants.write()
359 self.transplants.write()
359 return
360 return
360 nodes, merges = self.readseries()
361 nodes, merges = self.readseries()
361 revmap = {}
362 revmap = {}
362 for n in nodes:
363 for n in nodes:
363 revmap[source.changelog.rev(n)] = n
364 revmap[source.changelog.rev(n)] = n
364 os.unlink(seriespath)
365 os.unlink(seriespath)
365
366
366 self.apply(repo, source, revmap, merges, opts)
367 self.apply(repo, source, revmap, merges, opts)
367
368
368 def recover(self, repo, source, opts):
369 def recover(self, repo, source, opts):
369 '''commit working directory using journal metadata'''
370 '''commit working directory using journal metadata'''
370 node, user, date, message, parents = self.readlog()
371 node, user, date, message, parents = self.readlog()
371 merge = False
372 merge = False
372
373
373 if not user or not date or not message or not parents[0]:
374 if not user or not date or not message or not parents[0]:
374 raise error.Abort(_('transplant log file is corrupt'))
375 raise error.Abort(_('transplant log file is corrupt'))
375
376
376 parent = parents[0]
377 parent = parents[0]
377 if len(parents) > 1:
378 if len(parents) > 1:
378 if opts.get('parent'):
379 if opts.get('parent'):
379 parent = source.lookup(opts['parent'])
380 parent = source.lookup(opts['parent'])
380 if parent not in parents:
381 if parent not in parents:
381 raise error.Abort(_('%s is not a parent of %s') %
382 raise error.Abort(_('%s is not a parent of %s') %
382 (nodemod.short(parent),
383 (nodemod.short(parent),
383 nodemod.short(node)))
384 nodemod.short(node)))
384 else:
385 else:
385 merge = True
386 merge = True
386
387
387 extra = {'transplant_source': node}
388 extra = {'transplant_source': node}
388 try:
389 try:
389 p1, p2 = repo.dirstate.parents()
390 p1, p2 = repo.dirstate.parents()
390 if p1 != parent:
391 if p1 != parent:
391 raise error.Abort(_('working directory not at transplant '
392 raise error.Abort(_('working directory not at transplant '
392 'parent %s') % nodemod.hex(parent))
393 'parent %s') % nodemod.hex(parent))
393 if merge:
394 if merge:
394 repo.setparents(p1, parents[1])
395 repo.setparents(p1, parents[1])
395 modified, added, removed, deleted = repo.status()[:4]
396 modified, added, removed, deleted = repo.status()[:4]
396 if merge or modified or added or removed or deleted:
397 if merge or modified or added or removed or deleted:
397 n = repo.commit(message, user, date, extra=extra,
398 n = repo.commit(message, user, date, extra=extra,
398 editor=self.getcommiteditor())
399 editor=self.getcommiteditor())
399 if not n:
400 if not n:
400 raise error.Abort(_('commit failed'))
401 raise error.Abort(_('commit failed'))
401 if not merge:
402 if not merge:
402 self.transplants.set(n, node)
403 self.transplants.set(n, node)
403 else:
404 else:
404 n = None
405 n = None
405 self.unlog()
406 self.unlog()
406
407
407 return n, node
408 return n, node
408 finally:
409 finally:
409 # TODO: get rid of this meaningless try/finally enclosing.
410 # TODO: get rid of this meaningless try/finally enclosing.
410 # this is kept only to reduce changes in a patch.
411 # this is kept only to reduce changes in a patch.
411 pass
412 pass
412
413
413 def readseries(self):
414 def readseries(self):
414 nodes = []
415 nodes = []
415 merges = []
416 merges = []
416 cur = nodes
417 cur = nodes
417 for line in self.opener.read('series').splitlines():
418 for line in self.opener.read('series').splitlines():
418 if line.startswith('# Merges'):
419 if line.startswith('# Merges'):
419 cur = merges
420 cur = merges
420 continue
421 continue
421 cur.append(revlog.bin(line))
422 cur.append(revlog.bin(line))
422
423
423 return (nodes, merges)
424 return (nodes, merges)
424
425
425 def saveseries(self, revmap, merges):
426 def saveseries(self, revmap, merges):
426 if not revmap:
427 if not revmap:
427 return
428 return
428
429
429 if not os.path.isdir(self.path):
430 if not os.path.isdir(self.path):
430 os.mkdir(self.path)
431 os.mkdir(self.path)
431 series = self.opener('series', 'w')
432 series = self.opener('series', 'w')
432 for rev in sorted(revmap):
433 for rev in sorted(revmap):
433 series.write(nodemod.hex(revmap[rev]) + '\n')
434 series.write(nodemod.hex(revmap[rev]) + '\n')
434 if merges:
435 if merges:
435 series.write('# Merges\n')
436 series.write('# Merges\n')
436 for m in merges:
437 for m in merges:
437 series.write(nodemod.hex(m) + '\n')
438 series.write(nodemod.hex(m) + '\n')
438 series.close()
439 series.close()
439
440
440 def parselog(self, fp):
441 def parselog(self, fp):
441 parents = []
442 parents = []
442 message = []
443 message = []
443 node = revlog.nullid
444 node = revlog.nullid
444 inmsg = False
445 inmsg = False
445 user = None
446 user = None
446 date = None
447 date = None
447 for line in fp.read().splitlines():
448 for line in fp.read().splitlines():
448 if inmsg:
449 if inmsg:
449 message.append(line)
450 message.append(line)
450 elif line.startswith('# User '):
451 elif line.startswith('# User '):
451 user = line[7:]
452 user = line[7:]
452 elif line.startswith('# Date '):
453 elif line.startswith('# Date '):
453 date = line[7:]
454 date = line[7:]
454 elif line.startswith('# Node ID '):
455 elif line.startswith('# Node ID '):
455 node = revlog.bin(line[10:])
456 node = revlog.bin(line[10:])
456 elif line.startswith('# Parent '):
457 elif line.startswith('# Parent '):
457 parents.append(revlog.bin(line[9:]))
458 parents.append(revlog.bin(line[9:]))
458 elif not line.startswith('# '):
459 elif not line.startswith('# '):
459 inmsg = True
460 inmsg = True
460 message.append(line)
461 message.append(line)
461 if None in (user, date):
462 if None in (user, date):
462 raise error.Abort(_("filter corrupted changeset (no user or date)"))
463 raise error.Abort(_("filter corrupted changeset (no user or date)"))
463 return (node, user, date, '\n'.join(message), parents)
464 return (node, user, date, '\n'.join(message), parents)
464
465
465 def log(self, user, date, message, p1, p2, merge=False):
466 def log(self, user, date, message, p1, p2, merge=False):
466 '''journal changelog metadata for later recover'''
467 '''journal changelog metadata for later recover'''
467
468
468 if not os.path.isdir(self.path):
469 if not os.path.isdir(self.path):
469 os.mkdir(self.path)
470 os.mkdir(self.path)
470 fp = self.opener('journal', 'w')
471 fp = self.opener('journal', 'w')
471 fp.write('# User %s\n' % user)
472 fp.write('# User %s\n' % user)
472 fp.write('# Date %s\n' % date)
473 fp.write('# Date %s\n' % date)
473 fp.write('# Node ID %s\n' % nodemod.hex(p2))
474 fp.write('# Node ID %s\n' % nodemod.hex(p2))
474 fp.write('# Parent ' + nodemod.hex(p1) + '\n')
475 fp.write('# Parent ' + nodemod.hex(p1) + '\n')
475 if merge:
476 if merge:
476 fp.write('# Parent ' + nodemod.hex(p2) + '\n')
477 fp.write('# Parent ' + nodemod.hex(p2) + '\n')
477 fp.write(message.rstrip() + '\n')
478 fp.write(message.rstrip() + '\n')
478 fp.close()
479 fp.close()
479
480
480 def readlog(self):
481 def readlog(self):
481 return self.parselog(self.opener('journal'))
482 return self.parselog(self.opener('journal'))
482
483
483 def unlog(self):
484 def unlog(self):
484 '''remove changelog journal'''
485 '''remove changelog journal'''
485 absdst = os.path.join(self.path, 'journal')
486 absdst = os.path.join(self.path, 'journal')
486 if os.path.exists(absdst):
487 if os.path.exists(absdst):
487 os.unlink(absdst)
488 os.unlink(absdst)
488
489
489 def transplantfilter(self, repo, source, root):
490 def transplantfilter(self, repo, source, root):
490 def matchfn(node):
491 def matchfn(node):
491 if self.applied(repo, node, root):
492 if self.applied(repo, node, root):
492 return False
493 return False
493 if source.changelog.parents(node)[1] != revlog.nullid:
494 if source.changelog.parents(node)[1] != revlog.nullid:
494 return False
495 return False
495 extra = source.changelog.read(node)[5]
496 extra = source.changelog.read(node)[5]
496 cnode = extra.get('transplant_source')
497 cnode = extra.get('transplant_source')
497 if cnode and self.applied(repo, cnode, root):
498 if cnode and self.applied(repo, cnode, root):
498 return False
499 return False
499 return True
500 return True
500
501
501 return matchfn
502 return matchfn
502
503
503 def hasnode(repo, node):
504 def hasnode(repo, node):
504 try:
505 try:
505 return repo.changelog.rev(node) is not None
506 return repo.changelog.rev(node) is not None
506 except error.StorageError:
507 except error.StorageError:
507 return False
508 return False
508
509
509 def browserevs(ui, repo, nodes, opts):
510 def browserevs(ui, repo, nodes, opts):
510 '''interactively transplant changesets'''
511 '''interactively transplant changesets'''
511 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
512 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
512 transplants = []
513 transplants = []
513 merges = []
514 merges = []
514 prompt = _('apply changeset? [ynmpcq?]:'
515 prompt = _('apply changeset? [ynmpcq?]:'
515 '$$ &yes, transplant this changeset'
516 '$$ &yes, transplant this changeset'
516 '$$ &no, skip this changeset'
517 '$$ &no, skip this changeset'
517 '$$ &merge at this changeset'
518 '$$ &merge at this changeset'
518 '$$ show &patch'
519 '$$ show &patch'
519 '$$ &commit selected changesets'
520 '$$ &commit selected changesets'
520 '$$ &quit and cancel transplant'
521 '$$ &quit and cancel transplant'
521 '$$ &? (show this help)')
522 '$$ &? (show this help)')
522 for node in nodes:
523 for node in nodes:
523 displayer.show(repo[node])
524 displayer.show(repo[node])
524 action = None
525 action = None
525 while not action:
526 while not action:
526 choice = ui.promptchoice(prompt)
527 choice = ui.promptchoice(prompt)
527 action = 'ynmpcq?'[choice:choice + 1]
528 action = 'ynmpcq?'[choice:choice + 1]
528 if action == '?':
529 if action == '?':
529 for c, t in ui.extractchoices(prompt)[1]:
530 for c, t in ui.extractchoices(prompt)[1]:
530 ui.write('%s: %s\n' % (c, t))
531 ui.write('%s: %s\n' % (c, t))
531 action = None
532 action = None
532 elif action == 'p':
533 elif action == 'p':
533 parent = repo.changelog.parents(node)[0]
534 parent = repo.changelog.parents(node)[0]
534 for chunk in patch.diff(repo, parent, node):
535 for chunk in patch.diff(repo, parent, node):
535 ui.write(chunk)
536 ui.write(chunk)
536 action = None
537 action = None
537 if action == 'y':
538 if action == 'y':
538 transplants.append(node)
539 transplants.append(node)
539 elif action == 'm':
540 elif action == 'm':
540 merges.append(node)
541 merges.append(node)
541 elif action == 'c':
542 elif action == 'c':
542 break
543 break
543 elif action == 'q':
544 elif action == 'q':
544 transplants = ()
545 transplants = ()
545 merges = ()
546 merges = ()
546 break
547 break
547 displayer.close()
548 displayer.close()
548 return (transplants, merges)
549 return (transplants, merges)
549
550
550 @command('transplant',
551 @command('transplant',
551 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
552 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
552 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
553 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
553 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
554 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
554 ('p', 'prune', [], _('skip over REV'), _('REV')),
555 ('p', 'prune', [], _('skip over REV'), _('REV')),
555 ('m', 'merge', [], _('merge at REV'), _('REV')),
556 ('m', 'merge', [], _('merge at REV'), _('REV')),
556 ('', 'parent', '',
557 ('', 'parent', '',
557 _('parent to choose when transplanting merge'), _('REV')),
558 _('parent to choose when transplanting merge'), _('REV')),
558 ('e', 'edit', False, _('invoke editor on commit messages')),
559 ('e', 'edit', False, _('invoke editor on commit messages')),
559 ('', 'log', None, _('append transplant info to log message')),
560 ('', 'log', None, _('append transplant info to log message')),
560 ('c', 'continue', None, _('continue last transplant session '
561 ('c', 'continue', None, _('continue last transplant session '
561 'after fixing conflicts')),
562 'after fixing conflicts')),
562 ('', 'filter', '',
563 ('', 'filter', '',
563 _('filter changesets through command'), _('CMD'))],
564 _('filter changesets through command'), _('CMD'))],
564 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
565 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
565 '[-m REV] [REV]...'),
566 '[-m REV] [REV]...'),
566 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
567 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
567 def transplant(ui, repo, *revs, **opts):
568 def transplant(ui, repo, *revs, **opts):
568 '''transplant changesets from another branch
569 '''transplant changesets from another branch
569
570
570 Selected changesets will be applied on top of the current working
571 Selected changesets will be applied on top of the current working
571 directory with the log of the original changeset. The changesets
572 directory with the log of the original changeset. The changesets
572 are copied and will thus appear twice in the history with different
573 are copied and will thus appear twice in the history with different
573 identities.
574 identities.
574
575
575 Consider using the graft command if everything is inside the same
576 Consider using the graft command if everything is inside the same
576 repository - it will use merges and will usually give a better result.
577 repository - it will use merges and will usually give a better result.
577 Use the rebase extension if the changesets are unpublished and you want
578 Use the rebase extension if the changesets are unpublished and you want
578 to move them instead of copying them.
579 to move them instead of copying them.
579
580
580 If --log is specified, log messages will have a comment appended
581 If --log is specified, log messages will have a comment appended
581 of the form::
582 of the form::
582
583
583 (transplanted from CHANGESETHASH)
584 (transplanted from CHANGESETHASH)
584
585
585 You can rewrite the changelog message with the --filter option.
586 You can rewrite the changelog message with the --filter option.
586 Its argument will be invoked with the current changelog message as
587 Its argument will be invoked with the current changelog message as
587 $1 and the patch as $2.
588 $1 and the patch as $2.
588
589
589 --source/-s specifies another repository to use for selecting changesets,
590 --source/-s specifies another repository to use for selecting changesets,
590 just as if it temporarily had been pulled.
591 just as if it temporarily had been pulled.
591 If --branch/-b is specified, these revisions will be used as
592 If --branch/-b is specified, these revisions will be used as
592 heads when deciding which changesets to transplant, just as if only
593 heads when deciding which changesets to transplant, just as if only
593 these revisions had been pulled.
594 these revisions had been pulled.
594 If --all/-a is specified, all the revisions up to the heads specified
595 If --all/-a is specified, all the revisions up to the heads specified
595 with --branch will be transplanted.
596 with --branch will be transplanted.
596
597
597 Example:
598 Example:
598
599
599 - transplant all changes up to REV on top of your current revision::
600 - transplant all changes up to REV on top of your current revision::
600
601
601 hg transplant --branch REV --all
602 hg transplant --branch REV --all
602
603
603 You can optionally mark selected transplanted changesets as merge
604 You can optionally mark selected transplanted changesets as merge
604 changesets. You will not be prompted to transplant any ancestors
605 changesets. You will not be prompted to transplant any ancestors
605 of a merged transplant, and you can merge descendants of them
606 of a merged transplant, and you can merge descendants of them
606 normally instead of transplanting them.
607 normally instead of transplanting them.
607
608
608 Merge changesets may be transplanted directly by specifying the
609 Merge changesets may be transplanted directly by specifying the
609 proper parent changeset by calling :hg:`transplant --parent`.
610 proper parent changeset by calling :hg:`transplant --parent`.
610
611
611 If no merges or revisions are provided, :hg:`transplant` will
612 If no merges or revisions are provided, :hg:`transplant` will
612 start an interactive changeset browser.
613 start an interactive changeset browser.
613
614
614 If a changeset application fails, you can fix the merge by hand
615 If a changeset application fails, you can fix the merge by hand
615 and then resume where you left off by calling :hg:`transplant
616 and then resume where you left off by calling :hg:`transplant
616 --continue/-c`.
617 --continue/-c`.
617 '''
618 '''
618 with repo.wlock():
619 with repo.wlock():
619 return _dotransplant(ui, repo, *revs, **opts)
620 return _dotransplant(ui, repo, *revs, **opts)
620
621
621 def _dotransplant(ui, repo, *revs, **opts):
622 def _dotransplant(ui, repo, *revs, **opts):
622 def incwalk(repo, csets, match=util.always):
623 def incwalk(repo, csets, match=util.always):
623 for node in csets:
624 for node in csets:
624 if match(node):
625 if match(node):
625 yield node
626 yield node
626
627
627 def transplantwalk(repo, dest, heads, match=util.always):
628 def transplantwalk(repo, dest, heads, match=util.always):
628 '''Yield all nodes that are ancestors of a head but not ancestors
629 '''Yield all nodes that are ancestors of a head but not ancestors
629 of dest.
630 of dest.
630 If no heads are specified, the heads of repo will be used.'''
631 If no heads are specified, the heads of repo will be used.'''
631 if not heads:
632 if not heads:
632 heads = repo.heads()
633 heads = repo.heads()
633 ancestors = []
634 ancestors = []
634 ctx = repo[dest]
635 ctx = repo[dest]
635 for head in heads:
636 for head in heads:
636 ancestors.append(ctx.ancestor(repo[head]).node())
637 ancestors.append(ctx.ancestor(repo[head]).node())
637 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
638 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
638 if match(node):
639 if match(node):
639 yield node
640 yield node
640
641
641 def checkopts(opts, revs):
642 def checkopts(opts, revs):
642 if opts.get('continue'):
643 if opts.get('continue'):
643 if opts.get('branch') or opts.get('all') or opts.get('merge'):
644 if opts.get('branch') or opts.get('all') or opts.get('merge'):
644 raise error.Abort(_('--continue is incompatible with '
645 raise error.Abort(_('--continue is incompatible with '
645 '--branch, --all and --merge'))
646 '--branch, --all and --merge'))
646 return
647 return
647 if not (opts.get('source') or revs or
648 if not (opts.get('source') or revs or
648 opts.get('merge') or opts.get('branch')):
649 opts.get('merge') or opts.get('branch')):
649 raise error.Abort(_('no source URL, branch revision, or revision '
650 raise error.Abort(_('no source URL, branch revision, or revision '
650 'list provided'))
651 'list provided'))
651 if opts.get('all'):
652 if opts.get('all'):
652 if not opts.get('branch'):
653 if not opts.get('branch'):
653 raise error.Abort(_('--all requires a branch revision'))
654 raise error.Abort(_('--all requires a branch revision'))
654 if revs:
655 if revs:
655 raise error.Abort(_('--all is incompatible with a '
656 raise error.Abort(_('--all is incompatible with a '
656 'revision list'))
657 'revision list'))
657
658
658 opts = pycompat.byteskwargs(opts)
659 opts = pycompat.byteskwargs(opts)
659 checkopts(opts, revs)
660 checkopts(opts, revs)
660
661
661 if not opts.get('log'):
662 if not opts.get('log'):
662 # deprecated config: transplant.log
663 # deprecated config: transplant.log
663 opts['log'] = ui.config('transplant', 'log')
664 opts['log'] = ui.config('transplant', 'log')
664 if not opts.get('filter'):
665 if not opts.get('filter'):
665 # deprecated config: transplant.filter
666 # deprecated config: transplant.filter
666 opts['filter'] = ui.config('transplant', 'filter')
667 opts['filter'] = ui.config('transplant', 'filter')
667
668
668 tp = transplanter(ui, repo, opts)
669 tp = transplanter(ui, repo, opts)
669
670
670 p1, p2 = repo.dirstate.parents()
671 p1, p2 = repo.dirstate.parents()
671 if len(repo) > 0 and p1 == revlog.nullid:
672 if len(repo) > 0 and p1 == revlog.nullid:
672 raise error.Abort(_('no revision checked out'))
673 raise error.Abort(_('no revision checked out'))
673 if opts.get('continue'):
674 if opts.get('continue'):
674 if not tp.canresume():
675 if not tp.canresume():
675 raise error.Abort(_('no transplant to continue'))
676 raise error.Abort(_('no transplant to continue'))
676 else:
677 else:
677 cmdutil.checkunfinished(repo)
678 cmdutil.checkunfinished(repo)
678 if p2 != revlog.nullid:
679 if p2 != revlog.nullid:
679 raise error.Abort(_('outstanding uncommitted merges'))
680 raise error.Abort(_('outstanding uncommitted merges'))
680 m, a, r, d = repo.status()[:4]
681 m, a, r, d = repo.status()[:4]
681 if m or a or r or d:
682 if m or a or r or d:
682 raise error.Abort(_('outstanding local changes'))
683 raise error.Abort(_('outstanding local changes'))
683
684
684 sourcerepo = opts.get('source')
685 sourcerepo = opts.get('source')
685 if sourcerepo:
686 if sourcerepo:
686 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
687 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
687 heads = pycompat.maplist(peer.lookup, opts.get('branch', ()))
688 heads = pycompat.maplist(peer.lookup, opts.get('branch', ()))
688 target = set(heads)
689 target = set(heads)
689 for r in revs:
690 for r in revs:
690 try:
691 try:
691 target.add(peer.lookup(r))
692 target.add(peer.lookup(r))
692 except error.RepoError:
693 except error.RepoError:
693 pass
694 pass
694 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
695 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
695 onlyheads=sorted(target), force=True)
696 onlyheads=sorted(target), force=True)
696 else:
697 else:
697 source = repo
698 source = repo
698 heads = pycompat.maplist(source.lookup, opts.get('branch', ()))
699 heads = pycompat.maplist(source.lookup, opts.get('branch', ()))
699 cleanupfn = None
700 cleanupfn = None
700
701
701 try:
702 try:
702 if opts.get('continue'):
703 if opts.get('continue'):
703 tp.resume(repo, source, opts)
704 tp.resume(repo, source, opts)
704 return
705 return
705
706
706 tf = tp.transplantfilter(repo, source, p1)
707 tf = tp.transplantfilter(repo, source, p1)
707 if opts.get('prune'):
708 if opts.get('prune'):
708 prune = set(source[r].node()
709 prune = set(source[r].node()
709 for r in scmutil.revrange(source, opts.get('prune')))
710 for r in scmutil.revrange(source, opts.get('prune')))
710 matchfn = lambda x: tf(x) and x not in prune
711 matchfn = lambda x: tf(x) and x not in prune
711 else:
712 else:
712 matchfn = tf
713 matchfn = tf
713 merges = pycompat.maplist(source.lookup, opts.get('merge', ()))
714 merges = pycompat.maplist(source.lookup, opts.get('merge', ()))
714 revmap = {}
715 revmap = {}
715 if revs:
716 if revs:
716 for r in scmutil.revrange(source, revs):
717 for r in scmutil.revrange(source, revs):
717 revmap[int(r)] = source[r].node()
718 revmap[int(r)] = source[r].node()
718 elif opts.get('all') or not merges:
719 elif opts.get('all') or not merges:
719 if source != repo:
720 if source != repo:
720 alltransplants = incwalk(source, csets, match=matchfn)
721 alltransplants = incwalk(source, csets, match=matchfn)
721 else:
722 else:
722 alltransplants = transplantwalk(source, p1, heads,
723 alltransplants = transplantwalk(source, p1, heads,
723 match=matchfn)
724 match=matchfn)
724 if opts.get('all'):
725 if opts.get('all'):
725 revs = alltransplants
726 revs = alltransplants
726 else:
727 else:
727 revs, newmerges = browserevs(ui, source, alltransplants, opts)
728 revs, newmerges = browserevs(ui, source, alltransplants, opts)
728 merges.extend(newmerges)
729 merges.extend(newmerges)
729 for r in revs:
730 for r in revs:
730 revmap[source.changelog.rev(r)] = r
731 revmap[source.changelog.rev(r)] = r
731 for r in merges:
732 for r in merges:
732 revmap[source.changelog.rev(r)] = r
733 revmap[source.changelog.rev(r)] = r
733
734
734 tp.apply(repo, source, revmap, merges, opts)
735 tp.apply(repo, source, revmap, merges, opts)
735 finally:
736 finally:
736 if cleanupfn:
737 if cleanupfn:
737 cleanupfn()
738 cleanupfn()
738
739
739 revsetpredicate = registrar.revsetpredicate()
740 revsetpredicate = registrar.revsetpredicate()
740
741
741 @revsetpredicate('transplanted([set])')
742 @revsetpredicate('transplanted([set])')
742 def revsettransplanted(repo, subset, x):
743 def revsettransplanted(repo, subset, x):
743 """Transplanted changesets in set, or all transplanted changesets.
744 """Transplanted changesets in set, or all transplanted changesets.
744 """
745 """
745 if x:
746 if x:
746 s = revset.getset(repo, subset, x)
747 s = revset.getset(repo, subset, x)
747 else:
748 else:
748 s = subset
749 s = subset
749 return smartset.baseset([r for r in s if
750 return smartset.baseset([r for r in s if
750 repo[r].extra().get('transplant_source')])
751 repo[r].extra().get('transplant_source')])
751
752
752 templatekeyword = registrar.templatekeyword()
753 templatekeyword = registrar.templatekeyword()
753
754
754 @templatekeyword('transplanted', requires={'ctx'})
755 @templatekeyword('transplanted', requires={'ctx'})
755 def kwtransplanted(context, mapping):
756 def kwtransplanted(context, mapping):
756 """String. The node identifier of the transplanted
757 """String. The node identifier of the transplanted
757 changeset if any."""
758 changeset if any."""
758 ctx = context.resource(mapping, 'ctx')
759 ctx = context.resource(mapping, 'ctx')
759 n = ctx.extra().get('transplant_source')
760 n = ctx.extra().get('transplant_source')
760 return n and nodemod.hex(n) or ''
761 return n and nodemod.hex(n) or ''
761
762
762 def extsetup(ui):
763 def extsetup(ui):
763 cmdutil.unfinishedstates.append(
764 cmdutil.unfinishedstates.append(
764 ['transplant/journal', True, False, _('transplant in progress'),
765 ['transplant/journal', True, False, _('transplant in progress'),
765 _("use 'hg transplant --continue' or 'hg update' to abort")])
766 _("use 'hg transplant --continue' or 'hg update' to abort")])
766
767
767 # tell hggettext to extract docstrings from these functions:
768 # tell hggettext to extract docstrings from these functions:
768 i18nfunctions = [revsettransplanted, kwtransplanted]
769 i18nfunctions = [revsettransplanted, kwtransplanted]
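
The substantive change in the transplant.py hunks above sits at old lines 187 and 252, where the two merge.update() call sites stop passing the branchmerge and force flags as bare positional booleans and name them at the call site instead; the values themselves are unchanged. Below is a minimal, self-contained sketch of that pattern. The stand-in update() function here is hypothetical and only mirrors the positional layout visible in the hunks; it is not the Mercurial API.

    def update(repo, node, branchmerge, force):
        """Hypothetical stand-in with the same positional layout as the
        merge.update() calls shown in the hunks above."""
        return (repo, node, branchmerge, force)

    # old style, as on old lines 187 and 252: bare positional booleans
    update("repo", "tip", False, False)

    # new style, as in the replacement lines: flags named at the call site
    update("repo", "tip", branchmerge=False, force=False)

Naming the boolean flags is a readability change only: a reader of the call site no longer has to remember which positional slot is branchmerge and which is force.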
@@ -1,3313 +1,3313 b''
1 # cmdutil.py - help for command processing in mercurial
1 # cmdutil.py - help for command processing in mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import os
11 import os
12 import re
12 import re
13
13
14 from .i18n import _
14 from .i18n import _
15 from .node import (
15 from .node import (
16 hex,
16 hex,
17 nullid,
17 nullid,
18 nullrev,
18 nullrev,
19 short,
19 short,
20 )
20 )
21
21
22 from . import (
22 from . import (
23 bookmarks,
23 bookmarks,
24 changelog,
24 changelog,
25 copies,
25 copies,
26 crecord as crecordmod,
26 crecord as crecordmod,
27 dirstateguard,
27 dirstateguard,
28 encoding,
28 encoding,
29 error,
29 error,
30 formatter,
30 formatter,
31 logcmdutil,
31 logcmdutil,
32 match as matchmod,
32 match as matchmod,
33 merge as mergemod,
33 merge as mergemod,
34 mergeutil,
34 mergeutil,
35 obsolete,
35 obsolete,
36 patch,
36 patch,
37 pathutil,
37 pathutil,
38 phases,
38 phases,
39 pycompat,
39 pycompat,
40 revlog,
40 revlog,
41 rewriteutil,
41 rewriteutil,
42 scmutil,
42 scmutil,
43 smartset,
43 smartset,
44 subrepoutil,
44 subrepoutil,
45 templatekw,
45 templatekw,
46 templater,
46 templater,
47 util,
47 util,
48 vfs as vfsmod,
48 vfs as vfsmod,
49 )
49 )
50
50
51 from .utils import (
51 from .utils import (
52 dateutil,
52 dateutil,
53 stringutil,
53 stringutil,
54 )
54 )
55
55
56 stringio = util.stringio
56 stringio = util.stringio
57
57
58 # templates of common command options
58 # templates of common command options
59
59
60 dryrunopts = [
60 dryrunopts = [
61 ('n', 'dry-run', None,
61 ('n', 'dry-run', None,
62 _('do not perform actions, just print output')),
62 _('do not perform actions, just print output')),
63 ]
63 ]
64
64
65 confirmopts = [
65 confirmopts = [
66 ('', 'confirm', None,
66 ('', 'confirm', None,
67 _('ask before applying actions')),
67 _('ask before applying actions')),
68 ]
68 ]
69
69
70 remoteopts = [
70 remoteopts = [
71 ('e', 'ssh', '',
71 ('e', 'ssh', '',
72 _('specify ssh command to use'), _('CMD')),
72 _('specify ssh command to use'), _('CMD')),
73 ('', 'remotecmd', '',
73 ('', 'remotecmd', '',
74 _('specify hg command to run on the remote side'), _('CMD')),
74 _('specify hg command to run on the remote side'), _('CMD')),
75 ('', 'insecure', None,
75 ('', 'insecure', None,
76 _('do not verify server certificate (ignoring web.cacerts config)')),
76 _('do not verify server certificate (ignoring web.cacerts config)')),
77 ]
77 ]
78
78
79 walkopts = [
79 walkopts = [
80 ('I', 'include', [],
80 ('I', 'include', [],
81 _('include names matching the given patterns'), _('PATTERN')),
81 _('include names matching the given patterns'), _('PATTERN')),
82 ('X', 'exclude', [],
82 ('X', 'exclude', [],
83 _('exclude names matching the given patterns'), _('PATTERN')),
83 _('exclude names matching the given patterns'), _('PATTERN')),
84 ]
84 ]
85
85
86 commitopts = [
86 commitopts = [
87 ('m', 'message', '',
87 ('m', 'message', '',
88 _('use text as commit message'), _('TEXT')),
88 _('use text as commit message'), _('TEXT')),
89 ('l', 'logfile', '',
89 ('l', 'logfile', '',
90 _('read commit message from file'), _('FILE')),
90 _('read commit message from file'), _('FILE')),
91 ]
91 ]
92
92
93 commitopts2 = [
93 commitopts2 = [
94 ('d', 'date', '',
94 ('d', 'date', '',
95 _('record the specified date as commit date'), _('DATE')),
95 _('record the specified date as commit date'), _('DATE')),
96 ('u', 'user', '',
96 ('u', 'user', '',
97 _('record the specified user as committer'), _('USER')),
97 _('record the specified user as committer'), _('USER')),
98 ]
98 ]
99
99
100 formatteropts = [
100 formatteropts = [
101 ('T', 'template', '',
101 ('T', 'template', '',
102 _('display with template'), _('TEMPLATE')),
102 _('display with template'), _('TEMPLATE')),
103 ]
103 ]
104
104
105 templateopts = [
105 templateopts = [
106 ('', 'style', '',
106 ('', 'style', '',
107 _('display using template map file (DEPRECATED)'), _('STYLE')),
107 _('display using template map file (DEPRECATED)'), _('STYLE')),
108 ('T', 'template', '',
108 ('T', 'template', '',
109 _('display with template'), _('TEMPLATE')),
109 _('display with template'), _('TEMPLATE')),
110 ]
110 ]
111
111
112 logopts = [
112 logopts = [
113 ('p', 'patch', None, _('show patch')),
113 ('p', 'patch', None, _('show patch')),
114 ('g', 'git', None, _('use git extended diff format')),
114 ('g', 'git', None, _('use git extended diff format')),
115 ('l', 'limit', '',
115 ('l', 'limit', '',
116 _('limit number of changes displayed'), _('NUM')),
116 _('limit number of changes displayed'), _('NUM')),
117 ('M', 'no-merges', None, _('do not show merges')),
117 ('M', 'no-merges', None, _('do not show merges')),
118 ('', 'stat', None, _('output diffstat-style summary of changes')),
118 ('', 'stat', None, _('output diffstat-style summary of changes')),
119 ('G', 'graph', None, _("show the revision DAG")),
119 ('G', 'graph', None, _("show the revision DAG")),
120 ] + templateopts
120 ] + templateopts
121
121
122 diffopts = [
122 diffopts = [
123 ('a', 'text', None, _('treat all files as text')),
123 ('a', 'text', None, _('treat all files as text')),
124 ('g', 'git', None, _('use git extended diff format')),
124 ('g', 'git', None, _('use git extended diff format')),
125 ('', 'binary', None, _('generate binary diffs in git mode (default)')),
125 ('', 'binary', None, _('generate binary diffs in git mode (default)')),
126 ('', 'nodates', None, _('omit dates from diff headers'))
126 ('', 'nodates', None, _('omit dates from diff headers'))
127 ]
127 ]
128
128
129 diffwsopts = [
129 diffwsopts = [
130 ('w', 'ignore-all-space', None,
130 ('w', 'ignore-all-space', None,
131 _('ignore white space when comparing lines')),
131 _('ignore white space when comparing lines')),
132 ('b', 'ignore-space-change', None,
132 ('b', 'ignore-space-change', None,
133 _('ignore changes in the amount of white space')),
133 _('ignore changes in the amount of white space')),
134 ('B', 'ignore-blank-lines', None,
134 ('B', 'ignore-blank-lines', None,
135 _('ignore changes whose lines are all blank')),
135 _('ignore changes whose lines are all blank')),
136 ('Z', 'ignore-space-at-eol', None,
136 ('Z', 'ignore-space-at-eol', None,
137 _('ignore changes in whitespace at EOL')),
137 _('ignore changes in whitespace at EOL')),
138 ]
138 ]
139
139
140 diffopts2 = [
140 diffopts2 = [
141 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
141 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
142 ('p', 'show-function', None, _('show which function each change is in')),
142 ('p', 'show-function', None, _('show which function each change is in')),
143 ('', 'reverse', None, _('produce a diff that undoes the changes')),
143 ('', 'reverse', None, _('produce a diff that undoes the changes')),
144 ] + diffwsopts + [
144 ] + diffwsopts + [
145 ('U', 'unified', '',
145 ('U', 'unified', '',
146 _('number of lines of context to show'), _('NUM')),
146 _('number of lines of context to show'), _('NUM')),
147 ('', 'stat', None, _('output diffstat-style summary of changes')),
147 ('', 'stat', None, _('output diffstat-style summary of changes')),
148 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
148 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
149 ]
149 ]
150
150
151 mergetoolopts = [
151 mergetoolopts = [
152 ('t', 'tool', '', _('specify merge tool'), _('TOOL')),
152 ('t', 'tool', '', _('specify merge tool'), _('TOOL')),
153 ]
153 ]
154
154
155 similarityopts = [
155 similarityopts = [
156 ('s', 'similarity', '',
156 ('s', 'similarity', '',
157 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
157 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
158 ]
158 ]
159
159
160 subrepoopts = [
160 subrepoopts = [
161 ('S', 'subrepos', None,
161 ('S', 'subrepos', None,
162 _('recurse into subrepositories'))
162 _('recurse into subrepositories'))
163 ]
163 ]
164
164
165 debugrevlogopts = [
165 debugrevlogopts = [
166 ('c', 'changelog', False, _('open changelog')),
166 ('c', 'changelog', False, _('open changelog')),
167 ('m', 'manifest', False, _('open manifest')),
167 ('m', 'manifest', False, _('open manifest')),
168 ('', 'dir', '', _('open directory manifest')),
168 ('', 'dir', '', _('open directory manifest')),
169 ]
169 ]
170
170
171 # special string such that everything below this line will be ignored in the
171 # special string such that everything below this line will be ignored in the
172 # editor text
172 # editor text
173 _linebelow = "^HG: ------------------------ >8 ------------------------$"
173 _linebelow = "^HG: ------------------------ >8 ------------------------$"
174
174
175 def ishunk(x):
175 def ishunk(x):
176 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
176 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
177 return isinstance(x, hunkclasses)
177 return isinstance(x, hunkclasses)
178
178
179 def newandmodified(chunks, originalchunks):
179 def newandmodified(chunks, originalchunks):
180 newlyaddedandmodifiedfiles = set()
180 newlyaddedandmodifiedfiles = set()
181 for chunk in chunks:
181 for chunk in chunks:
182 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
182 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
183 originalchunks:
183 originalchunks:
184 newlyaddedandmodifiedfiles.add(chunk.header.filename())
184 newlyaddedandmodifiedfiles.add(chunk.header.filename())
185 return newlyaddedandmodifiedfiles
185 return newlyaddedandmodifiedfiles
186
186
187 def parsealiases(cmd):
187 def parsealiases(cmd):
188 return cmd.split("|")
188 return cmd.split("|")
189
189
190 def setupwrapcolorwrite(ui):
190 def setupwrapcolorwrite(ui):
191 # wrap ui.write so diff output can be labeled/colorized
191 # wrap ui.write so diff output can be labeled/colorized
192 def wrapwrite(orig, *args, **kw):
192 def wrapwrite(orig, *args, **kw):
193 label = kw.pop(r'label', '')
193 label = kw.pop(r'label', '')
194 for chunk, l in patch.difflabel(lambda: args):
194 for chunk, l in patch.difflabel(lambda: args):
195 orig(chunk, label=label + l)
195 orig(chunk, label=label + l)
196
196
197 oldwrite = ui.write
197 oldwrite = ui.write
198 def wrap(*args, **kwargs):
198 def wrap(*args, **kwargs):
199 return wrapwrite(oldwrite, *args, **kwargs)
199 return wrapwrite(oldwrite, *args, **kwargs)
200 setattr(ui, 'write', wrap)
200 setattr(ui, 'write', wrap)
201 return oldwrite
201 return oldwrite
202
202
203 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
203 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
204 try:
204 try:
205 if usecurses:
205 if usecurses:
206 if testfile:
206 if testfile:
207 recordfn = crecordmod.testdecorator(
207 recordfn = crecordmod.testdecorator(
208 testfile, crecordmod.testchunkselector)
208 testfile, crecordmod.testchunkselector)
209 else:
209 else:
210 recordfn = crecordmod.chunkselector
210 recordfn = crecordmod.chunkselector
211
211
212 return crecordmod.filterpatch(ui, originalhunks, recordfn,
212 return crecordmod.filterpatch(ui, originalhunks, recordfn,
213 operation)
213 operation)
214 except crecordmod.fallbackerror as e:
214 except crecordmod.fallbackerror as e:
215 ui.warn('%s\n' % e.message)
215 ui.warn('%s\n' % e.message)
216 ui.warn(_('falling back to text mode\n'))
216 ui.warn(_('falling back to text mode\n'))
217
217
218 return patch.filterpatch(ui, originalhunks, operation)
218 return patch.filterpatch(ui, originalhunks, operation)
219
219
220 def recordfilter(ui, originalhunks, operation=None):
220 def recordfilter(ui, originalhunks, operation=None):
221 """ Prompts the user to filter the originalhunks and return a list of
221 """ Prompts the user to filter the originalhunks and return a list of
222 selected hunks.
222 selected hunks.
223 *operation* is used to build ui messages to indicate to the user what
223 *operation* is used to build ui messages to indicate to the user what
224 kind of filtering they are doing: reverting, committing, shelving, etc.
224 kind of filtering they are doing: reverting, committing, shelving, etc.
225 (see patch.filterpatch).
225 (see patch.filterpatch).
226 """
226 """
227 usecurses = crecordmod.checkcurses(ui)
227 usecurses = crecordmod.checkcurses(ui)
228 testfile = ui.config('experimental', 'crecordtest')
228 testfile = ui.config('experimental', 'crecordtest')
229 oldwrite = setupwrapcolorwrite(ui)
229 oldwrite = setupwrapcolorwrite(ui)
230 try:
230 try:
231 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
231 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
232 testfile, operation)
232 testfile, operation)
233 finally:
233 finally:
234 ui.write = oldwrite
234 ui.write = oldwrite
235 return newchunks, newopts
235 return newchunks, newopts
236
236
237 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
237 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
238 filterfn, *pats, **opts):
238 filterfn, *pats, **opts):
239 opts = pycompat.byteskwargs(opts)
239 opts = pycompat.byteskwargs(opts)
240 if not ui.interactive():
240 if not ui.interactive():
241 if cmdsuggest:
241 if cmdsuggest:
242 msg = _('running non-interactively, use %s instead') % cmdsuggest
242 msg = _('running non-interactively, use %s instead') % cmdsuggest
243 else:
243 else:
244 msg = _('running non-interactively')
244 msg = _('running non-interactively')
245 raise error.Abort(msg)
245 raise error.Abort(msg)
246
246
247 # make sure username is set before going interactive
247 # make sure username is set before going interactive
248 if not opts.get('user'):
248 if not opts.get('user'):
249 ui.username() # raise exception, username not provided
249 ui.username() # raise exception, username not provided
250
250
251 def recordfunc(ui, repo, message, match, opts):
251 def recordfunc(ui, repo, message, match, opts):
252 """This is generic record driver.
252 """This is generic record driver.
253
253
254 Its job is to interactively filter local changes, and
254 Its job is to interactively filter local changes, and
255 accordingly prepare working directory into a state in which the
255 accordingly prepare the working directory into a state in which the
255 accordingly prepare the working directory into a state in which the
256 job can be delegated to a non-interactive commit command such as
257 'commit' or 'qrefresh'.
257 'commit' or 'qrefresh'.
258
258
259 After the actual job is done by non-interactive command, the
259 After the actual job is done by non-interactive command, the
260 working directory is restored to its original state.
260 working directory is restored to its original state.
261
261
262 In the end we'll record interesting changes, and everything else
262 In the end we'll record interesting changes, and everything else
263 will be left in place, so the user can continue working.
263 will be left in place, so the user can continue working.
264 """
264 """
265
265
266 checkunfinished(repo, commit=True)
266 checkunfinished(repo, commit=True)
267 wctx = repo[None]
267 wctx = repo[None]
268 merge = len(wctx.parents()) > 1
268 merge = len(wctx.parents()) > 1
269 if merge:
269 if merge:
270 raise error.Abort(_('cannot partially commit a merge '
270 raise error.Abort(_('cannot partially commit a merge '
271 '(use "hg commit" instead)'))
271 '(use "hg commit" instead)'))
272
272
273 def fail(f, msg):
273 def fail(f, msg):
274 raise error.Abort('%s: %s' % (f, msg))
274 raise error.Abort('%s: %s' % (f, msg))
275
275
276 force = opts.get('force')
276 force = opts.get('force')
277 if not force:
277 if not force:
278 vdirs = []
278 vdirs = []
279 match.explicitdir = vdirs.append
279 match.explicitdir = vdirs.append
280 match.bad = fail
280 match.bad = fail
281
281
282 status = repo.status(match=match)
282 status = repo.status(match=match)
283 if not force:
283 if not force:
284 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
284 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
285 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
285 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
286 diffopts.nodates = True
286 diffopts.nodates = True
287 diffopts.git = True
287 diffopts.git = True
288 diffopts.showfunc = True
288 diffopts.showfunc = True
289 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
289 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
290 originalchunks = patch.parsepatch(originaldiff)
290 originalchunks = patch.parsepatch(originaldiff)
291
291
292 # 1. filter patch, since we are intending to apply subset of it
292 # 1. filter patch, since we are intending to apply subset of it
293 try:
293 try:
294 chunks, newopts = filterfn(ui, originalchunks)
294 chunks, newopts = filterfn(ui, originalchunks)
295 except error.PatchError as err:
295 except error.PatchError as err:
296 raise error.Abort(_('error parsing patch: %s') % err)
296 raise error.Abort(_('error parsing patch: %s') % err)
297 opts.update(newopts)
297 opts.update(newopts)
298
298
299 # We need to keep a backup of files that have been newly added and
299 # We need to keep a backup of files that have been newly added and
300 # modified during the recording process because there is a previous
300 # modified during the recording process because there is a previous
301 # version without the edit in the workdir
301 # version without the edit in the workdir
302 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
302 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
303 contenders = set()
303 contenders = set()
304 for h in chunks:
304 for h in chunks:
305 try:
305 try:
306 contenders.update(set(h.files()))
306 contenders.update(set(h.files()))
307 except AttributeError:
307 except AttributeError:
308 pass
308 pass
309
309
310 changed = status.modified + status.added + status.removed
310 changed = status.modified + status.added + status.removed
311 newfiles = [f for f in changed if f in contenders]
311 newfiles = [f for f in changed if f in contenders]
312 if not newfiles:
312 if not newfiles:
313 ui.status(_('no changes to record\n'))
313 ui.status(_('no changes to record\n'))
314 return 0
314 return 0
315
315
316 modified = set(status.modified)
316 modified = set(status.modified)
317
317
318 # 2. backup changed files, so we can restore them in the end
318 # 2. backup changed files, so we can restore them in the end
319
319
320 if backupall:
320 if backupall:
321 tobackup = changed
321 tobackup = changed
322 else:
322 else:
323 tobackup = [f for f in newfiles if f in modified or f in \
323 tobackup = [f for f in newfiles if f in modified or f in \
324 newlyaddedandmodifiedfiles]
324 newlyaddedandmodifiedfiles]
325 backups = {}
325 backups = {}
326 if tobackup:
326 if tobackup:
327 backupdir = repo.vfs.join('record-backups')
327 backupdir = repo.vfs.join('record-backups')
328 try:
328 try:
329 os.mkdir(backupdir)
329 os.mkdir(backupdir)
330 except OSError as err:
330 except OSError as err:
331 if err.errno != errno.EEXIST:
331 if err.errno != errno.EEXIST:
332 raise
332 raise
333 try:
333 try:
334 # backup continues
334 # backup continues
335 for f in tobackup:
335 for f in tobackup:
336 fd, tmpname = pycompat.mkstemp(prefix=f.replace('/', '_') + '.',
336 fd, tmpname = pycompat.mkstemp(prefix=f.replace('/', '_') + '.',
337 dir=backupdir)
337 dir=backupdir)
338 os.close(fd)
338 os.close(fd)
339 ui.debug('backup %r as %r\n' % (f, tmpname))
339 ui.debug('backup %r as %r\n' % (f, tmpname))
340 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
340 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
341 backups[f] = tmpname
341 backups[f] = tmpname
342
342
343 fp = stringio()
343 fp = stringio()
344 for c in chunks:
344 for c in chunks:
345 fname = c.filename()
345 fname = c.filename()
346 if fname in backups:
346 if fname in backups:
347 c.write(fp)
347 c.write(fp)
348 dopatch = fp.tell()
348 dopatch = fp.tell()
349 fp.seek(0)
349 fp.seek(0)
350
350
351 # 2.5 optionally review / modify patch in text editor
351 # 2.5 optionally review / modify patch in text editor
352 if opts.get('review', False):
352 if opts.get('review', False):
353 patchtext = (crecordmod.diffhelptext
353 patchtext = (crecordmod.diffhelptext
354 + crecordmod.patchhelptext
354 + crecordmod.patchhelptext
355 + fp.read())
355 + fp.read())
356 reviewedpatch = ui.edit(patchtext, "",
356 reviewedpatch = ui.edit(patchtext, "",
357 action="diff",
357 action="diff",
358 repopath=repo.path)
358 repopath=repo.path)
359 fp.truncate(0)
359 fp.truncate(0)
360 fp.write(reviewedpatch)
360 fp.write(reviewedpatch)
361 fp.seek(0)
361 fp.seek(0)
362
362
363 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
363 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
364 # 3a. apply filtered patch to clean repo (clean)
364 # 3a. apply filtered patch to clean repo (clean)
365 if backups:
365 if backups:
366 # Equivalent to hg.revert
366 # Equivalent to hg.revert
367 m = scmutil.matchfiles(repo, backups.keys())
367 m = scmutil.matchfiles(repo, backups.keys())
368 mergemod.update(repo, repo.dirstate.p1(),
368 mergemod.update(repo, repo.dirstate.p1(), branchmerge=False,
369 False, True, matcher=m)
369 force=True, matcher=m)
370
370
371 # 3b. (apply)
371 # 3b. (apply)
372 if dopatch:
372 if dopatch:
373 try:
373 try:
374 ui.debug('applying patch\n')
374 ui.debug('applying patch\n')
375 ui.debug(fp.getvalue())
375 ui.debug(fp.getvalue())
376 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
376 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
377 except error.PatchError as err:
377 except error.PatchError as err:
378 raise error.Abort(pycompat.bytestr(err))
378 raise error.Abort(pycompat.bytestr(err))
379 del fp
379 del fp
380
380
381 # 4. We prepared working directory according to filtered
381 # 4. We prepared working directory according to filtered
382 # patch. Now is the time to delegate the job to
382 # patch. Now is the time to delegate the job to
383 # commit/qrefresh or the like!
383 # commit/qrefresh or the like!
384
384
385 # Make all of the pathnames absolute.
385 # Make all of the pathnames absolute.
386 newfiles = [repo.wjoin(nf) for nf in newfiles]
386 newfiles = [repo.wjoin(nf) for nf in newfiles]
387 return commitfunc(ui, repo, *newfiles, **pycompat.strkwargs(opts))
387 return commitfunc(ui, repo, *newfiles, **pycompat.strkwargs(opts))
388 finally:
388 finally:
389 # 5. finally restore backed-up files
389 # 5. finally restore backed-up files
390 try:
390 try:
391 dirstate = repo.dirstate
391 dirstate = repo.dirstate
392 for realname, tmpname in backups.iteritems():
392 for realname, tmpname in backups.iteritems():
393 ui.debug('restoring %r to %r\n' % (tmpname, realname))
393 ui.debug('restoring %r to %r\n' % (tmpname, realname))
394
394
395 if dirstate[realname] == 'n':
395 if dirstate[realname] == 'n':
396 # without normallookup, restoring timestamp
396 # without normallookup, restoring timestamp
397 # may cause partially committed files
397 # may cause partially committed files
398 # to be treated as unmodified
398 # to be treated as unmodified
399 dirstate.normallookup(realname)
399 dirstate.normallookup(realname)
400
400
401 # copystat=True here and above are a hack to trick any
401 # copystat=True here and above are a hack to trick any
402 # editors that have f open into thinking that we haven't modified it.
402 # editors that have f open into thinking that we haven't modified it.
403 #
403 #
404 # Also note that this is racy as an editor could notice the
404 # Also note that this is racy as an editor could notice the
405 # file's mtime before we've finished writing it.
405 # file's mtime before we've finished writing it.
406 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
406 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
407 os.unlink(tmpname)
407 os.unlink(tmpname)
408 if tobackup:
408 if tobackup:
409 os.rmdir(backupdir)
409 os.rmdir(backupdir)
410 except OSError:
410 except OSError:
411 pass
411 pass
412
412
413 def recordinwlock(ui, repo, message, match, opts):
413 def recordinwlock(ui, repo, message, match, opts):
414 with repo.wlock():
414 with repo.wlock():
415 return recordfunc(ui, repo, message, match, opts)
415 return recordfunc(ui, repo, message, match, opts)
416
416
417 return commit(ui, repo, recordinwlock, pats, opts)
417 return commit(ui, repo, recordinwlock, pats, opts)
418
418
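# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# The backup/restore dance performed by recordfunc() above (steps 2 and 5),
# reduced to plain stdlib calls.  `workdir`, `files` and the `apply_filtered`
# callback are hypothetical stand-ins; the point is only that the originals are
# copied aside (preserving stat data, like util.copyfile(..., copystat=True))
# before the working copy is reverted, and are put back afterwards.
import os
import shutil
import tempfile

def _demo_with_backups(workdir, files, apply_filtered):
    backups = {}
    backupdir = tempfile.mkdtemp(prefix='record-backups-')
    try:
        for f in files:
            tmpname = os.path.join(backupdir, f.replace('/', '_'))
            shutil.copy2(os.path.join(workdir, f), tmpname)  # keeps mtime
            backups[f] = tmpname
        # e.g. revert the working copy, apply the filtered patch, then commit
        apply_filtered()
    finally:
        for f, tmpname in backups.items():
            shutil.copy2(tmpname, os.path.join(workdir, f))
            os.unlink(tmpname)
        os.rmdir(backupdir)
# -------------------------------------------------------------------------------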
419 class dirnode(object):
419 class dirnode(object):
420 """
420 """
421 Represent a directory in the user's working copy with the information
421 Represent a directory in the user's working copy with the information
422 required for the purpose of tersing its status.
422 required for the purpose of tersing its status.
423
423
424 path is the path to the directory, without a trailing '/'
424 path is the path to the directory, without a trailing '/'
425
425
426 statuses is a set of statuses of all files in this directory (this includes
426 statuses is a set of statuses of all files in this directory (this includes
427 all the files in all the subdirectories too)
427 all the files in all the subdirectories too)
428
428
429 files is a list of files which are direct children of this directory
429 files is a list of files which are direct children of this directory
430
430
431 subdirs is a dictionary with a sub-directory name as the key and its own
431 subdirs is a dictionary with a sub-directory name as the key and its own
432 dirnode object as the value
432 dirnode object as the value
433 """
433 """
434
434
435 def __init__(self, dirpath):
435 def __init__(self, dirpath):
436 self.path = dirpath
436 self.path = dirpath
437 self.statuses = set([])
437 self.statuses = set([])
438 self.files = []
438 self.files = []
439 self.subdirs = {}
439 self.subdirs = {}
440
440
441 def _addfileindir(self, filename, status):
441 def _addfileindir(self, filename, status):
442 """Add a file in this directory as a direct child."""
442 """Add a file in this directory as a direct child."""
443 self.files.append((filename, status))
443 self.files.append((filename, status))
444
444
445 def addfile(self, filename, status):
445 def addfile(self, filename, status):
446 """
446 """
447 Add a file to this directory or to its direct parent directory.
447 Add a file to this directory or to its direct parent directory.
448
448
449 If the file is not a direct child of this directory, we traverse to the
449 If the file is not a direct child of this directory, we traverse to the
450 directory of which this file is a direct child and add the file
450 directory of which this file is a direct child and add the file
451 there.
451 there.
452 """
452 """
453
453
454 # if the filename contains a path separator, it means it's not a direct
454 # if the filename contains a path separator, it means it's not a direct
455 # child of this directory
455 # child of this directory
456 if '/' in filename:
456 if '/' in filename:
457 subdir, filep = filename.split('/', 1)
457 subdir, filep = filename.split('/', 1)
458
458
459 # does the dirnode object for subdir exist
459 # does the dirnode object for subdir exist
460 if subdir not in self.subdirs:
460 if subdir not in self.subdirs:
461 subdirpath = pathutil.join(self.path, subdir)
461 subdirpath = pathutil.join(self.path, subdir)
462 self.subdirs[subdir] = dirnode(subdirpath)
462 self.subdirs[subdir] = dirnode(subdirpath)
463
463
464 # try adding the file in subdir
464 # try adding the file in subdir
465 self.subdirs[subdir].addfile(filep, status)
465 self.subdirs[subdir].addfile(filep, status)
466
466
467 else:
467 else:
468 self._addfileindir(filename, status)
468 self._addfileindir(filename, status)
469
469
470 if status not in self.statuses:
470 if status not in self.statuses:
471 self.statuses.add(status)
471 self.statuses.add(status)
472
472
473 def iterfilepaths(self):
473 def iterfilepaths(self):
474 """Yield (status, path) for files directly under this directory."""
474 """Yield (status, path) for files directly under this directory."""
475 for f, st in self.files:
475 for f, st in self.files:
476 yield st, pathutil.join(self.path, f)
476 yield st, pathutil.join(self.path, f)
477
477
478 def tersewalk(self, terseargs):
478 def tersewalk(self, terseargs):
479 """
479 """
480 Yield (status, path) obtained by processing the status of this
480 Yield (status, path) obtained by processing the status of this
481 dirnode.
481 dirnode.
482
482
483 terseargs is the string of arguments passed by the user with `--terse`
483 terseargs is the string of arguments passed by the user with `--terse`
484 flag.
484 flag.
485
485
486 Following are the cases which can happen:
486 Following are the cases which can happen:
487
487
488 1) All the files in the directory (including all the files in its
488 1) All the files in the directory (including all the files in its
489 subdirectories) share the same status and the user has asked us to terse
489 subdirectories) share the same status and the user has asked us to terse
490 that status. -> yield (status, dirpath). dirpath will end in '/'.
490 that status. -> yield (status, dirpath). dirpath will end in '/'.
491
491
492 2) Otherwise, we do the following:
492 2) Otherwise, we do the following:
493
493
494 a) Yield (status, filepath) for all the files which are in this
494 a) Yield (status, filepath) for all the files which are in this
495 directory (only the ones in this directory, not the subdirs)
495 directory (only the ones in this directory, not the subdirs)
496
496
497 b) Recurse the function on all the subdirectories of this
497 b) Recurse the function on all the subdirectories of this
498 directory
498 directory
499 """
499 """
500
500
501 if len(self.statuses) == 1:
501 if len(self.statuses) == 1:
502 onlyst = self.statuses.pop()
502 onlyst = self.statuses.pop()
503
503
504 # Making sure we terse only when the status abbreviation is
504 # Making sure we terse only when the status abbreviation is
505 # passed as terse argument
505 # passed as terse argument
506 if onlyst in terseargs:
506 if onlyst in terseargs:
507 yield onlyst, self.path + '/'
507 yield onlyst, self.path + '/'
508 return
508 return
509
509
510 # add the files to status list
510 # add the files to status list
511 for st, fpath in self.iterfilepaths():
511 for st, fpath in self.iterfilepaths():
512 yield st, fpath
512 yield st, fpath
513
513
514 # recurse on the subdirs
514 # recurse on the subdirs
515 for dirobj in self.subdirs.values():
515 for dirobj in self.subdirs.values():
516 for st, fpath in dirobj.tersewalk(terseargs):
516 for st, fpath in dirobj.tersewalk(terseargs):
517 yield st, fpath
517 yield st, fpath
518
518
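# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# Building a small dirnode tree by hand for a hypothetical working copy in which
# everything under docs/ is unknown ('u').  String literals are shown as native
# str for brevity; the real module passes bytes.  Note that tersewalk() consumes
# the statuses set, so a tree should only be walked once.
def _demo_dirnode():
    root = dirnode('')
    root.addfile('README', 'm')
    root.addfile('docs/index.txt', 'u')
    root.addfile('docs/api/intro.txt', 'u')
    # files sitting directly in the root are never tersed
    assert list(root.iterfilepaths()) == [('m', 'README')]
    # with 'u' tersed, the whole docs/ tree collapses into a single entry
    assert list(root.subdirs['docs'].tersewalk('u')) == [('u', 'docs/')]
# -------------------------------------------------------------------------------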
519 def tersedir(statuslist, terseargs):
519 def tersedir(statuslist, terseargs):
520 """
520 """
521 Terse the status if all the files in a directory share the same status.
521 Terse the status if all the files in a directory share the same status.
522
522
523 statuslist is a scmutil.status() object which contains a list of files for
523 statuslist is a scmutil.status() object which contains a list of files for
524 each status.
524 each status.
525 terseargs is the string which is passed by the user as the argument to `--terse`
525 terseargs is the string which is passed by the user as the argument to `--terse`
526 flag.
526 flag.
527
527
528 The function makes a tree of objects of dirnode class, and at each node it
528 The function makes a tree of objects of dirnode class, and at each node it
529 stores the information required to know whether we can terse a certain
529 stores the information required to know whether we can terse a certain
530 directory or not.
530 directory or not.
531 """
531 """
532 # the order matters here as that is used to produce final list
532 # the order matters here as that is used to produce final list
533 allst = ('m', 'a', 'r', 'd', 'u', 'i', 'c')
533 allst = ('m', 'a', 'r', 'd', 'u', 'i', 'c')
534
534
535 # checking the argument validity
535 # checking the argument validity
536 for s in pycompat.bytestr(terseargs):
536 for s in pycompat.bytestr(terseargs):
537 if s not in allst:
537 if s not in allst:
538 raise error.Abort(_("'%s' not recognized") % s)
538 raise error.Abort(_("'%s' not recognized") % s)
539
539
540 # creating a dirnode object for the root of the repo
540 # creating a dirnode object for the root of the repo
541 rootobj = dirnode('')
541 rootobj = dirnode('')
542 pstatus = ('modified', 'added', 'deleted', 'clean', 'unknown',
542 pstatus = ('modified', 'added', 'deleted', 'clean', 'unknown',
543 'ignored', 'removed')
543 'ignored', 'removed')
544
544
545 tersedict = {}
545 tersedict = {}
546 for attrname in pstatus:
546 for attrname in pstatus:
547 statuschar = attrname[0:1]
547 statuschar = attrname[0:1]
548 for f in getattr(statuslist, attrname):
548 for f in getattr(statuslist, attrname):
549 rootobj.addfile(f, statuschar)
549 rootobj.addfile(f, statuschar)
550 tersedict[statuschar] = []
550 tersedict[statuschar] = []
551
551
552 # we won't be tersing the root dir, so add files in it
552 # we won't be tersing the root dir, so add files in it
553 for st, fpath in rootobj.iterfilepaths():
553 for st, fpath in rootobj.iterfilepaths():
554 tersedict[st].append(fpath)
554 tersedict[st].append(fpath)
555
555
556 # process each sub-directory and build tersedict
556 # process each sub-directory and build tersedict
557 for subdir in rootobj.subdirs.values():
557 for subdir in rootobj.subdirs.values():
558 for st, f in subdir.tersewalk(terseargs):
558 for st, f in subdir.tersewalk(terseargs):
559 tersedict[st].append(f)
559 tersedict[st].append(f)
560
560
561 tersedlist = []
561 tersedlist = []
562 for st in allst:
562 for st in allst:
563 tersedict[st].sort()
563 tersedict[st].sort()
564 tersedlist.append(tersedict[st])
564 tersedlist.append(tersedict[st])
565
565
566 return tersedlist
566 return tersedlist
567
567
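# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# What tersedir() produces for a hypothetical status in which every file under
# build/ is ignored.  The stand-in status object only needs the seven attributes
# tersedir() reads via getattr(); literals are shown as native str for brevity.
import collections

_FakeStatus = collections.namedtuple(
    '_FakeStatus',
    ['modified', 'added', 'removed', 'deleted', 'unknown', 'ignored', 'clean'])

def _demo_tersedir():
    st = _FakeStatus(modified=['setup.py'], added=[], removed=[], deleted=[],
                     unknown=[], ignored=['build/lib/a.o', 'build/lib/b.o'],
                     clean=[])
    # returned lists are ordered m, a, r, d, u, i, c
    m, a, r, d, u, i, c = tersedir(st, 'i')
    assert m == ['setup.py']
    assert i == ['build/']   # the all-ignored directory collapsed to one entry
# -------------------------------------------------------------------------------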
568 def _commentlines(raw):
568 def _commentlines(raw):
569 '''Surround lines with a comment char and a new line'''
569 '''Surround lines with a comment char and a new line'''
570 lines = raw.splitlines()
570 lines = raw.splitlines()
571 commentedlines = ['# %s' % line for line in lines]
571 commentedlines = ['# %s' % line for line in lines]
572 return '\n'.join(commentedlines) + '\n'
572 return '\n'.join(commentedlines) + '\n'
573
573
574 def _conflictsmsg(repo):
574 def _conflictsmsg(repo):
575 mergestate = mergemod.mergestate.read(repo)
575 mergestate = mergemod.mergestate.read(repo)
576 if not mergestate.active():
576 if not mergestate.active():
577 return
577 return
578
578
579 m = scmutil.match(repo[None])
579 m = scmutil.match(repo[None])
580 unresolvedlist = [f for f in mergestate.unresolved() if m(f)]
580 unresolvedlist = [f for f in mergestate.unresolved() if m(f)]
581 if unresolvedlist:
581 if unresolvedlist:
582 mergeliststr = '\n'.join(
582 mergeliststr = '\n'.join(
583 [' %s' % util.pathto(repo.root, encoding.getcwd(), path)
583 [' %s' % util.pathto(repo.root, encoding.getcwd(), path)
584 for path in sorted(unresolvedlist)])
584 for path in sorted(unresolvedlist)])
585 msg = _('''Unresolved merge conflicts:
585 msg = _('''Unresolved merge conflicts:
586
586
587 %s
587 %s
588
588
589 To mark files as resolved: hg resolve --mark FILE''') % mergeliststr
589 To mark files as resolved: hg resolve --mark FILE''') % mergeliststr
590 else:
590 else:
591 msg = _('No unresolved merge conflicts.')
591 msg = _('No unresolved merge conflicts.')
592
592
593 return _commentlines(msg)
593 return _commentlines(msg)
594
594
595 def _helpmessage(continuecmd, abortcmd):
595 def _helpmessage(continuecmd, abortcmd):
596 msg = _('To continue: %s\n'
596 msg = _('To continue: %s\n'
597 'To abort: %s') % (continuecmd, abortcmd)
597 'To abort: %s') % (continuecmd, abortcmd)
598 return _commentlines(msg)
598 return _commentlines(msg)
599
599
600 def _rebasemsg():
600 def _rebasemsg():
601 return _helpmessage('hg rebase --continue', 'hg rebase --abort')
601 return _helpmessage('hg rebase --continue', 'hg rebase --abort')
602
602
603 def _histeditmsg():
603 def _histeditmsg():
604 return _helpmessage('hg histedit --continue', 'hg histedit --abort')
604 return _helpmessage('hg histedit --continue', 'hg histedit --abort')
605
605
606 def _unshelvemsg():
606 def _unshelvemsg():
607 return _helpmessage('hg unshelve --continue', 'hg unshelve --abort')
607 return _helpmessage('hg unshelve --continue', 'hg unshelve --abort')
608
608
609 def _graftmsg():
609 def _graftmsg():
610 # tweakdefaults requires `update` to have a rev hence the `.`
610 # tweakdefaults requires `update` to have a rev hence the `.`
611 return _helpmessage('hg graft --continue', 'hg graft --abort')
611 return _helpmessage('hg graft --continue', 'hg graft --abort')
612
612
613 def _mergemsg():
613 def _mergemsg():
614 # tweakdefaults requires `update` to have a rev hence the `.`
614 # tweakdefaults requires `update` to have a rev hence the `.`
615 return _helpmessage('hg commit', 'hg merge --abort')
615 return _helpmessage('hg commit', 'hg merge --abort')
616
616
617 def _bisectmsg():
617 def _bisectmsg():
618 msg = _('To mark the changeset good: hg bisect --good\n'
618 msg = _('To mark the changeset good: hg bisect --good\n'
619 'To mark the changeset bad: hg bisect --bad\n'
619 'To mark the changeset bad: hg bisect --bad\n'
620 'To abort: hg bisect --reset\n')
620 'To abort: hg bisect --reset\n')
621 return _commentlines(msg)
621 return _commentlines(msg)
622
622
623 def fileexistspredicate(filename):
623 def fileexistspredicate(filename):
624 return lambda repo: repo.vfs.exists(filename)
624 return lambda repo: repo.vfs.exists(filename)
625
625
626 def _mergepredicate(repo):
626 def _mergepredicate(repo):
627 return len(repo[None].parents()) > 1
627 return len(repo[None].parents()) > 1
628
628
629 STATES = (
629 STATES = (
630 # (state, predicate to detect states, helpful message function)
630 # (state, predicate to detect states, helpful message function)
631 ('histedit', fileexistspredicate('histedit-state'), _histeditmsg),
631 ('histedit', fileexistspredicate('histedit-state'), _histeditmsg),
632 ('bisect', fileexistspredicate('bisect.state'), _bisectmsg),
632 ('bisect', fileexistspredicate('bisect.state'), _bisectmsg),
633 ('graft', fileexistspredicate('graftstate'), _graftmsg),
633 ('graft', fileexistspredicate('graftstate'), _graftmsg),
634 ('unshelve', fileexistspredicate('shelvedstate'), _unshelvemsg),
634 ('unshelve', fileexistspredicate('shelvedstate'), _unshelvemsg),
635 ('rebase', fileexistspredicate('rebasestate'), _rebasemsg),
635 ('rebase', fileexistspredicate('rebasestate'), _rebasemsg),
636 # The merge state is part of a list that will be iterated over.
636 # The merge state is part of a list that will be iterated over.
637 # It needs to be last because some of the other unfinished states may also
637 # It needs to be last because some of the other unfinished states may also
638 # be in a merge or update state (eg. rebase, histedit, graft, etc).
638 # be in a merge or update state (eg. rebase, histedit, graft, etc).
639 # We want those to have priority.
639 # We want those to have priority.
640 ('merge', _mergepredicate, _mergemsg),
640 ('merge', _mergepredicate, _mergemsg),
641 )
641 )
642
642
643 def _getrepostate(repo):
643 def _getrepostate(repo):
644 # experimental config: commands.status.skipstates
644 # experimental config: commands.status.skipstates
645 skip = set(repo.ui.configlist('commands', 'status.skipstates'))
645 skip = set(repo.ui.configlist('commands', 'status.skipstates'))
646 for state, statedetectionpredicate, msgfn in STATES:
646 for state, statedetectionpredicate, msgfn in STATES:
647 if state in skip:
647 if state in skip:
648 continue
648 continue
649 if statedetectionpredicate(repo):
649 if statedetectionpredicate(repo):
650 return (state, statedetectionpredicate, msgfn)
650 return (state, statedetectionpredicate, msgfn)
651
651
652 def morestatus(repo, fm):
652 def morestatus(repo, fm):
653 statetuple = _getrepostate(repo)
653 statetuple = _getrepostate(repo)
654 label = 'status.morestatus'
654 label = 'status.morestatus'
655 if statetuple:
655 if statetuple:
656 state, statedetectionpredicate, helpfulmsg = statetuple
656 state, statedetectionpredicate, helpfulmsg = statetuple
657 statemsg = _('The repository is in an unfinished *%s* state.') % state
657 statemsg = _('The repository is in an unfinished *%s* state.') % state
658 fm.plain('%s\n' % _commentlines(statemsg), label=label)
658 fm.plain('%s\n' % _commentlines(statemsg), label=label)
659 conmsg = _conflictsmsg(repo)
659 conmsg = _conflictsmsg(repo)
660 if conmsg:
660 if conmsg:
661 fm.plain('%s\n' % conmsg, label=label)
661 fm.plain('%s\n' % conmsg, label=label)
662 if helpfulmsg:
662 if helpfulmsg:
663 helpmsg = helpfulmsg()
663 helpmsg = helpfulmsg()
664 fm.plain('%s\n' % helpmsg, label=label)
664 fm.plain('%s\n' % helpmsg, label=label)
665
665
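# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# The ordering contract behind STATES and _getrepostate(): predicates are tried
# first to last and the first hit wins, which is why the generic 'merge' entry
# must stay last -- a rebase/histedit/graft in progress may also leave the
# working copy with two parents.  All names below are made up for the demo.
def _demo_first_state(states, skip, repolike):
    for state, predicate, msgfn in states:
        if state in skip:
            continue
        if predicate(repolike):
            return state
    return None

_demo_states = (
    ('rebase', lambda r: r.get('rebasestate', False), lambda: 'rebase help'),
    ('merge', lambda r: r.get('parents', 1) > 1, lambda: 'merge help'),
)
# a rebase in progress also has two parents, but 'rebase' is reported first:
assert _demo_first_state(_demo_states, set(), {'rebasestate': True, 'parents': 2}) == 'rebase'
# ... unless the user skipped it via commands.status.skipstates:
assert _demo_first_state(_demo_states, {'rebase'}, {'rebasestate': True, 'parents': 2}) == 'merge'
# -------------------------------------------------------------------------------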
666 def findpossible(cmd, table, strict=False):
666 def findpossible(cmd, table, strict=False):
667 """
667 """
668 Return cmd -> (aliases, command table entry)
668 Return cmd -> (aliases, command table entry)
669 for each matching command.
669 for each matching command.
670 Return debug commands (or their aliases) only if no normal command matches.
670 Return debug commands (or their aliases) only if no normal command matches.
671 """
671 """
672 choice = {}
672 choice = {}
673 debugchoice = {}
673 debugchoice = {}
674
674
675 if cmd in table:
675 if cmd in table:
676 # short-circuit exact matches, "log" alias beats "log|history"
676 # short-circuit exact matches, "log" alias beats "log|history"
677 keys = [cmd]
677 keys = [cmd]
678 else:
678 else:
679 keys = table.keys()
679 keys = table.keys()
680
680
681 allcmds = []
681 allcmds = []
682 for e in keys:
682 for e in keys:
683 aliases = parsealiases(e)
683 aliases = parsealiases(e)
684 allcmds.extend(aliases)
684 allcmds.extend(aliases)
685 found = None
685 found = None
686 if cmd in aliases:
686 if cmd in aliases:
687 found = cmd
687 found = cmd
688 elif not strict:
688 elif not strict:
689 for a in aliases:
689 for a in aliases:
690 if a.startswith(cmd):
690 if a.startswith(cmd):
691 found = a
691 found = a
692 break
692 break
693 if found is not None:
693 if found is not None:
694 if aliases[0].startswith("debug") or found.startswith("debug"):
694 if aliases[0].startswith("debug") or found.startswith("debug"):
695 debugchoice[found] = (aliases, table[e])
695 debugchoice[found] = (aliases, table[e])
696 else:
696 else:
697 choice[found] = (aliases, table[e])
697 choice[found] = (aliases, table[e])
698
698
699 if not choice and debugchoice:
699 if not choice and debugchoice:
700 choice = debugchoice
700 choice = debugchoice
701
701
702 return choice, allcmds
702 return choice, allcmds
703
703
704 def findcmd(cmd, table, strict=True):
704 def findcmd(cmd, table, strict=True):
705 """Return (aliases, command table entry) for command string."""
705 """Return (aliases, command table entry) for command string."""
706 choice, allcmds = findpossible(cmd, table, strict)
706 choice, allcmds = findpossible(cmd, table, strict)
707
707
708 if cmd in choice:
708 if cmd in choice:
709 return choice[cmd]
709 return choice[cmd]
710
710
711 if len(choice) > 1:
711 if len(choice) > 1:
712 clist = sorted(choice)
712 clist = sorted(choice)
713 raise error.AmbiguousCommand(cmd, clist)
713 raise error.AmbiguousCommand(cmd, clist)
714
714
715 if choice:
715 if choice:
716 return list(choice.values())[0]
716 return list(choice.values())[0]
717
717
718 raise error.UnknownCommand(cmd, allcmds)
718 raise error.UnknownCommand(cmd, allcmds)
719
719
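# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# How findpossible()/findcmd() resolve a prefix against a toy command table.  The
# table values are placeholder strings; in Mercurial they are full command table
# entries, but findcmd() only returns them without inspecting them.  Keys are
# shown as native str for brevity.
def _demo_findcmd():
    table = {
        'log|history': 'log-entry',
        'locate': 'locate-entry',
    }
    # a unique prefix resolves when strict=False allows abbreviations
    aliases, entry = findcmd('hist', table, strict=False)
    assert aliases == ['log', 'history'] and entry == 'log-entry'
    # 'lo' matches both 'log' and 'locate', so the command is ambiguous
    try:
        findcmd('lo', table, strict=False)
    except error.AmbiguousCommand:
        pass
    else:
        raise AssertionError('expected AmbiguousCommand')
# -------------------------------------------------------------------------------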
720 def changebranch(ui, repo, revs, label):
720 def changebranch(ui, repo, revs, label):
721 """ Change the branch name of given revs to label """
721 """ Change the branch name of given revs to label """
722
722
723 with repo.wlock(), repo.lock(), repo.transaction('branches'):
723 with repo.wlock(), repo.lock(), repo.transaction('branches'):
724 # abort in case of uncommitted merge or dirty wdir
724 # abort in case of uncommitted merge or dirty wdir
725 bailifchanged(repo)
725 bailifchanged(repo)
726 revs = scmutil.revrange(repo, revs)
726 revs = scmutil.revrange(repo, revs)
727 if not revs:
727 if not revs:
728 raise error.Abort("empty revision set")
728 raise error.Abort("empty revision set")
729 roots = repo.revs('roots(%ld)', revs)
729 roots = repo.revs('roots(%ld)', revs)
730 if len(roots) > 1:
730 if len(roots) > 1:
731 raise error.Abort(_("cannot change branch of non-linear revisions"))
731 raise error.Abort(_("cannot change branch of non-linear revisions"))
732 rewriteutil.precheck(repo, revs, 'change branch of')
732 rewriteutil.precheck(repo, revs, 'change branch of')
733
733
734 root = repo[roots.first()]
734 root = repo[roots.first()]
735 if not root.p1().branch() == label and label in repo.branchmap():
735 if not root.p1().branch() == label and label in repo.branchmap():
736 raise error.Abort(_("a branch of the same name already exists"))
736 raise error.Abort(_("a branch of the same name already exists"))
737
737
738 if repo.revs('merge() and %ld', revs):
738 if repo.revs('merge() and %ld', revs):
739 raise error.Abort(_("cannot change branch of a merge commit"))
739 raise error.Abort(_("cannot change branch of a merge commit"))
740 if repo.revs('obsolete() and %ld', revs):
740 if repo.revs('obsolete() and %ld', revs):
741 raise error.Abort(_("cannot change branch of an obsolete changeset"))
741 raise error.Abort(_("cannot change branch of an obsolete changeset"))
742
742
743 # make sure only topological heads
743 # make sure only topological heads
744 if repo.revs('heads(%ld) - head()', revs):
744 if repo.revs('heads(%ld) - head()', revs):
745 raise error.Abort(_("cannot change branch in middle of a stack"))
745 raise error.Abort(_("cannot change branch in middle of a stack"))
746
746
747 replacements = {}
747 replacements = {}
748 # avoid import cycle mercurial.cmdutil -> mercurial.context ->
748 # avoid import cycle mercurial.cmdutil -> mercurial.context ->
749 # mercurial.subrepo -> mercurial.cmdutil
749 # mercurial.subrepo -> mercurial.cmdutil
750 from . import context
750 from . import context
751 for rev in revs:
751 for rev in revs:
752 ctx = repo[rev]
752 ctx = repo[rev]
753 oldbranch = ctx.branch()
753 oldbranch = ctx.branch()
754 # check if ctx has same branch
754 # check if ctx has same branch
755 if oldbranch == label:
755 if oldbranch == label:
756 continue
756 continue
757
757
758 def filectxfn(repo, newctx, path):
758 def filectxfn(repo, newctx, path):
759 try:
759 try:
760 return ctx[path]
760 return ctx[path]
761 except error.ManifestLookupError:
761 except error.ManifestLookupError:
762 return None
762 return None
763
763
764 ui.debug("changing branch of '%s' from '%s' to '%s'\n"
764 ui.debug("changing branch of '%s' from '%s' to '%s'\n"
765 % (hex(ctx.node()), oldbranch, label))
765 % (hex(ctx.node()), oldbranch, label))
766 extra = ctx.extra()
766 extra = ctx.extra()
767 extra['branch_change'] = hex(ctx.node())
767 extra['branch_change'] = hex(ctx.node())
768 # While changing the branch of a set of linear commits, make sure that
768 # While changing the branch of a set of linear commits, make sure that
769 # we base our commits on the new parent rather than the old parent which
769 # we base our commits on the new parent rather than the old parent which
770 # was obsoleted while changing the branch
770 # was obsoleted while changing the branch
771 p1 = ctx.p1().node()
771 p1 = ctx.p1().node()
772 p2 = ctx.p2().node()
772 p2 = ctx.p2().node()
773 if p1 in replacements:
773 if p1 in replacements:
774 p1 = replacements[p1][0]
774 p1 = replacements[p1][0]
775 if p2 in replacements:
775 if p2 in replacements:
776 p2 = replacements[p2][0]
776 p2 = replacements[p2][0]
777
777
778 mc = context.memctx(repo, (p1, p2),
778 mc = context.memctx(repo, (p1, p2),
779 ctx.description(),
779 ctx.description(),
780 ctx.files(),
780 ctx.files(),
781 filectxfn,
781 filectxfn,
782 user=ctx.user(),
782 user=ctx.user(),
783 date=ctx.date(),
783 date=ctx.date(),
784 extra=extra,
784 extra=extra,
785 branch=label)
785 branch=label)
786
786
787 newnode = repo.commitctx(mc)
787 newnode = repo.commitctx(mc)
788 replacements[ctx.node()] = (newnode,)
788 replacements[ctx.node()] = (newnode,)
789 ui.debug('new node id is %s\n' % hex(newnode))
789 ui.debug('new node id is %s\n' % hex(newnode))
790
790
791 # create obsmarkers and move bookmarks
791 # create obsmarkers and move bookmarks
792 scmutil.cleanupnodes(repo, replacements, 'branch-change', fixphase=True)
792 scmutil.cleanupnodes(repo, replacements, 'branch-change', fixphase=True)
793
793
794 # move the working copy too
794 # move the working copy too
795 wctx = repo[None]
795 wctx = repo[None]
796 # in-progress merge is a bit too complex for now.
796 # in-progress merge is a bit too complex for now.
797 if len(wctx.parents()) == 1:
797 if len(wctx.parents()) == 1:
798 newid = replacements.get(wctx.p1().node())
798 newid = replacements.get(wctx.p1().node())
799 if newid is not None:
799 if newid is not None:
800 # avoid import cycle mercurial.cmdutil -> mercurial.hg ->
800 # avoid import cycle mercurial.cmdutil -> mercurial.hg ->
801 # mercurial.cmdutil
801 # mercurial.cmdutil
802 from . import hg
802 from . import hg
803 hg.update(repo, newid[0], quietempty=True)
803 hg.update(repo, newid[0], quietempty=True)
804
804
805 ui.status(_("changed branch on %d changesets\n") % len(replacements))
805 ui.status(_("changed branch on %d changesets\n") % len(replacements))
806
806
807 def findrepo(p):
807 def findrepo(p):
808 while not os.path.isdir(os.path.join(p, ".hg")):
808 while not os.path.isdir(os.path.join(p, ".hg")):
809 oldp, p = p, os.path.dirname(p)
809 oldp, p = p, os.path.dirname(p)
810 if p == oldp:
810 if p == oldp:
811 return None
811 return None
812
812
813 return p
813 return p
814
814
815 def bailifchanged(repo, merge=True, hint=None):
815 def bailifchanged(repo, merge=True, hint=None):
816 """ enforce the precondition that working directory must be clean.
816 """ enforce the precondition that working directory must be clean.
817
817
818 'merge' can be set to false if a pending uncommitted merge should be
818 'merge' can be set to false if a pending uncommitted merge should be
819 ignored (such as when 'update --check' runs).
819 ignored (such as when 'update --check' runs).
820
820
821 'hint' is the usual hint given to the Abort exception.
821 'hint' is the usual hint given to the Abort exception.
822 """
822 """
823
823
824 if merge and repo.dirstate.p2() != nullid:
824 if merge and repo.dirstate.p2() != nullid:
825 raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
825 raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
826 modified, added, removed, deleted = repo.status()[:4]
826 modified, added, removed, deleted = repo.status()[:4]
827 if modified or added or removed or deleted:
827 if modified or added or removed or deleted:
828 raise error.Abort(_('uncommitted changes'), hint=hint)
828 raise error.Abort(_('uncommitted changes'), hint=hint)
829 ctx = repo[None]
829 ctx = repo[None]
830 for s in sorted(ctx.substate):
830 for s in sorted(ctx.substate):
831 ctx.sub(s).bailifchanged(hint=hint)
831 ctx.sub(s).bailifchanged(hint=hint)
832
832
833 def logmessage(ui, opts):
833 def logmessage(ui, opts):
834 """ get the log message according to -m and -l option """
834 """ get the log message according to -m and -l option """
835 message = opts.get('message')
835 message = opts.get('message')
836 logfile = opts.get('logfile')
836 logfile = opts.get('logfile')
837
837
838 if message and logfile:
838 if message and logfile:
839 raise error.Abort(_('options --message and --logfile are mutually '
839 raise error.Abort(_('options --message and --logfile are mutually '
840 'exclusive'))
840 'exclusive'))
841 if not message and logfile:
841 if not message and logfile:
842 try:
842 try:
843 if isstdiofilename(logfile):
843 if isstdiofilename(logfile):
844 message = ui.fin.read()
844 message = ui.fin.read()
845 else:
845 else:
846 message = '\n'.join(util.readfile(logfile).splitlines())
846 message = '\n'.join(util.readfile(logfile).splitlines())
847 except IOError as inst:
847 except IOError as inst:
848 raise error.Abort(_("can't read commit message '%s': %s") %
848 raise error.Abort(_("can't read commit message '%s': %s") %
849 (logfile, encoding.strtolocal(inst.strerror)))
849 (logfile, encoding.strtolocal(inst.strerror)))
850 return message
850 return message
851
851
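# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# logmessage() simply arbitrates between -m and -l; with only a message given the
# ui object is never touched, so a placeholder suffices for this demo.
assert logmessage(None, {'message': 'fix the frobnicator', 'logfile': ''}) == 'fix the frobnicator'
# -------------------------------------------------------------------------------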
852 def mergeeditform(ctxorbool, baseformname):
852 def mergeeditform(ctxorbool, baseformname):
853 """return appropriate editform name (referencing a committemplate)
853 """return appropriate editform name (referencing a committemplate)
854
854
855 'ctxorbool' is either a ctx to be committed, or a bool indicating whether
855 'ctxorbool' is either a ctx to be committed, or a bool indicating whether
856 merging is committed.
856 merging is committed.
857
857
858 This returns baseformname with '.merge' appended if it is a merge,
858 This returns baseformname with '.merge' appended if it is a merge,
859 otherwise '.normal' is appended.
859 otherwise '.normal' is appended.
860 """
860 """
861 if isinstance(ctxorbool, bool):
861 if isinstance(ctxorbool, bool):
862 if ctxorbool:
862 if ctxorbool:
863 return baseformname + ".merge"
863 return baseformname + ".merge"
864 elif len(ctxorbool.parents()) > 1:
864 elif len(ctxorbool.parents()) > 1:
865 return baseformname + ".merge"
865 return baseformname + ".merge"
866
866
867 return baseformname + ".normal"
867 return baseformname + ".normal"
868
868
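# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# mergeeditform() only needs to know whether the pending commit is a merge, so a
# plain bool works just as well as a ctx for demonstration purposes.
assert mergeeditform(True, 'commit') == 'commit.merge'
assert mergeeditform(False, 'commit') == 'commit.normal'
# -------------------------------------------------------------------------------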
869 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
869 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
870 editform='', **opts):
870 editform='', **opts):
871 """get appropriate commit message editor according to '--edit' option
871 """get appropriate commit message editor according to '--edit' option
872
872
873 'finishdesc' is a function to be called with edited commit message
873 'finishdesc' is a function to be called with edited commit message
874 (= 'description' of the new changeset) just after editing, but
874 (= 'description' of the new changeset) just after editing, but
875 before checking empty-ness. It should return the actual text to be
875 before checking empty-ness. It should return the actual text to be
876 stored into history. This allows the description to be changed before
876 stored into history. This allows the description to be changed before
877 storing.
877 storing.
878
878
879 'extramsg' is an extra message to be shown in the editor instead of
879 'extramsg' is an extra message to be shown in the editor instead of
880 'Leave message empty to abort commit' line. 'HG: ' prefix and EOL
880 'Leave message empty to abort commit' line. 'HG: ' prefix and EOL
881 is automatically added.
881 is automatically added.
882
882
883 'editform' is a dot-separated list of names, to distinguish
883 'editform' is a dot-separated list of names, to distinguish
884 the purpose of commit text editing.
884 the purpose of commit text editing.
885
885
886 'getcommiteditor' returns 'commitforceeditor' regardless of
886 'getcommiteditor' returns 'commitforceeditor' regardless of
887 'edit', if one of 'finishdesc' or 'extramsg' is specified, because
887 'edit', if one of 'finishdesc' or 'extramsg' is specified, because
888 they are specific to usage in MQ.
888 they are specific to usage in MQ.
889 """
889 """
890 if edit or finishdesc or extramsg:
890 if edit or finishdesc or extramsg:
891 return lambda r, c, s: commitforceeditor(r, c, s,
891 return lambda r, c, s: commitforceeditor(r, c, s,
892 finishdesc=finishdesc,
892 finishdesc=finishdesc,
893 extramsg=extramsg,
893 extramsg=extramsg,
894 editform=editform)
894 editform=editform)
895 elif editform:
895 elif editform:
896 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
896 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
897 else:
897 else:
898 return commiteditor
898 return commiteditor
899
899
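# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# How a commit-like command typically wires getcommiteditor() into repo.commit().
# `opts` is a hypothetical options dict (it may carry 'edit', 'user', 'date', ...);
# the returned editor callable is invoked later by the commit machinery.
def _demo_commit_with_editor(ui, repo, message, match, opts):
    editor = getcommiteditor(editform='commit.normal',
                             **pycompat.strkwargs(opts))
    return repo.commit(message, opts.get('user'), opts.get('date'), match,
                       editor=editor)
# -------------------------------------------------------------------------------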
900 def _escapecommandtemplate(tmpl):
900 def _escapecommandtemplate(tmpl):
901 parts = []
901 parts = []
902 for typ, start, end in templater.scantemplate(tmpl, raw=True):
902 for typ, start, end in templater.scantemplate(tmpl, raw=True):
903 if typ == b'string':
903 if typ == b'string':
904 parts.append(stringutil.escapestr(tmpl[start:end]))
904 parts.append(stringutil.escapestr(tmpl[start:end]))
905 else:
905 else:
906 parts.append(tmpl[start:end])
906 parts.append(tmpl[start:end])
907 return b''.join(parts)
907 return b''.join(parts)
908
908
909 def rendercommandtemplate(ui, tmpl, props):
909 def rendercommandtemplate(ui, tmpl, props):
910 r"""Expand a literal template 'tmpl' in a way suitable for command line
910 r"""Expand a literal template 'tmpl' in a way suitable for command line
911
911
912 '\' in outermost string is not taken as an escape character because it
912 '\' in outermost string is not taken as an escape character because it
913 is a directory separator on Windows.
913 is a directory separator on Windows.
914
914
915 >>> from . import ui as uimod
915 >>> from . import ui as uimod
916 >>> ui = uimod.ui()
916 >>> ui = uimod.ui()
917 >>> rendercommandtemplate(ui, b'c:\\{path}', {b'path': b'foo'})
917 >>> rendercommandtemplate(ui, b'c:\\{path}', {b'path': b'foo'})
918 'c:\\foo'
918 'c:\\foo'
919 >>> rendercommandtemplate(ui, b'{"c:\\{path}"}', {'path': b'foo'})
919 >>> rendercommandtemplate(ui, b'{"c:\\{path}"}', {'path': b'foo'})
920 'c:{path}'
920 'c:{path}'
921 """
921 """
922 if not tmpl:
922 if not tmpl:
923 return tmpl
923 return tmpl
924 t = formatter.maketemplater(ui, _escapecommandtemplate(tmpl))
924 t = formatter.maketemplater(ui, _escapecommandtemplate(tmpl))
925 return t.renderdefault(props)
925 return t.renderdefault(props)
926
926
927 def rendertemplate(ctx, tmpl, props=None):
927 def rendertemplate(ctx, tmpl, props=None):
928 """Expand a literal template 'tmpl' byte-string against one changeset
928 """Expand a literal template 'tmpl' byte-string against one changeset
929
929
930 Each props item must be a stringify-able value or a callable returning
930 Each props item must be a stringify-able value or a callable returning
931 such a value, i.e. no bare list or dict should be passed.
931 such a value, i.e. no bare list or dict should be passed.
932 """
932 """
933 repo = ctx.repo()
933 repo = ctx.repo()
934 tres = formatter.templateresources(repo.ui, repo)
934 tres = formatter.templateresources(repo.ui, repo)
935 t = formatter.maketemplater(repo.ui, tmpl, defaults=templatekw.keywords,
935 t = formatter.maketemplater(repo.ui, tmpl, defaults=templatekw.keywords,
936 resources=tres)
936 resources=tres)
937 mapping = {'ctx': ctx}
937 mapping = {'ctx': ctx}
938 if props:
938 if props:
939 mapping.update(props)
939 mapping.update(props)
940 return t.renderdefault(mapping)
940 return t.renderdefault(mapping)
941
941
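# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# rendertemplate() expands one literal template against a single changeset; extra
# scalar props become template keywords.  It needs a real repository, so this is
# shown as a helper rather than executed inline.
def _demo_rendertemplate(repo):
    ctx = repo['.']
    # {index} comes from the props dict supplied by the caller
    return rendertemplate(ctx, b'{index}: {rev}:{node|short} {desc|firstline}\n',
                          {b'index': b'1'})
# -------------------------------------------------------------------------------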
942 def _buildfntemplate(pat, total=None, seqno=None, revwidth=None, pathname=None):
942 def _buildfntemplate(pat, total=None, seqno=None, revwidth=None, pathname=None):
943 r"""Convert old-style filename format string to template string
943 r"""Convert old-style filename format string to template string
944
944
945 >>> _buildfntemplate(b'foo-%b-%n.patch', seqno=0)
945 >>> _buildfntemplate(b'foo-%b-%n.patch', seqno=0)
946 'foo-{reporoot|basename}-{seqno}.patch'
946 'foo-{reporoot|basename}-{seqno}.patch'
947 >>> _buildfntemplate(b'%R{tags % "{tag}"}%H')
947 >>> _buildfntemplate(b'%R{tags % "{tag}"}%H')
948 '{rev}{tags % "{tag}"}{node}'
948 '{rev}{tags % "{tag}"}{node}'
949
949
950 '\' in outermost strings has to be escaped because it is a directory
950 '\' in outermost strings has to be escaped because it is a directory
951 separator on Windows:
951 separator on Windows:
952
952
953 >>> _buildfntemplate(b'c:\\tmp\\%R\\%n.patch', seqno=0)
953 >>> _buildfntemplate(b'c:\\tmp\\%R\\%n.patch', seqno=0)
954 'c:\\\\tmp\\\\{rev}\\\\{seqno}.patch'
954 'c:\\\\tmp\\\\{rev}\\\\{seqno}.patch'
955 >>> _buildfntemplate(b'\\\\foo\\bar.patch')
955 >>> _buildfntemplate(b'\\\\foo\\bar.patch')
956 '\\\\\\\\foo\\\\bar.patch'
956 '\\\\\\\\foo\\\\bar.patch'
957 >>> _buildfntemplate(b'\\{tags % "{tag}"}')
957 >>> _buildfntemplate(b'\\{tags % "{tag}"}')
958 '\\\\{tags % "{tag}"}'
958 '\\\\{tags % "{tag}"}'
959
959
960 but inner strings follow the template rules (i.e. '\' is taken as an
960 but inner strings follow the template rules (i.e. '\' is taken as an
961 escape character):
961 escape character):
962
962
963 >>> _buildfntemplate(br'{"c:\tmp"}', seqno=0)
963 >>> _buildfntemplate(br'{"c:\tmp"}', seqno=0)
964 '{"c:\\tmp"}'
964 '{"c:\\tmp"}'
965 """
965 """
966 expander = {
966 expander = {
967 b'H': b'{node}',
967 b'H': b'{node}',
968 b'R': b'{rev}',
968 b'R': b'{rev}',
969 b'h': b'{node|short}',
969 b'h': b'{node|short}',
970 b'm': br'{sub(r"[^\w]", "_", desc|firstline)}',
970 b'm': br'{sub(r"[^\w]", "_", desc|firstline)}',
971 b'r': b'{if(revwidth, pad(rev, revwidth, "0", left=True), rev)}',
971 b'r': b'{if(revwidth, pad(rev, revwidth, "0", left=True), rev)}',
972 b'%': b'%',
972 b'%': b'%',
973 b'b': b'{reporoot|basename}',
973 b'b': b'{reporoot|basename}',
974 }
974 }
975 if total is not None:
975 if total is not None:
976 expander[b'N'] = b'{total}'
976 expander[b'N'] = b'{total}'
977 if seqno is not None:
977 if seqno is not None:
978 expander[b'n'] = b'{seqno}'
978 expander[b'n'] = b'{seqno}'
979 if total is not None and seqno is not None:
979 if total is not None and seqno is not None:
980 expander[b'n'] = b'{pad(seqno, total|stringify|count, "0", left=True)}'
980 expander[b'n'] = b'{pad(seqno, total|stringify|count, "0", left=True)}'
981 if pathname is not None:
981 if pathname is not None:
982 expander[b's'] = b'{pathname|basename}'
982 expander[b's'] = b'{pathname|basename}'
983 expander[b'd'] = b'{if(pathname|dirname, pathname|dirname, ".")}'
983 expander[b'd'] = b'{if(pathname|dirname, pathname|dirname, ".")}'
984 expander[b'p'] = b'{pathname}'
984 expander[b'p'] = b'{pathname}'
985
985
986 newname = []
986 newname = []
987 for typ, start, end in templater.scantemplate(pat, raw=True):
987 for typ, start, end in templater.scantemplate(pat, raw=True):
988 if typ != b'string':
988 if typ != b'string':
989 newname.append(pat[start:end])
989 newname.append(pat[start:end])
990 continue
990 continue
991 i = start
991 i = start
992 while i < end:
992 while i < end:
993 n = pat.find(b'%', i, end)
993 n = pat.find(b'%', i, end)
994 if n < 0:
994 if n < 0:
995 newname.append(stringutil.escapestr(pat[i:end]))
995 newname.append(stringutil.escapestr(pat[i:end]))
996 break
996 break
997 newname.append(stringutil.escapestr(pat[i:n]))
997 newname.append(stringutil.escapestr(pat[i:n]))
998 if n + 2 > end:
998 if n + 2 > end:
999 raise error.Abort(_("incomplete format spec in output "
999 raise error.Abort(_("incomplete format spec in output "
1000 "filename"))
1000 "filename"))
1001 c = pat[n + 1:n + 2]
1001 c = pat[n + 1:n + 2]
1002 i = n + 2
1002 i = n + 2
1003 try:
1003 try:
1004 newname.append(expander[c])
1004 newname.append(expander[c])
1005 except KeyError:
1005 except KeyError:
1006 raise error.Abort(_("invalid format spec '%%%s' in output "
1006 raise error.Abort(_("invalid format spec '%%%s' in output "
1007 "filename") % c)
1007 "filename") % c)
1008 return ''.join(newname)
1008 return ''.join(newname)
1009
1009
1010 def makefilename(ctx, pat, **props):
1010 def makefilename(ctx, pat, **props):
1011 if not pat:
1011 if not pat:
1012 return pat
1012 return pat
1013 tmpl = _buildfntemplate(pat, **props)
1013 tmpl = _buildfntemplate(pat, **props)
1014 # BUG: alias expansion shouldn't be made against template fragments
1014 # BUG: alias expansion shouldn't be made against template fragments
1015 # rewritten from %-format strings, but we have no easy way to partially
1015 # rewritten from %-format strings, but we have no easy way to partially
1016 # disable the expansion.
1016 # disable the expansion.
1017 return rendertemplate(ctx, tmpl, pycompat.byteskwargs(props))
1017 return rendertemplate(ctx, tmpl, pycompat.byteskwargs(props))
1018
1018
1019 def isstdiofilename(pat):
1019 def isstdiofilename(pat):
1020 """True if the given pat looks like a filename denoting stdin/stdout"""
1020 """True if the given pat looks like a filename denoting stdin/stdout"""
1021 return not pat or pat == '-'
1021 return not pat or pat == '-'
1022
1022
1023 class _unclosablefile(object):
1023 class _unclosablefile(object):
1024 def __init__(self, fp):
1024 def __init__(self, fp):
1025 self._fp = fp
1025 self._fp = fp
1026
1026
1027 def close(self):
1027 def close(self):
1028 pass
1028 pass
1029
1029
1030 def __iter__(self):
1030 def __iter__(self):
1031 return iter(self._fp)
1031 return iter(self._fp)
1032
1032
1033 def __getattr__(self, attr):
1033 def __getattr__(self, attr):
1034 return getattr(self._fp, attr)
1034 return getattr(self._fp, attr)
1035
1035
1036 def __enter__(self):
1036 def __enter__(self):
1037 return self
1037 return self
1038
1038
1039 def __exit__(self, exc_type, exc_value, exc_tb):
1039 def __exit__(self, exc_type, exc_value, exc_tb):
1040 pass
1040 pass
1041
1041
1042 def makefileobj(ctx, pat, mode='wb', **props):
1042 def makefileobj(ctx, pat, mode='wb', **props):
1043 writable = mode not in ('r', 'rb')
1043 writable = mode not in ('r', 'rb')
1044
1044
1045 if isstdiofilename(pat):
1045 if isstdiofilename(pat):
1046 repo = ctx.repo()
1046 repo = ctx.repo()
1047 if writable:
1047 if writable:
1048 fp = repo.ui.fout
1048 fp = repo.ui.fout
1049 else:
1049 else:
1050 fp = repo.ui.fin
1050 fp = repo.ui.fin
1051 return _unclosablefile(fp)
1051 return _unclosablefile(fp)
1052 fn = makefilename(ctx, pat, **props)
1052 fn = makefilename(ctx, pat, **props)
1053 return open(fn, mode)
1053 return open(fn, mode)
1054
1054
1055 def openstorage(repo, cmd, file_, opts, returnrevlog=False):
1055 def openstorage(repo, cmd, file_, opts, returnrevlog=False):
1056 """opens the changelog, manifest, a filelog or a given revlog"""
1056 """opens the changelog, manifest, a filelog or a given revlog"""
1057 cl = opts['changelog']
1057 cl = opts['changelog']
1058 mf = opts['manifest']
1058 mf = opts['manifest']
1059 dir = opts['dir']
1059 dir = opts['dir']
1060 msg = None
1060 msg = None
1061 if cl and mf:
1061 if cl and mf:
1062 msg = _('cannot specify --changelog and --manifest at the same time')
1062 msg = _('cannot specify --changelog and --manifest at the same time')
1063 elif cl and dir:
1063 elif cl and dir:
1064 msg = _('cannot specify --changelog and --dir at the same time')
1064 msg = _('cannot specify --changelog and --dir at the same time')
1065 elif cl or mf or dir:
1065 elif cl or mf or dir:
1066 if file_:
1066 if file_:
1067 msg = _('cannot specify filename with --changelog or --manifest')
1067 msg = _('cannot specify filename with --changelog or --manifest')
1068 elif not repo:
1068 elif not repo:
1069 msg = _('cannot specify --changelog or --manifest or --dir '
1069 msg = _('cannot specify --changelog or --manifest or --dir '
1070 'without a repository')
1070 'without a repository')
1071 if msg:
1071 if msg:
1072 raise error.Abort(msg)
1072 raise error.Abort(msg)
1073
1073
1074 r = None
1074 r = None
1075 if repo:
1075 if repo:
1076 if cl:
1076 if cl:
1077 r = repo.unfiltered().changelog
1077 r = repo.unfiltered().changelog
1078 elif dir:
1078 elif dir:
1079 if 'treemanifest' not in repo.requirements:
1079 if 'treemanifest' not in repo.requirements:
1080 raise error.Abort(_("--dir can only be used on repos with "
1080 raise error.Abort(_("--dir can only be used on repos with "
1081 "treemanifest enabled"))
1081 "treemanifest enabled"))
1082 if not dir.endswith('/'):
1082 if not dir.endswith('/'):
1083 dir = dir + '/'
1083 dir = dir + '/'
1084 dirlog = repo.manifestlog.getstorage(dir)
1084 dirlog = repo.manifestlog.getstorage(dir)
1085 if len(dirlog):
1085 if len(dirlog):
1086 r = dirlog
1086 r = dirlog
1087 elif mf:
1087 elif mf:
1088 r = repo.manifestlog.getstorage(b'')
1088 r = repo.manifestlog.getstorage(b'')
1089 elif file_:
1089 elif file_:
1090 filelog = repo.file(file_)
1090 filelog = repo.file(file_)
1091 if len(filelog):
1091 if len(filelog):
1092 r = filelog
1092 r = filelog
1093
1093
1094 # Not all storage may be revlogs. If requested, try to return an actual
1094 # Not all storage may be revlogs. If requested, try to return an actual
1095 # revlog instance.
1095 # revlog instance.
1096 if returnrevlog:
1096 if returnrevlog:
1097 if isinstance(r, revlog.revlog):
1097 if isinstance(r, revlog.revlog):
1098 pass
1098 pass
1099 elif util.safehasattr(r, '_revlog'):
1099 elif util.safehasattr(r, '_revlog'):
1100 r = r._revlog
1100 r = r._revlog
1101 elif r is not None:
1101 elif r is not None:
1102 raise error.Abort(_('%r does not appear to be a revlog') % r)
1102 raise error.Abort(_('%r does not appear to be a revlog') % r)
1103
1103
1104 if not r:
1104 if not r:
1105 if not returnrevlog:
1105 if not returnrevlog:
1106 raise error.Abort(_('cannot give path to non-revlog'))
1106 raise error.Abort(_('cannot give path to non-revlog'))
1107
1107
1108 if not file_:
1108 if not file_:
1109 raise error.CommandError(cmd, _('invalid arguments'))
1109 raise error.CommandError(cmd, _('invalid arguments'))
1110 if not os.path.isfile(file_):
1110 if not os.path.isfile(file_):
1111 raise error.Abort(_("revlog '%s' not found") % file_)
1111 raise error.Abort(_("revlog '%s' not found") % file_)
1112 r = revlog.revlog(vfsmod.vfs(encoding.getcwd(), audit=False),
1112 r = revlog.revlog(vfsmod.vfs(encoding.getcwd(), audit=False),
1113 file_[:-2] + ".i")
1113 file_[:-2] + ".i")
1114 return r
1114 return r
1115
1115
1116 def openrevlog(repo, cmd, file_, opts):
1116 def openrevlog(repo, cmd, file_, opts):
1117 """Obtain a revlog backing storage of an item.
1117 """Obtain a revlog backing storage of an item.
1118
1118
1119 This is similar to ``openstorage()`` except it always returns a revlog.
1119 This is similar to ``openstorage()`` except it always returns a revlog.
1120
1120
1121 In most cases, a caller cares about the main storage object - not the
1121 In most cases, a caller cares about the main storage object - not the
1122 revlog backing it. Therefore, this function should only be used by code
1122 revlog backing it. Therefore, this function should only be used by code
1123 that needs to examine low-level revlog implementation details. e.g. debug
1123 that needs to examine low-level revlog implementation details. e.g. debug
1124 commands.
1124 commands.
1125 """
1125 """
1126 return openstorage(repo, cmd, file_, opts, returnrevlog=True)
1126 return openstorage(repo, cmd, file_, opts, returnrevlog=True)
1127
1127
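# --- Illustrative sketch, not part of cmdutil.py -------------------------------
# How a debug command typically calls openrevlog().  The opts dict must carry the
# 'changelog', 'manifest' and 'dir' keys even when they are unset; file_ may name
# a tracked file or point at a .i file.  'debugdemo' is a made-up command name.
def _demo_openrevlog(repo, file_):
    opts = {'changelog': False, 'manifest': False, 'dir': ''}
    r = openrevlog(repo, b'debugdemo', file_, opts)
    return len(r)   # number of revisions in the backing revlog
# -------------------------------------------------------------------------------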
1128 def copy(ui, repo, pats, opts, rename=False):
1128 def copy(ui, repo, pats, opts, rename=False):
1129 # called with the repo lock held
1129 # called with the repo lock held
1130 #
1130 #
1131 # hgsep => pathname that uses "/" to separate directories
1131 # hgsep => pathname that uses "/" to separate directories
1132 # ossep => pathname that uses os.sep to separate directories
1132 # ossep => pathname that uses os.sep to separate directories
1133 cwd = repo.getcwd()
1133 cwd = repo.getcwd()
1134 targets = {}
1134 targets = {}
1135 after = opts.get("after")
1135 after = opts.get("after")
1136 dryrun = opts.get("dry_run")
1136 dryrun = opts.get("dry_run")
1137 wctx = repo[None]
1137 wctx = repo[None]
1138
1138
1139 def walkpat(pat):
1139 def walkpat(pat):
1140 srcs = []
1140 srcs = []
1141 if after:
1141 if after:
1142 badstates = '?'
1142 badstates = '?'
1143 else:
1143 else:
1144 badstates = '?r'
1144 badstates = '?r'
1145 m = scmutil.match(wctx, [pat], opts, globbed=True)
1145 m = scmutil.match(wctx, [pat], opts, globbed=True)
1146 for abs in wctx.walk(m):
1146 for abs in wctx.walk(m):
1147 state = repo.dirstate[abs]
1147 state = repo.dirstate[abs]
1148 rel = m.rel(abs)
1148 rel = m.rel(abs)
1149 exact = m.exact(abs)
1149 exact = m.exact(abs)
1150 if state in badstates:
1150 if state in badstates:
1151 if exact and state == '?':
1151 if exact and state == '?':
1152 ui.warn(_('%s: not copying - file is not managed\n') % rel)
1152 ui.warn(_('%s: not copying - file is not managed\n') % rel)
1153 if exact and state == 'r':
1153 if exact and state == 'r':
1154 ui.warn(_('%s: not copying - file has been marked for'
1154 ui.warn(_('%s: not copying - file has been marked for'
1155 ' remove\n') % rel)
1155 ' remove\n') % rel)
1156 continue
1156 continue
1157 # abs: hgsep
1157 # abs: hgsep
1158 # rel: ossep
1158 # rel: ossep
1159 srcs.append((abs, rel, exact))
1159 srcs.append((abs, rel, exact))
1160 return srcs
1160 return srcs
1161
1161
1162 # abssrc: hgsep
1162 # abssrc: hgsep
1163 # relsrc: ossep
1163 # relsrc: ossep
1164 # otarget: ossep
1164 # otarget: ossep
1165 def copyfile(abssrc, relsrc, otarget, exact):
1165 def copyfile(abssrc, relsrc, otarget, exact):
1166 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
1166 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
1167 if '/' in abstarget:
1167 if '/' in abstarget:
1168 # We cannot normalize abstarget itself, as this would prevent
1168 # We cannot normalize abstarget itself, as this would prevent
1169 # case-only renames, like a => A.
1169 # case-only renames, like a => A.
1170 abspath, absname = abstarget.rsplit('/', 1)
1170 abspath, absname = abstarget.rsplit('/', 1)
1171 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
1171 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
1172 reltarget = repo.pathto(abstarget, cwd)
1172 reltarget = repo.pathto(abstarget, cwd)
1173 target = repo.wjoin(abstarget)
1173 target = repo.wjoin(abstarget)
1174 src = repo.wjoin(abssrc)
1174 src = repo.wjoin(abssrc)
1175 state = repo.dirstate[abstarget]
1175 state = repo.dirstate[abstarget]
1176
1176
1177 scmutil.checkportable(ui, abstarget)
1177 scmutil.checkportable(ui, abstarget)
1178
1178
1179 # check for collisions
1179 # check for collisions
1180 prevsrc = targets.get(abstarget)
1180 prevsrc = targets.get(abstarget)
1181 if prevsrc is not None:
1181 if prevsrc is not None:
1182 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
1182 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
1183 (reltarget, repo.pathto(abssrc, cwd),
1183 (reltarget, repo.pathto(abssrc, cwd),
1184 repo.pathto(prevsrc, cwd)))
1184 repo.pathto(prevsrc, cwd)))
1185 return True # report a failure
1185 return True # report a failure
1186
1186
1187 # check for overwrites
1187 # check for overwrites
1188 exists = os.path.lexists(target)
1188 exists = os.path.lexists(target)
1189 samefile = False
1189 samefile = False
1190 if exists and abssrc != abstarget:
1190 if exists and abssrc != abstarget:
1191 if (repo.dirstate.normalize(abssrc) ==
1191 if (repo.dirstate.normalize(abssrc) ==
1192 repo.dirstate.normalize(abstarget)):
1192 repo.dirstate.normalize(abstarget)):
1193 if not rename:
1193 if not rename:
1194 ui.warn(_("%s: can't copy - same file\n") % reltarget)
1194 ui.warn(_("%s: can't copy - same file\n") % reltarget)
1195 return True # report a failure
1195 return True # report a failure
1196 exists = False
1196 exists = False
1197 samefile = True
1197 samefile = True
1198
1198
1199 if not after and exists or after and state in 'mn':
1199 if not after and exists or after and state in 'mn':
1200 if not opts['force']:
1200 if not opts['force']:
1201 if state in 'mn':
1201 if state in 'mn':
1202 msg = _('%s: not overwriting - file already committed\n')
1202 msg = _('%s: not overwriting - file already committed\n')
1203 if after:
1203 if after:
1204 flags = '--after --force'
1204 flags = '--after --force'
1205 else:
1205 else:
1206 flags = '--force'
1206 flags = '--force'
1207 if rename:
1207 if rename:
1208 hint = _("('hg rename %s' to replace the file by "
1208 hint = _("('hg rename %s' to replace the file by "
1209 'recording a rename)\n') % flags
1209 'recording a rename)\n') % flags
1210 else:
1210 else:
1211 hint = _("('hg copy %s' to replace the file by "
1211 hint = _("('hg copy %s' to replace the file by "
1212 'recording a copy)\n') % flags
1212 'recording a copy)\n') % flags
1213 else:
1213 else:
1214 msg = _('%s: not overwriting - file exists\n')
1214 msg = _('%s: not overwriting - file exists\n')
1215 if rename:
1215 if rename:
1216 hint = _("('hg rename --after' to record the rename)\n")
1216 hint = _("('hg rename --after' to record the rename)\n")
1217 else:
1217 else:
1218 hint = _("('hg copy --after' to record the copy)\n")
1218 hint = _("('hg copy --after' to record the copy)\n")
1219 ui.warn(msg % reltarget)
1219 ui.warn(msg % reltarget)
1220 ui.warn(hint)
1220 ui.warn(hint)
1221 return True # report a failure
1221 return True # report a failure
1222
1222
1223 if after:
1223 if after:
1224 if not exists:
1224 if not exists:
1225 if rename:
1225 if rename:
1226 ui.warn(_('%s: not recording move - %s does not exist\n') %
1226 ui.warn(_('%s: not recording move - %s does not exist\n') %
1227 (relsrc, reltarget))
1227 (relsrc, reltarget))
1228 else:
1228 else:
1229 ui.warn(_('%s: not recording copy - %s does not exist\n') %
1229 ui.warn(_('%s: not recording copy - %s does not exist\n') %
1230 (relsrc, reltarget))
1230 (relsrc, reltarget))
1231 return True # report a failure
1231 return True # report a failure
1232 elif not dryrun:
1232 elif not dryrun:
1233 try:
1233 try:
1234 if exists:
1234 if exists:
1235 os.unlink(target)
1235 os.unlink(target)
1236 targetdir = os.path.dirname(target) or '.'
1236 targetdir = os.path.dirname(target) or '.'
1237 if not os.path.isdir(targetdir):
1237 if not os.path.isdir(targetdir):
1238 os.makedirs(targetdir)
1238 os.makedirs(targetdir)
1239 if samefile:
1239 if samefile:
1240 tmp = target + "~hgrename"
1240 tmp = target + "~hgrename"
1241 os.rename(src, tmp)
1241 os.rename(src, tmp)
1242 os.rename(tmp, target)
1242 os.rename(tmp, target)
1243 else:
1243 else:
1244 # Preserve stat info on renames, not on copies; this matches
1244 # Preserve stat info on renames, not on copies; this matches
1245 # Linux CLI behavior.
1245 # Linux CLI behavior.
1246 util.copyfile(src, target, copystat=rename)
1246 util.copyfile(src, target, copystat=rename)
1247 srcexists = True
1247 srcexists = True
1248 except IOError as inst:
1248 except IOError as inst:
1249 if inst.errno == errno.ENOENT:
1249 if inst.errno == errno.ENOENT:
1250 ui.warn(_('%s: deleted in working directory\n') % relsrc)
1250 ui.warn(_('%s: deleted in working directory\n') % relsrc)
1251 srcexists = False
1251 srcexists = False
1252 else:
1252 else:
1253 ui.warn(_('%s: cannot copy - %s\n') %
1253 ui.warn(_('%s: cannot copy - %s\n') %
1254 (relsrc, encoding.strtolocal(inst.strerror)))
1254 (relsrc, encoding.strtolocal(inst.strerror)))
1255 if rename:
1255 if rename:
1256 hint = _("('hg rename --after' to record the rename)\n")
1256 hint = _("('hg rename --after' to record the rename)\n")
1257 else:
1257 else:
1258 hint = _("('hg copy --after' to record the copy)\n")
1258 hint = _("('hg copy --after' to record the copy)\n")
1259 return True # report a failure
1259 return True # report a failure
1260
1260
1261 if ui.verbose or not exact:
1261 if ui.verbose or not exact:
1262 if rename:
1262 if rename:
1263 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
1263 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
1264 else:
1264 else:
1265 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
1265 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
1266
1266
1267 targets[abstarget] = abssrc
1267 targets[abstarget] = abssrc
1268
1268
1269 # fix up dirstate
1269 # fix up dirstate
1270 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
1270 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
1271 dryrun=dryrun, cwd=cwd)
1271 dryrun=dryrun, cwd=cwd)
1272 if rename and not dryrun:
1272 if rename and not dryrun:
1273 if not after and srcexists and not samefile:
1273 if not after and srcexists and not samefile:
1274 rmdir = repo.ui.configbool('experimental', 'removeemptydirs')
1274 rmdir = repo.ui.configbool('experimental', 'removeemptydirs')
1275 repo.wvfs.unlinkpath(abssrc, rmdir=rmdir)
1275 repo.wvfs.unlinkpath(abssrc, rmdir=rmdir)
1276 wctx.forget([abssrc])
1276 wctx.forget([abssrc])
1277
1277
1278 # pat: ossep
1279 # dest: ossep
1280 # srcs: list of (hgsep, hgsep, ossep, bool)
1281 # return: function that takes hgsep and returns ossep
1282 def targetpathfn(pat, dest, srcs):
1282 def targetpathfn(pat, dest, srcs):
1283 if os.path.isdir(pat):
1283 if os.path.isdir(pat):
1284 abspfx = pathutil.canonpath(repo.root, cwd, pat)
1284 abspfx = pathutil.canonpath(repo.root, cwd, pat)
1285 abspfx = util.localpath(abspfx)
1285 abspfx = util.localpath(abspfx)
1286 if destdirexists:
1286 if destdirexists:
1287 striplen = len(os.path.split(abspfx)[0])
1287 striplen = len(os.path.split(abspfx)[0])
1288 else:
1288 else:
1289 striplen = len(abspfx)
1289 striplen = len(abspfx)
1290 if striplen:
1290 if striplen:
1291 striplen += len(pycompat.ossep)
1291 striplen += len(pycompat.ossep)
1292 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
1292 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
1293 elif destdirexists:
1293 elif destdirexists:
1294 res = lambda p: os.path.join(dest,
1294 res = lambda p: os.path.join(dest,
1295 os.path.basename(util.localpath(p)))
1295 os.path.basename(util.localpath(p)))
1296 else:
1296 else:
1297 res = lambda p: dest
1297 res = lambda p: dest
1298 return res
1298 return res
1299
1299
1300 # pat: ossep
1301 # dest: ossep
1302 # srcs: list of (hgsep, hgsep, ossep, bool)
1303 # return: function that takes hgsep and returns ossep
1304 def targetpathafterfn(pat, dest, srcs):
1304 def targetpathafterfn(pat, dest, srcs):
1305 if matchmod.patkind(pat):
1305 if matchmod.patkind(pat):
1306 # a mercurial pattern
1306 # a mercurial pattern
1307 res = lambda p: os.path.join(dest,
1307 res = lambda p: os.path.join(dest,
1308 os.path.basename(util.localpath(p)))
1308 os.path.basename(util.localpath(p)))
1309 else:
1309 else:
1310 abspfx = pathutil.canonpath(repo.root, cwd, pat)
1310 abspfx = pathutil.canonpath(repo.root, cwd, pat)
1311 if len(abspfx) < len(srcs[0][0]):
1311 if len(abspfx) < len(srcs[0][0]):
1312 # A directory. Either the target path contains the last
1312 # A directory. Either the target path contains the last
1313 # component of the source path or it does not.
1313 # component of the source path or it does not.
1314 def evalpath(striplen):
1314 def evalpath(striplen):
1315 score = 0
1315 score = 0
1316 for s in srcs:
1316 for s in srcs:
1317 t = os.path.join(dest, util.localpath(s[0])[striplen:])
1317 t = os.path.join(dest, util.localpath(s[0])[striplen:])
1318 if os.path.lexists(t):
1318 if os.path.lexists(t):
1319 score += 1
1319 score += 1
1320 return score
1320 return score
1321
1321
1322 abspfx = util.localpath(abspfx)
1322 abspfx = util.localpath(abspfx)
1323 striplen = len(abspfx)
1323 striplen = len(abspfx)
1324 if striplen:
1324 if striplen:
1325 striplen += len(pycompat.ossep)
1325 striplen += len(pycompat.ossep)
1326 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
1326 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
1327 score = evalpath(striplen)
1327 score = evalpath(striplen)
1328 striplen1 = len(os.path.split(abspfx)[0])
1328 striplen1 = len(os.path.split(abspfx)[0])
1329 if striplen1:
1329 if striplen1:
1330 striplen1 += len(pycompat.ossep)
1330 striplen1 += len(pycompat.ossep)
1331 if evalpath(striplen1) > score:
1331 if evalpath(striplen1) > score:
1332 striplen = striplen1
1332 striplen = striplen1
1333 res = lambda p: os.path.join(dest,
1333 res = lambda p: os.path.join(dest,
1334 util.localpath(p)[striplen:])
1334 util.localpath(p)[striplen:])
1335 else:
1335 else:
1336 # a file
1336 # a file
1337 if destdirexists:
1337 if destdirexists:
1338 res = lambda p: os.path.join(dest,
1338 res = lambda p: os.path.join(dest,
1339 os.path.basename(util.localpath(p)))
1339 os.path.basename(util.localpath(p)))
1340 else:
1340 else:
1341 res = lambda p: dest
1341 res = lambda p: dest
1342 return res
1342 return res
1343
1343
1344 pats = scmutil.expandpats(pats)
1344 pats = scmutil.expandpats(pats)
1345 if not pats:
1345 if not pats:
1346 raise error.Abort(_('no source or destination specified'))
1346 raise error.Abort(_('no source or destination specified'))
1347 if len(pats) == 1:
1347 if len(pats) == 1:
1348 raise error.Abort(_('no destination specified'))
1348 raise error.Abort(_('no destination specified'))
1349 dest = pats.pop()
1349 dest = pats.pop()
1350 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
1350 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
1351 if not destdirexists:
1351 if not destdirexists:
1352 if len(pats) > 1 or matchmod.patkind(pats[0]):
1352 if len(pats) > 1 or matchmod.patkind(pats[0]):
1353 raise error.Abort(_('with multiple sources, destination must be an '
1353 raise error.Abort(_('with multiple sources, destination must be an '
1354 'existing directory'))
1354 'existing directory'))
1355 if util.endswithsep(dest):
1355 if util.endswithsep(dest):
1356 raise error.Abort(_('destination %s is not a directory') % dest)
1356 raise error.Abort(_('destination %s is not a directory') % dest)
1357
1357
1358 tfn = targetpathfn
1358 tfn = targetpathfn
1359 if after:
1359 if after:
1360 tfn = targetpathafterfn
1360 tfn = targetpathafterfn
1361 copylist = []
1361 copylist = []
1362 for pat in pats:
1362 for pat in pats:
1363 srcs = walkpat(pat)
1363 srcs = walkpat(pat)
1364 if not srcs:
1364 if not srcs:
1365 continue
1365 continue
1366 copylist.append((tfn(pat, dest, srcs), srcs))
1366 copylist.append((tfn(pat, dest, srcs), srcs))
1367 if not copylist:
1367 if not copylist:
1368 raise error.Abort(_('no files to copy'))
1368 raise error.Abort(_('no files to copy'))
1369
1369
1370 errors = 0
1370 errors = 0
1371 for targetpath, srcs in copylist:
1371 for targetpath, srcs in copylist:
1372 for abssrc, relsrc, exact in srcs:
1372 for abssrc, relsrc, exact in srcs:
1373 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
1373 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
1374 errors += 1
1374 errors += 1
1375
1375
1376 return errors != 0
1376 return errors != 0
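# A minimal usage sketch (the call sites below are assumptions, shown only
# for illustration): the copy and rename commands both funnel into this
# helper and differ only in the ``rename`` flag, roughly:
#
#     with repo.wlock():
#         copy(ui, repo, pats, opts)              # hg copy
#         copy(ui, repo, pats, opts, rename=True) # hg rename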
1377
1377
1378 ## facility to let extensions process additional data into an import patch
1379 # list of identifiers to be executed in order
1380 extrapreimport = [] # run before commit
1380 extrapreimport = [] # run before commit
1381 extrapostimport = [] # run after commit
1381 extrapostimport = [] # run after commit
1382 # mapping from identifier to actual import function
1382 # mapping from identifier to actual import function
1383 #
1383 #
1384 # 'preimport' functions are run before the commit is made and are provided the
1385 # following arguments:
1386 # - repo: the localrepository instance,
1387 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
1388 # - extra: the future extra dictionary of the changeset, please mutate it,
1389 # - opts: the import options.
1390 # XXX ideally, we would just pass a ctx ready to be computed, which would allow
1391 # mutation of the in-memory commit and more. Feel free to rework the code to get
1392 # there.
1393 extrapreimportmap = {}
1393 extrapreimportmap = {}
1394 # 'postimport' functions are run after the commit is made and are provided the
1395 # following argument:
1396 # - ctx: the changectx created by import.
1397 extrapostimportmap = {}
1397 extrapostimportmap = {}
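# A hedged sketch of how an extension might plug into these hooks; the
# identifier 'myextra' and fixextra() below are hypothetical names:
#
#     def fixextra(repo, patchdata, extra, opts):
#         # record the original node id in the new changeset's extra dict
#         if patchdata.get('nodeid'):
#             extra['imported-from'] = patchdata['nodeid']
#
#     extrapreimport.append('myextra')
#     extrapreimportmap['myextra'] = fixextra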
1398
1398
1399 def tryimportone(ui, repo, patchdata, parents, opts, msgs, updatefunc):
1399 def tryimportone(ui, repo, patchdata, parents, opts, msgs, updatefunc):
1400 """Utility function used by commands.import to import a single patch
1400 """Utility function used by commands.import to import a single patch
1401
1401
1402 This function is explicitly defined here to help the evolve extension to
1402 This function is explicitly defined here to help the evolve extension to
1403 wrap this part of the import logic.
1403 wrap this part of the import logic.
1404
1404
1405 The API is currently a bit ugly because it is a simple code translation from
1406 the import command. Feel free to make it better.
1407
1407
1408 :patchdata: a dictionary containing parsed patch data (such as from
1408 :patchdata: a dictionary containing parsed patch data (such as from
1409 ``patch.extract()``)
1409 ``patch.extract()``)
1410 :parents: nodes that will be the parents of the created commit
1411 :opts: the full dict of options passed to the import command
1412 :msgs: list to save the commit message to.
1413 (used in case we need to save it when failing)
1414 :updatefunc: a function that updates a repo to a given node
1415 updatefunc(<repo>, <node>)
1416 """
1416 """
1417 # avoid cycle context -> subrepo -> cmdutil
1417 # avoid cycle context -> subrepo -> cmdutil
1418 from . import context
1418 from . import context
1419
1419
1420 tmpname = patchdata.get('filename')
1420 tmpname = patchdata.get('filename')
1421 message = patchdata.get('message')
1421 message = patchdata.get('message')
1422 user = opts.get('user') or patchdata.get('user')
1422 user = opts.get('user') or patchdata.get('user')
1423 date = opts.get('date') or patchdata.get('date')
1423 date = opts.get('date') or patchdata.get('date')
1424 branch = patchdata.get('branch')
1424 branch = patchdata.get('branch')
1425 nodeid = patchdata.get('nodeid')
1425 nodeid = patchdata.get('nodeid')
1426 p1 = patchdata.get('p1')
1426 p1 = patchdata.get('p1')
1427 p2 = patchdata.get('p2')
1427 p2 = patchdata.get('p2')
1428
1428
1429 nocommit = opts.get('no_commit')
1429 nocommit = opts.get('no_commit')
1430 importbranch = opts.get('import_branch')
1430 importbranch = opts.get('import_branch')
1431 update = not opts.get('bypass')
1431 update = not opts.get('bypass')
1432 strip = opts["strip"]
1432 strip = opts["strip"]
1433 prefix = opts["prefix"]
1433 prefix = opts["prefix"]
1434 sim = float(opts.get('similarity') or 0)
1434 sim = float(opts.get('similarity') or 0)
1435
1435
1436 if not tmpname:
1436 if not tmpname:
1437 return None, None, False
1437 return None, None, False
1438
1438
1439 rejects = False
1439 rejects = False
1440
1440
1441 cmdline_message = logmessage(ui, opts)
1441 cmdline_message = logmessage(ui, opts)
1442 if cmdline_message:
1442 if cmdline_message:
1443 # pickup the cmdline msg
1443 # pickup the cmdline msg
1444 message = cmdline_message
1444 message = cmdline_message
1445 elif message:
1445 elif message:
1446 # pickup the patch msg
1446 # pickup the patch msg
1447 message = message.strip()
1447 message = message.strip()
1448 else:
1448 else:
1449 # launch the editor
1449 # launch the editor
1450 message = None
1450 message = None
1451 ui.debug('message:\n%s\n' % (message or ''))
1451 ui.debug('message:\n%s\n' % (message or ''))
1452
1452
1453 if len(parents) == 1:
1453 if len(parents) == 1:
1454 parents.append(repo[nullid])
1454 parents.append(repo[nullid])
1455 if opts.get('exact'):
1455 if opts.get('exact'):
1456 if not nodeid or not p1:
1456 if not nodeid or not p1:
1457 raise error.Abort(_('not a Mercurial patch'))
1457 raise error.Abort(_('not a Mercurial patch'))
1458 p1 = repo[p1]
1458 p1 = repo[p1]
1459 p2 = repo[p2 or nullid]
1459 p2 = repo[p2 or nullid]
1460 elif p2:
1460 elif p2:
1461 try:
1461 try:
1462 p1 = repo[p1]
1462 p1 = repo[p1]
1463 p2 = repo[p2]
1463 p2 = repo[p2]
1464 # Without any options, consider p2 only if the
1464 # Without any options, consider p2 only if the
1465 # patch is being applied on top of the recorded
1465 # patch is being applied on top of the recorded
1466 # first parent.
1466 # first parent.
1467 if p1 != parents[0]:
1467 if p1 != parents[0]:
1468 p1 = parents[0]
1468 p1 = parents[0]
1469 p2 = repo[nullid]
1469 p2 = repo[nullid]
1470 except error.RepoError:
1470 except error.RepoError:
1471 p1, p2 = parents
1471 p1, p2 = parents
1472 if p2.node() == nullid:
1472 if p2.node() == nullid:
1473 ui.warn(_("warning: import the patch as a normal revision\n"
1473 ui.warn(_("warning: import the patch as a normal revision\n"
1474 "(use --exact to import the patch as a merge)\n"))
1474 "(use --exact to import the patch as a merge)\n"))
1475 else:
1475 else:
1476 p1, p2 = parents
1476 p1, p2 = parents
1477
1477
1478 n = None
1478 n = None
1479 if update:
1479 if update:
1480 if p1 != parents[0]:
1480 if p1 != parents[0]:
1481 updatefunc(repo, p1.node())
1481 updatefunc(repo, p1.node())
1482 if p2 != parents[1]:
1482 if p2 != parents[1]:
1483 repo.setparents(p1.node(), p2.node())
1483 repo.setparents(p1.node(), p2.node())
1484
1484
1485 if opts.get('exact') or importbranch:
1485 if opts.get('exact') or importbranch:
1486 repo.dirstate.setbranch(branch or 'default')
1486 repo.dirstate.setbranch(branch or 'default')
1487
1487
1488 partial = opts.get('partial', False)
1488 partial = opts.get('partial', False)
1489 files = set()
1489 files = set()
1490 try:
1490 try:
1491 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
1491 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
1492 files=files, eolmode=None, similarity=sim / 100.0)
1492 files=files, eolmode=None, similarity=sim / 100.0)
1493 except error.PatchError as e:
1493 except error.PatchError as e:
1494 if not partial:
1494 if not partial:
1495 raise error.Abort(pycompat.bytestr(e))
1495 raise error.Abort(pycompat.bytestr(e))
1496 if partial:
1496 if partial:
1497 rejects = True
1497 rejects = True
1498
1498
1499 files = list(files)
1499 files = list(files)
1500 if nocommit:
1500 if nocommit:
1501 if message:
1501 if message:
1502 msgs.append(message)
1502 msgs.append(message)
1503 else:
1503 else:
1504 if opts.get('exact') or p2:
1504 if opts.get('exact') or p2:
1505 # If you got here, you either used --force and know what
1506 # you are doing, or used --exact or a merge patch while
1507 # being updated to its first parent.
1508 m = None
1508 m = None
1509 else:
1509 else:
1510 m = scmutil.matchfiles(repo, files or [])
1510 m = scmutil.matchfiles(repo, files or [])
1511 editform = mergeeditform(repo[None], 'import.normal')
1511 editform = mergeeditform(repo[None], 'import.normal')
1512 if opts.get('exact'):
1512 if opts.get('exact'):
1513 editor = None
1513 editor = None
1514 else:
1514 else:
1515 editor = getcommiteditor(editform=editform,
1515 editor = getcommiteditor(editform=editform,
1516 **pycompat.strkwargs(opts))
1516 **pycompat.strkwargs(opts))
1517 extra = {}
1517 extra = {}
1518 for idfunc in extrapreimport:
1518 for idfunc in extrapreimport:
1519 extrapreimportmap[idfunc](repo, patchdata, extra, opts)
1519 extrapreimportmap[idfunc](repo, patchdata, extra, opts)
1520 overrides = {}
1520 overrides = {}
1521 if partial:
1521 if partial:
1522 overrides[('ui', 'allowemptycommit')] = True
1522 overrides[('ui', 'allowemptycommit')] = True
1523 with repo.ui.configoverride(overrides, 'import'):
1523 with repo.ui.configoverride(overrides, 'import'):
1524 n = repo.commit(message, user,
1524 n = repo.commit(message, user,
1525 date, match=m,
1525 date, match=m,
1526 editor=editor, extra=extra)
1526 editor=editor, extra=extra)
1527 for idfunc in extrapostimport:
1527 for idfunc in extrapostimport:
1528 extrapostimportmap[idfunc](repo[n])
1528 extrapostimportmap[idfunc](repo[n])
1529 else:
1529 else:
1530 if opts.get('exact') or importbranch:
1530 if opts.get('exact') or importbranch:
1531 branch = branch or 'default'
1531 branch = branch or 'default'
1532 else:
1532 else:
1533 branch = p1.branch()
1533 branch = p1.branch()
1534 store = patch.filestore()
1534 store = patch.filestore()
1535 try:
1535 try:
1536 files = set()
1536 files = set()
1537 try:
1537 try:
1538 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
1538 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
1539 files, eolmode=None)
1539 files, eolmode=None)
1540 except error.PatchError as e:
1540 except error.PatchError as e:
1541 raise error.Abort(stringutil.forcebytestr(e))
1541 raise error.Abort(stringutil.forcebytestr(e))
1542 if opts.get('exact'):
1542 if opts.get('exact'):
1543 editor = None
1543 editor = None
1544 else:
1544 else:
1545 editor = getcommiteditor(editform='import.bypass')
1545 editor = getcommiteditor(editform='import.bypass')
1546 memctx = context.memctx(repo, (p1.node(), p2.node()),
1546 memctx = context.memctx(repo, (p1.node(), p2.node()),
1547 message,
1547 message,
1548 files=files,
1548 files=files,
1549 filectxfn=store,
1549 filectxfn=store,
1550 user=user,
1550 user=user,
1551 date=date,
1551 date=date,
1552 branch=branch,
1552 branch=branch,
1553 editor=editor)
1553 editor=editor)
1554 n = memctx.commit()
1554 n = memctx.commit()
1555 finally:
1555 finally:
1556 store.close()
1556 store.close()
1557 if opts.get('exact') and nocommit:
1557 if opts.get('exact') and nocommit:
1558 # --exact with --no-commit is still useful in that it does the merge
1559 # and branch bits
1560 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1560 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1561 elif opts.get('exact') and (not n or hex(n) != nodeid):
1561 elif opts.get('exact') and (not n or hex(n) != nodeid):
1562 raise error.Abort(_('patch is damaged or loses information'))
1562 raise error.Abort(_('patch is damaged or loses information'))
1563 msg = _('applied to working directory')
1563 msg = _('applied to working directory')
1564 if n:
1564 if n:
1565 # i18n: refers to a short changeset id
1565 # i18n: refers to a short changeset id
1566 msg = _('created %s') % short(n)
1566 msg = _('created %s') % short(n)
1567 return msg, n, rejects
1567 return msg, n, rejects
1568
1568
1569 # facility to let extensions include additional data in an exported patch
1569 # facility to let extensions include additional data in an exported patch
1570 # list of identifiers to be executed in order
1570 # list of identifiers to be executed in order
1571 extraexport = []
1571 extraexport = []
1572 # mapping from identifier to actual export function
1572 # mapping from identifier to actual export function
1573 # the function has to return a string to be added to the header or None
1574 # it is given two arguments (sequencenumber, changectx)
1575 extraexportmap = {}
1575 extraexportmap = {}
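# A hedged sketch of an extension adding a custom header line; 'topic' and
# topicheader() below are hypothetical names:
#
#     def topicheader(seqno, ctx):
#         topic = ctx.extra().get('topic')
#         return 'EXP-Topic %s' % topic if topic else None
#
#     extraexport.append('topic')
#     extraexportmap['topic'] = topicheader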
1576
1576
1577 def _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts):
1577 def _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts):
1578 node = scmutil.binnode(ctx)
1578 node = scmutil.binnode(ctx)
1579 parents = [p.node() for p in ctx.parents() if p]
1579 parents = [p.node() for p in ctx.parents() if p]
1580 branch = ctx.branch()
1580 branch = ctx.branch()
1581 if switch_parent:
1581 if switch_parent:
1582 parents.reverse()
1582 parents.reverse()
1583
1583
1584 if parents:
1584 if parents:
1585 prev = parents[0]
1585 prev = parents[0]
1586 else:
1586 else:
1587 prev = nullid
1587 prev = nullid
1588
1588
1589 fm.context(ctx=ctx)
1589 fm.context(ctx=ctx)
1590 fm.plain('# HG changeset patch\n')
1590 fm.plain('# HG changeset patch\n')
1591 fm.write('user', '# User %s\n', ctx.user())
1591 fm.write('user', '# User %s\n', ctx.user())
1592 fm.plain('# Date %d %d\n' % ctx.date())
1592 fm.plain('# Date %d %d\n' % ctx.date())
1593 fm.write('date', '# %s\n', fm.formatdate(ctx.date()))
1593 fm.write('date', '# %s\n', fm.formatdate(ctx.date()))
1594 fm.condwrite(branch and branch != 'default',
1594 fm.condwrite(branch and branch != 'default',
1595 'branch', '# Branch %s\n', branch)
1595 'branch', '# Branch %s\n', branch)
1596 fm.write('node', '# Node ID %s\n', hex(node))
1596 fm.write('node', '# Node ID %s\n', hex(node))
1597 fm.plain('# Parent %s\n' % hex(prev))
1597 fm.plain('# Parent %s\n' % hex(prev))
1598 if len(parents) > 1:
1598 if len(parents) > 1:
1599 fm.plain('# Parent %s\n' % hex(parents[1]))
1599 fm.plain('# Parent %s\n' % hex(parents[1]))
1600 fm.data(parents=fm.formatlist(pycompat.maplist(hex, parents), name='node'))
1600 fm.data(parents=fm.formatlist(pycompat.maplist(hex, parents), name='node'))
1601
1601
1602 # TODO: redesign extraexportmap function to support formatter
1602 # TODO: redesign extraexportmap function to support formatter
1603 for headerid in extraexport:
1603 for headerid in extraexport:
1604 header = extraexportmap[headerid](seqno, ctx)
1604 header = extraexportmap[headerid](seqno, ctx)
1605 if header is not None:
1605 if header is not None:
1606 fm.plain('# %s\n' % header)
1606 fm.plain('# %s\n' % header)
1607
1607
1608 fm.write('desc', '%s\n', ctx.description().rstrip())
1608 fm.write('desc', '%s\n', ctx.description().rstrip())
1609 fm.plain('\n')
1609 fm.plain('\n')
1610
1610
1611 if fm.isplain():
1611 if fm.isplain():
1612 chunkiter = patch.diffui(repo, prev, node, match, opts=diffopts)
1612 chunkiter = patch.diffui(repo, prev, node, match, opts=diffopts)
1613 for chunk, label in chunkiter:
1613 for chunk, label in chunkiter:
1614 fm.plain(chunk, label=label)
1614 fm.plain(chunk, label=label)
1615 else:
1615 else:
1616 chunkiter = patch.diff(repo, prev, node, match, opts=diffopts)
1616 chunkiter = patch.diff(repo, prev, node, match, opts=diffopts)
1617 # TODO: make it structured?
1617 # TODO: make it structured?
1618 fm.data(diff=b''.join(chunkiter))
1618 fm.data(diff=b''.join(chunkiter))
1619
1619
1620 def _exportfile(repo, revs, fm, dest, switch_parent, diffopts, match):
1620 def _exportfile(repo, revs, fm, dest, switch_parent, diffopts, match):
1621 """Export changesets to stdout or a single file"""
1621 """Export changesets to stdout or a single file"""
1622 for seqno, rev in enumerate(revs, 1):
1622 for seqno, rev in enumerate(revs, 1):
1623 ctx = repo[rev]
1623 ctx = repo[rev]
1624 if not dest.startswith('<'):
1624 if not dest.startswith('<'):
1625 repo.ui.note("%s\n" % dest)
1625 repo.ui.note("%s\n" % dest)
1626 fm.startitem()
1626 fm.startitem()
1627 _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts)
1627 _exportsingle(repo, ctx, fm, match, switch_parent, seqno, diffopts)
1628
1628
1629 def _exportfntemplate(repo, revs, basefm, fntemplate, switch_parent, diffopts,
1629 def _exportfntemplate(repo, revs, basefm, fntemplate, switch_parent, diffopts,
1630 match):
1630 match):
1631 """Export changesets to possibly multiple files"""
1631 """Export changesets to possibly multiple files"""
1632 total = len(revs)
1632 total = len(revs)
1633 revwidth = max(len(str(rev)) for rev in revs)
1633 revwidth = max(len(str(rev)) for rev in revs)
1634 filemap = util.sortdict() # filename: [(seqno, rev), ...]
1634 filemap = util.sortdict() # filename: [(seqno, rev), ...]
1635
1635
1636 for seqno, rev in enumerate(revs, 1):
1636 for seqno, rev in enumerate(revs, 1):
1637 ctx = repo[rev]
1637 ctx = repo[rev]
1638 dest = makefilename(ctx, fntemplate,
1638 dest = makefilename(ctx, fntemplate,
1639 total=total, seqno=seqno, revwidth=revwidth)
1639 total=total, seqno=seqno, revwidth=revwidth)
1640 filemap.setdefault(dest, []).append((seqno, rev))
1640 filemap.setdefault(dest, []).append((seqno, rev))
1641
1641
1642 for dest in filemap:
1642 for dest in filemap:
1643 with formatter.maybereopen(basefm, dest) as fm:
1643 with formatter.maybereopen(basefm, dest) as fm:
1644 repo.ui.note("%s\n" % dest)
1644 repo.ui.note("%s\n" % dest)
1645 for seqno, rev in filemap[dest]:
1645 for seqno, rev in filemap[dest]:
1646 fm.startitem()
1646 fm.startitem()
1647 ctx = repo[rev]
1647 ctx = repo[rev]
1648 _exportsingle(repo, ctx, fm, match, switch_parent, seqno,
1648 _exportsingle(repo, ctx, fm, match, switch_parent, seqno,
1649 diffopts)
1649 diffopts)
1650
1650
1651 def export(repo, revs, basefm, fntemplate='hg-%h.patch', switch_parent=False,
1651 def export(repo, revs, basefm, fntemplate='hg-%h.patch', switch_parent=False,
1652 opts=None, match=None):
1652 opts=None, match=None):
1653 '''export changesets as hg patches
1653 '''export changesets as hg patches
1654
1654
1655 Args:
1655 Args:
1656 repo: The repository from which we're exporting revisions.
1656 repo: The repository from which we're exporting revisions.
1657 revs: A list of revisions to export as revision numbers.
1657 revs: A list of revisions to export as revision numbers.
1658 basefm: A formatter to which patches should be written.
1658 basefm: A formatter to which patches should be written.
1659 fntemplate: An optional string to use for generating patch file names.
1659 fntemplate: An optional string to use for generating patch file names.
1660 switch_parent: If True, show diffs against second parent when not nullid.
1660 switch_parent: If True, show diffs against second parent when not nullid.
1661 Default is False, which always shows the diff against p1.
1662 opts: diff options to use for generating the patch.
1662 opts: diff options to use for generating the patch.
1663 match: If specified, only export changes to files matching this matcher.
1663 match: If specified, only export changes to files matching this matcher.
1664
1664
1665 Returns:
1665 Returns:
1666 Nothing.
1666 Nothing.
1667
1667
1668 Side Effect:
1668 Side Effect:
1669 "HG Changeset Patch" data is emitted to one of the following
1669 "HG Changeset Patch" data is emitted to one of the following
1670 destinations:
1670 destinations:
1671 fntemplate specified: Each rev is written to a unique file named using
1671 fntemplate specified: Each rev is written to a unique file named using
1672 the given template.
1672 the given template.
1673 Otherwise: All revs will be written to basefm.
1673 Otherwise: All revs will be written to basefm.
1674 '''
1674 '''
1675 scmutil.prefetchfiles(repo, revs, match)
1675 scmutil.prefetchfiles(repo, revs, match)
1676
1676
1677 if not fntemplate:
1677 if not fntemplate:
1678 _exportfile(repo, revs, basefm, '<unnamed>', switch_parent, opts, match)
1678 _exportfile(repo, revs, basefm, '<unnamed>', switch_parent, opts, match)
1679 else:
1679 else:
1680 _exportfntemplate(repo, revs, basefm, fntemplate, switch_parent, opts,
1680 _exportfntemplate(repo, revs, basefm, fntemplate, switch_parent, opts,
1681 match)
1681 match)
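# A minimal, hedged usage sketch (assumes ui, repo and a list of revision
# numbers named revs are in scope; not taken from the original source):
#
#     with repo.ui.formatter('export', {}) as basefm:
#         export(repo, revs, basefm, fntemplate='',
#                switch_parent=False, opts=patch.diffallopts(repo.ui))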
1682
1682
1683 def exportfile(repo, revs, fp, switch_parent=False, opts=None, match=None):
1683 def exportfile(repo, revs, fp, switch_parent=False, opts=None, match=None):
1684 """Export changesets to the given file stream"""
1684 """Export changesets to the given file stream"""
1685 scmutil.prefetchfiles(repo, revs, match)
1685 scmutil.prefetchfiles(repo, revs, match)
1686
1686
1687 dest = getattr(fp, 'name', '<unnamed>')
1687 dest = getattr(fp, 'name', '<unnamed>')
1688 with formatter.formatter(repo.ui, fp, 'export', {}) as fm:
1688 with formatter.formatter(repo.ui, fp, 'export', {}) as fm:
1689 _exportfile(repo, revs, fm, dest, switch_parent, opts, match)
1689 _exportfile(repo, revs, fm, dest, switch_parent, opts, match)
1690
1690
1691 def showmarker(fm, marker, index=None):
1691 def showmarker(fm, marker, index=None):
1692 """utility function to display obsolescence marker in a readable way
1692 """utility function to display obsolescence marker in a readable way
1693
1693
1694 To be used by debug function."""
1694 To be used by debug function."""
1695 if index is not None:
1695 if index is not None:
1696 fm.write('index', '%i ', index)
1696 fm.write('index', '%i ', index)
1697 fm.write('prednode', '%s ', hex(marker.prednode()))
1697 fm.write('prednode', '%s ', hex(marker.prednode()))
1698 succs = marker.succnodes()
1698 succs = marker.succnodes()
1699 fm.condwrite(succs, 'succnodes', '%s ',
1699 fm.condwrite(succs, 'succnodes', '%s ',
1700 fm.formatlist(map(hex, succs), name='node'))
1700 fm.formatlist(map(hex, succs), name='node'))
1701 fm.write('flag', '%X ', marker.flags())
1701 fm.write('flag', '%X ', marker.flags())
1702 parents = marker.parentnodes()
1702 parents = marker.parentnodes()
1703 if parents is not None:
1703 if parents is not None:
1704 fm.write('parentnodes', '{%s} ',
1704 fm.write('parentnodes', '{%s} ',
1705 fm.formatlist(map(hex, parents), name='node', sep=', '))
1705 fm.formatlist(map(hex, parents), name='node', sep=', '))
1706 fm.write('date', '(%s) ', fm.formatdate(marker.date()))
1706 fm.write('date', '(%s) ', fm.formatdate(marker.date()))
1707 meta = marker.metadata().copy()
1707 meta = marker.metadata().copy()
1708 meta.pop('date', None)
1708 meta.pop('date', None)
1709 smeta = pycompat.rapply(pycompat.maybebytestr, meta)
1709 smeta = pycompat.rapply(pycompat.maybebytestr, meta)
1710 fm.write('metadata', '{%s}', fm.formatdict(smeta, fmt='%r: %r', sep=', '))
1710 fm.write('metadata', '{%s}', fm.formatdict(smeta, fmt='%r: %r', sep=', '))
1711 fm.plain('\n')
1711 fm.plain('\n')
1712
1712
1713 def finddate(ui, repo, date):
1713 def finddate(ui, repo, date):
1714 """Find the tipmost changeset that matches the given date spec"""
1714 """Find the tipmost changeset that matches the given date spec"""
1715
1715
1716 df = dateutil.matchdate(date)
1716 df = dateutil.matchdate(date)
1717 m = scmutil.matchall(repo)
1717 m = scmutil.matchall(repo)
1718 results = {}
1718 results = {}
1719
1719
1720 def prep(ctx, fns):
1720 def prep(ctx, fns):
1721 d = ctx.date()
1721 d = ctx.date()
1722 if df(d[0]):
1722 if df(d[0]):
1723 results[ctx.rev()] = d
1723 results[ctx.rev()] = d
1724
1724
1725 for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
1725 for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
1726 rev = ctx.rev()
1726 rev = ctx.rev()
1727 if rev in results:
1727 if rev in results:
1728 ui.status(_("found revision %s from %s\n") %
1728 ui.status(_("found revision %s from %s\n") %
1729 (rev, dateutil.datestr(results[rev])))
1729 (rev, dateutil.datestr(results[rev])))
1730 return '%d' % rev
1730 return '%d' % rev
1731
1731
1732 raise error.Abort(_("revision matching date not found"))
1732 raise error.Abort(_("revision matching date not found"))
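# Hedged usage illustration: the date spec is anything dateutil.matchdate
# understands, for example
#
#     finddate(ui, repo, '>2018-05-01')  # tipmost changeset after that date
#     finddate(ui, repo, 'may 2018')     # tipmost changeset in May 2018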
1733
1733
1734 def increasingwindows(windowsize=8, sizelimit=512):
1734 def increasingwindows(windowsize=8, sizelimit=512):
1735 while True:
1735 while True:
1736 yield windowsize
1736 yield windowsize
1737 if windowsize < sizelimit:
1737 if windowsize < sizelimit:
1738 windowsize *= 2
1738 windowsize *= 2
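# For illustration, consumed lazily this yields 8, 16, 32, ... capped at 512:
#
#     sizes = increasingwindows()
#     [next(sizes) for _ in range(9)]
#     # -> [8, 16, 32, 64, 128, 256, 512, 512, 512]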
1739
1739
1740 def _walkrevs(repo, opts):
1740 def _walkrevs(repo, opts):
1741 # Default --rev value depends on --follow but --follow behavior
1741 # Default --rev value depends on --follow but --follow behavior
1742 # depends on revisions resolved from --rev...
1742 # depends on revisions resolved from --rev...
1743 follow = opts.get('follow') or opts.get('follow_first')
1743 follow = opts.get('follow') or opts.get('follow_first')
1744 if opts.get('rev'):
1744 if opts.get('rev'):
1745 revs = scmutil.revrange(repo, opts['rev'])
1745 revs = scmutil.revrange(repo, opts['rev'])
1746 elif follow and repo.dirstate.p1() == nullid:
1746 elif follow and repo.dirstate.p1() == nullid:
1747 revs = smartset.baseset()
1747 revs = smartset.baseset()
1748 elif follow:
1748 elif follow:
1749 revs = repo.revs('reverse(:.)')
1749 revs = repo.revs('reverse(:.)')
1750 else:
1750 else:
1751 revs = smartset.spanset(repo)
1751 revs = smartset.spanset(repo)
1752 revs.reverse()
1752 revs.reverse()
1753 return revs
1753 return revs
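# A hedged summary of the resolution above: an explicit --rev wins; --follow
# with an empty working-directory parent walks nothing; --follow otherwise
# walks 'reverse(:.)'; and with neither option every revision is walked in
# reverse order.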
1754
1754
1755 class FileWalkError(Exception):
1755 class FileWalkError(Exception):
1756 pass
1756 pass
1757
1757
1758 def walkfilerevs(repo, match, follow, revs, fncache):
1758 def walkfilerevs(repo, match, follow, revs, fncache):
1759 '''Walks the file history for the matched files.
1759 '''Walks the file history for the matched files.
1760
1760
1761 Returns the changeset revs that are involved in the file history.
1761 Returns the changeset revs that are involved in the file history.
1762
1762
1763 Throws FileWalkError if the file history can't be walked using
1763 Throws FileWalkError if the file history can't be walked using
1764 filelogs alone.
1764 filelogs alone.
1765 '''
1765 '''
1766 wanted = set()
1766 wanted = set()
1767 copies = []
1767 copies = []
1768 minrev, maxrev = min(revs), max(revs)
1768 minrev, maxrev = min(revs), max(revs)
1769 def filerevgen(filelog, last):
1769 def filerevgen(filelog, last):
1770 """
1770 """
1771 Only files, no patterns. Check the history of each file.
1771 Only files, no patterns. Check the history of each file.
1772
1772
1773 Examines filelog entries within the minrev..maxrev linkrev range and
1774 returns an iterator yielding (linkrev, parentlinkrevs, copied)
1775 tuples in backwards (newest-first) order.
1776 """
1776 """
1777 cl_count = len(repo)
1777 cl_count = len(repo)
1778 revs = []
1778 revs = []
1779 for j in pycompat.xrange(0, last + 1):
1779 for j in pycompat.xrange(0, last + 1):
1780 linkrev = filelog.linkrev(j)
1780 linkrev = filelog.linkrev(j)
1781 if linkrev < minrev:
1781 if linkrev < minrev:
1782 continue
1782 continue
1783 # only yield revs for which we have the changelog entry; missing ones
1784 # can happen while doing "hg log" during a pull or commit
1785 if linkrev >= cl_count:
1785 if linkrev >= cl_count:
1786 break
1786 break
1787
1787
1788 parentlinkrevs = []
1788 parentlinkrevs = []
1789 for p in filelog.parentrevs(j):
1789 for p in filelog.parentrevs(j):
1790 if p != nullrev:
1790 if p != nullrev:
1791 parentlinkrevs.append(filelog.linkrev(p))
1791 parentlinkrevs.append(filelog.linkrev(p))
1792 n = filelog.node(j)
1792 n = filelog.node(j)
1793 revs.append((linkrev, parentlinkrevs,
1793 revs.append((linkrev, parentlinkrevs,
1794 follow and filelog.renamed(n)))
1794 follow and filelog.renamed(n)))
1795
1795
1796 return reversed(revs)
1796 return reversed(revs)
1797 def iterfiles():
1797 def iterfiles():
1798 pctx = repo['.']
1798 pctx = repo['.']
1799 for filename in match.files():
1799 for filename in match.files():
1800 if follow:
1800 if follow:
1801 if filename not in pctx:
1801 if filename not in pctx:
1802 raise error.Abort(_('cannot follow file not in parent '
1802 raise error.Abort(_('cannot follow file not in parent '
1803 'revision: "%s"') % filename)
1803 'revision: "%s"') % filename)
1804 yield filename, pctx[filename].filenode()
1804 yield filename, pctx[filename].filenode()
1805 else:
1805 else:
1806 yield filename, None
1806 yield filename, None
1807 for filename_node in copies:
1807 for filename_node in copies:
1808 yield filename_node
1808 yield filename_node
1809
1809
1810 for file_, node in iterfiles():
1810 for file_, node in iterfiles():
1811 filelog = repo.file(file_)
1811 filelog = repo.file(file_)
1812 if not len(filelog):
1812 if not len(filelog):
1813 if node is None:
1813 if node is None:
1814 # A zero count may be a directory or deleted file, so
1814 # A zero count may be a directory or deleted file, so
1815 # try to find matching entries on the slow path.
1815 # try to find matching entries on the slow path.
1816 if follow:
1816 if follow:
1817 raise error.Abort(
1817 raise error.Abort(
1818 _('cannot follow nonexistent file: "%s"') % file_)
1818 _('cannot follow nonexistent file: "%s"') % file_)
1819 raise FileWalkError("Cannot walk via filelog")
1819 raise FileWalkError("Cannot walk via filelog")
1820 else:
1820 else:
1821 continue
1821 continue
1822
1822
1823 if node is None:
1823 if node is None:
1824 last = len(filelog) - 1
1824 last = len(filelog) - 1
1825 else:
1825 else:
1826 last = filelog.rev(node)
1826 last = filelog.rev(node)
1827
1827
1828 # keep track of all ancestors of the file
1828 # keep track of all ancestors of the file
1829 ancestors = {filelog.linkrev(last)}
1829 ancestors = {filelog.linkrev(last)}
1830
1830
1831 # iterate from latest to oldest revision
1831 # iterate from latest to oldest revision
1832 for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
1832 for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
1833 if not follow:
1833 if not follow:
1834 if rev > maxrev:
1834 if rev > maxrev:
1835 continue
1835 continue
1836 else:
1836 else:
1837 # Note that last might not be the first interesting
1837 # Note that last might not be the first interesting
1838 # rev to us:
1838 # rev to us:
1839 # if the file has been changed after maxrev, we'll
1839 # if the file has been changed after maxrev, we'll
1840 # have linkrev(last) > maxrev, and we still need
1840 # have linkrev(last) > maxrev, and we still need
1841 # to explore the file graph
1841 # to explore the file graph
1842 if rev not in ancestors:
1842 if rev not in ancestors:
1843 continue
1843 continue
1844 # XXX insert 1327 fix here
1844 # XXX insert 1327 fix here
1845 if flparentlinkrevs:
1845 if flparentlinkrevs:
1846 ancestors.update(flparentlinkrevs)
1846 ancestors.update(flparentlinkrevs)
1847
1847
1848 fncache.setdefault(rev, []).append(file_)
1848 fncache.setdefault(rev, []).append(file_)
1849 wanted.add(rev)
1849 wanted.add(rev)
1850 if copied:
1850 if copied:
1851 copies.append(copied)
1851 copies.append(copied)
1852
1852
1853 return wanted
1853 return wanted
1854
1854
1855 class _followfilter(object):
1855 class _followfilter(object):
1856 def __init__(self, repo, onlyfirst=False):
1856 def __init__(self, repo, onlyfirst=False):
1857 self.repo = repo
1857 self.repo = repo
1858 self.startrev = nullrev
1858 self.startrev = nullrev
1859 self.roots = set()
1859 self.roots = set()
1860 self.onlyfirst = onlyfirst
1860 self.onlyfirst = onlyfirst
1861
1861
1862 def match(self, rev):
1862 def match(self, rev):
1863 def realparents(rev):
1863 def realparents(rev):
1864 if self.onlyfirst:
1864 if self.onlyfirst:
1865 return self.repo.changelog.parentrevs(rev)[0:1]
1865 return self.repo.changelog.parentrevs(rev)[0:1]
1866 else:
1866 else:
1867 return filter(lambda x: x != nullrev,
1867 return filter(lambda x: x != nullrev,
1868 self.repo.changelog.parentrevs(rev))
1868 self.repo.changelog.parentrevs(rev))
1869
1869
1870 if self.startrev == nullrev:
1870 if self.startrev == nullrev:
1871 self.startrev = rev
1871 self.startrev = rev
1872 return True
1872 return True
1873
1873
1874 if rev > self.startrev:
1874 if rev > self.startrev:
1875 # forward: all descendants
1875 # forward: all descendants
1876 if not self.roots:
1876 if not self.roots:
1877 self.roots.add(self.startrev)
1877 self.roots.add(self.startrev)
1878 for parent in realparents(rev):
1878 for parent in realparents(rev):
1879 if parent in self.roots:
1879 if parent in self.roots:
1880 self.roots.add(rev)
1880 self.roots.add(rev)
1881 return True
1881 return True
1882 else:
1882 else:
1883 # backwards: all parents
1883 # backwards: all parents
1884 if not self.roots:
1884 if not self.roots:
1885 self.roots.update(realparents(self.startrev))
1885 self.roots.update(realparents(self.startrev))
1886 if rev in self.roots:
1886 if rev in self.roots:
1887 self.roots.remove(rev)
1887 self.roots.remove(rev)
1888 self.roots.update(realparents(rev))
1888 self.roots.update(realparents(rev))
1889 return True
1889 return True
1890
1890
1891 return False
1891 return False
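# Hedged illustration of the matcher above: the first rev passed to match()
# becomes the starting point; after that, revs walked forward match when a
# parent already matched (descendants), while revs walked backward match
# when they are among the accumulated parents (ancestors).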
1892
1892
1893 def walkchangerevs(repo, match, opts, prepare):
1893 def walkchangerevs(repo, match, opts, prepare):
1894 '''Iterate over files and the revs in which they changed.
1894 '''Iterate over files and the revs in which they changed.
1895
1895
1896 Callers most commonly need to iterate backwards over the history
1896 Callers most commonly need to iterate backwards over the history
1897 in which they are interested. Doing so has awful (quadratic-looking)
1897 in which they are interested. Doing so has awful (quadratic-looking)
1898 performance, so we use iterators in a "windowed" way.
1898 performance, so we use iterators in a "windowed" way.
1899
1899
1900 We walk a window of revisions in the desired order. Within the
1900 We walk a window of revisions in the desired order. Within the
1901 window, we first walk forwards to gather data, then in the desired
1901 window, we first walk forwards to gather data, then in the desired
1902 order (usually backwards) to display it.
1902 order (usually backwards) to display it.
1903
1903
1904 This function returns an iterator yielding contexts. Before
1904 This function returns an iterator yielding contexts. Before
1905 yielding each context, the iterator will first call the prepare
1905 yielding each context, the iterator will first call the prepare
1906 function on each context in the window in forward order.'''
1906 function on each context in the window in forward order.'''
1907
1907
1908 allfiles = opts.get('all_files')
1908 allfiles = opts.get('all_files')
1909 follow = opts.get('follow') or opts.get('follow_first')
1909 follow = opts.get('follow') or opts.get('follow_first')
1910 revs = _walkrevs(repo, opts)
1910 revs = _walkrevs(repo, opts)
1911 if not revs:
1911 if not revs:
1912 return []
1912 return []
1913 wanted = set()
1913 wanted = set()
1914 slowpath = match.anypats() or (not match.always() and opts.get('removed'))
1914 slowpath = match.anypats() or (not match.always() and opts.get('removed'))
1915 fncache = {}
1915 fncache = {}
1916 change = repo.__getitem__
1916 change = repo.__getitem__
1917
1917
1918 # First step is to fill wanted, the set of revisions that we want to yield.
1918 # First step is to fill wanted, the set of revisions that we want to yield.
1919 # When it does not induce extra cost, we also fill fncache for revisions in
1919 # When it does not induce extra cost, we also fill fncache for revisions in
1920 # wanted: a cache of filenames that were changed (ctx.files()) and that
1920 # wanted: a cache of filenames that were changed (ctx.files()) and that
1921 # match the file filtering conditions.
1921 # match the file filtering conditions.
1922
1922
1923 if match.always() or allfiles:
1923 if match.always() or allfiles:
1924 # No files, no patterns. Display all revs.
1924 # No files, no patterns. Display all revs.
1925 wanted = revs
1925 wanted = revs
1926 elif not slowpath:
1926 elif not slowpath:
1927 # We only have to read through the filelog to find wanted revisions
1927 # We only have to read through the filelog to find wanted revisions
1928
1928
1929 try:
1929 try:
1930 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1930 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1931 except FileWalkError:
1931 except FileWalkError:
1932 slowpath = True
1932 slowpath = True
1933
1933
1934 # We decided to fall back to the slowpath because at least one
1934 # We decided to fall back to the slowpath because at least one
1935 # of the paths was not a file. Check to see if at least one of them
1935 # of the paths was not a file. Check to see if at least one of them
1936 # existed in history, otherwise simply return
1936 # existed in history, otherwise simply return
1937 for path in match.files():
1937 for path in match.files():
1938 if path == '.' or path in repo.store:
1938 if path == '.' or path in repo.store:
1939 break
1939 break
1940 else:
1940 else:
1941 return []
1941 return []
1942
1942
1943 if slowpath:
1943 if slowpath:
1944 # We have to read the changelog to match filenames against
1944 # We have to read the changelog to match filenames against
1945 # changed files
1945 # changed files
1946
1946
1947 if follow:
1947 if follow:
1948 raise error.Abort(_('can only follow copies/renames for explicit '
1948 raise error.Abort(_('can only follow copies/renames for explicit '
1949 'filenames'))
1949 'filenames'))
1950
1950
1951 # The slow path checks files modified in every changeset.
1951 # The slow path checks files modified in every changeset.
1952 # This is really slow on large repos, so compute the set lazily.
1952 # This is really slow on large repos, so compute the set lazily.
1953 class lazywantedset(object):
1953 class lazywantedset(object):
1954 def __init__(self):
1954 def __init__(self):
1955 self.set = set()
1955 self.set = set()
1956 self.revs = set(revs)
1956 self.revs = set(revs)
1957
1957
1958 # No need to worry about locality here because it will be accessed
1958 # No need to worry about locality here because it will be accessed
1959 # in the same order as the increasing window below.
1959 # in the same order as the increasing window below.
1960 def __contains__(self, value):
1960 def __contains__(self, value):
1961 if value in self.set:
1961 if value in self.set:
1962 return True
1962 return True
1963 elif not value in self.revs:
1963 elif not value in self.revs:
1964 return False
1964 return False
1965 else:
1965 else:
1966 self.revs.discard(value)
1966 self.revs.discard(value)
1967 ctx = change(value)
1967 ctx = change(value)
1968 matches = [f for f in ctx.files() if match(f)]
1968 matches = [f for f in ctx.files() if match(f)]
1969 if matches:
1969 if matches:
1970 fncache[value] = matches
1970 fncache[value] = matches
1971 self.set.add(value)
1971 self.set.add(value)
1972 return True
1972 return True
1973 return False
1973 return False
1974
1974
1975 def discard(self, value):
1975 def discard(self, value):
1976 self.revs.discard(value)
1976 self.revs.discard(value)
1977 self.set.discard(value)
1977 self.set.discard(value)
1978
1978
1979 wanted = lazywantedset()
1979 wanted = lazywantedset()
1980
1980
1981 # it might be worthwhile to do this in the iterator if the rev range
1981 # it might be worthwhile to do this in the iterator if the rev range
1982 # is descending and the prune args are all within that range
1982 # is descending and the prune args are all within that range
1983 for rev in opts.get('prune', ()):
1983 for rev in opts.get('prune', ()):
1984 rev = repo[rev].rev()
1984 rev = repo[rev].rev()
1985 ff = _followfilter(repo)
1985 ff = _followfilter(repo)
1986 stop = min(revs[0], revs[-1])
1986 stop = min(revs[0], revs[-1])
1987 for x in pycompat.xrange(rev, stop - 1, -1):
1987 for x in pycompat.xrange(rev, stop - 1, -1):
1988 if ff.match(x):
1988 if ff.match(x):
1989 wanted = wanted - [x]
1989 wanted = wanted - [x]
1990
1990
1991 # Now that wanted is correctly initialized, we can iterate over the
1991 # Now that wanted is correctly initialized, we can iterate over the
1992 # revision range, yielding only revisions in wanted.
1992 # revision range, yielding only revisions in wanted.
1993 def iterate():
1993 def iterate():
1994 if follow and match.always():
1994 if follow and match.always():
1995 ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
1995 ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
1996 def want(rev):
1996 def want(rev):
1997 return ff.match(rev) and rev in wanted
1997 return ff.match(rev) and rev in wanted
1998 else:
1998 else:
1999 def want(rev):
1999 def want(rev):
2000 return rev in wanted
2000 return rev in wanted
2001
2001
2002 it = iter(revs)
2002 it = iter(revs)
2003 stopiteration = False
2003 stopiteration = False
2004 for windowsize in increasingwindows():
2004 for windowsize in increasingwindows():
2005 nrevs = []
2005 nrevs = []
2006 for i in pycompat.xrange(windowsize):
2006 for i in pycompat.xrange(windowsize):
2007 rev = next(it, None)
2007 rev = next(it, None)
2008 if rev is None:
2008 if rev is None:
2009 stopiteration = True
2009 stopiteration = True
2010 break
2010 break
2011 elif want(rev):
2011 elif want(rev):
2012 nrevs.append(rev)
2012 nrevs.append(rev)
2013 for rev in sorted(nrevs):
2013 for rev in sorted(nrevs):
2014 fns = fncache.get(rev)
2014 fns = fncache.get(rev)
2015 ctx = change(rev)
2015 ctx = change(rev)
2016 if not fns:
2016 if not fns:
2017 def fns_generator():
2017 def fns_generator():
2018 if allfiles:
2018 if allfiles:
2019 fiter = iter(ctx)
2019 fiter = iter(ctx)
2020 else:
2020 else:
2021 fiter = ctx.files()
2021 fiter = ctx.files()
2022 for f in fiter:
2022 for f in fiter:
2023 if match(f):
2023 if match(f):
2024 yield f
2024 yield f
2025 fns = fns_generator()
2025 fns = fns_generator()
2026 prepare(ctx, fns)
2026 prepare(ctx, fns)
2027 for rev in nrevs:
2027 for rev in nrevs:
2028 yield change(rev)
2028 yield change(rev)
2029
2029
2030 if stopiteration:
2030 if stopiteration:
2031 break
2031 break
2032
2032
2033 return iterate()
2033 return iterate()
2034
2034
2035 def add(ui, repo, match, prefix, explicitonly, **opts):
2035 def add(ui, repo, match, prefix, explicitonly, **opts):
2036 join = lambda f: os.path.join(prefix, f)
2036 join = lambda f: os.path.join(prefix, f)
2037 bad = []
2037 bad = []
2038
2038
2039 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2039 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2040 names = []
2040 names = []
2041 wctx = repo[None]
2041 wctx = repo[None]
2042 cca = None
2042 cca = None
2043 abort, warn = scmutil.checkportabilityalert(ui)
2043 abort, warn = scmutil.checkportabilityalert(ui)
2044 if abort or warn:
2044 if abort or warn:
2045 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2045 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2046
2046
2047 match = repo.narrowmatch(match, includeexact=True)
2047 match = repo.narrowmatch(match, includeexact=True)
2048 badmatch = matchmod.badmatch(match, badfn)
2048 badmatch = matchmod.badmatch(match, badfn)
2049 dirstate = repo.dirstate
2049 dirstate = repo.dirstate
2050 # We don't want to just call wctx.walk here, since it would return a lot of
2051 # clean files, which we aren't interested in, and doing so takes time.
2052 for f in sorted(dirstate.walk(badmatch, subrepos=sorted(wctx.substate),
2052 for f in sorted(dirstate.walk(badmatch, subrepos=sorted(wctx.substate),
2053 unknown=True, ignored=False, full=False)):
2053 unknown=True, ignored=False, full=False)):
2054 exact = match.exact(f)
2054 exact = match.exact(f)
2055 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2055 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2056 if cca:
2056 if cca:
2057 cca(f)
2057 cca(f)
2058 names.append(f)
2058 names.append(f)
2059 if ui.verbose or not exact:
2059 if ui.verbose or not exact:
2060 ui.status(_('adding %s\n') % match.rel(f),
2060 ui.status(_('adding %s\n') % match.rel(f),
2061 label='addremove.added')
2061 label='addremove.added')
2062
2062
2063 for subpath in sorted(wctx.substate):
2063 for subpath in sorted(wctx.substate):
2064 sub = wctx.sub(subpath)
2064 sub = wctx.sub(subpath)
2065 try:
2065 try:
2066 submatch = matchmod.subdirmatcher(subpath, match)
2066 submatch = matchmod.subdirmatcher(subpath, match)
2067 if opts.get(r'subrepos'):
2067 if opts.get(r'subrepos'):
2068 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2068 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2069 else:
2069 else:
2070 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2070 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2071 except error.LookupError:
2071 except error.LookupError:
2072 ui.status(_("skipping missing subrepository: %s\n")
2072 ui.status(_("skipping missing subrepository: %s\n")
2073 % join(subpath))
2073 % join(subpath))
2074
2074
2075 if not opts.get(r'dry_run'):
2075 if not opts.get(r'dry_run'):
2076 rejected = wctx.add(names, prefix)
2076 rejected = wctx.add(names, prefix)
2077 bad.extend(f for f in rejected if f in match.files())
2077 bad.extend(f for f in rejected if f in match.files())
2078 return bad
2078 return bad
2079
2079
2080 def addwebdirpath(repo, serverpath, webconf):
2080 def addwebdirpath(repo, serverpath, webconf):
2081 webconf[serverpath] = repo.root
2081 webconf[serverpath] = repo.root
2082 repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))
2082 repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))
2083
2083
2084 for r in repo.revs('filelog("path:.hgsub")'):
2084 for r in repo.revs('filelog("path:.hgsub")'):
2085 ctx = repo[r]
2085 ctx = repo[r]
2086 for subpath in ctx.substate:
2086 for subpath in ctx.substate:
2087 ctx.sub(subpath).addwebdirpath(serverpath, webconf)
2087 ctx.sub(subpath).addwebdirpath(serverpath, webconf)
2088
2088
2089 def forget(ui, repo, match, prefix, explicitonly, dryrun, interactive):
2089 def forget(ui, repo, match, prefix, explicitonly, dryrun, interactive):
2090 if dryrun and interactive:
2090 if dryrun and interactive:
2091 raise error.Abort(_("cannot specify both --dry-run and --interactive"))
2091 raise error.Abort(_("cannot specify both --dry-run and --interactive"))
2092 join = lambda f: os.path.join(prefix, f)
2092 join = lambda f: os.path.join(prefix, f)
2093 bad = []
2093 bad = []
2094 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2094 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2095 wctx = repo[None]
2095 wctx = repo[None]
2096 forgot = []
2096 forgot = []
2097
2097
2098 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2098 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2099 forget = sorted(s.modified + s.added + s.deleted + s.clean)
2099 forget = sorted(s.modified + s.added + s.deleted + s.clean)
2100 if explicitonly:
2100 if explicitonly:
2101 forget = [f for f in forget if match.exact(f)]
2101 forget = [f for f in forget if match.exact(f)]
2102
2102
2103 for subpath in sorted(wctx.substate):
2103 for subpath in sorted(wctx.substate):
2104 sub = wctx.sub(subpath)
2104 sub = wctx.sub(subpath)
2105 try:
2105 try:
2106 submatch = matchmod.subdirmatcher(subpath, match)
2106 submatch = matchmod.subdirmatcher(subpath, match)
2107 subbad, subforgot = sub.forget(submatch, prefix, dryrun=dryrun,
2107 subbad, subforgot = sub.forget(submatch, prefix, dryrun=dryrun,
2108 interactive=interactive)
2108 interactive=interactive)
2109 bad.extend([subpath + '/' + f for f in subbad])
2109 bad.extend([subpath + '/' + f for f in subbad])
2110 forgot.extend([subpath + '/' + f for f in subforgot])
2110 forgot.extend([subpath + '/' + f for f in subforgot])
2111 except error.LookupError:
2111 except error.LookupError:
2112 ui.status(_("skipping missing subrepository: %s\n")
2112 ui.status(_("skipping missing subrepository: %s\n")
2113 % join(subpath))
2113 % join(subpath))
2114
2114
2115 if not explicitonly:
2115 if not explicitonly:
2116 for f in match.files():
2116 for f in match.files():
2117 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2117 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2118 if f not in forgot:
2118 if f not in forgot:
2119 if repo.wvfs.exists(f):
2119 if repo.wvfs.exists(f):
2120 # Don't complain if the exact case match wasn't given.
2120 # Don't complain if the exact case match wasn't given.
2121 # But don't do this until after checking 'forgot', so
2121 # But don't do this until after checking 'forgot', so
2122 # that subrepo files aren't normalized, and this op is
2122 # that subrepo files aren't normalized, and this op is
2123 # purely from data cached by the status walk above.
2123 # purely from data cached by the status walk above.
2124 if repo.dirstate.normalize(f) in repo.dirstate:
2124 if repo.dirstate.normalize(f) in repo.dirstate:
2125 continue
2125 continue
2126 ui.warn(_('not removing %s: '
2126 ui.warn(_('not removing %s: '
2127 'file is already untracked\n')
2127 'file is already untracked\n')
2128 % match.rel(f))
2128 % match.rel(f))
2129 bad.append(f)
2129 bad.append(f)
2130
2130
2131 if interactive:
2131 if interactive:
2132 responses = _('[Ynsa?]'
2132 responses = _('[Ynsa?]'
2133 '$$ &Yes, forget this file'
2133 '$$ &Yes, forget this file'
2134 '$$ &No, skip this file'
2134 '$$ &No, skip this file'
2135 '$$ &Skip remaining files'
2135 '$$ &Skip remaining files'
2136 '$$ Include &all remaining files'
2136 '$$ Include &all remaining files'
2137 '$$ &? (display help)')
2137 '$$ &? (display help)')
2138 for filename in forget[:]:
2138 for filename in forget[:]:
2139 r = ui.promptchoice(_('forget %s %s') % (filename, responses))
2139 r = ui.promptchoice(_('forget %s %s') % (filename, responses))
2140 if r == 4: # ?
2140 if r == 4: # ?
2141 while r == 4:
2141 while r == 4:
2142 for c, t in ui.extractchoices(responses)[1]:
2142 for c, t in ui.extractchoices(responses)[1]:
2143 ui.write('%s - %s\n' % (c, encoding.lower(t)))
2143 ui.write('%s - %s\n' % (c, encoding.lower(t)))
2144 r = ui.promptchoice(_('forget %s %s') % (filename,
2144 r = ui.promptchoice(_('forget %s %s') % (filename,
2145 responses))
2145 responses))
2146 if r == 0: # yes
2146 if r == 0: # yes
2147 continue
2147 continue
2148 elif r == 1: # no
2148 elif r == 1: # no
2149 forget.remove(filename)
2149 forget.remove(filename)
2150 elif r == 2: # Skip
2150 elif r == 2: # Skip
2151 fnindex = forget.index(filename)
2151 fnindex = forget.index(filename)
2152 del forget[fnindex:]
2152 del forget[fnindex:]
2153 break
2153 break
2154 elif r == 3: # All
2154 elif r == 3: # All
2155 break
2155 break
2156
2156
2157 for f in forget:
2157 for f in forget:
2158 if ui.verbose or not match.exact(f) or interactive:
2158 if ui.verbose or not match.exact(f) or interactive:
2159 ui.status(_('removing %s\n') % match.rel(f),
2159 ui.status(_('removing %s\n') % match.rel(f),
2160 label='addremove.removed')
2160 label='addremove.removed')
2161
2161
2162 if not dryrun:
2162 if not dryrun:
2163 rejected = wctx.forget(forget, prefix)
2163 rejected = wctx.forget(forget, prefix)
2164 bad.extend(f for f in rejected if f in match.files())
2164 bad.extend(f for f in rejected if f in match.files())
2165 forgot.extend(f for f in forget if f not in rejected)
2165 forgot.extend(f for f in forget if f not in rejected)
2166 return bad, forgot
2166 return bad, forgot
2167
2167
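# Editor's note: a standalone sketch of the interactive prompt loop used in
# forget() above, using plain input() instead of Mercurial's ui.promptchoice.
# The choice letters mirror the [Ynsa?] responses; everything else here is an
# illustrative assumption.
def choose_to_forget(candidates):
    chosen = []
    remaining = list(candidates)
    while remaining:
        name = remaining.pop(0)
        answer = ''
        while answer not in ('y', 'n', 's', 'a'):
            answer = input('forget %s [Ynsa?] ' % name).strip().lower() or 'y'
            if answer == '?':
                print('y=yes  n=no  s=skip remaining  a=include all remaining')
                answer = ''
        if answer == 'y':
            chosen.append(name)
        elif answer == 'a':
            chosen.append(name)
            chosen.extend(remaining)
            break
        elif answer == 's':
            break
        # 'n' skips just this file
    return chosen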
2168 def files(ui, ctx, m, fm, fmt, subrepos):
2168 def files(ui, ctx, m, fm, fmt, subrepos):
2169 ret = 1
2169 ret = 1
2170
2170
2171 needsfctx = ui.verbose or {'size', 'flags'} & fm.datahint()
2171 needsfctx = ui.verbose or {'size', 'flags'} & fm.datahint()
2172 for f in ctx.matches(m):
2172 for f in ctx.matches(m):
2173 fm.startitem()
2173 fm.startitem()
2174 fm.context(ctx=ctx)
2174 fm.context(ctx=ctx)
2175 if needsfctx:
2175 if needsfctx:
2176 fc = ctx[f]
2176 fc = ctx[f]
2177 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2177 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2178 fm.data(path=f)
2178 fm.data(path=f)
2179 fm.plain(fmt % m.rel(f))
2179 fm.plain(fmt % m.rel(f))
2180 ret = 0
2180 ret = 0
2181
2181
2182 for subpath in sorted(ctx.substate):
2182 for subpath in sorted(ctx.substate):
2183 submatch = matchmod.subdirmatcher(subpath, m)
2183 submatch = matchmod.subdirmatcher(subpath, m)
2184 if (subrepos or m.exact(subpath) or any(submatch.files())):
2184 if (subrepos or m.exact(subpath) or any(submatch.files())):
2185 sub = ctx.sub(subpath)
2185 sub = ctx.sub(subpath)
2186 try:
2186 try:
2187 recurse = m.exact(subpath) or subrepos
2187 recurse = m.exact(subpath) or subrepos
2188 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
2188 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
2189 ret = 0
2189 ret = 0
2190 except error.LookupError:
2190 except error.LookupError:
2191 ui.status(_("skipping missing subrepository: %s\n")
2191 ui.status(_("skipping missing subrepository: %s\n")
2192 % m.abs(subpath))
2192 % m.abs(subpath))
2193
2193
2194 return ret
2194 return ret
2195
2195
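# Editor's note: a standalone sketch of the fm.datahint() optimization in
# files() above: expensive per-file data (size, flags) is only computed when
# the requested output fields actually need it. Plain os.stat stands in for
# the filectx lookup; the field names are illustrative assumptions.
import os

def list_files(paths, fields=('path',)):
    need_stat = bool({'size', 'flags'} & set(fields))
    rows = []
    for path in paths:
        row = {'path': path}
        if need_stat:
            st = os.stat(path)
            row['size'] = st.st_size
            row['flags'] = 'x' if os.access(path, os.X_OK) else ''
        rows.append(row)
    return rows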
2196 def remove(ui, repo, m, prefix, after, force, subrepos, dryrun, warnings=None):
2196 def remove(ui, repo, m, prefix, after, force, subrepos, dryrun, warnings=None):
2197 join = lambda f: os.path.join(prefix, f)
2197 join = lambda f: os.path.join(prefix, f)
2198 ret = 0
2198 ret = 0
2199 s = repo.status(match=m, clean=True)
2199 s = repo.status(match=m, clean=True)
2200 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2200 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2201
2201
2202 wctx = repo[None]
2202 wctx = repo[None]
2203
2203
2204 if warnings is None:
2204 if warnings is None:
2205 warnings = []
2205 warnings = []
2206 warn = True
2206 warn = True
2207 else:
2207 else:
2208 warn = False
2208 warn = False
2209
2209
2210 subs = sorted(wctx.substate)
2210 subs = sorted(wctx.substate)
2211 progress = ui.makeprogress(_('searching'), total=len(subs),
2211 progress = ui.makeprogress(_('searching'), total=len(subs),
2212 unit=_('subrepos'))
2212 unit=_('subrepos'))
2213 for subpath in subs:
2213 for subpath in subs:
2214 submatch = matchmod.subdirmatcher(subpath, m)
2214 submatch = matchmod.subdirmatcher(subpath, m)
2215 if subrepos or m.exact(subpath) or any(submatch.files()):
2215 if subrepos or m.exact(subpath) or any(submatch.files()):
2216 progress.increment()
2216 progress.increment()
2217 sub = wctx.sub(subpath)
2217 sub = wctx.sub(subpath)
2218 try:
2218 try:
2219 if sub.removefiles(submatch, prefix, after, force, subrepos,
2219 if sub.removefiles(submatch, prefix, after, force, subrepos,
2220 dryrun, warnings):
2220 dryrun, warnings):
2221 ret = 1
2221 ret = 1
2222 except error.LookupError:
2222 except error.LookupError:
2223 warnings.append(_("skipping missing subrepository: %s\n")
2223 warnings.append(_("skipping missing subrepository: %s\n")
2224 % join(subpath))
2224 % join(subpath))
2225 progress.complete()
2225 progress.complete()
2226
2226
2227 # warn about failure to delete explicit files/dirs
2227 # warn about failure to delete explicit files/dirs
2228 deleteddirs = util.dirs(deleted)
2228 deleteddirs = util.dirs(deleted)
2229 files = m.files()
2229 files = m.files()
2230 progress = ui.makeprogress(_('deleting'), total=len(files),
2230 progress = ui.makeprogress(_('deleting'), total=len(files),
2231 unit=_('files'))
2231 unit=_('files'))
2232 for f in files:
2232 for f in files:
2233 def insubrepo():
2233 def insubrepo():
2234 for subpath in wctx.substate:
2234 for subpath in wctx.substate:
2235 if f.startswith(subpath + '/'):
2235 if f.startswith(subpath + '/'):
2236 return True
2236 return True
2237 return False
2237 return False
2238
2238
2239 progress.increment()
2239 progress.increment()
2240 isdir = f in deleteddirs or wctx.hasdir(f)
2240 isdir = f in deleteddirs or wctx.hasdir(f)
2241 if (f in repo.dirstate or isdir or f == '.'
2241 if (f in repo.dirstate or isdir or f == '.'
2242 or insubrepo() or f in subs):
2242 or insubrepo() or f in subs):
2243 continue
2243 continue
2244
2244
2245 if repo.wvfs.exists(f):
2245 if repo.wvfs.exists(f):
2246 if repo.wvfs.isdir(f):
2246 if repo.wvfs.isdir(f):
2247 warnings.append(_('not removing %s: no tracked files\n')
2247 warnings.append(_('not removing %s: no tracked files\n')
2248 % m.rel(f))
2248 % m.rel(f))
2249 else:
2249 else:
2250 warnings.append(_('not removing %s: file is untracked\n')
2250 warnings.append(_('not removing %s: file is untracked\n')
2251 % m.rel(f))
2251 % m.rel(f))
2252 # missing files will generate a warning elsewhere
2252 # missing files will generate a warning elsewhere
2253 ret = 1
2253 ret = 1
2254 progress.complete()
2254 progress.complete()
2255
2255
2256 if force:
2256 if force:
2257 list = modified + deleted + clean + added
2257 list = modified + deleted + clean + added
2258 elif after:
2258 elif after:
2259 list = deleted
2259 list = deleted
2260 remaining = modified + added + clean
2260 remaining = modified + added + clean
2261 progress = ui.makeprogress(_('skipping'), total=len(remaining),
2261 progress = ui.makeprogress(_('skipping'), total=len(remaining),
2262 unit=_('files'))
2262 unit=_('files'))
2263 for f in remaining:
2263 for f in remaining:
2264 progress.increment()
2264 progress.increment()
2265 if ui.verbose or (f in files):
2265 if ui.verbose or (f in files):
2266 warnings.append(_('not removing %s: file still exists\n')
2266 warnings.append(_('not removing %s: file still exists\n')
2267 % m.rel(f))
2267 % m.rel(f))
2268 ret = 1
2268 ret = 1
2269 progress.complete()
2269 progress.complete()
2270 else:
2270 else:
2271 list = deleted + clean
2271 list = deleted + clean
2272 progress = ui.makeprogress(_('skipping'),
2272 progress = ui.makeprogress(_('skipping'),
2273 total=(len(modified) + len(added)),
2273 total=(len(modified) + len(added)),
2274 unit=_('files'))
2274 unit=_('files'))
2275 for f in modified:
2275 for f in modified:
2276 progress.increment()
2276 progress.increment()
2277 warnings.append(_('not removing %s: file is modified (use -f'
2277 warnings.append(_('not removing %s: file is modified (use -f'
2278 ' to force removal)\n') % m.rel(f))
2278 ' to force removal)\n') % m.rel(f))
2279 ret = 1
2279 ret = 1
2280 for f in added:
2280 for f in added:
2281 progress.increment()
2281 progress.increment()
2282 warnings.append(_("not removing %s: file has been marked for add"
2282 warnings.append(_("not removing %s: file has been marked for add"
2283 " (use 'hg forget' to undo add)\n") % m.rel(f))
2283 " (use 'hg forget' to undo add)\n") % m.rel(f))
2284 ret = 1
2284 ret = 1
2285 progress.complete()
2285 progress.complete()
2286
2286
2287 list = sorted(list)
2287 list = sorted(list)
2288 progress = ui.makeprogress(_('deleting'), total=len(list),
2288 progress = ui.makeprogress(_('deleting'), total=len(list),
2289 unit=_('files'))
2289 unit=_('files'))
2290 for f in list:
2290 for f in list:
2291 if ui.verbose or not m.exact(f):
2291 if ui.verbose or not m.exact(f):
2292 progress.increment()
2292 progress.increment()
2293 ui.status(_('removing %s\n') % m.rel(f),
2293 ui.status(_('removing %s\n') % m.rel(f),
2294 label='addremove.removed')
2294 label='addremove.removed')
2295 progress.complete()
2295 progress.complete()
2296
2296
2297 if not dryrun:
2297 if not dryrun:
2298 with repo.wlock():
2298 with repo.wlock():
2299 if not after:
2299 if not after:
2300 for f in list:
2300 for f in list:
2301 if f in added:
2301 if f in added:
2302 continue # we never unlink added files on remove
2302 continue # we never unlink added files on remove
2303 rmdir = repo.ui.configbool('experimental',
2303 rmdir = repo.ui.configbool('experimental',
2304 'removeemptydirs')
2304 'removeemptydirs')
2305 repo.wvfs.unlinkpath(f, ignoremissing=True, rmdir=rmdir)
2305 repo.wvfs.unlinkpath(f, ignoremissing=True, rmdir=rmdir)
2306 repo[None].forget(list)
2306 repo[None].forget(list)
2307
2307
2308 if warn:
2308 if warn:
2309 for warning in warnings:
2309 for warning in warnings:
2310 ui.warn(warning)
2310 ui.warn(warning)
2311
2311
2312 return ret
2312 return ret
2313
2313
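# Editor's note: a standalone sketch of the bucket selection in remove()
# above: which status categories are actually deleted and which only produce
# "not removing" warnings, depending on --force / --after. Inputs are plain
# lists of paths; the real code additionally handles subrepos, progress and
# exact-match filtering.
def select_for_removal(modified, added, deleted, clean, force=False, after=False):
    warnings = []
    if force:
        chosen = modified + deleted + clean + added
    elif after:
        chosen = deleted
        warnings += ['not removing %s: file still exists' % f
                     for f in modified + added + clean]
    else:
        chosen = deleted + clean
        warnings += ['not removing %s: file is modified (use -f to force removal)' % f
                     for f in modified]
        warnings += ["not removing %s: file has been marked for add "
                     "(use 'hg forget' to undo add)" % f for f in added]
    return sorted(chosen), warnings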
2314 def _updatecatformatter(fm, ctx, matcher, path, decode):
2314 def _updatecatformatter(fm, ctx, matcher, path, decode):
2315 """Hook for adding data to the formatter used by ``hg cat``.
2315 """Hook for adding data to the formatter used by ``hg cat``.
2316
2316
2317 Extensions (e.g., lfs) can wrap this to inject keywords/data, but must call
2317 Extensions (e.g., lfs) can wrap this to inject keywords/data, but must call
2318 this method first."""
2318 this method first."""
2319 data = ctx[path].data()
2319 data = ctx[path].data()
2320 if decode:
2320 if decode:
2321 data = ctx.repo().wwritedata(path, data)
2321 data = ctx.repo().wwritedata(path, data)
2322 fm.startitem()
2322 fm.startitem()
2323 fm.context(ctx=ctx)
2323 fm.context(ctx=ctx)
2324 fm.write('data', '%s', data)
2324 fm.write('data', '%s', data)
2325 fm.data(path=path)
2325 fm.data(path=path)
2326
2326
2327 def cat(ui, repo, ctx, matcher, basefm, fntemplate, prefix, **opts):
2327 def cat(ui, repo, ctx, matcher, basefm, fntemplate, prefix, **opts):
2328 err = 1
2328 err = 1
2329 opts = pycompat.byteskwargs(opts)
2329 opts = pycompat.byteskwargs(opts)
2330
2330
2331 def write(path):
2331 def write(path):
2332 filename = None
2332 filename = None
2333 if fntemplate:
2333 if fntemplate:
2334 filename = makefilename(ctx, fntemplate,
2334 filename = makefilename(ctx, fntemplate,
2335 pathname=os.path.join(prefix, path))
2335 pathname=os.path.join(prefix, path))
2336 # attempt to create the directory if it does not already exist
2336 # attempt to create the directory if it does not already exist
2337 try:
2337 try:
2338 os.makedirs(os.path.dirname(filename))
2338 os.makedirs(os.path.dirname(filename))
2339 except OSError:
2339 except OSError:
2340 pass
2340 pass
2341 with formatter.maybereopen(basefm, filename) as fm:
2341 with formatter.maybereopen(basefm, filename) as fm:
2342 _updatecatformatter(fm, ctx, matcher, path, opts.get('decode'))
2342 _updatecatformatter(fm, ctx, matcher, path, opts.get('decode'))
2343
2343
2344 # Automation often uses hg cat on single files, so special case it
2344 # Automation often uses hg cat on single files, so special case it
2345 # for performance to avoid the cost of parsing the manifest.
2345 # for performance to avoid the cost of parsing the manifest.
2346 if len(matcher.files()) == 1 and not matcher.anypats():
2346 if len(matcher.files()) == 1 and not matcher.anypats():
2347 file = matcher.files()[0]
2347 file = matcher.files()[0]
2348 mfl = repo.manifestlog
2348 mfl = repo.manifestlog
2349 mfnode = ctx.manifestnode()
2349 mfnode = ctx.manifestnode()
2350 try:
2350 try:
2351 if mfnode and mfl[mfnode].find(file)[0]:
2351 if mfnode and mfl[mfnode].find(file)[0]:
2352 scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
2352 scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
2353 write(file)
2353 write(file)
2354 return 0
2354 return 0
2355 except KeyError:
2355 except KeyError:
2356 pass
2356 pass
2357
2357
2358 scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
2358 scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
2359
2359
2360 for abs in ctx.walk(matcher):
2360 for abs in ctx.walk(matcher):
2361 write(abs)
2361 write(abs)
2362 err = 0
2362 err = 0
2363
2363
2364 for subpath in sorted(ctx.substate):
2364 for subpath in sorted(ctx.substate):
2365 sub = ctx.sub(subpath)
2365 sub = ctx.sub(subpath)
2366 try:
2366 try:
2367 submatch = matchmod.subdirmatcher(subpath, matcher)
2367 submatch = matchmod.subdirmatcher(subpath, matcher)
2368
2368
2369 if not sub.cat(submatch, basefm, fntemplate,
2369 if not sub.cat(submatch, basefm, fntemplate,
2370 os.path.join(prefix, sub._path),
2370 os.path.join(prefix, sub._path),
2371 **pycompat.strkwargs(opts)):
2371 **pycompat.strkwargs(opts)):
2372 err = 0
2372 err = 0
2373 except error.RepoLookupError:
2373 except error.RepoLookupError:
2374 ui.status(_("skipping missing subrepository: %s\n")
2374 ui.status(_("skipping missing subrepository: %s\n")
2375 % os.path.join(prefix, subpath))
2375 % os.path.join(prefix, subpath))
2376
2376
2377 return err
2377 return err
2378
2378
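# Editor's note: a standalone sketch of the single-file fast path in cat()
# above: when exactly one literal filename is requested, it is looked up
# directly instead of walking every matching file. `lookup`, `walk` and
# `write` are assumed callables for illustration, not Mercurial APIs.
def cat_paths(patterns, lookup, walk, write):
    if len(patterns) == 1 and not any(ch in patterns[0] for ch in '*?['):
        data = lookup(patterns[0])
        if data is not None:
            write(patterns[0], data)
            return 0
    err = 1
    for path, data in walk(patterns):
        write(path, data)
        err = 0
    return err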
2379 def commit(ui, repo, commitfunc, pats, opts):
2379 def commit(ui, repo, commitfunc, pats, opts):
2380 '''commit the specified files or all outstanding changes'''
2380 '''commit the specified files or all outstanding changes'''
2381 date = opts.get('date')
2381 date = opts.get('date')
2382 if date:
2382 if date:
2383 opts['date'] = dateutil.parsedate(date)
2383 opts['date'] = dateutil.parsedate(date)
2384 message = logmessage(ui, opts)
2384 message = logmessage(ui, opts)
2385 matcher = scmutil.match(repo[None], pats, opts)
2385 matcher = scmutil.match(repo[None], pats, opts)
2386
2386
2387 dsguard = None
2387 dsguard = None
2388 # extract addremove carefully -- this function can be called from a command
2388 # extract addremove carefully -- this function can be called from a command
2389 # that doesn't support addremove
2389 # that doesn't support addremove
2390 if opts.get('addremove'):
2390 if opts.get('addremove'):
2391 dsguard = dirstateguard.dirstateguard(repo, 'commit')
2391 dsguard = dirstateguard.dirstateguard(repo, 'commit')
2392 with dsguard or util.nullcontextmanager():
2392 with dsguard or util.nullcontextmanager():
2393 if dsguard:
2393 if dsguard:
2394 if scmutil.addremove(repo, matcher, "", opts) != 0:
2394 if scmutil.addremove(repo, matcher, "", opts) != 0:
2395 raise error.Abort(
2395 raise error.Abort(
2396 _("failed to mark all new/missing files as added/removed"))
2396 _("failed to mark all new/missing files as added/removed"))
2397
2397
2398 return commitfunc(ui, repo, message, matcher, opts)
2398 return commitfunc(ui, repo, message, matcher, opts)
2399
2399
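# Editor's note: a standalone sketch of the optional-guard pattern used in
# commit() above ("with dsguard or util.nullcontextmanager()"): enter a real
# context manager only when one is needed, otherwise a no-op one.
# contextlib.nullcontext stands in for util.nullcontextmanager here.
import contextlib

def run_with_optional_guard(make_guard, need_guard, work):
    guard = make_guard() if need_guard else None
    with guard or contextlib.nullcontext():
        return work()

# Example: run_with_optional_guard(threading.Lock, need_guard=True, work=func)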
2400 def samefile(f, ctx1, ctx2):
2400 def samefile(f, ctx1, ctx2):
2401 if f in ctx1.manifest():
2401 if f in ctx1.manifest():
2402 a = ctx1.filectx(f)
2402 a = ctx1.filectx(f)
2403 if f in ctx2.manifest():
2403 if f in ctx2.manifest():
2404 b = ctx2.filectx(f)
2404 b = ctx2.filectx(f)
2405 return (not a.cmp(b)
2405 return (not a.cmp(b)
2406 and a.flags() == b.flags())
2406 and a.flags() == b.flags())
2407 else:
2407 else:
2408 return False
2408 return False
2409 else:
2409 else:
2410 return f not in ctx2.manifest()
2410 return f not in ctx2.manifest()
2411
2411
2412 def amend(ui, repo, old, extra, pats, opts):
2412 def amend(ui, repo, old, extra, pats, opts):
2413 # avoid cycle context -> subrepo -> cmdutil
2413 # avoid cycle context -> subrepo -> cmdutil
2414 from . import context
2414 from . import context
2415
2415
2416 # amend will reuse the existing user if not specified, but the obsolete
2416 # amend will reuse the existing user if not specified, but the obsolete
2417 # marker creation requires that the current user's name is specified.
2417 # marker creation requires that the current user's name is specified.
2418 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2418 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2419 ui.username() # raise exception if username not set
2419 ui.username() # raise exception if username not set
2420
2420
2421 ui.note(_('amending changeset %s\n') % old)
2421 ui.note(_('amending changeset %s\n') % old)
2422 base = old.p1()
2422 base = old.p1()
2423
2423
2424 with repo.wlock(), repo.lock(), repo.transaction('amend'):
2424 with repo.wlock(), repo.lock(), repo.transaction('amend'):
2425 # Participating changesets:
2425 # Participating changesets:
2426 #
2426 #
2427 # wctx o - workingctx that contains changes from working copy
2427 # wctx o - workingctx that contains changes from working copy
2428 # | to go into amending commit
2428 # | to go into amending commit
2429 # |
2429 # |
2430 # old o - changeset to amend
2430 # old o - changeset to amend
2431 # |
2431 # |
2432 # base o - first parent of the changeset to amend
2432 # base o - first parent of the changeset to amend
2433 wctx = repo[None]
2433 wctx = repo[None]
2434
2434
2435 # Copy to avoid mutating input
2435 # Copy to avoid mutating input
2436 extra = extra.copy()
2436 extra = extra.copy()
2437 # Update extra dict from amended commit (e.g. to preserve graft
2437 # Update extra dict from amended commit (e.g. to preserve graft
2438 # source)
2438 # source)
2439 extra.update(old.extra())
2439 extra.update(old.extra())
2440
2440
2441 # Also update it from the wctx
2441 # Also update it from the wctx
2442 extra.update(wctx.extra())
2442 extra.update(wctx.extra())
2443
2443
2444 user = opts.get('user') or old.user()
2444 user = opts.get('user') or old.user()
2445 date = opts.get('date') or old.date()
2445 date = opts.get('date') or old.date()
2446
2446
2447 # Parse the date to allow comparison between date and old.date()
2447 # Parse the date to allow comparison between date and old.date()
2448 date = dateutil.parsedate(date)
2448 date = dateutil.parsedate(date)
2449
2449
2450 if len(old.parents()) > 1:
2450 if len(old.parents()) > 1:
2451 # ctx.files() isn't reliable for merges, so fall back to the
2451 # ctx.files() isn't reliable for merges, so fall back to the
2452 # slower repo.status() method
2452 # slower repo.status() method
2453 files = set([fn for st in base.status(old)[:3]
2453 files = set([fn for st in base.status(old)[:3]
2454 for fn in st])
2454 for fn in st])
2455 else:
2455 else:
2456 files = set(old.files())
2456 files = set(old.files())
2457
2457
2458 # add/remove the files to the working copy if the "addremove" option
2458 # add/remove the files to the working copy if the "addremove" option
2459 # was specified.
2459 # was specified.
2460 matcher = scmutil.match(wctx, pats, opts)
2460 matcher = scmutil.match(wctx, pats, opts)
2461 if (opts.get('addremove')
2461 if (opts.get('addremove')
2462 and scmutil.addremove(repo, matcher, "", opts)):
2462 and scmutil.addremove(repo, matcher, "", opts)):
2463 raise error.Abort(
2463 raise error.Abort(
2464 _("failed to mark all new/missing files as added/removed"))
2464 _("failed to mark all new/missing files as added/removed"))
2465
2465
2466 # Check subrepos. This depends on in-place wctx._status update in
2466 # Check subrepos. This depends on in-place wctx._status update in
2467 # subrepo.precommit(). To minimize the risk of this hack, we do
2467 # subrepo.precommit(). To minimize the risk of this hack, we do
2468 # nothing if .hgsub does not exist.
2468 # nothing if .hgsub does not exist.
2469 if '.hgsub' in wctx or '.hgsub' in old:
2469 if '.hgsub' in wctx or '.hgsub' in old:
2470 subs, commitsubs, newsubstate = subrepoutil.precommit(
2470 subs, commitsubs, newsubstate = subrepoutil.precommit(
2471 ui, wctx, wctx._status, matcher)
2471 ui, wctx, wctx._status, matcher)
2472 # amend should abort if commitsubrepos is enabled
2472 # amend should abort if commitsubrepos is enabled
2473 assert not commitsubs
2473 assert not commitsubs
2474 if subs:
2474 if subs:
2475 subrepoutil.writestate(repo, newsubstate)
2475 subrepoutil.writestate(repo, newsubstate)
2476
2476
2477 ms = mergemod.mergestate.read(repo)
2477 ms = mergemod.mergestate.read(repo)
2478 mergeutil.checkunresolved(ms)
2478 mergeutil.checkunresolved(ms)
2479
2479
2480 filestoamend = set(f for f in wctx.files() if matcher(f))
2480 filestoamend = set(f for f in wctx.files() if matcher(f))
2481
2481
2482 changes = (len(filestoamend) > 0)
2482 changes = (len(filestoamend) > 0)
2483 if changes:
2483 if changes:
2484 # Recompute copies (avoid recording a -> b -> a)
2484 # Recompute copies (avoid recording a -> b -> a)
2485 copied = copies.pathcopies(base, wctx, matcher)
2485 copied = copies.pathcopies(base, wctx, matcher)
2486 if old.p2:
2486 if old.p2:
2487 copied.update(copies.pathcopies(old.p2(), wctx, matcher))
2487 copied.update(copies.pathcopies(old.p2(), wctx, matcher))
2488
2488
2489 # Prune files which were reverted by the updates: if old
2489 # Prune files which were reverted by the updates: if old
2490 # introduced file X and the file was renamed in the working
2490 # introduced file X and the file was renamed in the working
2491 # copy, then those two files are the same and
2491 # copy, then those two files are the same and
2492 # we can discard X from our list of files. Likewise if X
2492 # we can discard X from our list of files. Likewise if X
2493 # was removed, it's no longer relevant. If X is missing (aka
2493 # was removed, it's no longer relevant. If X is missing (aka
2494 # deleted), old X must be preserved.
2494 # deleted), old X must be preserved.
2495 files.update(filestoamend)
2495 files.update(filestoamend)
2496 files = [f for f in files if (not samefile(f, wctx, base)
2496 files = [f for f in files if (not samefile(f, wctx, base)
2497 or f in wctx.deleted())]
2497 or f in wctx.deleted())]
2498
2498
2499 def filectxfn(repo, ctx_, path):
2499 def filectxfn(repo, ctx_, path):
2500 try:
2500 try:
2501 # If the file being considered is not amongst the files
2501 # If the file being considered is not amongst the files
2502 # to be amended, we should return the file context from the
2502 # to be amended, we should return the file context from the
2503 # old changeset. This avoids issues when only some files in
2503 # old changeset. This avoids issues when only some files in
2504 # the working copy are being amended but there are also
2504 # the working copy are being amended but there are also
2505 # changes to other files from the old changeset.
2505 # changes to other files from the old changeset.
2506 if path not in filestoamend:
2506 if path not in filestoamend:
2507 return old.filectx(path)
2507 return old.filectx(path)
2508
2508
2509 # Return None for removed files.
2509 # Return None for removed files.
2510 if path in wctx.removed():
2510 if path in wctx.removed():
2511 return None
2511 return None
2512
2512
2513 fctx = wctx[path]
2513 fctx = wctx[path]
2514 flags = fctx.flags()
2514 flags = fctx.flags()
2515 mctx = context.memfilectx(repo, ctx_,
2515 mctx = context.memfilectx(repo, ctx_,
2516 fctx.path(), fctx.data(),
2516 fctx.path(), fctx.data(),
2517 islink='l' in flags,
2517 islink='l' in flags,
2518 isexec='x' in flags,
2518 isexec='x' in flags,
2519 copied=copied.get(path))
2519 copied=copied.get(path))
2520 return mctx
2520 return mctx
2521 except KeyError:
2521 except KeyError:
2522 return None
2522 return None
2523 else:
2523 else:
2524 ui.note(_('copying changeset %s to %s\n') % (old, base))
2524 ui.note(_('copying changeset %s to %s\n') % (old, base))
2525
2525
2526 # Use version of files as in the old cset
2526 # Use version of files as in the old cset
2527 def filectxfn(repo, ctx_, path):
2527 def filectxfn(repo, ctx_, path):
2528 try:
2528 try:
2529 return old.filectx(path)
2529 return old.filectx(path)
2530 except KeyError:
2530 except KeyError:
2531 return None
2531 return None
2532
2532
2533 # See if we got a message from -m or -l, if not, open the editor with
2533 # See if we got a message from -m or -l, if not, open the editor with
2534 # the message of the changeset to amend.
2534 # the message of the changeset to amend.
2535 message = logmessage(ui, opts)
2535 message = logmessage(ui, opts)
2536
2536
2537 editform = mergeeditform(old, 'commit.amend')
2537 editform = mergeeditform(old, 'commit.amend')
2538 editor = getcommiteditor(editform=editform,
2538 editor = getcommiteditor(editform=editform,
2539 **pycompat.strkwargs(opts))
2539 **pycompat.strkwargs(opts))
2540
2540
2541 if not message:
2541 if not message:
2542 editor = getcommiteditor(edit=True, editform=editform)
2542 editor = getcommiteditor(edit=True, editform=editform)
2543 message = old.description()
2543 message = old.description()
2544
2544
2545 pureextra = extra.copy()
2545 pureextra = extra.copy()
2546 extra['amend_source'] = old.hex()
2546 extra['amend_source'] = old.hex()
2547
2547
2548 new = context.memctx(repo,
2548 new = context.memctx(repo,
2549 parents=[base.node(), old.p2().node()],
2549 parents=[base.node(), old.p2().node()],
2550 text=message,
2550 text=message,
2551 files=files,
2551 files=files,
2552 filectxfn=filectxfn,
2552 filectxfn=filectxfn,
2553 user=user,
2553 user=user,
2554 date=date,
2554 date=date,
2555 extra=extra,
2555 extra=extra,
2556 editor=editor)
2556 editor=editor)
2557
2557
2558 newdesc = changelog.stripdesc(new.description())
2558 newdesc = changelog.stripdesc(new.description())
2559 if ((not changes)
2559 if ((not changes)
2560 and newdesc == old.description()
2560 and newdesc == old.description()
2561 and user == old.user()
2561 and user == old.user()
2562 and date == old.date()
2562 and date == old.date()
2563 and pureextra == old.extra()):
2563 and pureextra == old.extra()):
2564 # nothing changed. continuing here would create a new node
2564 # nothing changed. continuing here would create a new node
2565 # anyway because of the amend_source noise.
2565 # anyway because of the amend_source noise.
2566 #
2566 #
2567 # This is not what we expect from amend.
2567 # This is not what we expect from amend.
2568 return old.node()
2568 return old.node()
2569
2569
2570 commitphase = None
2570 commitphase = None
2571 if opts.get('secret'):
2571 if opts.get('secret'):
2572 commitphase = phases.secret
2572 commitphase = phases.secret
2573 newid = repo.commitctx(new)
2573 newid = repo.commitctx(new)
2574
2574
2575 # Reroute the working copy parent to the new changeset
2575 # Reroute the working copy parent to the new changeset
2576 repo.setparents(newid, nullid)
2576 repo.setparents(newid, nullid)
2577 mapping = {old.node(): (newid,)}
2577 mapping = {old.node(): (newid,)}
2578 obsmetadata = None
2578 obsmetadata = None
2579 if opts.get('note'):
2579 if opts.get('note'):
2580 obsmetadata = {'note': encoding.fromlocal(opts['note'])}
2580 obsmetadata = {'note': encoding.fromlocal(opts['note'])}
2581 backup = ui.configbool('ui', 'history-editing-backup')
2581 backup = ui.configbool('ui', 'history-editing-backup')
2582 scmutil.cleanupnodes(repo, mapping, 'amend', metadata=obsmetadata,
2582 scmutil.cleanupnodes(repo, mapping, 'amend', metadata=obsmetadata,
2583 fixphase=True, targetphase=commitphase,
2583 fixphase=True, targetphase=commitphase,
2584 backup=backup)
2584 backup=backup)
2585
2585
2586 # Fixing the dirstate because localrepo.commitctx does not update
2586 # Fixing the dirstate because localrepo.commitctx does not update
2587 # it. This is rather convenient because we did not need to update
2587 # it. This is rather convenient because we did not need to update
2588 # the dirstate for all the files in the new commit which commitctx
2588 # the dirstate for all the files in the new commit which commitctx
2589 # could have done if it updated the dirstate. Now, we can
2589 # could have done if it updated the dirstate. Now, we can
2590 # selectively update the dirstate only for the amended files.
2590 # selectively update the dirstate only for the amended files.
2591 dirstate = repo.dirstate
2591 dirstate = repo.dirstate
2592
2592
2593 # Update the state of the files which were added and
2593 # Update the state of the files which were added and
2594 # modified in the amend to "normal" in the dirstate.
2594 # modified in the amend to "normal" in the dirstate.
2595 normalfiles = set(wctx.modified() + wctx.added()) & filestoamend
2595 normalfiles = set(wctx.modified() + wctx.added()) & filestoamend
2596 for f in normalfiles:
2596 for f in normalfiles:
2597 dirstate.normal(f)
2597 dirstate.normal(f)
2598
2598
2599 # Update the state of files which were removed in the amend
2599 # Update the state of files which were removed in the amend
2600 # to "removed" in the dirstate.
2600 # to "removed" in the dirstate.
2601 removedfiles = set(wctx.removed()) & filestoamend
2601 removedfiles = set(wctx.removed()) & filestoamend
2602 for f in removedfiles:
2602 for f in removedfiles:
2603 dirstate.drop(f)
2603 dirstate.drop(f)
2604
2604
2605 return newid
2605 return newid
2606
2606
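# Editor's note: a standalone sketch of the filectxfn dispatch defined in
# amend() above, using plain dicts instead of changectx objects. Paths that
# are not being amended keep the old commit's content, removed paths map to
# None, and everything else comes from the working copy. All names here are
# illustrative assumptions.
def pick_file_content(path, old_files, wc_files, amended, removed):
    if path not in amended:
        return old_files.get(path)   # keep the old changeset's version
    if path in removed:
        return None                  # file is dropped by the amend
    return wc_files[path]            # amended content from the working copy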
2607 def commiteditor(repo, ctx, subs, editform=''):
2607 def commiteditor(repo, ctx, subs, editform=''):
2608 if ctx.description():
2608 if ctx.description():
2609 return ctx.description()
2609 return ctx.description()
2610 return commitforceeditor(repo, ctx, subs, editform=editform,
2610 return commitforceeditor(repo, ctx, subs, editform=editform,
2611 unchangedmessagedetection=True)
2611 unchangedmessagedetection=True)
2612
2612
2613 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2613 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2614 editform='', unchangedmessagedetection=False):
2614 editform='', unchangedmessagedetection=False):
2615 if not extramsg:
2615 if not extramsg:
2616 extramsg = _("Leave message empty to abort commit.")
2616 extramsg = _("Leave message empty to abort commit.")
2617
2617
2618 forms = [e for e in editform.split('.') if e]
2618 forms = [e for e in editform.split('.') if e]
2619 forms.insert(0, 'changeset')
2619 forms.insert(0, 'changeset')
2620 templatetext = None
2620 templatetext = None
2621 while forms:
2621 while forms:
2622 ref = '.'.join(forms)
2622 ref = '.'.join(forms)
2623 if repo.ui.config('committemplate', ref):
2623 if repo.ui.config('committemplate', ref):
2624 templatetext = committext = buildcommittemplate(
2624 templatetext = committext = buildcommittemplate(
2625 repo, ctx, subs, extramsg, ref)
2625 repo, ctx, subs, extramsg, ref)
2626 break
2626 break
2627 forms.pop()
2627 forms.pop()
2628 else:
2628 else:
2629 committext = buildcommittext(repo, ctx, subs, extramsg)
2629 committext = buildcommittext(repo, ctx, subs, extramsg)
2630
2630
2631 # run editor in the repository root
2631 # run editor in the repository root
2632 olddir = encoding.getcwd()
2632 olddir = encoding.getcwd()
2633 os.chdir(repo.root)
2633 os.chdir(repo.root)
2634
2634
2635 # make in-memory changes visible to external process
2635 # make in-memory changes visible to external process
2636 tr = repo.currenttransaction()
2636 tr = repo.currenttransaction()
2637 repo.dirstate.write(tr)
2637 repo.dirstate.write(tr)
2638 pending = tr and tr.writepending() and repo.root
2638 pending = tr and tr.writepending() and repo.root
2639
2639
2640 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2640 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2641 editform=editform, pending=pending,
2641 editform=editform, pending=pending,
2642 repopath=repo.path, action='commit')
2642 repopath=repo.path, action='commit')
2643 text = editortext
2643 text = editortext
2644
2644
2645 # strip away anything below this special string (used for editors that want
2645 # strip away anything below this special string (used for editors that want
2646 # to display the diff)
2646 # to display the diff)
2647 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2647 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2648 if stripbelow:
2648 if stripbelow:
2649 text = text[:stripbelow.start()]
2649 text = text[:stripbelow.start()]
2650
2650
2651 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2651 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2652 os.chdir(olddir)
2652 os.chdir(olddir)
2653
2653
2654 if finishdesc:
2654 if finishdesc:
2655 text = finishdesc(text)
2655 text = finishdesc(text)
2656 if not text.strip():
2656 if not text.strip():
2657 raise error.Abort(_("empty commit message"))
2657 raise error.Abort(_("empty commit message"))
2658 if unchangedmessagedetection and editortext == templatetext:
2658 if unchangedmessagedetection and editortext == templatetext:
2659 raise error.Abort(_("commit message unchanged"))
2659 raise error.Abort(_("commit message unchanged"))
2660
2660
2661 return text
2661 return text
2662
2662
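# Editor's note: a standalone sketch of the message clean-up performed in
# commitforceeditor() above: cut everything below a scissors marker line and
# drop "HG:" helper lines. The marker regex below is an assumption standing
# in for the module-level _linebelow pattern.
import re

SCISSORS = r'^HG: ------------------------ >8 ------------------------$'

def cleanup_message(text, marker=SCISSORS):
    cut = re.search(marker, text, flags=re.MULTILINE)
    if cut:
        text = text[:cut.start()]
    return re.sub(r'(?m)^HG:.*(\n|$)', '', text)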
2663 def buildcommittemplate(repo, ctx, subs, extramsg, ref):
2663 def buildcommittemplate(repo, ctx, subs, extramsg, ref):
2664 ui = repo.ui
2664 ui = repo.ui
2665 spec = formatter.templatespec(ref, None, None)
2665 spec = formatter.templatespec(ref, None, None)
2666 t = logcmdutil.changesettemplater(ui, repo, spec)
2666 t = logcmdutil.changesettemplater(ui, repo, spec)
2667 t.t.cache.update((k, templater.unquotestring(v))
2667 t.t.cache.update((k, templater.unquotestring(v))
2668 for k, v in repo.ui.configitems('committemplate'))
2668 for k, v in repo.ui.configitems('committemplate'))
2669
2669
2670 if not extramsg:
2670 if not extramsg:
2671 extramsg = '' # ensure that extramsg is string
2671 extramsg = '' # ensure that extramsg is string
2672
2672
2673 ui.pushbuffer()
2673 ui.pushbuffer()
2674 t.show(ctx, extramsg=extramsg)
2674 t.show(ctx, extramsg=extramsg)
2675 return ui.popbuffer()
2675 return ui.popbuffer()
2676
2676
2677 def hgprefix(msg):
2677 def hgprefix(msg):
2678 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2678 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2679
2679
2680 def buildcommittext(repo, ctx, subs, extramsg):
2680 def buildcommittext(repo, ctx, subs, extramsg):
2681 edittext = []
2681 edittext = []
2682 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2682 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2683 if ctx.description():
2683 if ctx.description():
2684 edittext.append(ctx.description())
2684 edittext.append(ctx.description())
2685 edittext.append("")
2685 edittext.append("")
2686 edittext.append("") # Empty line between message and comments.
2686 edittext.append("") # Empty line between message and comments.
2687 edittext.append(hgprefix(_("Enter commit message."
2687 edittext.append(hgprefix(_("Enter commit message."
2688 " Lines beginning with 'HG:' are removed.")))
2688 " Lines beginning with 'HG:' are removed.")))
2689 edittext.append(hgprefix(extramsg))
2689 edittext.append(hgprefix(extramsg))
2690 edittext.append("HG: --")
2690 edittext.append("HG: --")
2691 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2691 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2692 if ctx.p2():
2692 if ctx.p2():
2693 edittext.append(hgprefix(_("branch merge")))
2693 edittext.append(hgprefix(_("branch merge")))
2694 if ctx.branch():
2694 if ctx.branch():
2695 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2695 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2696 if bookmarks.isactivewdirparent(repo):
2696 if bookmarks.isactivewdirparent(repo):
2697 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2697 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2698 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2698 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2699 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2699 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2700 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2700 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2701 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2701 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2702 if not added and not modified and not removed:
2702 if not added and not modified and not removed:
2703 edittext.append(hgprefix(_("no files changed")))
2703 edittext.append(hgprefix(_("no files changed")))
2704 edittext.append("")
2704 edittext.append("")
2705
2705
2706 return "\n".join(edittext)
2706 return "\n".join(edittext)
2707
2707
2708 def commitstatus(repo, node, branch, bheads=None, opts=None):
2708 def commitstatus(repo, node, branch, bheads=None, opts=None):
2709 if opts is None:
2709 if opts is None:
2710 opts = {}
2710 opts = {}
2711 ctx = repo[node]
2711 ctx = repo[node]
2712 parents = ctx.parents()
2712 parents = ctx.parents()
2713
2713
2714 if (not opts.get('amend') and bheads and node not in bheads and not
2714 if (not opts.get('amend') and bheads and node not in bheads and not
2715 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2715 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2716 repo.ui.status(_('created new head\n'))
2716 repo.ui.status(_('created new head\n'))
2717 # The message is not printed for initial roots. For the other
2717 # The message is not printed for initial roots. For the other
2718 # changesets, it is printed in the following situations:
2718 # changesets, it is printed in the following situations:
2719 #
2719 #
2720 # Par column: for the 2 parents with ...
2720 # Par column: for the 2 parents with ...
2721 # N: null or no parent
2721 # N: null or no parent
2722 # B: parent is on another named branch
2722 # B: parent is on another named branch
2723 # C: parent is a regular non head changeset
2723 # C: parent is a regular non head changeset
2724 # H: parent was a branch head of the current branch
2724 # H: parent was a branch head of the current branch
2725 # Msg column: whether we print "created new head" message
2725 # Msg column: whether we print "created new head" message
2726 # In the following, it is assumed that there already exists some
2726 # In the following, it is assumed that there already exists some
2727 # initial branch heads of the current branch, otherwise nothing is
2727 # initial branch heads of the current branch, otherwise nothing is
2728 # printed anyway.
2728 # printed anyway.
2729 #
2729 #
2730 # Par Msg Comment
2730 # Par Msg Comment
2731 # N N y additional topo root
2731 # N N y additional topo root
2732 #
2732 #
2733 # B N y additional branch root
2733 # B N y additional branch root
2734 # C N y additional topo head
2734 # C N y additional topo head
2735 # H N n usual case
2735 # H N n usual case
2736 #
2736 #
2737 # B B y weird additional branch root
2737 # B B y weird additional branch root
2738 # C B y branch merge
2738 # C B y branch merge
2739 # H B n merge with named branch
2739 # H B n merge with named branch
2740 #
2740 #
2741 # C C y additional head from merge
2741 # C C y additional head from merge
2742 # C H n merge with a head
2742 # C H n merge with a head
2743 #
2743 #
2744 # H H n head merge: head count decreases
2744 # H H n head merge: head count decreases
2745
2745
2746 if not opts.get('close_branch'):
2746 if not opts.get('close_branch'):
2747 for r in parents:
2747 for r in parents:
2748 if r.closesbranch() and r.branch() == branch:
2748 if r.closesbranch() and r.branch() == branch:
2749 repo.ui.status(_('reopening closed branch head %d\n') % r.rev())
2749 repo.ui.status(_('reopening closed branch head %d\n') % r.rev())
2750
2750
2751 if repo.ui.debugflag:
2751 if repo.ui.debugflag:
2752 repo.ui.write(_('committed changeset %d:%s\n') % (ctx.rev(), ctx.hex()))
2752 repo.ui.write(_('committed changeset %d:%s\n') % (ctx.rev(), ctx.hex()))
2753 elif repo.ui.verbose:
2753 elif repo.ui.verbose:
2754 repo.ui.write(_('committed changeset %d:%s\n') % (ctx.rev(), ctx))
2754 repo.ui.write(_('committed changeset %d:%s\n') % (ctx.rev(), ctx))
2755
2755
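# Editor's note: a standalone sketch of the "created new head" test in
# commitstatus() above: the message is printed when the new node is not
# already a branch head and no parent was a head of the same branch.
# `parents` is a list of (node, branch) pairs here; plain values stand in
# for Mercurial contexts.
def creates_new_head(node, parents, bheads, branch, amend=False):
    if amend or not bheads or node in bheads:
        return False
    return not any(p_node in bheads and p_branch == branch
                   for p_node, p_branch in parents)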
2756 def postcommitstatus(repo, pats, opts):
2756 def postcommitstatus(repo, pats, opts):
2757 return repo.status(match=scmutil.match(repo[None], pats, opts))
2757 return repo.status(match=scmutil.match(repo[None], pats, opts))
2758
2758
2759 def revert(ui, repo, ctx, parents, *pats, **opts):
2759 def revert(ui, repo, ctx, parents, *pats, **opts):
2760 opts = pycompat.byteskwargs(opts)
2760 opts = pycompat.byteskwargs(opts)
2761 parent, p2 = parents
2761 parent, p2 = parents
2762 node = ctx.node()
2762 node = ctx.node()
2763
2763
2764 mf = ctx.manifest()
2764 mf = ctx.manifest()
2765 if node == p2:
2765 if node == p2:
2766 parent = p2
2766 parent = p2
2767
2767
2768 # need all matching names in dirstate and manifest of target rev,
2768 # need all matching names in dirstate and manifest of target rev,
2769 # so have to walk both. do not print errors if files exist in one
2769 # so have to walk both. do not print errors if files exist in one
2770 # but not the other. in both cases, filesets should be evaluated against
2770 # but not the other. in both cases, filesets should be evaluated against
2771 # workingctx to get consistent result (issue4497). this means 'set:**'
2771 # workingctx to get consistent result (issue4497). this means 'set:**'
2772 # cannot be used to select missing files from target rev.
2772 # cannot be used to select missing files from target rev.
2773
2773
2774 # `names` is a mapping for all elements in working copy and target revision
2774 # `names` is a mapping for all elements in working copy and target revision
2775 # The mapping is in the form:
2775 # The mapping is in the form:
2776 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2776 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2777 names = {}
2777 names = {}
2778
2778
2779 with repo.wlock():
2779 with repo.wlock():
2780 ## filling of the `names` mapping
2780 ## filling of the `names` mapping
2781 # walk dirstate to fill `names`
2781 # walk dirstate to fill `names`
2782
2782
2783 interactive = opts.get('interactive', False)
2783 interactive = opts.get('interactive', False)
2784 wctx = repo[None]
2784 wctx = repo[None]
2785 m = scmutil.match(wctx, pats, opts)
2785 m = scmutil.match(wctx, pats, opts)
2786
2786
2787 # we'll need this later
2787 # we'll need this later
2788 targetsubs = sorted(s for s in wctx.substate if m(s))
2788 targetsubs = sorted(s for s in wctx.substate if m(s))
2789
2789
2790 if not m.always():
2790 if not m.always():
2791 matcher = matchmod.badmatch(m, lambda x, y: False)
2791 matcher = matchmod.badmatch(m, lambda x, y: False)
2792 for abs in wctx.walk(matcher):
2792 for abs in wctx.walk(matcher):
2793 names[abs] = m.rel(abs), m.exact(abs)
2793 names[abs] = m.rel(abs), m.exact(abs)
2794
2794
2795 # walk target manifest to fill `names`
2795 # walk target manifest to fill `names`
2796
2796
2797 def badfn(path, msg):
2797 def badfn(path, msg):
2798 if path in names:
2798 if path in names:
2799 return
2799 return
2800 if path in ctx.substate:
2800 if path in ctx.substate:
2801 return
2801 return
2802 path_ = path + '/'
2802 path_ = path + '/'
2803 for f in names:
2803 for f in names:
2804 if f.startswith(path_):
2804 if f.startswith(path_):
2805 return
2805 return
2806 ui.warn("%s: %s\n" % (m.rel(path), msg))
2806 ui.warn("%s: %s\n" % (m.rel(path), msg))
2807
2807
2808 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2808 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2809 if abs not in names:
2809 if abs not in names:
2810 names[abs] = m.rel(abs), m.exact(abs)
2810 names[abs] = m.rel(abs), m.exact(abs)
2811
2811
2812 # Find the status of all files in `names`.
2812 # Find the status of all files in `names`.
2813 m = scmutil.matchfiles(repo, names)
2813 m = scmutil.matchfiles(repo, names)
2814
2814
2815 changes = repo.status(node1=node, match=m,
2815 changes = repo.status(node1=node, match=m,
2816 unknown=True, ignored=True, clean=True)
2816 unknown=True, ignored=True, clean=True)
2817 else:
2817 else:
2818 changes = repo.status(node1=node, match=m)
2818 changes = repo.status(node1=node, match=m)
2819 for kind in changes:
2819 for kind in changes:
2820 for abs in kind:
2820 for abs in kind:
2821 names[abs] = m.rel(abs), m.exact(abs)
2821 names[abs] = m.rel(abs), m.exact(abs)
2822
2822
2823 m = scmutil.matchfiles(repo, names)
2823 m = scmutil.matchfiles(repo, names)
2824
2824
2825 modified = set(changes.modified)
2825 modified = set(changes.modified)
2826 added = set(changes.added)
2826 added = set(changes.added)
2827 removed = set(changes.removed)
2827 removed = set(changes.removed)
2828 _deleted = set(changes.deleted)
2828 _deleted = set(changes.deleted)
2829 unknown = set(changes.unknown)
2829 unknown = set(changes.unknown)
2830 unknown.update(changes.ignored)
2830 unknown.update(changes.ignored)
2831 clean = set(changes.clean)
2831 clean = set(changes.clean)
2832 modadded = set()
2832 modadded = set()
2833
2833
2834 # We need to account for the state of the file in the dirstate,
2834 # We need to account for the state of the file in the dirstate,
2835 # even when we revert against something other than the parent. This will
2835 # even when we revert against something other than the parent. This will
2836 # slightly alter the behavior of revert (whether to back up or not, whether
2836 # slightly alter the behavior of revert (whether to back up or not, whether
2837 # to delete or just forget, etc).
2837 # to delete or just forget, etc).
2838 if parent == node:
2838 if parent == node:
2839 dsmodified = modified
2839 dsmodified = modified
2840 dsadded = added
2840 dsadded = added
2841 dsremoved = removed
2841 dsremoved = removed
2842 # store all local modifications, useful later for rename detection
2842 # store all local modifications, useful later for rename detection
2843 localchanges = dsmodified | dsadded
2843 localchanges = dsmodified | dsadded
2844 modified, added, removed = set(), set(), set()
2844 modified, added, removed = set(), set(), set()
2845 else:
2845 else:
2846 changes = repo.status(node1=parent, match=m)
2846 changes = repo.status(node1=parent, match=m)
2847 dsmodified = set(changes.modified)
2847 dsmodified = set(changes.modified)
2848 dsadded = set(changes.added)
2848 dsadded = set(changes.added)
2849 dsremoved = set(changes.removed)
2849 dsremoved = set(changes.removed)
2850 # store all local modifications, useful later for rename detection
2850 # store all local modifications, useful later for rename detection
2851 localchanges = dsmodified | dsadded
2851 localchanges = dsmodified | dsadded
2852
2852
2853 # only take removes between wc and target into account
2853 # only take removes between wc and target into account
2854 clean |= dsremoved - removed
2854 clean |= dsremoved - removed
2855 dsremoved &= removed
2855 dsremoved &= removed
2856 # distinguish between dirstate removes and others
2856 # distinguish between dirstate removes and others
2857 removed -= dsremoved
2857 removed -= dsremoved
2858
2858
2859 modadded = added & dsmodified
2859 modadded = added & dsmodified
2860 added -= modadded
2860 added -= modadded
2861
2861
2862 # tell the newly modified files apart.
2862 # tell the newly modified files apart.
2863 dsmodified &= modified
2863 dsmodified &= modified
2864 dsmodified |= modified & dsadded # dirstate added may need backup
2864 dsmodified |= modified & dsadded # dirstate added may need backup
2865 modified -= dsmodified
2865 modified -= dsmodified
2866
2866
2867 # We need to wait for some post-processing to update this set
2867 # We need to wait for some post-processing to update this set
2868 # before making the distinction. The dirstate will be used for
2868 # before making the distinction. The dirstate will be used for
2869 # that purpose.
2869 # that purpose.
2870 dsadded = added
2870 dsadded = added
2871
2871
2872 # in case of merge, files that are actually added can be reported as
2872 # in case of merge, files that are actually added can be reported as
2873 # modified, so we need to post-process the result
2873 # modified, so we need to post-process the result
2874 if p2 != nullid:
2874 if p2 != nullid:
2875 mergeadd = set(dsmodified)
2875 mergeadd = set(dsmodified)
2876 for path in dsmodified:
2876 for path in dsmodified:
2877 if path in mf:
2877 if path in mf:
2878 mergeadd.remove(path)
2878 mergeadd.remove(path)
2879 dsadded |= mergeadd
2879 dsadded |= mergeadd
2880 dsmodified -= mergeadd
2880 dsmodified -= mergeadd
2881
2881
2882 # if f is a rename, update `names` to also revert the source
2882 # if f is a rename, update `names` to also revert the source
2883 cwd = repo.getcwd()
2883 cwd = repo.getcwd()
2884 for f in localchanges:
2884 for f in localchanges:
2885 src = repo.dirstate.copied(f)
2885 src = repo.dirstate.copied(f)
2886 # XXX should we check for rename down to target node?
2886 # XXX should we check for rename down to target node?
2887 if src and src not in names and repo.dirstate[src] == 'r':
2887 if src and src not in names and repo.dirstate[src] == 'r':
2888 dsremoved.add(src)
2888 dsremoved.add(src)
2889 names[src] = (repo.pathto(src, cwd), True)
2889 names[src] = (repo.pathto(src, cwd), True)
2890
2890
2891 # determine the exact nature of the deleted files
2891 # determine the exact nature of the deleted files
2892 deladded = set(_deleted)
2892 deladded = set(_deleted)
2893 for path in _deleted:
2893 for path in _deleted:
2894 if path in mf:
2894 if path in mf:
2895 deladded.remove(path)
2895 deladded.remove(path)
2896 deleted = _deleted - deladded
2896 deleted = _deleted - deladded
2897
2897
2898 # distinguish between file to forget and the other
2898 # distinguish between file to forget and the other
2899 added = set()
2899 added = set()
2900 for abs in dsadded:
2900 for abs in dsadded:
2901 if repo.dirstate[abs] != 'a':
2901 if repo.dirstate[abs] != 'a':
2902 added.add(abs)
2902 added.add(abs)
2903 dsadded -= added
2903 dsadded -= added
2904
2904
2905 for abs in deladded:
2905 for abs in deladded:
2906 if repo.dirstate[abs] == 'a':
2906 if repo.dirstate[abs] == 'a':
2907 dsadded.add(abs)
2907 dsadded.add(abs)
2908 deladded -= dsadded
2908 deladded -= dsadded
2909
2909
2910 # For files marked as removed, we check if an unknown file is present at
2910 # For files marked as removed, we check if an unknown file is present at
2911 # the same path. If a such file exists it may need to be backed up.
2911 # the same path. If a such file exists it may need to be backed up.
2912 # Making the distinction at this stage helps have simpler backup
2912 # Making the distinction at this stage helps have simpler backup
2913 # logic.
2913 # logic.
2914 removunk = set()
2914 removunk = set()
2915 for abs in removed:
2915 for abs in removed:
2916 target = repo.wjoin(abs)
2916 target = repo.wjoin(abs)
2917 if os.path.lexists(target):
2917 if os.path.lexists(target):
2918 removunk.add(abs)
2918 removunk.add(abs)
2919 removed -= removunk
2919 removed -= removunk
2920
2920
2921 dsremovunk = set()
2921 dsremovunk = set()
2922 for abs in dsremoved:
2922 for abs in dsremoved:
2923 target = repo.wjoin(abs)
2923 target = repo.wjoin(abs)
2924 if os.path.lexists(target):
2924 if os.path.lexists(target):
2925 dsremovunk.add(abs)
2925 dsremovunk.add(abs)
2926 dsremoved -= dsremovunk
2926 dsremoved -= dsremovunk
2927
2927
2928 # action to be actually performed by revert
2928 # action to be actually performed by revert
2929 # (<list of file>, message>) tuple
2929 # (<list of file>, message>) tuple
2930 actions = {'revert': ([], _('reverting %s\n')),
2930 actions = {'revert': ([], _('reverting %s\n')),
2931 'add': ([], _('adding %s\n')),
2931 'add': ([], _('adding %s\n')),
2932 'remove': ([], _('removing %s\n')),
2932 'remove': ([], _('removing %s\n')),
2933 'drop': ([], _('removing %s\n')),
2933 'drop': ([], _('removing %s\n')),
2934 'forget': ([], _('forgetting %s\n')),
2934 'forget': ([], _('forgetting %s\n')),
2935 'undelete': ([], _('undeleting %s\n')),
2935 'undelete': ([], _('undeleting %s\n')),
2936 'noop': (None, _('no changes needed to %s\n')),
2936 'noop': (None, _('no changes needed to %s\n')),
2937 'unknown': (None, _('file not managed: %s\n')),
2937 'unknown': (None, _('file not managed: %s\n')),
2938 }
2938 }
2939
2939
2940 # "constant" that convey the backup strategy.
2940 # "constant" that convey the backup strategy.
2941 # All set to `discard` if `no-backup` is set do avoid checking
2941 # All set to `discard` if `no-backup` is set do avoid checking
2942 # no_backup lower in the code.
2942 # no_backup lower in the code.
2943 # These values are ordered for comparison purposes
2943 # These values are ordered for comparison purposes
2944 backupinteractive = 3 # do backup if interactively modified
2944 backupinteractive = 3 # do backup if interactively modified
2945 backup = 2 # unconditionally do backup
2945 backup = 2 # unconditionally do backup
2946 check = 1 # check if the existing file differs from target
2946 check = 1 # check if the existing file differs from target
2947 discard = 0 # never do backup
2947 discard = 0 # never do backup
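        # Because the strategies above are ordered, the dispatch loop further
        # down can compare them numerically: `backupinteractive` is handled
        # first, `backup <= dobackup` then forces an unconditional backup, and
        # `check` falls through to the wctx[abs].cmp(ctx[abs]) content test.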
        if opts.get('no_backup'):
            backupinteractive = backup = check = discard
        if interactive:
            dsmodifiedbackup = backupinteractive
        else:
            dsmodifiedbackup = backup
        tobackup = set()

        backupanddel = actions['remove']
        if not opts.get('no_backup'):
            backupanddel = actions['drop']

        disptable = (
            # dispatch table:
            #   file state
            #   action
            #   make backup

            ## Sets whose results will change files on disk
            # Modified compared to target, no local change
            (modified, actions['revert'], discard),
            # Modified compared to target, but local file is deleted
            (deleted, actions['revert'], discard),
            # Modified compared to target, local change
            (dsmodified, actions['revert'], dsmodifiedbackup),
            # Added since target
            (added, actions['remove'], discard),
            # Added in working directory
            (dsadded, actions['forget'], discard),
            # Added since target, with local modifications
            (modadded, backupanddel, backup),
            # Added since target but file is missing in working directory
            (deladded, actions['drop'], discard),
            # Removed since target, before working copy parent
            (removed, actions['add'], discard),
            # Same as `removed` but an unknown file exists at the same path
            (removunk, actions['add'], check),
            # Removed since target, marked as such in working copy parent
            (dsremoved, actions['undelete'], discard),
            # Same as `dsremoved` but an unknown file exists at the same path
            (dsremovunk, actions['undelete'], check),
            ## the following sets do not result in any file changes
            # File with no modification
            (clean, actions['noop'], discard),
            # Existing file, not tracked anywhere
            (unknown, actions['unknown'], discard),
            )
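        # Each file in `names` is matched against these sets in order; the
        # `break` at the end of the loop body below means only the first
        # matching entry acts on a given file.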

        for abs, (rel, exact) in sorted(names.items()):
            # target file to be touched on disk (relative to cwd)
            target = repo.wjoin(abs)
            # search the entry in the dispatch table.
            # if the file is in any of these sets, it was touched in the working
            # directory parent and we are sure it needs to be reverted.
            for table, (xlist, msg), dobackup in disptable:
                if abs not in table:
                    continue
                if xlist is not None:
                    xlist.append(abs)
                    if dobackup:
                        # If in interactive mode, don't automatically create
                        # .orig files (issue4793)
                        if dobackup == backupinteractive:
                            tobackup.add(abs)
                        elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
                            bakname = scmutil.origpath(ui, repo, rel)
                            ui.note(_('saving current version of %s as %s\n') %
                                    (rel, bakname))
                            if not opts.get('dry_run'):
                                if interactive:
                                    util.copyfile(target, bakname)
                                else:
                                    util.rename(target, bakname)
                    if opts.get('dry_run'):
                        if ui.verbose or not exact:
                            ui.status(msg % rel)
                elif exact:
                    ui.warn(msg % rel)
                break

        if not opts.get('dry_run'):
            needdata = ('revert', 'add', 'undelete')
            oplist = [actions[name][0] for name in needdata]
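            # prefetch the file contents revert is about to write, so that
            # backends which store file data remotely (the lfs extension, for
            # instance) can fetch everything in one batch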
            prefetch = scmutil.prefetchfiles
            matchfiles = scmutil.matchfiles
            prefetch(repo, [ctx.rev()],
                     matchfiles(repo,
                                [f for sublist in oplist for f in sublist]))
            _performrevert(repo, parents, ctx, names, actions, interactive,
                           tobackup)

        if targetsubs:
            # Revert the subrepos on the revert list
            for sub in targetsubs:
                try:
                    wctx.sub(sub).revert(ctx.substate[sub], *pats,
                                         **pycompat.strkwargs(opts))
                except KeyError:
                    raise error.Abort("subrepository '%s' does not exist in %s!"
                                      % (sub, short(ctx.node())))

def _performrevert(repo, parents, ctx, names, actions, interactive=False,
                   tobackup=None):
    """function that actually performs all the actions computed for revert

    This is an independent function to let extensions plug in and react to
    the imminent revert.

    Make sure you have the working directory locked when calling this function.
    """
    parent, p2 = parents
    node = ctx.node()
    excluded_files = []

    def checkout(f):
        fc = ctx[f]
        repo.wwrite(f, fc.data(), fc.flags())

    def doremove(f):
        try:
            rmdir = repo.ui.configbool('experimental', 'removeemptydirs')
            repo.wvfs.unlinkpath(f, rmdir=rmdir)
        except OSError:
            pass
        repo.dirstate.remove(f)

    def prntstatusmsg(action, f):
        rel, exact = names[f]
        if repo.ui.verbose or not exact:
            repo.ui.status(actions[action][1] % rel)

    audit_path = pathutil.pathauditor(repo.root, cached=True)
    for f in actions['forget'][0]:
        if interactive:
            choice = repo.ui.promptchoice(
                _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
            if choice == 0:
                prntstatusmsg('forget', f)
                repo.dirstate.drop(f)
            else:
                excluded_files.append(f)
        else:
            prntstatusmsg('forget', f)
            repo.dirstate.drop(f)
    for f in actions['remove'][0]:
        audit_path(f)
        if interactive:
            choice = repo.ui.promptchoice(
                _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
            if choice == 0:
                prntstatusmsg('remove', f)
                doremove(f)
            else:
                excluded_files.append(f)
        else:
            prntstatusmsg('remove', f)
            doremove(f)
    for f in actions['drop'][0]:
        audit_path(f)
        prntstatusmsg('drop', f)
        repo.dirstate.remove(f)

    normal = None
    if node == parent:
        # We're reverting to our parent. If possible, we'd like status
        # to report the file as clean. We have to use normallookup for
        # merges to avoid losing information about merged/dirty files.
        if p2 != nullid:
            normal = repo.dirstate.normallookup
        else:
            normal = repo.dirstate.normal

    newlyaddedandmodifiedfiles = set()
    if interactive:
        # Prompt the user for changes to revert
        torevert = [f for f in actions['revert'][0] if f not in excluded_files]
        m = scmutil.matchfiles(repo, torevert)
        diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
        diffopts.nodates = True
        diffopts.git = True
        operation = 'discard'
        reversehunks = True
        if node != parent:
            operation = 'apply'
            reversehunks = False
        if reversehunks:
            diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
        else:
            diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
        originalchunks = patch.parsepatch(diff)

        try:

            chunks, opts = recordfilter(repo.ui, originalchunks,
                                        operation=operation)
            if reversehunks:
                chunks = patch.reversehunks(chunks)

        except error.PatchError as err:
            raise error.Abort(_('error parsing patch: %s') % err)

        newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
        if tobackup is None:
            tobackup = set()
        # Apply changes
        fp = stringio()
        # chunks are serialized per file, but files aren't sorted
        for f in sorted(set(c.header.filename() for c in chunks if ishunk(c))):
            prntstatusmsg('revert', f)
        for c in chunks:
            if ishunk(c):
                abs = c.header.filename()
                # Create a backup file only if this hunk should be backed up
                if c.header.filename() in tobackup:
                    target = repo.wjoin(abs)
                    bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
                    util.copyfile(target, bakname)
                    tobackup.remove(abs)
            c.write(fp)
        dopatch = fp.tell()
        fp.seek(0)
        if dopatch:
            try:
                patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
            except error.PatchError as err:
                raise error.Abort(pycompat.bytestr(err))
        del fp
    else:
        for f in actions['revert'][0]:
            prntstatusmsg('revert', f)
            checkout(f)
            if normal:
                normal(f)

    for f in actions['add'][0]:
        # Don't checkout modified files, they are already created by the diff
        if f not in newlyaddedandmodifiedfiles:
            prntstatusmsg('add', f)
            checkout(f)
            repo.dirstate.add(f)

    normal = repo.dirstate.normallookup
    if node == parent and p2 == nullid:
        normal = repo.dirstate.normal
    for f in actions['undelete'][0]:
        prntstatusmsg('undelete', f)
        checkout(f)
        normal(f)

    copied = copies.pathcopies(repo[parent], ctx)

    for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
        if f in copied:
            repo.dirstate.copy(copied[f], f)

# a list of (ui, repo, otherpeer, opts, missing) functions called by
# commands.outgoing. "missing" is the "missing" attribute of the result of
# "findcommonoutgoing()"
outgoinghooks = util.hooks()
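# Extensions register callbacks on this and the following hook points with
# util.hooks.add(), for instance (hypothetical extension name):
#   cmdutil.outgoinghooks.add('myextension', myoutgoingcheck)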

# a list of (ui, repo) functions called by commands.summary
summaryhooks = util.hooks()

# a list of (ui, repo, opts, changes) functions called by commands.summary.
#
# functions should return a tuple of booleans below, if 'changes' is None:
#  (whether-incomings-are-needed, whether-outgoings-are-needed)
#
# otherwise, 'changes' is a tuple of tuples below:
#  - (sourceurl, sourcebranch, sourcepeer, incoming)
#  - (desturl, destbranch, destpeer, outgoing)
summaryremotehooks = util.hooks()

# A list of state files kept by multistep operations like graft.
# Since graft cannot be aborted, it is considered 'clearable' by update.
# note: bisect is intentionally excluded
# (state file, clearable, allowcommit, error, hint)
unfinishedstates = [
    ('graftstate', True, False, _('graft in progress'),
     _("use 'hg graft --continue' or 'hg graft --stop' to stop")),
    ('updatestate', True, False, _('last update was interrupted'),
     _("use 'hg update' to get a consistent checkout"))
    ]
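# Extensions that track their own multistep state append entries to this list
# so that checkunfinished() and clearunfinished() below know about them;
# rebase, for example, registers a 'rebasestate' entry that is neither
# clearable nor commit-friendly.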

def checkunfinished(repo, commit=False):
    '''Look for an unfinished multistep operation, like graft, and abort
    if found. It's probably good to check this right before
    bailifchanged().
    '''
    # Check for non-clearable states first, so things like rebase will take
    # precedence over update.
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if clearable or (commit and allowcommit):
            continue
        if repo.vfs.exists(f):
            raise error.Abort(msg, hint=hint)

    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if not clearable or (commit and allowcommit):
            continue
        if repo.vfs.exists(f):
            raise error.Abort(msg, hint=hint)

def clearunfinished(repo):
    '''Check for unfinished operations (as above), and clear the ones
    that are clearable.
    '''
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if not clearable and repo.vfs.exists(f):
            raise error.Abort(msg, hint=hint)
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if clearable and repo.vfs.exists(f):
            util.unlink(repo.vfs.join(f))

afterresolvedstates = [
    ('graftstate',
     _('hg graft --continue')),
    ]
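# Like unfinishedstates, this is a module-level list, so extensions can append
# their own (state file, continue command) pairs and have 'hg resolve' suggest
# the right follow-up command.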

def howtocontinue(repo):
    '''Check for an unfinished operation and return the command to finish
    it.

    afterresolvedstates tuples define a .hg/{file} and the corresponding
    command needed to finish it.

    Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
    a boolean.
    '''
    contmsg = _("continue: %s")
    for f, msg in afterresolvedstates:
        if repo.vfs.exists(f):
            return contmsg % msg, True
    if repo[None].dirty(missing=True, merge=False, branch=False):
        return contmsg % _("hg commit"), False
    return None, None
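    # Examples: an interrupted graft yields ("continue: hg graft --continue",
    # True); a dirty working copy with no tracked state file yields
    # ("continue: hg commit", False); a clean repository yields (None, None).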

def checkafterresolved(repo):
    '''Inform the user about the next action after completing hg resolve

    If there's a matching entry in afterresolvedstates, the message from
    howtocontinue is reported through repo.ui.warn.

    Otherwise, it is reported through repo.ui.note.
    '''
    msg, warning = howtocontinue(repo)
    if msg is not None:
        if warning:
            repo.ui.warn("%s\n" % msg)
        else:
            repo.ui.note("%s\n" % msg)

def wrongtooltocontinue(repo, task):
    '''Raise an abort suggesting how to properly continue if there is an
    active task.

    Uses howtocontinue() to find the active task.

    If there is no active task, or the only suggestion is the plain
    'hg commit' one, no hint is offered.
    '''
    after = howtocontinue(repo)
    hint = None
    if after[1]:
        hint = after[0]
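    # the hint is only attached when howtocontinue() reported a real
    # unfinished state (warning is True); the generic 'hg commit' suggestion
    # is deliberately not surfaced as a hint here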
    raise error.Abort(_('no %s in progress') % task, hint=hint)