phases: rewrite "immutable changeset" to "public changeset"...
Jordi Gutiérrez Hermoso - r25411:d298805f default

# histedit.py - interactive history editing for mercurial
#
# Copyright 2009 Augie Fackler <raf@durin42.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""interactive history editing

With this extension installed, Mercurial gains one new command: histedit. Usage
is as follows, assuming the following history::

 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
 |   Add delta
 |
 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
 |   Add gamma
 |
 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
 |   Add beta
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
     Add alpha

If you were to run ``hg histedit c561b4e977df``, you would see the following
file open in your editor::

 pick c561b4e977df Add beta
 pick 030b686bedc4 Add gamma
 pick 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description
 #  d, drop = remove commit from history
 #  m, mess = edit message without changing commit content
 #

In this file, lines beginning with ``#`` are ignored. You must specify a rule
for each revision in your history. For example, if you had meant to add gamma
before beta, and then wanted to add delta in the same revision as beta, you
would reorganize the file to look like this::

 pick 030b686bedc4 Add gamma
 pick c561b4e977df Add beta
 fold 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description
 #  d, drop = remove commit from history
 #  m, mess = edit message without changing commit content
 #

At which point you close the editor and ``histedit`` starts working. When you
specify a ``fold`` operation, ``histedit`` will open an editor when it folds
those revisions together, offering you a chance to clean up the commit message::

 Add beta
 ***
 Add delta

Edit the commit message to your liking, then close the editor. For
this example, let's assume that the commit message was changed to
``Add beta and delta.`` After histedit has run and had a chance to
remove any old or temporary revisions it needed, the history looks
like this::

 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
 |   Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 |   Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
     Add alpha

Note that ``histedit`` does *not* remove any revisions (even its own temporary
ones) until after it has completed all the editing operations, so it will
probably perform several strip operations when it's done. For the above example,
it had to run strip twice. Strip can be slow depending on a variety of factors,
so you might need to be a little patient. You can choose to keep the original
revisions by passing the ``--keep`` flag.

The ``edit`` operation will drop you back to a command prompt,
allowing you to edit files freely, or even use ``hg record`` to commit
some changes as a separate commit. When you're done, any remaining
uncommitted changes will be committed as well. When done, run ``hg
histedit --continue`` to finish this step. You'll be prompted for a
new commit message, but the default commit message will be the
original message for the ``edit``-ed revision.

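For illustration only (a hypothetical session; the file name ``beta`` and the
extra content are merely examples), an ``edit`` step might be resolved
roughly like this::

 hg histedit c561b4e977df    # mark one changeset as 'edit' in the plan
 echo fix >> beta            # adjust the working copy as needed
 hg histedit --continue      # recommit the revision and resume
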
The ``message`` operation will give you a chance to revise a commit
message without changing the contents. It's a shortcut for doing
``edit`` immediately followed by `hg histedit --continue``.

If ``histedit`` encounters a conflict when moving a revision (while
handling ``pick`` or ``fold``), it'll stop in a similar manner to
``edit`` with the difference that it won't prompt you for a commit
message when done. If you decide at this point that you don't like how
much work it will be to rearrange history, or that you made a mistake,
you can use ``hg histedit --abort`` to abandon the new changes you
have made and return to the state before you attempted to edit your
history.

If we clone the histedit-ed example repository above and add four more
changes, such that we have the following history::

 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
 |   Add theta
 |
 o 5 140988835471 2009-04-27 18:04 -0500 stefan
 |   Add eta
 |
 o 4 122930637314 2009-04-27 18:04 -0500 stefan
 |   Add zeta
 |
 o 3 836302820282 2009-04-27 18:04 -0500 stefan
 |   Add epsilon
 |
 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
 |   Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 |   Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
     Add alpha

If you run ``hg histedit --outgoing`` on the clone then it is the same
as running ``hg histedit 836302820282``. If you plan to push to a
repository that Mercurial does not detect to be related to the source
repo, you can add a ``--force`` option.

Histedit rule lines are truncated to 80 characters by default. You
can customise this behaviour by setting a different length in your
configuration file::

  [histedit]
  linelen = 120      # truncate rule lines at 120 characters
"""

try:
    import cPickle as pickle
    pickle.dump # import now
except ImportError:
    import pickle
import errno
import os
import sys

from mercurial import cmdutil
from mercurial import discovery
from mercurial import error
from mercurial import changegroup
from mercurial import copies
from mercurial import context
from mercurial import exchange
from mercurial import extensions
from mercurial import hg
from mercurial import node
from mercurial import repair
from mercurial import scmutil
from mercurial import util
from mercurial import obsolete
from mercurial import merge as mergemod
from mercurial.lock import release
from mercurial.i18n import _

cmdtable = {}
command = cmdutil.command(cmdtable)

# Note for extension authors: ONLY specify testedwith = 'internal' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'internal'

# i18n: command names and abbreviations must remain untranslated
editcomment = _("""# Edit history between %s and %s
#
# Commits are listed from least to most recent
#
# Commands:
#  p, pick = use commit
#  e, edit = use commit, but stop for amending
#  f, fold = use commit, but combine it with the one above
#  r, roll = like fold, but discard this commit's description
#  d, drop = remove commit from history
#  m, mess = edit message without changing commit content
#
""")

class histeditstate(object):
    def __init__(self, repo, parentctxnode=None, rules=None, keep=None,
                 topmost=None, replacements=None, lock=None, wlock=None):
        self.repo = repo
        self.rules = rules
        self.keep = keep
        self.topmost = topmost
        self.parentctxnode = parentctxnode
        self.lock = lock
        self.wlock = wlock
        self.backupfile = None
        if replacements is None:
            self.replacements = []
        else:
            self.replacements = replacements

    def read(self):
        """Load histedit state from disk and set fields appropriately."""
        try:
            fp = self.repo.vfs('histedit-state', 'r')
        except IOError, err:
            if err.errno != errno.ENOENT:
                raise
            raise util.Abort(_('no histedit in progress'))

        try:
            data = pickle.load(fp)
            parentctxnode, rules, keep, topmost, replacements = data
            backupfile = None
        except pickle.UnpicklingError:
            data = self._load()
            parentctxnode, rules, keep, topmost, replacements, backupfile = data

        self.parentctxnode = parentctxnode
        self.rules = rules
        self.keep = keep
        self.topmost = topmost
        self.replacements = replacements
        self.backupfile = backupfile

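    # The "v1" text format written below is, line by line: a "v1" version
    # marker, the hex parent node, the hex topmost node, the keep flag, the
    # rule count followed by (action, remainder) line pairs, the replacement
    # count followed by one line of concatenated hex nodes per replacement,
    # and finally the backup bundle file name (possibly empty).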
    def write(self):
        fp = self.repo.vfs('histedit-state', 'w')
        fp.write('v1\n')
        fp.write('%s\n' % node.hex(self.parentctxnode))
        fp.write('%s\n' % node.hex(self.topmost))
        fp.write('%s\n' % self.keep)
        fp.write('%d\n' % len(self.rules))
        for rule in self.rules:
            fp.write('%s\n' % rule[0]) # action
            fp.write('%s\n' % rule[1]) # remainder
        fp.write('%d\n' % len(self.replacements))
        for replacement in self.replacements:
            fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
                for r in replacement[1])))
        backupfile = self.backupfile
        if not backupfile:
            backupfile = ''
        fp.write('%s\n' % backupfile)
        fp.close()

    def _load(self):
        fp = self.repo.vfs('histedit-state', 'r')
        lines = [l[:-1] for l in fp.readlines()]

        index = 0
        lines[index] # version number
        index += 1

        parentctxnode = node.bin(lines[index])
        index += 1

        topmost = node.bin(lines[index])
        index += 1

        keep = lines[index] == 'True'
        index += 1

        # Rules
        rules = []
        rulelen = int(lines[index])
        index += 1
        for i in xrange(rulelen):
            ruleaction = lines[index]
            index += 1
            rule = lines[index]
            index += 1
            rules.append((ruleaction, rule))

        # Replacements
        replacements = []
        replacementlen = int(lines[index])
        index += 1
        for i in xrange(replacementlen):
            replacement = lines[index]
            original = node.bin(replacement[:40])
            succ = [node.bin(replacement[i:i + 40]) for i in
                    range(40, len(replacement), 40)]
            replacements.append((original, succ))
            index += 1

        backupfile = lines[index]
        index += 1

        fp.close()

        return parentctxnode, rules, keep, topmost, replacements, backupfile

    def clear(self):
        self.repo.vfs.unlink('histedit-state')

class histeditaction(object):
    def __init__(self, state, node):
        self.state = state
        self.repo = state.repo
        self.node = node

    @classmethod
    def fromrule(cls, state, rule):
        """Parses the given rule, returning an instance of the histeditaction.
        """
        repo = state.repo
        rulehash = rule.strip().split(' ', 1)[0]
        try:
            node = repo[rulehash].node()
        except error.RepoError:
            raise util.Abort(_('unknown changeset %s listed') % rulehash[:12])
        return cls(state, node)

    def run(self):
        """Runs the action. The default behavior is simply to apply the
        action's rulectx onto the current parentctx."""
        self.applychange()
        self.continuedirty()
        return self.continueclean()

    def applychange(self):
        """Applies the changes from this action's rulectx onto the current
        parentctx, but does not commit them."""
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode)
        stats = applychanges(repo.ui, repo, rulectx, {})
        if stats and stats[3] > 0:
            raise error.InterventionRequired(_('Fix up the change and run '
                                               'hg histedit --continue'))

    def continuedirty(self):
        """Continues the action when changes have been applied to the working
        copy. The default behavior is to commit the dirty changes."""
        repo = self.repo
        rulectx = repo[self.node]

        editor = self.commiteditor()
        commit = commitfuncfor(repo, rulectx)

        commit(text=rulectx.description(), user=rulectx.user(),
               date=rulectx.date(), extra=rulectx.extra(), editor=editor)

    def commiteditor(self):
        """The editor to be used to edit the commit message."""
        return False

    def continueclean(self):
        """Continues the action when the working copy is clean. The default
        behavior is to accept the current commit as the new version of the
        rulectx."""
        ctx = self.repo['.']
        if ctx.node() == self.state.parentctxnode:
            self.repo.ui.warn(_('%s: empty changeset\n') %
                              node.short(self.node))
            return ctx, [(self.node, tuple())]
        if ctx.node() == self.node:
            # Nothing changed
            return ctx, []
        return ctx, [(self.node, (ctx.node(),))]

def commitfuncfor(repo, src):
    """Build a commit function for the replacement of <src>

    This function ensures we apply the same treatment to all changesets.

    - Add a 'histedit_source' entry in extra.

    Note that fold has its own separate logic because its handling is a bit
    different and not easily factored out of the fold method.
    """
    phasemin = src.phase()
    def commitfunc(**kwargs):
        phasebackup = repo.ui.backupconfig('phases', 'new-commit')
        try:
            repo.ui.setconfig('phases', 'new-commit', phasemin,
                              'histedit')
            extra = kwargs.get('extra', {}).copy()
            extra['histedit_source'] = src.hex()
            kwargs['extra'] = extra
            return repo.commit(**kwargs)
        finally:
            repo.ui.restoreconfig(phasebackup)
    return commitfunc

def applychanges(ui, repo, ctx, opts):
    """Merge changeset from ctx (only) in the current working directory"""
    wcpar = repo.dirstate.parents()[0]
    if ctx.p1().node() == wcpar:
        # editing "in place": no merge is needed, just apply the changes
        # onto the parent of the edited revision
        cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
        stats = None
    else:
        try:
            # ui.forcemerge is an internal variable, do not document
            repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                              'histedit')
            stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
        finally:
            repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
    return stats

def collapse(repo, first, last, commitopts, skipprompt=False):
    """collapse the set of revisions from first to last as a new one.

    Expected commit options are:
        - message
        - date
        - username
    Commit message is edited in all cases.

    This function works in memory."""
    ctxs = list(repo.set('%d::%d', first, last))
    if not ctxs:
        return None
    base = first.parents()[0]

    # commit a new version of the old changeset, including the update
    # collect all files which might be affected
    files = set()
    for ctx in ctxs:
        files.update(ctx.files())

    # Recompute copies (avoid recording a -> b -> a)
    copied = copies.pathcopies(base, last)

    # prune files which were reverted by the updates
    def samefile(f):
        if f in last.manifest():
            a = last.filectx(f)
            if f in base.manifest():
                b = base.filectx(f)
                return (a.data() == b.data()
                        and a.flags() == b.flags())
            else:
                return False
        else:
            return f not in base.manifest()
    files = [f for f in files if not samefile(f)]
    # commit version of these files as defined by head
    headmf = last.manifest()
    def filectxfn(repo, ctx, path):
        if path in headmf:
            fctx = last[path]
            flags = fctx.flags()
            mctx = context.memfilectx(repo,
                                      fctx.path(), fctx.data(),
                                      islink='l' in flags,
                                      isexec='x' in flags,
                                      copied=copied.get(path))
            return mctx
        return None

    if commitopts.get('message'):
        message = commitopts['message']
    else:
        message = first.description()
    user = commitopts.get('user')
    date = commitopts.get('date')
    extra = commitopts.get('extra')

    parents = (first.p1().node(), first.p2().node())
    editor = None
    if not skipprompt:
        editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
    new = context.memctx(repo,
                         parents=parents,
                         text=message,
                         files=files,
                         filectxfn=filectxfn,
                         user=user,
                         date=date,
                         extra=extra,
                         editor=editor)
    return repo.commitctx(new)

class pick(histeditaction):
    def run(self):
        rulectx = self.repo[self.node]
        if rulectx.parents()[0].node() == self.state.parentctxnode:
            self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
            return rulectx, []

        return super(pick, self).run()

class edit(histeditaction):
    def run(self):
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode)
        applychanges(repo.ui, repo, rulectx, {})
        raise error.InterventionRequired(
            _('Make changes as needed, you may commit or record as needed '
              'now.\nWhen you are finished, run hg histedit --continue to '
              'resume.'))

    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')

class fold(histeditaction):
    def continuedirty(self):
        repo = self.repo
        rulectx = repo[self.node]

        commit = commitfuncfor(repo, rulectx)
        commit(text='fold-temp-revision %s' % node.short(self.node),
               user=rulectx.user(), date=rulectx.date(),
               extra=rulectx.extra())

    def continueclean(self):
        repo = self.repo
        ctx = repo['.']
        rulectx = repo[self.node]
        parentctxnode = self.state.parentctxnode
        if ctx.node() == parentctxnode:
            repo.ui.warn(_('%s: empty changeset\n') %
                         node.short(self.node))
            return ctx, [(self.node, (parentctxnode,))]

        parentctx = repo[parentctxnode]
        newcommits = set(c.node() for c in repo.set('(%d::. - %d)', parentctx,
                                                    parentctx))
        if not newcommits:
            repo.ui.warn(_('%s: cannot fold - working copy is not a '
                           'descendant of previous commit %s\n') %
                         (node.short(self.node), node.short(parentctxnode)))
            return ctx, [(self.node, (ctx.node(),))]

        middlecommits = newcommits.copy()
        middlecommits.discard(ctx.node())

        return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
                               middlecommits)

    def skipprompt(self):
        return False

    def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
        parent = ctx.parents()[0].node()
        hg.update(repo, parent)
        ### prepare new commit data
        commitopts = {}
        commitopts['user'] = ctx.user()
        # commit message
        if self.skipprompt():
            newmessage = ctx.description()
        else:
            newmessage = '\n***\n'.join(
                [ctx.description()] +
                [repo[r].description() for r in internalchanges] +
                [oldctx.description()]) + '\n'
        commitopts['message'] = newmessage
        # date
        commitopts['date'] = max(ctx.date(), oldctx.date())
        extra = ctx.extra().copy()
        # histedit_source
        # note: ctx is likely a temporary commit but that's the best we can do
        # here. This is sufficient to solve issue3681 anyway.
        extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
        commitopts['extra'] = extra
        phasebackup = repo.ui.backupconfig('phases', 'new-commit')
        try:
            phasemin = max(ctx.phase(), oldctx.phase())
            repo.ui.setconfig('phases', 'new-commit', phasemin, 'histedit')
            n = collapse(repo, ctx, repo[newnode], commitopts,
                         skipprompt=self.skipprompt())
        finally:
            repo.ui.restoreconfig(phasebackup)
        if n is None:
            return ctx, []
        hg.update(repo, n)
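        # record how the folded changesets (and the temporary commits created
        # along the way) map onto the new collapsed commit, so the final
        # cleanup can strip them or create obsolescence markers for them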
        replacements = [(oldctx.node(), (newnode,)),
                        (ctx.node(), (n,)),
                        (newnode, (n,)),
                       ]
        for ich in internalchanges:
            replacements.append((ich, (n,)))
        return repo[n], replacements

class rollup(fold):
    def skipprompt(self):
        return True

class drop(histeditaction):
    def run(self):
        parentctx = self.repo[self.state.parentctxnode]
        return parentctx, [(self.node, tuple())]

class message(histeditaction):
    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')

def findoutgoing(ui, repo, remote=None, force=False, opts={}):
    """utility function to find the first outgoing changeset

    Used by initialisation code"""
    dest = ui.expandpath(remote or 'default-push', remote or 'default')
    dest, revs = hg.parseurl(dest, None)[:2]
    ui.status(_('comparing with %s\n') % util.hidepassword(dest))

    revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(rev) for rev in revs]

    outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
    if not outgoing.missing:
        raise util.Abort(_('no outgoing ancestors'))
    roots = list(repo.revs("roots(%ln)", outgoing.missing))
    if 1 < len(roots):
        msg = _('there are ambiguous outgoing revisions')
        hint = _('see "hg help histedit" for more detail')
        raise util.Abort(msg, hint=hint)
    return repo.lookup(roots[0])

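# map each rule verb (and its one-letter abbreviation) from the plan file to
# the histeditaction subclass implementing it; the main loop looks the verb
# up here and hands the rest of the rule line to fromrule()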
actiontable = {'p': pick,
               'pick': pick,
               'e': edit,
               'edit': edit,
               'f': fold,
               'fold': fold,
               'r': rollup,
               'roll': rollup,
               'd': drop,
               'drop': drop,
               'm': message,
               'mess': message,
               }

@command('histedit',
    [('', 'commands', '',
      _('read history edits from the specified file'), _('FILE')),
     ('c', 'continue', False, _('continue an edit already in progress')),
     ('', 'edit-plan', False, _('edit remaining actions list')),
     ('k', 'keep', False,
      _("don't strip old nodes after edit is complete")),
     ('', 'abort', False, _('abort an edit in progress')),
     ('o', 'outgoing', False, _('changesets not found in destination')),
     ('f', 'force', False,
      _('force outgoing even for unrelated repositories')),
     ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
    _("ANCESTOR | --outgoing [URL]"))
def histedit(ui, repo, *freeargs, **opts):
    """interactively edit changeset history

    This command edits changesets between ANCESTOR and the parent of
    the working directory.

    With --outgoing, this edits changesets not found in the
    destination repository. If URL of the destination is omitted, the
    'default-push' (or 'default') path will be used.

    For safety, this command also aborts if there are ambiguous
    outgoing revisions which may confuse users: for example, if there
    are multiple branches containing outgoing revisions.

    Use "min(outgoing() and ::.)" or a similar revset specification
    instead of --outgoing to specify the edit target revision exactly
    in such an ambiguous situation. See :hg:`help revsets` for details
    about selecting revisions.

    Returns 0 on success, 1 if user intervention is required (not only
    for an intentional "edit" command, but also for resolving unexpected
    conflicts).
    """
    state = histeditstate(repo)
    try:
        state.wlock = repo.wlock()
        state.lock = repo.lock()
        _histedit(ui, repo, state, *freeargs, **opts)
    finally:
        release(state.lock, state.wlock)

def _histedit(ui, repo, state, *freeargs, **opts):
    # TODO only abort if we try and histedit mq patches, not just
    # blanket if mq patches are applied somewhere
    mq = getattr(repo, 'mq', None)
    if mq and mq.applied:
        raise util.Abort(_('source has mq patches applied'))

    # basic argument incompatibility processing
    outg = opts.get('outgoing')
    cont = opts.get('continue')
    editplan = opts.get('edit_plan')
    abort = opts.get('abort')
    force = opts.get('force')
    rules = opts.get('commands', '')
    revs = opts.get('rev', [])
    goal = 'new' # the goal of this invocation: one of new, continue, abort
    if force and not outg:
        raise util.Abort(_('--force only allowed with --outgoing'))
    if cont:
        if any((outg, abort, revs, freeargs, rules, editplan)):
            raise util.Abort(_('no arguments allowed with --continue'))
        goal = 'continue'
    elif abort:
        if any((outg, revs, freeargs, rules, editplan)):
            raise util.Abort(_('no arguments allowed with --abort'))
        goal = 'abort'
    elif editplan:
        if any((outg, revs, freeargs)):
            raise util.Abort(_('only --commands argument allowed with '
                               '--edit-plan'))
        goal = 'edit-plan'
    else:
        if os.path.exists(os.path.join(repo.path, 'histedit-state')):
            raise util.Abort(_('history edit already in progress, try '
                               '--continue or --abort'))
        if outg:
            if revs:
                raise util.Abort(_('no revisions allowed with --outgoing'))
            if len(freeargs) > 1:
                raise util.Abort(
                    _('only one repo argument allowed with --outgoing'))
        else:
            revs.extend(freeargs)
            if len(revs) == 0:
                histeditdefault = ui.config('histedit', 'defaultrev')
                if histeditdefault:
                    revs.append(histeditdefault)
            if len(revs) != 1:
                raise util.Abort(
                    _('histedit requires exactly one ancestor revision'))


    replacements = []
    state.keep = opts.get('keep', False)

    # rebuild state
    if goal == 'continue':
        state.read()
        state = bootstrapcontinue(ui, state, opts)
    elif goal == 'edit-plan':
        state.read()
        if not rules:
            comment = editcomment % (node.short(state.parentctxnode),
                                     node.short(state.topmost))
            rules = ruleeditor(repo, ui, state.rules, comment)
        else:
            if rules == '-':
                f = sys.stdin
            else:
                f = open(rules)
            rules = f.read()
            f.close()
        rules = [l for l in (r.strip() for r in rules.splitlines())
                 if l and not l.startswith('#')]
        rules = verifyrules(rules, repo, [repo[c] for [_a, c] in state.rules])
        state.rules = rules
        state.write()
        return
    elif goal == 'abort':
        state.read()
        mapping, tmpnodes, leafs, _ntm = processreplacement(state)
        ui.debug('restore wc to old parent %s\n' % node.short(state.topmost))

        # Recover our old commits if necessary
        if not state.topmost in repo and state.backupfile:
            backupfile = repo.join(state.backupfile)
            f = hg.openpath(ui, backupfile)
            gen = exchange.readbundle(ui, f, backupfile)
            changegroup.addchangegroup(repo, gen, 'histedit',
                                       'bundle:' + backupfile)
            os.remove(backupfile)

        # check whether we should update away
        parentnodes = [c.node() for c in repo[None].parents()]
        for n in leafs | set([state.parentctxnode]):
            if n in parentnodes:
                hg.clean(repo, state.topmost)
                break
        else:
            pass
        cleanupnode(ui, repo, 'created', tmpnodes)
        cleanupnode(ui, repo, 'temp', leafs)
        state.clear()
        return
    else:
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)

        topmost, empty = repo.dirstate.parents()
        if outg:
            if freeargs:
                remote = freeargs[0]
            else:
                remote = None
            root = findoutgoing(ui, repo, remote, force, opts)
        else:
            rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
            if len(rr) != 1:
                raise util.Abort(_('The specified revisions must have '
                                   'exactly one common root'))
            root = rr[0].node()

        revs = between(repo, root, topmost, state.keep)
        if not revs:
            raise util.Abort(_('%s is not an ancestor of working directory') %
                             node.short(root))

        ctxs = [repo[r] for r in revs]
        if not rules:
            comment = editcomment % (node.short(root), node.short(topmost))
            rules = ruleeditor(repo, ui, [['pick', c] for c in ctxs], comment)
        else:
            if rules == '-':
                f = sys.stdin
            else:
                f = open(rules)
            rules = f.read()
            f.close()
        rules = [l for l in (r.strip() for r in rules.splitlines())
                 if l and not l.startswith('#')]
        rules = verifyrules(rules, repo, ctxs)

        parentctxnode = repo[root].parents()[0].node()

        state.parentctxnode = parentctxnode
        state.rules = rules
        state.topmost = topmost
        state.replacements = replacements

        # Create a backup so we can always abort completely.
        backupfile = None
        if not obsolete.isenabled(repo, obsolete.createmarkersopt):
            backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
                                        'histedit')
        state.backupfile = backupfile

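    # main loop: pop one rule at a time, persisting the state before running
    # it so that --continue and --abort can pick up where we left off; each
    # action returns the new parent context plus (old node, new nodes)
    # replacement records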
851 while state.rules:
851 while state.rules:
852 state.write()
852 state.write()
853 action, ha = state.rules.pop(0)
853 action, ha = state.rules.pop(0)
854 ui.debug('histedit: processing %s %s\n' % (action, ha[:12]))
854 ui.debug('histedit: processing %s %s\n' % (action, ha[:12]))
855 actobj = actiontable[action].fromrule(state, ha)
855 actobj = actiontable[action].fromrule(state, ha)
856 parentctx, replacement_ = actobj.run()
856 parentctx, replacement_ = actobj.run()
857 state.parentctxnode = parentctx.node()
857 state.parentctxnode = parentctx.node()
858 state.replacements.extend(replacement_)
858 state.replacements.extend(replacement_)
859 state.write()
859 state.write()
860
860
861 hg.update(repo, state.parentctxnode)
861 hg.update(repo, state.parentctxnode)
862
862
863 mapping, tmpnodes, created, ntm = processreplacement(state)
863 mapping, tmpnodes, created, ntm = processreplacement(state)
864 if mapping:
864 if mapping:
865 for prec, succs in mapping.iteritems():
865 for prec, succs in mapping.iteritems():
866 if not succs:
866 if not succs:
867 ui.debug('histedit: %s is dropped\n' % node.short(prec))
867 ui.debug('histedit: %s is dropped\n' % node.short(prec))
868 else:
868 else:
869 ui.debug('histedit: %s is replaced by %s\n' % (
869 ui.debug('histedit: %s is replaced by %s\n' % (
870 node.short(prec), node.short(succs[0])))
870 node.short(prec), node.short(succs[0])))
871 if len(succs) > 1:
871 if len(succs) > 1:
872 m = 'histedit: %s'
872 m = 'histedit: %s'
873 for n in succs[1:]:
873 for n in succs[1:]:
874 ui.debug(m % node.short(n))
874 ui.debug(m % node.short(n))
875
875
876 if not state.keep:
876 if not state.keep:
877 if mapping:
877 if mapping:
878 movebookmarks(ui, repo, mapping, state.topmost, ntm)
878 movebookmarks(ui, repo, mapping, state.topmost, ntm)
879 # TODO update mq state
879 # TODO update mq state
880 if obsolete.isenabled(repo, obsolete.createmarkersopt):
880 if obsolete.isenabled(repo, obsolete.createmarkersopt):
881 markers = []
881 markers = []
882 # sort by revision number because it sounds "right"
882 # sort by revision number because it sounds "right"
883 for prec in sorted(mapping, key=repo.changelog.rev):
883 for prec in sorted(mapping, key=repo.changelog.rev):
884 succs = mapping[prec]
884 succs = mapping[prec]
885 markers.append((repo[prec],
885 markers.append((repo[prec],
886 tuple(repo[s] for s in succs)))
886 tuple(repo[s] for s in succs)))
887 if markers:
887 if markers:
888 obsolete.createmarkers(repo, markers)
888 obsolete.createmarkers(repo, markers)
889 else:
889 else:
890 cleanupnode(ui, repo, 'replaced', mapping)
890 cleanupnode(ui, repo, 'replaced', mapping)
891
891
892 cleanupnode(ui, repo, 'temp', tmpnodes)
892 cleanupnode(ui, repo, 'temp', tmpnodes)
893 state.clear()
893 state.clear()
894 if os.path.exists(repo.sjoin('undo')):
894 if os.path.exists(repo.sjoin('undo')):
895 os.unlink(repo.sjoin('undo'))
895 os.unlink(repo.sjoin('undo'))
896
896
897 def bootstrapcontinue(ui, state, opts):
897 def bootstrapcontinue(ui, state, opts):
898 repo = state.repo
898 repo = state.repo
899 if state.rules:
899 if state.rules:
900 action, currentnode = state.rules.pop(0)
900 action, currentnode = state.rules.pop(0)
901
901
902 actobj = actiontable[action].fromrule(state, currentnode)
902 actobj = actiontable[action].fromrule(state, currentnode)
903
903
904 s = repo.status()
904 s = repo.status()
905 if s.modified or s.added or s.removed or s.deleted:
905 if s.modified or s.added or s.removed or s.deleted:
906 actobj.continuedirty()
906 actobj.continuedirty()
907 s = repo.status()
907 s = repo.status()
908 if s.modified or s.added or s.removed or s.deleted:
908 if s.modified or s.added or s.removed or s.deleted:
909 raise util.Abort(_("working copy still dirty"))
909 raise util.Abort(_("working copy still dirty"))
910
910
911 parentctx, replacements = actobj.continueclean()
911 parentctx, replacements = actobj.continueclean()
912
912
913 state.parentctxnode = parentctx.node()
913 state.parentctxnode = parentctx.node()
914 state.replacements.extend(replacements)
914 state.replacements.extend(replacements)
915
915
916 return state
916 return state
917
917
918 def between(repo, old, new, keep):
918 def between(repo, old, new, keep):
919 """select and validate the set of revision to edit
919 """select and validate the set of revision to edit
920
920
921 When keep is false, the specified set can't have children."""
921 When keep is false, the specified set can't have children."""
922 ctxs = list(repo.set('%n::%n', old, new))
922 ctxs = list(repo.set('%n::%n', old, new))
923 if ctxs and not keep:
923 if ctxs and not keep:
924 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
924 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
925 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
925 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
926 raise util.Abort(_('cannot edit history that would orphan nodes'))
926 raise util.Abort(_('cannot edit history that would orphan nodes'))
927 if repo.revs('(%ld) and merge()', ctxs):
927 if repo.revs('(%ld) and merge()', ctxs):
928 raise util.Abort(_('cannot edit history that contains merges'))
928 raise util.Abort(_('cannot edit history that contains merges'))
929 root = ctxs[0] # list is already sorted by repo.set
929 root = ctxs[0] # list is already sorted by repo.set
930 if not root.mutable():
930 if not root.mutable():
931 raise util.Abort(_('cannot edit immutable changeset: %s') % root)
931 raise util.Abort(_('cannot edit public changeset: %s') % root)
932 return [c.node() for c in ctxs]
932 return [c.node() for c in ctxs]
933
933
934 def makedesc(repo, action, rev):
934 def makedesc(repo, action, rev):
935 """build a initial action line for a ctx
935 """build a initial action line for a ctx
936
936
937 line are in the form:
937 line are in the form:
938
938
939 <action> <hash> <rev> <summary>
939 <action> <hash> <rev> <summary>
940 """
940 """
941 ctx = repo[rev]
941 ctx = repo[rev]
942 summary = ''
942 summary = ''
943 if ctx.description():
943 if ctx.description():
944 summary = ctx.description().splitlines()[0]
944 summary = ctx.description().splitlines()[0]
945 line = '%s %s %d %s' % (action, ctx, ctx.rev(), summary)
945 line = '%s %s %d %s' % (action, ctx, ctx.rev(), summary)
946 # trim to 80 columns so it's not stupidly wide in my editor
946 # trim to 80 columns so it's not stupidly wide in my editor
947 maxlen = repo.ui.configint('histedit', 'linelen', default=80)
947 maxlen = repo.ui.configint('histedit', 'linelen', default=80)
948 maxlen = max(maxlen, 22) # avoid truncating hash
948 maxlen = max(maxlen, 22) # avoid truncating hash
949 return util.ellipsis(line, maxlen)
949 return util.ellipsis(line, maxlen)
950
950
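A minimal standalone sketch (not part of the patch; makedesc_sketch is a hypothetical name) of the action-line format documented above: "<action> <hash> <rev> <summary>", truncated to the histedit.linelen width while keeping at least 22 columns so the hash stays readable. Plain strings stand in for a real changectx, and util.ellipsis is approximated with a simple cut:

def makedesc_sketch(action, shorthash, rev, description, maxlen=80):
    # the first line of the description becomes the summary
    summary = description.splitlines()[0] if description else ''
    line = '%s %s %d %s' % (action, shorthash, rev, summary)
    maxlen = max(maxlen, 22)              # never truncate into the hash
    if len(line) <= maxlen:
        return line
    return line[:maxlen - 3] + '...'      # crude stand-in for util.ellipsis

# Example: makedesc_sketch('pick', '0123456789ab', 4, 'Add feature X\nDetails')
# returns 'pick 0123456789ab 4 Add feature X'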
951 def ruleeditor(repo, ui, rules, editcomment=""):
951 def ruleeditor(repo, ui, rules, editcomment=""):
952 """open an editor to edit rules
952 """open an editor to edit rules
953
953
954 rules are in the format [ [act, ctx], ...] like in state.rules
954 rules are in the format [ [act, ctx], ...] like in state.rules
955 """
955 """
956 rules = '\n'.join([makedesc(repo, act, rev) for [act, rev] in rules])
956 rules = '\n'.join([makedesc(repo, act, rev) for [act, rev] in rules])
957 rules += '\n\n'
957 rules += '\n\n'
958 rules += editcomment
958 rules += editcomment
959 rules = ui.edit(rules, ui.username())
959 rules = ui.edit(rules, ui.username())
960
960
961 # Save edit rules in .hg/histedit-last-edit.txt in case
961 # Save edit rules in .hg/histedit-last-edit.txt in case
962 # the user needs to ask for help after something
962 # the user needs to ask for help after something
963 # surprising happens.
963 # surprising happens.
964 f = open(repo.join('histedit-last-edit.txt'), 'w')
964 f = open(repo.join('histedit-last-edit.txt'), 'w')
965 f.write(rules)
965 f.write(rules)
966 f.close()
966 f.close()
967
967
968 return rules
968 return rules
969
969
970 def verifyrules(rules, repo, ctxs):
970 def verifyrules(rules, repo, ctxs):
971 """Verify that there exists exactly one edit rule per given changeset.
971 """Verify that there exists exactly one edit rule per given changeset.
972
972
973 Will abort if there are too many or too few rules, a malformed rule,
973 Will abort if there are too many or too few rules, a malformed rule,
974 or a rule on a changeset outside of the user-given range.
974 or a rule on a changeset outside of the user-given range.
975 """
975 """
976 parsed = []
976 parsed = []
977 expected = set(c.hex() for c in ctxs)
977 expected = set(c.hex() for c in ctxs)
978 seen = set()
978 seen = set()
979 for r in rules:
979 for r in rules:
980 if ' ' not in r:
980 if ' ' not in r:
981 raise util.Abort(_('malformed line "%s"') % r)
981 raise util.Abort(_('malformed line "%s"') % r)
982 action, rest = r.split(' ', 1)
982 action, rest = r.split(' ', 1)
983 ha = rest.strip().split(' ', 1)[0]
983 ha = rest.strip().split(' ', 1)[0]
984 try:
984 try:
985 ha = repo[ha].hex()
985 ha = repo[ha].hex()
986 except error.RepoError:
986 except error.RepoError:
987 raise util.Abort(_('unknown changeset %s listed') % ha[:12])
987 raise util.Abort(_('unknown changeset %s listed') % ha[:12])
988 if ha not in expected:
988 if ha not in expected:
989 raise util.Abort(
989 raise util.Abort(
990 _('may not use changesets other than the ones listed'))
990 _('may not use changesets other than the ones listed'))
991 if ha in seen:
991 if ha in seen:
992 raise util.Abort(_('duplicated command for changeset %s') %
992 raise util.Abort(_('duplicated command for changeset %s') %
993 ha[:12])
993 ha[:12])
994 seen.add(ha)
994 seen.add(ha)
995 if action not in actiontable:
995 if action not in actiontable:
996 raise util.Abort(_('unknown action "%s"') % action)
996 raise util.Abort(_('unknown action "%s"') % action)
997 parsed.append([action, ha])
997 parsed.append([action, ha])
998 missing = sorted(expected - seen) # sort to stabilize output
998 missing = sorted(expected - seen) # sort to stabilize output
999 if missing:
999 if missing:
1000 raise util.Abort(_('missing rules for changeset %s') %
1000 raise util.Abort(_('missing rules for changeset %s') %
1001 missing[0][:12],
1001 missing[0][:12],
1002 hint=_('do you want to use the drop action?'))
1002 hint=_('do you want to use the drop action?'))
1003 return parsed
1003 return parsed
1004
1004
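The checks above (exactly one well-formed rule per listed changeset, no unknown or duplicated hashes, known actions only) can be pictured with this standalone sketch. verifyrules_sketch is a hypothetical name, the action names are assumed defaults for illustration, full hashes are compared directly (the real code resolves abbreviated hashes through the repo first), and ValueError stands in for util.Abort:

def verifyrules_sketch(rules, expected,
                       actions=('pick', 'edit', 'fold', 'roll', 'drop', 'mess')):
    expected = set(expected)              # hashes the user may reference
    seen = set()
    parsed = []
    for r in rules:
        if ' ' not in r:
            raise ValueError('malformed line %r' % r)
        action, rest = r.split(' ', 1)
        ha = rest.strip().split(' ', 1)[0]
        if ha not in expected:
            raise ValueError('changeset %s is outside the edited range' % ha)
        if ha in seen:
            raise ValueError('duplicated command for changeset %s' % ha)
        if action not in actions:
            raise ValueError('unknown action %r' % action)
        seen.add(ha)
        parsed.append((action, ha))
    missing = sorted(expected - seen)     # sort to stabilize the error message
    if missing:
        raise ValueError('missing rule for changeset %s' % missing[0])
    return parsed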
1005 def processreplacement(state):
1005 def processreplacement(state):
1006 """process the list of replacements to return
1006 """process the list of replacements to return
1007
1007
1008 1) the final mapping between original and created nodes
1008 1) the final mapping between original and created nodes
1009 2) the list of temporary nodes created by histedit
1009 2) the list of temporary nodes created by histedit
1010 3) the list of new commits created by histedit"""
1010 3) the list of new commits created by histedit"""
1011 replacements = state.replacements
1011 replacements = state.replacements
1012 allsuccs = set()
1012 allsuccs = set()
1013 replaced = set()
1013 replaced = set()
1014 fullmapping = {}
1014 fullmapping = {}
1015 # initialise the basic sets
1015 # initialise the basic sets
1016 # fullmapping records all operations recorded in replacements
1016 # fullmapping records all operations recorded in replacements
1017 for rep in replacements:
1017 for rep in replacements:
1018 allsuccs.update(rep[1])
1018 allsuccs.update(rep[1])
1019 replaced.add(rep[0])
1019 replaced.add(rep[0])
1020 fullmapping.setdefault(rep[0], set()).update(rep[1])
1020 fullmapping.setdefault(rep[0], set()).update(rep[1])
1021 new = allsuccs - replaced
1021 new = allsuccs - replaced
1022 tmpnodes = allsuccs & replaced
1022 tmpnodes = allsuccs & replaced
1023 # Reduce fullmapping into a direct relation between original nodes
1023 # Reduce fullmapping into a direct relation between original nodes
1024 # and the final nodes created during history editing.
1024 # and the final nodes created during history editing.
1025 # Dropped changesets are replaced by an empty list.
1025 # Dropped changesets are replaced by an empty list.
1026 toproceed = set(fullmapping)
1026 toproceed = set(fullmapping)
1027 final = {}
1027 final = {}
1028 while toproceed:
1028 while toproceed:
1029 for x in list(toproceed):
1029 for x in list(toproceed):
1030 succs = fullmapping[x]
1030 succs = fullmapping[x]
1031 for s in list(succs):
1031 for s in list(succs):
1032 if s in toproceed:
1032 if s in toproceed:
1033 # non final node with unknown closure
1033 # non final node with unknown closure
1034 # We can't process this now
1034 # We can't process this now
1035 break
1035 break
1036 elif s in final:
1036 elif s in final:
1037 # non final node, replace with closure
1037 # non final node, replace with closure
1038 succs.remove(s)
1038 succs.remove(s)
1039 succs.update(final[s])
1039 succs.update(final[s])
1040 else:
1040 else:
1041 final[x] = succs
1041 final[x] = succs
1042 toproceed.remove(x)
1042 toproceed.remove(x)
1043 # remove tmpnodes from final mapping
1043 # remove tmpnodes from final mapping
1044 for n in tmpnodes:
1044 for n in tmpnodes:
1045 del final[n]
1045 del final[n]
1046 # we expect all changes involved in final to exist in the repo
1046 # we expect all changes involved in final to exist in the repo
1047 # turn `final` into list (topologically sorted)
1047 # turn `final` into list (topologically sorted)
1048 nm = state.repo.changelog.nodemap
1048 nm = state.repo.changelog.nodemap
1049 for prec, succs in final.items():
1049 for prec, succs in final.items():
1050 final[prec] = sorted(succs, key=nm.get)
1050 final[prec] = sorted(succs, key=nm.get)
1051
1051
1052 # compute the topmost element (necessary for bookmarks)
1052 # compute the topmost element (necessary for bookmarks)
1053 if new:
1053 if new:
1054 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1054 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1055 elif not final:
1055 elif not final:
1056 # Nothing was rewritten at all. We won't need `newtopmost`:
1056 # Nothing was rewritten at all. We won't need `newtopmost`:
1057 # it is the same as `oldtopmost`, and the caller of `processreplacement` knows it.
1057 # it is the same as `oldtopmost`, and the caller of `processreplacement` knows it.
1058 newtopmost = None
1058 newtopmost = None
1059 else:
1059 else:
1060 # everybody died. The newtopmost is the parent of the root.
1060 # everybody died. The newtopmost is the parent of the root.
1061 r = state.repo.changelog.rev
1061 r = state.repo.changelog.rev
1062 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1062 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1063
1063
1064 return final, tmpnodes, new, newtopmost
1064 return final, tmpnodes, new, newtopmost
1065
1065
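At its core, processreplacement collapses chained rewrites (a replaced by t, t replaced by b) into a direct original-to-final mapping, maps dropped changesets to an empty set, and discards the intermediate (temporary) nodes. A standalone sketch over plain hashable values, assuming the replacement chain is acyclic; reduce_replacements is a hypothetical name:

def reduce_replacements(replacements):
    # replacements: list of (old, [successors]) pairs, possibly chained
    fullmapping = {}
    allsuccs, replaced = set(), set()
    for old, succs in replacements:
        allsuccs.update(succs)
        replaced.add(old)
        fullmapping.setdefault(old, set()).update(succs)
    tmpnodes = allsuccs & replaced        # rewritten nodes that were themselves rewritten
    final = {}
    toproceed = set(fullmapping)
    while toproceed:
        for x in list(toproceed):
            succs = fullmapping[x]
            if any(s in toproceed for s in succs):
                continue                  # depends on a node not resolved yet
            resolved = set()
            for s in succs:
                resolved.update(final.get(s, set([s])))   # follow resolved chains
            final[x] = resolved
            toproceed.remove(x)
    for n in tmpnodes:
        del final[n]                      # keep only original -> final entries
    return final, tmpnodes

# Example: reduce_replacements([('a', ['t']), ('t', ['b'])]) maps 'a' directly
# to the set {'b'} and reports 't' as a temporary node.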
1066 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1066 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1067 """Move bookmark from old to newly created node"""
1067 """Move bookmark from old to newly created node"""
1068 if not mapping:
1068 if not mapping:
1069 # if nothing got rewritten there is no purpose for this function
1069 # if nothing got rewritten there is no purpose for this function
1070 return
1070 return
1071 moves = []
1071 moves = []
1072 for bk, old in sorted(repo._bookmarks.iteritems()):
1072 for bk, old in sorted(repo._bookmarks.iteritems()):
1073 if old == oldtopmost:
1073 if old == oldtopmost:
1074 # special case to ensure the bookmark stays on tip.
1074 # special case to ensure the bookmark stays on tip.
1075 #
1075 #
1076 # This is arguably a feature and we may only want that for the
1076 # This is arguably a feature and we may only want that for the
1077 # active bookmark. But the behavior is kept compatible with the old
1077 # active bookmark. But the behavior is kept compatible with the old
1078 # version for now.
1078 # version for now.
1079 moves.append((bk, newtopmost))
1079 moves.append((bk, newtopmost))
1080 continue
1080 continue
1081 base = old
1081 base = old
1082 new = mapping.get(base, None)
1082 new = mapping.get(base, None)
1083 if new is None:
1083 if new is None:
1084 continue
1084 continue
1085 while not new:
1085 while not new:
1086 # base is killed, trying with parent
1086 # base is killed, trying with parent
1087 base = repo[base].p1().node()
1087 base = repo[base].p1().node()
1088 new = mapping.get(base, (base,))
1088 new = mapping.get(base, (base,))
1089 # nothing to move
1089 # nothing to move
1090 moves.append((bk, new[-1]))
1090 moves.append((bk, new[-1]))
1091 if moves:
1091 if moves:
1092 marks = repo._bookmarks
1092 marks = repo._bookmarks
1093 for mark, new in moves:
1093 for mark, new in moves:
1094 old = marks[mark]
1094 old = marks[mark]
1095 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1095 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1096 % (mark, node.short(old), node.short(new)))
1096 % (mark, node.short(old), node.short(new)))
1097 marks[mark] = new
1097 marks[mark] = new
1098 marks.write()
1098 marks.write()
1099
1099
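The bookmark rule implemented above: a bookmark on the old topmost changeset follows the new topmost one, a bookmark on an untouched changeset stays put, and a bookmark on a dropped changeset falls back to the replacement of its nearest surviving ancestor. A standalone sketch using plain dicts; newtarget and parentof are hypothetical names, and parentof is assumed to cover every ancestor the fallback may walk:

def newtarget(old, mapping, parentof, oldtopmost, newtopmost):
    # mapping:  old node -> list of successors ([] means the node was dropped)
    # parentof: node -> its first parent
    if old == oldtopmost:
        return newtopmost                 # keep bookmarks on the (new) tip
    new = mapping.get(old)
    if new is None:
        return old                        # untouched changeset: nothing to move
    base = old
    while not new:
        base = parentof[base]             # dropped: retry with the parent...
        new = mapping.get(base, [base])   # ...which may itself be untouched
    return new[-1]                        # last successor, e.g. a fold target

# Example: newtarget('b', {'b': []}, {'b': 'a'}, 'tip_old', 'tip_new')
# returns 'a' because 'b' was dropped and its parent 'a' was left alone.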
1100 def cleanupnode(ui, repo, name, nodes):
1100 def cleanupnode(ui, repo, name, nodes):
1101 """strip a group of nodes from the repository
1101 """strip a group of nodes from the repository
1102
1102
1103 The set of nodes to strip may contain unknown nodes."""
1103 The set of nodes to strip may contain unknown nodes."""
1104 ui.debug('should strip %s nodes %s\n' %
1104 ui.debug('should strip %s nodes %s\n' %
1105 (name, ', '.join([node.short(n) for n in nodes])))
1105 (name, ', '.join([node.short(n) for n in nodes])))
1106 lock = None
1106 lock = None
1107 try:
1107 try:
1108 lock = repo.lock()
1108 lock = repo.lock()
1109 # Find all nodes that need to be stripped
1109 # Find all nodes that need to be stripped
1110 # (we use %lr instead of %ln to silently ignore unknown items)
1110 # (we use %lr instead of %ln to silently ignore unknown items)
1111 nm = repo.changelog.nodemap
1111 nm = repo.changelog.nodemap
1112 nodes = sorted(n for n in nodes if n in nm)
1112 nodes = sorted(n for n in nodes if n in nm)
1113 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1113 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1114 for c in roots:
1114 for c in roots:
1115 # We should process nodes in reverse order to strip tipmost first,
1115 # We should process nodes in reverse order to strip tipmost first,
1116 # but this triggers a bug in the changegroup hook.
1116 # but this triggers a bug in the changegroup hook.
1117 # Doing so would reduce bundle overhead.
1117 # Doing so would reduce bundle overhead.
1118 repair.strip(ui, repo, c)
1118 repair.strip(ui, repo, c)
1119 finally:
1119 finally:
1120 release(lock)
1120 release(lock)
1121
1121
1122 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1122 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1123 if isinstance(nodelist, str):
1123 if isinstance(nodelist, str):
1124 nodelist = [nodelist]
1124 nodelist = [nodelist]
1125 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1125 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1126 state = histeditstate(repo)
1126 state = histeditstate(repo)
1127 state.read()
1127 state.read()
1128 histedit_nodes = set([repo[rulehash].node() for (action, rulehash)
1128 histedit_nodes = set([repo[rulehash].node() for (action, rulehash)
1129 in state.rules if rulehash in repo])
1129 in state.rules if rulehash in repo])
1130 strip_nodes = set([repo[n].node() for n in nodelist])
1130 strip_nodes = set([repo[n].node() for n in nodelist])
1131 common_nodes = histedit_nodes & strip_nodes
1131 common_nodes = histedit_nodes & strip_nodes
1132 if common_nodes:
1132 if common_nodes:
1133 raise util.Abort(_("histedit in progress, can't strip %s")
1133 raise util.Abort(_("histedit in progress, can't strip %s")
1134 % ', '.join(node.short(x) for x in common_nodes))
1134 % ', '.join(node.short(x) for x in common_nodes))
1135 return orig(ui, repo, nodelist, *args, **kwargs)
1135 return orig(ui, repo, nodelist, *args, **kwargs)
1136
1136
1137 extensions.wrapfunction(repair, 'strip', stripwrapper)
1137 extensions.wrapfunction(repair, 'strip', stripwrapper)
1138
1138
1139 def summaryhook(ui, repo):
1139 def summaryhook(ui, repo):
1140 if not os.path.exists(repo.join('histedit-state')):
1140 if not os.path.exists(repo.join('histedit-state')):
1141 return
1141 return
1142 state = histeditstate(repo)
1142 state = histeditstate(repo)
1143 state.read()
1143 state.read()
1144 if state.rules:
1144 if state.rules:
1145 # i18n: column positioning for "hg summary"
1145 # i18n: column positioning for "hg summary"
1146 ui.write(_('hist: %s (histedit --continue)\n') %
1146 ui.write(_('hist: %s (histedit --continue)\n') %
1147 (ui.label(_('%d remaining'), 'histedit.remaining') %
1147 (ui.label(_('%d remaining'), 'histedit.remaining') %
1148 len(state.rules)))
1148 len(state.rules)))
1149
1149
1150 def extsetup(ui):
1150 def extsetup(ui):
1151 cmdutil.summaryhooks.add('histedit', summaryhook)
1151 cmdutil.summaryhooks.add('histedit', summaryhook)
1152 cmdutil.unfinishedstates.append(
1152 cmdutil.unfinishedstates.append(
1153 ['histedit-state', False, True, _('histedit in progress'),
1153 ['histedit-state', False, True, _('histedit in progress'),
1154 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1154 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
@@ -1,3583 +1,3583
1 # mq.py - patch queues for mercurial
1 # mq.py - patch queues for mercurial
2 #
2 #
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''manage a stack of patches
8 '''manage a stack of patches
9
9
10 This extension lets you work with a stack of patches in a Mercurial
10 This extension lets you work with a stack of patches in a Mercurial
11 repository. It manages two stacks of patches - all known patches, and
11 repository. It manages two stacks of patches - all known patches, and
12 applied patches (a subset of known patches).
12 applied patches (a subset of known patches).
13
13
14 Known patches are represented as patch files in the .hg/patches
14 Known patches are represented as patch files in the .hg/patches
15 directory. Applied patches are both patch files and changesets.
15 directory. Applied patches are both patch files and changesets.
16
16
17 Common tasks (use :hg:`help command` for more details)::
17 Common tasks (use :hg:`help command` for more details)::
18
18
19 create new patch qnew
19 create new patch qnew
20 import existing patch qimport
20 import existing patch qimport
21
21
22 print patch series qseries
22 print patch series qseries
23 print applied patches qapplied
23 print applied patches qapplied
24
24
25 add known patch to applied stack qpush
25 add known patch to applied stack qpush
26 remove patch from applied stack qpop
26 remove patch from applied stack qpop
27 refresh contents of top applied patch qrefresh
27 refresh contents of top applied patch qrefresh
28
28
29 By default, mq will automatically use git patches when required to
29 By default, mq will automatically use git patches when required to
30 avoid losing file mode changes, copy records, binary files, or empty
30 avoid losing file mode changes, copy records, binary files, or empty
31 file creations or deletions. This behaviour can be configured with::
31 file creations or deletions. This behaviour can be configured with::
32
32
33 [mq]
33 [mq]
34 git = auto/keep/yes/no
34 git = auto/keep/yes/no
35
35
36 If set to 'keep', mq will obey the [diff] section configuration while
36 If set to 'keep', mq will obey the [diff] section configuration while
37 preserving existing git patches upon qrefresh. If set to 'yes' or
37 preserving existing git patches upon qrefresh. If set to 'yes' or
38 'no', mq will override the [diff] section and always generate git or
38 'no', mq will override the [diff] section and always generate git or
39 regular patches, possibly losing data in the second case.
39 regular patches, possibly losing data in the second case.
40
40
41 It may be desirable for mq changesets to be kept in the secret phase (see
41 It may be desirable for mq changesets to be kept in the secret phase (see
42 :hg:`help phases`), which can be enabled with the following setting::
42 :hg:`help phases`), which can be enabled with the following setting::
43
43
44 [mq]
44 [mq]
45 secret = True
45 secret = True
46
46
47 You will by default be managing a patch queue named "patches". You can
47 You will by default be managing a patch queue named "patches". You can
48 create other, independent patch queues with the :hg:`qqueue` command.
48 create other, independent patch queues with the :hg:`qqueue` command.
49
49
50 If the working directory contains uncommitted files, qpush, qpop and
50 If the working directory contains uncommitted files, qpush, qpop and
51 qgoto abort immediately. If -f/--force is used, the changes are
51 qgoto abort immediately. If -f/--force is used, the changes are
52 discarded. Setting::
52 discarded. Setting::
53
53
54 [mq]
54 [mq]
55 keepchanges = True
55 keepchanges = True
56
56
57 makes them behave as if --keep-changes were passed, and non-conflicting
57 makes them behave as if --keep-changes were passed, and non-conflicting
58 local changes will be tolerated and preserved. If incompatible options
58 local changes will be tolerated and preserved. If incompatible options
59 such as -f/--force or --exact are passed, this setting is ignored.
59 such as -f/--force or --exact are passed, this setting is ignored.
60
60
61 This extension used to provide a strip command. This command now lives
61 This extension used to provide a strip command. This command now lives
62 in the strip extension.
62 in the strip extension.
63 '''
63 '''
64
64
65 from mercurial.i18n import _
65 from mercurial.i18n import _
66 from mercurial.node import bin, hex, short, nullid, nullrev
66 from mercurial.node import bin, hex, short, nullid, nullrev
67 from mercurial.lock import release
67 from mercurial.lock import release
68 from mercurial import commands, cmdutil, hg, scmutil, util, revset
68 from mercurial import commands, cmdutil, hg, scmutil, util, revset
69 from mercurial import extensions, error, phases
69 from mercurial import extensions, error, phases
70 from mercurial import patch as patchmod
70 from mercurial import patch as patchmod
71 from mercurial import localrepo
71 from mercurial import localrepo
72 from mercurial import subrepo
72 from mercurial import subrepo
73 import os, re, errno, shutil
73 import os, re, errno, shutil
74
74
75 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
75 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
76
76
77 cmdtable = {}
77 cmdtable = {}
78 command = cmdutil.command(cmdtable)
78 command = cmdutil.command(cmdtable)
79 # Note for extension authors: ONLY specify testedwith = 'internal' for
79 # Note for extension authors: ONLY specify testedwith = 'internal' for
80 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
80 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
81 # be specifying the version(s) of Mercurial they are tested with, or
81 # be specifying the version(s) of Mercurial they are tested with, or
82 # leave the attribute unspecified.
82 # leave the attribute unspecified.
83 testedwith = 'internal'
83 testedwith = 'internal'
84
84
85 # force-load the strip extension formerly included in mq and import some utilities
85 # force-load the strip extension formerly included in mq and import some utilities
86 try:
86 try:
87 stripext = extensions.find('strip')
87 stripext = extensions.find('strip')
88 except KeyError:
88 except KeyError:
89 # note: load is lazy so we could avoid the try-except,
89 # note: load is lazy so we could avoid the try-except,
90 # but I (marmoute) prefer this explicit code.
90 # but I (marmoute) prefer this explicit code.
91 class dummyui(object):
91 class dummyui(object):
92 def debug(self, msg):
92 def debug(self, msg):
93 pass
93 pass
94 stripext = extensions.load(dummyui(), 'strip', '')
94 stripext = extensions.load(dummyui(), 'strip', '')
95
95
96 strip = stripext.strip
96 strip = stripext.strip
97 checksubstate = stripext.checksubstate
97 checksubstate = stripext.checksubstate
98 checklocalchanges = stripext.checklocalchanges
98 checklocalchanges = stripext.checklocalchanges
99
99
100
100
101 # Patch names look like unix file names.
101 # Patch names look like unix file names.
102 # They must be joinable with the queue directory and result in the patch path.
102 # They must be joinable with the queue directory and result in the patch path.
103 normname = util.normpath
103 normname = util.normpath
104
104
105 class statusentry(object):
105 class statusentry(object):
106 def __init__(self, node, name):
106 def __init__(self, node, name):
107 self.node, self.name = node, name
107 self.node, self.name = node, name
108 def __repr__(self):
108 def __repr__(self):
109 return hex(self.node) + ':' + self.name
109 return hex(self.node) + ':' + self.name
110
110
111 # The order of the headers in 'hg export' HG patches:
111 # The order of the headers in 'hg export' HG patches:
112 HGHEADERS = [
112 HGHEADERS = [
113 # '# HG changeset patch',
113 # '# HG changeset patch',
114 '# User ',
114 '# User ',
115 '# Date ',
115 '# Date ',
116 '# ',
116 '# ',
117 '# Branch ',
117 '# Branch ',
118 '# Node ID ',
118 '# Node ID ',
119 '# Parent ', # can occur twice for merges - but that is not relevant for mq
119 '# Parent ', # can occur twice for merges - but that is not relevant for mq
120 ]
120 ]
121 # The order of headers in plain 'mail style' patches:
121 # The order of headers in plain 'mail style' patches:
122 PLAINHEADERS = {
122 PLAINHEADERS = {
123 'from': 0,
123 'from': 0,
124 'date': 1,
124 'date': 1,
125 'subject': 2,
125 'subject': 2,
126 }
126 }
127
127
128 def inserthgheader(lines, header, value):
128 def inserthgheader(lines, header, value):
129 """Assuming lines contains a HG patch header, add a header line with value.
129 """Assuming lines contains a HG patch header, add a header line with value.
130 >>> try: inserthgheader([], '# Date ', 'z')
130 >>> try: inserthgheader([], '# Date ', 'z')
131 ... except ValueError, inst: print "oops"
131 ... except ValueError, inst: print "oops"
132 oops
132 oops
133 >>> inserthgheader(['# HG changeset patch'], '# Date ', 'z')
133 >>> inserthgheader(['# HG changeset patch'], '# Date ', 'z')
134 ['# HG changeset patch', '# Date z']
134 ['# HG changeset patch', '# Date z']
135 >>> inserthgheader(['# HG changeset patch', ''], '# Date ', 'z')
135 >>> inserthgheader(['# HG changeset patch', ''], '# Date ', 'z')
136 ['# HG changeset patch', '# Date z', '']
136 ['# HG changeset patch', '# Date z', '']
137 >>> inserthgheader(['# HG changeset patch', '# User y'], '# Date ', 'z')
137 >>> inserthgheader(['# HG changeset patch', '# User y'], '# Date ', 'z')
138 ['# HG changeset patch', '# User y', '# Date z']
138 ['# HG changeset patch', '# User y', '# Date z']
139 >>> inserthgheader(['# HG changeset patch', '# Date x', '# User y'],
139 >>> inserthgheader(['# HG changeset patch', '# Date x', '# User y'],
140 ... '# User ', 'z')
140 ... '# User ', 'z')
141 ['# HG changeset patch', '# Date x', '# User z']
141 ['# HG changeset patch', '# Date x', '# User z']
142 >>> inserthgheader(['# HG changeset patch', '# Date y'], '# Date ', 'z')
142 >>> inserthgheader(['# HG changeset patch', '# Date y'], '# Date ', 'z')
143 ['# HG changeset patch', '# Date z']
143 ['# HG changeset patch', '# Date z']
144 >>> inserthgheader(['# HG changeset patch', '', '# Date y'], '# Date ', 'z')
144 >>> inserthgheader(['# HG changeset patch', '', '# Date y'], '# Date ', 'z')
145 ['# HG changeset patch', '# Date z', '', '# Date y']
145 ['# HG changeset patch', '# Date z', '', '# Date y']
146 >>> inserthgheader(['# HG changeset patch', '# Parent y'], '# Date ', 'z')
146 >>> inserthgheader(['# HG changeset patch', '# Parent y'], '# Date ', 'z')
147 ['# HG changeset patch', '# Date z', '# Parent y']
147 ['# HG changeset patch', '# Date z', '# Parent y']
148 """
148 """
149 start = lines.index('# HG changeset patch') + 1
149 start = lines.index('# HG changeset patch') + 1
150 newindex = HGHEADERS.index(header)
150 newindex = HGHEADERS.index(header)
151 bestpos = len(lines)
151 bestpos = len(lines)
152 for i in range(start, len(lines)):
152 for i in range(start, len(lines)):
153 line = lines[i]
153 line = lines[i]
154 if not line.startswith('# '):
154 if not line.startswith('# '):
155 bestpos = min(bestpos, i)
155 bestpos = min(bestpos, i)
156 break
156 break
157 for lineindex, h in enumerate(HGHEADERS):
157 for lineindex, h in enumerate(HGHEADERS):
158 if line.startswith(h):
158 if line.startswith(h):
159 if lineindex == newindex:
159 if lineindex == newindex:
160 lines[i] = header + value
160 lines[i] = header + value
161 return lines
161 return lines
162 if lineindex > newindex:
162 if lineindex > newindex:
163 bestpos = min(bestpos, i)
163 bestpos = min(bestpos, i)
164 break # next line
164 break # next line
165 lines.insert(bestpos, header + value)
165 lines.insert(bestpos, header + value)
166 return lines
166 return lines
167
167
168 def insertplainheader(lines, header, value):
168 def insertplainheader(lines, header, value):
169 """For lines containing a plain patch header, add a header line with value.
169 """For lines containing a plain patch header, add a header line with value.
170 >>> insertplainheader([], 'Date', 'z')
170 >>> insertplainheader([], 'Date', 'z')
171 ['Date: z']
171 ['Date: z']
172 >>> insertplainheader([''], 'Date', 'z')
172 >>> insertplainheader([''], 'Date', 'z')
173 ['Date: z', '']
173 ['Date: z', '']
174 >>> insertplainheader(['x'], 'Date', 'z')
174 >>> insertplainheader(['x'], 'Date', 'z')
175 ['Date: z', '', 'x']
175 ['Date: z', '', 'x']
176 >>> insertplainheader(['From: y', 'x'], 'Date', 'z')
176 >>> insertplainheader(['From: y', 'x'], 'Date', 'z')
177 ['From: y', 'Date: z', '', 'x']
177 ['From: y', 'Date: z', '', 'x']
178 >>> insertplainheader([' date : x', ' from : y', ''], 'From', 'z')
178 >>> insertplainheader([' date : x', ' from : y', ''], 'From', 'z')
179 [' date : x', 'From: z', '']
179 [' date : x', 'From: z', '']
180 >>> insertplainheader(['', 'Date: y'], 'Date', 'z')
180 >>> insertplainheader(['', 'Date: y'], 'Date', 'z')
181 ['Date: z', '', 'Date: y']
181 ['Date: z', '', 'Date: y']
182 >>> insertplainheader(['foo: bar', 'DATE: z', 'x'], 'From', 'y')
182 >>> insertplainheader(['foo: bar', 'DATE: z', 'x'], 'From', 'y')
183 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
183 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
184 """
184 """
185 newprio = PLAINHEADERS[header.lower()]
185 newprio = PLAINHEADERS[header.lower()]
186 bestpos = len(lines)
186 bestpos = len(lines)
187 for i, line in enumerate(lines):
187 for i, line in enumerate(lines):
188 if ':' in line:
188 if ':' in line:
189 lheader = line.split(':', 1)[0].strip().lower()
189 lheader = line.split(':', 1)[0].strip().lower()
190 lprio = PLAINHEADERS.get(lheader, newprio + 1)
190 lprio = PLAINHEADERS.get(lheader, newprio + 1)
191 if lprio == newprio:
191 if lprio == newprio:
192 lines[i] = '%s: %s' % (header, value)
192 lines[i] = '%s: %s' % (header, value)
193 return lines
193 return lines
194 if lprio > newprio and i < bestpos:
194 if lprio > newprio and i < bestpos:
195 bestpos = i
195 bestpos = i
196 else:
196 else:
197 if line:
197 if line:
198 lines.insert(i, '')
198 lines.insert(i, '')
199 if i < bestpos:
199 if i < bestpos:
200 bestpos = i
200 bestpos = i
201 break
201 break
202 lines.insert(bestpos, '%s: %s' % (header, value))
202 lines.insert(bestpos, '%s: %s' % (header, value))
203 return lines
203 return lines
204
204
205 class patchheader(object):
205 class patchheader(object):
206 def __init__(self, pf, plainmode=False):
206 def __init__(self, pf, plainmode=False):
207 def eatdiff(lines):
207 def eatdiff(lines):
208 while lines:
208 while lines:
209 l = lines[-1]
209 l = lines[-1]
210 if (l.startswith("diff -") or
210 if (l.startswith("diff -") or
211 l.startswith("Index:") or
211 l.startswith("Index:") or
212 l.startswith("===========")):
212 l.startswith("===========")):
213 del lines[-1]
213 del lines[-1]
214 else:
214 else:
215 break
215 break
216 def eatempty(lines):
216 def eatempty(lines):
217 while lines:
217 while lines:
218 if not lines[-1].strip():
218 if not lines[-1].strip():
219 del lines[-1]
219 del lines[-1]
220 else:
220 else:
221 break
221 break
222
222
223 message = []
223 message = []
224 comments = []
224 comments = []
225 user = None
225 user = None
226 date = None
226 date = None
227 parent = None
227 parent = None
228 format = None
228 format = None
229 subject = None
229 subject = None
230 branch = None
230 branch = None
231 nodeid = None
231 nodeid = None
232 diffstart = 0
232 diffstart = 0
233
233
234 for line in file(pf):
234 for line in file(pf):
235 line = line.rstrip()
235 line = line.rstrip()
236 if (line.startswith('diff --git')
236 if (line.startswith('diff --git')
237 or (diffstart and line.startswith('+++ '))):
237 or (diffstart and line.startswith('+++ '))):
238 diffstart = 2
238 diffstart = 2
239 break
239 break
240 diffstart = 0 # reset
240 diffstart = 0 # reset
241 if line.startswith("--- "):
241 if line.startswith("--- "):
242 diffstart = 1
242 diffstart = 1
243 continue
243 continue
244 elif format == "hgpatch":
244 elif format == "hgpatch":
245 # parse values when importing the result of an hg export
245 # parse values when importing the result of an hg export
246 if line.startswith("# User "):
246 if line.startswith("# User "):
247 user = line[7:]
247 user = line[7:]
248 elif line.startswith("# Date "):
248 elif line.startswith("# Date "):
249 date = line[7:]
249 date = line[7:]
250 elif line.startswith("# Parent "):
250 elif line.startswith("# Parent "):
251 parent = line[9:].lstrip() # handle double trailing space
251 parent = line[9:].lstrip() # handle double trailing space
252 elif line.startswith("# Branch "):
252 elif line.startswith("# Branch "):
253 branch = line[9:]
253 branch = line[9:]
254 elif line.startswith("# Node ID "):
254 elif line.startswith("# Node ID "):
255 nodeid = line[10:]
255 nodeid = line[10:]
256 elif not line.startswith("# ") and line:
256 elif not line.startswith("# ") and line:
257 message.append(line)
257 message.append(line)
258 format = None
258 format = None
259 elif line == '# HG changeset patch':
259 elif line == '# HG changeset patch':
260 message = []
260 message = []
261 format = "hgpatch"
261 format = "hgpatch"
262 elif (format != "tagdone" and (line.startswith("Subject: ") or
262 elif (format != "tagdone" and (line.startswith("Subject: ") or
263 line.startswith("subject: "))):
263 line.startswith("subject: "))):
264 subject = line[9:]
264 subject = line[9:]
265 format = "tag"
265 format = "tag"
266 elif (format != "tagdone" and (line.startswith("From: ") or
266 elif (format != "tagdone" and (line.startswith("From: ") or
267 line.startswith("from: "))):
267 line.startswith("from: "))):
268 user = line[6:]
268 user = line[6:]
269 format = "tag"
269 format = "tag"
270 elif (format != "tagdone" and (line.startswith("Date: ") or
270 elif (format != "tagdone" and (line.startswith("Date: ") or
271 line.startswith("date: "))):
271 line.startswith("date: "))):
272 date = line[6:]
272 date = line[6:]
273 format = "tag"
273 format = "tag"
274 elif format == "tag" and line == "":
274 elif format == "tag" and line == "":
275 # when looking for tags (subject: from: etc) they
275 # when looking for tags (subject: from: etc) they
276 # end once you find a blank line in the source
276 # end once you find a blank line in the source
277 format = "tagdone"
277 format = "tagdone"
278 elif message or line:
278 elif message or line:
279 message.append(line)
279 message.append(line)
280 comments.append(line)
280 comments.append(line)
281
281
282 eatdiff(message)
282 eatdiff(message)
283 eatdiff(comments)
283 eatdiff(comments)
284 # Remember the exact starting line of the patch diffs before consuming
284 # Remember the exact starting line of the patch diffs before consuming
285 # empty lines, for external use by TortoiseHg and others
285 # empty lines, for external use by TortoiseHg and others
286 self.diffstartline = len(comments)
286 self.diffstartline = len(comments)
287 eatempty(message)
287 eatempty(message)
288 eatempty(comments)
288 eatempty(comments)
289
289
290 # make sure message isn't empty
290 # make sure message isn't empty
291 if format and format.startswith("tag") and subject:
291 if format and format.startswith("tag") and subject:
292 message.insert(0, subject)
292 message.insert(0, subject)
293
293
294 self.message = message
294 self.message = message
295 self.comments = comments
295 self.comments = comments
296 self.user = user
296 self.user = user
297 self.date = date
297 self.date = date
298 self.parent = parent
298 self.parent = parent
299 # nodeid and branch are for external use by TortoiseHg and others
299 # nodeid and branch are for external use by TortoiseHg and others
300 self.nodeid = nodeid
300 self.nodeid = nodeid
301 self.branch = branch
301 self.branch = branch
302 self.haspatch = diffstart > 1
302 self.haspatch = diffstart > 1
303 self.plainmode = (plainmode or
303 self.plainmode = (plainmode or
304 '# HG changeset patch' not in self.comments and
304 '# HG changeset patch' not in self.comments and
305 any(c.startswith('Date: ') or
305 any(c.startswith('Date: ') or
306 c.startswith('From: ')
306 c.startswith('From: ')
307 for c in self.comments))
307 for c in self.comments))
308
308
309 def setuser(self, user):
309 def setuser(self, user):
310 try:
310 try:
311 inserthgheader(self.comments, '# User ', user)
311 inserthgheader(self.comments, '# User ', user)
312 except ValueError:
312 except ValueError:
313 if self.plainmode:
313 if self.plainmode:
314 insertplainheader(self.comments, 'From', user)
314 insertplainheader(self.comments, 'From', user)
315 else:
315 else:
316 tmp = ['# HG changeset patch', '# User ' + user]
316 tmp = ['# HG changeset patch', '# User ' + user]
317 self.comments = tmp + self.comments
317 self.comments = tmp + self.comments
318 self.user = user
318 self.user = user
319
319
320 def setdate(self, date):
320 def setdate(self, date):
321 try:
321 try:
322 inserthgheader(self.comments, '# Date ', date)
322 inserthgheader(self.comments, '# Date ', date)
323 except ValueError:
323 except ValueError:
324 if self.plainmode:
324 if self.plainmode:
325 insertplainheader(self.comments, 'Date', date)
325 insertplainheader(self.comments, 'Date', date)
326 else:
326 else:
327 tmp = ['# HG changeset patch', '# Date ' + date]
327 tmp = ['# HG changeset patch', '# Date ' + date]
328 self.comments = tmp + self.comments
328 self.comments = tmp + self.comments
329 self.date = date
329 self.date = date
330
330
331 def setparent(self, parent):
331 def setparent(self, parent):
332 try:
332 try:
333 inserthgheader(self.comments, '# Parent ', parent)
333 inserthgheader(self.comments, '# Parent ', parent)
334 except ValueError:
334 except ValueError:
335 if not self.plainmode:
335 if not self.plainmode:
336 tmp = ['# HG changeset patch', '# Parent ' + parent]
336 tmp = ['# HG changeset patch', '# Parent ' + parent]
337 self.comments = tmp + self.comments
337 self.comments = tmp + self.comments
338 self.parent = parent
338 self.parent = parent
339
339
340 def setmessage(self, message):
340 def setmessage(self, message):
341 if self.comments:
341 if self.comments:
342 self._delmsg()
342 self._delmsg()
343 self.message = [message]
343 self.message = [message]
344 if message:
344 if message:
345 if self.plainmode and self.comments and self.comments[-1]:
345 if self.plainmode and self.comments and self.comments[-1]:
346 self.comments.append('')
346 self.comments.append('')
347 self.comments.append(message)
347 self.comments.append(message)
348
348
349 def __str__(self):
349 def __str__(self):
350 s = '\n'.join(self.comments).rstrip()
350 s = '\n'.join(self.comments).rstrip()
351 if not s:
351 if not s:
352 return ''
352 return ''
353 return s + '\n\n'
353 return s + '\n\n'
354
354
355 def _delmsg(self):
355 def _delmsg(self):
356 '''Remove existing message, keeping the rest of the comments fields.
356 '''Remove existing message, keeping the rest of the comments fields.
357 If comments contains 'subject: ', message will prepend
357 If comments contains 'subject: ', message will prepend
358 the field and a blank line.'''
358 the field and a blank line.'''
359 if self.message:
359 if self.message:
360 subj = 'subject: ' + self.message[0].lower()
360 subj = 'subject: ' + self.message[0].lower()
361 for i in xrange(len(self.comments)):
361 for i in xrange(len(self.comments)):
362 if subj == self.comments[i].lower():
362 if subj == self.comments[i].lower():
363 del self.comments[i]
363 del self.comments[i]
364 self.message = self.message[2:]
364 self.message = self.message[2:]
365 break
365 break
366 ci = 0
366 ci = 0
367 for mi in self.message:
367 for mi in self.message:
368 while mi != self.comments[ci]:
368 while mi != self.comments[ci]:
369 ci += 1
369 ci += 1
370 del self.comments[ci]
370 del self.comments[ci]
371
371
372 def newcommit(repo, phase, *args, **kwargs):
372 def newcommit(repo, phase, *args, **kwargs):
373 """helper dedicated to ensure a commit respect mq.secret setting
373 """helper dedicated to ensure a commit respect mq.secret setting
374
374
375 It should be used instead of repo.commit inside the mq source for operation
375 It should be used instead of repo.commit inside the mq source for operation
376 creating new changeset.
376 creating new changeset.
377 """
377 """
378 repo = repo.unfiltered()
378 repo = repo.unfiltered()
379 if phase is None:
379 if phase is None:
380 if repo.ui.configbool('mq', 'secret', False):
380 if repo.ui.configbool('mq', 'secret', False):
381 phase = phases.secret
381 phase = phases.secret
382 if phase is not None:
382 if phase is not None:
383 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
383 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
384 allowemptybackup = repo.ui.backupconfig('ui', 'allowemptycommit')
384 allowemptybackup = repo.ui.backupconfig('ui', 'allowemptycommit')
385 try:
385 try:
386 if phase is not None:
386 if phase is not None:
387 repo.ui.setconfig('phases', 'new-commit', phase, 'mq')
387 repo.ui.setconfig('phases', 'new-commit', phase, 'mq')
388 repo.ui.setconfig('ui', 'allowemptycommit', True)
388 repo.ui.setconfig('ui', 'allowemptycommit', True)
389 return repo.commit(*args, **kwargs)
389 return repo.commit(*args, **kwargs)
390 finally:
390 finally:
391 repo.ui.restoreconfig(allowemptybackup)
391 repo.ui.restoreconfig(allowemptybackup)
392 if phase is not None:
392 if phase is not None:
393 repo.ui.restoreconfig(phasebackup)
393 repo.ui.restoreconfig(phasebackup)
394
394
395 class AbortNoCleanup(error.Abort):
395 class AbortNoCleanup(error.Abort):
396 pass
396 pass
397
397
398 class queue(object):
398 class queue(object):
399 def __init__(self, ui, baseui, path, patchdir=None):
399 def __init__(self, ui, baseui, path, patchdir=None):
400 self.basepath = path
400 self.basepath = path
401 try:
401 try:
402 fh = open(os.path.join(path, 'patches.queue'))
402 fh = open(os.path.join(path, 'patches.queue'))
403 cur = fh.read().rstrip()
403 cur = fh.read().rstrip()
404 fh.close()
404 fh.close()
405 if not cur:
405 if not cur:
406 curpath = os.path.join(path, 'patches')
406 curpath = os.path.join(path, 'patches')
407 else:
407 else:
408 curpath = os.path.join(path, 'patches-' + cur)
408 curpath = os.path.join(path, 'patches-' + cur)
409 except IOError:
409 except IOError:
410 curpath = os.path.join(path, 'patches')
410 curpath = os.path.join(path, 'patches')
411 self.path = patchdir or curpath
411 self.path = patchdir or curpath
412 self.opener = scmutil.opener(self.path)
412 self.opener = scmutil.opener(self.path)
413 self.ui = ui
413 self.ui = ui
414 self.baseui = baseui
414 self.baseui = baseui
415 self.applieddirty = False
415 self.applieddirty = False
416 self.seriesdirty = False
416 self.seriesdirty = False
417 self.added = []
417 self.added = []
418 self.seriespath = "series"
418 self.seriespath = "series"
419 self.statuspath = "status"
419 self.statuspath = "status"
420 self.guardspath = "guards"
420 self.guardspath = "guards"
421 self.activeguards = None
421 self.activeguards = None
422 self.guardsdirty = False
422 self.guardsdirty = False
423 # Handle mq.git as a bool with extended values
423 # Handle mq.git as a bool with extended values
424 try:
424 try:
425 gitmode = ui.configbool('mq', 'git', None)
425 gitmode = ui.configbool('mq', 'git', None)
426 if gitmode is None:
426 if gitmode is None:
427 raise error.ConfigError
427 raise error.ConfigError
428 if gitmode:
428 if gitmode:
429 self.gitmode = 'yes'
429 self.gitmode = 'yes'
430 else:
430 else:
431 self.gitmode = 'no'
431 self.gitmode = 'no'
432 except error.ConfigError:
432 except error.ConfigError:
433 self.gitmode = ui.config('mq', 'git', 'auto').lower()
433 self.gitmode = ui.config('mq', 'git', 'auto').lower()
434 self.plainmode = ui.configbool('mq', 'plain', False)
434 self.plainmode = ui.configbool('mq', 'plain', False)
435 self.checkapplied = True
435 self.checkapplied = True
436
436
437 @util.propertycache
437 @util.propertycache
438 def applied(self):
438 def applied(self):
439 def parselines(lines):
439 def parselines(lines):
440 for l in lines:
440 for l in lines:
441 entry = l.split(':', 1)
441 entry = l.split(':', 1)
442 if len(entry) > 1:
442 if len(entry) > 1:
443 n, name = entry
443 n, name = entry
444 yield statusentry(bin(n), name)
444 yield statusentry(bin(n), name)
445 elif l.strip():
445 elif l.strip():
446 self.ui.warn(_('malformed mq status line: %s\n') % entry)
446 self.ui.warn(_('malformed mq status line: %s\n') % entry)
447 # else we ignore empty lines
447 # else we ignore empty lines
448 try:
448 try:
449 lines = self.opener.read(self.statuspath).splitlines()
449 lines = self.opener.read(self.statuspath).splitlines()
450 return list(parselines(lines))
450 return list(parselines(lines))
451 except IOError, e:
451 except IOError, e:
452 if e.errno == errno.ENOENT:
452 if e.errno == errno.ENOENT:
453 return []
453 return []
454 raise
454 raise
455
455
456 @util.propertycache
456 @util.propertycache
457 def fullseries(self):
457 def fullseries(self):
458 try:
458 try:
459 return self.opener.read(self.seriespath).splitlines()
459 return self.opener.read(self.seriespath).splitlines()
460 except IOError, e:
460 except IOError, e:
461 if e.errno == errno.ENOENT:
461 if e.errno == errno.ENOENT:
462 return []
462 return []
463 raise
463 raise
464
464
465 @util.propertycache
465 @util.propertycache
466 def series(self):
466 def series(self):
467 self.parseseries()
467 self.parseseries()
468 return self.series
468 return self.series
469
469
470 @util.propertycache
470 @util.propertycache
471 def seriesguards(self):
471 def seriesguards(self):
472 self.parseseries()
472 self.parseseries()
473 return self.seriesguards
473 return self.seriesguards
474
474
475 def invalidate(self):
475 def invalidate(self):
476 for a in 'applied fullseries series seriesguards'.split():
476 for a in 'applied fullseries series seriesguards'.split():
477 if a in self.__dict__:
477 if a in self.__dict__:
478 delattr(self, a)
478 delattr(self, a)
479 self.applieddirty = False
479 self.applieddirty = False
480 self.seriesdirty = False
480 self.seriesdirty = False
481 self.guardsdirty = False
481 self.guardsdirty = False
482 self.activeguards = None
482 self.activeguards = None
483
483
484 def diffopts(self, opts={}, patchfn=None):
484 def diffopts(self, opts={}, patchfn=None):
485 diffopts = patchmod.diffopts(self.ui, opts)
485 diffopts = patchmod.diffopts(self.ui, opts)
486 if self.gitmode == 'auto':
486 if self.gitmode == 'auto':
487 diffopts.upgrade = True
487 diffopts.upgrade = True
488 elif self.gitmode == 'keep':
488 elif self.gitmode == 'keep':
489 pass
489 pass
490 elif self.gitmode in ('yes', 'no'):
490 elif self.gitmode in ('yes', 'no'):
491 diffopts.git = self.gitmode == 'yes'
491 diffopts.git = self.gitmode == 'yes'
492 else:
492 else:
493 raise util.Abort(_('mq.git option can be auto/keep/yes/no'
493 raise util.Abort(_('mq.git option can be auto/keep/yes/no'
494 ' got %s') % self.gitmode)
494 ' got %s') % self.gitmode)
495 if patchfn:
495 if patchfn:
496 diffopts = self.patchopts(diffopts, patchfn)
496 diffopts = self.patchopts(diffopts, patchfn)
497 return diffopts
497 return diffopts
498
498
499 def patchopts(self, diffopts, *patches):
499 def patchopts(self, diffopts, *patches):
500 """Return a copy of input diff options with git set to true if
500 """Return a copy of input diff options with git set to true if
501 referenced patch is a git patch and should be preserved as such.
501 referenced patch is a git patch and should be preserved as such.
502 """
502 """
503 diffopts = diffopts.copy()
503 diffopts = diffopts.copy()
504 if not diffopts.git and self.gitmode == 'keep':
504 if not diffopts.git and self.gitmode == 'keep':
505 for patchfn in patches:
505 for patchfn in patches:
506 patchf = self.opener(patchfn, 'r')
506 patchf = self.opener(patchfn, 'r')
507 # if the patch was a git patch, refresh it as a git patch
507 # if the patch was a git patch, refresh it as a git patch
508 for line in patchf:
508 for line in patchf:
509 if line.startswith('diff --git'):
509 if line.startswith('diff --git'):
510 diffopts.git = True
510 diffopts.git = True
511 break
511 break
512 patchf.close()
512 patchf.close()
513 return diffopts
513 return diffopts
514
514
515 def join(self, *p):
515 def join(self, *p):
516 return os.path.join(self.path, *p)
516 return os.path.join(self.path, *p)
517
517
518 def findseries(self, patch):
518 def findseries(self, patch):
519 def matchpatch(l):
519 def matchpatch(l):
520 l = l.split('#', 1)[0]
520 l = l.split('#', 1)[0]
521 return l.strip() == patch
521 return l.strip() == patch
522 for index, l in enumerate(self.fullseries):
522 for index, l in enumerate(self.fullseries):
523 if matchpatch(l):
523 if matchpatch(l):
524 return index
524 return index
525 return None
525 return None
526
526
527 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
527 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
528
528
529 def parseseries(self):
529 def parseseries(self):
530 self.series = []
530 self.series = []
531 self.seriesguards = []
531 self.seriesguards = []
532 for l in self.fullseries:
532 for l in self.fullseries:
533 h = l.find('#')
533 h = l.find('#')
534 if h == -1:
534 if h == -1:
535 patch = l
535 patch = l
536 comment = ''
536 comment = ''
537 elif h == 0:
537 elif h == 0:
538 continue
538 continue
539 else:
539 else:
540 patch = l[:h]
540 patch = l[:h]
541 comment = l[h:]
541 comment = l[h:]
542 patch = patch.strip()
542 patch = patch.strip()
543 if patch:
543 if patch:
544 if patch in self.series:
544 if patch in self.series:
545 raise util.Abort(_('%s appears more than once in %s') %
545 raise util.Abort(_('%s appears more than once in %s') %
546 (patch, self.join(self.seriespath)))
546 (patch, self.join(self.seriespath)))
547 self.series.append(patch)
547 self.series.append(patch)
548 self.seriesguards.append(self.guard_re.findall(comment))
548 self.seriesguards.append(self.guard_re.findall(comment))
549
549
550 def checkguard(self, guard):
550 def checkguard(self, guard):
551 if not guard:
551 if not guard:
552 return _('guard cannot be an empty string')
552 return _('guard cannot be an empty string')
553 bad_chars = '# \t\r\n\f'
553 bad_chars = '# \t\r\n\f'
554 first = guard[0]
554 first = guard[0]
555 if first in '-+':
555 if first in '-+':
556 return (_('guard %r starts with invalid character: %r') %
556 return (_('guard %r starts with invalid character: %r') %
557 (guard, first))
557 (guard, first))
558 for c in bad_chars:
558 for c in bad_chars:
559 if c in guard:
559 if c in guard:
560 return _('invalid character in guard %r: %r') % (guard, c)
560 return _('invalid character in guard %r: %r') % (guard, c)
561
561
562 def setactive(self, guards):
562 def setactive(self, guards):
563 for guard in guards:
563 for guard in guards:
564 bad = self.checkguard(guard)
564 bad = self.checkguard(guard)
565 if bad:
565 if bad:
566 raise util.Abort(bad)
566 raise util.Abort(bad)
567 guards = sorted(set(guards))
567 guards = sorted(set(guards))
568 self.ui.debug('active guards: %s\n' % ' '.join(guards))
568 self.ui.debug('active guards: %s\n' % ' '.join(guards))
569 self.activeguards = guards
569 self.activeguards = guards
570 self.guardsdirty = True
570 self.guardsdirty = True
571
571
572 def active(self):
572 def active(self):
573 if self.activeguards is None:
573 if self.activeguards is None:
574 self.activeguards = []
574 self.activeguards = []
575 try:
575 try:
576 guards = self.opener.read(self.guardspath).split()
576 guards = self.opener.read(self.guardspath).split()
577 except IOError, err:
577 except IOError, err:
578 if err.errno != errno.ENOENT:
578 if err.errno != errno.ENOENT:
579 raise
579 raise
580 guards = []
580 guards = []
581 for i, guard in enumerate(guards):
581 for i, guard in enumerate(guards):
582 bad = self.checkguard(guard)
582 bad = self.checkguard(guard)
583 if bad:
583 if bad:
584 self.ui.warn('%s:%d: %s\n' %
584 self.ui.warn('%s:%d: %s\n' %
585 (self.join(self.guardspath), i + 1, bad))
585 (self.join(self.guardspath), i + 1, bad))
586 else:
586 else:
587 self.activeguards.append(guard)
587 self.activeguards.append(guard)
588 return self.activeguards
588 return self.activeguards
589
589
590 def setguards(self, idx, guards):
590 def setguards(self, idx, guards):
591 for g in guards:
591 for g in guards:
592 if len(g) < 2:
592 if len(g) < 2:
593 raise util.Abort(_('guard %r too short') % g)
593 raise util.Abort(_('guard %r too short') % g)
594 if g[0] not in '-+':
594 if g[0] not in '-+':
595 raise util.Abort(_('guard %r starts with invalid char') % g)
595 raise util.Abort(_('guard %r starts with invalid char') % g)
596 bad = self.checkguard(g[1:])
596 bad = self.checkguard(g[1:])
597 if bad:
597 if bad:
598 raise util.Abort(bad)
598 raise util.Abort(bad)
599 drop = self.guard_re.sub('', self.fullseries[idx])
599 drop = self.guard_re.sub('', self.fullseries[idx])
600 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
600 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
601 self.parseseries()
601 self.parseseries()
602 self.seriesdirty = True
602 self.seriesdirty = True
603
603
604 def pushable(self, idx):
604 def pushable(self, idx):
605 if isinstance(idx, str):
605 if isinstance(idx, str):
606 idx = self.series.index(idx)
606 idx = self.series.index(idx)
607 patchguards = self.seriesguards[idx]
607 patchguards = self.seriesguards[idx]
608 if not patchguards:
608 if not patchguards:
609 return True, None
609 return True, None
610 guards = self.active()
610 guards = self.active()
611 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
611 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
612 if exactneg:
612 if exactneg:
613 return False, repr(exactneg[0])
613 return False, repr(exactneg[0])
614 pos = [g for g in patchguards if g[0] == '+']
614 pos = [g for g in patchguards if g[0] == '+']
615 exactpos = [g for g in pos if g[1:] in guards]
615 exactpos = [g for g in pos if g[1:] in guards]
616 if pos:
616 if pos:
617 if exactpos:
617 if exactpos:
618 return True, repr(exactpos[0])
618 return True, repr(exactpos[0])
619 return False, ' '.join(map(repr, pos))
619 return False, ' '.join(map(repr, pos))
620 return True, ''
620 return True, ''
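# --- illustrative aside (editor's sketch, not part of mq) ------------------
# The guard rules that pushable() above implements, reduced to plain lists
# and sets: any matching negative guard blocks a patch; if a patch carries
# positive guards, at least one of them must be active. Guard names below
# are invented.
def is_pushable(patchguards, active):
    if any(g[0] == '-' and g[1:] in active for g in patchguards):
        return False                     # an active negative guard blocks it
    positives = [g for g in patchguards if g[0] == '+']
    if positives:
        return any(g[1:] in active for g in positives)
    return True                          # no applicable guards

active = {'stable'}
print(is_pushable([], active))                    # True  - unguarded
print(is_pushable(['+stable'], active))           # True  - positive guard active
print(is_pushable(['+experimental'], active))     # False - no positive guard active
print(is_pushable(['-stable', '+misc'], active))  # False - negative guard active
# ---------------------------------------------------------------------------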
621
621
622 def explainpushable(self, idx, all_patches=False):
622 def explainpushable(self, idx, all_patches=False):
623 if all_patches:
623 if all_patches:
624 write = self.ui.write
624 write = self.ui.write
625 else:
625 else:
626 write = self.ui.warn
626 write = self.ui.warn
627
627
628 if all_patches or self.ui.verbose:
628 if all_patches or self.ui.verbose:
629 if isinstance(idx, str):
629 if isinstance(idx, str):
630 idx = self.series.index(idx)
630 idx = self.series.index(idx)
631 pushable, why = self.pushable(idx)
631 pushable, why = self.pushable(idx)
632 if all_patches and pushable:
632 if all_patches and pushable:
633 if why is None:
633 if why is None:
634 write(_('allowing %s - no guards in effect\n') %
634 write(_('allowing %s - no guards in effect\n') %
635 self.series[idx])
635 self.series[idx])
636 else:
636 else:
637 if not why:
637 if not why:
638 write(_('allowing %s - no matching negative guards\n') %
638 write(_('allowing %s - no matching negative guards\n') %
639 self.series[idx])
639 self.series[idx])
640 else:
640 else:
641 write(_('allowing %s - guarded by %s\n') %
641 write(_('allowing %s - guarded by %s\n') %
642 (self.series[idx], why))
642 (self.series[idx], why))
643 if not pushable:
643 if not pushable:
644 if why:
644 if why:
645 write(_('skipping %s - guarded by %s\n') %
645 write(_('skipping %s - guarded by %s\n') %
646 (self.series[idx], why))
646 (self.series[idx], why))
647 else:
647 else:
648 write(_('skipping %s - no matching guards\n') %
648 write(_('skipping %s - no matching guards\n') %
649 self.series[idx])
649 self.series[idx])
650
650
651 def savedirty(self):
651 def savedirty(self):
652 def writelist(items, path):
652 def writelist(items, path):
653 fp = self.opener(path, 'w')
653 fp = self.opener(path, 'w')
654 for i in items:
654 for i in items:
655 fp.write("%s\n" % i)
655 fp.write("%s\n" % i)
656 fp.close()
656 fp.close()
657 if self.applieddirty:
657 if self.applieddirty:
658 writelist(map(str, self.applied), self.statuspath)
658 writelist(map(str, self.applied), self.statuspath)
659 self.applieddirty = False
659 self.applieddirty = False
660 if self.seriesdirty:
660 if self.seriesdirty:
661 writelist(self.fullseries, self.seriespath)
661 writelist(self.fullseries, self.seriespath)
662 self.seriesdirty = False
662 self.seriesdirty = False
663 if self.guardsdirty:
663 if self.guardsdirty:
664 writelist(self.activeguards, self.guardspath)
664 writelist(self.activeguards, self.guardspath)
665 self.guardsdirty = False
665 self.guardsdirty = False
666 if self.added:
666 if self.added:
667 qrepo = self.qrepo()
667 qrepo = self.qrepo()
668 if qrepo:
668 if qrepo:
669 qrepo[None].add(f for f in self.added if f not in qrepo[None])
669 qrepo[None].add(f for f in self.added if f not in qrepo[None])
670 self.added = []
670 self.added = []
671
671
672 def removeundo(self, repo):
672 def removeundo(self, repo):
673 undo = repo.sjoin('undo')
673 undo = repo.sjoin('undo')
674 if not os.path.exists(undo):
674 if not os.path.exists(undo):
675 return
675 return
676 try:
676 try:
677 os.unlink(undo)
677 os.unlink(undo)
678 except OSError, inst:
678 except OSError, inst:
679 self.ui.warn(_('error removing undo: %s\n') % str(inst))
679 self.ui.warn(_('error removing undo: %s\n') % str(inst))
680
680
681 def backup(self, repo, files, copy=False):
681 def backup(self, repo, files, copy=False):
682 # backup local changes in --force case
682 # backup local changes in --force case
683 for f in sorted(files):
683 for f in sorted(files):
684 absf = repo.wjoin(f)
684 absf = repo.wjoin(f)
685 if os.path.lexists(absf):
685 if os.path.lexists(absf):
686 self.ui.note(_('saving current version of %s as %s\n') %
686 self.ui.note(_('saving current version of %s as %s\n') %
687 (f, f + '.orig'))
687 (f, f + '.orig'))
688 if copy:
688 if copy:
689 util.copyfile(absf, absf + '.orig')
689 util.copyfile(absf, absf + '.orig')
690 else:
690 else:
691 util.rename(absf, absf + '.orig')
691 util.rename(absf, absf + '.orig')
692
692
693 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
693 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
694 fp=None, changes=None, opts={}):
694 fp=None, changes=None, opts={}):
695 stat = opts.get('stat')
695 stat = opts.get('stat')
696 m = scmutil.match(repo[node1], files, opts)
696 m = scmutil.match(repo[node1], files, opts)
697 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
697 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
698 changes, stat, fp)
698 changes, stat, fp)
699
699
700 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
700 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
701 # first try just applying the patch
701 # first try just applying the patch
702 (err, n) = self.apply(repo, [patch], update_status=False,
702 (err, n) = self.apply(repo, [patch], update_status=False,
703 strict=True, merge=rev)
703 strict=True, merge=rev)
704
704
705 if err == 0:
705 if err == 0:
706 return (err, n)
706 return (err, n)
707
707
708 if n is None:
708 if n is None:
709 raise util.Abort(_("apply failed for patch %s") % patch)
709 raise util.Abort(_("apply failed for patch %s") % patch)
710
710
711 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
711 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
712
712
713 # apply failed, strip away that rev and merge.
713 # apply failed, strip away that rev and merge.
714 hg.clean(repo, head)
714 hg.clean(repo, head)
715 strip(self.ui, repo, [n], update=False, backup=False)
715 strip(self.ui, repo, [n], update=False, backup=False)
716
716
717 ctx = repo[rev]
717 ctx = repo[rev]
718 ret = hg.merge(repo, rev)
718 ret = hg.merge(repo, rev)
719 if ret:
719 if ret:
720 raise util.Abort(_("update returned %d") % ret)
720 raise util.Abort(_("update returned %d") % ret)
721 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
721 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
722 if n is None:
722 if n is None:
723 raise util.Abort(_("repo commit failed"))
723 raise util.Abort(_("repo commit failed"))
724 try:
724 try:
725 ph = patchheader(mergeq.join(patch), self.plainmode)
725 ph = patchheader(mergeq.join(patch), self.plainmode)
726 except Exception:
726 except Exception:
727 raise util.Abort(_("unable to read %s") % patch)
727 raise util.Abort(_("unable to read %s") % patch)
728
728
729 diffopts = self.patchopts(diffopts, patch)
729 diffopts = self.patchopts(diffopts, patch)
730 patchf = self.opener(patch, "w")
730 patchf = self.opener(patch, "w")
731 comments = str(ph)
731 comments = str(ph)
732 if comments:
732 if comments:
733 patchf.write(comments)
733 patchf.write(comments)
734 self.printdiff(repo, diffopts, head, n, fp=patchf)
734 self.printdiff(repo, diffopts, head, n, fp=patchf)
735 patchf.close()
735 patchf.close()
736 self.removeundo(repo)
736 self.removeundo(repo)
737 return (0, n)
737 return (0, n)
738
738
739 def qparents(self, repo, rev=None):
739 def qparents(self, repo, rev=None):
740 """return the mq handled parent or p1
740 """return the mq handled parent or p1
741
741
742 In some cases where mq ends up being the parent of a merge, the
742 In some cases where mq ends up being the parent of a merge, the
743 appropriate parent may be p2
743 appropriate parent may be p2
744 (e.g. an in-progress merge started with mq disabled).
744 (e.g. an in-progress merge started with mq disabled).
745
745
746 If no parent is managed by mq, p1 is returned.
746 If no parent is managed by mq, p1 is returned.
747 """
747 """
748 if rev is None:
748 if rev is None:
749 (p1, p2) = repo.dirstate.parents()
749 (p1, p2) = repo.dirstate.parents()
750 if p2 == nullid:
750 if p2 == nullid:
751 return p1
751 return p1
752 if not self.applied:
752 if not self.applied:
753 return None
753 return None
754 return self.applied[-1].node
754 return self.applied[-1].node
755 p1, p2 = repo.changelog.parents(rev)
755 p1, p2 = repo.changelog.parents(rev)
756 if p2 != nullid and p2 in [x.node for x in self.applied]:
756 if p2 != nullid and p2 in [x.node for x in self.applied]:
757 return p2
757 return p2
758 return p1
758 return p1
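# --- illustrative aside (editor's sketch, not part of mq) ------------------
# The parent-selection rule qparents() applies when an explicit revision is
# given: prefer p2 only when it is a merge parent that mq itself manages,
# otherwise fall back to p1. Node ids below are invented strings.
NULLID = '0' * 40

def mq_parent(p1, p2, applied_nodes):
    if p2 != NULLID and p2 in applied_nodes:
        return p2
    return p1

applied_nodes = {'b' * 40}
print(mq_parent('a' * 40, NULLID, applied_nodes))    # p1 - not a merge
print(mq_parent('a' * 40, 'b' * 40, applied_nodes))  # p2 - managed by mq
print(mq_parent('a' * 40, 'c' * 40, applied_nodes))  # p1 - p2 not managed by mq
# ---------------------------------------------------------------------------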
759
759
760 def mergepatch(self, repo, mergeq, series, diffopts):
760 def mergepatch(self, repo, mergeq, series, diffopts):
761 if not self.applied:
761 if not self.applied:
762 # Each of the patches merged in will have two parents. This
762 # Each of the patches merged in will have two parents. This
763 # can confuse the qrefresh, qdiff, and strip code because it
763 # can confuse the qrefresh, qdiff, and strip code because it
764 # needs to know which parent is actually in the patch queue.
764 # needs to know which parent is actually in the patch queue.
765 # So, we insert a merge marker with only one parent. This way
765 # So, we insert a merge marker with only one parent. This way
766 # the first patch in the queue is never a merge patch.
766 # the first patch in the queue is never a merge patch.
767 #
767 #
768 pname = ".hg.patches.merge.marker"
768 pname = ".hg.patches.merge.marker"
769 n = newcommit(repo, None, '[mq]: merge marker', force=True)
769 n = newcommit(repo, None, '[mq]: merge marker', force=True)
770 self.removeundo(repo)
770 self.removeundo(repo)
771 self.applied.append(statusentry(n, pname))
771 self.applied.append(statusentry(n, pname))
772 self.applieddirty = True
772 self.applieddirty = True
773
773
774 head = self.qparents(repo)
774 head = self.qparents(repo)
775
775
776 for patch in series:
776 for patch in series:
777 patch = mergeq.lookup(patch, strict=True)
777 patch = mergeq.lookup(patch, strict=True)
778 if not patch:
778 if not patch:
779 self.ui.warn(_("patch %s does not exist\n") % patch)
779 self.ui.warn(_("patch %s does not exist\n") % patch)
780 return (1, None)
780 return (1, None)
781 pushable, reason = self.pushable(patch)
781 pushable, reason = self.pushable(patch)
782 if not pushable:
782 if not pushable:
783 self.explainpushable(patch, all_patches=True)
783 self.explainpushable(patch, all_patches=True)
784 continue
784 continue
785 info = mergeq.isapplied(patch)
785 info = mergeq.isapplied(patch)
786 if not info:
786 if not info:
787 self.ui.warn(_("patch %s is not applied\n") % patch)
787 self.ui.warn(_("patch %s is not applied\n") % patch)
788 return (1, None)
788 return (1, None)
789 rev = info[1]
789 rev = info[1]
790 err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
790 err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
791 if head:
791 if head:
792 self.applied.append(statusentry(head, patch))
792 self.applied.append(statusentry(head, patch))
793 self.applieddirty = True
793 self.applieddirty = True
794 if err:
794 if err:
795 return (err, head)
795 return (err, head)
796 self.savedirty()
796 self.savedirty()
797 return (0, head)
797 return (0, head)
798
798
799 def patch(self, repo, patchfile):
799 def patch(self, repo, patchfile):
800 '''Apply patchfile to the working directory.
800 '''Apply patchfile to the working directory.
801 patchfile: name of patch file'''
801 patchfile: name of patch file'''
802 files = set()
802 files = set()
803 try:
803 try:
804 fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
804 fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
805 files=files, eolmode=None)
805 files=files, eolmode=None)
806 return (True, list(files), fuzz)
806 return (True, list(files), fuzz)
807 except Exception, inst:
807 except Exception, inst:
808 self.ui.note(str(inst) + '\n')
808 self.ui.note(str(inst) + '\n')
809 if not self.ui.verbose:
809 if not self.ui.verbose:
810 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
810 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
811 self.ui.traceback()
811 self.ui.traceback()
812 return (False, list(files), False)
812 return (False, list(files), False)
813
813
814 def apply(self, repo, series, list=False, update_status=True,
814 def apply(self, repo, series, list=False, update_status=True,
815 strict=False, patchdir=None, merge=None, all_files=None,
815 strict=False, patchdir=None, merge=None, all_files=None,
816 tobackup=None, keepchanges=False):
816 tobackup=None, keepchanges=False):
817 wlock = dsguard = lock = tr = None
817 wlock = dsguard = lock = tr = None
818 try:
818 try:
819 wlock = repo.wlock()
819 wlock = repo.wlock()
820 dsguard = cmdutil.dirstateguard(repo, 'mq.apply')
820 dsguard = cmdutil.dirstateguard(repo, 'mq.apply')
821 lock = repo.lock()
821 lock = repo.lock()
822 tr = repo.transaction("qpush")
822 tr = repo.transaction("qpush")
823 try:
823 try:
824 ret = self._apply(repo, series, list, update_status,
824 ret = self._apply(repo, series, list, update_status,
825 strict, patchdir, merge, all_files=all_files,
825 strict, patchdir, merge, all_files=all_files,
826 tobackup=tobackup, keepchanges=keepchanges)
826 tobackup=tobackup, keepchanges=keepchanges)
827 tr.close()
827 tr.close()
828 self.savedirty()
828 self.savedirty()
829 dsguard.close()
829 dsguard.close()
830 return ret
830 return ret
831 except AbortNoCleanup:
831 except AbortNoCleanup:
832 tr.close()
832 tr.close()
833 self.savedirty()
833 self.savedirty()
834 dsguard.close()
834 dsguard.close()
835 raise
835 raise
836 except: # re-raises
836 except: # re-raises
837 try:
837 try:
838 tr.abort()
838 tr.abort()
839 finally:
839 finally:
840 repo.invalidate()
840 repo.invalidate()
841 self.invalidate()
841 self.invalidate()
842 raise
842 raise
843 finally:
843 finally:
844 release(tr, lock, dsguard, wlock)
844 release(tr, lock, dsguard, wlock)
845 self.removeundo(repo)
845 self.removeundo(repo)
846
846
847 def _apply(self, repo, series, list=False, update_status=True,
847 def _apply(self, repo, series, list=False, update_status=True,
848 strict=False, patchdir=None, merge=None, all_files=None,
848 strict=False, patchdir=None, merge=None, all_files=None,
849 tobackup=None, keepchanges=False):
849 tobackup=None, keepchanges=False):
850 """returns (error, hash)
850 """returns (error, hash)
851
851
852 error = 1 for unable to read, 2 for patch failed, 3 for patch
852 error = 1 for unable to read, 2 for patch failed, 3 for patch
853 fuzz. tobackup is None or a set of files to backup before they
853 fuzz. tobackup is None or a set of files to backup before they
854 are modified by a patch.
854 are modified by a patch.
855 """
855 """
856 # TODO unify with commands.py
856 # TODO unify with commands.py
857 if not patchdir:
857 if not patchdir:
858 patchdir = self.path
858 patchdir = self.path
859 err = 0
859 err = 0
860 n = None
860 n = None
861 for patchname in series:
861 for patchname in series:
862 pushable, reason = self.pushable(patchname)
862 pushable, reason = self.pushable(patchname)
863 if not pushable:
863 if not pushable:
864 self.explainpushable(patchname, all_patches=True)
864 self.explainpushable(patchname, all_patches=True)
865 continue
865 continue
866 self.ui.status(_("applying %s\n") % patchname)
866 self.ui.status(_("applying %s\n") % patchname)
867 pf = os.path.join(patchdir, patchname)
867 pf = os.path.join(patchdir, patchname)
868
868
869 try:
869 try:
870 ph = patchheader(self.join(patchname), self.plainmode)
870 ph = patchheader(self.join(patchname), self.plainmode)
871 except IOError:
871 except IOError:
872 self.ui.warn(_("unable to read %s\n") % patchname)
872 self.ui.warn(_("unable to read %s\n") % patchname)
873 err = 1
873 err = 1
874 break
874 break
875
875
876 message = ph.message
876 message = ph.message
877 if not message:
877 if not message:
878 # The commit message should not be translated
878 # The commit message should not be translated
879 message = "imported patch %s\n" % patchname
879 message = "imported patch %s\n" % patchname
880 else:
880 else:
881 if list:
881 if list:
882 # The commit message should not be translated
882 # The commit message should not be translated
883 message.append("\nimported patch %s" % patchname)
883 message.append("\nimported patch %s" % patchname)
884 message = '\n'.join(message)
884 message = '\n'.join(message)
885
885
886 if ph.haspatch:
886 if ph.haspatch:
887 if tobackup:
887 if tobackup:
888 touched = patchmod.changedfiles(self.ui, repo, pf)
888 touched = patchmod.changedfiles(self.ui, repo, pf)
889 touched = set(touched) & tobackup
889 touched = set(touched) & tobackup
890 if touched and keepchanges:
890 if touched and keepchanges:
891 raise AbortNoCleanup(
891 raise AbortNoCleanup(
892 _("conflicting local changes found"),
892 _("conflicting local changes found"),
893 hint=_("did you forget to qrefresh?"))
893 hint=_("did you forget to qrefresh?"))
894 self.backup(repo, touched, copy=True)
894 self.backup(repo, touched, copy=True)
895 tobackup = tobackup - touched
895 tobackup = tobackup - touched
896 (patcherr, files, fuzz) = self.patch(repo, pf)
896 (patcherr, files, fuzz) = self.patch(repo, pf)
897 if all_files is not None:
897 if all_files is not None:
898 all_files.update(files)
898 all_files.update(files)
899 patcherr = not patcherr
899 patcherr = not patcherr
900 else:
900 else:
901 self.ui.warn(_("patch %s is empty\n") % patchname)
901 self.ui.warn(_("patch %s is empty\n") % patchname)
902 patcherr, files, fuzz = 0, [], 0
902 patcherr, files, fuzz = 0, [], 0
903
903
904 if merge and files:
904 if merge and files:
905 # Mark as removed/merged and update dirstate parent info
905 # Mark as removed/merged and update dirstate parent info
906 removed = []
906 removed = []
907 merged = []
907 merged = []
908 for f in files:
908 for f in files:
909 if os.path.lexists(repo.wjoin(f)):
909 if os.path.lexists(repo.wjoin(f)):
910 merged.append(f)
910 merged.append(f)
911 else:
911 else:
912 removed.append(f)
912 removed.append(f)
913 repo.dirstate.beginparentchange()
913 repo.dirstate.beginparentchange()
914 for f in removed:
914 for f in removed:
915 repo.dirstate.remove(f)
915 repo.dirstate.remove(f)
916 for f in merged:
916 for f in merged:
917 repo.dirstate.merge(f)
917 repo.dirstate.merge(f)
918 p1, p2 = repo.dirstate.parents()
918 p1, p2 = repo.dirstate.parents()
919 repo.setparents(p1, merge)
919 repo.setparents(p1, merge)
920 repo.dirstate.endparentchange()
920 repo.dirstate.endparentchange()
921
921
922 if all_files and '.hgsubstate' in all_files:
922 if all_files and '.hgsubstate' in all_files:
923 wctx = repo[None]
923 wctx = repo[None]
924 pctx = repo['.']
924 pctx = repo['.']
925 overwrite = False
925 overwrite = False
926 mergedsubstate = subrepo.submerge(repo, pctx, wctx, wctx,
926 mergedsubstate = subrepo.submerge(repo, pctx, wctx, wctx,
927 overwrite)
927 overwrite)
928 files += mergedsubstate.keys()
928 files += mergedsubstate.keys()
929
929
930 match = scmutil.matchfiles(repo, files or [])
930 match = scmutil.matchfiles(repo, files or [])
931 oldtip = repo['tip']
931 oldtip = repo['tip']
932 n = newcommit(repo, None, message, ph.user, ph.date, match=match,
932 n = newcommit(repo, None, message, ph.user, ph.date, match=match,
933 force=True)
933 force=True)
934 if repo['tip'] == oldtip:
934 if repo['tip'] == oldtip:
935 raise util.Abort(_("qpush exactly duplicates child changeset"))
935 raise util.Abort(_("qpush exactly duplicates child changeset"))
936 if n is None:
936 if n is None:
937 raise util.Abort(_("repository commit failed"))
937 raise util.Abort(_("repository commit failed"))
938
938
939 if update_status:
939 if update_status:
940 self.applied.append(statusentry(n, patchname))
940 self.applied.append(statusentry(n, patchname))
941
941
942 if patcherr:
942 if patcherr:
943 self.ui.warn(_("patch failed, rejects left in working "
943 self.ui.warn(_("patch failed, rejects left in working "
944 "directory\n"))
944 "directory\n"))
945 err = 2
945 err = 2
946 break
946 break
947
947
948 if fuzz and strict:
948 if fuzz and strict:
949 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
949 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
950 err = 3
950 err = 3
951 break
951 break
952 return (err, n)
952 return (err, n)
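# --- illustrative aside (editor's sketch, not part of mq) ------------------
# What the (error, hash) pair returned by _apply() above encodes, per its
# docstring. The messages are paraphrases for illustration, not mq output.
APPLY_ERRORS = {
    0: 'all requested patches applied cleanly',
    1: 'a patch file could not be read',
    2: 'a patch failed to apply (rejects left in the working directory)',
    3: 'a patch applied with fuzz while strict mode was requested',
}

def describe(result):
    err, node = result
    return '%s (last commit: %s)' % (APPLY_ERRORS[err], node or 'none')

print(describe((0, 'a' * 40)))
print(describe((1, None)))
# ---------------------------------------------------------------------------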
953
953
954 def _cleanup(self, patches, numrevs, keep=False):
954 def _cleanup(self, patches, numrevs, keep=False):
955 if not keep:
955 if not keep:
956 r = self.qrepo()
956 r = self.qrepo()
957 if r:
957 if r:
958 r[None].forget(patches)
958 r[None].forget(patches)
959 for p in patches:
959 for p in patches:
960 try:
960 try:
961 os.unlink(self.join(p))
961 os.unlink(self.join(p))
962 except OSError, inst:
962 except OSError, inst:
963 if inst.errno != errno.ENOENT:
963 if inst.errno != errno.ENOENT:
964 raise
964 raise
965
965
966 qfinished = []
966 qfinished = []
967 if numrevs:
967 if numrevs:
968 qfinished = self.applied[:numrevs]
968 qfinished = self.applied[:numrevs]
969 del self.applied[:numrevs]
969 del self.applied[:numrevs]
970 self.applieddirty = True
970 self.applieddirty = True
971
971
972 unknown = []
972 unknown = []
973
973
974 for (i, p) in sorted([(self.findseries(p), p) for p in patches],
974 for (i, p) in sorted([(self.findseries(p), p) for p in patches],
975 reverse=True):
975 reverse=True):
976 if i is not None:
976 if i is not None:
977 del self.fullseries[i]
977 del self.fullseries[i]
978 else:
978 else:
979 unknown.append(p)
979 unknown.append(p)
980
980
981 if unknown:
981 if unknown:
982 if numrevs:
982 if numrevs:
983 rev = dict((entry.name, entry.node) for entry in qfinished)
983 rev = dict((entry.name, entry.node) for entry in qfinished)
984 for p in unknown:
984 for p in unknown:
985 msg = _('revision %s refers to unknown patches: %s\n')
985 msg = _('revision %s refers to unknown patches: %s\n')
986 self.ui.warn(msg % (short(rev[p]), p))
986 self.ui.warn(msg % (short(rev[p]), p))
987 else:
987 else:
988 msg = _('unknown patches: %s\n')
988 msg = _('unknown patches: %s\n')
989 raise util.Abort(''.join(msg % p for p in unknown))
989 raise util.Abort(''.join(msg % p for p in unknown))
990
990
991 self.parseseries()
991 self.parseseries()
992 self.seriesdirty = True
992 self.seriesdirty = True
993 return [entry.node for entry in qfinished]
993 return [entry.node for entry in qfinished]
994
994
995 def _revpatches(self, repo, revs):
995 def _revpatches(self, repo, revs):
996 firstrev = repo[self.applied[0].node].rev()
996 firstrev = repo[self.applied[0].node].rev()
997 patches = []
997 patches = []
998 for i, rev in enumerate(revs):
998 for i, rev in enumerate(revs):
999
999
1000 if rev < firstrev:
1000 if rev < firstrev:
1001 raise util.Abort(_('revision %d is not managed') % rev)
1001 raise util.Abort(_('revision %d is not managed') % rev)
1002
1002
1003 ctx = repo[rev]
1003 ctx = repo[rev]
1004 base = self.applied[i].node
1004 base = self.applied[i].node
1005 if ctx.node() != base:
1005 if ctx.node() != base:
1006 msg = _('cannot delete revision %d above applied patches')
1006 msg = _('cannot delete revision %d above applied patches')
1007 raise util.Abort(msg % rev)
1007 raise util.Abort(msg % rev)
1008
1008
1009 patch = self.applied[i].name
1009 patch = self.applied[i].name
1010 for fmt in ('[mq]: %s', 'imported patch %s'):
1010 for fmt in ('[mq]: %s', 'imported patch %s'):
1011 if ctx.description() == fmt % patch:
1011 if ctx.description() == fmt % patch:
1012 msg = _('patch %s finalized without changeset message\n')
1012 msg = _('patch %s finalized without changeset message\n')
1013 repo.ui.status(msg % patch)
1013 repo.ui.status(msg % patch)
1014 break
1014 break
1015
1015
1016 patches.append(patch)
1016 patches.append(patch)
1017 return patches
1017 return patches
1018
1018
1019 def finish(self, repo, revs):
1019 def finish(self, repo, revs):
1020 # Manually trigger phase computation to ensure phasedefaults is
1020 # Manually trigger phase computation to ensure phasedefaults is
1021 # executed before we remove the patches.
1021 # executed before we remove the patches.
1022 repo._phasecache
1022 repo._phasecache
1023 patches = self._revpatches(repo, sorted(revs))
1023 patches = self._revpatches(repo, sorted(revs))
1024 qfinished = self._cleanup(patches, len(patches))
1024 qfinished = self._cleanup(patches, len(patches))
1025 if qfinished and repo.ui.configbool('mq', 'secret', False):
1025 if qfinished and repo.ui.configbool('mq', 'secret', False):
1026 # only use this logic when the secret option is enabled
1026 # only use this logic when the secret option is enabled
1027 oldqbase = repo[qfinished[0]]
1027 oldqbase = repo[qfinished[0]]
1028 tphase = repo.ui.config('phases', 'new-commit', phases.draft)
1028 tphase = repo.ui.config('phases', 'new-commit', phases.draft)
1029 if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
1029 if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
1030 tr = repo.transaction('qfinish')
1030 tr = repo.transaction('qfinish')
1031 try:
1031 try:
1032 phases.advanceboundary(repo, tr, tphase, qfinished)
1032 phases.advanceboundary(repo, tr, tphase, qfinished)
1033 tr.close()
1033 tr.close()
1034 finally:
1034 finally:
1035 tr.release()
1035 tr.release()
1036
1036
1037 def delete(self, repo, patches, opts):
1037 def delete(self, repo, patches, opts):
1038 if not patches and not opts.get('rev'):
1038 if not patches and not opts.get('rev'):
1039 raise util.Abort(_('qdelete requires at least one revision or '
1039 raise util.Abort(_('qdelete requires at least one revision or '
1040 'patch name'))
1040 'patch name'))
1041
1041
1042 realpatches = []
1042 realpatches = []
1043 for patch in patches:
1043 for patch in patches:
1044 patch = self.lookup(patch, strict=True)
1044 patch = self.lookup(patch, strict=True)
1045 info = self.isapplied(patch)
1045 info = self.isapplied(patch)
1046 if info:
1046 if info:
1047 raise util.Abort(_("cannot delete applied patch %s") % patch)
1047 raise util.Abort(_("cannot delete applied patch %s") % patch)
1048 if patch not in self.series:
1048 if patch not in self.series:
1049 raise util.Abort(_("patch %s not in series file") % patch)
1049 raise util.Abort(_("patch %s not in series file") % patch)
1050 if patch not in realpatches:
1050 if patch not in realpatches:
1051 realpatches.append(patch)
1051 realpatches.append(patch)
1052
1052
1053 numrevs = 0
1053 numrevs = 0
1054 if opts.get('rev'):
1054 if opts.get('rev'):
1055 if not self.applied:
1055 if not self.applied:
1056 raise util.Abort(_('no patches applied'))
1056 raise util.Abort(_('no patches applied'))
1057 revs = scmutil.revrange(repo, opts.get('rev'))
1057 revs = scmutil.revrange(repo, opts.get('rev'))
1058 revs.sort()
1058 revs.sort()
1059 revpatches = self._revpatches(repo, revs)
1059 revpatches = self._revpatches(repo, revs)
1060 realpatches += revpatches
1060 realpatches += revpatches
1061 numrevs = len(revpatches)
1061 numrevs = len(revpatches)
1062
1062
1063 self._cleanup(realpatches, numrevs, opts.get('keep'))
1063 self._cleanup(realpatches, numrevs, opts.get('keep'))
1064
1064
1065 def checktoppatch(self, repo):
1065 def checktoppatch(self, repo):
1066 '''check that working directory is at qtip'''
1066 '''check that working directory is at qtip'''
1067 if self.applied:
1067 if self.applied:
1068 top = self.applied[-1].node
1068 top = self.applied[-1].node
1069 patch = self.applied[-1].name
1069 patch = self.applied[-1].name
1070 if repo.dirstate.p1() != top:
1070 if repo.dirstate.p1() != top:
1071 raise util.Abort(_("working directory revision is not qtip"))
1071 raise util.Abort(_("working directory revision is not qtip"))
1072 return top, patch
1072 return top, patch
1073 return None, None
1073 return None, None
1074
1074
1075 def putsubstate2changes(self, substatestate, changes):
1075 def putsubstate2changes(self, substatestate, changes):
1076 for files in changes[:3]:
1076 for files in changes[:3]:
1077 if '.hgsubstate' in files:
1077 if '.hgsubstate' in files:
1078 return # already listed
1078 return # already listed
1079 # not yet listed
1079 # not yet listed
1080 if substatestate in 'a?':
1080 if substatestate in 'a?':
1081 changes[1].append('.hgsubstate')
1081 changes[1].append('.hgsubstate')
1082 elif substatestate in 'r':
1082 elif substatestate in 'r':
1083 changes[2].append('.hgsubstate')
1083 changes[2].append('.hgsubstate')
1084 else: # modified
1084 else: # modified
1085 changes[0].append('.hgsubstate')
1085 changes[0].append('.hgsubstate')
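# --- illustrative aside (editor's sketch, not part of mq) ------------------
# Where putsubstate2changes() above files .hgsubstate, keyed on its dirstate
# state character. The bucket labels are descriptive only.
def substate_bucket(state):
    if state in 'a?':       # added or unknown
        return 'added'
    if state in 'r':        # scheduled for removal
        return 'removed'
    return 'modified'       # anything else is treated as modified

for state in 'a?rn':
    print('%s -> %s' % (state, substate_bucket(state)))
# ---------------------------------------------------------------------------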
1086
1086
1087 def checklocalchanges(self, repo, force=False, refresh=True):
1087 def checklocalchanges(self, repo, force=False, refresh=True):
1088 excsuffix = ''
1088 excsuffix = ''
1089 if refresh:
1089 if refresh:
1090 excsuffix = ', refresh first'
1090 excsuffix = ', refresh first'
1091 # plain versions for i18n tool to detect them
1091 # plain versions for i18n tool to detect them
1092 _("local changes found, refresh first")
1092 _("local changes found, refresh first")
1093 _("local changed subrepos found, refresh first")
1093 _("local changed subrepos found, refresh first")
1094 return checklocalchanges(repo, force, excsuffix)
1094 return checklocalchanges(repo, force, excsuffix)
1095
1095
1096 _reserved = ('series', 'status', 'guards', '.', '..')
1096 _reserved = ('series', 'status', 'guards', '.', '..')
1097 def checkreservedname(self, name):
1097 def checkreservedname(self, name):
1098 if name in self._reserved:
1098 if name in self._reserved:
1099 raise util.Abort(_('"%s" cannot be used as the name of a patch')
1099 raise util.Abort(_('"%s" cannot be used as the name of a patch')
1100 % name)
1100 % name)
1101 for prefix in ('.hg', '.mq'):
1101 for prefix in ('.hg', '.mq'):
1102 if name.startswith(prefix):
1102 if name.startswith(prefix):
1103 raise util.Abort(_('patch name cannot begin with "%s"')
1103 raise util.Abort(_('patch name cannot begin with "%s"')
1104 % prefix)
1104 % prefix)
1105 for c in ('#', ':'):
1105 for c in ('#', ':'):
1106 if c in name:
1106 if c in name:
1107 raise util.Abort(_('"%s" cannot be used in the name of a patch')
1107 raise util.Abort(_('"%s" cannot be used in the name of a patch')
1108 % c)
1108 % c)
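# --- illustrative aside (editor's sketch, not part of mq) ------------------
# The naming rules checkreservedname() above enforces, applied to a few
# made-up candidate names.
RESERVED = ('series', 'status', 'guards', '.', '..')

def name_problem(name):
    if name in RESERVED:
        return 'reserved name'
    if name.startswith(('.hg', '.mq')):
        return 'reserved prefix'
    for c in '#:':
        if c in name:
            return 'forbidden character %r' % c
    return None

for name in ('fix-encoding', 'series', '.hgignore-tweak', 'work:in:progress'):
    print('%s -> %s' % (name, name_problem(name) or 'ok'))
# ---------------------------------------------------------------------------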
1109
1109
1110 def checkpatchname(self, name, force=False):
1110 def checkpatchname(self, name, force=False):
1111 self.checkreservedname(name)
1111 self.checkreservedname(name)
1112 if not force and os.path.exists(self.join(name)):
1112 if not force and os.path.exists(self.join(name)):
1113 if os.path.isdir(self.join(name)):
1113 if os.path.isdir(self.join(name)):
1114 raise util.Abort(_('"%s" already exists as a directory')
1114 raise util.Abort(_('"%s" already exists as a directory')
1115 % name)
1115 % name)
1116 else:
1116 else:
1117 raise util.Abort(_('patch "%s" already exists') % name)
1117 raise util.Abort(_('patch "%s" already exists') % name)
1118
1118
1119 def checkkeepchanges(self, keepchanges, force):
1119 def checkkeepchanges(self, keepchanges, force):
1120 if force and keepchanges:
1120 if force and keepchanges:
1121 raise util.Abort(_('cannot use both --force and --keep-changes'))
1121 raise util.Abort(_('cannot use both --force and --keep-changes'))
1122
1122
1123 def new(self, repo, patchfn, *pats, **opts):
1123 def new(self, repo, patchfn, *pats, **opts):
1124 """options:
1124 """options:
1125 msg: a string or a no-argument function returning a string
1125 msg: a string or a no-argument function returning a string
1126 """
1126 """
1127 msg = opts.get('msg')
1127 msg = opts.get('msg')
1128 edit = opts.get('edit')
1128 edit = opts.get('edit')
1129 editform = opts.get('editform', 'mq.qnew')
1129 editform = opts.get('editform', 'mq.qnew')
1130 user = opts.get('user')
1130 user = opts.get('user')
1131 date = opts.get('date')
1131 date = opts.get('date')
1132 if date:
1132 if date:
1133 date = util.parsedate(date)
1133 date = util.parsedate(date)
1134 diffopts = self.diffopts({'git': opts.get('git')})
1134 diffopts = self.diffopts({'git': opts.get('git')})
1135 if opts.get('checkname', True):
1135 if opts.get('checkname', True):
1136 self.checkpatchname(patchfn)
1136 self.checkpatchname(patchfn)
1137 inclsubs = checksubstate(repo)
1137 inclsubs = checksubstate(repo)
1138 if inclsubs:
1138 if inclsubs:
1139 substatestate = repo.dirstate['.hgsubstate']
1139 substatestate = repo.dirstate['.hgsubstate']
1140 if opts.get('include') or opts.get('exclude') or pats:
1140 if opts.get('include') or opts.get('exclude') or pats:
1141 match = scmutil.match(repo[None], pats, opts)
1141 match = scmutil.match(repo[None], pats, opts)
1142 # detect missing files in pats
1142 # detect missing files in pats
1143 def badfn(f, msg):
1143 def badfn(f, msg):
1144 if f != '.hgsubstate': # .hgsubstate is auto-created
1144 if f != '.hgsubstate': # .hgsubstate is auto-created
1145 raise util.Abort('%s: %s' % (f, msg))
1145 raise util.Abort('%s: %s' % (f, msg))
1146 match.bad = badfn
1146 match.bad = badfn
1147 changes = repo.status(match=match)
1147 changes = repo.status(match=match)
1148 else:
1148 else:
1149 changes = self.checklocalchanges(repo, force=True)
1149 changes = self.checklocalchanges(repo, force=True)
1150 commitfiles = list(inclsubs)
1150 commitfiles = list(inclsubs)
1151 for files in changes[:3]:
1151 for files in changes[:3]:
1152 commitfiles.extend(files)
1152 commitfiles.extend(files)
1153 match = scmutil.matchfiles(repo, commitfiles)
1153 match = scmutil.matchfiles(repo, commitfiles)
1154 if len(repo[None].parents()) > 1:
1154 if len(repo[None].parents()) > 1:
1155 raise util.Abort(_('cannot manage merge changesets'))
1155 raise util.Abort(_('cannot manage merge changesets'))
1156 self.checktoppatch(repo)
1156 self.checktoppatch(repo)
1157 insert = self.fullseriesend()
1157 insert = self.fullseriesend()
1158 wlock = repo.wlock()
1158 wlock = repo.wlock()
1159 try:
1159 try:
1160 try:
1160 try:
1161 # if patch file write fails, abort early
1161 # if patch file write fails, abort early
1162 p = self.opener(patchfn, "w")
1162 p = self.opener(patchfn, "w")
1163 except IOError, e:
1163 except IOError, e:
1164 raise util.Abort(_('cannot write patch "%s": %s')
1164 raise util.Abort(_('cannot write patch "%s": %s')
1165 % (patchfn, e.strerror))
1165 % (patchfn, e.strerror))
1166 try:
1166 try:
1167 defaultmsg = "[mq]: %s" % patchfn
1167 defaultmsg = "[mq]: %s" % patchfn
1168 editor = cmdutil.getcommiteditor(editform=editform)
1168 editor = cmdutil.getcommiteditor(editform=editform)
1169 if edit:
1169 if edit:
1170 def finishdesc(desc):
1170 def finishdesc(desc):
1171 if desc.rstrip():
1171 if desc.rstrip():
1172 return desc
1172 return desc
1173 else:
1173 else:
1174 return defaultmsg
1174 return defaultmsg
1175 # i18n: this message is shown in editor with "HG: " prefix
1175 # i18n: this message is shown in editor with "HG: " prefix
1176 extramsg = _('Leave message empty to use default message.')
1176 extramsg = _('Leave message empty to use default message.')
1177 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1177 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1178 extramsg=extramsg,
1178 extramsg=extramsg,
1179 editform=editform)
1179 editform=editform)
1180 commitmsg = msg
1180 commitmsg = msg
1181 else:
1181 else:
1182 commitmsg = msg or defaultmsg
1182 commitmsg = msg or defaultmsg
1183
1183
1184 n = newcommit(repo, None, commitmsg, user, date, match=match,
1184 n = newcommit(repo, None, commitmsg, user, date, match=match,
1185 force=True, editor=editor)
1185 force=True, editor=editor)
1186 if n is None:
1186 if n is None:
1187 raise util.Abort(_("repo commit failed"))
1187 raise util.Abort(_("repo commit failed"))
1188 try:
1188 try:
1189 self.fullseries[insert:insert] = [patchfn]
1189 self.fullseries[insert:insert] = [patchfn]
1190 self.applied.append(statusentry(n, patchfn))
1190 self.applied.append(statusentry(n, patchfn))
1191 self.parseseries()
1191 self.parseseries()
1192 self.seriesdirty = True
1192 self.seriesdirty = True
1193 self.applieddirty = True
1193 self.applieddirty = True
1194 nctx = repo[n]
1194 nctx = repo[n]
1195 ph = patchheader(self.join(patchfn), self.plainmode)
1195 ph = patchheader(self.join(patchfn), self.plainmode)
1196 if user:
1196 if user:
1197 ph.setuser(user)
1197 ph.setuser(user)
1198 if date:
1198 if date:
1199 ph.setdate('%s %s' % date)
1199 ph.setdate('%s %s' % date)
1200 ph.setparent(hex(nctx.p1().node()))
1200 ph.setparent(hex(nctx.p1().node()))
1201 msg = nctx.description().strip()
1201 msg = nctx.description().strip()
1202 if msg == defaultmsg.strip():
1202 if msg == defaultmsg.strip():
1203 msg = ''
1203 msg = ''
1204 ph.setmessage(msg)
1204 ph.setmessage(msg)
1205 p.write(str(ph))
1205 p.write(str(ph))
1206 if commitfiles:
1206 if commitfiles:
1207 parent = self.qparents(repo, n)
1207 parent = self.qparents(repo, n)
1208 if inclsubs:
1208 if inclsubs:
1209 self.putsubstate2changes(substatestate, changes)
1209 self.putsubstate2changes(substatestate, changes)
1210 chunks = patchmod.diff(repo, node1=parent, node2=n,
1210 chunks = patchmod.diff(repo, node1=parent, node2=n,
1211 changes=changes, opts=diffopts)
1211 changes=changes, opts=diffopts)
1212 for chunk in chunks:
1212 for chunk in chunks:
1213 p.write(chunk)
1213 p.write(chunk)
1214 p.close()
1214 p.close()
1215 r = self.qrepo()
1215 r = self.qrepo()
1216 if r:
1216 if r:
1217 r[None].add([patchfn])
1217 r[None].add([patchfn])
1218 except: # re-raises
1218 except: # re-raises
1219 repo.rollback()
1219 repo.rollback()
1220 raise
1220 raise
1221 except Exception:
1221 except Exception:
1222 patchpath = self.join(patchfn)
1222 patchpath = self.join(patchfn)
1223 try:
1223 try:
1224 os.unlink(patchpath)
1224 os.unlink(patchpath)
1225 except OSError:
1225 except OSError:
1226 self.ui.warn(_('error unlinking %s\n') % patchpath)
1226 self.ui.warn(_('error unlinking %s\n') % patchpath)
1227 raise
1227 raise
1228 self.removeundo(repo)
1228 self.removeundo(repo)
1229 finally:
1229 finally:
1230 release(wlock)
1230 release(wlock)
1231
1231
1232 def isapplied(self, patch):
1232 def isapplied(self, patch):
1233 """returns (index, rev, patch)"""
1233 """returns (index, rev, patch)"""
1234 for i, a in enumerate(self.applied):
1234 for i, a in enumerate(self.applied):
1235 if a.name == patch:
1235 if a.name == patch:
1236 return (i, a.node, a.name)
1236 return (i, a.node, a.name)
1237 return None
1237 return None
1238
1238
1239 # if the exact patch name does not exist, we try a few
1239 # if the exact patch name does not exist, we try a few
1240 # variations. If strict is passed, we try only #1
1240 # variations. If strict is passed, we try only #1
1241 #
1241 #
1242 # 1) a number (as string) to indicate an offset in the series file
1242 # 1) a number (as string) to indicate an offset in the series file
1243 # 2) a unique substring of the patch name was given
1243 # 2) a unique substring of the patch name was given
1244 # 3) patchname[-+]num to indicate an offset in the series file
1244 # 3) patchname[-+]num to indicate an offset in the series file
1245 def lookup(self, patch, strict=False):
1245 def lookup(self, patch, strict=False):
1246 def partialname(s):
1246 def partialname(s):
1247 if s in self.series:
1247 if s in self.series:
1248 return s
1248 return s
1249 matches = [x for x in self.series if s in x]
1249 matches = [x for x in self.series if s in x]
1250 if len(matches) > 1:
1250 if len(matches) > 1:
1251 self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
1251 self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
1252 for m in matches:
1252 for m in matches:
1253 self.ui.warn(' %s\n' % m)
1253 self.ui.warn(' %s\n' % m)
1254 return None
1254 return None
1255 if matches:
1255 if matches:
1256 return matches[0]
1256 return matches[0]
1257 if self.series and self.applied:
1257 if self.series and self.applied:
1258 if s == 'qtip':
1258 if s == 'qtip':
1259 return self.series[self.seriesend(True) - 1]
1259 return self.series[self.seriesend(True) - 1]
1260 if s == 'qbase':
1260 if s == 'qbase':
1261 return self.series[0]
1261 return self.series[0]
1262 return None
1262 return None
1263
1263
1264 if patch in self.series:
1264 if patch in self.series:
1265 return patch
1265 return patch
1266
1266
1267 if not os.path.isfile(self.join(patch)):
1267 if not os.path.isfile(self.join(patch)):
1268 try:
1268 try:
1269 sno = int(patch)
1269 sno = int(patch)
1270 except (ValueError, OverflowError):
1270 except (ValueError, OverflowError):
1271 pass
1271 pass
1272 else:
1272 else:
1273 if -len(self.series) <= sno < len(self.series):
1273 if -len(self.series) <= sno < len(self.series):
1274 return self.series[sno]
1274 return self.series[sno]
1275
1275
1276 if not strict:
1276 if not strict:
1277 res = partialname(patch)
1277 res = partialname(patch)
1278 if res:
1278 if res:
1279 return res
1279 return res
1280 minus = patch.rfind('-')
1280 minus = patch.rfind('-')
1281 if minus >= 0:
1281 if minus >= 0:
1282 res = partialname(patch[:minus])
1282 res = partialname(patch[:minus])
1283 if res:
1283 if res:
1284 i = self.series.index(res)
1284 i = self.series.index(res)
1285 try:
1285 try:
1286 off = int(patch[minus + 1:] or 1)
1286 off = int(patch[minus + 1:] or 1)
1287 except (ValueError, OverflowError):
1287 except (ValueError, OverflowError):
1288 pass
1288 pass
1289 else:
1289 else:
1290 if i - off >= 0:
1290 if i - off >= 0:
1291 return self.series[i - off]
1291 return self.series[i - off]
1292 plus = patch.rfind('+')
1292 plus = patch.rfind('+')
1293 if plus >= 0:
1293 if plus >= 0:
1294 res = partialname(patch[:plus])
1294 res = partialname(patch[:plus])
1295 if res:
1295 if res:
1296 i = self.series.index(res)
1296 i = self.series.index(res)
1297 try:
1297 try:
1298 off = int(patch[plus + 1:] or 1)
1298 off = int(patch[plus + 1:] or 1)
1299 except (ValueError, OverflowError):
1299 except (ValueError, OverflowError):
1300 pass
1300 pass
1301 else:
1301 else:
1302 if i + off < len(self.series):
1302 if i + off < len(self.series):
1303 return self.series[i + off]
1303 return self.series[i + off]
1304 raise util.Abort(_("patch %s not in series") % patch)
1304 raise util.Abort(_("patch %s not in series") % patch)
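# --- illustrative aside (editor's sketch, not part of mq) ------------------
# The "patchname-N" / "patchname+N" offsets that lookup() above resolves
# against the series (simplified: no numeric indexes or substring matching).
# The series entries are invented names.
series = ['first.patch', 'second.patch', 'third.patch', 'fourth.patch']

def resolve(spec):
    if spec in series:
        return spec
    for sep in '-+':
        sep_at = spec.rfind(sep)
        if sep_at < 0 or spec[:sep_at] not in series:
            continue
        i = series.index(spec[:sep_at])
        try:
            off = int(spec[sep_at + 1:] or 1)
        except ValueError:
            continue
        i = i - off if sep == '-' else i + off
        if 0 <= i < len(series):
            return series[i]
    return None

print(resolve('third.patch'))     # third.patch  (exact match)
print(resolve('third.patch-1'))   # second.patch (one earlier in the series)
print(resolve('second.patch+2'))  # fourth.patch (two later in the series)
print(resolve('first.patch-1'))   # None         (would fall off the front)
# ---------------------------------------------------------------------------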
1305
1305
1306 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1306 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1307 all=False, move=False, exact=False, nobackup=False,
1307 all=False, move=False, exact=False, nobackup=False,
1308 keepchanges=False):
1308 keepchanges=False):
1309 self.checkkeepchanges(keepchanges, force)
1309 self.checkkeepchanges(keepchanges, force)
1310 diffopts = self.diffopts()
1310 diffopts = self.diffopts()
1311 wlock = repo.wlock()
1311 wlock = repo.wlock()
1312 try:
1312 try:
1313 heads = []
1313 heads = []
1314 for hs in repo.branchmap().itervalues():
1314 for hs in repo.branchmap().itervalues():
1315 heads.extend(hs)
1315 heads.extend(hs)
1316 if not heads:
1316 if not heads:
1317 heads = [nullid]
1317 heads = [nullid]
1318 if repo.dirstate.p1() not in heads and not exact:
1318 if repo.dirstate.p1() not in heads and not exact:
1319 self.ui.status(_("(working directory not at a head)\n"))
1319 self.ui.status(_("(working directory not at a head)\n"))
1320
1320
1321 if not self.series:
1321 if not self.series:
1322 self.ui.warn(_('no patches in series\n'))
1322 self.ui.warn(_('no patches in series\n'))
1323 return 0
1323 return 0
1324
1324
1325 # Suppose our series file is: A B C and the current 'top'
1325 # Suppose our series file is: A B C and the current 'top'
1326 # patch is B. qpush C should be performed (moving forward);
1326 # patch is B. qpush C should be performed (moving forward);
1327 # qpush B is a NOP (no change); qpush A is an error (can't
1327 # qpush B is a NOP (no change); qpush A is an error (can't
1328 # go backwards with qpush); see the sketch after this method.
1328 # go backwards with qpush); see the sketch after this method.
1329 if patch:
1329 if patch:
1330 patch = self.lookup(patch)
1330 patch = self.lookup(patch)
1331 info = self.isapplied(patch)
1331 info = self.isapplied(patch)
1332 if info and info[0] >= len(self.applied) - 1:
1332 if info and info[0] >= len(self.applied) - 1:
1333 self.ui.warn(
1333 self.ui.warn(
1334 _('qpush: %s is already at the top\n') % patch)
1334 _('qpush: %s is already at the top\n') % patch)
1335 return 0
1335 return 0
1336
1336
1337 pushable, reason = self.pushable(patch)
1337 pushable, reason = self.pushable(patch)
1338 if pushable:
1338 if pushable:
1339 if self.series.index(patch) < self.seriesend():
1339 if self.series.index(patch) < self.seriesend():
1340 raise util.Abort(
1340 raise util.Abort(
1341 _("cannot push to a previous patch: %s") % patch)
1341 _("cannot push to a previous patch: %s") % patch)
1342 else:
1342 else:
1343 if reason:
1343 if reason:
1344 reason = _('guarded by %s') % reason
1344 reason = _('guarded by %s') % reason
1345 else:
1345 else:
1346 reason = _('no matching guards')
1346 reason = _('no matching guards')
1347 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1347 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1348 return 1
1348 return 1
1349 elif all:
1349 elif all:
1350 patch = self.series[-1]
1350 patch = self.series[-1]
1351 if self.isapplied(patch):
1351 if self.isapplied(patch):
1352 self.ui.warn(_('all patches are currently applied\n'))
1352 self.ui.warn(_('all patches are currently applied\n'))
1353 return 0
1353 return 0
1354
1354
1355 # Following the above example, starting at 'top' of B:
1355 # Following the above example, starting at 'top' of B:
1356 # qpush should be performed (pushes C), but a subsequent
1356 # qpush should be performed (pushes C), but a subsequent
1357 # qpush without an argument is an error (nothing to
1357 # qpush without an argument is an error (nothing to
1358 # apply). This allows a loop of "...while hg qpush..." to
1358 # apply). This allows a loop of "...while hg qpush..." to
1359 # work as it detects an error when done
1359 # work as it detects an error when done
1360 start = self.seriesend()
1360 start = self.seriesend()
1361 if start == len(self.series):
1361 if start == len(self.series):
1362 self.ui.warn(_('patch series already fully applied\n'))
1362 self.ui.warn(_('patch series already fully applied\n'))
1363 return 1
1363 return 1
1364 if not force and not keepchanges:
1364 if not force and not keepchanges:
1365 self.checklocalchanges(repo, refresh=self.applied)
1365 self.checklocalchanges(repo, refresh=self.applied)
1366
1366
1367 if exact:
1367 if exact:
1368 if keepchanges:
1368 if keepchanges:
1369 raise util.Abort(
1369 raise util.Abort(
1370 _("cannot use --exact and --keep-changes together"))
1370 _("cannot use --exact and --keep-changes together"))
1371 if move:
1371 if move:
1372 raise util.Abort(_('cannot use --exact and --move '
1372 raise util.Abort(_('cannot use --exact and --move '
1373 'together'))
1373 'together'))
1374 if self.applied:
1374 if self.applied:
1375 raise util.Abort(_('cannot push --exact with applied '
1375 raise util.Abort(_('cannot push --exact with applied '
1376 'patches'))
1376 'patches'))
1377 root = self.series[start]
1377 root = self.series[start]
1378 target = patchheader(self.join(root), self.plainmode).parent
1378 target = patchheader(self.join(root), self.plainmode).parent
1379 if not target:
1379 if not target:
1380 raise util.Abort(
1380 raise util.Abort(
1381 _("%s does not have a parent recorded") % root)
1381 _("%s does not have a parent recorded") % root)
1382 if not repo[target] == repo['.']:
1382 if not repo[target] == repo['.']:
1383 hg.update(repo, target)
1383 hg.update(repo, target)
1384
1384
1385 if move:
1385 if move:
1386 if not patch:
1386 if not patch:
1387 raise util.Abort(_("please specify the patch to move"))
1387 raise util.Abort(_("please specify the patch to move"))
1388 for fullstart, rpn in enumerate(self.fullseries):
1388 for fullstart, rpn in enumerate(self.fullseries):
1389 # strip markers for patch guards
1389 # strip markers for patch guards
1390 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1390 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1391 break
1391 break
1392 for i, rpn in enumerate(self.fullseries[fullstart:]):
1392 for i, rpn in enumerate(self.fullseries[fullstart:]):
1393 # strip markers for patch guards
1393 # strip markers for patch guards
1394 if self.guard_re.split(rpn, 1)[0] == patch:
1394 if self.guard_re.split(rpn, 1)[0] == patch:
1395 break
1395 break
1396 index = fullstart + i
1396 index = fullstart + i
1397 assert index < len(self.fullseries)
1397 assert index < len(self.fullseries)
1398 fullpatch = self.fullseries[index]
1398 fullpatch = self.fullseries[index]
1399 del self.fullseries[index]
1399 del self.fullseries[index]
1400 self.fullseries.insert(fullstart, fullpatch)
1400 self.fullseries.insert(fullstart, fullpatch)
1401 self.parseseries()
1401 self.parseseries()
1402 self.seriesdirty = True
1402 self.seriesdirty = True
1403
1403
1404 self.applieddirty = True
1404 self.applieddirty = True
1405 if start > 0:
1405 if start > 0:
1406 self.checktoppatch(repo)
1406 self.checktoppatch(repo)
1407 if not patch:
1407 if not patch:
1408 patch = self.series[start]
1408 patch = self.series[start]
1409 end = start + 1
1409 end = start + 1
1410 else:
1410 else:
1411 end = self.series.index(patch, start) + 1
1411 end = self.series.index(patch, start) + 1
1412
1412
1413 tobackup = set()
1413 tobackup = set()
1414 if (not nobackup and force) or keepchanges:
1414 if (not nobackup and force) or keepchanges:
1415 status = self.checklocalchanges(repo, force=True)
1415 status = self.checklocalchanges(repo, force=True)
1416 if keepchanges:
1416 if keepchanges:
1417 tobackup.update(status.modified + status.added +
1417 tobackup.update(status.modified + status.added +
1418 status.removed + status.deleted)
1418 status.removed + status.deleted)
1419 else:
1419 else:
1420 tobackup.update(status.modified + status.added)
1420 tobackup.update(status.modified + status.added)
1421
1421
1422 s = self.series[start:end]
1422 s = self.series[start:end]
1423 all_files = set()
1423 all_files = set()
1424 try:
1424 try:
1425 if mergeq:
1425 if mergeq:
1426 ret = self.mergepatch(repo, mergeq, s, diffopts)
1426 ret = self.mergepatch(repo, mergeq, s, diffopts)
1427 else:
1427 else:
1428 ret = self.apply(repo, s, list, all_files=all_files,
1428 ret = self.apply(repo, s, list, all_files=all_files,
1429 tobackup=tobackup, keepchanges=keepchanges)
1429 tobackup=tobackup, keepchanges=keepchanges)
1430 except AbortNoCleanup:
1430 except AbortNoCleanup:
1431 raise
1431 raise
1432 except: # re-raises
1432 except: # re-raises
1433 self.ui.warn(_('cleaning up working directory...'))
1433 self.ui.warn(_('cleaning up working directory...'))
1434 node = repo.dirstate.p1()
1434 node = repo.dirstate.p1()
1435 hg.revert(repo, node, None)
1435 hg.revert(repo, node, None)
1436 # only remove unknown files that we know we touched or
1436 # only remove unknown files that we know we touched or
1437 # created while patching
1437 # created while patching
1438 for f in all_files:
1438 for f in all_files:
1439 if f not in repo.dirstate:
1439 if f not in repo.dirstate:
1440 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
1440 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
1441 self.ui.warn(_('done\n'))
1441 self.ui.warn(_('done\n'))
1442 raise
1442 raise
1443
1443
1444 if not self.applied:
1444 if not self.applied:
1445 return ret[0]
1445 return ret[0]
1446 top = self.applied[-1].name
1446 top = self.applied[-1].name
1447 if ret[0] and ret[0] > 1:
1447 if ret[0] and ret[0] > 1:
1448 msg = _("errors during apply, please fix and refresh %s\n")
1448 msg = _("errors during apply, please fix and refresh %s\n")
1449 self.ui.write(msg % top)
1449 self.ui.write(msg % top)
1450 else:
1450 else:
1451 self.ui.write(_("now at: %s\n") % top)
1451 self.ui.write(_("now at: %s\n") % top)
1452 return ret[0]
1452 return ret[0]
1453
1453
1454 finally:
1454 finally:
1455 wlock.release()
1455 wlock.release()
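# --- illustrative aside (editor's sketch, not part of mq) ------------------
# The forward-only qpush rule described in the comments inside push() above
# ("series A B C, top is B"), modelled with plain lists. Names are invented.
series = ['A', 'B', 'C']
applied = ['A', 'B']                      # the current 'top' patch is B

def qpush(target=None):
    start = len(applied)                  # index of the first unapplied patch
    if target is None:
        if start == len(series):
            return 'error: patch series already fully applied'
        return 'apply %s' % series[start]
    if target in applied:
        if target == applied[-1]:
            return 'noop: %s is already at the top' % target
        return 'error: cannot push to a previous patch: %s' % target
    end = series.index(target) + 1
    return 'apply %s' % ' '.join(series[start:end])

print(qpush('C'))   # apply C  (moving forward)
print(qpush('B'))   # noop: B is already at the top
print(qpush('A'))   # error: cannot push to a previous patch: A
print(qpush())      # apply C  (next unapplied patch)
# ---------------------------------------------------------------------------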
1456
1456
1457 def pop(self, repo, patch=None, force=False, update=True, all=False,
1457 def pop(self, repo, patch=None, force=False, update=True, all=False,
1458 nobackup=False, keepchanges=False):
1458 nobackup=False, keepchanges=False):
1459 self.checkkeepchanges(keepchanges, force)
1459 self.checkkeepchanges(keepchanges, force)
1460 wlock = repo.wlock()
1460 wlock = repo.wlock()
1461 try:
1461 try:
1462 if patch:
1462 if patch:
1463 # index, rev, patch
1463 # index, rev, patch
1464 info = self.isapplied(patch)
1464 info = self.isapplied(patch)
1465 if not info:
1465 if not info:
1466 patch = self.lookup(patch)
1466 patch = self.lookup(patch)
1467 info = self.isapplied(patch)
1467 info = self.isapplied(patch)
1468 if not info:
1468 if not info:
1469 raise util.Abort(_("patch %s is not applied") % patch)
1469 raise util.Abort(_("patch %s is not applied") % patch)
1470
1470
1471 if not self.applied:
1471 if not self.applied:
1472 # Allow qpop -a to work repeatedly,
1472 # Allow qpop -a to work repeatedly,
1473 # but not qpop without an argument
1473 # but not qpop without an argument
1474 self.ui.warn(_("no patches applied\n"))
1474 self.ui.warn(_("no patches applied\n"))
1475 return not all
1475 return not all
1476
1476
1477 if all:
1477 if all:
1478 start = 0
1478 start = 0
1479 elif patch:
1479 elif patch:
1480 start = info[0] + 1
1480 start = info[0] + 1
1481 else:
1481 else:
1482 start = len(self.applied) - 1
1482 start = len(self.applied) - 1
1483
1483
1484 if start >= len(self.applied):
1484 if start >= len(self.applied):
1485 self.ui.warn(_("qpop: %s is already at the top\n") % patch)
1485 self.ui.warn(_("qpop: %s is already at the top\n") % patch)
1486 return
1486 return
1487
1487
1488 if not update:
1488 if not update:
1489 parents = repo.dirstate.parents()
1489 parents = repo.dirstate.parents()
1490 rr = [x.node for x in self.applied]
1490 rr = [x.node for x in self.applied]
1491 for p in parents:
1491 for p in parents:
1492 if p in rr:
1492 if p in rr:
1493 self.ui.warn(_("qpop: forcing dirstate update\n"))
1493 self.ui.warn(_("qpop: forcing dirstate update\n"))
1494 update = True
1494 update = True
1495 else:
1495 else:
1496 parents = [p.node() for p in repo[None].parents()]
1496 parents = [p.node() for p in repo[None].parents()]
1497 needupdate = False
1497 needupdate = False
1498 for entry in self.applied[start:]:
1498 for entry in self.applied[start:]:
1499 if entry.node in parents:
1499 if entry.node in parents:
1500 needupdate = True
1500 needupdate = True
1501 break
1501 break
1502 update = needupdate
1502 update = needupdate
1503
1503
1504 tobackup = set()
1504 tobackup = set()
1505 if update:
1505 if update:
1506 s = self.checklocalchanges(repo, force=force or keepchanges)
1506 s = self.checklocalchanges(repo, force=force or keepchanges)
1507 if force:
1507 if force:
1508 if not nobackup:
1508 if not nobackup:
1509 tobackup.update(s.modified + s.added)
1509 tobackup.update(s.modified + s.added)
1510 elif keepchanges:
1510 elif keepchanges:
1511 tobackup.update(s.modified + s.added +
1511 tobackup.update(s.modified + s.added +
1512 s.removed + s.deleted)
1512 s.removed + s.deleted)
1513
1513
1514 self.applieddirty = True
1514 self.applieddirty = True
1515 end = len(self.applied)
1515 end = len(self.applied)
1516 rev = self.applied[start].node
1516 rev = self.applied[start].node
1517
1517
1518 try:
1518 try:
1519 heads = repo.changelog.heads(rev)
1519 heads = repo.changelog.heads(rev)
1520 except error.LookupError:
1520 except error.LookupError:
1521 node = short(rev)
1521 node = short(rev)
1522 raise util.Abort(_('trying to pop unknown node %s') % node)
1522 raise util.Abort(_('trying to pop unknown node %s') % node)
1523
1523
1524 if heads != [self.applied[-1].node]:
1524 if heads != [self.applied[-1].node]:
1525 raise util.Abort(_("popping would remove a revision not "
1525 raise util.Abort(_("popping would remove a revision not "
1526 "managed by this patch queue"))
1526 "managed by this patch queue"))
1527 if not repo[self.applied[-1].node].mutable():
1527 if not repo[self.applied[-1].node].mutable():
1528 raise util.Abort(
1528 raise util.Abort(
1529 _("popping would remove an immutable revision"),
1529 _("popping would remove a public revision"),
1530 hint=_('see "hg help phases" for details'))
1530 hint=_('see "hg help phases" for details'))
1531
1531
1532 # we know there are no local changes, so we can make a simplified
1532 # we know there are no local changes, so we can make a simplified
1533 # form of hg.update.
1533 # form of hg.update.
1534 if update:
1534 if update:
1535 qp = self.qparents(repo, rev)
1535 qp = self.qparents(repo, rev)
1536 ctx = repo[qp]
1536 ctx = repo[qp]
1537 m, a, r, d = repo.status(qp, '.')[:4]
1537 m, a, r, d = repo.status(qp, '.')[:4]
1538 if d:
1538 if d:
1539 raise util.Abort(_("deletions found between repo revs"))
1539 raise util.Abort(_("deletions found between repo revs"))
1540
1540
1541 tobackup = set(a + m + r) & tobackup
1541 tobackup = set(a + m + r) & tobackup
1542 if keepchanges and tobackup:
1542 if keepchanges and tobackup:
1543 raise util.Abort(_("local changes found, refresh first"))
1543 raise util.Abort(_("local changes found, refresh first"))
1544 self.backup(repo, tobackup)
1544 self.backup(repo, tobackup)
1545 repo.dirstate.beginparentchange()
1545 repo.dirstate.beginparentchange()
1546 for f in a:
1546 for f in a:
1547 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
1547 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
1548 repo.dirstate.drop(f)
1548 repo.dirstate.drop(f)
1549 for f in m + r:
1549 for f in m + r:
1550 fctx = ctx[f]
1550 fctx = ctx[f]
1551 repo.wwrite(f, fctx.data(), fctx.flags())
1551 repo.wwrite(f, fctx.data(), fctx.flags())
1552 repo.dirstate.normal(f)
1552 repo.dirstate.normal(f)
1553 repo.setparents(qp, nullid)
1553 repo.setparents(qp, nullid)
1554 repo.dirstate.endparentchange()
1554 repo.dirstate.endparentchange()
1555 for patch in reversed(self.applied[start:end]):
1555 for patch in reversed(self.applied[start:end]):
1556 self.ui.status(_("popping %s\n") % patch.name)
1556 self.ui.status(_("popping %s\n") % patch.name)
1557 del self.applied[start:end]
1557 del self.applied[start:end]
1558 strip(self.ui, repo, [rev], update=False, backup=False)
1558 strip(self.ui, repo, [rev], update=False, backup=False)
1559 for s, state in repo['.'].substate.items():
1559 for s, state in repo['.'].substate.items():
1560 repo['.'].sub(s).get(state)
1560 repo['.'].sub(s).get(state)
1561 if self.applied:
1561 if self.applied:
1562 self.ui.write(_("now at: %s\n") % self.applied[-1].name)
1562 self.ui.write(_("now at: %s\n") % self.applied[-1].name)
1563 else:
1563 else:
1564 self.ui.write(_("patch queue now empty\n"))
1564 self.ui.write(_("patch queue now empty\n"))
1565 finally:
1565 finally:
1566 wlock.release()
1566 wlock.release()
1567
1567
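# Editor's sketch (not part of mq.py): how pop() above chooses which applied
# patches come off, for the three ways qpop can be invoked. 'applied' stands
# in for the list of applied patch names, oldest first; the function name is
# made up for illustration.
def pop_range(applied, patch=None, all=False):
    if all:
        start = 0                          # qpop -a: pop everything
    elif patch is not None:
        start = applied.index(patch) + 1   # the named patch itself stays applied
    else:
        start = len(applied) - 1           # default: pop only the topmost patch
    return applied[start:]                 # the patches that will come off

assert pop_range(['p1', 'p2', 'p3'], all=True) == ['p1', 'p2', 'p3']
assert pop_range(['p1', 'p2', 'p3'], patch='p1') == ['p2', 'p3']
assert pop_range(['p1', 'p2', 'p3']) == ['p3']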
1568 def diff(self, repo, pats, opts):
1568 def diff(self, repo, pats, opts):
1569 top, patch = self.checktoppatch(repo)
1569 top, patch = self.checktoppatch(repo)
1570 if not top:
1570 if not top:
1571 self.ui.write(_("no patches applied\n"))
1571 self.ui.write(_("no patches applied\n"))
1572 return
1572 return
1573 qp = self.qparents(repo, top)
1573 qp = self.qparents(repo, top)
1574 if opts.get('reverse'):
1574 if opts.get('reverse'):
1575 node1, node2 = None, qp
1575 node1, node2 = None, qp
1576 else:
1576 else:
1577 node1, node2 = qp, None
1577 node1, node2 = qp, None
1578 diffopts = self.diffopts(opts, patch)
1578 diffopts = self.diffopts(opts, patch)
1579 self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)
1579 self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)
1580
1580
1581 def refresh(self, repo, pats=None, **opts):
1581 def refresh(self, repo, pats=None, **opts):
1582 if not self.applied:
1582 if not self.applied:
1583 self.ui.write(_("no patches applied\n"))
1583 self.ui.write(_("no patches applied\n"))
1584 return 1
1584 return 1
1585 msg = opts.get('msg', '').rstrip()
1585 msg = opts.get('msg', '').rstrip()
1586 edit = opts.get('edit')
1586 edit = opts.get('edit')
1587 editform = opts.get('editform', 'mq.qrefresh')
1587 editform = opts.get('editform', 'mq.qrefresh')
1588 newuser = opts.get('user')
1588 newuser = opts.get('user')
1589 newdate = opts.get('date')
1589 newdate = opts.get('date')
1590 if newdate:
1590 if newdate:
1591 newdate = '%d %d' % util.parsedate(newdate)
1591 newdate = '%d %d' % util.parsedate(newdate)
1592 wlock = repo.wlock()
1592 wlock = repo.wlock()
1593
1593
1594 try:
1594 try:
1595 self.checktoppatch(repo)
1595 self.checktoppatch(repo)
1596 (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
1596 (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
1597 if repo.changelog.heads(top) != [top]:
1597 if repo.changelog.heads(top) != [top]:
1598 raise util.Abort(_("cannot refresh a revision with children"))
1598 raise util.Abort(_("cannot refresh a revision with children"))
1599 if not repo[top].mutable():
1599 if not repo[top].mutable():
1600 raise util.Abort(_("cannot refresh immutable revision"),
1600 raise util.Abort(_("cannot refresh public revision"),
1601 hint=_('see "hg help phases" for details'))
1601 hint=_('see "hg help phases" for details'))
1602
1602
1603 cparents = repo.changelog.parents(top)
1603 cparents = repo.changelog.parents(top)
1604 patchparent = self.qparents(repo, top)
1604 patchparent = self.qparents(repo, top)
1605
1605
1606 inclsubs = checksubstate(repo, hex(patchparent))
1606 inclsubs = checksubstate(repo, hex(patchparent))
1607 if inclsubs:
1607 if inclsubs:
1608 substatestate = repo.dirstate['.hgsubstate']
1608 substatestate = repo.dirstate['.hgsubstate']
1609
1609
1610 ph = patchheader(self.join(patchfn), self.plainmode)
1610 ph = patchheader(self.join(patchfn), self.plainmode)
1611 diffopts = self.diffopts({'git': opts.get('git')}, patchfn)
1611 diffopts = self.diffopts({'git': opts.get('git')}, patchfn)
1612 if newuser:
1612 if newuser:
1613 ph.setuser(newuser)
1613 ph.setuser(newuser)
1614 if newdate:
1614 if newdate:
1615 ph.setdate(newdate)
1615 ph.setdate(newdate)
1616 ph.setparent(hex(patchparent))
1616 ph.setparent(hex(patchparent))
1617
1617
1618 # only commit new patch when write is complete
1618 # only commit new patch when write is complete
1619 patchf = self.opener(patchfn, 'w', atomictemp=True)
1619 patchf = self.opener(patchfn, 'w', atomictemp=True)
1620
1620
1621 # update the dirstate in place, strip off the qtip commit
1621 # update the dirstate in place, strip off the qtip commit
1622 # and then commit.
1622 # and then commit.
1623 #
1623 #
1624 # this should really read:
1624 # this should really read:
1625 # mm, dd, aa = repo.status(top, patchparent)[:3]
1625 # mm, dd, aa = repo.status(top, patchparent)[:3]
1626 # but we do it backwards to take advantage of manifest/changelog
1626 # but we do it backwards to take advantage of manifest/changelog
1627 # caching against the next repo.status call
1627 # caching against the next repo.status call
1628 mm, aa, dd = repo.status(patchparent, top)[:3]
1628 mm, aa, dd = repo.status(patchparent, top)[:3]
1629 changes = repo.changelog.read(top)
1629 changes = repo.changelog.read(top)
1630 man = repo.manifest.read(changes[0])
1630 man = repo.manifest.read(changes[0])
1631 aaa = aa[:]
1631 aaa = aa[:]
1632 matchfn = scmutil.match(repo[None], pats, opts)
1632 matchfn = scmutil.match(repo[None], pats, opts)
1633 # in short mode, we only diff the files included in the
1633 # in short mode, we only diff the files included in the
1634 # patch already plus specified files
1634 # patch already plus specified files
1635 if opts.get('short'):
1635 if opts.get('short'):
1636 # if amending a patch, we start with existing
1636 # if amending a patch, we start with existing
1637 # files plus specified files - unfiltered
1637 # files plus specified files - unfiltered
1638 match = scmutil.matchfiles(repo, mm + aa + dd + matchfn.files())
1638 match = scmutil.matchfiles(repo, mm + aa + dd + matchfn.files())
1639 # filter with include/exclude options
1639 # filter with include/exclude options
1640 matchfn = scmutil.match(repo[None], opts=opts)
1640 matchfn = scmutil.match(repo[None], opts=opts)
1641 else:
1641 else:
1642 match = scmutil.matchall(repo)
1642 match = scmutil.matchall(repo)
1643 m, a, r, d = repo.status(match=match)[:4]
1643 m, a, r, d = repo.status(match=match)[:4]
1644 mm = set(mm)
1644 mm = set(mm)
1645 aa = set(aa)
1645 aa = set(aa)
1646 dd = set(dd)
1646 dd = set(dd)
1647
1647
1648 # we might end up with files that were added between
1648 # we might end up with files that were added between
1649 # qtip and the dirstate parent, but then changed in the
1649 # qtip and the dirstate parent, but then changed in the
1650 # local dirstate. in this case, we want them to only
1650 # local dirstate. in this case, we want them to only
1651 # show up in the added section
1651 # show up in the added section
1652 for x in m:
1652 for x in m:
1653 if x not in aa:
1653 if x not in aa:
1654 mm.add(x)
1654 mm.add(x)
1655 # we might end up with files added by the local dirstate that
1655 # we might end up with files added by the local dirstate that
1656 # were deleted by the patch. In this case, they should only
1656 # were deleted by the patch. In this case, they should only
1657 # show up in the changed section.
1657 # show up in the changed section.
1658 for x in a:
1658 for x in a:
1659 if x in dd:
1659 if x in dd:
1660 dd.remove(x)
1660 dd.remove(x)
1661 mm.add(x)
1661 mm.add(x)
1662 else:
1662 else:
1663 aa.add(x)
1663 aa.add(x)
1664 # make sure any files deleted in the local dirstate
1664 # make sure any files deleted in the local dirstate
1665 # are not in the add or change column of the patch
1665 # are not in the add or change column of the patch
1666 forget = []
1666 forget = []
1667 for x in d + r:
1667 for x in d + r:
1668 if x in aa:
1668 if x in aa:
1669 aa.remove(x)
1669 aa.remove(x)
1670 forget.append(x)
1670 forget.append(x)
1671 continue
1671 continue
1672 else:
1672 else:
1673 mm.discard(x)
1673 mm.discard(x)
1674 dd.add(x)
1674 dd.add(x)
1675
1675
1676 m = list(mm)
1676 m = list(mm)
1677 r = list(dd)
1677 r = list(dd)
1678 a = list(aa)
1678 a = list(aa)
1679
1679
1680 # create 'match' that includes the files to be recommitted.
1680 # create 'match' that includes the files to be recommitted.
1681 # apply matchfn via repo.status to ensure correct case handling.
1681 # apply matchfn via repo.status to ensure correct case handling.
1682 cm, ca, cr, cd = repo.status(patchparent, match=matchfn)[:4]
1682 cm, ca, cr, cd = repo.status(patchparent, match=matchfn)[:4]
1683 allmatches = set(cm + ca + cr + cd)
1683 allmatches = set(cm + ca + cr + cd)
1684 refreshchanges = [x.intersection(allmatches) for x in (mm, aa, dd)]
1684 refreshchanges = [x.intersection(allmatches) for x in (mm, aa, dd)]
1685
1685
1686 files = set(inclsubs)
1686 files = set(inclsubs)
1687 for x in refreshchanges:
1687 for x in refreshchanges:
1688 files.update(x)
1688 files.update(x)
1689 match = scmutil.matchfiles(repo, files)
1689 match = scmutil.matchfiles(repo, files)
1690
1690
1691 bmlist = repo[top].bookmarks()
1691 bmlist = repo[top].bookmarks()
1692
1692
1693 dsguard = None
1693 dsguard = None
1694 try:
1694 try:
1695 dsguard = cmdutil.dirstateguard(repo, 'mq.refresh')
1695 dsguard = cmdutil.dirstateguard(repo, 'mq.refresh')
1696 if diffopts.git or diffopts.upgrade:
1696 if diffopts.git or diffopts.upgrade:
1697 copies = {}
1697 copies = {}
1698 for dst in a:
1698 for dst in a:
1699 src = repo.dirstate.copied(dst)
1699 src = repo.dirstate.copied(dst)
1700 # during qfold, the source file for copies may
1700 # during qfold, the source file for copies may
1701 # be removed. Treat this as a simple add.
1701 # be removed. Treat this as a simple add.
1702 if src is not None and src in repo.dirstate:
1702 if src is not None and src in repo.dirstate:
1703 copies.setdefault(src, []).append(dst)
1703 copies.setdefault(src, []).append(dst)
1704 repo.dirstate.add(dst)
1704 repo.dirstate.add(dst)
1705 # remember the copies between patchparent and qtip
1705 # remember the copies between patchparent and qtip
1706 for dst in aaa:
1706 for dst in aaa:
1707 f = repo.file(dst)
1707 f = repo.file(dst)
1708 src = f.renamed(man[dst])
1708 src = f.renamed(man[dst])
1709 if src:
1709 if src:
1710 copies.setdefault(src[0], []).extend(
1710 copies.setdefault(src[0], []).extend(
1711 copies.get(dst, []))
1711 copies.get(dst, []))
1712 if dst in a:
1712 if dst in a:
1713 copies[src[0]].append(dst)
1713 copies[src[0]].append(dst)
1714 # we can't copy a file created by the patch itself
1714 # we can't copy a file created by the patch itself
1715 if dst in copies:
1715 if dst in copies:
1716 del copies[dst]
1716 del copies[dst]
1717 for src, dsts in copies.iteritems():
1717 for src, dsts in copies.iteritems():
1718 for dst in dsts:
1718 for dst in dsts:
1719 repo.dirstate.copy(src, dst)
1719 repo.dirstate.copy(src, dst)
1720 else:
1720 else:
1721 for dst in a:
1721 for dst in a:
1722 repo.dirstate.add(dst)
1722 repo.dirstate.add(dst)
1723 # Drop useless copy information
1723 # Drop useless copy information
1724 for f in list(repo.dirstate.copies()):
1724 for f in list(repo.dirstate.copies()):
1725 repo.dirstate.copy(None, f)
1725 repo.dirstate.copy(None, f)
1726 for f in r:
1726 for f in r:
1727 repo.dirstate.remove(f)
1727 repo.dirstate.remove(f)
1728 # if the patch excludes a modified file, mark that
1728 # if the patch excludes a modified file, mark that
1729 # file with mtime=0 so status can see it.
1729 # file with mtime=0 so status can see it.
1730 mm = []
1730 mm = []
1731 for i in xrange(len(m) - 1, -1, -1):
1731 for i in xrange(len(m) - 1, -1, -1):
1732 if not matchfn(m[i]):
1732 if not matchfn(m[i]):
1733 mm.append(m[i])
1733 mm.append(m[i])
1734 del m[i]
1734 del m[i]
1735 for f in m:
1735 for f in m:
1736 repo.dirstate.normal(f)
1736 repo.dirstate.normal(f)
1737 for f in mm:
1737 for f in mm:
1738 repo.dirstate.normallookup(f)
1738 repo.dirstate.normallookup(f)
1739 for f in forget:
1739 for f in forget:
1740 repo.dirstate.drop(f)
1740 repo.dirstate.drop(f)
1741
1741
1742 user = ph.user or changes[1]
1742 user = ph.user or changes[1]
1743
1743
1744 oldphase = repo[top].phase()
1744 oldphase = repo[top].phase()
1745
1745
1746 # assumes strip can roll itself back if interrupted
1746 # assumes strip can roll itself back if interrupted
1747 repo.setparents(*cparents)
1747 repo.setparents(*cparents)
1748 self.applied.pop()
1748 self.applied.pop()
1749 self.applieddirty = True
1749 self.applieddirty = True
1750 strip(self.ui, repo, [top], update=False, backup=False)
1750 strip(self.ui, repo, [top], update=False, backup=False)
1751 dsguard.close()
1751 dsguard.close()
1752 finally:
1752 finally:
1753 release(dsguard)
1753 release(dsguard)
1754
1754
1755 try:
1755 try:
1756 # might be nice to attempt to roll back strip after this
1756 # might be nice to attempt to roll back strip after this
1757
1757
1758 defaultmsg = "[mq]: %s" % patchfn
1758 defaultmsg = "[mq]: %s" % patchfn
1759 editor = cmdutil.getcommiteditor(editform=editform)
1759 editor = cmdutil.getcommiteditor(editform=editform)
1760 if edit:
1760 if edit:
1761 def finishdesc(desc):
1761 def finishdesc(desc):
1762 if desc.rstrip():
1762 if desc.rstrip():
1763 ph.setmessage(desc)
1763 ph.setmessage(desc)
1764 return desc
1764 return desc
1765 return defaultmsg
1765 return defaultmsg
1766 # i18n: this message is shown in editor with "HG: " prefix
1766 # i18n: this message is shown in editor with "HG: " prefix
1767 extramsg = _('Leave message empty to use default message.')
1767 extramsg = _('Leave message empty to use default message.')
1768 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1768 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1769 extramsg=extramsg,
1769 extramsg=extramsg,
1770 editform=editform)
1770 editform=editform)
1771 message = msg or "\n".join(ph.message)
1771 message = msg or "\n".join(ph.message)
1772 elif not msg:
1772 elif not msg:
1773 if not ph.message:
1773 if not ph.message:
1774 message = defaultmsg
1774 message = defaultmsg
1775 else:
1775 else:
1776 message = "\n".join(ph.message)
1776 message = "\n".join(ph.message)
1777 else:
1777 else:
1778 message = msg
1778 message = msg
1779 ph.setmessage(msg)
1779 ph.setmessage(msg)
1780
1780
1781 # Ensure we create a new changeset in the same phase as
1781 # Ensure we create a new changeset in the same phase as
1782 # the old one.
1782 # the old one.
1783 n = newcommit(repo, oldphase, message, user, ph.date,
1783 n = newcommit(repo, oldphase, message, user, ph.date,
1784 match=match, force=True, editor=editor)
1784 match=match, force=True, editor=editor)
1785 # only write patch after a successful commit
1785 # only write patch after a successful commit
1786 c = [list(x) for x in refreshchanges]
1786 c = [list(x) for x in refreshchanges]
1787 if inclsubs:
1787 if inclsubs:
1788 self.putsubstate2changes(substatestate, c)
1788 self.putsubstate2changes(substatestate, c)
1789 chunks = patchmod.diff(repo, patchparent,
1789 chunks = patchmod.diff(repo, patchparent,
1790 changes=c, opts=diffopts)
1790 changes=c, opts=diffopts)
1791 comments = str(ph)
1791 comments = str(ph)
1792 if comments:
1792 if comments:
1793 patchf.write(comments)
1793 patchf.write(comments)
1794 for chunk in chunks:
1794 for chunk in chunks:
1795 patchf.write(chunk)
1795 patchf.write(chunk)
1796 patchf.close()
1796 patchf.close()
1797
1797
1798 marks = repo._bookmarks
1798 marks = repo._bookmarks
1799 for bm in bmlist:
1799 for bm in bmlist:
1800 marks[bm] = n
1800 marks[bm] = n
1801 marks.write()
1801 marks.write()
1802
1802
1803 self.applied.append(statusentry(n, patchfn))
1803 self.applied.append(statusentry(n, patchfn))
1804 except: # re-raises
1804 except: # re-raises
1805 ctx = repo[cparents[0]]
1805 ctx = repo[cparents[0]]
1806 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1806 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1807 self.savedirty()
1807 self.savedirty()
1808 self.ui.warn(_('refresh interrupted while patch was popped! '
1808 self.ui.warn(_('refresh interrupted while patch was popped! '
1809 '(revert --all, qpush to recover)\n'))
1809 '(revert --all, qpush to recover)\n'))
1810 raise
1810 raise
1811 finally:
1811 finally:
1812 wlock.release()
1812 wlock.release()
1813 self.removeundo(repo)
1813 self.removeundo(repo)
1814
1814
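# Editor's sketch (not part of mq.py): the file-list bookkeeping performed by
# refresh() above, extracted into plain Python. mm/aa/dd are the patch's
# modified/added/deleted lists, m/a/r/d what the working directory reports;
# the goal is that each file ends up in exactly one column of the refreshed
# patch. The function name is made up.
def reconcile(mm, aa, dd, m, a, r, d):
    mm, aa, dd = set(mm), set(aa), set(dd)
    forget = []
    for x in m:                  # modified locally...
        if x not in aa:          # ...and not newly added by the patch
            mm.add(x)
    for x in a:                  # added locally...
        if x in dd:              # ...but deleted by the patch: net change
            dd.remove(x)
            mm.add(x)
        else:
            aa.add(x)
    for x in d + r:              # deleted or removed locally
        if x in aa:              # added by the patch: drop it entirely
            aa.remove(x)
            forget.append(x)
        else:
            mm.discard(x)
            dd.add(x)
    return sorted(mm), sorted(aa), sorted(dd), forget

mm, aa, dd, forget = reconcile(mm=['f.c'], aa=['new.c'], dd=[],
                               m=['g.c'], a=[], r=[], d=['new.c'])
assert forget == ['new.c'] and 'g.c' in mm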
1815 def init(self, repo, create=False):
1815 def init(self, repo, create=False):
1816 if not create and os.path.isdir(self.path):
1816 if not create and os.path.isdir(self.path):
1817 raise util.Abort(_("patch queue directory already exists"))
1817 raise util.Abort(_("patch queue directory already exists"))
1818 try:
1818 try:
1819 os.mkdir(self.path)
1819 os.mkdir(self.path)
1820 except OSError, inst:
1820 except OSError, inst:
1821 if inst.errno != errno.EEXIST or not create:
1821 if inst.errno != errno.EEXIST or not create:
1822 raise
1822 raise
1823 if create:
1823 if create:
1824 return self.qrepo(create=True)
1824 return self.qrepo(create=True)
1825
1825
1826 def unapplied(self, repo, patch=None):
1826 def unapplied(self, repo, patch=None):
1827 if patch and patch not in self.series:
1827 if patch and patch not in self.series:
1828 raise util.Abort(_("patch %s is not in series file") % patch)
1828 raise util.Abort(_("patch %s is not in series file") % patch)
1829 if not patch:
1829 if not patch:
1830 start = self.seriesend()
1830 start = self.seriesend()
1831 else:
1831 else:
1832 start = self.series.index(patch) + 1
1832 start = self.series.index(patch) + 1
1833 unapplied = []
1833 unapplied = []
1834 for i in xrange(start, len(self.series)):
1834 for i in xrange(start, len(self.series)):
1835 pushable, reason = self.pushable(i)
1835 pushable, reason = self.pushable(i)
1836 if pushable:
1836 if pushable:
1837 unapplied.append((i, self.series[i]))
1837 unapplied.append((i, self.series[i]))
1838 self.explainpushable(i)
1838 self.explainpushable(i)
1839 return unapplied
1839 return unapplied
1840
1840
1841 def qseries(self, repo, missing=None, start=0, length=None, status=None,
1841 def qseries(self, repo, missing=None, start=0, length=None, status=None,
1842 summary=False):
1842 summary=False):
1843 def displayname(pfx, patchname, state):
1843 def displayname(pfx, patchname, state):
1844 if pfx:
1844 if pfx:
1845 self.ui.write(pfx)
1845 self.ui.write(pfx)
1846 if summary:
1846 if summary:
1847 ph = patchheader(self.join(patchname), self.plainmode)
1847 ph = patchheader(self.join(patchname), self.plainmode)
1848 if ph.message:
1848 if ph.message:
1849 msg = ph.message[0]
1849 msg = ph.message[0]
1850 else:
1850 else:
1851 msg = ''
1851 msg = ''
1852
1852
1853 if self.ui.formatted():
1853 if self.ui.formatted():
1854 width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
1854 width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
1855 if width > 0:
1855 if width > 0:
1856 msg = util.ellipsis(msg, width)
1856 msg = util.ellipsis(msg, width)
1857 else:
1857 else:
1858 msg = ''
1858 msg = ''
1859 self.ui.write(patchname, label='qseries.' + state)
1859 self.ui.write(patchname, label='qseries.' + state)
1860 self.ui.write(': ')
1860 self.ui.write(': ')
1861 self.ui.write(msg, label='qseries.message.' + state)
1861 self.ui.write(msg, label='qseries.message.' + state)
1862 else:
1862 else:
1863 self.ui.write(patchname, label='qseries.' + state)
1863 self.ui.write(patchname, label='qseries.' + state)
1864 self.ui.write('\n')
1864 self.ui.write('\n')
1865
1865
1866 applied = set([p.name for p in self.applied])
1866 applied = set([p.name for p in self.applied])
1867 if length is None:
1867 if length is None:
1868 length = len(self.series) - start
1868 length = len(self.series) - start
1869 if not missing:
1869 if not missing:
1870 if self.ui.verbose:
1870 if self.ui.verbose:
1871 idxwidth = len(str(start + length - 1))
1871 idxwidth = len(str(start + length - 1))
1872 for i in xrange(start, start + length):
1872 for i in xrange(start, start + length):
1873 patch = self.series[i]
1873 patch = self.series[i]
1874 if patch in applied:
1874 if patch in applied:
1875 char, state = 'A', 'applied'
1875 char, state = 'A', 'applied'
1876 elif self.pushable(i)[0]:
1876 elif self.pushable(i)[0]:
1877 char, state = 'U', 'unapplied'
1877 char, state = 'U', 'unapplied'
1878 else:
1878 else:
1879 char, state = 'G', 'guarded'
1879 char, state = 'G', 'guarded'
1880 pfx = ''
1880 pfx = ''
1881 if self.ui.verbose:
1881 if self.ui.verbose:
1882 pfx = '%*d %s ' % (idxwidth, i, char)
1882 pfx = '%*d %s ' % (idxwidth, i, char)
1883 elif status and status != char:
1883 elif status and status != char:
1884 continue
1884 continue
1885 displayname(pfx, patch, state)
1885 displayname(pfx, patch, state)
1886 else:
1886 else:
1887 msng_list = []
1887 msng_list = []
1888 for root, dirs, files in os.walk(self.path):
1888 for root, dirs, files in os.walk(self.path):
1889 d = root[len(self.path) + 1:]
1889 d = root[len(self.path) + 1:]
1890 for f in files:
1890 for f in files:
1891 fl = os.path.join(d, f)
1891 fl = os.path.join(d, f)
1892 if (fl not in self.series and
1892 if (fl not in self.series and
1893 fl not in (self.statuspath, self.seriespath,
1893 fl not in (self.statuspath, self.seriespath,
1894 self.guardspath)
1894 self.guardspath)
1895 and not fl.startswith('.')):
1895 and not fl.startswith('.')):
1896 msng_list.append(fl)
1896 msng_list.append(fl)
1897 for x in sorted(msng_list):
1897 for x in sorted(msng_list):
1898 pfx = self.ui.verbose and ('D ') or ''
1898 pfx = self.ui.verbose and ('D ') or ''
1899 displayname(pfx, x, 'missing')
1899 displayname(pfx, x, 'missing')
1900
1900
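# Editor's sketch (not part of mq.py): the one-line summary formatting used
# by qseries/displayname above. util.ellipsis is approximated with a plain
# cut; names and the default width are illustrative only.
def summary_line(pfx, patchname, msg, termwidth=80):
    width = termwidth - len(pfx) - len(patchname) - 2   # 2 = len(': ')
    if width <= 0:
        msg = ''
    elif len(msg) > width:
        msg = msg[:max(width - 3, 0)] + '...'
    return '%s%s: %s' % (pfx, patchname, msg)

# e.g. a verbose listing entry trimmed to a 40-column terminal:
print(summary_line('0 A ', 'fix-build.patch', 'rebuild on glibc upgrade', 40))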
1901 def issaveline(self, l):
1901 def issaveline(self, l):
1902 if l.name == '.hg.patches.save.line':
1902 if l.name == '.hg.patches.save.line':
1903 return True
1903 return True
1904
1904
1905 def qrepo(self, create=False):
1905 def qrepo(self, create=False):
1906 ui = self.baseui.copy()
1906 ui = self.baseui.copy()
1907 if create or os.path.isdir(self.join(".hg")):
1907 if create or os.path.isdir(self.join(".hg")):
1908 return hg.repository(ui, path=self.path, create=create)
1908 return hg.repository(ui, path=self.path, create=create)
1909
1909
1910 def restore(self, repo, rev, delete=None, qupdate=None):
1910 def restore(self, repo, rev, delete=None, qupdate=None):
1911 desc = repo[rev].description().strip()
1911 desc = repo[rev].description().strip()
1912 lines = desc.splitlines()
1912 lines = desc.splitlines()
1913 i = 0
1913 i = 0
1914 datastart = None
1914 datastart = None
1915 series = []
1915 series = []
1916 applied = []
1916 applied = []
1917 qpp = None
1917 qpp = None
1918 for i, line in enumerate(lines):
1918 for i, line in enumerate(lines):
1919 if line == 'Patch Data:':
1919 if line == 'Patch Data:':
1920 datastart = i + 1
1920 datastart = i + 1
1921 elif line.startswith('Dirstate:'):
1921 elif line.startswith('Dirstate:'):
1922 l = line.rstrip()
1922 l = line.rstrip()
1923 l = l[10:].split(' ')
1923 l = l[10:].split(' ')
1924 qpp = [bin(x) for x in l]
1924 qpp = [bin(x) for x in l]
1925 elif datastart is not None:
1925 elif datastart is not None:
1926 l = line.rstrip()
1926 l = line.rstrip()
1927 n, name = l.split(':', 1)
1927 n, name = l.split(':', 1)
1928 if n:
1928 if n:
1929 applied.append(statusentry(bin(n), name))
1929 applied.append(statusentry(bin(n), name))
1930 else:
1930 else:
1931 series.append(l)
1931 series.append(l)
1932 if datastart is None:
1932 if datastart is None:
1933 self.ui.warn(_("no saved patch data found\n"))
1933 self.ui.warn(_("no saved patch data found\n"))
1934 return 1
1934 return 1
1935 self.ui.warn(_("restoring status: %s\n") % lines[0])
1935 self.ui.warn(_("restoring status: %s\n") % lines[0])
1936 self.fullseries = series
1936 self.fullseries = series
1937 self.applied = applied
1937 self.applied = applied
1938 self.parseseries()
1938 self.parseseries()
1939 self.seriesdirty = True
1939 self.seriesdirty = True
1940 self.applieddirty = True
1940 self.applieddirty = True
1941 heads = repo.changelog.heads()
1941 heads = repo.changelog.heads()
1942 if delete:
1942 if delete:
1943 if rev not in heads:
1943 if rev not in heads:
1944 self.ui.warn(_("save entry has children, leaving it alone\n"))
1944 self.ui.warn(_("save entry has children, leaving it alone\n"))
1945 else:
1945 else:
1946 self.ui.warn(_("removing save entry %s\n") % short(rev))
1946 self.ui.warn(_("removing save entry %s\n") % short(rev))
1947 pp = repo.dirstate.parents()
1947 pp = repo.dirstate.parents()
1948 if rev in pp:
1948 if rev in pp:
1949 update = True
1949 update = True
1950 else:
1950 else:
1951 update = False
1951 update = False
1952 strip(self.ui, repo, [rev], update=update, backup=False)
1952 strip(self.ui, repo, [rev], update=update, backup=False)
1953 if qpp:
1953 if qpp:
1954 self.ui.warn(_("saved queue repository parents: %s %s\n") %
1954 self.ui.warn(_("saved queue repository parents: %s %s\n") %
1955 (short(qpp[0]), short(qpp[1])))
1955 (short(qpp[0]), short(qpp[1])))
1956 if qupdate:
1956 if qupdate:
1957 self.ui.status(_("updating queue directory\n"))
1957 self.ui.status(_("updating queue directory\n"))
1958 r = self.qrepo()
1958 r = self.qrepo()
1959 if not r:
1959 if not r:
1960 self.ui.warn(_("unable to load queue repository\n"))
1960 self.ui.warn(_("unable to load queue repository\n"))
1961 return 1
1961 return 1
1962 hg.clean(r, qpp[0])
1962 hg.clean(r, qpp[0])
1963
1963
1964 def save(self, repo, msg=None):
1964 def save(self, repo, msg=None):
1965 if not self.applied:
1965 if not self.applied:
1966 self.ui.warn(_("save: no patches applied, exiting\n"))
1966 self.ui.warn(_("save: no patches applied, exiting\n"))
1967 return 1
1967 return 1
1968 if self.issaveline(self.applied[-1]):
1968 if self.issaveline(self.applied[-1]):
1969 self.ui.warn(_("status is already saved\n"))
1969 self.ui.warn(_("status is already saved\n"))
1970 return 1
1970 return 1
1971
1971
1972 if not msg:
1972 if not msg:
1973 msg = _("hg patches saved state")
1973 msg = _("hg patches saved state")
1974 else:
1974 else:
1975 msg = "hg patches: " + msg.rstrip('\r\n')
1975 msg = "hg patches: " + msg.rstrip('\r\n')
1976 r = self.qrepo()
1976 r = self.qrepo()
1977 if r:
1977 if r:
1978 pp = r.dirstate.parents()
1978 pp = r.dirstate.parents()
1979 msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
1979 msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
1980 msg += "\n\nPatch Data:\n"
1980 msg += "\n\nPatch Data:\n"
1981 msg += ''.join('%s\n' % x for x in self.applied)
1981 msg += ''.join('%s\n' % x for x in self.applied)
1982 msg += ''.join(':%s\n' % x for x in self.fullseries)
1982 msg += ''.join(':%s\n' % x for x in self.fullseries)
1983 n = repo.commit(msg, force=True)
1983 n = repo.commit(msg, force=True)
1984 if not n:
1984 if not n:
1985 self.ui.warn(_("repo commit failed\n"))
1985 self.ui.warn(_("repo commit failed\n"))
1986 return 1
1986 return 1
1987 self.applied.append(statusentry(n, '.hg.patches.save.line'))
1987 self.applied.append(statusentry(n, '.hg.patches.save.line'))
1988 self.applieddirty = True
1988 self.applieddirty = True
1989 self.removeundo(repo)
1989 self.removeundo(repo)
1990
1990
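# Editor's sketch (not part of mq.py): a simplified reader for the commit
# description that save() writes and restore() parses above. Applied patches
# are recorded as "<hexnode>:<name>" under a "Patch Data:" marker, remaining
# series entries as ":<name>"; the Dirstate line and statusentry objects are
# left out here.
def parse_patch_data(desc):
    applied, series, datastart = [], [], None
    for i, line in enumerate(desc.splitlines()):
        if line == 'Patch Data:':
            datastart = i + 1
        elif datastart is not None and line:
            node, name = line.split(':', 1)
            if node:
                applied.append((node, name))
            else:
                series.append(name)
    return applied, series

msg = "hg patches saved state\n\nPatch Data:\nabc123:fix.patch\n:pending.patch\n"
assert parse_patch_data(msg) == ([('abc123', 'fix.patch')], ['pending.patch'])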
1991 def fullseriesend(self):
1991 def fullseriesend(self):
1992 if self.applied:
1992 if self.applied:
1993 p = self.applied[-1].name
1993 p = self.applied[-1].name
1994 end = self.findseries(p)
1994 end = self.findseries(p)
1995 if end is None:
1995 if end is None:
1996 return len(self.fullseries)
1996 return len(self.fullseries)
1997 return end + 1
1997 return end + 1
1998 return 0
1998 return 0
1999
1999
2000 def seriesend(self, all_patches=False):
2000 def seriesend(self, all_patches=False):
2001 """If all_patches is False, return the index of the next pushable patch
2001 """If all_patches is False, return the index of the next pushable patch
2002 in the series, or the series length. If all_patches is True, return the
2002 in the series, or the series length. If all_patches is True, return the
2003 index of the first patch past the last applied one.
2003 index of the first patch past the last applied one.
2004 """
2004 """
2005 end = 0
2005 end = 0
2006 def nextpatch(start):
2006 def nextpatch(start):
2007 if all_patches or start >= len(self.series):
2007 if all_patches or start >= len(self.series):
2008 return start
2008 return start
2009 for i in xrange(start, len(self.series)):
2009 for i in xrange(start, len(self.series)):
2010 p, reason = self.pushable(i)
2010 p, reason = self.pushable(i)
2011 if p:
2011 if p:
2012 return i
2012 return i
2013 self.explainpushable(i)
2013 self.explainpushable(i)
2014 return len(self.series)
2014 return len(self.series)
2015 if self.applied:
2015 if self.applied:
2016 p = self.applied[-1].name
2016 p = self.applied[-1].name
2017 try:
2017 try:
2018 end = self.series.index(p)
2018 end = self.series.index(p)
2019 except ValueError:
2019 except ValueError:
2020 return 0
2020 return 0
2021 return nextpatch(end + 1)
2021 return nextpatch(end + 1)
2022 return nextpatch(end)
2022 return nextpatch(end)
2023
2023
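# Editor's sketch (not part of mq.py): the contract documented in seriesend()
# above, with guards reduced to a pushable() callback and the "applied patch
# missing from series" corner case left out.
def series_end(series, applied, pushable, all_patches=False):
    start = series.index(applied[-1]) + 1 if applied else 0
    if all_patches:
        return start                      # first index past the last applied patch
    for i in range(start, len(series)):
        if pushable(series[i]):
            return i                      # next patch qpush would apply
    return len(series)

assert series_end(['a', 'b', 'c'], ['a'], lambda p: p != 'b') == 2
assert series_end(['a', 'b', 'c'], ['a'], lambda p: True, all_patches=True) == 1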
2024 def appliedname(self, index):
2024 def appliedname(self, index):
2025 pname = self.applied[index].name
2025 pname = self.applied[index].name
2026 if not self.ui.verbose:
2026 if not self.ui.verbose:
2027 p = pname
2027 p = pname
2028 else:
2028 else:
2029 p = str(self.series.index(pname)) + " " + pname
2029 p = str(self.series.index(pname)) + " " + pname
2030 return p
2030 return p
2031
2031
2032 def qimport(self, repo, files, patchname=None, rev=None, existing=None,
2032 def qimport(self, repo, files, patchname=None, rev=None, existing=None,
2033 force=None, git=False):
2033 force=None, git=False):
2034 def checkseries(patchname):
2034 def checkseries(patchname):
2035 if patchname in self.series:
2035 if patchname in self.series:
2036 raise util.Abort(_('patch %s is already in the series file')
2036 raise util.Abort(_('patch %s is already in the series file')
2037 % patchname)
2037 % patchname)
2038
2038
2039 if rev:
2039 if rev:
2040 if files:
2040 if files:
2041 raise util.Abort(_('option "-r" not valid when importing '
2041 raise util.Abort(_('option "-r" not valid when importing '
2042 'files'))
2042 'files'))
2043 rev = scmutil.revrange(repo, rev)
2043 rev = scmutil.revrange(repo, rev)
2044 rev.sort(reverse=True)
2044 rev.sort(reverse=True)
2045 elif not files:
2045 elif not files:
2046 raise util.Abort(_('no files or revisions specified'))
2046 raise util.Abort(_('no files or revisions specified'))
2047 if (len(files) > 1 or len(rev) > 1) and patchname:
2047 if (len(files) > 1 or len(rev) > 1) and patchname:
2048 raise util.Abort(_('option "-n" not valid when importing multiple '
2048 raise util.Abort(_('option "-n" not valid when importing multiple '
2049 'patches'))
2049 'patches'))
2050 imported = []
2050 imported = []
2051 if rev:
2051 if rev:
2052 # If mq patches are applied, we can only import revisions
2052 # If mq patches are applied, we can only import revisions
2053 # that form a linear path to qbase.
2053 # that form a linear path to qbase.
2054 # Otherwise, they should form a linear path to a head.
2054 # Otherwise, they should form a linear path to a head.
2055 heads = repo.changelog.heads(repo.changelog.node(rev.first()))
2055 heads = repo.changelog.heads(repo.changelog.node(rev.first()))
2056 if len(heads) > 1:
2056 if len(heads) > 1:
2057 raise util.Abort(_('revision %d is the root of more than one '
2057 raise util.Abort(_('revision %d is the root of more than one '
2058 'branch') % rev.last())
2058 'branch') % rev.last())
2059 if self.applied:
2059 if self.applied:
2060 base = repo.changelog.node(rev.first())
2060 base = repo.changelog.node(rev.first())
2061 if base in [n.node for n in self.applied]:
2061 if base in [n.node for n in self.applied]:
2062 raise util.Abort(_('revision %d is already managed')
2062 raise util.Abort(_('revision %d is already managed')
2063 % rev.first())
2063 % rev.first())
2064 if heads != [self.applied[-1].node]:
2064 if heads != [self.applied[-1].node]:
2065 raise util.Abort(_('revision %d is not the parent of '
2065 raise util.Abort(_('revision %d is not the parent of '
2066 'the queue') % rev.first())
2066 'the queue') % rev.first())
2067 base = repo.changelog.rev(self.applied[0].node)
2067 base = repo.changelog.rev(self.applied[0].node)
2068 lastparent = repo.changelog.parentrevs(base)[0]
2068 lastparent = repo.changelog.parentrevs(base)[0]
2069 else:
2069 else:
2070 if heads != [repo.changelog.node(rev.first())]:
2070 if heads != [repo.changelog.node(rev.first())]:
2071 raise util.Abort(_('revision %d has unmanaged children')
2071 raise util.Abort(_('revision %d has unmanaged children')
2072 % rev.first())
2072 % rev.first())
2073 lastparent = None
2073 lastparent = None
2074
2074
2075 diffopts = self.diffopts({'git': git})
2075 diffopts = self.diffopts({'git': git})
2076 tr = repo.transaction('qimport')
2076 tr = repo.transaction('qimport')
2077 try:
2077 try:
2078 for r in rev:
2078 for r in rev:
2079 if not repo[r].mutable():
2079 if not repo[r].mutable():
2080 raise util.Abort(_('revision %d is not mutable') % r,
2080 raise util.Abort(_('revision %d is not mutable') % r,
2081 hint=_('see "hg help phases" '
2081 hint=_('see "hg help phases" '
2082 'for details'))
2082 'for details'))
2083 p1, p2 = repo.changelog.parentrevs(r)
2083 p1, p2 = repo.changelog.parentrevs(r)
2084 n = repo.changelog.node(r)
2084 n = repo.changelog.node(r)
2085 if p2 != nullrev:
2085 if p2 != nullrev:
2086 raise util.Abort(_('cannot import merge revision %d')
2086 raise util.Abort(_('cannot import merge revision %d')
2087 % r)
2087 % r)
2088 if lastparent and lastparent != r:
2088 if lastparent and lastparent != r:
2089 raise util.Abort(_('revision %d is not the parent of '
2089 raise util.Abort(_('revision %d is not the parent of '
2090 '%d')
2090 '%d')
2091 % (r, lastparent))
2091 % (r, lastparent))
2092 lastparent = p1
2092 lastparent = p1
2093
2093
2094 if not patchname:
2094 if not patchname:
2095 patchname = normname('%d.diff' % r)
2095 patchname = normname('%d.diff' % r)
2096 checkseries(patchname)
2096 checkseries(patchname)
2097 self.checkpatchname(patchname, force)
2097 self.checkpatchname(patchname, force)
2098 self.fullseries.insert(0, patchname)
2098 self.fullseries.insert(0, patchname)
2099
2099
2100 patchf = self.opener(patchname, "w")
2100 patchf = self.opener(patchname, "w")
2101 cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
2101 cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
2102 patchf.close()
2102 patchf.close()
2103
2103
2104 se = statusentry(n, patchname)
2104 se = statusentry(n, patchname)
2105 self.applied.insert(0, se)
2105 self.applied.insert(0, se)
2106
2106
2107 self.added.append(patchname)
2107 self.added.append(patchname)
2108 imported.append(patchname)
2108 imported.append(patchname)
2109 patchname = None
2109 patchname = None
2110 if rev and repo.ui.configbool('mq', 'secret', False):
2110 if rev and repo.ui.configbool('mq', 'secret', False):
2111 # if we added anything with --rev, move the secret root
2111 # if we added anything with --rev, move the secret root
2112 phases.retractboundary(repo, tr, phases.secret, [n])
2112 phases.retractboundary(repo, tr, phases.secret, [n])
2113 self.parseseries()
2113 self.parseseries()
2114 self.applieddirty = True
2114 self.applieddirty = True
2115 self.seriesdirty = True
2115 self.seriesdirty = True
2116 tr.close()
2116 tr.close()
2117 finally:
2117 finally:
2118 tr.release()
2118 tr.release()
2119
2119
2120 for i, filename in enumerate(files):
2120 for i, filename in enumerate(files):
2121 if existing:
2121 if existing:
2122 if filename == '-':
2122 if filename == '-':
2123 raise util.Abort(_('-e is incompatible with import from -'))
2123 raise util.Abort(_('-e is incompatible with import from -'))
2124 filename = normname(filename)
2124 filename = normname(filename)
2125 self.checkreservedname(filename)
2125 self.checkreservedname(filename)
2126 if util.url(filename).islocal():
2126 if util.url(filename).islocal():
2127 originpath = self.join(filename)
2127 originpath = self.join(filename)
2128 if not os.path.isfile(originpath):
2128 if not os.path.isfile(originpath):
2129 raise util.Abort(
2129 raise util.Abort(
2130 _("patch %s does not exist") % filename)
2130 _("patch %s does not exist") % filename)
2131
2131
2132 if patchname:
2132 if patchname:
2133 self.checkpatchname(patchname, force)
2133 self.checkpatchname(patchname, force)
2134
2134
2135 self.ui.write(_('renaming %s to %s\n')
2135 self.ui.write(_('renaming %s to %s\n')
2136 % (filename, patchname))
2136 % (filename, patchname))
2137 util.rename(originpath, self.join(patchname))
2137 util.rename(originpath, self.join(patchname))
2138 else:
2138 else:
2139 patchname = filename
2139 patchname = filename
2140
2140
2141 else:
2141 else:
2142 if filename == '-' and not patchname:
2142 if filename == '-' and not patchname:
2143 raise util.Abort(_('need --name to import a patch from -'))
2143 raise util.Abort(_('need --name to import a patch from -'))
2144 elif not patchname:
2144 elif not patchname:
2145 patchname = normname(os.path.basename(filename.rstrip('/')))
2145 patchname = normname(os.path.basename(filename.rstrip('/')))
2146 self.checkpatchname(patchname, force)
2146 self.checkpatchname(patchname, force)
2147 try:
2147 try:
2148 if filename == '-':
2148 if filename == '-':
2149 text = self.ui.fin.read()
2149 text = self.ui.fin.read()
2150 else:
2150 else:
2151 fp = hg.openpath(self.ui, filename)
2151 fp = hg.openpath(self.ui, filename)
2152 text = fp.read()
2152 text = fp.read()
2153 fp.close()
2153 fp.close()
2154 except (OSError, IOError):
2154 except (OSError, IOError):
2155 raise util.Abort(_("unable to read file %s") % filename)
2155 raise util.Abort(_("unable to read file %s") % filename)
2156 patchf = self.opener(patchname, "w")
2156 patchf = self.opener(patchname, "w")
2157 patchf.write(text)
2157 patchf.write(text)
2158 patchf.close()
2158 patchf.close()
2159 if not force:
2159 if not force:
2160 checkseries(patchname)
2160 checkseries(patchname)
2161 if patchname not in self.series:
2161 if patchname not in self.series:
2162 index = self.fullseriesend() + i
2162 index = self.fullseriesend() + i
2163 self.fullseries[index:index] = [patchname]
2163 self.fullseries[index:index] = [patchname]
2164 self.parseseries()
2164 self.parseseries()
2165 self.seriesdirty = True
2165 self.seriesdirty = True
2166 self.ui.warn(_("adding %s to series file\n") % patchname)
2166 self.ui.warn(_("adding %s to series file\n") % patchname)
2167 self.added.append(patchname)
2167 self.added.append(patchname)
2168 imported.append(patchname)
2168 imported.append(patchname)
2169 patchname = None
2169 patchname = None
2170
2170
2171 self.removeundo(repo)
2171 self.removeundo(repo)
2172 return imported
2172 return imported
2173
2173
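# Editor's sketch (not part of mq.py): the linearity rule enforced by
# qimport --rev above -- candidate revisions (newest first) must form a
# single first-parent chain and may not be merges. Parent lookup is faked
# with callbacks; the function name and messages are illustrative.
def check_linear(revs, first_parent, second_parent=lambda r: None):
    lastparent = None
    for r in revs:
        if second_parent(r) is not None:
            raise ValueError('cannot import merge revision %d' % r)
        if lastparent is not None and lastparent != r:
            raise ValueError('revision %d is not the parent of %d' % (r, lastparent))
        lastparent = first_parent(r)

# a straight chain 5 -> 4 -> 3 (first parent is always rev - 1) is accepted:
check_linear([5, 4, 3], first_parent=lambda r: r - 1)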
2174 def fixkeepchangesopts(ui, opts):
2174 def fixkeepchangesopts(ui, opts):
2175 if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
2175 if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
2176 or opts.get('exact')):
2176 or opts.get('exact')):
2177 return opts
2177 return opts
2178 opts = dict(opts)
2178 opts = dict(opts)
2179 opts['keep_changes'] = True
2179 opts['keep_changes'] = True
2180 return opts
2180 return opts
2181
2181
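# Editor's sketch (not part of mq.py): the rule implemented by
# fixkeepchangesopts() above -- the mq.keepchanges config switches on
# --keep-changes by default unless --force or --exact was given. The config
# lookup is faked with a plain dict.
def fix_keepchanges(config, opts):
    if not config.get('mq.keepchanges') or opts.get('force') or opts.get('exact'):
        return opts
    opts = dict(opts)
    opts['keep_changes'] = True
    return opts

assert fix_keepchanges({'mq.keepchanges': True}, {})['keep_changes'] is True
assert 'keep_changes' not in fix_keepchanges({'mq.keepchanges': True}, {'force': True})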
2182 @command("qdelete|qremove|qrm",
2182 @command("qdelete|qremove|qrm",
2183 [('k', 'keep', None, _('keep patch file')),
2183 [('k', 'keep', None, _('keep patch file')),
2184 ('r', 'rev', [],
2184 ('r', 'rev', [],
2185 _('stop managing a revision (DEPRECATED)'), _('REV'))],
2185 _('stop managing a revision (DEPRECATED)'), _('REV'))],
2186 _('hg qdelete [-k] [PATCH]...'))
2186 _('hg qdelete [-k] [PATCH]...'))
2187 def delete(ui, repo, *patches, **opts):
2187 def delete(ui, repo, *patches, **opts):
2188 """remove patches from queue
2188 """remove patches from queue
2189
2189
2190 The patches must not be applied, and at least one patch is required. Exact
2190 The patches must not be applied, and at least one patch is required. Exact
2191 patch identifiers must be given. With -k/--keep, the patch files are
2191 patch identifiers must be given. With -k/--keep, the patch files are
2192 preserved in the patch directory.
2192 preserved in the patch directory.
2193
2193
2194 To stop managing a patch and move it into permanent history,
2194 To stop managing a patch and move it into permanent history,
2195 use the :hg:`qfinish` command."""
2195 use the :hg:`qfinish` command."""
2196 q = repo.mq
2196 q = repo.mq
2197 q.delete(repo, patches, opts)
2197 q.delete(repo, patches, opts)
2198 q.savedirty()
2198 q.savedirty()
2199 return 0
2199 return 0
2200
2200
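# Editor's sketch (not part of mq.py): the preconditions stated in the
# qdelete help above, expressed as a tiny standalone check. This is not the
# real queue.delete(); names and messages are made up.
def check_delete(patches, applied):
    if not patches:
        raise ValueError('qdelete requires at least one patch name')
    for p in patches:
        if p in applied:
            raise ValueError('cannot delete applied patch %s' % p)

check_delete(['old-fix.patch'], applied=['current.patch'])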
2201 @command("qapplied",
2201 @command("qapplied",
2202 [('1', 'last', None, _('show only the preceding applied patch'))
2202 [('1', 'last', None, _('show only the preceding applied patch'))
2203 ] + seriesopts,
2203 ] + seriesopts,
2204 _('hg qapplied [-1] [-s] [PATCH]'))
2204 _('hg qapplied [-1] [-s] [PATCH]'))
2205 def applied(ui, repo, patch=None, **opts):
2205 def applied(ui, repo, patch=None, **opts):
2206 """print the patches already applied
2206 """print the patches already applied
2207
2207
2208 Returns 0 on success."""
2208 Returns 0 on success."""
2209
2209
2210 q = repo.mq
2210 q = repo.mq
2211
2211
2212 if patch:
2212 if patch:
2213 if patch not in q.series:
2213 if patch not in q.series:
2214 raise util.Abort(_("patch %s is not in series file") % patch)
2214 raise util.Abort(_("patch %s is not in series file") % patch)
2215 end = q.series.index(patch) + 1
2215 end = q.series.index(patch) + 1
2216 else:
2216 else:
2217 end = q.seriesend(True)
2217 end = q.seriesend(True)
2218
2218
2219 if opts.get('last') and not end:
2219 if opts.get('last') and not end:
2220 ui.write(_("no patches applied\n"))
2220 ui.write(_("no patches applied\n"))
2221 return 1
2221 return 1
2222 elif opts.get('last') and end == 1:
2222 elif opts.get('last') and end == 1:
2223 ui.write(_("only one patch applied\n"))
2223 ui.write(_("only one patch applied\n"))
2224 return 1
2224 return 1
2225 elif opts.get('last'):
2225 elif opts.get('last'):
2226 start = end - 2
2226 start = end - 2
2227 end = 1
2227 end = 1
2228 else:
2228 else:
2229 start = 0
2229 start = 0
2230
2230
2231 q.qseries(repo, length=end, start=start, status='A',
2231 q.qseries(repo, length=end, start=start, status='A',
2232 summary=opts.get('summary'))
2232 summary=opts.get('summary'))
2233
2233
2234
2234
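# Editor's sketch (not part of mq.py): the windowing done by qapplied above,
# i.e. which slice of the series gets printed for a plain "hg qapplied"
# versus "hg qapplied --last". Returns (start, length) or None when the
# command would only print a message.
def applied_window(end, last=False):
    # 'end' is the index just past the last applied patch
    if last:
        if end == 0:
            return None            # "no patches applied"
        if end == 1:
            return None            # "only one patch applied"
        return (end - 2, 1)        # only the patch applied just below the top
    return (0, end)                # everything currently applied

assert applied_window(3) == (0, 3)
assert applied_window(3, last=True) == (1, 1)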
2235 @command("qunapplied",
2235 @command("qunapplied",
2236 [('1', 'first', None, _('show only the first patch'))] + seriesopts,
2236 [('1', 'first', None, _('show only the first patch'))] + seriesopts,
2237 _('hg qunapplied [-1] [-s] [PATCH]'))
2237 _('hg qunapplied [-1] [-s] [PATCH]'))
2238 def unapplied(ui, repo, patch=None, **opts):
2238 def unapplied(ui, repo, patch=None, **opts):
2239 """print the patches not yet applied
2239 """print the patches not yet applied
2240
2240
2241 Returns 0 on success."""
2241 Returns 0 on success."""
2242
2242
2243 q = repo.mq
2243 q = repo.mq
2244 if patch:
2244 if patch:
2245 if patch not in q.series:
2245 if patch not in q.series:
2246 raise util.Abort(_("patch %s is not in series file") % patch)
2246 raise util.Abort(_("patch %s is not in series file") % patch)
2247 start = q.series.index(patch) + 1
2247 start = q.series.index(patch) + 1
2248 else:
2248 else:
2249 start = q.seriesend(True)
2249 start = q.seriesend(True)
2250
2250
2251 if start == len(q.series) and opts.get('first'):
2251 if start == len(q.series) and opts.get('first'):
2252 ui.write(_("all patches applied\n"))
2252 ui.write(_("all patches applied\n"))
2253 return 1
2253 return 1
2254
2254
2255 if opts.get('first'):
2255 if opts.get('first'):
2256 length = 1
2256 length = 1
2257 else:
2257 else:
2258 length = None
2258 length = None
2259 q.qseries(repo, start=start, length=length, status='U',
2259 q.qseries(repo, start=start, length=length, status='U',
2260 summary=opts.get('summary'))
2260 summary=opts.get('summary'))
2261
2261
2262 @command("qimport",
2262 @command("qimport",
2263 [('e', 'existing', None, _('import file in patch directory')),
2263 [('e', 'existing', None, _('import file in patch directory')),
2264 ('n', 'name', '',
2264 ('n', 'name', '',
2265 _('name of patch file'), _('NAME')),
2265 _('name of patch file'), _('NAME')),
2266 ('f', 'force', None, _('overwrite existing files')),
2266 ('f', 'force', None, _('overwrite existing files')),
2267 ('r', 'rev', [],
2267 ('r', 'rev', [],
2268 _('place existing revisions under mq control'), _('REV')),
2268 _('place existing revisions under mq control'), _('REV')),
2269 ('g', 'git', None, _('use git extended diff format')),
2269 ('g', 'git', None, _('use git extended diff format')),
2270 ('P', 'push', None, _('qpush after importing'))],
2270 ('P', 'push', None, _('qpush after importing'))],
2271 _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... [FILE]...'))
2271 _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... [FILE]...'))
2272 def qimport(ui, repo, *filename, **opts):
2272 def qimport(ui, repo, *filename, **opts):
2273 """import a patch or existing changeset
2273 """import a patch or existing changeset
2274
2274
2275 The patch is inserted into the series after the last applied
2275 The patch is inserted into the series after the last applied
2276 patch. If no patches have been applied, qimport prepends the patch
2276 patch. If no patches have been applied, qimport prepends the patch
2277 to the series.
2277 to the series.
2278
2278
2279 The patch will have the same name as its source file unless you
2279 The patch will have the same name as its source file unless you
2280 give it a new one with -n/--name.
2280 give it a new one with -n/--name.
2281
2281
2282 You can register an existing patch inside the patch directory with
2282 You can register an existing patch inside the patch directory with
2283 the -e/--existing flag.
2283 the -e/--existing flag.
2284
2284
2285 With -f/--force, an existing patch of the same name will be
2285 With -f/--force, an existing patch of the same name will be
2286 overwritten.
2286 overwritten.
2287
2287
2288 An existing changeset may be placed under mq control with -r/--rev
2288 An existing changeset may be placed under mq control with -r/--rev
2289 (e.g. qimport --rev . -n patch will place the current revision
2289 (e.g. qimport --rev . -n patch will place the current revision
2290 under mq control). With -g/--git, patches imported with --rev will
2290 under mq control). With -g/--git, patches imported with --rev will
2291 use the git diff format. See the diffs help topic for information
2291 use the git diff format. See the diffs help topic for information
2292 on why this is important for preserving rename/copy information
2292 on why this is important for preserving rename/copy information
2293 and permission changes. Use :hg:`qfinish` to remove changesets
2293 and permission changes. Use :hg:`qfinish` to remove changesets
2294 from mq control.
2294 from mq control.
2295
2295
2296 To import a patch from standard input, pass - as the patch file.
2296 To import a patch from standard input, pass - as the patch file.
2297 When importing from standard input, a patch name must be specified
2297 When importing from standard input, a patch name must be specified
2298 using the --name flag.
2298 using the --name flag.
2299
2299
2300 To import an existing patch while renaming it::
2300 To import an existing patch while renaming it::
2301
2301
2302 hg qimport -e existing-patch -n new-name
2302 hg qimport -e existing-patch -n new-name
2303
2303
2304 Returns 0 if import succeeded.
2304 Returns 0 if import succeeded.
2305 """
2305 """
2306 lock = repo.lock() # because this may move phase
2306 lock = repo.lock() # because this may move phase
2307 try:
2307 try:
2308 q = repo.mq
2308 q = repo.mq
2309 try:
2309 try:
2310 imported = q.qimport(
2310 imported = q.qimport(
2311 repo, filename, patchname=opts.get('name'),
2311 repo, filename, patchname=opts.get('name'),
2312 existing=opts.get('existing'), force=opts.get('force'),
2312 existing=opts.get('existing'), force=opts.get('force'),
2313 rev=opts.get('rev'), git=opts.get('git'))
2313 rev=opts.get('rev'), git=opts.get('git'))
2314 finally:
2314 finally:
2315 q.savedirty()
2315 q.savedirty()
2316 finally:
2316 finally:
2317 lock.release()
2317 lock.release()
2318
2318
2319 if imported and opts.get('push') and not opts.get('rev'):
2319 if imported and opts.get('push') and not opts.get('rev'):
2320 return q.push(repo, imported[-1])
2320 return q.push(repo, imported[-1])
2321 return 0
2321 return 0
2322
2322
2323 def qinit(ui, repo, create):
2323 def qinit(ui, repo, create):
2324 """initialize a new queue repository
2324 """initialize a new queue repository
2325
2325
2326 This command also creates a series file for ordering patches, and
2326 This command also creates a series file for ordering patches, and
2327 an mq-specific .hgignore file in the queue repository, to exclude
2327 an mq-specific .hgignore file in the queue repository, to exclude
2328 the status and guards files (these contain mostly transient state).
2328 the status and guards files (these contain mostly transient state).
2329
2329
2330 Returns 0 if initialization succeeded."""
2330 Returns 0 if initialization succeeded."""
2331 q = repo.mq
2331 q = repo.mq
2332 r = q.init(repo, create)
2332 r = q.init(repo, create)
2333 q.savedirty()
2333 q.savedirty()
2334 if r:
2334 if r:
2335 if not os.path.exists(r.wjoin('.hgignore')):
2335 if not os.path.exists(r.wjoin('.hgignore')):
2336 fp = r.wvfs('.hgignore', 'w')
2336 fp = r.wvfs('.hgignore', 'w')
2337 fp.write('^\\.hg\n')
2337 fp.write('^\\.hg\n')
2338 fp.write('^\\.mq\n')
2338 fp.write('^\\.mq\n')
2339 fp.write('syntax: glob\n')
2339 fp.write('syntax: glob\n')
2340 fp.write('status\n')
2340 fp.write('status\n')
2341 fp.write('guards\n')
2341 fp.write('guards\n')
2342 fp.close()
2342 fp.close()
2343 if not os.path.exists(r.wjoin('series')):
2343 if not os.path.exists(r.wjoin('series')):
2344 r.wvfs('series', 'w').close()
2344 r.wvfs('series', 'w').close()
2345 r[None].add(['.hgignore', 'series'])
2345 r[None].add(['.hgignore', 'series'])
2346 commands.add(ui, r)
2346 commands.add(ui, r)
2347 return 0
2347 return 0
2348
2348
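# Editor's note (not part of mq.py): the .hgignore that qinit writes into a
# versioned patch repository, collected into one string for reference; it
# matches the fp.write() calls above.
MQ_HGIGNORE = (
    '^\\.hg\n'
    '^\\.mq\n'
    'syntax: glob\n'
    'status\n'
    'guards\n'
)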
2349 @command("^qinit",
2349 @command("^qinit",
2350 [('c', 'create-repo', None, _('create queue repository'))],
2350 [('c', 'create-repo', None, _('create queue repository'))],
2351 _('hg qinit [-c]'))
2351 _('hg qinit [-c]'))
2352 def init(ui, repo, **opts):
2352 def init(ui, repo, **opts):
2353 """init a new queue repository (DEPRECATED)
2353 """init a new queue repository (DEPRECATED)
2354
2354
2355 The queue repository is unversioned by default. If
2355 The queue repository is unversioned by default. If
2356 -c/--create-repo is specified, qinit will create a separate nested
2356 -c/--create-repo is specified, qinit will create a separate nested
2357 repository for patches (qinit -c may also be run later to convert
2357 repository for patches (qinit -c may also be run later to convert
2358 an unversioned patch repository into a versioned one). You can use
2358 an unversioned patch repository into a versioned one). You can use
2359 qcommit to commit changes to this queue repository.
2359 qcommit to commit changes to this queue repository.
2360
2360
2361 This command is deprecated. Without -c, it's implied by other relevant
2361 This command is deprecated. Without -c, it's implied by other relevant
2362 commands. With -c, use :hg:`init --mq` instead."""
2362 commands. With -c, use :hg:`init --mq` instead."""
2363 return qinit(ui, repo, create=opts.get('create_repo'))
2363 return qinit(ui, repo, create=opts.get('create_repo'))
2364
2364
2365 @command("qclone",
2365 @command("qclone",
2366 [('', 'pull', None, _('use pull protocol to copy metadata')),
2366 [('', 'pull', None, _('use pull protocol to copy metadata')),
2367 ('U', 'noupdate', None,
2367 ('U', 'noupdate', None,
2368 _('do not update the new working directories')),
2368 _('do not update the new working directories')),
2369 ('', 'uncompressed', None,
2369 ('', 'uncompressed', None,
2370 _('use uncompressed transfer (fast over LAN)')),
2370 _('use uncompressed transfer (fast over LAN)')),
2371 ('p', 'patches', '',
2371 ('p', 'patches', '',
2372 _('location of source patch repository'), _('REPO')),
2372 _('location of source patch repository'), _('REPO')),
2373 ] + commands.remoteopts,
2373 ] + commands.remoteopts,
2374 _('hg qclone [OPTION]... SOURCE [DEST]'),
2374 _('hg qclone [OPTION]... SOURCE [DEST]'),
2375 norepo=True)
2375 norepo=True)
2376 def clone(ui, source, dest=None, **opts):
2376 def clone(ui, source, dest=None, **opts):
2377 '''clone main and patch repository at same time
2377 '''clone main and patch repository at same time
2378
2378
2379 If source is local, destination will have no patches applied. If
2379 If source is local, destination will have no patches applied. If
2380 source is remote, this command cannot check whether patches are
2380 source is remote, this command cannot check whether patches are
2381 applied in the source, so it cannot guarantee that patches are not
2381 applied in the source, so it cannot guarantee that patches are not
2382 applied in the destination. If you clone a remote repository, make
2382 applied in the destination. If you clone a remote repository, make
2383 sure beforehand that it has no patches applied.
2383 sure beforehand that it has no patches applied.
2384
2384
2385 Source patch repository is looked for in <src>/.hg/patches by
2385 Source patch repository is looked for in <src>/.hg/patches by
2386 default. Use -p <url> to change.
2386 default. Use -p <url> to change.
2387
2387
2388 The patch directory must be a nested Mercurial repository, as
2388 The patch directory must be a nested Mercurial repository, as
2389 would be created by :hg:`init --mq`.
2389 would be created by :hg:`init --mq`.
2390
2390
2391 Return 0 on success.
2391 Return 0 on success.
2392 '''
2392 '''
2393 def patchdir(repo):
2393 def patchdir(repo):
2394 """compute a patch repo url from a repo object"""
2394 """compute a patch repo url from a repo object"""
2395 url = repo.url()
2395 url = repo.url()
2396 if url.endswith('/'):
2396 if url.endswith('/'):
2397 url = url[:-1]
2397 url = url[:-1]
2398 return url + '/.hg/patches'
2398 return url + '/.hg/patches'
2399
2399
2400 # main repo (destination and sources)
2400 # main repo (destination and sources)
2401 if dest is None:
2401 if dest is None:
2402 dest = hg.defaultdest(source)
2402 dest = hg.defaultdest(source)
2403 sr = hg.peer(ui, opts, ui.expandpath(source))
2403 sr = hg.peer(ui, opts, ui.expandpath(source))
2404
2404
2405 # patches repo (source only)
2405 # patches repo (source only)
2406 if opts.get('patches'):
2406 if opts.get('patches'):
2407 patchespath = ui.expandpath(opts.get('patches'))
2407 patchespath = ui.expandpath(opts.get('patches'))
2408 else:
2408 else:
2409 patchespath = patchdir(sr)
2409 patchespath = patchdir(sr)
2410 try:
2410 try:
2411 hg.peer(ui, opts, patchespath)
2411 hg.peer(ui, opts, patchespath)
2412 except error.RepoError:
2412 except error.RepoError:
2413 raise util.Abort(_('versioned patch repository not found'
2413 raise util.Abort(_('versioned patch repository not found'
2414 ' (see init --mq)'))
2414 ' (see init --mq)'))
2415 qbase, destrev = None, None
2415 qbase, destrev = None, None
2416 if sr.local():
2416 if sr.local():
2417 repo = sr.local()
2417 repo = sr.local()
2418 if repo.mq.applied and repo[qbase].phase() != phases.secret:
2418 if repo.mq.applied and repo[qbase].phase() != phases.secret:
2419 qbase = repo.mq.applied[0].node
2419 qbase = repo.mq.applied[0].node
2420 if not hg.islocal(dest):
2420 if not hg.islocal(dest):
2421 heads = set(repo.heads())
2421 heads = set(repo.heads())
2422 destrev = list(heads.difference(repo.heads(qbase)))
2422 destrev = list(heads.difference(repo.heads(qbase)))
2423 destrev.append(repo.changelog.parents(qbase)[0])
2423 destrev.append(repo.changelog.parents(qbase)[0])
2424 elif sr.capable('lookup'):
2424 elif sr.capable('lookup'):
2425 try:
2425 try:
2426 qbase = sr.lookup('qbase')
2426 qbase = sr.lookup('qbase')
2427 except error.RepoError:
2427 except error.RepoError:
2428 pass
2428 pass
2429
2429
2430 ui.note(_('cloning main repository\n'))
2430 ui.note(_('cloning main repository\n'))
2431 sr, dr = hg.clone(ui, opts, sr.url(), dest,
2431 sr, dr = hg.clone(ui, opts, sr.url(), dest,
2432 pull=opts.get('pull'),
2432 pull=opts.get('pull'),
2433 rev=destrev,
2433 rev=destrev,
2434 update=False,
2434 update=False,
2435 stream=opts.get('uncompressed'))
2435 stream=opts.get('uncompressed'))
2436
2436
2437 ui.note(_('cloning patch repository\n'))
2437 ui.note(_('cloning patch repository\n'))
2438 hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
2438 hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
2439 pull=opts.get('pull'), update=not opts.get('noupdate'),
2439 pull=opts.get('pull'), update=not opts.get('noupdate'),
2440 stream=opts.get('uncompressed'))
2440 stream=opts.get('uncompressed'))
2441
2441
2442 if dr.local():
2442 if dr.local():
2443 repo = dr.local()
2443 repo = dr.local()
2444 if qbase:
2444 if qbase:
2445 ui.note(_('stripping applied patches from destination '
2445 ui.note(_('stripping applied patches from destination '
2446 'repository\n'))
2446 'repository\n'))
2447 strip(ui, repo, [qbase], update=False, backup=None)
2447 strip(ui, repo, [qbase], update=False, backup=None)
2448 if not opts.get('noupdate'):
2448 if not opts.get('noupdate'):
2449 ui.note(_('updating destination repository\n'))
2449 ui.note(_('updating destination repository\n'))
2450 hg.update(repo, repo.changelog.tip())
2450 hg.update(repo, repo.changelog.tip())
2451
2451
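# Illustrative qclone usage (a sketch, not part of mq itself). The repository
# and path names below are hypothetical examples:
#
#   $ hg qclone projectA projectA-copy
#   $ hg qclone -p ../shared-patches projectA projectA-copy
#
# The second form points -p/--patches at an alternate source patch repository
# instead of the default <src>/.hg/patches described in the docstring above.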
2452 @command("qcommit|qci",
2452 @command("qcommit|qci",
2453 commands.table["^commit|ci"][1],
2453 commands.table["^commit|ci"][1],
2454 _('hg qcommit [OPTION]... [FILE]...'),
2454 _('hg qcommit [OPTION]... [FILE]...'),
2455 inferrepo=True)
2455 inferrepo=True)
2456 def commit(ui, repo, *pats, **opts):
2456 def commit(ui, repo, *pats, **opts):
2457 """commit changes in the queue repository (DEPRECATED)
2457 """commit changes in the queue repository (DEPRECATED)
2458
2458
2459 This command is deprecated; use :hg:`commit --mq` instead."""
2459 This command is deprecated; use :hg:`commit --mq` instead."""
2460 q = repo.mq
2460 q = repo.mq
2461 r = q.qrepo()
2461 r = q.qrepo()
2462 if not r:
2462 if not r:
2463 raise util.Abort('no queue repository')
2463 raise util.Abort('no queue repository')
2464 commands.commit(r.ui, r, *pats, **opts)
2464 commands.commit(r.ui, r, *pats, **opts)
2465
2465
2466 @command("qseries",
2466 @command("qseries",
2467 [('m', 'missing', None, _('print patches not in series')),
2467 [('m', 'missing', None, _('print patches not in series')),
2468 ] + seriesopts,
2468 ] + seriesopts,
2469 _('hg qseries [-ms]'))
2469 _('hg qseries [-ms]'))
2470 def series(ui, repo, **opts):
2470 def series(ui, repo, **opts):
2471 """print the entire series file
2471 """print the entire series file
2472
2472
2473 Returns 0 on success."""
2473 Returns 0 on success."""
2474 repo.mq.qseries(repo, missing=opts.get('missing'),
2474 repo.mq.qseries(repo, missing=opts.get('missing'),
2475 summary=opts.get('summary'))
2475 summary=opts.get('summary'))
2476 return 0
2476 return 0
2477
2477
2478 @command("qtop", seriesopts, _('hg qtop [-s]'))
2478 @command("qtop", seriesopts, _('hg qtop [-s]'))
2479 def top(ui, repo, **opts):
2479 def top(ui, repo, **opts):
2480 """print the name of the current patch
2480 """print the name of the current patch
2481
2481
2482 Returns 0 on success."""
2482 Returns 0 on success."""
2483 q = repo.mq
2483 q = repo.mq
2484 if q.applied:
2484 if q.applied:
2485 t = q.seriesend(True)
2485 t = q.seriesend(True)
2486 else:
2486 else:
2487 t = 0
2487 t = 0
2488
2488
2489 if t:
2489 if t:
2490 q.qseries(repo, start=t - 1, length=1, status='A',
2490 q.qseries(repo, start=t - 1, length=1, status='A',
2491 summary=opts.get('summary'))
2491 summary=opts.get('summary'))
2492 else:
2492 else:
2493 ui.write(_("no patches applied\n"))
2493 ui.write(_("no patches applied\n"))
2494 return 1
2494 return 1
2495
2495
2496 @command("qnext", seriesopts, _('hg qnext [-s]'))
2496 @command("qnext", seriesopts, _('hg qnext [-s]'))
2497 def next(ui, repo, **opts):
2497 def next(ui, repo, **opts):
2498 """print the name of the next pushable patch
2498 """print the name of the next pushable patch
2499
2499
2500 Returns 0 on success."""
2500 Returns 0 on success."""
2501 q = repo.mq
2501 q = repo.mq
2502 end = q.seriesend()
2502 end = q.seriesend()
2503 if end == len(q.series):
2503 if end == len(q.series):
2504 ui.write(_("all patches applied\n"))
2504 ui.write(_("all patches applied\n"))
2505 return 1
2505 return 1
2506 q.qseries(repo, start=end, length=1, summary=opts.get('summary'))
2506 q.qseries(repo, start=end, length=1, summary=opts.get('summary'))
2507
2507
2508 @command("qprev", seriesopts, _('hg qprev [-s]'))
2508 @command("qprev", seriesopts, _('hg qprev [-s]'))
2509 def prev(ui, repo, **opts):
2509 def prev(ui, repo, **opts):
2510 """print the name of the preceding applied patch
2510 """print the name of the preceding applied patch
2511
2511
2512 Returns 0 on success."""
2512 Returns 0 on success."""
2513 q = repo.mq
2513 q = repo.mq
2514 l = len(q.applied)
2514 l = len(q.applied)
2515 if l == 1:
2515 if l == 1:
2516 ui.write(_("only one patch applied\n"))
2516 ui.write(_("only one patch applied\n"))
2517 return 1
2517 return 1
2518 if not l:
2518 if not l:
2519 ui.write(_("no patches applied\n"))
2519 ui.write(_("no patches applied\n"))
2520 return 1
2520 return 1
2521 idx = q.series.index(q.applied[-2].name)
2521 idx = q.series.index(q.applied[-2].name)
2522 q.qseries(repo, start=idx, length=1, status='A',
2522 q.qseries(repo, start=idx, length=1, status='A',
2523 summary=opts.get('summary'))
2523 summary=opts.get('summary'))
2524
2524
def setupheaderopts(ui, opts):
    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % util.makedate()

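# Minimal sketch of what setupheaderopts does for the -U/--currentuser and
# -D/--currentdate flags, using plain stand-ins for ui.username() and
# util.makedate(); 'example_user' and the fixed timestamp are hypothetical
# values, not anything produced by this module.
def _setupheaderopts_sketch(opts, username='example_user',
                            now=(1431900000, 0)):
    # -U fills in the user only when -u/--user was not given explicitly.
    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = username
    # -D fills in the date the same way, formatted as "unixtime offset".
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % now
    return opts

# Example: {'currentuser': True, 'currentdate': True} becomes
# {'currentuser': True, 'currentdate': True,
#  'user': 'example_user', 'date': '1431900000 0'}.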
2531 @command("^qnew",
2531 @command("^qnew",
2532 [('e', 'edit', None, _('invoke editor on commit messages')),
2532 [('e', 'edit', None, _('invoke editor on commit messages')),
2533 ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
2533 ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
2534 ('g', 'git', None, _('use git extended diff format')),
2534 ('g', 'git', None, _('use git extended diff format')),
2535 ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
2535 ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
2536 ('u', 'user', '',
2536 ('u', 'user', '',
2537 _('add "From: <USER>" to patch'), _('USER')),
2537 _('add "From: <USER>" to patch'), _('USER')),
2538 ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
2538 ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
2539 ('d', 'date', '',
2539 ('d', 'date', '',
2540 _('add "Date: <DATE>" to patch'), _('DATE'))
2540 _('add "Date: <DATE>" to patch'), _('DATE'))
2541 ] + commands.walkopts + commands.commitopts,
2541 ] + commands.walkopts + commands.commitopts,
2542 _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'),
2542 _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'),
2543 inferrepo=True)
2543 inferrepo=True)
2544 def new(ui, repo, patch, *args, **opts):
2544 def new(ui, repo, patch, *args, **opts):
2545 """create a new patch
2545 """create a new patch
2546
2546
2547 qnew creates a new patch on top of the currently-applied patch (if
2547 qnew creates a new patch on top of the currently-applied patch (if
2548 any). The patch will be initialized with any outstanding changes
2548 any). The patch will be initialized with any outstanding changes
2549 in the working directory. You may also use -I/--include,
2549 in the working directory. You may also use -I/--include,
2550 -X/--exclude, and/or a list of files after the patch name to add
2550 -X/--exclude, and/or a list of files after the patch name to add
2551 only changes to matching files to the new patch, leaving the rest
2551 only changes to matching files to the new patch, leaving the rest
2552 as uncommitted modifications.
2552 as uncommitted modifications.
2553
2553
2554 -u/--user and -d/--date can be used to set the (given) user and
2554 -u/--user and -d/--date can be used to set the (given) user and
2555 date, respectively. -U/--currentuser and -D/--currentdate set user
2555 date, respectively. -U/--currentuser and -D/--currentdate set user
2556 to current user and date to current date.
2556 to current user and date to current date.
2557
2557
2558 -e/--edit, -m/--message or -l/--logfile set the patch header as
2558 -e/--edit, -m/--message or -l/--logfile set the patch header as
2559 well as the commit message. If none is specified, the header is
2559 well as the commit message. If none is specified, the header is
2560 empty and the commit message is '[mq]: PATCH'.
2560 empty and the commit message is '[mq]: PATCH'.
2561
2561
2562 Use the -g/--git option to keep the patch in the git extended diff
2562 Use the -g/--git option to keep the patch in the git extended diff
2563 format. Read the diffs help topic for more information on why this
2563 format. Read the diffs help topic for more information on why this
2564 is important for preserving permission changes and copy/rename
2564 is important for preserving permission changes and copy/rename
2565 information.
2565 information.
2566
2566
2567 Returns 0 on successful creation of a new patch.
2567 Returns 0 on successful creation of a new patch.
2568 """
2568 """
2569 msg = cmdutil.logmessage(ui, opts)
2569 msg = cmdutil.logmessage(ui, opts)
2570 q = repo.mq
2570 q = repo.mq
2571 opts['msg'] = msg
2571 opts['msg'] = msg
2572 setupheaderopts(ui, opts)
2572 setupheaderopts(ui, opts)
2573 q.new(repo, patch, *args, **opts)
2573 q.new(repo, patch, *args, **opts)
2574 q.savedirty()
2574 q.savedirty()
2575 return 0
2575 return 0
2576
2576
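# Illustrative qnew invocations (a usage sketch; patch and file names are
# hypothetical), using only the options documented in the docstring above:
#
#   $ hg qnew fix-crash.patch                    # empty patch, message '[mq]: fix-crash.patch'
#   $ hg qnew -m "fix crash on startup" -U -D fix-crash.patch
#   $ hg qnew -g -I 'src/*.c' fix-crash.patch    # only matching changes, git diff format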
2577 @command("^qrefresh",
2577 @command("^qrefresh",
2578 [('e', 'edit', None, _('invoke editor on commit messages')),
2578 [('e', 'edit', None, _('invoke editor on commit messages')),
2579 ('g', 'git', None, _('use git extended diff format')),
2579 ('g', 'git', None, _('use git extended diff format')),
2580 ('s', 'short', None,
2580 ('s', 'short', None,
2581 _('refresh only files already in the patch and specified files')),
2581 _('refresh only files already in the patch and specified files')),
2582 ('U', 'currentuser', None,
2582 ('U', 'currentuser', None,
2583 _('add/update author field in patch with current user')),
2583 _('add/update author field in patch with current user')),
2584 ('u', 'user', '',
2584 ('u', 'user', '',
2585 _('add/update author field in patch with given user'), _('USER')),
2585 _('add/update author field in patch with given user'), _('USER')),
2586 ('D', 'currentdate', None,
2586 ('D', 'currentdate', None,
2587 _('add/update date field in patch with current date')),
2587 _('add/update date field in patch with current date')),
2588 ('d', 'date', '',
2588 ('d', 'date', '',
2589 _('add/update date field in patch with given date'), _('DATE'))
2589 _('add/update date field in patch with given date'), _('DATE'))
2590 ] + commands.walkopts + commands.commitopts,
2590 ] + commands.walkopts + commands.commitopts,
2591 _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'),
2591 _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'),
2592 inferrepo=True)
2592 inferrepo=True)
2593 def refresh(ui, repo, *pats, **opts):
2593 def refresh(ui, repo, *pats, **opts):
2594 """update the current patch
2594 """update the current patch
2595
2595
2596 If any file patterns are provided, the refreshed patch will
2596 If any file patterns are provided, the refreshed patch will
2597 contain only the modifications that match those patterns; the
2597 contain only the modifications that match those patterns; the
2598 remaining modifications will remain in the working directory.
2598 remaining modifications will remain in the working directory.
2599
2599
2600 If -s/--short is specified, files currently included in the patch
2600 If -s/--short is specified, files currently included in the patch
2601 will be refreshed just like matched files and remain in the patch.
2601 will be refreshed just like matched files and remain in the patch.
2602
2602
2603 If -e/--edit is specified, Mercurial will start your configured editor for
2603 If -e/--edit is specified, Mercurial will start your configured editor for
2604 you to enter a message. In case qrefresh fails, you will find a backup of
2604 you to enter a message. In case qrefresh fails, you will find a backup of
2605 your message in ``.hg/last-message.txt``.
2605 your message in ``.hg/last-message.txt``.
2606
2606
2607 hg add/remove/copy/rename work as usual, though you might want to
2607 hg add/remove/copy/rename work as usual, though you might want to
2608 use git-style patches (-g/--git or [diff] git=1) to track copies
2608 use git-style patches (-g/--git or [diff] git=1) to track copies
2609 and renames. See the diffs help topic for more information on the
2609 and renames. See the diffs help topic for more information on the
2610 git diff format.
2610 git diff format.
2611
2611
2612 Returns 0 on success.
2612 Returns 0 on success.
2613 """
2613 """
2614 q = repo.mq
2614 q = repo.mq
2615 message = cmdutil.logmessage(ui, opts)
2615 message = cmdutil.logmessage(ui, opts)
2616 setupheaderopts(ui, opts)
2616 setupheaderopts(ui, opts)
2617 wlock = repo.wlock()
2617 wlock = repo.wlock()
2618 try:
2618 try:
2619 ret = q.refresh(repo, pats, msg=message, **opts)
2619 ret = q.refresh(repo, pats, msg=message, **opts)
2620 q.savedirty()
2620 q.savedirty()
2621 return ret
2621 return ret
2622 finally:
2622 finally:
2623 wlock.release()
2623 wlock.release()
2624
2624
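# Illustrative qrefresh usage (a sketch, not part of mq; the file name is
# hypothetical), using only flags documented in the docstring above:
#
#   $ hg qrefresh                  # fold working-directory changes into the top patch
#   $ hg qrefresh -e               # also reopen the editor on the patch message
#   $ hg qrefresh -s util.c        # keep files already in the patch, plus util.c
#
# If the refresh fails, the entered message is preserved in
# .hg/last-message.txt, as noted in the docstring.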
2625 @command("^qdiff",
2625 @command("^qdiff",
2626 commands.diffopts + commands.diffopts2 + commands.walkopts,
2626 commands.diffopts + commands.diffopts2 + commands.walkopts,
2627 _('hg qdiff [OPTION]... [FILE]...'),
2627 _('hg qdiff [OPTION]... [FILE]...'),
2628 inferrepo=True)
2628 inferrepo=True)
2629 def diff(ui, repo, *pats, **opts):
2629 def diff(ui, repo, *pats, **opts):
2630 """diff of the current patch and subsequent modifications
2630 """diff of the current patch and subsequent modifications
2631
2631
2632 Shows a diff which includes the current patch as well as any
2632 Shows a diff which includes the current patch as well as any
2633 changes which have been made in the working directory since the
2633 changes which have been made in the working directory since the
2634 last refresh (thus showing what the current patch would become
2634 last refresh (thus showing what the current patch would become
2635 after a qrefresh).
2635 after a qrefresh).
2636
2636
2637 Use :hg:`diff` if you only want to see the changes made since the
2637 Use :hg:`diff` if you only want to see the changes made since the
2638 last qrefresh, or :hg:`export qtip` if you want to see changes
2638 last qrefresh, or :hg:`export qtip` if you want to see changes
2639 made by the current patch without including changes made since the
2639 made by the current patch without including changes made since the
2640 qrefresh.
2640 qrefresh.
2641
2641
2642 Returns 0 on success.
2642 Returns 0 on success.
2643 """
2643 """
2644 repo.mq.diff(repo, pats, opts)
2644 repo.mq.diff(repo, pats, opts)
2645 return 0
2645 return 0
2646
2646
@command('qfold',
         [('e', 'edit', None, _('invoke editor on commit messages')),
          ('k', 'keep', None, _('keep folded patch files')),
         ] + commands.commitopts,
         _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...'))
def fold(ui, repo, *files, **opts):
    """fold the named patches into the current patch

    Patches must not yet be applied. Each patch will be successively
    applied to the current patch in the order given. If all the
    patches apply successfully, the current patch will be refreshed
    with the new cumulative patch, and the folded patches will be
    deleted. With -k/--keep, the folded patch files will not be
    removed afterwards.

    The header for each folded patch will be concatenated with the
    current patch header, separated by a line of ``* * *``.

    Returns 0 on success."""
    q = repo.mq
    if not files:
        raise util.Abort(_('qfold requires at least one patch name'))
    if not q.checktoppatch(repo)[0]:
        raise util.Abort(_('no patches applied'))
    q.checklocalchanges(repo)

    message = cmdutil.logmessage(ui, opts)

    parent = q.lookup('qtip')
    patches = []
    messages = []
    for f in files:
        p = q.lookup(f)
        if p in patches or p == parent:
            ui.warn(_('skipping already folded patch %s\n') % p)
        if q.isapplied(p):
            raise util.Abort(_('qfold cannot fold already applied patch %s')
                             % p)
        patches.append(p)

    for p in patches:
        if not message:
            ph = patchheader(q.join(p), q.plainmode)
            if ph.message:
                messages.append(ph.message)
        pf = q.join(p)
        (patchsuccess, files, fuzz) = q.patch(repo, pf)
        if not patchsuccess:
            raise util.Abort(_('error folding patch %s') % p)

    if not message:
        ph = patchheader(q.join(parent), q.plainmode)
        message = ph.message
    for msg in messages:
        if msg:
            if message:
                message.append('* * *')
            message.extend(msg)
    message = '\n'.join(message)

    diffopts = q.patchopts(q.diffopts(), *patches)
    wlock = repo.wlock()
    try:
        q.refresh(repo, msg=message, git=diffopts.git, edit=opts.get('edit'),
                  editform='mq.qfold')
        q.delete(repo, patches, opts)
        q.savedirty()
    finally:
        wlock.release()

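# Small sketch of how fold() assembles the combined commit message: the
# current patch's message lines come first, and each folded patch's message
# is appended after a '* * *' separator line. The messages used in the usage
# comment are hypothetical.
def _fold_messages_sketch(current, folded):
    message = list(current)
    for msg in folded:
        if msg:
            if message:
                message.append('* * *')
            message.extend(msg)
    return '\n'.join(message)

# _fold_messages_sketch(['add frob helper'], [['fix frob edge case']])
# -> 'add frob helper\n* * *\nfix frob edge case'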
2717 @command("qgoto",
2717 @command("qgoto",
2718 [('', 'keep-changes', None,
2718 [('', 'keep-changes', None,
2719 _('tolerate non-conflicting local changes')),
2719 _('tolerate non-conflicting local changes')),
2720 ('f', 'force', None, _('overwrite any local changes')),
2720 ('f', 'force', None, _('overwrite any local changes')),
2721 ('', 'no-backup', None, _('do not save backup copies of files'))],
2721 ('', 'no-backup', None, _('do not save backup copies of files'))],
2722 _('hg qgoto [OPTION]... PATCH'))
2722 _('hg qgoto [OPTION]... PATCH'))
2723 def goto(ui, repo, patch, **opts):
2723 def goto(ui, repo, patch, **opts):
2724 '''push or pop patches until named patch is at top of stack
2724 '''push or pop patches until named patch is at top of stack
2725
2725
2726 Returns 0 on success.'''
2726 Returns 0 on success.'''
2727 opts = fixkeepchangesopts(ui, opts)
2727 opts = fixkeepchangesopts(ui, opts)
2728 q = repo.mq
2728 q = repo.mq
2729 patch = q.lookup(patch)
2729 patch = q.lookup(patch)
2730 nobackup = opts.get('no_backup')
2730 nobackup = opts.get('no_backup')
2731 keepchanges = opts.get('keep_changes')
2731 keepchanges = opts.get('keep_changes')
2732 if q.isapplied(patch):
2732 if q.isapplied(patch):
2733 ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
2733 ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
2734 keepchanges=keepchanges)
2734 keepchanges=keepchanges)
2735 else:
2735 else:
2736 ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
2736 ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
2737 keepchanges=keepchanges)
2737 keepchanges=keepchanges)
2738 q.savedirty()
2738 q.savedirty()
2739 return ret
2739 return ret
2740
2740
2741 @command("qguard",
2741 @command("qguard",
2742 [('l', 'list', None, _('list all patches and guards')),
2742 [('l', 'list', None, _('list all patches and guards')),
2743 ('n', 'none', None, _('drop all guards'))],
2743 ('n', 'none', None, _('drop all guards'))],
2744 _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
2744 _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
2745 def guard(ui, repo, *args, **opts):
2745 def guard(ui, repo, *args, **opts):
2746 '''set or print guards for a patch
2746 '''set or print guards for a patch
2747
2747
2748 Guards control whether a patch can be pushed. A patch with no
2748 Guards control whether a patch can be pushed. A patch with no
2749 guards is always pushed. A patch with a positive guard ("+foo") is
2749 guards is always pushed. A patch with a positive guard ("+foo") is
2750 pushed only if the :hg:`qselect` command has activated it. A patch with
2750 pushed only if the :hg:`qselect` command has activated it. A patch with
2751 a negative guard ("-foo") is never pushed if the :hg:`qselect` command
2751 a negative guard ("-foo") is never pushed if the :hg:`qselect` command
2752 has activated it.
2752 has activated it.
2753
2753
2754 With no arguments, print the currently active guards.
2754 With no arguments, print the currently active guards.
2755 With arguments, set guards for the named patch.
2755 With arguments, set guards for the named patch.
2756
2756
2757 .. note::
2757 .. note::
2758
2758
2759 Specifying negative guards now requires '--'.
2759 Specifying negative guards now requires '--'.
2760
2760
2761 To set guards on another patch::
2761 To set guards on another patch::
2762
2762
2763 hg qguard other.patch -- +2.6.17 -stable
2763 hg qguard other.patch -- +2.6.17 -stable
2764
2764
2765 Returns 0 on success.
2765 Returns 0 on success.
2766 '''
2766 '''
2767 def status(idx):
2767 def status(idx):
2768 guards = q.seriesguards[idx] or ['unguarded']
2768 guards = q.seriesguards[idx] or ['unguarded']
2769 if q.series[idx] in applied:
2769 if q.series[idx] in applied:
2770 state = 'applied'
2770 state = 'applied'
2771 elif q.pushable(idx)[0]:
2771 elif q.pushable(idx)[0]:
2772 state = 'unapplied'
2772 state = 'unapplied'
2773 else:
2773 else:
2774 state = 'guarded'
2774 state = 'guarded'
2775 label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
2775 label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
2776 ui.write('%s: ' % ui.label(q.series[idx], label))
2776 ui.write('%s: ' % ui.label(q.series[idx], label))
2777
2777
2778 for i, guard in enumerate(guards):
2778 for i, guard in enumerate(guards):
2779 if guard.startswith('+'):
2779 if guard.startswith('+'):
2780 ui.write(guard, label='qguard.positive')
2780 ui.write(guard, label='qguard.positive')
2781 elif guard.startswith('-'):
2781 elif guard.startswith('-'):
2782 ui.write(guard, label='qguard.negative')
2782 ui.write(guard, label='qguard.negative')
2783 else:
2783 else:
2784 ui.write(guard, label='qguard.unguarded')
2784 ui.write(guard, label='qguard.unguarded')
2785 if i != len(guards) - 1:
2785 if i != len(guards) - 1:
2786 ui.write(' ')
2786 ui.write(' ')
2787 ui.write('\n')
2787 ui.write('\n')
2788 q = repo.mq
2788 q = repo.mq
2789 applied = set(p.name for p in q.applied)
2789 applied = set(p.name for p in q.applied)
2790 patch = None
2790 patch = None
2791 args = list(args)
2791 args = list(args)
2792 if opts.get('list'):
2792 if opts.get('list'):
2793 if args or opts.get('none'):
2793 if args or opts.get('none'):
2794 raise util.Abort(_('cannot mix -l/--list with options or '
2794 raise util.Abort(_('cannot mix -l/--list with options or '
2795 'arguments'))
2795 'arguments'))
2796 for i in xrange(len(q.series)):
2796 for i in xrange(len(q.series)):
2797 status(i)
2797 status(i)
2798 return
2798 return
2799 if not args or args[0][0:1] in '-+':
2799 if not args or args[0][0:1] in '-+':
2800 if not q.applied:
2800 if not q.applied:
2801 raise util.Abort(_('no patches applied'))
2801 raise util.Abort(_('no patches applied'))
2802 patch = q.applied[-1].name
2802 patch = q.applied[-1].name
2803 if patch is None and args[0][0:1] not in '-+':
2803 if patch is None and args[0][0:1] not in '-+':
2804 patch = args.pop(0)
2804 patch = args.pop(0)
2805 if patch is None:
2805 if patch is None:
2806 raise util.Abort(_('no patch to work with'))
2806 raise util.Abort(_('no patch to work with'))
2807 if args or opts.get('none'):
2807 if args or opts.get('none'):
2808 idx = q.findseries(patch)
2808 idx = q.findseries(patch)
2809 if idx is None:
2809 if idx is None:
2810 raise util.Abort(_('no patch named %s') % patch)
2810 raise util.Abort(_('no patch named %s') % patch)
2811 q.setguards(idx, args)
2811 q.setguards(idx, args)
2812 q.savedirty()
2812 q.savedirty()
2813 else:
2813 else:
2814 status(q.series.index(q.lookup(patch)))
2814 status(q.series.index(q.lookup(patch)))
2815
2815
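# Illustrative qguard usage (a sketch; patch and guard names are hypothetical):
#
#   $ hg qguard -- +experimental           # guard the current (topmost) patch
#   $ hg qguard other.patch -- +2.6.17 -stable
#   $ hg qguard -l                         # list every patch with its guards
#   $ hg qguard -n other.patch             # drop all guards from other.patch
#
# Positive guards (+NAME) only allow a push while qselect has activated NAME;
# negative guards (-NAME) block the push while NAME is active, as described
# in the docstring above.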
2816 @command("qheader", [], _('hg qheader [PATCH]'))
2816 @command("qheader", [], _('hg qheader [PATCH]'))
2817 def header(ui, repo, patch=None):
2817 def header(ui, repo, patch=None):
2818 """print the header of the topmost or specified patch
2818 """print the header of the topmost or specified patch
2819
2819
2820 Returns 0 on success."""
2820 Returns 0 on success."""
2821 q = repo.mq
2821 q = repo.mq
2822
2822
2823 if patch:
2823 if patch:
2824 patch = q.lookup(patch)
2824 patch = q.lookup(patch)
2825 else:
2825 else:
2826 if not q.applied:
2826 if not q.applied:
2827 ui.write(_('no patches applied\n'))
2827 ui.write(_('no patches applied\n'))
2828 return 1
2828 return 1
2829 patch = q.lookup('qtip')
2829 patch = q.lookup('qtip')
2830 ph = patchheader(q.join(patch), q.plainmode)
2830 ph = patchheader(q.join(patch), q.plainmode)
2831
2831
2832 ui.write('\n'.join(ph.message) + '\n')
2832 ui.write('\n'.join(ph.message) + '\n')
2833
2833
def lastsavename(path):
    (directory, base) = os.path.split(path)
    names = os.listdir(directory)
    namere = re.compile("%s.([0-9]+)" % base)
    maxindex = None
    maxname = None
    for f in names:
        m = namere.match(f)
        if m:
            index = int(m.group(1))
            if maxindex is None or index > maxindex:
                maxindex = index
                maxname = f
    if maxname:
        return (os.path.join(directory, maxname), maxindex)
    return (None, None)

def savename(path):
    (last, index) = lastsavename(path)
    if last is None:
        index = 0
    newpath = path + ".%d" % (index + 1)
    return newpath

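# Sketch of the save-name numbering used above: given a base name, savename()
# picks the next unused ".<N>" suffix. The directory listing in the example is
# hypothetical; this mirrors lastsavename()/savename() without touching disk.
def _next_savename_sketch(base, existing):
    import re
    namere = re.compile("%s.([0-9]+)" % base)
    indexes = [int(m.group(1)) for m in map(namere.match, existing) if m]
    return "%s.%d" % (base, max(indexes) + 1 if indexes else 1)

# _next_savename_sketch('patches', ['patches.1', 'patches.2', 'series'])
# -> 'patches.3'
# _next_savename_sketch('patches', ['series'])
# -> 'patches.1'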
2858 @command("^qpush",
2858 @command("^qpush",
2859 [('', 'keep-changes', None,
2859 [('', 'keep-changes', None,
2860 _('tolerate non-conflicting local changes')),
2860 _('tolerate non-conflicting local changes')),
2861 ('f', 'force', None, _('apply on top of local changes')),
2861 ('f', 'force', None, _('apply on top of local changes')),
2862 ('e', 'exact', None,
2862 ('e', 'exact', None,
2863 _('apply the target patch to its recorded parent')),
2863 _('apply the target patch to its recorded parent')),
2864 ('l', 'list', None, _('list patch name in commit text')),
2864 ('l', 'list', None, _('list patch name in commit text')),
2865 ('a', 'all', None, _('apply all patches')),
2865 ('a', 'all', None, _('apply all patches')),
2866 ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
2866 ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
2867 ('n', 'name', '',
2867 ('n', 'name', '',
2868 _('merge queue name (DEPRECATED)'), _('NAME')),
2868 _('merge queue name (DEPRECATED)'), _('NAME')),
2869 ('', 'move', None,
2869 ('', 'move', None,
2870 _('reorder patch series and apply only the patch')),
2870 _('reorder patch series and apply only the patch')),
2871 ('', 'no-backup', None, _('do not save backup copies of files'))],
2871 ('', 'no-backup', None, _('do not save backup copies of files'))],
2872 _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
2872 _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
2873 def push(ui, repo, patch=None, **opts):
2873 def push(ui, repo, patch=None, **opts):
2874 """push the next patch onto the stack
2874 """push the next patch onto the stack
2875
2875
2876 By default, abort if the working directory contains uncommitted
2876 By default, abort if the working directory contains uncommitted
2877 changes. With --keep-changes, abort only if the uncommitted files
2877 changes. With --keep-changes, abort only if the uncommitted files
2878 overlap with patched files. With -f/--force, backup and patch over
2878 overlap with patched files. With -f/--force, backup and patch over
2879 uncommitted changes.
2879 uncommitted changes.
2880
2880
2881 Return 0 on success.
2881 Return 0 on success.
2882 """
2882 """
2883 q = repo.mq
2883 q = repo.mq
2884 mergeq = None
2884 mergeq = None
2885
2885
2886 opts = fixkeepchangesopts(ui, opts)
2886 opts = fixkeepchangesopts(ui, opts)
2887 if opts.get('merge'):
2887 if opts.get('merge'):
2888 if opts.get('name'):
2888 if opts.get('name'):
2889 newpath = repo.join(opts.get('name'))
2889 newpath = repo.join(opts.get('name'))
2890 else:
2890 else:
2891 newpath, i = lastsavename(q.path)
2891 newpath, i = lastsavename(q.path)
2892 if not newpath:
2892 if not newpath:
2893 ui.warn(_("no saved queues found, please use -n\n"))
2893 ui.warn(_("no saved queues found, please use -n\n"))
2894 return 1
2894 return 1
2895 mergeq = queue(ui, repo.baseui, repo.path, newpath)
2895 mergeq = queue(ui, repo.baseui, repo.path, newpath)
2896 ui.warn(_("merging with queue at: %s\n") % mergeq.path)
2896 ui.warn(_("merging with queue at: %s\n") % mergeq.path)
2897 ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
2897 ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
2898 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
2898 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
2899 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
2899 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
2900 keepchanges=opts.get('keep_changes'))
2900 keepchanges=opts.get('keep_changes'))
2901 return ret
2901 return ret
2902
2902
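# Illustrative qpush usage (a sketch; the patch name is hypothetical):
#
#   $ hg qpush                         # apply the next patch in the series
#   $ hg qpush -a                      # apply every remaining patch
#   $ hg qpush --move bugfix.patch     # reorder the series and apply only that patch
#   $ hg qpush --keep-changes          # tolerate non-conflicting local changes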
2903 @command("^qpop",
2903 @command("^qpop",
2904 [('a', 'all', None, _('pop all patches')),
2904 [('a', 'all', None, _('pop all patches')),
2905 ('n', 'name', '',
2905 ('n', 'name', '',
2906 _('queue name to pop (DEPRECATED)'), _('NAME')),
2906 _('queue name to pop (DEPRECATED)'), _('NAME')),
2907 ('', 'keep-changes', None,
2907 ('', 'keep-changes', None,
2908 _('tolerate non-conflicting local changes')),
2908 _('tolerate non-conflicting local changes')),
2909 ('f', 'force', None, _('forget any local changes to patched files')),
2909 ('f', 'force', None, _('forget any local changes to patched files')),
2910 ('', 'no-backup', None, _('do not save backup copies of files'))],
2910 ('', 'no-backup', None, _('do not save backup copies of files'))],
2911 _('hg qpop [-a] [-f] [PATCH | INDEX]'))
2911 _('hg qpop [-a] [-f] [PATCH | INDEX]'))
2912 def pop(ui, repo, patch=None, **opts):
2912 def pop(ui, repo, patch=None, **opts):
2913 """pop the current patch off the stack
2913 """pop the current patch off the stack
2914
2914
2915 Without argument, pops off the top of the patch stack. If given a
2915 Without argument, pops off the top of the patch stack. If given a
2916 patch name, keeps popping off patches until the named patch is at
2916 patch name, keeps popping off patches until the named patch is at
2917 the top of the stack.
2917 the top of the stack.
2918
2918
2919 By default, abort if the working directory contains uncommitted
2919 By default, abort if the working directory contains uncommitted
2920 changes. With --keep-changes, abort only if the uncommitted files
2920 changes. With --keep-changes, abort only if the uncommitted files
2921 overlap with patched files. With -f/--force, backup and discard
2921 overlap with patched files. With -f/--force, backup and discard
2922 changes made to such files.
2922 changes made to such files.
2923
2923
2924 Return 0 on success.
2924 Return 0 on success.
2925 """
2925 """
2926 opts = fixkeepchangesopts(ui, opts)
2926 opts = fixkeepchangesopts(ui, opts)
2927 localupdate = True
2927 localupdate = True
2928 if opts.get('name'):
2928 if opts.get('name'):
2929 q = queue(ui, repo.baseui, repo.path, repo.join(opts.get('name')))
2929 q = queue(ui, repo.baseui, repo.path, repo.join(opts.get('name')))
2930 ui.warn(_('using patch queue: %s\n') % q.path)
2930 ui.warn(_('using patch queue: %s\n') % q.path)
2931 localupdate = False
2931 localupdate = False
2932 else:
2932 else:
2933 q = repo.mq
2933 q = repo.mq
2934 ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
2934 ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
2935 all=opts.get('all'), nobackup=opts.get('no_backup'),
2935 all=opts.get('all'), nobackup=opts.get('no_backup'),
2936 keepchanges=opts.get('keep_changes'))
2936 keepchanges=opts.get('keep_changes'))
2937 q.savedirty()
2937 q.savedirty()
2938 return ret
2938 return ret
2939
2939
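# Illustrative qpop usage (a sketch; the patch name is hypothetical):
#
#   $ hg qpop                      # unapply the topmost patch
#   $ hg qpop -a                   # unapply every applied patch
#   $ hg qpop bugfix.patch         # pop until bugfix.patch is on top of the stack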
2940 @command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
2940 @command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
2941 def rename(ui, repo, patch, name=None, **opts):
2941 def rename(ui, repo, patch, name=None, **opts):
2942 """rename a patch
2942 """rename a patch
2943
2943
2944 With one argument, renames the current patch to PATCH1.
2944 With one argument, renames the current patch to PATCH1.
2945 With two arguments, renames PATCH1 to PATCH2.
2945 With two arguments, renames PATCH1 to PATCH2.
2946
2946
2947 Returns 0 on success."""
2947 Returns 0 on success."""
2948 q = repo.mq
2948 q = repo.mq
2949 if not name:
2949 if not name:
2950 name = patch
2950 name = patch
2951 patch = None
2951 patch = None
2952
2952
2953 if patch:
2953 if patch:
2954 patch = q.lookup(patch)
2954 patch = q.lookup(patch)
2955 else:
2955 else:
2956 if not q.applied:
2956 if not q.applied:
2957 ui.write(_('no patches applied\n'))
2957 ui.write(_('no patches applied\n'))
2958 return
2958 return
2959 patch = q.lookup('qtip')
2959 patch = q.lookup('qtip')
2960 absdest = q.join(name)
2960 absdest = q.join(name)
2961 if os.path.isdir(absdest):
2961 if os.path.isdir(absdest):
2962 name = normname(os.path.join(name, os.path.basename(patch)))
2962 name = normname(os.path.join(name, os.path.basename(patch)))
2963 absdest = q.join(name)
2963 absdest = q.join(name)
2964 q.checkpatchname(name)
2964 q.checkpatchname(name)
2965
2965
2966 ui.note(_('renaming %s to %s\n') % (patch, name))
2966 ui.note(_('renaming %s to %s\n') % (patch, name))
2967 i = q.findseries(patch)
2967 i = q.findseries(patch)
2968 guards = q.guard_re.findall(q.fullseries[i])
2968 guards = q.guard_re.findall(q.fullseries[i])
2969 q.fullseries[i] = name + ''.join([' #' + g for g in guards])
2969 q.fullseries[i] = name + ''.join([' #' + g for g in guards])
2970 q.parseseries()
2970 q.parseseries()
2971 q.seriesdirty = True
2971 q.seriesdirty = True
2972
2972
2973 info = q.isapplied(patch)
2973 info = q.isapplied(patch)
2974 if info:
2974 if info:
2975 q.applied[info[0]] = statusentry(info[1], name)
2975 q.applied[info[0]] = statusentry(info[1], name)
2976 q.applieddirty = True
2976 q.applieddirty = True
2977
2977
2978 destdir = os.path.dirname(absdest)
2978 destdir = os.path.dirname(absdest)
2979 if not os.path.isdir(destdir):
2979 if not os.path.isdir(destdir):
2980 os.makedirs(destdir)
2980 os.makedirs(destdir)
2981 util.rename(q.join(patch), absdest)
2981 util.rename(q.join(patch), absdest)
2982 r = q.qrepo()
2982 r = q.qrepo()
2983 if r and patch in r.dirstate:
2983 if r and patch in r.dirstate:
2984 wctx = r[None]
2984 wctx = r[None]
2985 wlock = r.wlock()
2985 wlock = r.wlock()
2986 try:
2986 try:
2987 if r.dirstate[patch] == 'a':
2987 if r.dirstate[patch] == 'a':
2988 r.dirstate.drop(patch)
2988 r.dirstate.drop(patch)
2989 r.dirstate.add(name)
2989 r.dirstate.add(name)
2990 else:
2990 else:
2991 wctx.copy(patch, name)
2991 wctx.copy(patch, name)
2992 wctx.forget([patch])
2992 wctx.forget([patch])
2993 finally:
2993 finally:
2994 wlock.release()
2994 wlock.release()
2995
2995
2996 q.savedirty()
2996 q.savedirty()
2997
2997
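# Illustrative qrename usage (a sketch; patch names are hypothetical):
#
#   $ hg qrename new-name.patch                  # rename the current (topmost) patch
#   $ hg qrename old-name.patch new-name.patch
#
# As the code above shows, the series file, the applied-patch status and the
# patch file itself are all updated, and the rename is recorded in the patch
# repository's dirstate when one exists.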
2998 @command("qrestore",
2998 @command("qrestore",
2999 [('d', 'delete', None, _('delete save entry')),
2999 [('d', 'delete', None, _('delete save entry')),
3000 ('u', 'update', None, _('update queue working directory'))],
3000 ('u', 'update', None, _('update queue working directory'))],
3001 _('hg qrestore [-d] [-u] REV'))
3001 _('hg qrestore [-d] [-u] REV'))
3002 def restore(ui, repo, rev, **opts):
3002 def restore(ui, repo, rev, **opts):
3003 """restore the queue state saved by a revision (DEPRECATED)
3003 """restore the queue state saved by a revision (DEPRECATED)
3004
3004
3005 This command is deprecated, use :hg:`rebase` instead."""
3005 This command is deprecated, use :hg:`rebase` instead."""
3006 rev = repo.lookup(rev)
3006 rev = repo.lookup(rev)
3007 q = repo.mq
3007 q = repo.mq
3008 q.restore(repo, rev, delete=opts.get('delete'),
3008 q.restore(repo, rev, delete=opts.get('delete'),
3009 qupdate=opts.get('update'))
3009 qupdate=opts.get('update'))
3010 q.savedirty()
3010 q.savedirty()
3011 return 0
3011 return 0
3012
3012
3013 @command("qsave",
3013 @command("qsave",
3014 [('c', 'copy', None, _('copy patch directory')),
3014 [('c', 'copy', None, _('copy patch directory')),
3015 ('n', 'name', '',
3015 ('n', 'name', '',
3016 _('copy directory name'), _('NAME')),
3016 _('copy directory name'), _('NAME')),
3017 ('e', 'empty', None, _('clear queue status file')),
3017 ('e', 'empty', None, _('clear queue status file')),
3018 ('f', 'force', None, _('force copy'))] + commands.commitopts,
3018 ('f', 'force', None, _('force copy'))] + commands.commitopts,
3019 _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
3019 _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
3020 def save(ui, repo, **opts):
3020 def save(ui, repo, **opts):
3021 """save current queue state (DEPRECATED)
3021 """save current queue state (DEPRECATED)
3022
3022
3023 This command is deprecated, use :hg:`rebase` instead."""
3023 This command is deprecated, use :hg:`rebase` instead."""
3024 q = repo.mq
3024 q = repo.mq
3025 message = cmdutil.logmessage(ui, opts)
3025 message = cmdutil.logmessage(ui, opts)
3026 ret = q.save(repo, msg=message)
3026 ret = q.save(repo, msg=message)
3027 if ret:
3027 if ret:
3028 return ret
3028 return ret
3029 q.savedirty() # save to .hg/patches before copying
3029 q.savedirty() # save to .hg/patches before copying
3030 if opts.get('copy'):
3030 if opts.get('copy'):
3031 path = q.path
3031 path = q.path
3032 if opts.get('name'):
3032 if opts.get('name'):
3033 newpath = os.path.join(q.basepath, opts.get('name'))
3033 newpath = os.path.join(q.basepath, opts.get('name'))
3034 if os.path.exists(newpath):
3034 if os.path.exists(newpath):
3035 if not os.path.isdir(newpath):
3035 if not os.path.isdir(newpath):
3036 raise util.Abort(_('destination %s exists and is not '
3036 raise util.Abort(_('destination %s exists and is not '
3037 'a directory') % newpath)
3037 'a directory') % newpath)
3038 if not opts.get('force'):
3038 if not opts.get('force'):
3039 raise util.Abort(_('destination %s exists, '
3039 raise util.Abort(_('destination %s exists, '
3040 'use -f to force') % newpath)
3040 'use -f to force') % newpath)
3041 else:
3041 else:
3042 newpath = savename(path)
3042 newpath = savename(path)
3043 ui.warn(_("copy %s to %s\n") % (path, newpath))
3043 ui.warn(_("copy %s to %s\n") % (path, newpath))
3044 util.copyfiles(path, newpath)
3044 util.copyfiles(path, newpath)
3045 if opts.get('empty'):
3045 if opts.get('empty'):
3046 del q.applied[:]
3046 del q.applied[:]
3047 q.applieddirty = True
3047 q.applieddirty = True
3048 q.savedirty()
3048 q.savedirty()
3049 return 0
3049 return 0
3050
3050
3051
3051
3052 @command("qselect",
3052 @command("qselect",
3053 [('n', 'none', None, _('disable all guards')),
3053 [('n', 'none', None, _('disable all guards')),
3054 ('s', 'series', None, _('list all guards in series file')),
3054 ('s', 'series', None, _('list all guards in series file')),
3055 ('', 'pop', None, _('pop to before first guarded applied patch')),
3055 ('', 'pop', None, _('pop to before first guarded applied patch')),
3056 ('', 'reapply', None, _('pop, then reapply patches'))],
3056 ('', 'reapply', None, _('pop, then reapply patches'))],
3057 _('hg qselect [OPTION]... [GUARD]...'))
3057 _('hg qselect [OPTION]... [GUARD]...'))
3058 def select(ui, repo, *args, **opts):
3058 def select(ui, repo, *args, **opts):
3059 '''set or print guarded patches to push
3059 '''set or print guarded patches to push
3060
3060
3061 Use the :hg:`qguard` command to set or print guards on patch, then use
3061 Use the :hg:`qguard` command to set or print guards on patch, then use
3062 qselect to tell mq which guards to use. A patch will be pushed if
3062 qselect to tell mq which guards to use. A patch will be pushed if
3063 it has no guards or any positive guards match the currently
3063 it has no guards or any positive guards match the currently
3064 selected guard, but will not be pushed if any negative guards
3064 selected guard, but will not be pushed if any negative guards
3065 match the current guard. For example::
3065 match the current guard. For example::
3066
3066
3067 qguard foo.patch -- -stable (negative guard)
3067 qguard foo.patch -- -stable (negative guard)
3068 qguard bar.patch +stable (positive guard)
3068 qguard bar.patch +stable (positive guard)
3069 qselect stable
3069 qselect stable
3070
3070
3071 This activates the "stable" guard. mq will skip foo.patch (because
3071 This activates the "stable" guard. mq will skip foo.patch (because
3072 it has a negative match) but push bar.patch (because it has a
3072 it has a negative match) but push bar.patch (because it has a
3073 positive match).
3073 positive match).
3074
3074
3075 With no arguments, prints the currently active guards.
3075 With no arguments, prints the currently active guards.
3076 With one argument, sets the active guard.
3076 With one argument, sets the active guard.
3077
3077
3078 Use -n/--none to deactivate guards (no other arguments needed).
3078 Use -n/--none to deactivate guards (no other arguments needed).
3079 When no guards are active, patches with positive guards are
3079 When no guards are active, patches with positive guards are
3080 skipped and patches with negative guards are pushed.
3080 skipped and patches with negative guards are pushed.
3081
3081
3082 qselect can change the guards on applied patches. It does not pop
3082 qselect can change the guards on applied patches. It does not pop
3083 guarded patches by default. Use --pop to pop back to the last
3083 guarded patches by default. Use --pop to pop back to the last
3084 applied patch that is not guarded. Use --reapply (which implies
3084 applied patch that is not guarded. Use --reapply (which implies
3085 --pop) to push back to the current patch afterwards, but skip
3085 --pop) to push back to the current patch afterwards, but skip
3086 guarded patches.
3086 guarded patches.
3087
3087
3088 Use -s/--series to print a list of all guards in the series file
3088 Use -s/--series to print a list of all guards in the series file
3089 (no other arguments needed). Use -v for more information.
3089 (no other arguments needed). Use -v for more information.
3090
3090
3091 Returns 0 on success.'''
3091 Returns 0 on success.'''
3092
3092
3093 q = repo.mq
3093 q = repo.mq
3094 guards = q.active()
3094 guards = q.active()
3095 pushable = lambda i: q.pushable(q.applied[i].name)[0]
3095 pushable = lambda i: q.pushable(q.applied[i].name)[0]
3096 if args or opts.get('none'):
3096 if args or opts.get('none'):
3097 old_unapplied = q.unapplied(repo)
3097 old_unapplied = q.unapplied(repo)
3098 old_guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3098 old_guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3099 q.setactive(args)
3099 q.setactive(args)
3100 q.savedirty()
3100 q.savedirty()
3101 if not args:
3101 if not args:
3102 ui.status(_('guards deactivated\n'))
3102 ui.status(_('guards deactivated\n'))
3103 if not opts.get('pop') and not opts.get('reapply'):
3103 if not opts.get('pop') and not opts.get('reapply'):
3104 unapplied = q.unapplied(repo)
3104 unapplied = q.unapplied(repo)
3105 guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3105 guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3106 if len(unapplied) != len(old_unapplied):
3106 if len(unapplied) != len(old_unapplied):
3107 ui.status(_('number of unguarded, unapplied patches has '
3107 ui.status(_('number of unguarded, unapplied patches has '
3108 'changed from %d to %d\n') %
3108 'changed from %d to %d\n') %
3109 (len(old_unapplied), len(unapplied)))
3109 (len(old_unapplied), len(unapplied)))
3110 if len(guarded) != len(old_guarded):
3110 if len(guarded) != len(old_guarded):
3111 ui.status(_('number of guarded, applied patches has changed '
3111 ui.status(_('number of guarded, applied patches has changed '
3112 'from %d to %d\n') %
3112 'from %d to %d\n') %
3113 (len(old_guarded), len(guarded)))
3113 (len(old_guarded), len(guarded)))
3114 elif opts.get('series'):
3114 elif opts.get('series'):
3115 guards = {}
3115 guards = {}
3116 noguards = 0
3116 noguards = 0
3117 for gs in q.seriesguards:
3117 for gs in q.seriesguards:
3118 if not gs:
3118 if not gs:
3119 noguards += 1
3119 noguards += 1
3120 for g in gs:
3120 for g in gs:
3121 guards.setdefault(g, 0)
3121 guards.setdefault(g, 0)
3122 guards[g] += 1
3122 guards[g] += 1
3123 if ui.verbose:
3123 if ui.verbose:
3124 guards['NONE'] = noguards
3124 guards['NONE'] = noguards
3125 guards = guards.items()
3125 guards = guards.items()
3126 guards.sort(key=lambda x: x[0][1:])
3126 guards.sort(key=lambda x: x[0][1:])
3127 if guards:
3127 if guards:
3128 ui.note(_('guards in series file:\n'))
3128 ui.note(_('guards in series file:\n'))
3129 for guard, count in guards:
3129 for guard, count in guards:
3130 ui.note('%2d ' % count)
3130 ui.note('%2d ' % count)
3131 ui.write(guard, '\n')
3131 ui.write(guard, '\n')
3132 else:
3132 else:
3133 ui.note(_('no guards in series file\n'))
3133 ui.note(_('no guards in series file\n'))
3134 else:
3134 else:
3135 if guards:
3135 if guards:
3136 ui.note(_('active guards:\n'))
3136 ui.note(_('active guards:\n'))
3137 for g in guards:
3137 for g in guards:
3138 ui.write(g, '\n')
3138 ui.write(g, '\n')
3139 else:
3139 else:
3140 ui.write(_('no active guards\n'))
3140 ui.write(_('no active guards\n'))
3141 reapply = opts.get('reapply') and q.applied and q.applied[-1].name
3141 reapply = opts.get('reapply') and q.applied and q.applied[-1].name
3142 popped = False
3142 popped = False
3143 if opts.get('pop') or opts.get('reapply'):
3143 if opts.get('pop') or opts.get('reapply'):
3144 for i in xrange(len(q.applied)):
3144 for i in xrange(len(q.applied)):
3145 if not pushable(i):
3145 if not pushable(i):
3146 ui.status(_('popping guarded patches\n'))
3146 ui.status(_('popping guarded patches\n'))
3147 popped = True
3147 popped = True
3148 if i == 0:
3148 if i == 0:
3149 q.pop(repo, all=True)
3149 q.pop(repo, all=True)
3150 else:
3150 else:
3151 q.pop(repo, q.applied[i - 1].name)
3151 q.pop(repo, q.applied[i - 1].name)
3152 break
3152 break
3153 if popped:
3153 if popped:
3154 try:
3154 try:
3155 if reapply:
3155 if reapply:
3156 ui.status(_('reapplying unguarded patches\n'))
3156 ui.status(_('reapplying unguarded patches\n'))
3157 q.push(repo, reapply)
3157 q.push(repo, reapply)
3158 finally:
3158 finally:
3159 q.savedirty()
3159 q.savedirty()
3160
3160
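# Illustrative qselect session (a sketch), reusing the foo.patch/bar.patch
# example from the docstring above:
#
#   $ hg qguard foo.patch -- -stable    # never push foo.patch while 'stable' is active
#   $ hg qguard bar.patch +stable       # only push bar.patch while 'stable' is active
#   $ hg qselect stable                 # activate the 'stable' guard
#   $ hg qselect --reapply stable       # also pop and re-push, skipping guarded patches
#   $ hg qselect -n                     # deactivate all guards again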
3161 @command("qfinish",
3161 @command("qfinish",
3162 [('a', 'applied', None, _('finish all applied changesets'))],
3162 [('a', 'applied', None, _('finish all applied changesets'))],
3163 _('hg qfinish [-a] [REV]...'))
3163 _('hg qfinish [-a] [REV]...'))
3164 def finish(ui, repo, *revrange, **opts):
3164 def finish(ui, repo, *revrange, **opts):
3165 """move applied patches into repository history
3165 """move applied patches into repository history
3166
3166
3167 Finishes the specified revisions (corresponding to applied
3167 Finishes the specified revisions (corresponding to applied
3168 patches) by moving them out of mq control into regular repository
3168 patches) by moving them out of mq control into regular repository
3169 history.
3169 history.
3170
3170
3171 Accepts a revision range or the -a/--applied option. If --applied
3171 Accepts a revision range or the -a/--applied option. If --applied
3172 is specified, all applied mq revisions are removed from mq
3172 is specified, all applied mq revisions are removed from mq
3173 control. Otherwise, the given revisions must be at the base of the
3173 control. Otherwise, the given revisions must be at the base of the
3174 stack of applied patches.
3174 stack of applied patches.
3175
3175
3176 This can be especially useful if your changes have been applied to
3176 This can be especially useful if your changes have been applied to
3177 an upstream repository, or if you are about to push your changes
3177 an upstream repository, or if you are about to push your changes
3178 to upstream.
3178 to upstream.
3179
3179
3180 Returns 0 on success.
3180 Returns 0 on success.
3181 """
3181 """
3182 if not opts.get('applied') and not revrange:
3182 if not opts.get('applied') and not revrange:
3183 raise util.Abort(_('no revisions specified'))
3183 raise util.Abort(_('no revisions specified'))
3184 elif opts.get('applied'):
3184 elif opts.get('applied'):
3185 revrange = ('qbase::qtip',) + revrange
3185 revrange = ('qbase::qtip',) + revrange
3186
3186
3187 q = repo.mq
3187 q = repo.mq
3188 if not q.applied:
3188 if not q.applied:
3189 ui.status(_('no patches applied\n'))
3189 ui.status(_('no patches applied\n'))
3190 return 0
3190 return 0
3191
3191
3192 revs = scmutil.revrange(repo, revrange)
3192 revs = scmutil.revrange(repo, revrange)
3193 if repo['.'].rev() in revs and repo[None].files():
3193 if repo['.'].rev() in revs and repo[None].files():
3194 ui.warn(_('warning: uncommitted changes in the working directory\n'))
3194 ui.warn(_('warning: uncommitted changes in the working directory\n'))
3195 # queue.finish may changes phases but leave the responsibility to lock the
3195 # queue.finish may changes phases but leave the responsibility to lock the
3196 # repo to the caller to avoid deadlock with wlock. This command code is
3196 # repo to the caller to avoid deadlock with wlock. This command code is
3197 # responsibility for this locking.
3197 # responsibility for this locking.
3198 lock = repo.lock()
3198 lock = repo.lock()
3199 try:
3199 try:
3200 q.finish(repo, revs)
3200 q.finish(repo, revs)
3201 q.savedirty()
3201 q.savedirty()
3202 finally:
3202 finally:
3203 lock.release()
3203 lock.release()
3204 return 0
3204 return 0
3205
3205
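# Illustrative qfinish usage (a sketch; 'qbase' is the mq name for the patch
# at the base of the applied stack, as used in the code above):
#
#   $ hg qfinish -a            # move every applied mq patch into regular history
#   $ hg qfinish qbase         # finish only the patch at the base of the stack
#
# Finished changesets leave mq control, which is typically what you want once
# the corresponding changes have been accepted upstream, as the docstring notes.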
3206 @command("qqueue",
3206 @command("qqueue",
3207 [('l', 'list', False, _('list all available queues')),
3207 [('l', 'list', False, _('list all available queues')),
3208 ('', 'active', False, _('print name of active queue')),
3208 ('', 'active', False, _('print name of active queue')),
3209 ('c', 'create', False, _('create new queue')),
3209 ('c', 'create', False, _('create new queue')),
3210 ('', 'rename', False, _('rename active queue')),
3210 ('', 'rename', False, _('rename active queue')),
3211 ('', 'delete', False, _('delete reference to queue')),
3211 ('', 'delete', False, _('delete reference to queue')),
3212 ('', 'purge', False, _('delete queue, and remove patch dir')),
3212 ('', 'purge', False, _('delete queue, and remove patch dir')),
3213 ],
3213 ],
3214 _('[OPTION] [QUEUE]'))
3214 _('[OPTION] [QUEUE]'))
3215 def qqueue(ui, repo, name=None, **opts):
3215 def qqueue(ui, repo, name=None, **opts):
3216 '''manage multiple patch queues
3216 '''manage multiple patch queues
3217
3217
3218 Supports switching between different patch queues, as well as creating
3218 Supports switching between different patch queues, as well as creating
3219 new patch queues and deleting existing ones.
3219 new patch queues and deleting existing ones.
3220
3220
3221 Omitting a queue name or specifying -l/--list will show you the registered
3221 Omitting a queue name or specifying -l/--list will show you the registered
3222 queues - by default the "normal" patches queue is registered. The currently
3222 queues - by default the "normal" patches queue is registered. The currently
3223 active queue will be marked with "(active)". Specifying --active will print
3223 active queue will be marked with "(active)". Specifying --active will print
3224 only the name of the active queue.
3224 only the name of the active queue.
3225
3225
3226 To create a new queue, use -c/--create. The queue is automatically made
3226 To create a new queue, use -c/--create. The queue is automatically made
3227 active, except in the case where there are applied patches from the
3227 active, except in the case where there are applied patches from the
3228 currently active queue in the repository. In that case, the queue will only
3228 currently active queue in the repository. In that case, the queue will only
3229 be created, and switching to it will fail.
3229 be created, and switching to it will fail.
3230
3230
3231 To delete an existing queue, use --delete. You cannot delete the currently
3231 To delete an existing queue, use --delete. You cannot delete the currently
3232 active queue.
3232 active queue.
3233
3233
3234 Returns 0 on success.
3234 Returns 0 on success.
3235 '''
3235 '''
3236 q = repo.mq
3236 q = repo.mq
3237 _defaultqueue = 'patches'
3237 _defaultqueue = 'patches'
3238 _allqueues = 'patches.queues'
3238 _allqueues = 'patches.queues'
3239 _activequeue = 'patches.queue'
3239 _activequeue = 'patches.queue'
3240
3240
3241 def _getcurrent():
3241 def _getcurrent():
3242 cur = os.path.basename(q.path)
3242 cur = os.path.basename(q.path)
3243 if cur.startswith('patches-'):
3243 if cur.startswith('patches-'):
3244 cur = cur[8:]
3244 cur = cur[8:]
3245 return cur
3245 return cur
3246
3246
3247 def _noqueues():
3247 def _noqueues():
3248 try:
3248 try:
3249 fh = repo.vfs(_allqueues, 'r')
3249 fh = repo.vfs(_allqueues, 'r')
3250 fh.close()
3250 fh.close()
3251 except IOError:
3251 except IOError:
3252 return True
3252 return True
3253
3253
3254 return False
3254 return False
3255
3255
3256 def _getqueues():
3256 def _getqueues():
3257 current = _getcurrent()
3257 current = _getcurrent()
3258
3258
3259 try:
3259 try:
3260 fh = repo.vfs(_allqueues, 'r')
3260 fh = repo.vfs(_allqueues, 'r')
3261 queues = [queue.strip() for queue in fh if queue.strip()]
3261 queues = [queue.strip() for queue in fh if queue.strip()]
3262 fh.close()
3262 fh.close()
3263 if current not in queues:
3263 if current not in queues:
3264 queues.append(current)
3264 queues.append(current)
3265 except IOError:
3265 except IOError:
3266 queues = [_defaultqueue]
3266 queues = [_defaultqueue]
3267
3267
3268 return sorted(queues)
3268 return sorted(queues)
3269
3269
3270 def _setactive(name):
3270 def _setactive(name):
3271 if q.applied:
3271 if q.applied:
3272 raise util.Abort(_('new queue created, but cannot make active '
3272 raise util.Abort(_('new queue created, but cannot make active '
3273 'as patches are applied'))
3273 'as patches are applied'))
3274 _setactivenocheck(name)
3274 _setactivenocheck(name)
3275
3275
3276 def _setactivenocheck(name):
3276 def _setactivenocheck(name):
3277 fh = repo.vfs(_activequeue, 'w')
3277 fh = repo.vfs(_activequeue, 'w')
3278 if name != 'patches':
3278 if name != 'patches':
3279 fh.write(name)
3279 fh.write(name)
3280 fh.close()
3280 fh.close()
3281
3281
3282 def _addqueue(name):
3282 def _addqueue(name):
3283 fh = repo.vfs(_allqueues, 'a')
3283 fh = repo.vfs(_allqueues, 'a')
3284 fh.write('%s\n' % (name,))
3284 fh.write('%s\n' % (name,))
3285 fh.close()
3285 fh.close()
3286
3286
3287 def _queuedir(name):
3287 def _queuedir(name):
3288 if name == 'patches':
3288 if name == 'patches':
3289 return repo.join('patches')
3289 return repo.join('patches')
3290 else:
3290 else:
3291 return repo.join('patches-' + name)
3291 return repo.join('patches-' + name)
3292
3292
3293 def _validname(name):
3293 def _validname(name):
3294 for n in name:
3294 for n in name:
3295 if n in ':\\/.':
3295 if n in ':\\/.':
3296 return False
3296 return False
3297 return True
3297 return True
3298
3298
3299 def _delete(name):
3299 def _delete(name):
3300 if name not in existing:
3300 if name not in existing:
3301 raise util.Abort(_('cannot delete queue that does not exist'))
3301 raise util.Abort(_('cannot delete queue that does not exist'))
3302
3302
3303 current = _getcurrent()
3303 current = _getcurrent()
3304
3304
3305 if name == current:
3305 if name == current:
3306 raise util.Abort(_('cannot delete currently active queue'))
3306 raise util.Abort(_('cannot delete currently active queue'))
3307
3307
3308 fh = repo.vfs('patches.queues.new', 'w')
3308 fh = repo.vfs('patches.queues.new', 'w')
3309 for queue in existing:
3309 for queue in existing:
3310 if queue == name:
3310 if queue == name:
3311 continue
3311 continue
3312 fh.write('%s\n' % (queue,))
3312 fh.write('%s\n' % (queue,))
3313 fh.close()
3313 fh.close()
3314 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3314 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3315
3315
3316 if not name or opts.get('list') or opts.get('active'):
3316 if not name or opts.get('list') or opts.get('active'):
3317 current = _getcurrent()
3317 current = _getcurrent()
3318 if opts.get('active'):
3318 if opts.get('active'):
3319 ui.write('%s\n' % (current,))
3319 ui.write('%s\n' % (current,))
3320 return
3320 return
3321 for queue in _getqueues():
3321 for queue in _getqueues():
3322 ui.write('%s' % (queue,))
3322 ui.write('%s' % (queue,))
3323 if queue == current and not ui.quiet:
3323 if queue == current and not ui.quiet:
3324 ui.write(_(' (active)\n'))
3324 ui.write(_(' (active)\n'))
3325 else:
3325 else:
3326 ui.write('\n')
3326 ui.write('\n')
3327 return
3327 return
3328
3328
3329 if not _validname(name):
3329 if not _validname(name):
3330 raise util.Abort(
3330 raise util.Abort(
3331 _('invalid queue name, may not contain the characters ":\\/."'))
3331 _('invalid queue name, may not contain the characters ":\\/."'))
3332
3332
3333 existing = _getqueues()
3333 existing = _getqueues()
3334
3334
3335 if opts.get('create'):
3335 if opts.get('create'):
3336 if name in existing:
3336 if name in existing:
3337 raise util.Abort(_('queue "%s" already exists') % name)
3337 raise util.Abort(_('queue "%s" already exists') % name)
3338 if _noqueues():
3338 if _noqueues():
3339 _addqueue(_defaultqueue)
3339 _addqueue(_defaultqueue)
3340 _addqueue(name)
3340 _addqueue(name)
3341 _setactive(name)
3341 _setactive(name)
3342 elif opts.get('rename'):
3342 elif opts.get('rename'):
3343 current = _getcurrent()
3343 current = _getcurrent()
3344 if name == current:
3344 if name == current:
3345 raise util.Abort(_('can\'t rename "%s" to its current name') % name)
3345 raise util.Abort(_('can\'t rename "%s" to its current name') % name)
3346 if name in existing:
3346 if name in existing:
3347 raise util.Abort(_('queue "%s" already exists') % name)
3347 raise util.Abort(_('queue "%s" already exists') % name)
3348
3348
3349 olddir = _queuedir(current)
3349 olddir = _queuedir(current)
3350 newdir = _queuedir(name)
3350 newdir = _queuedir(name)
3351
3351
3352 if os.path.exists(newdir):
3352 if os.path.exists(newdir):
3353 raise util.Abort(_('non-queue directory "%s" already exists') %
3353 raise util.Abort(_('non-queue directory "%s" already exists') %
3354 newdir)
3354 newdir)
3355
3355
3356 fh = repo.vfs('patches.queues.new', 'w')
3356 fh = repo.vfs('patches.queues.new', 'w')
3357 for queue in existing:
3357 for queue in existing:
3358 if queue == current:
3358 if queue == current:
3359 fh.write('%s\n' % (name,))
3359 fh.write('%s\n' % (name,))
3360 if os.path.exists(olddir):
3360 if os.path.exists(olddir):
3361 util.rename(olddir, newdir)
3361 util.rename(olddir, newdir)
3362 else:
3362 else:
3363 fh.write('%s\n' % (queue,))
3363 fh.write('%s\n' % (queue,))
3364 fh.close()
3364 fh.close()
3365 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3365 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3366 _setactivenocheck(name)
3366 _setactivenocheck(name)
3367 elif opts.get('delete'):
3367 elif opts.get('delete'):
3368 _delete(name)
3368 _delete(name)
3369 elif opts.get('purge'):
3369 elif opts.get('purge'):
3370 if name in existing:
3370 if name in existing:
3371 _delete(name)
3371 _delete(name)
3372 qdir = _queuedir(name)
3372 qdir = _queuedir(name)
3373 if os.path.exists(qdir):
3373 if os.path.exists(qdir):
3374 shutil.rmtree(qdir)
3374 shutil.rmtree(qdir)
3375 else:
3375 else:
3376 if name not in existing:
3376 if name not in existing:
3377 raise util.Abort(_('use --create to create a new queue'))
3377 raise util.Abort(_('use --create to create a new queue'))
3378 _setactive(name)
3378 _setactive(name)
3379
3379
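The helpers above give away the on-disk bookkeeping that qqueue relies on; roughly (a sketch inferred from ``_allqueues``, ``_activequeue`` and ``_queuedir`` above, with ``foo`` as a made-up queue name)::

    .hg/patches.queues    # one registered queue name per line
    .hg/patches.queue     # name of the active queue (left empty for the default "patches")
    .hg/patches/          # patch directory of the default queue
    .hg/patches-foo/      # patch directory of a queue named "foo"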
3380 def mqphasedefaults(repo, roots):
3380 def mqphasedefaults(repo, roots):
3381 """callback used to set mq changeset as secret when no phase data exists"""
3381 """callback used to set mq changeset as secret when no phase data exists"""
3382 if repo.mq.applied:
3382 if repo.mq.applied:
3383 if repo.ui.configbool('mq', 'secret', False):
3383 if repo.ui.configbool('mq', 'secret', False):
3384 mqphase = phases.secret
3384 mqphase = phases.secret
3385 else:
3385 else:
3386 mqphase = phases.draft
3386 mqphase = phases.draft
3387 qbase = repo[repo.mq.applied[0].node]
3387 qbase = repo[repo.mq.applied[0].node]
3388 roots[mqphase].add(qbase.node())
3388 roots[mqphase].add(qbase.node())
3389 return roots
3389 return roots
3390
3390
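The callback above only supplies a default phase; which default it picks is driven by the ``mq.secret`` option it reads through ``configbool``. For example, a user who wants applied patches to default to the secret phase would put something like this in an hgrc::

    [mq]
    secret = True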
3391 def reposetup(ui, repo):
3391 def reposetup(ui, repo):
3392 class mqrepo(repo.__class__):
3392 class mqrepo(repo.__class__):
3393 @localrepo.unfilteredpropertycache
3393 @localrepo.unfilteredpropertycache
3394 def mq(self):
3394 def mq(self):
3395 return queue(self.ui, self.baseui, self.path)
3395 return queue(self.ui, self.baseui, self.path)
3396
3396
3397 def invalidateall(self):
3397 def invalidateall(self):
3398 super(mqrepo, self).invalidateall()
3398 super(mqrepo, self).invalidateall()
3399 if localrepo.hasunfilteredcache(self, 'mq'):
3399 if localrepo.hasunfilteredcache(self, 'mq'):
3400 # recreate mq in case queue path was changed
3400 # recreate mq in case queue path was changed
3401 delattr(self.unfiltered(), 'mq')
3401 delattr(self.unfiltered(), 'mq')
3402
3402
3403 def abortifwdirpatched(self, errmsg, force=False):
3403 def abortifwdirpatched(self, errmsg, force=False):
3404 if self.mq.applied and self.mq.checkapplied and not force:
3404 if self.mq.applied and self.mq.checkapplied and not force:
3405 parents = self.dirstate.parents()
3405 parents = self.dirstate.parents()
3406 patches = [s.node for s in self.mq.applied]
3406 patches = [s.node for s in self.mq.applied]
3407 if parents[0] in patches or parents[1] in patches:
3407 if parents[0] in patches or parents[1] in patches:
3408 raise util.Abort(errmsg)
3408 raise util.Abort(errmsg)
3409
3409
3410 def commit(self, text="", user=None, date=None, match=None,
3410 def commit(self, text="", user=None, date=None, match=None,
3411 force=False, editor=False, extra={}):
3411 force=False, editor=False, extra={}):
3412 self.abortifwdirpatched(
3412 self.abortifwdirpatched(
3413 _('cannot commit over an applied mq patch'),
3413 _('cannot commit over an applied mq patch'),
3414 force)
3414 force)
3415
3415
3416 return super(mqrepo, self).commit(text, user, date, match, force,
3416 return super(mqrepo, self).commit(text, user, date, match, force,
3417 editor, extra)
3417 editor, extra)
3418
3418
3419 def checkpush(self, pushop):
3419 def checkpush(self, pushop):
3420 if self.mq.applied and self.mq.checkapplied and not pushop.force:
3420 if self.mq.applied and self.mq.checkapplied and not pushop.force:
3421 outapplied = [e.node for e in self.mq.applied]
3421 outapplied = [e.node for e in self.mq.applied]
3422 if pushop.revs:
3422 if pushop.revs:
3423 # Assume applied patches have no non-patch descendants and
3423 # Assume applied patches have no non-patch descendants and
3424 # are not on the remote already. Filter out any changesets
3424 # are not on the remote already. Filter out any changesets
3425 # that are not being pushed.
3425 # that are not being pushed.
3426 heads = set(pushop.revs)
3426 heads = set(pushop.revs)
3427 for node in reversed(outapplied):
3427 for node in reversed(outapplied):
3428 if node in heads:
3428 if node in heads:
3429 break
3429 break
3430 else:
3430 else:
3431 outapplied.pop()
3431 outapplied.pop()
3432 # looking for pushed and shared changeset
3432 # looking for pushed and shared changeset
3433 for node in outapplied:
3433 for node in outapplied:
3434 if self[node].phase() < phases.secret:
3434 if self[node].phase() < phases.secret:
3435 raise util.Abort(_('source has mq patches applied'))
3435 raise util.Abort(_('source has mq patches applied'))
3436 # no non-secret patches pushed
3436 # no non-secret patches pushed
3437 super(mqrepo, self).checkpush(pushop)
3437 super(mqrepo, self).checkpush(pushop)
3438
3438
3439 def _findtags(self):
3439 def _findtags(self):
3440 '''augment tags from base class with patch tags'''
3440 '''augment tags from base class with patch tags'''
3441 result = super(mqrepo, self)._findtags()
3441 result = super(mqrepo, self)._findtags()
3442
3442
3443 q = self.mq
3443 q = self.mq
3444 if not q.applied:
3444 if not q.applied:
3445 return result
3445 return result
3446
3446
3447 mqtags = [(patch.node, patch.name) for patch in q.applied]
3447 mqtags = [(patch.node, patch.name) for patch in q.applied]
3448
3448
3449 try:
3449 try:
3450 # for now ignore filtering business
3450 # for now ignore filtering business
3451 self.unfiltered().changelog.rev(mqtags[-1][0])
3451 self.unfiltered().changelog.rev(mqtags[-1][0])
3452 except error.LookupError:
3452 except error.LookupError:
3453 self.ui.warn(_('mq status file refers to unknown node %s\n')
3453 self.ui.warn(_('mq status file refers to unknown node %s\n')
3454 % short(mqtags[-1][0]))
3454 % short(mqtags[-1][0]))
3455 return result
3455 return result
3456
3456
3457 # do not add fake tags for filtered revisions
3457 # do not add fake tags for filtered revisions
3458 included = self.changelog.hasnode
3458 included = self.changelog.hasnode
3459 mqtags = [mqt for mqt in mqtags if included(mqt[0])]
3459 mqtags = [mqt for mqt in mqtags if included(mqt[0])]
3460 if not mqtags:
3460 if not mqtags:
3461 return result
3461 return result
3462
3462
3463 mqtags.append((mqtags[-1][0], 'qtip'))
3463 mqtags.append((mqtags[-1][0], 'qtip'))
3464 mqtags.append((mqtags[0][0], 'qbase'))
3464 mqtags.append((mqtags[0][0], 'qbase'))
3465 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3465 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3466 tags = result[0]
3466 tags = result[0]
3467 for patch in mqtags:
3467 for patch in mqtags:
3468 if patch[1] in tags:
3468 if patch[1] in tags:
3469 self.ui.warn(_('tag %s overrides mq patch of the same '
3469 self.ui.warn(_('tag %s overrides mq patch of the same '
3470 'name\n') % patch[1])
3470 'name\n') % patch[1])
3471 else:
3471 else:
3472 tags[patch[1]] = patch[0]
3472 tags[patch[1]] = patch[0]
3473
3473
3474 return result
3474 return result
3475
3475
3476 if repo.local():
3476 if repo.local():
3477 repo.__class__ = mqrepo
3477 repo.__class__ = mqrepo
3478
3478
3479 repo._phasedefaults.append(mqphasedefaults)
3479 repo._phasedefaults.append(mqphasedefaults)
3480
3480
3481 def mqimport(orig, ui, repo, *args, **kwargs):
3481 def mqimport(orig, ui, repo, *args, **kwargs):
3482 if (util.safehasattr(repo, 'abortifwdirpatched')
3482 if (util.safehasattr(repo, 'abortifwdirpatched')
3483 and not kwargs.get('no_commit', False)):
3483 and not kwargs.get('no_commit', False)):
3484 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3484 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3485 kwargs.get('force'))
3485 kwargs.get('force'))
3486 return orig(ui, repo, *args, **kwargs)
3486 return orig(ui, repo, *args, **kwargs)
3487
3487
3488 def mqinit(orig, ui, *args, **kwargs):
3488 def mqinit(orig, ui, *args, **kwargs):
3489 mq = kwargs.pop('mq', None)
3489 mq = kwargs.pop('mq', None)
3490
3490
3491 if not mq:
3491 if not mq:
3492 return orig(ui, *args, **kwargs)
3492 return orig(ui, *args, **kwargs)
3493
3493
3494 if args:
3494 if args:
3495 repopath = args[0]
3495 repopath = args[0]
3496 if not hg.islocal(repopath):
3496 if not hg.islocal(repopath):
3497 raise util.Abort(_('only a local queue repository '
3497 raise util.Abort(_('only a local queue repository '
3498 'may be initialized'))
3498 'may be initialized'))
3499 else:
3499 else:
3500 repopath = cmdutil.findrepo(os.getcwd())
3500 repopath = cmdutil.findrepo(os.getcwd())
3501 if not repopath:
3501 if not repopath:
3502 raise util.Abort(_('there is no Mercurial repository here '
3502 raise util.Abort(_('there is no Mercurial repository here '
3503 '(.hg not found)'))
3503 '(.hg not found)'))
3504 repo = hg.repository(ui, repopath)
3504 repo = hg.repository(ui, repopath)
3505 return qinit(ui, repo, True)
3505 return qinit(ui, repo, True)
3506
3506
3507 def mqcommand(orig, ui, repo, *args, **kwargs):
3507 def mqcommand(orig, ui, repo, *args, **kwargs):
3508 """Add --mq option to operate on patch repository instead of main"""
3508 """Add --mq option to operate on patch repository instead of main"""
3509
3509
3510 # some commands do not like getting unknown options
3510 # some commands do not like getting unknown options
3511 mq = kwargs.pop('mq', None)
3511 mq = kwargs.pop('mq', None)
3512
3512
3513 if not mq:
3513 if not mq:
3514 return orig(ui, repo, *args, **kwargs)
3514 return orig(ui, repo, *args, **kwargs)
3515
3515
3516 q = repo.mq
3516 q = repo.mq
3517 r = q.qrepo()
3517 r = q.qrepo()
3518 if not r:
3518 if not r:
3519 raise util.Abort(_('no queue repository'))
3519 raise util.Abort(_('no queue repository'))
3520 return orig(r.ui, r, *args, **kwargs)
3520 return orig(r.ui, r, *args, **kwargs)
3521
3521
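In other words, when ``--mq`` is given the wrapped command is re-run against the queue's own repository, which is what ``q.qrepo()`` returns. A minimal, untested sketch (reusing the ``repo`` setup from the qfinish sketch earlier, and assuming the queue repository exists, e.g. created with ``hg qinit -c``)::

    r = repo.mq.qrepo()        # the patch queue's own repository, or None
    if r is None:
        print('no queue repository')
    else:
        print(r.root)          # commands invoked with --mq operate here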
3522 def summaryhook(ui, repo):
3522 def summaryhook(ui, repo):
3523 q = repo.mq
3523 q = repo.mq
3524 m = []
3524 m = []
3525 a, u = len(q.applied), len(q.unapplied(repo))
3525 a, u = len(q.applied), len(q.unapplied(repo))
3526 if a:
3526 if a:
3527 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3527 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3528 if u:
3528 if u:
3529 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3529 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3530 if m:
3530 if m:
3531 # i18n: column positioning for "hg summary"
3531 # i18n: column positioning for "hg summary"
3532 ui.write(_("mq: %s\n") % ', '.join(m))
3532 ui.write(_("mq: %s\n") % ', '.join(m))
3533 else:
3533 else:
3534 # i18n: column positioning for "hg summary"
3534 # i18n: column positioning for "hg summary"
3535 ui.note(_("mq: (empty queue)\n"))
3535 ui.note(_("mq: (empty queue)\n"))
3536
3536
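Given the format strings above, the extra line this hook contributes to ``hg summary`` looks like the following (illustrative counts)::

    mq: 2 applied, 1 unapplied

With an empty queue the line is only emitted at verbose level, via ``ui.note``.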
3537 def revsetmq(repo, subset, x):
3537 def revsetmq(repo, subset, x):
3538 """``mq()``
3538 """``mq()``
3539 Changesets managed by MQ.
3539 Changesets managed by MQ.
3540 """
3540 """
3541 revset.getargs(x, 0, 0, _("mq takes no arguments"))
3541 revset.getargs(x, 0, 0, _("mq takes no arguments"))
3542 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3542 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3543 return revset.baseset([r for r in subset if r in applied])
3543 return revset.baseset([r for r in subset if r in applied])
3544
3544
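Once extsetup() below registers this predicate, ``hg log -r "mq()"`` selects the applied patches; programmatically the equivalent is a one-liner (untested sketch, same setup as the qfinish example above)::

    print(list(repo.revs('mq()')))   # revision numbers of all applied mq patches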
3545 # tell hggettext to extract docstrings from these functions:
3545 # tell hggettext to extract docstrings from these functions:
3546 i18nfunctions = [revsetmq]
3546 i18nfunctions = [revsetmq]
3547
3547
3548 def extsetup(ui):
3548 def extsetup(ui):
3549 # Ensure mq wrappers are called first, regardless of extension load order, by
3549 # Ensure mq wrappers are called first, regardless of extension load order, by
3550 # NOT wrapping in uisetup() and instead deferring to init stage two here.
3550 # NOT wrapping in uisetup() and instead deferring to init stage two here.
3551 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3551 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3552
3552
3553 extensions.wrapcommand(commands.table, 'import', mqimport)
3553 extensions.wrapcommand(commands.table, 'import', mqimport)
3554 cmdutil.summaryhooks.add('mq', summaryhook)
3554 cmdutil.summaryhooks.add('mq', summaryhook)
3555
3555
3556 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3556 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3557 entry[1].extend(mqopt)
3557 entry[1].extend(mqopt)
3558
3558
3559 nowrap = set(commands.norepo.split(" "))
3559 nowrap = set(commands.norepo.split(" "))
3560
3560
3561 def dotable(cmdtable):
3561 def dotable(cmdtable):
3562 for cmd in cmdtable.keys():
3562 for cmd in cmdtable.keys():
3563 cmd = cmdutil.parsealiases(cmd)[0]
3563 cmd = cmdutil.parsealiases(cmd)[0]
3564 if cmd in nowrap:
3564 if cmd in nowrap:
3565 continue
3565 continue
3566 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3566 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3567 entry[1].extend(mqopt)
3567 entry[1].extend(mqopt)
3568
3568
3569 dotable(commands.table)
3569 dotable(commands.table)
3570
3570
3571 for extname, extmodule in extensions.extensions():
3571 for extname, extmodule in extensions.extensions():
3572 if extmodule.__file__ != __file__:
3572 if extmodule.__file__ != __file__:
3573 dotable(getattr(extmodule, 'cmdtable', {}))
3573 dotable(getattr(extmodule, 'cmdtable', {}))
3574
3574
3575 revset.symbols['mq'] = revsetmq
3575 revset.symbols['mq'] = revsetmq
3576
3576
3577 colortable = {'qguard.negative': 'red',
3577 colortable = {'qguard.negative': 'red',
3578 'qguard.positive': 'yellow',
3578 'qguard.positive': 'yellow',
3579 'qguard.unguarded': 'green',
3579 'qguard.unguarded': 'green',
3580 'qseries.applied': 'blue bold underline',
3580 'qseries.applied': 'blue bold underline',
3581 'qseries.guarded': 'black bold',
3581 'qseries.guarded': 'black bold',
3582 'qseries.missing': 'red bold',
3582 'qseries.missing': 'red bold',
3583 'qseries.unapplied': 'black bold'}
3583 'qseries.unapplied': 'black bold'}
@@ -1,1122 +1,1122
1 # rebase.py - rebasing feature for mercurial
1 # rebase.py - rebasing feature for mercurial
2 #
2 #
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to move sets of revisions to a different ancestor
8 '''command to move sets of revisions to a different ancestor
9
9
10 This extension lets you rebase changesets in an existing Mercurial
10 This extension lets you rebase changesets in an existing Mercurial
11 repository.
11 repository.
12
12
13 For more information:
13 For more information:
14 http://mercurial.selenic.com/wiki/RebaseExtension
14 http://mercurial.selenic.com/wiki/RebaseExtension
15 '''
15 '''
16
16
17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
19 from mercurial import copies, repoview
19 from mercurial import copies, repoview
20 from mercurial.commands import templateopts
20 from mercurial.commands import templateopts
21 from mercurial.node import nullrev, nullid, hex, short
21 from mercurial.node import nullrev, nullid, hex, short
22 from mercurial.lock import release
22 from mercurial.lock import release
23 from mercurial.i18n import _
23 from mercurial.i18n import _
24 import os, errno
24 import os, errno
25
25
26 revtodo = -1
26 revtodo = -1
27 nullmerge = -2
27 nullmerge = -2
28 revignored = -3
28 revignored = -3
29
29
30 cmdtable = {}
30 cmdtable = {}
31 command = cmdutil.command(cmdtable)
31 command = cmdutil.command(cmdtable)
32 # Note for extension authors: ONLY specify testedwith = 'internal' for
32 # Note for extension authors: ONLY specify testedwith = 'internal' for
33 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
33 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
34 # be specifying the version(s) of Mercurial they are tested with, or
34 # be specifying the version(s) of Mercurial they are tested with, or
35 # leave the attribute unspecified.
35 # leave the attribute unspecified.
36 testedwith = 'internal'
36 testedwith = 'internal'
37
37
38 def _savegraft(ctx, extra):
38 def _savegraft(ctx, extra):
39 s = ctx.extra().get('source', None)
39 s = ctx.extra().get('source', None)
40 if s is not None:
40 if s is not None:
41 extra['source'] = s
41 extra['source'] = s
42
42
43 def _savebranch(ctx, extra):
43 def _savebranch(ctx, extra):
44 extra['branch'] = ctx.branch()
44 extra['branch'] = ctx.branch()
45
45
46 def _makeextrafn(copiers):
46 def _makeextrafn(copiers):
47 """make an extrafn out of the given copy-functions.
47 """make an extrafn out of the given copy-functions.
48
48
49 A copy function takes a context and an extra dict, and mutates the
49 A copy function takes a context and an extra dict, and mutates the
50 extra dict as needed based on the given context.
50 extra dict as needed based on the given context.
51 """
51 """
52 def extrafn(ctx, extra):
52 def extrafn(ctx, extra):
53 for c in copiers:
53 for c in copiers:
54 c(ctx, extra)
54 c(ctx, extra)
55 return extrafn
55 return extrafn
56
56
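The two copiers above (``_savegraft`` and ``_savebranch``) show the expected shape; as a purely hypothetical illustration, a third copier could record the original hash and be composed the same way (the names ``_saveorigin`` and ``origin_hash`` are invented here)::

    def _saveorigin(ctx, extra):
        # hypothetical copier: remember the pre-rebase node of each changeset
        extra['origin_hash'] = ctx.hex()

    extrafn = _makeextrafn([_savegraft, _saveorigin])
    # rebase() hands extrafn to concludenode(), which calls extrafn(ctx, extra)
    # just before committing each rebased revision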
57 @command('rebase',
57 @command('rebase',
58 [('s', 'source', '',
58 [('s', 'source', '',
59 _('rebase the specified changeset and descendants'), _('REV')),
59 _('rebase the specified changeset and descendants'), _('REV')),
60 ('b', 'base', '',
60 ('b', 'base', '',
61 _('rebase everything from branching point of specified changeset'),
61 _('rebase everything from branching point of specified changeset'),
62 _('REV')),
62 _('REV')),
63 ('r', 'rev', [],
63 ('r', 'rev', [],
64 _('rebase these revisions'),
64 _('rebase these revisions'),
65 _('REV')),
65 _('REV')),
66 ('d', 'dest', '',
66 ('d', 'dest', '',
67 _('rebase onto the specified changeset'), _('REV')),
67 _('rebase onto the specified changeset'), _('REV')),
68 ('', 'collapse', False, _('collapse the rebased changesets')),
68 ('', 'collapse', False, _('collapse the rebased changesets')),
69 ('m', 'message', '',
69 ('m', 'message', '',
70 _('use text as collapse commit message'), _('TEXT')),
70 _('use text as collapse commit message'), _('TEXT')),
71 ('e', 'edit', False, _('invoke editor on commit messages')),
71 ('e', 'edit', False, _('invoke editor on commit messages')),
72 ('l', 'logfile', '',
72 ('l', 'logfile', '',
73 _('read collapse commit message from file'), _('FILE')),
73 _('read collapse commit message from file'), _('FILE')),
74 ('k', 'keep', False, _('keep original changesets')),
74 ('k', 'keep', False, _('keep original changesets')),
75 ('', 'keepbranches', False, _('keep original branch names')),
75 ('', 'keepbranches', False, _('keep original branch names')),
76 ('D', 'detach', False, _('(DEPRECATED)')),
76 ('D', 'detach', False, _('(DEPRECATED)')),
77 ('i', 'interactive', False, _('(DEPRECATED)')),
77 ('i', 'interactive', False, _('(DEPRECATED)')),
78 ('t', 'tool', '', _('specify merge tool')),
78 ('t', 'tool', '', _('specify merge tool')),
79 ('c', 'continue', False, _('continue an interrupted rebase')),
79 ('c', 'continue', False, _('continue an interrupted rebase')),
80 ('a', 'abort', False, _('abort an interrupted rebase'))] +
80 ('a', 'abort', False, _('abort an interrupted rebase'))] +
81 templateopts,
81 templateopts,
82 _('[-s REV | -b REV] [-d REV] [OPTION]'))
82 _('[-s REV | -b REV] [-d REV] [OPTION]'))
83 def rebase(ui, repo, **opts):
83 def rebase(ui, repo, **opts):
84 """move changeset (and descendants) to a different branch
84 """move changeset (and descendants) to a different branch
85
85
86 Rebase uses repeated merging to graft changesets from one part of
86 Rebase uses repeated merging to graft changesets from one part of
87 history (the source) onto another (the destination). This can be
87 history (the source) onto another (the destination). This can be
88 useful for linearizing *local* changes relative to a master
88 useful for linearizing *local* changes relative to a master
89 development tree.
89 development tree.
90
90
91 You should not rebase changesets that have already been shared
91 You should not rebase changesets that have already been shared
92 with others. Doing so will force everybody else to perform the
92 with others. Doing so will force everybody else to perform the
93 same rebase or they will end up with duplicated changesets after
93 same rebase or they will end up with duplicated changesets after
94 pulling in your rebased changesets.
94 pulling in your rebased changesets.
95
95
96 In its default configuration, Mercurial will prevent you from
96 In its default configuration, Mercurial will prevent you from
97 rebasing published changes. See :hg:`help phases` for details.
97 rebasing published changes. See :hg:`help phases` for details.
98
98
99 If you don't specify a destination changeset (``-d/--dest``),
99 If you don't specify a destination changeset (``-d/--dest``),
100 rebase uses the current branch tip as the destination. (The
100 rebase uses the current branch tip as the destination. (The
101 destination changeset is not modified by rebasing, but new
101 destination changeset is not modified by rebasing, but new
102 changesets are added as its descendants.)
102 changesets are added as its descendants.)
103
103
104 You can specify which changesets to rebase in two ways: as a
104 You can specify which changesets to rebase in two ways: as a
105 "source" changeset or as a "base" changeset. Both are shorthand
105 "source" changeset or as a "base" changeset. Both are shorthand
106 for a topologically related set of changesets (the "source
106 for a topologically related set of changesets (the "source
107 branch"). If you specify source (``-s/--source``), rebase will
107 branch"). If you specify source (``-s/--source``), rebase will
108 rebase that changeset and all of its descendants onto dest. If you
108 rebase that changeset and all of its descendants onto dest. If you
109 specify base (``-b/--base``), rebase will select ancestors of base
109 specify base (``-b/--base``), rebase will select ancestors of base
110 back to but not including the common ancestor with dest. Thus,
110 back to but not including the common ancestor with dest. Thus,
111 ``-b`` is less precise but more convenient than ``-s``: you can
111 ``-b`` is less precise but more convenient than ``-s``: you can
112 specify any changeset in the source branch, and rebase will select
112 specify any changeset in the source branch, and rebase will select
113 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
113 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
114 uses the parent of the working directory as the base.
114 uses the parent of the working directory as the base.
115
115
116 For advanced usage, a third way is available through the ``--rev``
116 For advanced usage, a third way is available through the ``--rev``
117 option. It allows you to specify an arbitrary set of changesets to
117 option. It allows you to specify an arbitrary set of changesets to
118 rebase. Descendants of revs you specify with this option are not
118 rebase. Descendants of revs you specify with this option are not
119 automatically included in the rebase.
119 automatically included in the rebase.
120
120
121 By default, rebase recreates the changesets in the source branch
121 By default, rebase recreates the changesets in the source branch
122 as descendants of dest and then destroys the originals. Use
122 as descendants of dest and then destroys the originals. Use
123 ``--keep`` to preserve the original source changesets. Some
123 ``--keep`` to preserve the original source changesets. Some
124 changesets in the source branch (e.g. merges from the destination
124 changesets in the source branch (e.g. merges from the destination
125 branch) may be dropped if they no longer contribute any change.
125 branch) may be dropped if they no longer contribute any change.
126
126
127 One result of the rules for selecting the destination changeset
127 One result of the rules for selecting the destination changeset
128 and source branch is that, unlike ``merge``, rebase will do
128 and source branch is that, unlike ``merge``, rebase will do
129 nothing if you are at the branch tip of a named branch
129 nothing if you are at the branch tip of a named branch
130 with two heads. You need to explicitly specify source and/or
130 with two heads. You need to explicitly specify source and/or
131 destination (or ``update`` to the other head, if it's the head of
131 destination (or ``update`` to the other head, if it's the head of
132 the intended source branch).
132 the intended source branch).
133
133
134 If a rebase is interrupted to manually resolve a merge, it can be
134 If a rebase is interrupted to manually resolve a merge, it can be
135 continued with --continue/-c or aborted with --abort/-a.
135 continued with --continue/-c or aborted with --abort/-a.
136
136
137 .. container:: verbose
137 .. container:: verbose
138
138
139 Examples:
139 Examples:
140
140
141 - move "local changes" (current commit back to branching point)
141 - move "local changes" (current commit back to branching point)
142 to the current branch tip after a pull::
142 to the current branch tip after a pull::
143
143
144 hg rebase
144 hg rebase
145
145
146 - move a single changeset to the stable branch::
146 - move a single changeset to the stable branch::
147
147
148 hg rebase -r 5f493448 -d stable
148 hg rebase -r 5f493448 -d stable
149
149
150 - splice a commit and all its descendants onto another part of history::
150 - splice a commit and all its descendants onto another part of history::
151
151
152 hg rebase --source c0c3 --dest 4cf9
152 hg rebase --source c0c3 --dest 4cf9
153
153
154 - rebase everything on a branch marked by a bookmark onto the
154 - rebase everything on a branch marked by a bookmark onto the
155 default branch::
155 default branch::
156
156
157 hg rebase --base myfeature --dest default
157 hg rebase --base myfeature --dest default
158
158
159 - collapse a sequence of changes into a single commit::
159 - collapse a sequence of changes into a single commit::
160
160
161 hg rebase --collapse -r 1520:1525 -d .
161 hg rebase --collapse -r 1520:1525 -d .
162
162
163 - move a named branch while preserving its name::
163 - move a named branch while preserving its name::
164
164
165 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
165 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
166
166
167 Returns 0 on success, 1 if nothing to rebase or there are
167 Returns 0 on success, 1 if nothing to rebase or there are
168 unresolved conflicts.
168 unresolved conflicts.
169
169
170 """
170 """
171 originalwd = target = None
171 originalwd = target = None
172 activebookmark = None
172 activebookmark = None
173 external = nullrev
173 external = nullrev
174 state = {}
174 state = {}
175 skipped = set()
175 skipped = set()
176 targetancestors = set()
176 targetancestors = set()
177
177
178
178
179 lock = wlock = None
179 lock = wlock = None
180 try:
180 try:
181 wlock = repo.wlock()
181 wlock = repo.wlock()
182 lock = repo.lock()
182 lock = repo.lock()
183
183
184 # Validate input and define rebasing points
184 # Validate input and define rebasing points
185 destf = opts.get('dest', None)
185 destf = opts.get('dest', None)
186 srcf = opts.get('source', None)
186 srcf = opts.get('source', None)
187 basef = opts.get('base', None)
187 basef = opts.get('base', None)
188 revf = opts.get('rev', [])
188 revf = opts.get('rev', [])
189 contf = opts.get('continue')
189 contf = opts.get('continue')
190 abortf = opts.get('abort')
190 abortf = opts.get('abort')
191 collapsef = opts.get('collapse', False)
191 collapsef = opts.get('collapse', False)
192 collapsemsg = cmdutil.logmessage(ui, opts)
192 collapsemsg = cmdutil.logmessage(ui, opts)
193 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
193 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
194 extrafns = [_savegraft]
194 extrafns = [_savegraft]
195 if e:
195 if e:
196 extrafns = [e]
196 extrafns = [e]
197 keepf = opts.get('keep', False)
197 keepf = opts.get('keep', False)
198 keepbranchesf = opts.get('keepbranches', False)
198 keepbranchesf = opts.get('keepbranches', False)
199 # keepopen is not meant for use on the command line, but by
199 # keepopen is not meant for use on the command line, but by
200 # other extensions
200 # other extensions
201 keepopen = opts.get('keepopen', False)
201 keepopen = opts.get('keepopen', False)
202
202
203 if opts.get('interactive'):
203 if opts.get('interactive'):
204 msg = _("interactive history editing is supported by the "
204 msg = _("interactive history editing is supported by the "
205 "'histedit' extension (see \"hg help histedit\")")
205 "'histedit' extension (see \"hg help histedit\")")
206 raise util.Abort(msg)
206 raise util.Abort(msg)
207
207
208 if collapsemsg and not collapsef:
208 if collapsemsg and not collapsef:
209 raise util.Abort(
209 raise util.Abort(
210 _('message can only be specified with collapse'))
210 _('message can only be specified with collapse'))
211
211
212 if contf or abortf:
212 if contf or abortf:
213 if contf and abortf:
213 if contf and abortf:
214 raise util.Abort(_('cannot use both abort and continue'))
214 raise util.Abort(_('cannot use both abort and continue'))
215 if collapsef:
215 if collapsef:
216 raise util.Abort(
216 raise util.Abort(
217 _('cannot use collapse with continue or abort'))
217 _('cannot use collapse with continue or abort'))
218 if srcf or basef or destf:
218 if srcf or basef or destf:
219 raise util.Abort(
219 raise util.Abort(
220 _('abort and continue do not allow specifying revisions'))
220 _('abort and continue do not allow specifying revisions'))
221 if opts.get('tool', False):
221 if opts.get('tool', False):
222 ui.warn(_('tool option will be ignored\n'))
222 ui.warn(_('tool option will be ignored\n'))
223
223
224 try:
224 try:
225 (originalwd, target, state, skipped, collapsef, keepf,
225 (originalwd, target, state, skipped, collapsef, keepf,
226 keepbranchesf, external, activebookmark) = restorestatus(repo)
226 keepbranchesf, external, activebookmark) = restorestatus(repo)
227 except error.RepoLookupError:
227 except error.RepoLookupError:
228 if abortf:
228 if abortf:
229 clearstatus(repo)
229 clearstatus(repo)
230 repo.ui.warn(_('rebase aborted (no revision is removed,'
230 repo.ui.warn(_('rebase aborted (no revision is removed,'
231 ' only broken state is cleared)\n'))
231 ' only broken state is cleared)\n'))
232 return 0
232 return 0
233 else:
233 else:
234 msg = _('cannot continue inconsistent rebase')
234 msg = _('cannot continue inconsistent rebase')
235 hint = _('use "hg rebase --abort" to clear broken state')
235 hint = _('use "hg rebase --abort" to clear broken state')
236 raise util.Abort(msg, hint=hint)
236 raise util.Abort(msg, hint=hint)
237 if abortf:
237 if abortf:
238 return abort(repo, originalwd, target, state,
238 return abort(repo, originalwd, target, state,
239 activebookmark=activebookmark)
239 activebookmark=activebookmark)
240 else:
240 else:
241 if srcf and basef:
241 if srcf and basef:
242 raise util.Abort(_('cannot specify both a '
242 raise util.Abort(_('cannot specify both a '
243 'source and a base'))
243 'source and a base'))
244 if revf and basef:
244 if revf and basef:
245 raise util.Abort(_('cannot specify both a '
245 raise util.Abort(_('cannot specify both a '
246 'revision and a base'))
246 'revision and a base'))
247 if revf and srcf:
247 if revf and srcf:
248 raise util.Abort(_('cannot specify both a '
248 raise util.Abort(_('cannot specify both a '
249 'revision and a source'))
249 'revision and a source'))
250
250
251 cmdutil.checkunfinished(repo)
251 cmdutil.checkunfinished(repo)
252 cmdutil.bailifchanged(repo)
252 cmdutil.bailifchanged(repo)
253
253
254 if not destf:
254 if not destf:
255 # Destination defaults to the latest revision in the
255 # Destination defaults to the latest revision in the
256 # current branch
256 # current branch
257 branch = repo[None].branch()
257 branch = repo[None].branch()
258 dest = repo[branch]
258 dest = repo[branch]
259 else:
259 else:
260 dest = scmutil.revsingle(repo, destf)
260 dest = scmutil.revsingle(repo, destf)
261
261
262 if revf:
262 if revf:
263 rebaseset = scmutil.revrange(repo, revf)
263 rebaseset = scmutil.revrange(repo, revf)
264 if not rebaseset:
264 if not rebaseset:
265 ui.status(_('empty "rev" revision set - '
265 ui.status(_('empty "rev" revision set - '
266 'nothing to rebase\n'))
266 'nothing to rebase\n'))
267 return 1
267 return 1
268 elif srcf:
268 elif srcf:
269 src = scmutil.revrange(repo, [srcf])
269 src = scmutil.revrange(repo, [srcf])
270 if not src:
270 if not src:
271 ui.status(_('empty "source" revision set - '
271 ui.status(_('empty "source" revision set - '
272 'nothing to rebase\n'))
272 'nothing to rebase\n'))
273 return 1
273 return 1
274 rebaseset = repo.revs('(%ld)::', src)
274 rebaseset = repo.revs('(%ld)::', src)
275 assert rebaseset
275 assert rebaseset
276 else:
276 else:
277 base = scmutil.revrange(repo, [basef or '.'])
277 base = scmutil.revrange(repo, [basef or '.'])
278 if not base:
278 if not base:
279 ui.status(_('empty "base" revision set - '
279 ui.status(_('empty "base" revision set - '
280 "can't compute rebase set\n"))
280 "can't compute rebase set\n"))
281 return 1
281 return 1
282 commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
282 commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
283 if commonanc is not None:
283 if commonanc is not None:
284 rebaseset = repo.revs('(%d::(%ld) - %d)::',
284 rebaseset = repo.revs('(%d::(%ld) - %d)::',
285 commonanc, base, commonanc)
285 commonanc, base, commonanc)
286 else:
286 else:
287 rebaseset = []
287 rebaseset = []
288
288
289 if not rebaseset:
289 if not rebaseset:
290 # transform to list because smartsets are not comparable to
290 # transform to list because smartsets are not comparable to
291 # lists. This should be improved to honor laziness of
291 # lists. This should be improved to preserve the laziness of
291 # lists. This should be improved to preserve the laziness of
292 # smartsets.
292 # smartsets.
293 if list(base) == [dest.rev()]:
294 if basef:
294 if basef:
295 ui.status(_('nothing to rebase - %s is both "base"'
295 ui.status(_('nothing to rebase - %s is both "base"'
296 ' and destination\n') % dest)
296 ' and destination\n') % dest)
297 else:
297 else:
298 ui.status(_('nothing to rebase - working directory '
298 ui.status(_('nothing to rebase - working directory '
299 'parent is also destination\n'))
299 'parent is also destination\n'))
300 elif not repo.revs('%ld - ::%d', base, dest):
300 elif not repo.revs('%ld - ::%d', base, dest):
301 if basef:
301 if basef:
302 ui.status(_('nothing to rebase - "base" %s is '
302 ui.status(_('nothing to rebase - "base" %s is '
303 'already an ancestor of destination '
303 'already an ancestor of destination '
304 '%s\n') %
304 '%s\n') %
305 ('+'.join(str(repo[r]) for r in base),
305 ('+'.join(str(repo[r]) for r in base),
306 dest))
306 dest))
307 else:
307 else:
308 ui.status(_('nothing to rebase - working '
308 ui.status(_('nothing to rebase - working '
309 'directory parent is already an '
309 'directory parent is already an '
310 'ancestor of destination %s\n') % dest)
310 'ancestor of destination %s\n') % dest)
311 else: # can it happen?
311 else: # can it happen?
312 ui.status(_('nothing to rebase from %s to %s\n') %
312 ui.status(_('nothing to rebase from %s to %s\n') %
313 ('+'.join(str(repo[r]) for r in base), dest))
313 ('+'.join(str(repo[r]) for r in base), dest))
314 return 1
314 return 1
315
315
316 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
316 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
317 if (not (keepf or allowunstable)
317 if (not (keepf or allowunstable)
318 and repo.revs('first(children(%ld) - %ld)',
318 and repo.revs('first(children(%ld) - %ld)',
319 rebaseset, rebaseset)):
319 rebaseset, rebaseset)):
320 raise util.Abort(
320 raise util.Abort(
321 _("can't remove original changesets with"
321 _("can't remove original changesets with"
322 " unrebased descendants"),
322 " unrebased descendants"),
323 hint=_('use --keep to keep original changesets'))
323 hint=_('use --keep to keep original changesets'))
324
324
325 result = buildstate(repo, dest, rebaseset, collapsef)
325 result = buildstate(repo, dest, rebaseset, collapsef)
326 if not result:
326 if not result:
327 # Empty state built, nothing to rebase
327 # Empty state built, nothing to rebase
328 ui.status(_('nothing to rebase\n'))
328 ui.status(_('nothing to rebase\n'))
329 return 1
329 return 1
330
330
331 root = min(rebaseset)
331 root = min(rebaseset)
332 if not keepf and not repo[root].mutable():
332 if not keepf and not repo[root].mutable():
333 raise util.Abort(_("can't rebase immutable changeset %s")
333 raise util.Abort(_("can't rebase public changeset %s")
334 % repo[root],
334 % repo[root],
335 hint=_('see "hg help phases" for details'))
335 hint=_('see "hg help phases" for details'))
336
336
337 originalwd, target, state = result
337 originalwd, target, state = result
338 if collapsef:
338 if collapsef:
339 targetancestors = repo.changelog.ancestors([target],
339 targetancestors = repo.changelog.ancestors([target],
340 inclusive=True)
340 inclusive=True)
341 external = externalparent(repo, state, targetancestors)
341 external = externalparent(repo, state, targetancestors)
342
342
343 if dest.closesbranch() and not keepbranchesf:
343 if dest.closesbranch() and not keepbranchesf:
344 ui.status(_('reopening closed branch head %s\n') % dest)
344 ui.status(_('reopening closed branch head %s\n') % dest)
345
345
346 if keepbranchesf:
346 if keepbranchesf:
347 # insert _savebranch at the start of extrafns so if
347 # insert _savebranch at the start of extrafns so if
348 # there's a user-provided extrafn it can clobber branch if
348 # there's a user-provided extrafn it can clobber branch if
349 # desired
349 # desired
350 extrafns.insert(0, _savebranch)
350 extrafns.insert(0, _savebranch)
351 if collapsef:
351 if collapsef:
352 branches = set()
352 branches = set()
353 for rev in state:
353 for rev in state:
354 branches.add(repo[rev].branch())
354 branches.add(repo[rev].branch())
355 if len(branches) > 1:
355 if len(branches) > 1:
356 raise util.Abort(_('cannot collapse multiple named '
356 raise util.Abort(_('cannot collapse multiple named '
357 'branches'))
357 'branches'))
358
358
359 # Rebase
359 # Rebase
360 if not targetancestors:
360 if not targetancestors:
361 targetancestors = repo.changelog.ancestors([target], inclusive=True)
361 targetancestors = repo.changelog.ancestors([target], inclusive=True)
362
362
363 # Keep track of the current bookmarks in order to reset them later
363 # Keep track of the current bookmarks in order to reset them later
364 currentbookmarks = repo._bookmarks.copy()
364 currentbookmarks = repo._bookmarks.copy()
365 activebookmark = activebookmark or repo._activebookmark
365 activebookmark = activebookmark or repo._activebookmark
366 if activebookmark:
366 if activebookmark:
367 bookmarks.deactivate(repo)
367 bookmarks.deactivate(repo)
368
368
369 extrafn = _makeextrafn(extrafns)
369 extrafn = _makeextrafn(extrafns)
370
370
371 sortedstate = sorted(state)
371 sortedstate = sorted(state)
372 total = len(sortedstate)
372 total = len(sortedstate)
373 pos = 0
373 pos = 0
374 for rev in sortedstate:
374 for rev in sortedstate:
375 ctx = repo[rev]
375 ctx = repo[rev]
376 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
376 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
377 ctx.description().split('\n', 1)[0])
377 ctx.description().split('\n', 1)[0])
378 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
378 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
379 if names:
379 if names:
380 desc += ' (%s)' % ' '.join(names)
380 desc += ' (%s)' % ' '.join(names)
381 pos += 1
381 pos += 1
382 if state[rev] == revtodo:
382 if state[rev] == revtodo:
383 ui.status(_('rebasing %s\n') % desc)
383 ui.status(_('rebasing %s\n') % desc)
384 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
384 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
385 _('changesets'), total)
385 _('changesets'), total)
386 p1, p2, base = defineparents(repo, rev, target, state,
386 p1, p2, base = defineparents(repo, rev, target, state,
387 targetancestors)
387 targetancestors)
388 storestatus(repo, originalwd, target, state, collapsef, keepf,
388 storestatus(repo, originalwd, target, state, collapsef, keepf,
389 keepbranchesf, external, activebookmark)
389 keepbranchesf, external, activebookmark)
390 if len(repo.parents()) == 2:
390 if len(repo.parents()) == 2:
391 repo.ui.debug('resuming interrupted rebase\n')
391 repo.ui.debug('resuming interrupted rebase\n')
392 else:
392 else:
393 try:
393 try:
394 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
394 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
395 'rebase')
395 'rebase')
396 stats = rebasenode(repo, rev, p1, base, state,
396 stats = rebasenode(repo, rev, p1, base, state,
397 collapsef, target)
397 collapsef, target)
398 if stats and stats[3] > 0:
398 if stats and stats[3] > 0:
399 raise error.InterventionRequired(
399 raise error.InterventionRequired(
400 _('unresolved conflicts (see hg '
400 _('unresolved conflicts (see hg '
401 'resolve, then hg rebase --continue)'))
401 'resolve, then hg rebase --continue)'))
402 finally:
402 finally:
403 ui.setconfig('ui', 'forcemerge', '', 'rebase')
403 ui.setconfig('ui', 'forcemerge', '', 'rebase')
404 if not collapsef:
404 if not collapsef:
405 merging = p2 != nullrev
405 merging = p2 != nullrev
406 editform = cmdutil.mergeeditform(merging, 'rebase')
406 editform = cmdutil.mergeeditform(merging, 'rebase')
407 editor = cmdutil.getcommiteditor(editform=editform, **opts)
407 editor = cmdutil.getcommiteditor(editform=editform, **opts)
408 newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
408 newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
409 editor=editor)
409 editor=editor)
410 else:
410 else:
411 # Skip commit if we are collapsing
411 # Skip commit if we are collapsing
412 repo.dirstate.beginparentchange()
412 repo.dirstate.beginparentchange()
413 repo.setparents(repo[p1].node())
413 repo.setparents(repo[p1].node())
414 repo.dirstate.endparentchange()
414 repo.dirstate.endparentchange()
415 newnode = None
415 newnode = None
416 # Update the state
416 # Update the state
417 if newnode is not None:
417 if newnode is not None:
418 state[rev] = repo[newnode].rev()
418 state[rev] = repo[newnode].rev()
419 ui.debug('rebased as %s\n' % short(newnode))
419 ui.debug('rebased as %s\n' % short(newnode))
420 else:
420 else:
421 ui.warn(_('note: rebase of %d:%s created no changes '
421 ui.warn(_('note: rebase of %d:%s created no changes '
422 'to commit\n') % (rev, ctx))
422 'to commit\n') % (rev, ctx))
423 if not collapsef:
423 if not collapsef:
424 skipped.add(rev)
424 skipped.add(rev)
425 state[rev] = p1
425 state[rev] = p1
426 ui.debug('next revision set to %s\n' % p1)
426 ui.debug('next revision set to %s\n' % p1)
427 elif state[rev] == nullmerge:
427 elif state[rev] == nullmerge:
428 ui.debug('ignoring null merge rebase of %s\n' % rev)
428 ui.debug('ignoring null merge rebase of %s\n' % rev)
429 elif state[rev] == revignored:
429 elif state[rev] == revignored:
430 ui.status(_('not rebasing ignored %s\n') % desc)
430 ui.status(_('not rebasing ignored %s\n') % desc)
431 else:
431 else:
432 ui.status(_('already rebased %s as %s\n') %
432 ui.status(_('already rebased %s as %s\n') %
433 (desc, repo[state[rev]]))
433 (desc, repo[state[rev]]))
434
434
435 ui.progress(_('rebasing'), None)
435 ui.progress(_('rebasing'), None)
436 ui.note(_('rebase merging completed\n'))
436 ui.note(_('rebase merging completed\n'))
437
437
438 if collapsef and not keepopen:
438 if collapsef and not keepopen:
439 p1, p2, _base = defineparents(repo, min(state), target,
439 p1, p2, _base = defineparents(repo, min(state), target,
440 state, targetancestors)
440 state, targetancestors)
441 editopt = opts.get('edit')
441 editopt = opts.get('edit')
442 editform = 'rebase.collapse'
442 editform = 'rebase.collapse'
443 if collapsemsg:
443 if collapsemsg:
444 commitmsg = collapsemsg
444 commitmsg = collapsemsg
445 else:
445 else:
446 commitmsg = 'Collapsed revision'
446 commitmsg = 'Collapsed revision'
447 for rebased in state:
447 for rebased in state:
448 if rebased not in skipped and state[rebased] > nullmerge:
448 if rebased not in skipped and state[rebased] > nullmerge:
449 commitmsg += '\n* %s' % repo[rebased].description()
449 commitmsg += '\n* %s' % repo[rebased].description()
450 editopt = True
450 editopt = True
451 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
451 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
452 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
452 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
453 extrafn=extrafn, editor=editor)
453 extrafn=extrafn, editor=editor)
454 if newnode is None:
454 if newnode is None:
455 newrev = target
455 newrev = target
456 else:
456 else:
457 newrev = repo[newnode].rev()
457 newrev = repo[newnode].rev()
458 for oldrev in state.iterkeys():
458 for oldrev in state.iterkeys():
459 if state[oldrev] > nullmerge:
459 if state[oldrev] > nullmerge:
460 state[oldrev] = newrev
460 state[oldrev] = newrev
461
461
462 if 'qtip' in repo.tags():
462 if 'qtip' in repo.tags():
463 updatemq(repo, state, skipped, **opts)
463 updatemq(repo, state, skipped, **opts)
464
464
465 if currentbookmarks:
465 if currentbookmarks:
466 # Nodeids are needed to reset bookmarks
466 # Nodeids are needed to reset bookmarks
467 nstate = {}
467 nstate = {}
468 for k, v in state.iteritems():
468 for k, v in state.iteritems():
469 if v > nullmerge:
469 if v > nullmerge:
470 nstate[repo[k].node()] = repo[v].node()
470 nstate[repo[k].node()] = repo[v].node()
471 # XXX this is the same as dest.node() for the non-continue path --
471 # XXX this is the same as dest.node() for the non-continue path --
472 # this should probably be cleaned up
472 # this should probably be cleaned up
473 targetnode = repo[target].node()
473 targetnode = repo[target].node()
474
474
475 # restore original working directory
475 # restore original working directory
476 # (we do this before stripping)
476 # (we do this before stripping)
477 newwd = state.get(originalwd, originalwd)
477 newwd = state.get(originalwd, originalwd)
478 if newwd < 0:
478 if newwd < 0:
479 # original directory is a parent of rebase set root or ignored
479 # original directory is a parent of rebase set root or ignored
480 newwd = originalwd
480 newwd = originalwd
481 if newwd not in [c.rev() for c in repo[None].parents()]:
481 if newwd not in [c.rev() for c in repo[None].parents()]:
482 ui.note(_("update back to initial working directory parent\n"))
482 ui.note(_("update back to initial working directory parent\n"))
483 hg.updaterepo(repo, newwd, False)
483 hg.updaterepo(repo, newwd, False)
484
484
485 if not keepf:
485 if not keepf:
486 collapsedas = None
486 collapsedas = None
487 if collapsef:
487 if collapsef:
488 collapsedas = newnode
488 collapsedas = newnode
489 clearrebased(ui, repo, state, skipped, collapsedas)
489 clearrebased(ui, repo, state, skipped, collapsedas)
490
490
491 if currentbookmarks:
491 if currentbookmarks:
492 updatebookmarks(repo, targetnode, nstate, currentbookmarks)
492 updatebookmarks(repo, targetnode, nstate, currentbookmarks)
493 if activebookmark not in repo._bookmarks:
493 if activebookmark not in repo._bookmarks:
494 # active bookmark was divergent one and has been deleted
494 # active bookmark was divergent one and has been deleted
495 activebookmark = None
495 activebookmark = None
496
496
497 clearstatus(repo)
497 clearstatus(repo)
498 ui.note(_("rebase completed\n"))
498 ui.note(_("rebase completed\n"))
499 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
499 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
500 if skipped:
500 if skipped:
501 ui.note(_("%d revisions have been skipped\n") % len(skipped))
501 ui.note(_("%d revisions have been skipped\n") % len(skipped))
502
502
503 if (activebookmark and
503 if (activebookmark and
504 repo['.'].node() == repo._bookmarks[activebookmark]):
504 repo['.'].node() == repo._bookmarks[activebookmark]):
505 bookmarks.activate(repo, activebookmark)
505 bookmarks.activate(repo, activebookmark)
506
506
507 finally:
507 finally:
508 release(lock, wlock)
508 release(lock, wlock)
509
509
510 def externalparent(repo, state, targetancestors):
510 def externalparent(repo, state, targetancestors):
511 """Return the revision that should be used as the second parent
511 """Return the revision that should be used as the second parent
512 when the revisions in state are collapsed on top of targetancestors.
512 when the revisions in state are collapsed on top of targetancestors.
513 Abort if there is more than one parent.
513 Abort if there is more than one parent.
514 """
514 """
515 parents = set()
515 parents = set()
516 source = min(state)
516 source = min(state)
517 for rev in state:
517 for rev in state:
518 if rev == source:
518 if rev == source:
519 continue
519 continue
520 for p in repo[rev].parents():
520 for p in repo[rev].parents():
521 if (p.rev() not in state
521 if (p.rev() not in state
522 and p.rev() not in targetancestors):
522 and p.rev() not in targetancestors):
523 parents.add(p.rev())
523 parents.add(p.rev())
524 if not parents:
524 if not parents:
525 return nullrev
525 return nullrev
526 if len(parents) == 1:
526 if len(parents) == 1:
527 return parents.pop()
527 return parents.pop()
528 raise util.Abort(_('unable to collapse on top of %s, there is more '
528 raise util.Abort(_('unable to collapse on top of %s, there is more '
529 'than one external parent: %s') %
529 'than one external parent: %s') %
530 (max(targetancestors),
530 (max(targetancestors),
531 ', '.join(str(p) for p in sorted(parents))))
531 ', '.join(str(p) for p in sorted(parents))))
532
532
533 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None):
533 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None):
534 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
534 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
535 but also store useful information in extra.
535 but also store useful information in extra.
536 Return node of committed revision.'''
536 Return node of committed revision.'''
537 dsguard = cmdutil.dirstateguard(repo, 'rebase')
537 dsguard = cmdutil.dirstateguard(repo, 'rebase')
538 try:
538 try:
539 repo.setparents(repo[p1].node(), repo[p2].node())
539 repo.setparents(repo[p1].node(), repo[p2].node())
540 ctx = repo[rev]
540 ctx = repo[rev]
541 if commitmsg is None:
541 if commitmsg is None:
542 commitmsg = ctx.description()
542 commitmsg = ctx.description()
543 extra = {'rebase_source': ctx.hex()}
543 extra = {'rebase_source': ctx.hex()}
544 if extrafn:
544 if extrafn:
545 extrafn(ctx, extra)
545 extrafn(ctx, extra)
546
546
547 backup = repo.ui.backupconfig('phases', 'new-commit')
547 backup = repo.ui.backupconfig('phases', 'new-commit')
548 try:
548 try:
549 targetphase = max(ctx.phase(), phases.draft)
549 targetphase = max(ctx.phase(), phases.draft)
550 repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
550 repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
551 # Commit might fail if unresolved files exist
551 # Commit might fail if unresolved files exist
552 newnode = repo.commit(text=commitmsg, user=ctx.user(),
552 newnode = repo.commit(text=commitmsg, user=ctx.user(),
553 date=ctx.date(), extra=extra, editor=editor)
553 date=ctx.date(), extra=extra, editor=editor)
554 finally:
554 finally:
555 repo.ui.restoreconfig(backup)
555 repo.ui.restoreconfig(backup)
556
556
557 repo.dirstate.setbranch(repo[newnode].branch())
557 repo.dirstate.setbranch(repo[newnode].branch())
558 dsguard.close()
558 dsguard.close()
559 return newnode
559 return newnode
560 finally:
560 finally:
561 release(dsguard)
561 release(dsguard)
562
562
563 def rebasenode(repo, rev, p1, base, state, collapse, target):
563 def rebasenode(repo, rev, p1, base, state, collapse, target):
564 'Rebase a single revision rev on top of p1 using base as merge ancestor'
564 'Rebase a single revision rev on top of p1 using base as merge ancestor'
565 # Merge phase
565 # Merge phase
566 # Update to target and merge it with local
566 # Update to target and merge it with local
567 if repo['.'].rev() != p1:
567 if repo['.'].rev() != p1:
568 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
568 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
569 merge.update(repo, p1, False, True, False)
569 merge.update(repo, p1, False, True, False)
570 else:
570 else:
571 repo.ui.debug(" already in target\n")
571 repo.ui.debug(" already in target\n")
572 repo.dirstate.write()
572 repo.dirstate.write()
573 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
573 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
574 if base is not None:
574 if base is not None:
575 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
575 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
576 # When collapsing in-place, the parent is the common ancestor, so we
576 # When collapsing in-place, the parent is the common ancestor, so we
577 # have to allow merging with it.
577 # have to allow merging with it.
578 stats = merge.update(repo, rev, True, True, False, base, collapse,
578 stats = merge.update(repo, rev, True, True, False, base, collapse,
579 labels=['dest', 'source'])
579 labels=['dest', 'source'])
580 if collapse:
580 if collapse:
581 copies.duplicatecopies(repo, rev, target)
581 copies.duplicatecopies(repo, rev, target)
582 else:
582 else:
583 # If we're not using --collapse, we need to
583 # If we're not using --collapse, we need to
584 # duplicate copies between the revision we're
584 # duplicate copies between the revision we're
585 # rebasing and its first parent, but *not*
585 # rebasing and its first parent, but *not*
586 # duplicate any copies that have already been
586 # duplicate any copies that have already been
587 # performed in the destination.
587 # performed in the destination.
588 p1rev = repo[rev].p1().rev()
588 p1rev = repo[rev].p1().rev()
589 copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
589 copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
590 return stats
590 return stats
591
591
592 def nearestrebased(repo, rev, state):
592 def nearestrebased(repo, rev, state):
593 """return the nearest ancestors of rev in the rebase result"""
593 """return the nearest ancestors of rev in the rebase result"""
594 rebased = [r for r in state if state[r] > nullmerge]
594 rebased = [r for r in state if state[r] > nullmerge]
595 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
595 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
596 if candidates:
596 if candidates:
597 return state[candidates.first()]
597 return state[candidates.first()]
598 else:
598 else:
599 return None
599 return None
600
600
601 def defineparents(repo, rev, target, state, targetancestors):
601 def defineparents(repo, rev, target, state, targetancestors):
602 'Return the new parent relationship of the revision that will be rebased'
602 'Return the new parent relationship of the revision that will be rebased'
603 parents = repo[rev].parents()
603 parents = repo[rev].parents()
604 p1 = p2 = nullrev
604 p1 = p2 = nullrev
605
605
606 p1n = parents[0].rev()
606 p1n = parents[0].rev()
607 if p1n in targetancestors:
607 if p1n in targetancestors:
608 p1 = target
608 p1 = target
609 elif p1n in state:
609 elif p1n in state:
610 if state[p1n] == nullmerge:
610 if state[p1n] == nullmerge:
611 p1 = target
611 p1 = target
612 elif state[p1n] == revignored:
612 elif state[p1n] == revignored:
613 p1 = nearestrebased(repo, p1n, state)
613 p1 = nearestrebased(repo, p1n, state)
614 if p1 is None:
614 if p1 is None:
615 p1 = target
615 p1 = target
616 else:
616 else:
617 p1 = state[p1n]
617 p1 = state[p1n]
618 else: # p1n external
618 else: # p1n external
619 p1 = target
619 p1 = target
620 p2 = p1n
620 p2 = p1n
621
621
622 if len(parents) == 2 and parents[1].rev() not in targetancestors:
622 if len(parents) == 2 and parents[1].rev() not in targetancestors:
623 p2n = parents[1].rev()
623 p2n = parents[1].rev()
624 # interesting second parent
624 # interesting second parent
625 if p2n in state:
625 if p2n in state:
626 if p1 == target: # p1n in targetancestors or external
626 if p1 == target: # p1n in targetancestors or external
627 p1 = state[p2n]
627 p1 = state[p2n]
628 elif state[p2n] == revignored:
628 elif state[p2n] == revignored:
629 p2 = nearestrebased(repo, p2n, state)
629 p2 = nearestrebased(repo, p2n, state)
630 if p2 is None:
630 if p2 is None:
631 # no ancestors rebased yet, detach
631 # no ancestors rebased yet, detach
632 p2 = target
632 p2 = target
633 else:
633 else:
634 p2 = state[p2n]
634 p2 = state[p2n]
635 else: # p2n external
635 else: # p2n external
636 if p2 != nullrev: # p1n external too => rev is a merged revision
636 if p2 != nullrev: # p1n external too => rev is a merged revision
637 raise util.Abort(_('cannot use revision %d as base, result '
637 raise util.Abort(_('cannot use revision %d as base, result '
638 'would have 3 parents') % rev)
638 'would have 3 parents') % rev)
639 p2 = p2n
639 p2 = p2n
640 repo.ui.debug(" future parents are %d and %d\n" %
640 repo.ui.debug(" future parents are %d and %d\n" %
641 (repo[p1].rev(), repo[p2].rev()))
641 (repo[p1].rev(), repo[p2].rev()))
642
642
643 if rev == min(state):
643 if rev == min(state):
644 # Case (1) initial changeset of a non-detaching rebase.
644 # Case (1) initial changeset of a non-detaching rebase.
645 # Let the merge mechanism find the base itself.
645 # Let the merge mechanism find the base itself.
646 base = None
646 base = None
647 elif not repo[rev].p2():
647 elif not repo[rev].p2():
648 # Case (2) detaching the node with a single parent, use this parent
648 # Case (2) detaching the node with a single parent, use this parent
649 base = repo[rev].p1().rev()
649 base = repo[rev].p1().rev()
650 else:
650 else:
651 # Assuming there is a p1, this is the case where there also is a p2.
651 # Assuming there is a p1, this is the case where there also is a p2.
652 # We are thus rebasing a merge and need to pick the right merge base.
652 # We are thus rebasing a merge and need to pick the right merge base.
653 #
653 #
654 # Imagine we have:
654 # Imagine we have:
655 # - M: current rebase revision in this step
655 # - M: current rebase revision in this step
656 # - A: one parent of M
656 # - A: one parent of M
657 # - B: other parent of M
657 # - B: other parent of M
658 # - D: destination of this merge step (p1 var)
658 # - D: destination of this merge step (p1 var)
659 #
659 #
660 # Consider the case where D is a descendant of A or B and the other is
660 # Consider the case where D is a descendant of A or B and the other is
661 # 'outside'. In this case, the right merge base is the D ancestor.
661 # 'outside'. In this case, the right merge base is the D ancestor.
662 #
662 #
663 # An informal proof, assuming A is 'outside' and B is the D ancestor:
663 # An informal proof, assuming A is 'outside' and B is the D ancestor:
664 #
664 #
665 # If we pick B as the base, the merge involves:
665 # If we pick B as the base, the merge involves:
666 # - changes from B to M (actual changeset payload)
666 # - changes from B to M (actual changeset payload)
667 # - changes from B to D (induced by rebase, as D is a rebased
667 # - changes from B to D (induced by rebase, as D is a rebased
668 # version of B)
668 # version of B)
669 # Which exactly represent the rebase operation.
669 # Which exactly represent the rebase operation.
670 #
670 #
671 # If we pick A as the base, the merge involves:
671 # If we pick A as the base, the merge involves:
672 # - changes from A to M (actual changeset payload)
672 # - changes from A to M (actual changeset payload)
673 # - changes from A to D (which include changes between unrelated A and B
673 # - changes from A to D (which include changes between unrelated A and B
674 # plus changes induced by rebase)
674 # plus changes induced by rebase)
675 # Which does not represent anything sensible and creates a lot of
675 # Which does not represent anything sensible and creates a lot of
676 # conflicts. A is thus not the right choice - B is.
676 # conflicts. A is thus not the right choice - B is.
677 #
677 #
678 # Note: The base found in this 'proof' is only correct in the specified
678 # Note: The base found in this 'proof' is only correct in the specified
679 # case. This base does not make sense if D is not a descendant of A or B
679 # case. This base does not make sense if D is not a descendant of A or B
680 # or if the other parent is not 'outside' (especially not if the other
680 # or if the other parent is not 'outside' (especially not if the other
681 # parent has been rebased). The current implementation does not
681 # parent has been rebased). The current implementation does not
682 # make it feasible to consider different cases separately. In these
682 # make it feasible to consider different cases separately. In these
683 # other cases we currently just leave it to the user to correctly
683 # other cases we currently just leave it to the user to correctly
684 # resolve an impossible merge using a wrong ancestor.
684 # resolve an impossible merge using a wrong ancestor.
685 for p in repo[rev].parents():
685 for p in repo[rev].parents():
686 if state.get(p.rev()) == p1:
686 if state.get(p.rev()) == p1:
687 base = p.rev()
687 base = p.rev()
688 break
688 break
689 else: # fallback when base not found
689 else: # fallback when base not found
690 base = None
690 base = None
691
691
692 # Raise because this function is called wrong (see issue 4106)
692 # Raise because this function is called wrong (see issue 4106)
693 raise AssertionError('no base found to rebase on '
693 raise AssertionError('no base found to rebase on '
694 '(defineparents called wrong)')
694 '(defineparents called wrong)')
695 return p1, p2, base
695 return p1, p2, base
696
696
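# Editorial note (not part of the original file): a minimal, self-contained
# sketch of the merge-base choice described in the comment above. It assumes
# toy integer revisions and a plain dict for `state`; names such as
# `choosebase` are hypothetical and only illustrate the same "pick the parent
# whose rebased version is the destination" rule used by defineparents().

def choosebase(parents, state, p1, rootrev, minstate):
    """Return the merge base for a rebased revision, or None (toy version).

    parents  -- revision numbers of the revision's parents (toy ints)
    state    -- maps original rev -> rebased rev (toy dict)
    p1       -- destination chosen for the rebased revision
    """
    if rootrev == minstate:
        return None              # case (1): let merge find the base itself
    if len(parents) < 2:
        return parents[0]        # case (2): single parent, detach from it
    for p in parents:            # case (3): rebasing a merge, prefer the
        if state.get(p) == p1:   # parent whose rebased version is the
            return p             # destination (B in the informal proof above)
    return None                  # sketch fallback; the real code asserts here

# Example: M has parents A=3 (outside) and B=4, B was rebased to D=7 == p1,
# so B is selected as the base.
assert choosebase([3, 4], {4: 7}, p1=7, rootrev=5, minstate=2) == 4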
697 def isagitpatch(repo, patchname):
697 def isagitpatch(repo, patchname):
698 'Return true if the given patch is in git format'
698 'Return true if the given patch is in git format'
699 mqpatch = os.path.join(repo.mq.path, patchname)
699 mqpatch = os.path.join(repo.mq.path, patchname)
700 for line in patch.linereader(file(mqpatch, 'rb')):
700 for line in patch.linereader(file(mqpatch, 'rb')):
701 if line.startswith('diff --git'):
701 if line.startswith('diff --git'):
702 return True
702 return True
703 return False
703 return False
704
704
705 def updatemq(repo, state, skipped, **opts):
705 def updatemq(repo, state, skipped, **opts):
706 'Update rebased mq patches - finalize and then import them'
706 'Update rebased mq patches - finalize and then import them'
707 mqrebase = {}
707 mqrebase = {}
708 mq = repo.mq
708 mq = repo.mq
709 original_series = mq.fullseries[:]
709 original_series = mq.fullseries[:]
710 skippedpatches = set()
710 skippedpatches = set()
711
711
712 for p in mq.applied:
712 for p in mq.applied:
713 rev = repo[p.node].rev()
713 rev = repo[p.node].rev()
714 if rev in state:
714 if rev in state:
715 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
715 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
716 (rev, p.name))
716 (rev, p.name))
717 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
717 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
718 else:
718 else:
719 # Applied but not rebased, not sure this should happen
719 # Applied but not rebased, not sure this should happen
720 skippedpatches.add(p.name)
720 skippedpatches.add(p.name)
721
721
722 if mqrebase:
722 if mqrebase:
723 mq.finish(repo, mqrebase.keys())
723 mq.finish(repo, mqrebase.keys())
724
724
725 # We must start import from the newest revision
725 # We must start import from the newest revision
726 for rev in sorted(mqrebase, reverse=True):
726 for rev in sorted(mqrebase, reverse=True):
727 if rev not in skipped:
727 if rev not in skipped:
728 name, isgit = mqrebase[rev]
728 name, isgit = mqrebase[rev]
729 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
729 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
730 (name, state[rev], repo[state[rev]]))
730 (name, state[rev], repo[state[rev]]))
731 mq.qimport(repo, (), patchname=name, git=isgit,
731 mq.qimport(repo, (), patchname=name, git=isgit,
732 rev=[str(state[rev])])
732 rev=[str(state[rev])])
733 else:
733 else:
734 # Rebased and skipped
734 # Rebased and skipped
735 skippedpatches.add(mqrebase[rev][0])
735 skippedpatches.add(mqrebase[rev][0])
736
736
737 # Patches were either applied and rebased and imported in
737 # Patches were either applied and rebased and imported in
738 # order, applied and removed or unapplied. Discard the removed
738 # order, applied and removed or unapplied. Discard the removed
739 # ones while preserving the original series order and guards.
739 # ones while preserving the original series order and guards.
740 newseries = [s for s in original_series
740 newseries = [s for s in original_series
741 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
741 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
742 mq.fullseries[:] = newseries
742 mq.fullseries[:] = newseries
743 mq.seriesdirty = True
743 mq.seriesdirty = True
744 mq.savedirty()
744 mq.savedirty()
745
745
746 def updatebookmarks(repo, targetnode, nstate, originalbookmarks):
746 def updatebookmarks(repo, targetnode, nstate, originalbookmarks):
747 'Move bookmarks to their correct changesets, and delete divergent ones'
747 'Move bookmarks to their correct changesets, and delete divergent ones'
748 marks = repo._bookmarks
748 marks = repo._bookmarks
749 for k, v in originalbookmarks.iteritems():
749 for k, v in originalbookmarks.iteritems():
750 if v in nstate:
750 if v in nstate:
751 # update the bookmarks for revs that have moved
751 # update the bookmarks for revs that have moved
752 marks[k] = nstate[v]
752 marks[k] = nstate[v]
753 bookmarks.deletedivergent(repo, [targetnode], k)
753 bookmarks.deletedivergent(repo, [targetnode], k)
754
754
755 marks.write()
755 marks.write()
756
756
757 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
757 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
758 external, activebookmark):
758 external, activebookmark):
759 'Store the current status to allow recovery'
759 'Store the current status to allow recovery'
760 f = repo.vfs("rebasestate", "w")
760 f = repo.vfs("rebasestate", "w")
761 f.write(repo[originalwd].hex() + '\n')
761 f.write(repo[originalwd].hex() + '\n')
762 f.write(repo[target].hex() + '\n')
762 f.write(repo[target].hex() + '\n')
763 f.write(repo[external].hex() + '\n')
763 f.write(repo[external].hex() + '\n')
764 f.write('%d\n' % int(collapse))
764 f.write('%d\n' % int(collapse))
765 f.write('%d\n' % int(keep))
765 f.write('%d\n' % int(keep))
766 f.write('%d\n' % int(keepbranches))
766 f.write('%d\n' % int(keepbranches))
767 f.write('%s\n' % (activebookmark or ''))
767 f.write('%s\n' % (activebookmark or ''))
768 for d, v in state.iteritems():
768 for d, v in state.iteritems():
769 oldrev = repo[d].hex()
769 oldrev = repo[d].hex()
770 if v >= 0:
770 if v >= 0:
771 newrev = repo[v].hex()
771 newrev = repo[v].hex()
772 elif v == revtodo:
772 elif v == revtodo:
773 # To maintain format compatibility, we have to use nullid.
773 # To maintain format compatibility, we have to use nullid.
774 # Please do remove this special case when upgrading the format.
774 # Please do remove this special case when upgrading the format.
775 newrev = hex(nullid)
775 newrev = hex(nullid)
776 else:
776 else:
777 newrev = v
777 newrev = v
778 f.write("%s:%s\n" % (oldrev, newrev))
778 f.write("%s:%s\n" % (oldrev, newrev))
779 f.close()
779 f.close()
780 repo.ui.debug('rebase status stored\n')
780 repo.ui.debug('rebase status stored\n')
781
781
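# Editorial note (not part of the original file): a minimal stand-alone
# parser for the layout written by storestatus() above, shown only to make
# the file format explicit. `parserebasestate` is a hypothetical helper, not
# a Mercurial API; it works on the raw text of .hg/rebasestate.

def parserebasestate(text):
    """Parse rebasestate text into a dict of fields (illustrative only)."""
    lines = text.splitlines()
    info = {
        'originalwd': lines[0],          # hex node of the original working dir
        'target': lines[1],              # hex node of the rebase target
        'external': lines[2],            # hex node of the external parent
        'collapse': bool(int(lines[3])),
        'keep': bool(int(lines[4])),
        'keepbranches': bool(int(lines[5])),
        'activebookmark': lines[6] or None,
        'state': {},
    }
    for line in lines[7:]:               # remaining lines are oldrev:newrev
        oldrev, newrev = line.split(':')
        info['state'][oldrev] = newrev   # newrev may be nullid for "to do"
    return info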
782 def clearstatus(repo):
782 def clearstatus(repo):
783 'Remove the status files'
783 'Remove the status files'
784 _clearrebasesetvisibiliy(repo)
784 _clearrebasesetvisibiliy(repo)
785 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
785 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
786
786
787 def restorestatus(repo):
787 def restorestatus(repo):
788 'Restore a previously stored status'
788 'Restore a previously stored status'
789 try:
789 try:
790 keepbranches = None
790 keepbranches = None
791 target = None
791 target = None
792 collapse = False
792 collapse = False
793 external = nullrev
793 external = nullrev
794 activebookmark = None
794 activebookmark = None
795 state = {}
795 state = {}
796 f = repo.vfs("rebasestate")
796 f = repo.vfs("rebasestate")
797 for i, l in enumerate(f.read().splitlines()):
797 for i, l in enumerate(f.read().splitlines()):
798 if i == 0:
798 if i == 0:
799 originalwd = repo[l].rev()
799 originalwd = repo[l].rev()
800 elif i == 1:
800 elif i == 1:
801 target = repo[l].rev()
801 target = repo[l].rev()
802 elif i == 2:
802 elif i == 2:
803 external = repo[l].rev()
803 external = repo[l].rev()
804 elif i == 3:
804 elif i == 3:
805 collapse = bool(int(l))
805 collapse = bool(int(l))
806 elif i == 4:
806 elif i == 4:
807 keep = bool(int(l))
807 keep = bool(int(l))
808 elif i == 5:
808 elif i == 5:
809 keepbranches = bool(int(l))
809 keepbranches = bool(int(l))
810 elif i == 6 and not (len(l) == 81 and ':' in l):
810 elif i == 6 and not (len(l) == 81 and ':' in l):
811 # line 6 is a recent addition, so for backwards compatibility
811 # line 6 is a recent addition, so for backwards compatibility
812 # check that the line doesn't look like the oldrev:newrev lines
812 # check that the line doesn't look like the oldrev:newrev lines
813 activebookmark = l
813 activebookmark = l
814 else:
814 else:
815 oldrev, newrev = l.split(':')
815 oldrev, newrev = l.split(':')
816 if newrev in (str(nullmerge), str(revignored)):
816 if newrev in (str(nullmerge), str(revignored)):
817 state[repo[oldrev].rev()] = int(newrev)
817 state[repo[oldrev].rev()] = int(newrev)
818 elif newrev == nullid:
818 elif newrev == nullid:
819 state[repo[oldrev].rev()] = revtodo
819 state[repo[oldrev].rev()] = revtodo
820 # Legacy compat special case
820 # Legacy compat special case
821 else:
821 else:
822 state[repo[oldrev].rev()] = repo[newrev].rev()
822 state[repo[oldrev].rev()] = repo[newrev].rev()
823
823
824 if keepbranches is None:
824 if keepbranches is None:
825 raise util.Abort(_('.hg/rebasestate is incomplete'))
825 raise util.Abort(_('.hg/rebasestate is incomplete'))
826
826
827 skipped = set()
827 skipped = set()
828 # recompute the set of skipped revs
828 # recompute the set of skipped revs
829 if not collapse:
829 if not collapse:
830 seen = set([target])
830 seen = set([target])
831 for old, new in sorted(state.items()):
831 for old, new in sorted(state.items()):
832 if new != revtodo and new in seen:
832 if new != revtodo and new in seen:
833 skipped.add(old)
833 skipped.add(old)
834 seen.add(new)
834 seen.add(new)
835 repo.ui.debug('computed skipped revs: %s\n' %
835 repo.ui.debug('computed skipped revs: %s\n' %
836 (' '.join(str(r) for r in sorted(skipped)) or None))
836 (' '.join(str(r) for r in sorted(skipped)) or None))
837 repo.ui.debug('rebase status resumed\n')
837 repo.ui.debug('rebase status resumed\n')
838 _setrebasesetvisibility(repo, state.keys())
838 _setrebasesetvisibility(repo, state.keys())
839 return (originalwd, target, state, skipped,
839 return (originalwd, target, state, skipped,
840 collapse, keep, keepbranches, external, activebookmark)
840 collapse, keep, keepbranches, external, activebookmark)
841 except IOError, err:
841 except IOError, err:
842 if err.errno != errno.ENOENT:
842 if err.errno != errno.ENOENT:
843 raise
843 raise
844 raise util.Abort(_('no rebase in progress'))
844 raise util.Abort(_('no rebase in progress'))
845
845
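# Editorial note (not part of the original file): the backwards-compatibility
# test above (len(l) == 81 and ':' in l) works because an oldrev:newrev line
# is two 40-character hex nodeids joined by ':'. A tiny illustrative check,
# slightly stricter than the real one:

def lookslikestatepair(line):
    """Return True if line has the shape <40-hex>:<40-hex> (illustrative)."""
    return len(line) == 81 and line[40] == ':'

assert lookslikestatepair('a' * 40 + ':' + 'b' * 40)
assert not lookslikestatepair('my-bookmark-name')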
846 def needupdate(repo, state):
846 def needupdate(repo, state):
847 '''check whether we should `update --clean` away from a merge, or if
847 '''check whether we should `update --clean` away from a merge, or if
848 somehow the working dir got forcibly updated, e.g. by older hg'''
848 somehow the working dir got forcibly updated, e.g. by older hg'''
849 parents = [p.rev() for p in repo.parents()]
849 parents = [p.rev() for p in repo.parents()]
850
850
851 # Are we in a merge state at all?
851 # Are we in a merge state at all?
852 if len(parents) < 2:
852 if len(parents) < 2:
853 return False
853 return False
854
854
855 # We should be standing on the first as-of-yet unrebased commit.
855 # We should be standing on the first as-of-yet unrebased commit.
856 firstunrebased = min([old for old, new in state.iteritems()
856 firstunrebased = min([old for old, new in state.iteritems()
857 if new == nullrev])
857 if new == nullrev])
858 if firstunrebased in parents:
858 if firstunrebased in parents:
859 return True
859 return True
860
860
861 return False
861 return False
862
862
863 def abort(repo, originalwd, target, state, activebookmark=None):
863 def abort(repo, originalwd, target, state, activebookmark=None):
864 '''Restore the repository to its original state. Additional args:
864 '''Restore the repository to its original state. Additional args:
865
865
866 activebookmark: the name of the bookmark that should be active after the
866 activebookmark: the name of the bookmark that should be active after the
867 restore'''
867 restore'''
868 dstates = [s for s in state.values() if s >= 0]
868 dstates = [s for s in state.values() if s >= 0]
869 immutable = [d for d in dstates if not repo[d].mutable()]
869 immutable = [d for d in dstates if not repo[d].mutable()]
870 cleanup = True
870 cleanup = True
871 if immutable:
871 if immutable:
872 repo.ui.warn(_("warning: can't clean up immutable changesets %s\n")
872 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
873 % ', '.join(str(repo[r]) for r in immutable),
873 % ', '.join(str(repo[r]) for r in immutable),
874 hint=_('see "hg help phases" for details'))
874 hint=_('see "hg help phases" for details'))
875 cleanup = False
875 cleanup = False
876
876
877 descendants = set()
877 descendants = set()
878 if dstates:
878 if dstates:
879 descendants = set(repo.changelog.descendants(dstates))
879 descendants = set(repo.changelog.descendants(dstates))
880 if descendants - set(dstates):
880 if descendants - set(dstates):
881 repo.ui.warn(_("warning: new changesets detected on target branch, "
881 repo.ui.warn(_("warning: new changesets detected on target branch, "
882 "can't strip\n"))
882 "can't strip\n"))
883 cleanup = False
883 cleanup = False
884
884
885 if cleanup:
885 if cleanup:
886 # Update away from the rebase if necessary
886 # Update away from the rebase if necessary
887 if needupdate(repo, state):
887 if needupdate(repo, state):
888 merge.update(repo, originalwd, False, True, False)
888 merge.update(repo, originalwd, False, True, False)
889
889
890 # Strip from the first rebased revision
890 # Strip from the first rebased revision
891 rebased = filter(lambda x: x >= 0 and x != target, state.values())
891 rebased = filter(lambda x: x >= 0 and x != target, state.values())
892 if rebased:
892 if rebased:
893 strippoints = [c.node() for c in repo.set('roots(%ld)', rebased)]
893 strippoints = [c.node() for c in repo.set('roots(%ld)', rebased)]
894 # no backup of rebased cset versions needed
894 # no backup of rebased cset versions needed
895 repair.strip(repo.ui, repo, strippoints)
895 repair.strip(repo.ui, repo, strippoints)
896
896
897 if activebookmark and activebookmark in repo._bookmarks:
897 if activebookmark and activebookmark in repo._bookmarks:
898 bookmarks.activate(repo, activebookmark)
898 bookmarks.activate(repo, activebookmark)
899
899
900 clearstatus(repo)
900 clearstatus(repo)
901 repo.ui.warn(_('rebase aborted\n'))
901 repo.ui.warn(_('rebase aborted\n'))
902 return 0
902 return 0
903
903
904 def buildstate(repo, dest, rebaseset, collapse):
904 def buildstate(repo, dest, rebaseset, collapse):
905 '''Define which revisions are going to be rebased and where
905 '''Define which revisions are going to be rebased and where
906
906
907 repo: repo
907 repo: repo
908 dest: context
908 dest: context
909 rebaseset: set of rev
909 rebaseset: set of rev
910 '''
910 '''
911 _setrebasesetvisibility(repo, rebaseset)
911 _setrebasesetvisibility(repo, rebaseset)
912
912
913 # This check isn't strictly necessary, since mq detects commits over an
913 # This check isn't strictly necessary, since mq detects commits over an
914 # applied patch. But it prevents messing up the working directory when
914 # applied patch. But it prevents messing up the working directory when
915 # a partially completed rebase is blocked by mq.
915 # a partially completed rebase is blocked by mq.
916 if 'qtip' in repo.tags() and (dest.node() in
916 if 'qtip' in repo.tags() and (dest.node() in
917 [s.node for s in repo.mq.applied]):
917 [s.node for s in repo.mq.applied]):
918 raise util.Abort(_('cannot rebase onto an applied mq patch'))
918 raise util.Abort(_('cannot rebase onto an applied mq patch'))
919
919
920 roots = list(repo.set('roots(%ld)', rebaseset))
920 roots = list(repo.set('roots(%ld)', rebaseset))
921 if not roots:
921 if not roots:
922 raise util.Abort(_('no matching revisions'))
922 raise util.Abort(_('no matching revisions'))
923 roots.sort()
923 roots.sort()
924 state = {}
924 state = {}
925 detachset = set()
925 detachset = set()
926 for root in roots:
926 for root in roots:
927 commonbase = root.ancestor(dest)
927 commonbase = root.ancestor(dest)
928 if commonbase == root:
928 if commonbase == root:
929 raise util.Abort(_('source is ancestor of destination'))
929 raise util.Abort(_('source is ancestor of destination'))
930 if commonbase == dest:
930 if commonbase == dest:
931 samebranch = root.branch() == dest.branch()
931 samebranch = root.branch() == dest.branch()
932 if not collapse and samebranch and root in dest.children():
932 if not collapse and samebranch and root in dest.children():
933 repo.ui.debug('source is a child of destination\n')
933 repo.ui.debug('source is a child of destination\n')
934 return None
934 return None
935
935
936 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
936 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
937 state.update(dict.fromkeys(rebaseset, revtodo))
937 state.update(dict.fromkeys(rebaseset, revtodo))
938 # Rebase tries to turn <dest> into a parent of <root> while
938 # Rebase tries to turn <dest> into a parent of <root> while
939 # preserving the number of parents of rebased changesets:
939 # preserving the number of parents of rebased changesets:
940 #
940 #
941 # - A changeset with a single parent will always be rebased as a
941 # - A changeset with a single parent will always be rebased as a
942 # changeset with a single parent.
942 # changeset with a single parent.
943 #
943 #
944 # - A merge will be rebased as a merge unless its parents are both
944 # - A merge will be rebased as a merge unless its parents are both
945 # ancestors of <dest> or are themselves in the rebased set and
945 # ancestors of <dest> or are themselves in the rebased set and
946 # pruned while rebased.
946 # pruned while rebased.
947 #
947 #
948 # If one parent of <root> is an ancestor of <dest>, the rebased
948 # If one parent of <root> is an ancestor of <dest>, the rebased
949 # version of this parent will be <dest>. This is always true with
949 # version of this parent will be <dest>. This is always true with
950 # --base option.
950 # --base option.
951 #
951 #
952 # Otherwise, we need to *replace* the original parents with
952 # Otherwise, we need to *replace* the original parents with
953 # <dest>. This "detaches" the rebased set from its former location
953 # <dest>. This "detaches" the rebased set from its former location
954 # and rebases it onto <dest>. Changes introduced by ancestors of
954 # and rebases it onto <dest>. Changes introduced by ancestors of
955 # <root> not common with <dest> (the detachset, marked as
955 # <root> not common with <dest> (the detachset, marked as
956 # nullmerge) are "removed" from the rebased changesets.
956 # nullmerge) are "removed" from the rebased changesets.
957 #
957 #
958 # - If <root> has a single parent, set it to <dest>.
958 # - If <root> has a single parent, set it to <dest>.
959 #
959 #
960 # - If <root> is a merge, we cannot decide which parent to
960 # - If <root> is a merge, we cannot decide which parent to
961 # replace, the rebase operation is not clearly defined.
961 # replace, the rebase operation is not clearly defined.
962 #
962 #
963 # The table below sums up this behavior:
963 # The table below sums up this behavior:
964 #
964 #
965 # +------------------+----------------------+-------------------------+
965 # +------------------+----------------------+-------------------------+
966 # | | one parent | merge |
966 # | | one parent | merge |
967 # +------------------+----------------------+-------------------------+
967 # +------------------+----------------------+-------------------------+
968 # | parent in | new parent is <dest> | parents in ::<dest> are |
968 # | parent in | new parent is <dest> | parents in ::<dest> are |
969 # | ::<dest> | | remapped to <dest> |
969 # | ::<dest> | | remapped to <dest> |
970 # +------------------+----------------------+-------------------------+
970 # +------------------+----------------------+-------------------------+
971 # | unrelated source | new parent is <dest> | ambiguous, abort |
971 # | unrelated source | new parent is <dest> | ambiguous, abort |
972 # +------------------+----------------------+-------------------------+
972 # +------------------+----------------------+-------------------------+
973 #
973 #
974 # The actual abort is handled by `defineparents`
974 # The actual abort is handled by `defineparents`
975 if len(root.parents()) <= 1:
975 if len(root.parents()) <= 1:
976 # ancestors of <root> not ancestors of <dest>
976 # ancestors of <root> not ancestors of <dest>
977 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
977 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
978 [root.rev()]))
978 [root.rev()]))
979 for r in detachset:
979 for r in detachset:
980 if r not in state:
980 if r not in state:
981 state[r] = nullmerge
981 state[r] = nullmerge
982 if len(roots) > 1:
982 if len(roots) > 1:
983 # If we have multiple roots, we may have "holes" in the rebase set.
983 # If we have multiple roots, we may have "holes" in the rebase set.
984 # Rebase roots that descend from those "holes" should not be detached as
984 # Rebase roots that descend from those "holes" should not be detached as
985 # other roots are. We use the special `revignored` to inform rebase that
985 # other roots are. We use the special `revignored` to inform rebase that
986 # the revision should be ignored but that `defineparents` should search
986 # the revision should be ignored but that `defineparents` should search
987 # for a rebase destination that makes sense regarding the rebased topology.
987 # for a rebase destination that makes sense regarding the rebased topology.
988 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
988 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
989 for ignored in set(rebasedomain) - set(rebaseset):
989 for ignored in set(rebasedomain) - set(rebaseset):
990 state[ignored] = revignored
990 state[ignored] = revignored
991 return repo['.'].rev(), dest.rev(), state
991 return repo['.'].rev(), dest.rev(), state
992
992
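# Editorial note (not part of the original file): a toy illustration of the
# mapping buildstate() returns. The sentinel values below are placeholders
# standing in for this module's revtodo/nullmerge/revignored constants
# (defined earlier in the file, outside the excerpt shown here).

REVTODO, NULLMERGE, REVIGNORED = -1, -2, -3   # placeholder sentinels

# Rebasing revisions {5, 6} onto a destination, where 4 is an ancestor of the
# root that is not an ancestor of the destination (detached), and 7 sits in a
# "hole" between two roots of a multi-root rebase set:
examplestate = {
    5: REVTODO,      # member of the rebase set, not processed yet
    6: REVTODO,      # member of the rebase set, not processed yet
    4: NULLMERGE,    # detachset: its changes are dropped from the result
    7: REVIGNORED,   # hole between roots: skipped, but parents are remapped
}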
993 def clearrebased(ui, repo, state, skipped, collapsedas=None):
993 def clearrebased(ui, repo, state, skipped, collapsedas=None):
994 """dispose of rebased revision at the end of the rebase
994 """dispose of rebased revision at the end of the rebase
995
995
996 If `collapsedas` is not None, the rebase was a collapse whose result is the
996 If `collapsedas` is not None, the rebase was a collapse whose result is the
997 `collapsedas` node."""
997 `collapsedas` node."""
998 if obsolete.isenabled(repo, obsolete.createmarkersopt):
998 if obsolete.isenabled(repo, obsolete.createmarkersopt):
999 markers = []
999 markers = []
1000 for rev, newrev in sorted(state.items()):
1000 for rev, newrev in sorted(state.items()):
1001 if newrev >= 0:
1001 if newrev >= 0:
1002 if rev in skipped:
1002 if rev in skipped:
1003 succs = ()
1003 succs = ()
1004 elif collapsedas is not None:
1004 elif collapsedas is not None:
1005 succs = (repo[collapsedas],)
1005 succs = (repo[collapsedas],)
1006 else:
1006 else:
1007 succs = (repo[newrev],)
1007 succs = (repo[newrev],)
1008 markers.append((repo[rev], succs))
1008 markers.append((repo[rev], succs))
1009 if markers:
1009 if markers:
1010 obsolete.createmarkers(repo, markers)
1010 obsolete.createmarkers(repo, markers)
1011 else:
1011 else:
1012 rebased = [rev for rev in state if state[rev] > nullmerge]
1012 rebased = [rev for rev in state if state[rev] > nullmerge]
1013 if rebased:
1013 if rebased:
1014 stripped = []
1014 stripped = []
1015 for root in repo.set('roots(%ld)', rebased):
1015 for root in repo.set('roots(%ld)', rebased):
1016 if set(repo.changelog.descendants([root.rev()])) - set(state):
1016 if set(repo.changelog.descendants([root.rev()])) - set(state):
1017 ui.warn(_("warning: new changesets detected "
1017 ui.warn(_("warning: new changesets detected "
1018 "on source branch, not stripping\n"))
1018 "on source branch, not stripping\n"))
1019 else:
1019 else:
1020 stripped.append(root.node())
1020 stripped.append(root.node())
1021 if stripped:
1021 if stripped:
1022 # backup the old csets by default
1022 # backup the old csets by default
1023 repair.strip(ui, repo, stripped, "all")
1023 repair.strip(ui, repo, stripped, "all")
1024
1024
1025
1025
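# Editorial note (not part of the original file): a plain-Python sketch of
# how the (precursor, successors) pairs above are assembled when obsolescence
# markers are enabled, using toy integers instead of changectx objects.
# `buildmarkers` is hypothetical and only mirrors the loop in clearrebased().

def buildmarkers(state, skipped, collapsedas=None):
    markers = []
    for rev, newrev in sorted(state.items()):
        if newrev < 0:                    # nullmerge/revignored/revtodo: skip
            continue
        if rev in skipped:
            succs = ()                    # pruned: no successor recorded
        elif collapsedas is not None:
            succs = (collapsedas,)        # collapse: everything maps to one node
        else:
            succs = (newrev,)             # normal case: one-to-one rewrite
        markers.append((rev, succs))
    return markers

# Rev 2 was skipped (pruned), revs 3 and 4 were rebased to 8 and 9:
assert buildmarkers({2: 8, 3: 8, 4: 9}, skipped={2}) == [(2, ()), (3, (8,)), (4, (9,))]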
1026 def pullrebase(orig, ui, repo, *args, **opts):
1026 def pullrebase(orig, ui, repo, *args, **opts):
1027 'Call rebase after pull if the latter has been invoked with --rebase'
1027 'Call rebase after pull if the latter has been invoked with --rebase'
1028 if opts.get('rebase'):
1028 if opts.get('rebase'):
1029 if opts.get('update'):
1029 if opts.get('update'):
1030 del opts['update']
1030 del opts['update']
1031 ui.debug('--update and --rebase are not compatible, ignoring '
1031 ui.debug('--update and --rebase are not compatible, ignoring '
1032 'the update flag\n')
1032 'the update flag\n')
1033
1033
1034 movemarkfrom = repo['.'].node()
1034 movemarkfrom = repo['.'].node()
1035 revsprepull = len(repo)
1035 revsprepull = len(repo)
1036 origpostincoming = commands.postincoming
1036 origpostincoming = commands.postincoming
1037 def _dummy(*args, **kwargs):
1037 def _dummy(*args, **kwargs):
1038 pass
1038 pass
1039 commands.postincoming = _dummy
1039 commands.postincoming = _dummy
1040 try:
1040 try:
1041 orig(ui, repo, *args, **opts)
1041 orig(ui, repo, *args, **opts)
1042 finally:
1042 finally:
1043 commands.postincoming = origpostincoming
1043 commands.postincoming = origpostincoming
1044 revspostpull = len(repo)
1044 revspostpull = len(repo)
1045 if revspostpull > revsprepull:
1045 if revspostpull > revsprepull:
1046 # --rev option from pull conflicts with rebase's own --rev,
1046 # --rev option from pull conflicts with rebase's own --rev,
1047 # so drop it
1047 # so drop it
1048 if 'rev' in opts:
1048 if 'rev' in opts:
1049 del opts['rev']
1049 del opts['rev']
1050 # positional argument from pull conflicts with rebase's own
1050 # positional argument from pull conflicts with rebase's own
1051 # --source.
1051 # --source.
1052 if 'source' in opts:
1052 if 'source' in opts:
1053 del opts['source']
1053 del opts['source']
1054 rebase(ui, repo, **opts)
1054 rebase(ui, repo, **opts)
1055 branch = repo[None].branch()
1055 branch = repo[None].branch()
1056 dest = repo[branch].rev()
1056 dest = repo[branch].rev()
1057 if dest != repo['.'].rev():
1057 if dest != repo['.'].rev():
1058 # there was nothing to rebase, so we force an update
1058 # there was nothing to rebase, so we force an update
1059 hg.update(repo, dest)
1059 hg.update(repo, dest)
1060 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
1060 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
1061 ui.status(_("updating bookmark %s\n")
1061 ui.status(_("updating bookmark %s\n")
1062 % repo._activebookmark)
1062 % repo._activebookmark)
1063 else:
1063 else:
1064 if opts.get('tool'):
1064 if opts.get('tool'):
1065 raise util.Abort(_('--tool can only be used with --rebase'))
1065 raise util.Abort(_('--tool can only be used with --rebase'))
1066 orig(ui, repo, *args, **opts)
1066 orig(ui, repo, *args, **opts)
1067
1067
1068 def _setrebasesetvisibility(repo, revs):
1068 def _setrebasesetvisibility(repo, revs):
1069 """store the currently rebased set on the repo object
1069 """store the currently rebased set on the repo object
1070
1070
1071 This is used by another function to prevent rebased revisions from becoming
1071 This is used by another function to prevent rebased revisions from becoming
1072 hidden (see issue4505)"""
1072 hidden (see issue4505)"""
1073 repo = repo.unfiltered()
1073 repo = repo.unfiltered()
1074 revs = set(revs)
1074 revs = set(revs)
1075 repo._rebaseset = revs
1075 repo._rebaseset = revs
1076 # invalidate cache if visibility changes
1076 # invalidate cache if visibility changes
1077 hiddens = repo.filteredrevcache.get('visible', set())
1077 hiddens = repo.filteredrevcache.get('visible', set())
1078 if revs & hiddens:
1078 if revs & hiddens:
1079 repo.invalidatevolatilesets()
1079 repo.invalidatevolatilesets()
1080
1080
1081 def _clearrebasesetvisibiliy(repo):
1081 def _clearrebasesetvisibiliy(repo):
1082 """remove rebaseset data from the repo"""
1082 """remove rebaseset data from the repo"""
1083 repo = repo.unfiltered()
1083 repo = repo.unfiltered()
1084 if '_rebaseset' in vars(repo):
1084 if '_rebaseset' in vars(repo):
1085 del repo._rebaseset
1085 del repo._rebaseset
1086
1086
1087 def _rebasedvisible(orig, repo):
1087 def _rebasedvisible(orig, repo):
1088 """ensure rebased revs stay visible (see issue4505)"""
1088 """ensure rebased revs stay visible (see issue4505)"""
1089 blockers = orig(repo)
1089 blockers = orig(repo)
1090 blockers.update(getattr(repo, '_rebaseset', ()))
1090 blockers.update(getattr(repo, '_rebaseset', ()))
1091 return blockers
1091 return blockers
1092
1092
1093 def summaryhook(ui, repo):
1093 def summaryhook(ui, repo):
1094 if not os.path.exists(repo.join('rebasestate')):
1094 if not os.path.exists(repo.join('rebasestate')):
1095 return
1095 return
1096 try:
1096 try:
1097 state = restorestatus(repo)[2]
1097 state = restorestatus(repo)[2]
1098 except error.RepoLookupError:
1098 except error.RepoLookupError:
1099 # i18n: column positioning for "hg summary"
1099 # i18n: column positioning for "hg summary"
1100 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1100 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1101 ui.write(msg)
1101 ui.write(msg)
1102 return
1102 return
1103 numrebased = len([i for i in state.itervalues() if i >= 0])
1103 numrebased = len([i for i in state.itervalues() if i >= 0])
1104 # i18n: column positioning for "hg summary"
1104 # i18n: column positioning for "hg summary"
1105 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1105 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1106 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1106 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1107 ui.label(_('%d remaining'), 'rebase.remaining') %
1107 ui.label(_('%d remaining'), 'rebase.remaining') %
1108 (len(state) - numrebased)))
1108 (len(state) - numrebased)))
1109
1109
1110 def uisetup(ui):
1110 def uisetup(ui):
1111 # Replace pull with a decorator to provide --rebase option
1111 # Replace pull with a decorator to provide --rebase option
1112 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1112 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1113 entry[1].append(('', 'rebase', None,
1113 entry[1].append(('', 'rebase', None,
1114 _("rebase working directory to branch head")))
1114 _("rebase working directory to branch head")))
1115 entry[1].append(('t', 'tool', '',
1115 entry[1].append(('t', 'tool', '',
1116 _("specify merge tool for rebase")))
1116 _("specify merge tool for rebase")))
1117 cmdutil.summaryhooks.add('rebase', summaryhook)
1117 cmdutil.summaryhooks.add('rebase', summaryhook)
1118 cmdutil.unfinishedstates.append(
1118 cmdutil.unfinishedstates.append(
1119 ['rebasestate', False, False, _('rebase in progress'),
1119 ['rebasestate', False, False, _('rebase in progress'),
1120 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1120 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1121 # ensure rebased rev are not hidden
1121 # ensure rebased rev are not hidden
1122 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1122 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
@@ -1,1252 +1,1252
1 # obsolete.py - obsolete markers handling
1 # obsolete.py - obsolete markers handling
2 #
2 #
3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
4 # Logilab SA <contact@logilab.fr>
4 # Logilab SA <contact@logilab.fr>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 """Obsolete marker handling
9 """Obsolete marker handling
10
10
11 An obsolete marker maps an old changeset to a list of new
11 An obsolete marker maps an old changeset to a list of new
12 changesets. If the list of new changesets is empty, the old changeset
12 changesets. If the list of new changesets is empty, the old changeset
13 is said to be "killed". Otherwise, the old changeset is being
13 is said to be "killed". Otherwise, the old changeset is being
14 "replaced" by the new changesets.
14 "replaced" by the new changesets.
15
15
16 Obsolete markers can be used to record and distribute changeset graph
16 Obsolete markers can be used to record and distribute changeset graph
17 transformations performed by history rewrite operations, and help
17 transformations performed by history rewrite operations, and help
18 build new tools to reconcile conflicting rewrite actions. To
18 build new tools to reconcile conflicting rewrite actions. To
19 facilitate conflict resolution, markers include various annotations
19 facilitate conflict resolution, markers include various annotations
20 besides old and news changeset identifiers, such as creation date or
20 besides old and new changeset identifiers, such as creation date or
20 besides old and new changeset identifiers, such as creation date or
21 author name.
22
22
23 The old obsoleted changeset is called a "precursor" and possible
23 The old obsoleted changeset is called a "precursor" and possible
24 replacements are called "successors". Markers that used changeset X as
24 replacements are called "successors". Markers that used changeset X as
25 a precursor are called "successor markers of X" because they hold
25 a precursor are called "successor markers of X" because they hold
26 information about the successors of X. Markers that use changeset Y as
26 information about the successors of X. Markers that use changeset Y as
27 a successor are called "precursor markers of Y" because they hold
27 a successor are called "precursor markers of Y" because they hold
28 information about the precursors of Y.
28 information about the precursors of Y.
29
29
30 Examples:
30 Examples:
31
31
32 - When changeset A is replaced by changeset A', one marker is stored:
32 - When changeset A is replaced by changeset A', one marker is stored:
33
33
34 (A, (A',))
34 (A, (A',))
35
35
36 - When changesets A and B are folded into a new changeset C, two markers are
36 - When changesets A and B are folded into a new changeset C, two markers are
37 stored:
37 stored:
38
38
39 (A, (C,)) and (B, (C,))
39 (A, (C,)) and (B, (C,))
40
40
41 - When changeset A is simply "pruned" from the graph, a marker is created:
41 - When changeset A is simply "pruned" from the graph, a marker is created:
42
42
43 (A, ())
43 (A, ())
44
44
45 - When changeset A is split into B and C, a single marker is used:
45 - When changeset A is split into B and C, a single marker is used:
46
46
47 (A, (B, C))
47 (A, (B, C))
48
48
49 We use a single marker to distinguish the "split" case from the "divergence"
49 We use a single marker to distinguish the "split" case from the "divergence"
50 case. If two independent operations rewrite the same changeset A into A' and
50 case. If two independent operations rewrite the same changeset A into A' and
51 A'', we have an error case: divergent rewriting. We can detect it because
51 A'', we have an error case: divergent rewriting. We can detect it because
52 two markers will be created independently:
52 two markers will be created independently:
53
53
54 (A, (B,)) and (A, (C,))
54 (A, (B,)) and (A, (C,))
55
55
56 Format
56 Format
57 ------
57 ------
58
58
59 Markers are stored in an append-only file located in
59 Markers are stored in an append-only file located in
60 '.hg/store/obsstore'.
60 '.hg/store/obsstore'.
61
61
62 The file starts with a version header:
62 The file starts with a version header:
63
63
64 - 1 unsigned byte: version number, starting at zero.
64 - 1 unsigned byte: version number, starting at zero.
65
65
66 The header is followed by the markers. Marker formats depend on the version. See
66 The header is followed by the markers. Marker formats depend on the version. See
67 the comment associated with each format for details.
67 the comment associated with each format for details.
68
68
69 """
69 """
70 import struct
70 import struct
71 import util, base85, node, parsers
71 import util, base85, node, parsers
72 import phases
72 import phases
73 from i18n import _
73 from i18n import _
74
74
75 _pack = struct.pack
75 _pack = struct.pack
76 _unpack = struct.unpack
76 _unpack = struct.unpack
77 _calcsize = struct.calcsize
77 _calcsize = struct.calcsize
78 propertycache = util.propertycache
78 propertycache = util.propertycache
79
79
80 # the obsolete feature is not mature enough to be enabled by default.
80 # the obsolete feature is not mature enough to be enabled by default.
81 # you have to rely on a third party extension to enable this.
81 # you have to rely on a third party extension to enable this.
82 _enabled = False
82 _enabled = False
83
83
84 # Options for obsolescence
84 # Options for obsolescence
85 createmarkersopt = 'createmarkers'
85 createmarkersopt = 'createmarkers'
86 allowunstableopt = 'allowunstable'
86 allowunstableopt = 'allowunstable'
87 exchangeopt = 'exchange'
87 exchangeopt = 'exchange'
88
88
89 ### obsolescence marker flag
89 ### obsolescence marker flag
90
90
91 ## bumpedfix flag
91 ## bumpedfix flag
92 #
92 #
93 # When a changeset A' succeeds a changeset A which became public, we call A'
93 # When a changeset A' succeeds a changeset A which became public, we call A'
94 # "bumped" because it is a successor of a public changeset
94 # "bumped" because it is a successor of a public changeset
95 #
95 #
96 # o A' (bumped)
96 # o A' (bumped)
97 # |`:
97 # |`:
98 # | o A
98 # | o A
99 # |/
99 # |/
100 # o Z
100 # o Z
101 #
101 #
102 # The way to solve this situation is to create a new changeset Ad as a child
102 # The way to solve this situation is to create a new changeset Ad as a child
103 # of A. This changeset has the same content as A'. So the diff from A to A'
103 # of A. This changeset has the same content as A'. So the diff from A to A'
104 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
104 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
105 #
105 #
106 # o Ad
106 # o Ad
107 # |`:
107 # |`:
108 # | x A'
108 # | x A'
109 # |'|
109 # |'|
110 # o | A
110 # o | A
111 # |/
111 # |/
112 # o Z
112 # o Z
113 #
113 #
114 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
114 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
115 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
115 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
116 # This flag means that the successors express the changes between the public and
116 # This flag means that the successors express the changes between the public and
117 # bumped version and fix the situation, breaking the transitivity of
117 # bumped version and fix the situation, breaking the transitivity of
118 # "bumped" here.
118 # "bumped" here.
119 bumpedfix = 1
119 bumpedfix = 1
120 usingsha256 = 2
120 usingsha256 = 2
121
121
122 ## Parsing and writing of version "0"
122 ## Parsing and writing of version "0"
123 #
123 #
124 # The header is followed by the markers. Each marker is made of:
124 # The header is followed by the markers. Each marker is made of:
125 #
125 #
126 # - 1 uint8 : number of new changesets "N", can be zero.
126 # - 1 uint8 : number of new changesets "N", can be zero.
127 #
127 #
128 # - 1 uint32: metadata size "M" in bytes.
128 # - 1 uint32: metadata size "M" in bytes.
129 #
129 #
130 # - 1 byte: a bit field. It is reserved for flags used in common
130 # - 1 byte: a bit field. It is reserved for flags used in common
131 # obsolete marker operations, to avoid repeated decoding of metadata
131 # obsolete marker operations, to avoid repeated decoding of metadata
132 # entries.
132 # entries.
133 #
133 #
134 # - 20 bytes: obsoleted changeset identifier.
134 # - 20 bytes: obsoleted changeset identifier.
135 #
135 #
136 # - N*20 bytes: new changesets identifiers.
136 # - N*20 bytes: new changesets identifiers.
137 #
137 #
138 # - M bytes: metadata as a sequence of nul-terminated strings. Each
138 # - M bytes: metadata as a sequence of nul-terminated strings. Each
139 # string contains a key and a value, separated by a colon ':', without
139 # string contains a key and a value, separated by a colon ':', without
140 # additional encoding. Keys cannot contain '\0' or ':' and values
140 # additional encoding. Keys cannot contain '\0' or ':' and values
141 # cannot contain '\0'.
141 # cannot contain '\0'.
142 _fm0version = 0
142 _fm0version = 0
143 _fm0fixed = '>BIB20s'
143 _fm0fixed = '>BIB20s'
144 _fm0node = '20s'
144 _fm0node = '20s'
145 _fm0fsize = _calcsize(_fm0fixed)
145 _fm0fsize = _calcsize(_fm0fixed)
146 _fm0fnodesize = _calcsize(_fm0node)
146 _fm0fnodesize = _calcsize(_fm0node)
147
147
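# Editorial note (not part of the original file): a stand-alone sketch that
# packs a single version-0 marker by hand, following the fixed-part layout
# documented above ('>BIB20s': uint8 successor count, uint32 metadata size,
# uint8 flags, 20-byte precursor). It uses only the standard library and a
# fabricated precursor id; real encoding goes through _fm0encodeonemarker.

import struct

def packfm0(precursor, successors, metadata=b'', flags=0):
    fixed = struct.pack('>BIB20s', len(successors), len(metadata), flags,
                        precursor)
    return fixed + b''.join(successors) + metadata

# One precursor replaced by one successor, no metadata:
raw = packfm0(b'\x11' * 20, [b'\x22' * 20])
assert len(raw) == struct.calcsize('>BIB20s') + 20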
148 def _fm0readmarkers(data, off):
148 def _fm0readmarkers(data, off):
149 # Loop on markers
149 # Loop on markers
150 l = len(data)
150 l = len(data)
151 while off + _fm0fsize <= l:
151 while off + _fm0fsize <= l:
152 # read fixed part
152 # read fixed part
153 cur = data[off:off + _fm0fsize]
153 cur = data[off:off + _fm0fsize]
154 off += _fm0fsize
154 off += _fm0fsize
155 numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
155 numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
156 # read replacement
156 # read replacement
157 sucs = ()
157 sucs = ()
158 if numsuc:
158 if numsuc:
159 s = (_fm0fnodesize * numsuc)
159 s = (_fm0fnodesize * numsuc)
160 cur = data[off:off + s]
160 cur = data[off:off + s]
161 sucs = _unpack(_fm0node * numsuc, cur)
161 sucs = _unpack(_fm0node * numsuc, cur)
162 off += s
162 off += s
163 # read metadata
163 # read metadata
164 # (metadata will be decoded on demand)
164 # (metadata will be decoded on demand)
165 metadata = data[off:off + mdsize]
165 metadata = data[off:off + mdsize]
166 if len(metadata) != mdsize:
166 if len(metadata) != mdsize:
167 raise util.Abort(_('parsing obsolete marker: metadata is too '
167 raise util.Abort(_('parsing obsolete marker: metadata is too '
168 'short, %d bytes expected, got %d')
168 'short, %d bytes expected, got %d')
169 % (mdsize, len(metadata)))
169 % (mdsize, len(metadata)))
170 off += mdsize
170 off += mdsize
171 metadata = _fm0decodemeta(metadata)
171 metadata = _fm0decodemeta(metadata)
172 try:
172 try:
173 when, offset = metadata.pop('date', '0 0').split(' ')
173 when, offset = metadata.pop('date', '0 0').split(' ')
174 date = float(when), int(offset)
174 date = float(when), int(offset)
175 except ValueError:
175 except ValueError:
176 date = (0., 0)
176 date = (0., 0)
177 parents = None
177 parents = None
178 if 'p2' in metadata:
178 if 'p2' in metadata:
179 parents = (metadata.pop('p1', None), metadata.pop('p2', None))
179 parents = (metadata.pop('p1', None), metadata.pop('p2', None))
180 elif 'p1' in metadata:
180 elif 'p1' in metadata:
181 parents = (metadata.pop('p1', None),)
181 parents = (metadata.pop('p1', None),)
182 elif 'p0' in metadata:
182 elif 'p0' in metadata:
183 parents = ()
183 parents = ()
184 if parents is not None:
184 if parents is not None:
185 try:
185 try:
186 parents = tuple(node.bin(p) for p in parents)
186 parents = tuple(node.bin(p) for p in parents)
187 # if parent content is not a nodeid, drop the data
187 # if parent content is not a nodeid, drop the data
188 for p in parents:
188 for p in parents:
189 if len(p) != 20:
189 if len(p) != 20:
190 parents = None
190 parents = None
191 break
191 break
192 except TypeError:
192 except TypeError:
193 # if content cannot be translated to nodeid drop the data.
193 # if content cannot be translated to nodeid drop the data.
194 parents = None
194 parents = None
195
195
196 metadata = tuple(sorted(metadata.iteritems()))
196 metadata = tuple(sorted(metadata.iteritems()))
197
197
198 yield (pre, sucs, flags, metadata, date, parents)
198 yield (pre, sucs, flags, metadata, date, parents)
199
199
200 def _fm0encodeonemarker(marker):
200 def _fm0encodeonemarker(marker):
201 pre, sucs, flags, metadata, date, parents = marker
201 pre, sucs, flags, metadata, date, parents = marker
202 if flags & usingsha256:
202 if flags & usingsha256:
203 raise util.Abort(_('cannot handle sha256 with old obsstore format'))
203 raise util.Abort(_('cannot handle sha256 with old obsstore format'))
204 metadata = dict(metadata)
204 metadata = dict(metadata)
205 time, tz = date
205 time, tz = date
206 metadata['date'] = '%r %i' % (time, tz)
206 metadata['date'] = '%r %i' % (time, tz)
207 if parents is not None:
207 if parents is not None:
208 if not parents:
208 if not parents:
209 # mark that we explicitly recorded no parents
209 # mark that we explicitly recorded no parents
210 metadata['p0'] = ''
210 metadata['p0'] = ''
211 for i, p in enumerate(parents):
211 for i, p in enumerate(parents):
212 metadata['p%i' % (i + 1)] = node.hex(p)
212 metadata['p%i' % (i + 1)] = node.hex(p)
213 metadata = _fm0encodemeta(metadata)
213 metadata = _fm0encodemeta(metadata)
214 numsuc = len(sucs)
214 numsuc = len(sucs)
215 format = _fm0fixed + (_fm0node * numsuc)
215 format = _fm0fixed + (_fm0node * numsuc)
216 data = [numsuc, len(metadata), flags, pre]
216 data = [numsuc, len(metadata), flags, pre]
217 data.extend(sucs)
217 data.extend(sucs)
218 return _pack(format, *data) + metadata
218 return _pack(format, *data) + metadata
219
219
220 def _fm0encodemeta(meta):
220 def _fm0encodemeta(meta):
221 """Return encoded metadata string to string mapping.
221 """Return encoded metadata string to string mapping.
222
222
223 Assume no ':' in key and no '\0' in both key and value."""
223 Assume no ':' in key and no '\0' in both key and value."""
224 for key, value in meta.iteritems():
224 for key, value in meta.iteritems():
225 if ':' in key or '\0' in key:
225 if ':' in key or '\0' in key:
226 raise ValueError("':' and '\0' are forbidden in metadata key'")
226 raise ValueError("':' and '\0' are forbidden in metadata key'")
227 if '\0' in value:
227 if '\0' in value:
228 raise ValueError("':' is forbidden in metadata value'")
228 raise ValueError("':' is forbidden in metadata value'")
229 return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])
229 return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])
230
230
231 def _fm0decodemeta(data):
231 def _fm0decodemeta(data):
232 """Return string to string dictionary from encoded version."""
232 """Return string to string dictionary from encoded version."""
233 d = {}
233 d = {}
234 for l in data.split('\0'):
234 for l in data.split('\0'):
235 if l:
235 if l:
236 key, value = l.split(':')
236 key, value = l.split(':')
237 d[key] = value
237 d[key] = value
238 return d
238 return d
239
239
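# A round-trip sketch of the '\0'-separated "key:value" scheme implemented by
# _fm0encodemeta/_fm0decodemeta above, including the special 'date' and 'p1'
# keys that the version-0 reader recognizes. All values here are invented.
examplemeta = {'user': 'alice', 'date': '1434500000 0', 'p1': '00' * 20}
encoded = '\0'.join('%s:%s' % (k, examplemeta[k]) for k in sorted(examplemeta))
decoded = dict(item.split(':') for item in encoded.split('\0') if item)
assert decoded == examplemeta
when, offset = decoded.pop('date').split(' ')
date = (float(when), int(offset))   # same shape as the tuple _fm0readmarkers yields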
240 ## Parsing and writing of version "1"
240 ## Parsing and writing of version "1"
241 #
241 #
242 # The header is followed by the markers. Each marker is made of:
242 # The header is followed by the markers. Each marker is made of:
243 #
243 #
244 # - uint32: total size of the marker (including this field)
244 # - uint32: total size of the marker (including this field)
245 #
245 #
246 # - float64: date in seconds since epoch
246 # - float64: date in seconds since epoch
247 #
247 #
248 # - int16: timezone offset in minutes
248 # - int16: timezone offset in minutes
249 #
249 #
250 # - uint16: a bit field. It is reserved for flags used in common
250 # - uint16: a bit field. It is reserved for flags used in common
251 # obsolete marker operations, to avoid repeated decoding of metadata
251 # obsolete marker operations, to avoid repeated decoding of metadata
252 # entries.
252 # entries.
253 #
253 #
254 # - uint8: number of successors "N", can be zero.
254 # - uint8: number of successors "N", can be zero.
255 #
255 #
256 # - uint8: number of parents "P", can be zero.
256 # - uint8: number of parents "P", can be zero.
257 #
257 #
258 # 0: parents data stored but no parent,
258 # 0: parents data stored but no parent,
259 # 1: one parent stored,
259 # 1: one parent stored,
260 # 2: two parents stored,
260 # 2: two parents stored,
261 # 3: no parent data stored
261 # 3: no parent data stored
262 #
262 #
263 # - uint8: number of metadata entries M
263 # - uint8: number of metadata entries M
264 #
264 #
265 # - 20 or 32 bytes: precursor changeset identifier.
265 # - 20 or 32 bytes: precursor changeset identifier.
266 #
266 #
267 # - N*(20 or 32) bytes: successors changesets identifiers.
267 # - N*(20 or 32) bytes: successors changesets identifiers.
268 #
268 #
269 # - P*(20 or 32) bytes: parents of the precursors changesets.
269 # - P*(20 or 32) bytes: parents of the precursors changesets.
270 #
270 #
271 # - M*(uint8, uint8): size of all metadata entries (key and value)
271 # - M*(uint8, uint8): size of all metadata entries (key and value)
272 #
272 #
273 # - remaining bytes: the metadata, each (key, value) pair after the other.
273 # - remaining bytes: the metadata, each (key, value) pair after the other.
274 _fm1version = 1
274 _fm1version = 1
275 _fm1fixed = '>IdhHBBB20s'
275 _fm1fixed = '>IdhHBBB20s'
276 _fm1nodesha1 = '20s'
276 _fm1nodesha1 = '20s'
277 _fm1nodesha256 = '32s'
277 _fm1nodesha256 = '32s'
278 _fm1nodesha1size = _calcsize(_fm1nodesha1)
278 _fm1nodesha1size = _calcsize(_fm1nodesha1)
279 _fm1nodesha256size = _calcsize(_fm1nodesha256)
279 _fm1nodesha256size = _calcsize(_fm1nodesha256)
280 _fm1fsize = _calcsize(_fm1fixed)
280 _fm1fsize = _calcsize(_fm1fixed)
281 _fm1parentnone = 3
281 _fm1parentnone = 3
282 _fm1parentshift = 14
282 _fm1parentshift = 14
283 _fm1parentmask = (_fm1parentnone << _fm1parentshift)
283 _fm1parentmask = (_fm1parentnone << _fm1parentshift)
284 _fm1metapair = 'BB'
284 _fm1metapair = 'BB'
285 _fm1metapairsize = _calcsize('BB')
285 _fm1metapairsize = _calcsize('BB')
286
286
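# A sketch of the version-1 fixed part described above, standard library only.
# The total-size field can only be filled in once successors, parents and
# metadata have been appended, which is why _fm1encodeonemarker patches it last.
import struct

assert struct.calcsize('>IdhHBBB20s') == 39   # matches _fm1fsize for sha1 nodes
fixedpart = struct.pack('>IdhHBBB20s',
                        0,             # total marker size, patched later
                        1434500000.0,  # date: seconds since epoch (float64)
                        0,             # timezone offset in minutes (int16)
                        0,             # flags bit field (uint16)
                        1,             # number of successors "N"
                        3,             # parent count; 3 means "no parent data stored"
                        0,             # number of metadata entries "M"
                        b'\x11' * 20)  # precursor changeset identifier (made up)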
287 def _fm1purereadmarkers(data, off):
287 def _fm1purereadmarkers(data, off):
288 # make some global constants local for performance
288 # make some global constants local for performance
289 noneflag = _fm1parentnone
289 noneflag = _fm1parentnone
290 sha2flag = usingsha256
290 sha2flag = usingsha256
291 sha1size = _fm1nodesha1size
291 sha1size = _fm1nodesha1size
292 sha2size = _fm1nodesha256size
292 sha2size = _fm1nodesha256size
293 sha1fmt = _fm1nodesha1
293 sha1fmt = _fm1nodesha1
294 sha2fmt = _fm1nodesha256
294 sha2fmt = _fm1nodesha256
295 metasize = _fm1metapairsize
295 metasize = _fm1metapairsize
296 metafmt = _fm1metapair
296 metafmt = _fm1metapair
297 fsize = _fm1fsize
297 fsize = _fm1fsize
298 unpack = _unpack
298 unpack = _unpack
299
299
300 # Loop on markers
300 # Loop on markers
301 stop = len(data) - _fm1fsize
301 stop = len(data) - _fm1fsize
302 ufixed = struct.Struct(_fm1fixed).unpack
302 ufixed = struct.Struct(_fm1fixed).unpack
303
303
304 while off <= stop:
304 while off <= stop:
305 # read fixed part
305 # read fixed part
306 o1 = off + fsize
306 o1 = off + fsize
307 t, secs, tz, flags, numsuc, numpar, nummeta, prec = ufixed(data[off:o1])
307 t, secs, tz, flags, numsuc, numpar, nummeta, prec = ufixed(data[off:o1])
308
308
309 if flags & sha2flag:
309 if flags & sha2flag:
310 # FIXME: prec was read as a SHA1, needs to be amended
310 # FIXME: prec was read as a SHA1, needs to be amended
311
311
312 # read 0 or more successors
312 # read 0 or more successors
313 if numsuc == 1:
313 if numsuc == 1:
314 o2 = o1 + sha2size
314 o2 = o1 + sha2size
315 sucs = (data[o1:o2],)
315 sucs = (data[o1:o2],)
316 else:
316 else:
317 o2 = o1 + sha2size * numsuc
317 o2 = o1 + sha2size * numsuc
318 sucs = unpack(sha2fmt * numsuc, data[o1:o2])
318 sucs = unpack(sha2fmt * numsuc, data[o1:o2])
319
319
320 # read parents
320 # read parents
321 if numpar == noneflag:
321 if numpar == noneflag:
322 o3 = o2
322 o3 = o2
323 parents = None
323 parents = None
324 elif numpar == 1:
324 elif numpar == 1:
325 o3 = o2 + sha2size
325 o3 = o2 + sha2size
326 parents = (data[o2:o3],)
326 parents = (data[o2:o3],)
327 else:
327 else:
328 o3 = o2 + sha2size * numpar
328 o3 = o2 + sha2size * numpar
329 parents = unpack(sha2fmt * numpar, data[o2:o3])
329 parents = unpack(sha2fmt * numpar, data[o2:o3])
330 else:
330 else:
331 # read 0 or more successors
331 # read 0 or more successors
332 if numsuc == 1:
332 if numsuc == 1:
333 o2 = o1 + sha1size
333 o2 = o1 + sha1size
334 sucs = (data[o1:o2],)
334 sucs = (data[o1:o2],)
335 else:
335 else:
336 o2 = o1 + sha1size * numsuc
336 o2 = o1 + sha1size * numsuc
337 sucs = unpack(sha1fmt * numsuc, data[o1:o2])
337 sucs = unpack(sha1fmt * numsuc, data[o1:o2])
338
338
339 # read parents
339 # read parents
340 if numpar == noneflag:
340 if numpar == noneflag:
341 o3 = o2
341 o3 = o2
342 parents = None
342 parents = None
343 elif numpar == 1:
343 elif numpar == 1:
344 o3 = o2 + sha1size
344 o3 = o2 + sha1size
345 parents = (data[o2:o3],)
345 parents = (data[o2:o3],)
346 else:
346 else:
347 o3 = o2 + sha1size * numpar
347 o3 = o2 + sha1size * numpar
348 parents = unpack(sha1fmt * numpar, data[o2:o3])
348 parents = unpack(sha1fmt * numpar, data[o2:o3])
349
349
350 # read metadata
350 # read metadata
351 off = o3 + metasize * nummeta
351 off = o3 + metasize * nummeta
352 metapairsize = unpack('>' + (metafmt * nummeta), data[o3:off])
352 metapairsize = unpack('>' + (metafmt * nummeta), data[o3:off])
353 metadata = []
353 metadata = []
354 for idx in xrange(0, len(metapairsize), 2):
354 for idx in xrange(0, len(metapairsize), 2):
355 o1 = off + metapairsize[idx]
355 o1 = off + metapairsize[idx]
356 o2 = o1 + metapairsize[idx + 1]
356 o2 = o1 + metapairsize[idx + 1]
357 metadata.append((data[off:o1], data[o1:o2]))
357 metadata.append((data[off:o1], data[o1:o2]))
358 off = o2
358 off = o2
359
359
360 yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)
360 yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)
361
361
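# A sketch of the metadata tail handled by the loop above: M pairs of
# (key length, value length) bytes, followed by the keys and values back to
# back. The pair values below are invented.
import struct

pairs = [(b'user', b'alice'), (b'note', b'amended')]
sizes = struct.pack('>' + 'BB' * len(pairs),
                    *[n for k, v in pairs for n in (len(k), len(v))])
blob = sizes + b''.join(k + v for k, v in pairs)

lens = struct.unpack('>' + 'BB' * len(pairs), blob[:2 * len(pairs)])
off, out = 2 * len(pairs), []
for i in range(0, len(lens), 2):
    key = blob[off:off + lens[i]]
    off += lens[i]
    value = blob[off:off + lens[i + 1]]
    off += lens[i + 1]
    out.append((key, value))
assert out == pairs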
362 def _fm1encodeonemarker(marker):
362 def _fm1encodeonemarker(marker):
363 pre, sucs, flags, metadata, date, parents = marker
363 pre, sucs, flags, metadata, date, parents = marker
364 # determine node size
364 # determine node size
365 _fm1node = _fm1nodesha1
365 _fm1node = _fm1nodesha1
366 if flags & usingsha256:
366 if flags & usingsha256:
367 _fm1node = _fm1nodesha256
367 _fm1node = _fm1nodesha256
368 numsuc = len(sucs)
368 numsuc = len(sucs)
369 numextranodes = numsuc
369 numextranodes = numsuc
370 if parents is None:
370 if parents is None:
371 numpar = _fm1parentnone
371 numpar = _fm1parentnone
372 else:
372 else:
373 numpar = len(parents)
373 numpar = len(parents)
374 numextranodes += numpar
374 numextranodes += numpar
375 formatnodes = _fm1node * numextranodes
375 formatnodes = _fm1node * numextranodes
376 formatmeta = _fm1metapair * len(metadata)
376 formatmeta = _fm1metapair * len(metadata)
377 format = _fm1fixed + formatnodes + formatmeta
377 format = _fm1fixed + formatnodes + formatmeta
378 # date[1] is a timezone offset in seconds; the format stores minutes, hence // 60
378 # date[1] is a timezone offset in seconds; the format stores minutes, hence // 60
379 tz = date[1]//60
379 tz = date[1]//60
380 data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
380 data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
381 data.extend(sucs)
381 data.extend(sucs)
382 if parents is not None:
382 if parents is not None:
383 data.extend(parents)
383 data.extend(parents)
384 totalsize = _calcsize(format)
384 totalsize = _calcsize(format)
385 for key, value in metadata:
385 for key, value in metadata:
386 lk = len(key)
386 lk = len(key)
387 lv = len(value)
387 lv = len(value)
388 data.append(lk)
388 data.append(lk)
389 data.append(lv)
389 data.append(lv)
390 totalsize += lk + lv
390 totalsize += lk + lv
391 data[0] = totalsize
391 data[0] = totalsize
392 data = [_pack(format, *data)]
392 data = [_pack(format, *data)]
393 for key, value in metadata:
393 for key, value in metadata:
394 data.append(key)
394 data.append(key)
395 data.append(value)
395 data.append(value)
396 return ''.join(data)
396 return ''.join(data)
397
397
398 def _fm1readmarkers(data, off):
398 def _fm1readmarkers(data, off):
399 native = getattr(parsers, 'fm1readmarkers', None)
399 native = getattr(parsers, 'fm1readmarkers', None)
400 if not native:
400 if not native:
401 return _fm1purereadmarkers(data, off)
401 return _fm1purereadmarkers(data, off)
402 stop = len(data) - _fm1fsize
402 stop = len(data) - _fm1fsize
403 return native(data, off, stop)
403 return native(data, off, stop)
404
404
405 # mapping to read/write various marker formats
405 # mapping to read/write various marker formats
406 # <version> -> (decoder, encoder)
406 # <version> -> (decoder, encoder)
407 formats = {_fm0version: (_fm0readmarkers, _fm0encodeonemarker),
407 formats = {_fm0version: (_fm0readmarkers, _fm0encodeonemarker),
408 _fm1version: (_fm1readmarkers, _fm1encodeonemarker)}
408 _fm1version: (_fm1readmarkers, _fm1encodeonemarker)}
409
409
410 @util.nogc
410 @util.nogc
411 def _readmarkers(data):
411 def _readmarkers(data):
412 """Read and enumerate markers from raw data"""
412 """Read and enumerate markers from raw data"""
413 off = 0
413 off = 0
414 diskversion = _unpack('>B', data[off:off + 1])[0]
414 diskversion = _unpack('>B', data[off:off + 1])[0]
415 off += 1
415 off += 1
416 if diskversion not in formats:
416 if diskversion not in formats:
417 raise util.Abort(_('parsing obsolete marker: unknown version %r')
417 raise util.Abort(_('parsing obsolete marker: unknown version %r')
418 % diskversion)
418 % diskversion)
419 return diskversion, formats[diskversion][0](data, off)
419 return diskversion, formats[diskversion][0](data, off)
420
420
421 def encodemarkers(markers, addheader=False, version=_fm0version):
421 def encodemarkers(markers, addheader=False, version=_fm0version):
422 # Kept separate from flushmarkers() so it can be reused for
422 # Kept separate from flushmarkers() so it can be reused for
423 # marker exchange.
423 # marker exchange.
424 encodeone = formats[version][1]
424 encodeone = formats[version][1]
425 if addheader:
425 if addheader:
426 yield _pack('>B', version)
426 yield _pack('>B', version)
427 for marker in markers:
427 for marker in markers:
428 yield encodeone(marker)
428 yield encodeone(marker)
429
429
430
430
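# A round-trip sketch, assuming it sits inside this module so the helpers
# above are in scope: an obsstore blob is one version byte followed by the
# concatenated markers, so encodemarkers() and _readmarkers() invert each
# other. The function is illustrative only and is not called anywhere.
def _encoderoundtripexample():
    prec = '\x11' * 20
    succ = '\x22' * 20
    mark = (prec, (succ,), 0, (('user', 'alice'),), (0.0, 0), None)
    blob = ''.join(encodemarkers([mark], addheader=True, version=_fm0version))
    version, markers = _readmarkers(blob)
    assert version == _fm0version
    assert list(markers)[0][0] == prec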
431 class marker(object):
431 class marker(object):
432 """Wrap obsolete marker raw data"""
432 """Wrap obsolete marker raw data"""
433
433
434 def __init__(self, repo, data):
434 def __init__(self, repo, data):
435 # the repo argument will be used to create changectx in later version
435 # the repo argument will be used to create changectx in later version
436 self._repo = repo
436 self._repo = repo
437 self._data = data
437 self._data = data
438 self._decodedmeta = None
438 self._decodedmeta = None
439
439
440 def __hash__(self):
440 def __hash__(self):
441 return hash(self._data)
441 return hash(self._data)
442
442
443 def __eq__(self, other):
443 def __eq__(self, other):
444 if type(other) != type(self):
444 if type(other) != type(self):
445 return False
445 return False
446 return self._data == other._data
446 return self._data == other._data
447
447
448 def precnode(self):
448 def precnode(self):
449 """Precursor changeset node identifier"""
449 """Precursor changeset node identifier"""
450 return self._data[0]
450 return self._data[0]
451
451
452 def succnodes(self):
452 def succnodes(self):
453 """List of successor changesets node identifiers"""
453 """List of successor changesets node identifiers"""
454 return self._data[1]
454 return self._data[1]
455
455
456 def parentnodes(self):
456 def parentnodes(self):
457 """Parents of the precursors (None if not recorded)"""
457 """Parents of the precursors (None if not recorded)"""
458 return self._data[5]
458 return self._data[5]
459
459
460 def metadata(self):
460 def metadata(self):
461 """Decoded metadata dictionary"""
461 """Decoded metadata dictionary"""
462 return dict(self._data[3])
462 return dict(self._data[3])
463
463
464 def date(self):
464 def date(self):
465 """Creation date as (unixtime, offset)"""
465 """Creation date as (unixtime, offset)"""
466 return self._data[4]
466 return self._data[4]
467
467
468 def flags(self):
468 def flags(self):
469 """The flags field of the marker"""
469 """The flags field of the marker"""
470 return self._data[2]
470 return self._data[2]
471
471
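# A small usage sketch, assuming this module's namespace is in scope: the
# marker class is a thin read-only wrapper around the raw 6-tuple. None of
# the accessors used below touch the repo argument, so None is passed purely
# for illustration; the node values are made up.
def _markerwrapperexample():
    raw = ('\x11' * 20, ('\x22' * 20,), 0, (('user', 'alice'),), (0.0, 0), None)
    m = marker(None, raw)
    assert m.precnode() == raw[0]
    assert m.succnodes() == raw[1]
    assert m.metadata() == {'user': 'alice'}
    assert m.flags() == 0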
472 @util.nogc
472 @util.nogc
473 def _addsuccessors(successors, markers):
473 def _addsuccessors(successors, markers):
474 for mark in markers:
474 for mark in markers:
475 successors.setdefault(mark[0], set()).add(mark)
475 successors.setdefault(mark[0], set()).add(mark)
476
476
477 @util.nogc
477 @util.nogc
478 def _addprecursors(precursors, markers):
478 def _addprecursors(precursors, markers):
479 for mark in markers:
479 for mark in markers:
480 for suc in mark[1]:
480 for suc in mark[1]:
481 precursors.setdefault(suc, set()).add(mark)
481 precursors.setdefault(suc, set()).add(mark)
482
482
483 @util.nogc
483 @util.nogc
484 def _addchildren(children, markers):
484 def _addchildren(children, markers):
485 for mark in markers:
485 for mark in markers:
486 parents = mark[5]
486 parents = mark[5]
487 if parents is not None:
487 if parents is not None:
488 for p in parents:
488 for p in parents:
489 children.setdefault(p, set()).add(mark)
489 children.setdefault(p, set()).add(mark)
490
490
491 def _checkinvalidmarkers(markers):
491 def _checkinvalidmarkers(markers):
492 """search for marker with invalid data and raise error if needed
492 """search for marker with invalid data and raise error if needed
493
493
494 Exist as a separated function to allow the evolve extension for a more
494 Exist as a separated function to allow the evolve extension for a more
495 subtle handling.
495 subtle handling.
496 """
496 """
497 for mark in markers:
497 for mark in markers:
498 if node.nullid in mark[1]:
498 if node.nullid in mark[1]:
499 raise util.Abort(_('bad obsolescence marker detected: '
499 raise util.Abort(_('bad obsolescence marker detected: '
500 'invalid successors nullid'))
500 'invalid successors nullid'))
501
501
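# A plain-dict sketch of how the three lookup tables built by the helpers
# above relate to raw marker tuples. Node identifiers are faked with short
# strings purely for illustration.
m1 = ('A', ('B',), 0, (), (0.0, 0), ('P',))   # A rewritten as B, parent P
m2 = ('B', (), 0, (), (0.0, 0), ('A',))       # B pruned; its parent is A
succmap, precmap, childmap = {}, {}, {}
for m in (m1, m2):
    succmap.setdefault(m[0], set()).add(m)
    for s in m[1]:
        precmap.setdefault(s, set()).add(m)
    if m[5] is not None:
        for p in m[5]:
            childmap.setdefault(p, set()).add(m)
assert succmap['A'] == {m1} and precmap['B'] == {m1}
assert childmap['A'] == {m2}    # prune markers are reachable via the parent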
502 class obsstore(object):
502 class obsstore(object):
503 """Store obsolete markers
503 """Store obsolete markers
504
504
505 Markers can be accessed with three mappings:
505 Markers can be accessed with three mappings:
506 - precursors[x] -> set(markers on precursors edges of x)
506 - precursors[x] -> set(markers on precursors edges of x)
507 - successors[x] -> set(markers on successors edges of x)
507 - successors[x] -> set(markers on successors edges of x)
508 - children[x] -> set(markers on precursors edges of children(x))
508 - children[x] -> set(markers on precursors edges of children(x))
509 """
509 """
510
510
511 fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
511 fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
512 # prec: nodeid, precursor changesets
512 # prec: nodeid, precursor changesets
513 # succs: tuple of nodeid, successor changesets (0-N length)
513 # succs: tuple of nodeid, successor changesets (0-N length)
514 # flag: integer, flag field carrying modifier for the markers (see doc)
514 # flag: integer, flag field carrying modifier for the markers (see doc)
515 # meta: binary blob, encoded metadata dictionary
515 # meta: binary blob, encoded metadata dictionary
516 # date: (float, int) tuple, date of marker creation
516 # date: (float, int) tuple, date of marker creation
517 # parents: (tuple of nodeid) or None, parents of precursors
517 # parents: (tuple of nodeid) or None, parents of precursors
518 # None is used when no data has been recorded
518 # None is used when no data has been recorded
519
519
520 def __init__(self, sopener, defaultformat=_fm1version, readonly=False):
520 def __init__(self, sopener, defaultformat=_fm1version, readonly=False):
521 # caches for the various obsolescence related sets
521 # caches for the various obsolescence related sets
522 self.caches = {}
522 self.caches = {}
523 self._all = []
523 self._all = []
524 self.sopener = sopener
524 self.sopener = sopener
525 data = sopener.tryread('obsstore')
525 data = sopener.tryread('obsstore')
526 self._version = defaultformat
526 self._version = defaultformat
527 self._readonly = readonly
527 self._readonly = readonly
528 if data:
528 if data:
529 self._version, markers = _readmarkers(data)
529 self._version, markers = _readmarkers(data)
530 self._addmarkers(markers)
530 self._addmarkers(markers)
531
531
532 def __iter__(self):
532 def __iter__(self):
533 return iter(self._all)
533 return iter(self._all)
534
534
535 def __len__(self):
535 def __len__(self):
536 return len(self._all)
536 return len(self._all)
537
537
538 def __nonzero__(self):
538 def __nonzero__(self):
539 return bool(self._all)
539 return bool(self._all)
540
540
541 def create(self, transaction, prec, succs=(), flag=0, parents=None,
541 def create(self, transaction, prec, succs=(), flag=0, parents=None,
542 date=None, metadata=None):
542 date=None, metadata=None):
543 """obsolete: add a new obsolete marker
543 """obsolete: add a new obsolete marker
544
544
545 * ensure it is hashable
545 * ensure it is hashable
546 * check mandatory metadata
546 * check mandatory metadata
547 * encode metadata
547 * encode metadata
548
548
549 If you are a human writing code that creates markers, you want to use the
549 If you are a human writing code that creates markers, you want to use the
550 `createmarkers` function in this module instead.
550 `createmarkers` function in this module instead.
551
551
552 return True if a new marker has been added, False if the marker
552 return True if a new marker has been added, False if the marker
553 already existed (no op).
553 already existed (no op).
554 """
554 """
555 if metadata is None:
555 if metadata is None:
556 metadata = {}
556 metadata = {}
557 if date is None:
557 if date is None:
558 if 'date' in metadata:
558 if 'date' in metadata:
559 # as a courtesy for out-of-tree extensions
559 # as a courtesy for out-of-tree extensions
560 date = util.parsedate(metadata.pop('date'))
560 date = util.parsedate(metadata.pop('date'))
561 else:
561 else:
562 date = util.makedate()
562 date = util.makedate()
563 if len(prec) != 20:
563 if len(prec) != 20:
564 raise ValueError(prec)
564 raise ValueError(prec)
565 for succ in succs:
565 for succ in succs:
566 if len(succ) != 20:
566 if len(succ) != 20:
567 raise ValueError(succ)
567 raise ValueError(succ)
568 if prec in succs:
568 if prec in succs:
569 raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
569 raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
570
570
571 metadata = tuple(sorted(metadata.iteritems()))
571 metadata = tuple(sorted(metadata.iteritems()))
572
572
573 marker = (str(prec), tuple(succs), int(flag), metadata, date, parents)
573 marker = (str(prec), tuple(succs), int(flag), metadata, date, parents)
574 return bool(self.add(transaction, [marker]))
574 return bool(self.add(transaction, [marker]))
575
575
576 def add(self, transaction, markers):
576 def add(self, transaction, markers):
577 """Add new markers to the store
577 """Add new markers to the store
578
578
579 Takes care of filtering out duplicates.
579 Takes care of filtering out duplicates.
580 Returns the number of new markers."""
580 Returns the number of new markers."""
581 if self._readonly:
581 if self._readonly:
582 raise util.Abort('creating obsolete markers is not enabled on this '
582 raise util.Abort('creating obsolete markers is not enabled on this '
583 'repo')
583 'repo')
584 known = set(self._all)
584 known = set(self._all)
585 new = []
585 new = []
586 for m in markers:
586 for m in markers:
587 if m not in known:
587 if m not in known:
588 known.add(m)
588 known.add(m)
589 new.append(m)
589 new.append(m)
590 if new:
590 if new:
591 f = self.sopener('obsstore', 'ab')
591 f = self.sopener('obsstore', 'ab')
592 try:
592 try:
593 offset = f.tell()
593 offset = f.tell()
594 transaction.add('obsstore', offset)
594 transaction.add('obsstore', offset)
595 # offset == 0: new file - add the version header
595 # offset == 0: new file - add the version header
596 for bytes in encodemarkers(new, offset == 0, self._version):
596 for bytes in encodemarkers(new, offset == 0, self._version):
597 f.write(bytes)
597 f.write(bytes)
598 finally:
598 finally:
599 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
599 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
600 # call 'filecacheentry.refresh()' here
600 # call 'filecacheentry.refresh()' here
601 f.close()
601 f.close()
602 self._addmarkers(new)
602 self._addmarkers(new)
603 # new markers *may* have changed several sets. invalidate the cache.
603 # new markers *may* have changed several sets. invalidate the cache.
604 self.caches.clear()
604 self.caches.clear()
605 # records the number of new markers for the transaction hooks
605 # records the number of new markers for the transaction hooks
606 previous = int(transaction.hookargs.get('new_obsmarkers', '0'))
606 previous = int(transaction.hookargs.get('new_obsmarkers', '0'))
607 transaction.hookargs['new_obsmarkers'] = str(previous + len(new))
607 transaction.hookargs['new_obsmarkers'] = str(previous + len(new))
608 return len(new)
608 return len(new)
609
609
610 def mergemarkers(self, transaction, data):
610 def mergemarkers(self, transaction, data):
611 """merge a binary stream of markers inside the obsstore
611 """merge a binary stream of markers inside the obsstore
612
612
613 Returns the number of new markers added."""
613 Returns the number of new markers added."""
614 version, markers = _readmarkers(data)
614 version, markers = _readmarkers(data)
615 return self.add(transaction, markers)
615 return self.add(transaction, markers)
616
616
617 @propertycache
617 @propertycache
618 def successors(self):
618 def successors(self):
619 successors = {}
619 successors = {}
620 _addsuccessors(successors, self._all)
620 _addsuccessors(successors, self._all)
621 return successors
621 return successors
622
622
623 @propertycache
623 @propertycache
624 def precursors(self):
624 def precursors(self):
625 precursors = {}
625 precursors = {}
626 _addprecursors(precursors, self._all)
626 _addprecursors(precursors, self._all)
627 return precursors
627 return precursors
628
628
629 @propertycache
629 @propertycache
630 def children(self):
630 def children(self):
631 children = {}
631 children = {}
632 _addchildren(children, self._all)
632 _addchildren(children, self._all)
633 return children
633 return children
634
634
635 def _cached(self, attr):
635 def _cached(self, attr):
636 return attr in self.__dict__
636 return attr in self.__dict__
637
637
638 def _addmarkers(self, markers):
638 def _addmarkers(self, markers):
639 markers = list(markers) # to allow repeated iteration
639 markers = list(markers) # to allow repeated iteration
640 self._all.extend(markers)
640 self._all.extend(markers)
641 if self._cached('successors'):
641 if self._cached('successors'):
642 _addsuccessors(self.successors, markers)
642 _addsuccessors(self.successors, markers)
643 if self._cached('precursors'):
643 if self._cached('precursors'):
644 _addprecursors(self.precursors, markers)
644 _addprecursors(self.precursors, markers)
645 if self._cached('children'):
645 if self._cached('children'):
646 _addchildren(self.children, markers)
646 _addchildren(self.children, markers)
647 _checkinvalidmarkers(markers)
647 _checkinvalidmarkers(markers)
648
648
649 def relevantmarkers(self, nodes):
649 def relevantmarkers(self, nodes):
650 """return a set of all obsolescence markers relevant to a set of nodes.
650 """return a set of all obsolescence markers relevant to a set of nodes.
651
651
652 "relevant" to a set of nodes mean:
652 "relevant" to a set of nodes mean:
653
653
654 - marker that use this changeset as successor
654 - marker that use this changeset as successor
655 - prune marker of direct children on this changeset
655 - prune marker of direct children on this changeset
656 - recursive application of the two rules on precursors of these markers
656 - recursive application of the two rules on precursors of these markers
657
657
658 It is a set so you cannot rely on order."""
658 It is a set so you cannot rely on order."""
659
659
660 pendingnodes = set(nodes)
660 pendingnodes = set(nodes)
661 seenmarkers = set()
661 seenmarkers = set()
662 seennodes = set(pendingnodes)
662 seennodes = set(pendingnodes)
663 precursorsmarkers = self.precursors
663 precursorsmarkers = self.precursors
664 children = self.children
664 children = self.children
665 while pendingnodes:
665 while pendingnodes:
666 direct = set()
666 direct = set()
667 for current in pendingnodes:
667 for current in pendingnodes:
668 direct.update(precursorsmarkers.get(current, ()))
668 direct.update(precursorsmarkers.get(current, ()))
669 pruned = [m for m in children.get(current, ()) if not m[1]]
669 pruned = [m for m in children.get(current, ()) if not m[1]]
670 direct.update(pruned)
670 direct.update(pruned)
671 direct -= seenmarkers
671 direct -= seenmarkers
672 pendingnodes = set([m[0] for m in direct])
672 pendingnodes = set([m[0] for m in direct])
673 seenmarkers |= direct
673 seenmarkers |= direct
674 pendingnodes -= seennodes
674 pendingnodes -= seennodes
675 seennodes |= pendingnodes
675 seennodes |= pendingnodes
676 return seenmarkers
676 return seenmarkers
677
677
678 def commonversion(versions):
678 def commonversion(versions):
679 """Return the newest version listed in both versions and our local formats.
679 """Return the newest version listed in both versions and our local formats.
680
680
681 Returns None if no common version exists.
681 Returns None if no common version exists.
682 """
682 """
683 versions.sort(reverse=True)
683 versions.sort(reverse=True)
684 # search for the highest version known on both sides
684 # search for the highest version known on both sides
685 for v in versions:
685 for v in versions:
686 if v in formats:
686 if v in formats:
687 return v
687 return v
688 return None
688 return None
689
689
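# A usage sketch, assuming this module's namespace is in scope: version
# negotiation simply picks the highest version both sides understand, and
# `formats` above currently knows versions 0 and 1. The function is
# illustrative only and never called.
def _commonversionexample():
    assert commonversion([2, 1, 0]) == 1   # peer also speaks a hypothetical v2
    assert commonversion([2]) is None      # no shared format at all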
690 # arbitrarily picked to fit into the 8K limit of HTTP servers
690 # arbitrarily picked to fit into the 8K limit of HTTP servers
691 # you have to take into account:
691 # you have to take into account:
692 # - the version header
692 # - the version header
693 # - the base85 encoding
693 # - the base85 encoding
694 _maxpayload = 5300
694 _maxpayload = 5300
695
695
696 def _pushkeyescape(markers):
696 def _pushkeyescape(markers):
697 """encode markers into a dict suitable for pushkey exchange
697 """encode markers into a dict suitable for pushkey exchange
698
698
699 - binary data is base85 encoded
699 - binary data is base85 encoded
700 - split into chunks smaller than 5300 bytes"""
700 - split into chunks smaller than 5300 bytes"""
701 keys = {}
701 keys = {}
702 parts = []
702 parts = []
703 currentlen = _maxpayload * 2 # ensure we create a new part
703 currentlen = _maxpayload * 2 # ensure we create a new part
704 for marker in markers:
704 for marker in markers:
705 nextdata = _fm0encodeonemarker(marker)
705 nextdata = _fm0encodeonemarker(marker)
706 if (len(nextdata) + currentlen > _maxpayload):
706 if (len(nextdata) + currentlen > _maxpayload):
707 currentpart = []
707 currentpart = []
708 currentlen = 0
708 currentlen = 0
709 parts.append(currentpart)
709 parts.append(currentpart)
710 currentpart.append(nextdata)
710 currentpart.append(nextdata)
711 currentlen += len(nextdata)
711 currentlen += len(nextdata)
712 for idx, part in enumerate(reversed(parts)):
712 for idx, part in enumerate(reversed(parts)):
713 data = ''.join([_pack('>B', _fm0version)] + part)
713 data = ''.join([_pack('>B', _fm0version)] + part)
714 keys['dump%i' % idx] = base85.b85encode(data)
714 keys['dump%i' % idx] = base85.b85encode(data)
715 return keys
715 return keys
716
716
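# A sketch of the chunking rule used by _pushkeyescape above, with short
# strings standing in for encoded markers and a tiny invented payload limit:
# a new part is started whenever the next blob would overflow the current one.
maxpayload = 10
parts, current, currentlen = [], None, maxpayload * 2
for blob in ['aaaa', 'bbbb', 'cccc', 'dd']:
    if currentlen + len(blob) > maxpayload:
        current, currentlen = [], 0
        parts.append(current)
    current.append(blob)
    currentlen += len(blob)
assert parts == [['aaaa', 'bbbb'], ['cccc', 'dd']]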
717 def listmarkers(repo):
717 def listmarkers(repo):
718 """List markers over pushkey"""
718 """List markers over pushkey"""
719 if not repo.obsstore:
719 if not repo.obsstore:
720 return {}
720 return {}
721 return _pushkeyescape(sorted(repo.obsstore))
721 return _pushkeyescape(sorted(repo.obsstore))
722
722
723 def pushmarker(repo, key, old, new):
723 def pushmarker(repo, key, old, new):
724 """Push markers over pushkey"""
724 """Push markers over pushkey"""
725 if not key.startswith('dump'):
725 if not key.startswith('dump'):
726 repo.ui.warn(_('unknown key: %r') % key)
726 repo.ui.warn(_('unknown key: %r') % key)
727 return 0
727 return 0
728 if old:
728 if old:
729 repo.ui.warn(_('unexpected old value for %r') % key)
729 repo.ui.warn(_('unexpected old value for %r') % key)
730 return 0
730 return 0
731 data = base85.b85decode(new)
731 data = base85.b85decode(new)
732 lock = repo.lock()
732 lock = repo.lock()
733 try:
733 try:
734 tr = repo.transaction('pushkey: obsolete markers')
734 tr = repo.transaction('pushkey: obsolete markers')
735 try:
735 try:
736 repo.obsstore.mergemarkers(tr, data)
736 repo.obsstore.mergemarkers(tr, data)
737 tr.close()
737 tr.close()
738 return 1
738 return 1
739 finally:
739 finally:
740 tr.release()
740 tr.release()
741 finally:
741 finally:
742 lock.release()
742 lock.release()
743
743
744 def getmarkers(repo, nodes=None):
744 def getmarkers(repo, nodes=None):
745 """returns markers known in a repository
745 """returns markers known in a repository
746
746
747 If <nodes> is specified, only markers "relevant" to those nodes are
747 If <nodes> is specified, only markers "relevant" to those nodes are
748 returned"""
748 returned"""
749 if nodes is None:
749 if nodes is None:
750 rawmarkers = repo.obsstore
750 rawmarkers = repo.obsstore
751 else:
751 else:
752 rawmarkers = repo.obsstore.relevantmarkers(nodes)
752 rawmarkers = repo.obsstore.relevantmarkers(nodes)
753
753
754 for markerdata in rawmarkers:
754 for markerdata in rawmarkers:
755 yield marker(repo, markerdata)
755 yield marker(repo, markerdata)
756
756
757 def relevantmarkers(repo, node):
757 def relevantmarkers(repo, node):
758 """all obsolete markers relevant to some revision"""
758 """all obsolete markers relevant to some revision"""
759 for markerdata in repo.obsstore.relevantmarkers(node):
759 for markerdata in repo.obsstore.relevantmarkers(node):
760 yield marker(repo, markerdata)
760 yield marker(repo, markerdata)
761
761
762
762
763 def precursormarkers(ctx):
763 def precursormarkers(ctx):
764 """obsolete marker marking this changeset as a successors"""
764 """obsolete marker marking this changeset as a successors"""
765 for data in ctx.repo().obsstore.precursors.get(ctx.node(), ()):
765 for data in ctx.repo().obsstore.precursors.get(ctx.node(), ()):
766 yield marker(ctx.repo(), data)
766 yield marker(ctx.repo(), data)
767
767
768 def successormarkers(ctx):
768 def successormarkers(ctx):
769 """obsolete marker making this changeset obsolete"""
769 """obsolete marker making this changeset obsolete"""
770 for data in ctx.repo().obsstore.successors.get(ctx.node(), ()):
770 for data in ctx.repo().obsstore.successors.get(ctx.node(), ()):
771 yield marker(ctx.repo(), data)
771 yield marker(ctx.repo(), data)
772
772
773 def allsuccessors(obsstore, nodes, ignoreflags=0):
773 def allsuccessors(obsstore, nodes, ignoreflags=0):
774 """Yield node for every successor of <nodes>.
774 """Yield node for every successor of <nodes>.
775
775
776 Some successors may be unknown locally.
776 Some successors may be unknown locally.
777
777
778 This is a linear yield unsuited to detecting split changesets. It includes
778 This is a linear yield unsuited to detecting split changesets. It includes
779 initial nodes too."""
779 initial nodes too."""
780 remaining = set(nodes)
780 remaining = set(nodes)
781 seen = set(remaining)
781 seen = set(remaining)
782 while remaining:
782 while remaining:
783 current = remaining.pop()
783 current = remaining.pop()
784 yield current
784 yield current
785 for mark in obsstore.successors.get(current, ()):
785 for mark in obsstore.successors.get(current, ()):
786 # ignore marker flagged with specified flag
786 # ignore marker flagged with specified flag
787 if mark[2] & ignoreflags:
787 if mark[2] & ignoreflags:
788 continue
788 continue
789 for suc in mark[1]:
789 for suc in mark[1]:
790 if suc not in seen:
790 if suc not in seen:
791 seen.add(suc)
791 seen.add(suc)
792 remaining.add(suc)
792 remaining.add(suc)
793
793
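# A sketch of the transitive walk performed by allsuccessors(), using a plain
# dict mapping a node to its direct successors instead of an obsstore. The
# node names are invented.
def transitivesuccessors(succs, nodes):
    remaining, seen = set(nodes), set(nodes)
    while remaining:
        current = remaining.pop()
        yield current
        for s in succs.get(current, ()):
            if s not in seen:
                seen.add(s)
                remaining.add(s)

chain = {'A': ['B'], 'B': ['C', 'D']}   # A rewritten as B, then B split into C, D
assert sorted(transitivesuccessors(chain, ['A'])) == ['A', 'B', 'C', 'D']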
794 def allprecursors(obsstore, nodes, ignoreflags=0):
794 def allprecursors(obsstore, nodes, ignoreflags=0):
795 """Yield node for every precursors of <nodes>.
795 """Yield node for every precursors of <nodes>.
796
796
797 Some precursors may be unknown locally.
797 Some precursors may be unknown locally.
798
798
799 This is a linear yield unsuited to detecting folded changesets. It includes
799 This is a linear yield unsuited to detecting folded changesets. It includes
800 initial nodes too."""
800 initial nodes too."""
801
801
802 remaining = set(nodes)
802 remaining = set(nodes)
803 seen = set(remaining)
803 seen = set(remaining)
804 while remaining:
804 while remaining:
805 current = remaining.pop()
805 current = remaining.pop()
806 yield current
806 yield current
807 for mark in obsstore.precursors.get(current, ()):
807 for mark in obsstore.precursors.get(current, ()):
808 # ignore marker flagged with specified flag
808 # ignore marker flagged with specified flag
809 if mark[2] & ignoreflags:
809 if mark[2] & ignoreflags:
810 continue
810 continue
811 suc = mark[0]
811 suc = mark[0]
812 if suc not in seen:
812 if suc not in seen:
813 seen.add(suc)
813 seen.add(suc)
814 remaining.add(suc)
814 remaining.add(suc)
815
815
816 def foreground(repo, nodes):
816 def foreground(repo, nodes):
817 """return all nodes in the "foreground" of other node
817 """return all nodes in the "foreground" of other node
818
818
819 The foreground of a revision is anything reachable using parent -> children
819 The foreground of a revision is anything reachable using parent -> children
820 or precursor -> successor relation. It is very similar to "descendant" but
820 or precursor -> successor relation. It is very similar to "descendant" but
821 augmented with obsolescence information.
821 augmented with obsolescence information.
822
822
823 Beware that obsolescence cycles may appear in complex situations.
823 Beware that obsolescence cycles may appear in complex situations.
824 """
824 """
825 repo = repo.unfiltered()
825 repo = repo.unfiltered()
826 foreground = set(repo.set('%ln::', nodes))
826 foreground = set(repo.set('%ln::', nodes))
827 if repo.obsstore:
827 if repo.obsstore:
828 # We only need this complicated logic if there is obsolescence
828 # We only need this complicated logic if there is obsolescence
829 # XXX will probably deserve an optimised revset.
829 # XXX will probably deserve an optimised revset.
830 nm = repo.changelog.nodemap
830 nm = repo.changelog.nodemap
831 plen = -1
831 plen = -1
832 # compute the whole set of successors or descendants
832 # compute the whole set of successors or descendants
833 while len(foreground) != plen:
833 while len(foreground) != plen:
834 plen = len(foreground)
834 plen = len(foreground)
835 succs = set(c.node() for c in foreground)
835 succs = set(c.node() for c in foreground)
836 mutable = [c.node() for c in foreground if c.mutable()]
836 mutable = [c.node() for c in foreground if c.mutable()]
837 succs.update(allsuccessors(repo.obsstore, mutable))
837 succs.update(allsuccessors(repo.obsstore, mutable))
838 known = (n for n in succs if n in nm)
838 known = (n for n in succs if n in nm)
839 foreground = set(repo.set('%ln::', known))
839 foreground = set(repo.set('%ln::', known))
840 return set(c.node() for c in foreground)
840 return set(c.node() for c in foreground)
841
841
842
842
843 def successorssets(repo, initialnode, cache=None):
843 def successorssets(repo, initialnode, cache=None):
844 """Return all set of successors of initial nodes
844 """Return all set of successors of initial nodes
845
845
846 The successors set of a changeset A is a group of revisions that succeed
846 The successors set of a changeset A is a group of revisions that succeed
847 A. It succeeds A as a consistent whole, each revision being only a partial
847 A. It succeeds A as a consistent whole, each revision being only a partial
848 replacement. The successors set contains non-obsolete changesets only.
848 replacement. The successors set contains non-obsolete changesets only.
849
849
850 This function returns the full list of successor sets which is why it
850 This function returns the full list of successor sets which is why it
851 returns a list of tuples and not just a single tuple. Each tuple is a valid
851 returns a list of tuples and not just a single tuple. Each tuple is a valid
852 successors set. Note that (A,) may be a valid successors set for changeset A
852 successors set. Note that (A,) may be a valid successors set for changeset A
853 (see below).
853 (see below).
854
854
855 In most cases, a changeset A will have a single element (e.g. the changeset
855 In most cases, a changeset A will have a single element (e.g. the changeset
856 A is replaced by A') in its successors set. Though, it is also common for a
856 A is replaced by A') in its successors set. Though, it is also common for a
857 changeset A to have no elements in its successor set (e.g. the changeset
857 changeset A to have no elements in its successor set (e.g. the changeset
858 has been pruned). Therefore, the returned list of successors sets will be
858 has been pruned). Therefore, the returned list of successors sets will be
859 [(A',)] or [], respectively.
859 [(A',)] or [], respectively.
860
860
861 When a changeset A is split into A' and B', however, it will result in a
861 When a changeset A is split into A' and B', however, it will result in a
862 successors set containing more than a single element, i.e. [(A',B')].
862 successors set containing more than a single element, i.e. [(A',B')].
863 Divergent changesets will result in multiple successors sets, i.e. [(A',),
863 Divergent changesets will result in multiple successors sets, i.e. [(A',),
864 (A'')].
864 (A'')].
865
865
866 If a changeset A is not obsolete, then it will conceptually have no
866 If a changeset A is not obsolete, then it will conceptually have no
867 successors set. To distinguish this from a pruned changeset, the successor
867 successors set. To distinguish this from a pruned changeset, the successor
868 set will only contain itself, i.e. [(A,)].
868 set will only contain itself, i.e. [(A,)].
869
869
870 Finally, successors unknown locally are considered to be pruned (obsoleted
870 Finally, successors unknown locally are considered to be pruned (obsoleted
871 without any successors).
871 without any successors).
872
872
873 The optional `cache` parameter is a dictionary that may contain precomputed
873 The optional `cache` parameter is a dictionary that may contain precomputed
874 successors sets. It is meant to reuse the computation of a previous call to
874 successors sets. It is meant to reuse the computation of a previous call to
875 `successorssets` when multiple calls are made at the same time. The cache
875 `successorssets` when multiple calls are made at the same time. The cache
876 dictionary is updated in place. The caller is responsible for its life
876 dictionary is updated in place. The caller is responsible for its life
877 span. Code that makes multiple calls to `successorssets` *must* use this
877 span. Code that makes multiple calls to `successorssets` *must* use this
878 cache mechanism or suffer terrible performance.
878 cache mechanism or suffer terrible performance.
879
879
880 """
880 """
881
881
882 succmarkers = repo.obsstore.successors
882 succmarkers = repo.obsstore.successors
883
883
884 # Stack of nodes we search successors sets for
884 # Stack of nodes we search successors sets for
885 toproceed = [initialnode]
885 toproceed = [initialnode]
886 # set version of above list for fast loop detection
886 # set version of above list for fast loop detection
887 # element added to "toproceed" must be added here
887 # element added to "toproceed" must be added here
888 stackedset = set(toproceed)
888 stackedset = set(toproceed)
889 if cache is None:
889 if cache is None:
890 cache = {}
890 cache = {}
891
891
892 # This while loop is the flattened version of a recursive search for
892 # This while loop is the flattened version of a recursive search for
893 # successors sets
893 # successors sets
894 #
894 #
895 # def successorssets(x):
895 # def successorssets(x):
896 # successors = directsuccessors(x)
896 # successors = directsuccessors(x)
897 # ss = [[]]
897 # ss = [[]]
898 # for succ in directsuccessors(x):
898 # for succ in directsuccessors(x):
899 # # product as in itertools cartesian product
899 # # product as in itertools cartesian product
900 # ss = product(ss, successorssets(succ))
900 # ss = product(ss, successorssets(succ))
901 # return ss
901 # return ss
902 #
902 #
903 # But we can not use plain recursive calls here:
903 # But we can not use plain recursive calls here:
904 # - that would blow the python call stack
904 # - that would blow the python call stack
905 # - obsolescence markers may have cycles, we need to handle them.
905 # - obsolescence markers may have cycles, we need to handle them.
906 #
906 #
907 # The `toproceed` list acts as our call stack. Every node whose successors
907 # The `toproceed` list acts as our call stack. Every node whose successors
908 # sets we search for is stacked there.
908 # sets we search for is stacked there.
909 #
909 #
910 # The `stackedset` is a set version of this stack used to check if a node is
910 # The `stackedset` is a set version of this stack used to check if a node is
911 # already stacked. This check is used to detect cycles and prevent infinite
911 # already stacked. This check is used to detect cycles and prevent infinite
912 # loops.
912 # loops.
913 #
913 #
914 # successors sets of all nodes are stored in the `cache` dictionary.
914 # successors sets of all nodes are stored in the `cache` dictionary.
915 #
915 #
916 # After this while loop ends we use the cache to return the successors sets
916 # After this while loop ends we use the cache to return the successors sets
917 # for the node requested by the caller.
917 # for the node requested by the caller.
918 while toproceed:
918 while toproceed:
919 # Every iteration tries to compute the successors sets of the topmost
919 # Every iteration tries to compute the successors sets of the topmost
920 # node of the stack: CURRENT.
920 # node of the stack: CURRENT.
921 #
921 #
922 # There are four possible outcomes:
922 # There are four possible outcomes:
923 #
923 #
924 # 1) We already know the successors sets of CURRENT:
924 # 1) We already know the successors sets of CURRENT:
925 # -> mission accomplished, pop it from the stack.
925 # -> mission accomplished, pop it from the stack.
926 # 2) Node is not obsolete:
926 # 2) Node is not obsolete:
927 # -> the node is its own successors sets. Add it to the cache.
927 # -> the node is its own successors sets. Add it to the cache.
928 # 3) We do not know successors set of direct successors of CURRENT:
928 # 3) We do not know successors set of direct successors of CURRENT:
929 # -> We add those successors to the stack.
929 # -> We add those successors to the stack.
930 # 4) We know successors sets of all direct successors of CURRENT:
930 # 4) We know successors sets of all direct successors of CURRENT:
931 # -> We can compute CURRENT successors set and add it to the
931 # -> We can compute CURRENT successors set and add it to the
932 # cache.
932 # cache.
933 #
933 #
934 current = toproceed[-1]
934 current = toproceed[-1]
935 if current in cache:
935 if current in cache:
936 # case (1): We already know the successors sets
936 # case (1): We already know the successors sets
937 stackedset.remove(toproceed.pop())
937 stackedset.remove(toproceed.pop())
938 elif current not in succmarkers:
938 elif current not in succmarkers:
939 # case (2): The node is not obsolete.
939 # case (2): The node is not obsolete.
940 if current in repo:
940 if current in repo:
941 # We have a valid last successor.
941 # We have a valid last successor.
942 cache[current] = [(current,)]
942 cache[current] = [(current,)]
943 else:
943 else:
944 # Final obsolete version is unknown locally.
944 # Final obsolete version is unknown locally.
945 # Do not count that as a valid successor
945 # Do not count that as a valid successor
946 cache[current] = []
946 cache[current] = []
947 else:
947 else:
948 # cases (3) and (4)
948 # cases (3) and (4)
949 #
949 #
950 # We proceed in two phases. Phase 1 aims to distinguish case (3)
950 # We proceed in two phases. Phase 1 aims to distinguish case (3)
951 # from case (4):
951 # from case (4):
952 #
952 #
953 # For each direct successors of CURRENT, we check whether its
953 # For each direct successors of CURRENT, we check whether its
954 # successors sets are known. If they are not, we stack the
954 # successors sets are known. If they are not, we stack the
955 # unknown node and proceed to the next iteration of the while
955 # unknown node and proceed to the next iteration of the while
956 # loop. (case 3)
956 # loop. (case 3)
957 #
957 #
958 # During this step, we may detect obsolescence cycles: a node
958 # During this step, we may detect obsolescence cycles: a node
959 # with unknown successors sets but already in the call stack.
959 # with unknown successors sets but already in the call stack.
960 # In such a situation, we arbitrarily set the successors sets of
960 # In such a situation, we arbitrarily set the successors sets of
961 # the node to nothing (node pruned) to break the cycle.
961 # the node to nothing (node pruned) to break the cycle.
962 #
962 #
963 # If no break was encountered we proceed to phase 2.
963 # If no break was encountered we proceed to phase 2.
964 #
964 #
965 # Phase 2 computes successors sets of CURRENT (case 4); see details
965 # Phase 2 computes successors sets of CURRENT (case 4); see details
966 # in phase 2 itself.
966 # in phase 2 itself.
967 #
967 #
968 # Note the two levels of iteration in each phase.
968 # Note the two levels of iteration in each phase.
969 # - The first one handles obsolescence markers using CURRENT as
969 # - The first one handles obsolescence markers using CURRENT as
970 # precursor (successors markers of CURRENT).
970 # precursor (successors markers of CURRENT).
971 #
971 #
972 # Having multiple entries here means divergence.
972 # Having multiple entries here means divergence.
973 #
973 #
974 # - The second one handles successors defined in each marker.
974 # - The second one handles successors defined in each marker.
975 #
975 #
976 # Having none means a pruned node, multiple successors mean a split,
976 # Having none means a pruned node, multiple successors mean a split,
977 # and a single successor is a standard replacement.
977 # and a single successor is a standard replacement.
978 #
978 #
979 for mark in sorted(succmarkers[current]):
979 for mark in sorted(succmarkers[current]):
980 for suc in mark[1]:
980 for suc in mark[1]:
981 if suc not in cache:
981 if suc not in cache:
982 if suc in stackedset:
982 if suc in stackedset:
983 # cycle breaking
983 # cycle breaking
984 cache[suc] = []
984 cache[suc] = []
985 else:
985 else:
986 # case (3) If we have not computed successors sets
986 # case (3) If we have not computed successors sets
987 # of one of those successors we add it to the
987 # of one of those successors we add it to the
988 # `toproceed` stack and stop all work for this
988 # `toproceed` stack and stop all work for this
989 # iteration.
989 # iteration.
990 toproceed.append(suc)
990 toproceed.append(suc)
991 stackedset.add(suc)
991 stackedset.add(suc)
992 break
992 break
993 else:
993 else:
994 continue
994 continue
995 break
995 break
996 else:
996 else:
997 # case (4): we know all successors sets of all direct
997 # case (4): we know all successors sets of all direct
998 # successors
998 # successors
999 #
999 #
1000 # Successors set contributed by each marker depends on the
1000 # Successors set contributed by each marker depends on the
1001 # successors sets of all its "successors" node.
1001 # successors sets of all its "successors" node.
1002 #
1002 #
1003 # Each different marker is a divergence in the obsolescence
1003 # Each different marker is a divergence in the obsolescence
1004 # history. It contributes successors sets distinct from other
1004 # history. It contributes successors sets distinct from other
1005 # markers.
1005 # markers.
1006 #
1006 #
1007 # Within a marker, a successor may have divergent successors
1007 # Within a marker, a successor may have divergent successors
1008 # sets. In such a case, the marker will contribute multiple
1008 # sets. In such a case, the marker will contribute multiple
1009 # divergent successors sets. If multiple successors have
1009 # divergent successors sets. If multiple successors have
1010 # divergent successors sets, a Cartesian product is used.
1010 # divergent successors sets, a Cartesian product is used.
1011 #
1011 #
1012 # At the end we post-process successors sets to remove
1012 # At the end we post-process successors sets to remove
1013 # duplicated entries and successors sets that are strict subsets of
1013 # duplicated entries and successors sets that are strict subsets of
1014 # another one.
1014 # another one.
1015 succssets = []
1015 succssets = []
1016 for mark in sorted(succmarkers[current]):
1016 for mark in sorted(succmarkers[current]):
1017 # successors sets contributed by this marker
1017 # successors sets contributed by this marker
1018 markss = [[]]
1018 markss = [[]]
1019 for suc in mark[1]:
1019 for suc in mark[1]:
1020 # cardinal product with previous successors
1020 # cardinal product with previous successors
1021 productresult = []
1021 productresult = []
1022 for prefix in markss:
1022 for prefix in markss:
1023 for suffix in cache[suc]:
1023 for suffix in cache[suc]:
1024 newss = list(prefix)
1024 newss = list(prefix)
1025 for part in suffix:
1025 for part in suffix:
1026 # do not duplicate entries in the successors set;
1026 # do not duplicate entries in the successors set;
1027 # the first entry wins.
1027 # the first entry wins.
1028 if part not in newss:
1028 if part not in newss:
1029 newss.append(part)
1029 newss.append(part)
1030 productresult.append(newss)
1030 productresult.append(newss)
1031 markss = productresult
1031 markss = productresult
1032 succssets.extend(markss)
1032 succssets.extend(markss)
1033 # remove duplicated and subset
1033 # remove duplicated and subset
1034 seen = []
1034 seen = []
1035 final = []
1035 final = []
1036 candidate = sorted(((set(s), s) for s in succssets if s),
1036 candidate = sorted(((set(s), s) for s in succssets if s),
1037 key=lambda x: len(x[1]), reverse=True)
1037 key=lambda x: len(x[1]), reverse=True)
1038 for setversion, listversion in candidate:
1038 for setversion, listversion in candidate:
1039 for seenset in seen:
1039 for seenset in seen:
1040 if setversion.issubset(seenset):
1040 if setversion.issubset(seenset):
1041 break
1041 break
1042 else:
1042 else:
1043 final.append(listversion)
1043 final.append(listversion)
1044 seen.append(setversion)
1044 seen.append(setversion)
1045 final.reverse() # put small successors sets first
1045 final.reverse() # put small successors sets first
1046 cache[current] = final
1046 cache[current] = final
1047 return cache[initialnode]
1047 return cache[initialnode]
1048
1048
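
The combination step in case (4) above is dense, so here is a minimal standalone sketch of the same idea, detached from the obsstore data structures. The names `combine_marker` and `prune` are illustrative only and not part of Mercurial's API: each marker contributes the Cartesian product of its successors' successors sets, and the results are then pruned of duplicates and strict subsets.

def combine_marker(sets_per_successor):
    # One entry per successor of the marker; each entry is the list of
    # successors sets already computed for that successor.
    combined = [[]]
    for sets_for_suc in sets_per_successor:
        product = []
        for prefix in combined:
            for suffix in sets_for_suc:
                newss = list(prefix)
                for part in suffix:
                    if part not in newss:  # first entry wins
                        newss.append(part)
                product.append(newss)
        combined = product
    return combined

def prune(successorssets):
    # Drop duplicates and strict subsets, keeping smaller sets first.
    seen = []
    final = []
    candidates = sorted(((set(s), s) for s in successorssets if s),
                        key=lambda x: len(x[1]), reverse=True)
    for setversion, listversion in candidates:
        if any(setversion.issubset(other) for other in seen):
            continue
        final.append(listversion)
        seen.append(setversion)
    final.reverse()
    return final

# X was split into A and B; A was later rewritten into either A1 or A2.
print(combine_marker([[['A1'], ['A2']], [['B']]]))
# [['A1', 'B'], ['A2', 'B']]
print(prune([['A1', 'B'], ['A2', 'B'], ['A1', 'B']]))
# [['A2', 'B'], ['A1', 'B']]
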
1049 def _knownrevs(repo, nodes):
1049 def _knownrevs(repo, nodes):
1050 """yield revision numbers of known nodes passed in parameters
1050 """yield revision numbers of known nodes passed in parameters
1051
1051
1052 Unknown revisions are silently ignored."""
1052 Unknown revisions are silently ignored."""
1053 torev = repo.changelog.nodemap.get
1053 torev = repo.changelog.nodemap.get
1054 for n in nodes:
1054 for n in nodes:
1055 rev = torev(n)
1055 rev = torev(n)
1056 if rev is not None:
1056 if rev is not None:
1057 yield rev
1057 yield rev
1058
1058
1059 # mapping of 'set-name' -> <function to compute this set>
1059 # mapping of 'set-name' -> <function to compute this set>
1060 cachefuncs = {}
1060 cachefuncs = {}
1061 def cachefor(name):
1061 def cachefor(name):
1062 """Decorator to register a function as computing the cache for a set"""
1062 """Decorator to register a function as computing the cache for a set"""
1063 def decorator(func):
1063 def decorator(func):
1064 assert name not in cachefuncs
1064 assert name not in cachefuncs
1065 cachefuncs[name] = func
1065 cachefuncs[name] = func
1066 return func
1066 return func
1067 return decorator
1067 return decorator
1068
1068
1069 def getrevs(repo, name):
1069 def getrevs(repo, name):
1070 """Return the set of revision that belong to the <name> set
1070 """Return the set of revision that belong to the <name> set
1071
1071
1072 Such access may compute the set and cache it for future use"""
1072 Such access may compute the set and cache it for future use"""
1073 repo = repo.unfiltered()
1073 repo = repo.unfiltered()
1074 if not repo.obsstore:
1074 if not repo.obsstore:
1075 return frozenset()
1075 return frozenset()
1076 if name not in repo.obsstore.caches:
1076 if name not in repo.obsstore.caches:
1077 repo.obsstore.caches[name] = cachefuncs[name](repo)
1077 repo.obsstore.caches[name] = cachefuncs[name](repo)
1078 return repo.obsstore.caches[name]
1078 return repo.obsstore.caches[name]
1079
1079
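
Together, `cachefor` and `getrevs` form a small registry: computation functions are registered by set name, and `getrevs` lazily computes and memoizes each named set on the obsstore. As a hedged illustration of that pattern (the set name and revset below are made up for this sketch and are not part of Mercurial), registering one more cached set would look like:

@cachefor('demoset')   # hypothetical name, for illustration only
def _computedemoset(repo):
    """the set of draft, non-obsolete revisions (illustrative only)"""
    return set(ctx.rev() for ctx in repo.set('draft() - obsolete()'))

# callers then fetch (and implicitly cache) it with:
#   revs = getrevs(repo, 'demoset')
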
1080 # To keep it simple, we need to invalidate the obsolescence caches when:
1080 # To keep it simple, we need to invalidate the obsolescence caches when:
1081 #
1081 #
1082 # - a new changeset is added
1082 # - a new changeset is added
1083 # - the public phase is changed
1083 # - the public phase is changed
1084 # - obsolescence markers are added
1084 # - obsolescence markers are added
1085 # - strip is used on a repo
1085 # - strip is used on a repo
1086 def clearobscaches(repo):
1086 def clearobscaches(repo):
1087 """Remove all obsolescence related cache from a repo
1087 """Remove all obsolescence related cache from a repo
1088
1088
1089 This remove all cache in obsstore is the obsstore already exist on the
1089 This remove all cache in obsstore is the obsstore already exist on the
1090 repo.
1090 repo.
1091
1091
1092 (We could be smarter here given the exact event that trigger the cache
1092 (We could be smarter here given the exact event that trigger the cache
1093 clearing)"""
1093 clearing)"""
1094 # only clear cache is there is obsstore data in this repo
1094 # only clear cache is there is obsstore data in this repo
1095 if 'obsstore' in repo._filecache:
1095 if 'obsstore' in repo._filecache:
1096 repo.obsstore.caches.clear()
1096 repo.obsstore.caches.clear()
1097
1097
1098 @cachefor('obsolete')
1098 @cachefor('obsolete')
1099 def _computeobsoleteset(repo):
1099 def _computeobsoleteset(repo):
1100 """the set of obsolete revisions"""
1100 """the set of obsolete revisions"""
1101 obs = set()
1101 obs = set()
1102 getrev = repo.changelog.nodemap.get
1102 getrev = repo.changelog.nodemap.get
1103 getphase = repo._phasecache.phase
1103 getphase = repo._phasecache.phase
1104 for n in repo.obsstore.successors:
1104 for n in repo.obsstore.successors:
1105 rev = getrev(n)
1105 rev = getrev(n)
1106 if rev is not None and getphase(repo, rev):
1106 if rev is not None and getphase(repo, rev):
1107 obs.add(rev)
1107 obs.add(rev)
1108 return obs
1108 return obs
1109
1109
1110 @cachefor('unstable')
1110 @cachefor('unstable')
1111 def _computeunstableset(repo):
1111 def _computeunstableset(repo):
1112 """the set of non obsolete revisions with obsolete parents"""
1112 """the set of non obsolete revisions with obsolete parents"""
1113 revs = [(ctx.rev(), ctx) for ctx in
1113 revs = [(ctx.rev(), ctx) for ctx in
1114 repo.set('(not public()) and (not obsolete())')]
1114 repo.set('(not public()) and (not obsolete())')]
1115 revs.sort(key=lambda x:x[0])
1115 revs.sort(key=lambda x:x[0])
1116 unstable = set()
1116 unstable = set()
1117 for rev, ctx in revs:
1117 for rev, ctx in revs:
1118 # A rev is unstable if one of its parents is obsolete or unstable;
1118 # A rev is unstable if one of its parents is obsolete or unstable;
1119 # this works since we traverse in increasing rev order
1119 # this works since we traverse in increasing rev order
1120 if any((x.obsolete() or (x.rev() in unstable))
1120 if any((x.obsolete() or (x.rev() in unstable))
1121 for x in ctx.parents()):
1121 for x in ctx.parents()):
1122 unstable.add(rev)
1122 unstable.add(rev)
1123 return unstable
1123 return unstable
1124
1124
1125 @cachefor('suspended')
1125 @cachefor('suspended')
1126 def _computesuspendedset(repo):
1126 def _computesuspendedset(repo):
1127 """the set of obsolete parents with non obsolete descendants"""
1127 """the set of obsolete parents with non obsolete descendants"""
1128 suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
1128 suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
1129 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
1129 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
1130
1130
1131 @cachefor('extinct')
1131 @cachefor('extinct')
1132 def _computeextinctset(repo):
1132 def _computeextinctset(repo):
1133 """the set of obsolete parents without non obsolete descendants"""
1133 """the set of obsolete parents without non obsolete descendants"""
1134 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
1134 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
1135
1135
1136
1136
1137 @cachefor('bumped')
1137 @cachefor('bumped')
1138 def _computebumpedset(repo):
1138 def _computebumpedset(repo):
1139 """the set of revs trying to obsolete public revisions"""
1139 """the set of revs trying to obsolete public revisions"""
1140 bumped = set()
1140 bumped = set()
1141 # util function (avoid attribute lookup in the loop)
1141 # util function (avoid attribute lookup in the loop)
1142 phase = repo._phasecache.phase # would be faster to grab the full list
1142 phase = repo._phasecache.phase # would be faster to grab the full list
1143 public = phases.public
1143 public = phases.public
1144 cl = repo.changelog
1144 cl = repo.changelog
1145 torev = cl.nodemap.get
1145 torev = cl.nodemap.get
1146 for ctx in repo.set('(not public()) and (not obsolete())'):
1146 for ctx in repo.set('(not public()) and (not obsolete())'):
1147 rev = ctx.rev()
1147 rev = ctx.rev()
1148 # We only evaluate mutable, non-obsolete revisions
1148 # We only evaluate mutable, non-obsolete revisions
1149 node = ctx.node()
1149 node = ctx.node()
1150 # (future) A cache of precursors may be worthwhile if splits are very common
1150 # (future) A cache of precursors may be worthwhile if splits are very common
1151 for pnode in allprecursors(repo.obsstore, [node],
1151 for pnode in allprecursors(repo.obsstore, [node],
1152 ignoreflags=bumpedfix):
1152 ignoreflags=bumpedfix):
1153 prev = torev(pnode) # unfiltered! but so is phasecache
1153 prev = torev(pnode) # unfiltered! but so is phasecache
1154 if (prev is not None) and (phase(repo, prev) <= public):
1154 if (prev is not None) and (phase(repo, prev) <= public):
1155 # we have a public precursor
1155 # we have a public precursor
1156 bumped.add(rev)
1156 bumped.add(rev)
1157 break # Next draft!
1157 break # Next draft!
1158 return bumped
1158 return bumped
1159
1159
1160 @cachefor('divergent')
1160 @cachefor('divergent')
1161 def _computedivergentset(repo):
1161 def _computedivergentset(repo):
1162 """the set of rev that compete to be the final successors of some revision.
1162 """the set of rev that compete to be the final successors of some revision.
1163 """
1163 """
1164 divergent = set()
1164 divergent = set()
1165 obsstore = repo.obsstore
1165 obsstore = repo.obsstore
1166 newermap = {}
1166 newermap = {}
1167 for ctx in repo.set('(not public()) - obsolete()'):
1167 for ctx in repo.set('(not public()) - obsolete()'):
1168 mark = obsstore.precursors.get(ctx.node(), ())
1168 mark = obsstore.precursors.get(ctx.node(), ())
1169 toprocess = set(mark)
1169 toprocess = set(mark)
1170 seen = set()
1170 seen = set()
1171 while toprocess:
1171 while toprocess:
1172 prec = toprocess.pop()[0]
1172 prec = toprocess.pop()[0]
1173 if prec in seen:
1173 if prec in seen:
1174 continue # emergency cycle hanging prevention
1174 continue # emergency cycle hanging prevention
1175 seen.add(prec)
1175 seen.add(prec)
1176 if prec not in newermap:
1176 if prec not in newermap:
1177 successorssets(repo, prec, newermap)
1177 successorssets(repo, prec, newermap)
1178 newer = [n for n in newermap[prec] if n]
1178 newer = [n for n in newermap[prec] if n]
1179 if len(newer) > 1:
1179 if len(newer) > 1:
1180 divergent.add(ctx.rev())
1180 divergent.add(ctx.rev())
1181 break
1181 break
1182 toprocess.update(obsstore.precursors.get(prec, ()))
1182 toprocess.update(obsstore.precursors.get(prec, ()))
1183 return divergent
1183 return divergent
1184
1184
1185
1185
1186 def createmarkers(repo, relations, flag=0, date=None, metadata=None):
1186 def createmarkers(repo, relations, flag=0, date=None, metadata=None):
1187 """Add obsolete markers between changesets in a repo
1187 """Add obsolete markers between changesets in a repo
1188
1188
1189 <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
1189 <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
1190 tuples. `old` and `news` are changectx objects. metadata is an optional
1190 tuples. `old` and `news` are changectx objects. metadata is an optional
1191 dictionary containing metadata for this marker only. It is merged with the
1191 dictionary containing metadata for this marker only. It is merged with the
1192 global metadata specified through the `metadata` argument of this function.
1192 global metadata specified through the `metadata` argument of this function.
1193
1193
1194 Trying to obsolete a public changeset will raise an exception.
1194 Trying to obsolete a public changeset will raise an exception.
1195
1195
1196 The current user and date are used unless specified otherwise in the
1196 The current user and date are used unless specified otherwise in the
1197 metadata attribute.
1197 metadata attribute.
1198
1198
1199 This function operates within a transaction of its own, but does
1199 This function operates within a transaction of its own, but does
1200 not take any lock on the repo.
1200 not take any lock on the repo.
1201 """
1201 """
1202 # prepare metadata
1202 # prepare metadata
1203 if metadata is None:
1203 if metadata is None:
1204 metadata = {}
1204 metadata = {}
1205 if 'user' not in metadata:
1205 if 'user' not in metadata:
1206 metadata['user'] = repo.ui.username()
1206 metadata['user'] = repo.ui.username()
1207 tr = repo.transaction('add-obsolescence-marker')
1207 tr = repo.transaction('add-obsolescence-marker')
1208 try:
1208 try:
1209 for rel in relations:
1209 for rel in relations:
1210 prec = rel[0]
1210 prec = rel[0]
1211 sucs = rel[1]
1211 sucs = rel[1]
1212 localmetadata = metadata.copy()
1212 localmetadata = metadata.copy()
1213 if 2 < len(rel):
1213 if 2 < len(rel):
1214 localmetadata.update(rel[2])
1214 localmetadata.update(rel[2])
1215
1215
1216 if not prec.mutable():
1216 if not prec.mutable():
1217 raise util.Abort("cannot obsolete immutable changeset: %s"
1217 raise util.Abort("cannot obsolete public changeset: %s"
1218 % prec)
1218 % prec)
1219 nprec = prec.node()
1219 nprec = prec.node()
1220 nsucs = tuple(s.node() for s in sucs)
1220 nsucs = tuple(s.node() for s in sucs)
1221 npare = None
1221 npare = None
1222 if not nsucs:
1222 if not nsucs:
1223 npare = tuple(p.node() for p in prec.parents())
1223 npare = tuple(p.node() for p in prec.parents())
1224 if nprec in nsucs:
1224 if nprec in nsucs:
1225 raise util.Abort("changeset %s cannot obsolete itself" % prec)
1225 raise util.Abort("changeset %s cannot obsolete itself" % prec)
1226 repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
1226 repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
1227 date=date, metadata=localmetadata)
1227 date=date, metadata=localmetadata)
1228 repo.filteredrevcache.clear()
1228 repo.filteredrevcache.clear()
1229 tr.close()
1229 tr.close()
1230 finally:
1230 finally:
1231 tr.release()
1231 tr.release()
1232
1232
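
For reference, a minimal sketch of how a caller might use `createmarkers`, matching the relations format documented above; the revision numbers and the metadata key are placeholders, not something this module defines:

# Record that revision 10 was rewritten into revision 12, and that
# revision 11 was pruned (no successors); the third tuple element is
# per-relation metadata merged with the global metadata argument.
old1, new1, old2 = repo[10], repo[12], repo[11]
createmarkers(repo, [
    (old1, (new1,)),
    (old2, (), {'note': 'pruned during cleanup'}),
])
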
1233 def isenabled(repo, option):
1233 def isenabled(repo, option):
1234 """Returns True if the given repository has the given obsolete option
1234 """Returns True if the given repository has the given obsolete option
1235 enabled.
1235 enabled.
1236 """
1236 """
1237 result = set(repo.ui.configlist('experimental', 'evolution'))
1237 result = set(repo.ui.configlist('experimental', 'evolution'))
1238 if 'all' in result:
1238 if 'all' in result:
1239 return True
1239 return True
1240
1240
1241 # For migration purposes, temporarily return true if the config hasn't been
1241 # For migration purposes, temporarily return true if the config hasn't been
1242 # set but _enabled is true.
1242 # set but _enabled is true.
1243 if len(result) == 0 and _enabled:
1243 if len(result) == 0 and _enabled:
1244 return True
1244 return True
1245
1245
1246 # createmarkers must be enabled if other options are enabled
1246 # createmarkers must be enabled if other options are enabled
1247 if ((allowunstableopt in result or exchangeopt in result) and
1247 if ((allowunstableopt in result or exchangeopt in result) and
1248 createmarkersopt not in result):
1248 createmarkersopt not in result):
1249 raise util.Abort(_("'createmarkers' obsolete option must be enabled "
1249 raise util.Abort(_("'createmarkers' obsolete option must be enabled "
1250 "if other obsolete options are enabled"))
1250 "if other obsolete options are enabled"))
1251
1251
1252 return option in result
1252 return option in result
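
A short, hedged sketch of how calling code typically uses `isenabled` as a feature guard; the helper name below is hypothetical, and the 'createmarkers' string corresponds to the `evolution=createmarkers,...` configuration exercised by the tests that follow:

def _maybemakemarkers(repo, relations):
    # Only write obsolescence markers when the repository opted in;
    # otherwise the caller has to fall back to another strategy
    # (sketch only, not an actual Mercurial helper).
    if isenabled(repo, 'createmarkers'):
        createmarkers(repo, relations)
        return True
    return False
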
@@ -1,459 +1,459
1 $ . "$TESTDIR/histedit-helpers.sh"
1 $ . "$TESTDIR/histedit-helpers.sh"
2
2
3 Enable obsolete
3 Enable obsolete
4
4
5 $ cat >> $HGRCPATH << EOF
5 $ cat >> $HGRCPATH << EOF
6 > [ui]
6 > [ui]
7 > logtemplate= {rev}:{node|short} {desc|firstline}
7 > logtemplate= {rev}:{node|short} {desc|firstline}
8 > [phases]
8 > [phases]
9 > publish=False
9 > publish=False
10 > [experimental]
10 > [experimental]
11 > evolution=createmarkers,allowunstable
11 > evolution=createmarkers,allowunstable
12 > [extensions]
12 > [extensions]
13 > histedit=
13 > histedit=
14 > rebase=
14 > rebase=
15 > EOF
15 > EOF
16
16
17 $ hg init base
17 $ hg init base
18 $ cd base
18 $ cd base
19
19
20 $ for x in a b c d e f ; do
20 $ for x in a b c d e f ; do
21 > echo $x > $x
21 > echo $x > $x
22 > hg add $x
22 > hg add $x
23 > hg ci -m $x
23 > hg ci -m $x
24 > done
24 > done
25
25
26 $ hg log --graph
26 $ hg log --graph
27 @ 5:652413bf663e f
27 @ 5:652413bf663e f
28 |
28 |
29 o 4:e860deea161a e
29 o 4:e860deea161a e
30 |
30 |
31 o 3:055a42cdd887 d
31 o 3:055a42cdd887 d
32 |
32 |
33 o 2:177f92b77385 c
33 o 2:177f92b77385 c
34 |
34 |
35 o 1:d2ae7f538514 b
35 o 1:d2ae7f538514 b
36 |
36 |
37 o 0:cb9a9f314b8b a
37 o 0:cb9a9f314b8b a
38
38
39
39
40 $ HGEDITOR=cat hg histedit 1
40 $ HGEDITOR=cat hg histedit 1
41 pick d2ae7f538514 1 b
41 pick d2ae7f538514 1 b
42 pick 177f92b77385 2 c
42 pick 177f92b77385 2 c
43 pick 055a42cdd887 3 d
43 pick 055a42cdd887 3 d
44 pick e860deea161a 4 e
44 pick e860deea161a 4 e
45 pick 652413bf663e 5 f
45 pick 652413bf663e 5 f
46
46
47 # Edit history between d2ae7f538514 and 652413bf663e
47 # Edit history between d2ae7f538514 and 652413bf663e
48 #
48 #
49 # Commits are listed from least to most recent
49 # Commits are listed from least to most recent
50 #
50 #
51 # Commands:
51 # Commands:
52 # p, pick = use commit
52 # p, pick = use commit
53 # e, edit = use commit, but stop for amending
53 # e, edit = use commit, but stop for amending
54 # f, fold = use commit, but combine it with the one above
54 # f, fold = use commit, but combine it with the one above
55 # r, roll = like fold, but discard this commit's description
55 # r, roll = like fold, but discard this commit's description
56 # d, drop = remove commit from history
56 # d, drop = remove commit from history
57 # m, mess = edit message without changing commit content
57 # m, mess = edit message without changing commit content
58 #
58 #
59 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
59 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
60 $ hg histedit 1 --commands - --verbose <<EOF | grep histedit
60 $ hg histedit 1 --commands - --verbose <<EOF | grep histedit
61 > pick 177f92b77385 2 c
61 > pick 177f92b77385 2 c
62 > drop d2ae7f538514 1 b
62 > drop d2ae7f538514 1 b
63 > pick 055a42cdd887 3 d
63 > pick 055a42cdd887 3 d
64 > fold e860deea161a 4 e
64 > fold e860deea161a 4 e
65 > pick 652413bf663e 5 f
65 > pick 652413bf663e 5 f
66 > EOF
66 > EOF
67 saved backup bundle to $TESTTMP/base/.hg/strip-backup/96e494a2d553-3c6c5d92-backup.hg (glob)
67 saved backup bundle to $TESTTMP/base/.hg/strip-backup/96e494a2d553-3c6c5d92-backup.hg (glob)
68 $ hg log --graph --hidden
68 $ hg log --graph --hidden
69 @ 8:cacdfd884a93 f
69 @ 8:cacdfd884a93 f
70 |
70 |
71 o 7:59d9f330561f d
71 o 7:59d9f330561f d
72 |
72 |
73 o 6:b346ab9a313d c
73 o 6:b346ab9a313d c
74 |
74 |
75 | x 5:652413bf663e f
75 | x 5:652413bf663e f
76 | |
76 | |
77 | x 4:e860deea161a e
77 | x 4:e860deea161a e
78 | |
78 | |
79 | x 3:055a42cdd887 d
79 | x 3:055a42cdd887 d
80 | |
80 | |
81 | x 2:177f92b77385 c
81 | x 2:177f92b77385 c
82 | |
82 | |
83 | x 1:d2ae7f538514 b
83 | x 1:d2ae7f538514 b
84 |/
84 |/
85 o 0:cb9a9f314b8b a
85 o 0:cb9a9f314b8b a
86
86
87 $ hg debugobsolete
87 $ hg debugobsolete
88 d2ae7f538514cd87c17547b0de4cea71fe1af9fb 0 {cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b} (*) {'user': 'test'} (glob)
88 d2ae7f538514cd87c17547b0de4cea71fe1af9fb 0 {cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b} (*) {'user': 'test'} (glob)
89 177f92b773850b59254aa5e923436f921b55483b b346ab9a313db8537ecf96fca3ca3ca984ef3bd7 0 (*) {'user': 'test'} (glob)
89 177f92b773850b59254aa5e923436f921b55483b b346ab9a313db8537ecf96fca3ca3ca984ef3bd7 0 (*) {'user': 'test'} (glob)
90 055a42cdd88768532f9cf79daa407fc8d138de9b 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'user': 'test'} (glob)
90 055a42cdd88768532f9cf79daa407fc8d138de9b 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'user': 'test'} (glob)
91 e860deea161a2f77de56603b340ebbb4536308ae 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'user': 'test'} (glob)
91 e860deea161a2f77de56603b340ebbb4536308ae 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'user': 'test'} (glob)
92 652413bf663ef2a641cab26574e46d5f5a64a55a cacdfd884a9321ec4e1de275ef3949fa953a1f83 0 (*) {'user': 'test'} (glob)
92 652413bf663ef2a641cab26574e46d5f5a64a55a cacdfd884a9321ec4e1de275ef3949fa953a1f83 0 (*) {'user': 'test'} (glob)
93
93
94
94
95 Ensure hidden revision does not prevent histedit
95 Ensure hidden revision does not prevent histedit
96 -------------------------------------------------
96 -------------------------------------------------
97
97
98 create a hidden revision
98 create a hidden revision
99
99
100 $ hg histedit 6 --commands - << EOF
100 $ hg histedit 6 --commands - << EOF
101 > pick b346ab9a313d 6 c
101 > pick b346ab9a313d 6 c
102 > drop 59d9f330561f 7 d
102 > drop 59d9f330561f 7 d
103 > pick cacdfd884a93 8 f
103 > pick cacdfd884a93 8 f
104 > EOF
104 > EOF
105 0 files updated, 0 files merged, 3 files removed, 0 files unresolved
105 0 files updated, 0 files merged, 3 files removed, 0 files unresolved
106 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
106 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
107 $ hg log --graph
107 $ hg log --graph
108 @ 9:c13eb81022ca f
108 @ 9:c13eb81022ca f
109 |
109 |
110 o 6:b346ab9a313d c
110 o 6:b346ab9a313d c
111 |
111 |
112 o 0:cb9a9f314b8b a
112 o 0:cb9a9f314b8b a
113
113
114 check that hidden revisions are ignored (6 has hidden children 7 and 8)
114 check that hidden revisions are ignored (6 has hidden children 7 and 8)
115
115
116 $ hg histedit 6 --commands - << EOF
116 $ hg histedit 6 --commands - << EOF
117 > pick b346ab9a313d 6 c
117 > pick b346ab9a313d 6 c
118 > pick c13eb81022ca 8 f
118 > pick c13eb81022ca 8 f
119 > EOF
119 > EOF
120 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
120 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
121
121
122
122
123
123
124 Test that rewriting that leaves instability behind is allowed
124 Test that rewriting that leaves instability behind is allowed
125 ---------------------------------------------------------------------
125 ---------------------------------------------------------------------
126
126
127 $ hg up '.^'
127 $ hg up '.^'
128 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
128 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
129 $ hg log -r 'children(.)'
129 $ hg log -r 'children(.)'
130 9:c13eb81022ca f (no-eol)
130 9:c13eb81022ca f (no-eol)
131 $ hg histedit -r '.' --commands - <<EOF
131 $ hg histedit -r '.' --commands - <<EOF
132 > edit b346ab9a313d 6 c
132 > edit b346ab9a313d 6 c
133 > EOF
133 > EOF
134 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
134 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
135 adding c
135 adding c
136 Make changes as needed, you may commit or record as needed now.
136 Make changes as needed, you may commit or record as needed now.
137 When you are finished, run hg histedit --continue to resume.
137 When you are finished, run hg histedit --continue to resume.
138 [1]
138 [1]
139 $ echo c >> c
139 $ echo c >> c
140 $ hg histedit --continue
140 $ hg histedit --continue
141 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
141 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
142
142
143 $ hg log -r 'unstable()'
143 $ hg log -r 'unstable()'
144 9:c13eb81022ca f (no-eol)
144 9:c13eb81022ca f (no-eol)
145
145
146 stabilise
146 stabilise
147
147
148 $ hg rebase -r 'unstable()' -d .
148 $ hg rebase -r 'unstable()' -d .
149 rebasing 9:c13eb81022ca "f"
149 rebasing 9:c13eb81022ca "f"
150 $ hg up tip -q
150 $ hg up tip -q
151
151
152 Test dropping of changeset on the top of the stack
152 Test dropping of changeset on the top of the stack
153 -------------------------------------------------------
153 -------------------------------------------------------
154
154
155 Nothing is rewritten below; the working directory parent must be changed for the
155 Nothing is rewritten below; the working directory parent must be changed for the
156 dropped changeset to be hidden.
156 dropped changeset to be hidden.
157
157
158 $ cd ..
158 $ cd ..
159 $ hg clone base droplast
159 $ hg clone base droplast
160 updating to branch default
160 updating to branch default
161 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
161 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
162 $ cd droplast
162 $ cd droplast
163 $ hg histedit -r '40db8afa467b' --commands - << EOF
163 $ hg histedit -r '40db8afa467b' --commands - << EOF
164 > pick 40db8afa467b 10 c
164 > pick 40db8afa467b 10 c
165 > drop b449568bf7fc 11 f
165 > drop b449568bf7fc 11 f
166 > EOF
166 > EOF
167 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
167 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
168 $ hg log -G
168 $ hg log -G
169 @ 10:40db8afa467b c
169 @ 10:40db8afa467b c
170 |
170 |
171 o 0:cb9a9f314b8b a
171 o 0:cb9a9f314b8b a
172
172
173
173
174 With rewritten ancestors
174 With rewritten ancestors
175
175
176 $ echo e > e
176 $ echo e > e
177 $ hg add e
177 $ hg add e
178 $ hg commit -m g
178 $ hg commit -m g
179 $ echo f > f
179 $ echo f > f
180 $ hg add f
180 $ hg add f
181 $ hg commit -m h
181 $ hg commit -m h
182 $ hg histedit -r '40db8afa467b' --commands - << EOF
182 $ hg histedit -r '40db8afa467b' --commands - << EOF
183 > pick 47a8561c0449 12 g
183 > pick 47a8561c0449 12 g
184 > pick 40db8afa467b 10 c
184 > pick 40db8afa467b 10 c
185 > drop 1b3b05f35ff0 13 h
185 > drop 1b3b05f35ff0 13 h
186 > EOF
186 > EOF
187 0 files updated, 0 files merged, 3 files removed, 0 files unresolved
187 0 files updated, 0 files merged, 3 files removed, 0 files unresolved
188 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
188 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
189 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
189 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
190 $ hg log -G
190 $ hg log -G
191 @ 15:ee6544123ab8 c
191 @ 15:ee6544123ab8 c
192 |
192 |
193 o 14:269e713e9eae g
193 o 14:269e713e9eae g
194 |
194 |
195 o 0:cb9a9f314b8b a
195 o 0:cb9a9f314b8b a
196
196
197 $ cd ../base
197 $ cd ../base
198
198
199
199
200
200
201 Test phases support
201 Test phases support
202 ===========================================
202 ===========================================
203
203
204 Check that histedit respects immutability
204 Check that histedit respects immutability
205 -------------------------------------------
205 -------------------------------------------
206
206
207 $ cat >> $HGRCPATH << EOF
207 $ cat >> $HGRCPATH << EOF
208 > [ui]
208 > [ui]
209 > logtemplate= {rev}:{node|short} ({phase}) {desc|firstline}\n
209 > logtemplate= {rev}:{node|short} ({phase}) {desc|firstline}\n
210 > EOF
210 > EOF
211
211
212 $ hg ph -pv '.^'
212 $ hg ph -pv '.^'
213 phase changed for 2 changesets
213 phase changed for 2 changesets
214 $ hg log -G
214 $ hg log -G
215 @ 11:b449568bf7fc (draft) f
215 @ 11:b449568bf7fc (draft) f
216 |
216 |
217 o 10:40db8afa467b (public) c
217 o 10:40db8afa467b (public) c
218 |
218 |
219 o 0:cb9a9f314b8b (public) a
219 o 0:cb9a9f314b8b (public) a
220
220
221 $ hg histedit -r '.~2'
221 $ hg histedit -r '.~2'
222 abort: cannot edit immutable changeset: cb9a9f314b8b
222 abort: cannot edit public changeset: cb9a9f314b8b
223 [255]
223 [255]
224
224
225
225
226 Prepare further testing
226 Prepare further testing
227 -------------------------------------------
227 -------------------------------------------
228
228
229 $ for x in g h i j k ; do
229 $ for x in g h i j k ; do
230 > echo $x > $x
230 > echo $x > $x
231 > hg add $x
231 > hg add $x
232 > hg ci -m $x
232 > hg ci -m $x
233 > done
233 > done
234 $ hg phase --force --secret .~2
234 $ hg phase --force --secret .~2
235 $ hg log -G
235 $ hg log -G
236 @ 16:ee118ab9fa44 (secret) k
236 @ 16:ee118ab9fa44 (secret) k
237 |
237 |
238 o 15:3a6c53ee7f3d (secret) j
238 o 15:3a6c53ee7f3d (secret) j
239 |
239 |
240 o 14:b605fb7503f2 (secret) i
240 o 14:b605fb7503f2 (secret) i
241 |
241 |
242 o 13:7395e1ff83bd (draft) h
242 o 13:7395e1ff83bd (draft) h
243 |
243 |
244 o 12:6b70183d2492 (draft) g
244 o 12:6b70183d2492 (draft) g
245 |
245 |
246 o 11:b449568bf7fc (draft) f
246 o 11:b449568bf7fc (draft) f
247 |
247 |
248 o 10:40db8afa467b (public) c
248 o 10:40db8afa467b (public) c
249 |
249 |
250 o 0:cb9a9f314b8b (public) a
250 o 0:cb9a9f314b8b (public) a
251
251
252 $ cd ..
252 $ cd ..
253
253
254 simple phase conservation
254 simple phase conservation
255 -------------------------------------------
255 -------------------------------------------
256
256
257 The resulting changesets should preserve the phase of the originals, whatever
257 The resulting changesets should preserve the phase of the originals, whatever
258 the phases.new-commit option is.
258 the phases.new-commit option is.
259
259
260 New-commit as draft (default)
260 New-commit as draft (default)
261
261
262 $ cp -r base simple-draft
262 $ cp -r base simple-draft
263 $ cd simple-draft
263 $ cd simple-draft
264 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
264 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
265 > edit b449568bf7fc 11 f
265 > edit b449568bf7fc 11 f
266 > pick 6b70183d2492 12 g
266 > pick 6b70183d2492 12 g
267 > pick 7395e1ff83bd 13 h
267 > pick 7395e1ff83bd 13 h
268 > pick b605fb7503f2 14 i
268 > pick b605fb7503f2 14 i
269 > pick 3a6c53ee7f3d 15 j
269 > pick 3a6c53ee7f3d 15 j
270 > pick ee118ab9fa44 16 k
270 > pick ee118ab9fa44 16 k
271 > EOF
271 > EOF
272 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
272 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
273 adding f
273 adding f
274 Make changes as needed, you may commit or record as needed now.
274 Make changes as needed, you may commit or record as needed now.
275 When you are finished, run hg histedit --continue to resume.
275 When you are finished, run hg histedit --continue to resume.
276 [1]
276 [1]
277 $ echo f >> f
277 $ echo f >> f
278 $ hg histedit --continue
278 $ hg histedit --continue
279 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
279 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
280 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
280 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
281 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
281 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
282 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
282 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
283 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
283 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
284 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
284 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
285 $ hg log -G
285 $ hg log -G
286 @ 22:12e89af74238 (secret) k
286 @ 22:12e89af74238 (secret) k
287 |
287 |
288 o 21:636a8687b22e (secret) j
288 o 21:636a8687b22e (secret) j
289 |
289 |
290 o 20:ccaf0a38653f (secret) i
290 o 20:ccaf0a38653f (secret) i
291 |
291 |
292 o 19:11a89d1c2613 (draft) h
292 o 19:11a89d1c2613 (draft) h
293 |
293 |
294 o 18:c1dec7ca82ea (draft) g
294 o 18:c1dec7ca82ea (draft) g
295 |
295 |
296 o 17:087281e68428 (draft) f
296 o 17:087281e68428 (draft) f
297 |
297 |
298 o 10:40db8afa467b (public) c
298 o 10:40db8afa467b (public) c
299 |
299 |
300 o 0:cb9a9f314b8b (public) a
300 o 0:cb9a9f314b8b (public) a
301
301
302 $ cd ..
302 $ cd ..
303
303
304
304
305 New-commit as secret
305 New-commit as secret
306
306
307 $ cp -r base simple-secret
307 $ cp -r base simple-secret
308 $ cd simple-secret
308 $ cd simple-secret
309 $ cat >> .hg/hgrc << EOF
309 $ cat >> .hg/hgrc << EOF
310 > [phases]
310 > [phases]
311 > new-commit=secret
311 > new-commit=secret
312 > EOF
312 > EOF
313 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
313 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
314 > edit b449568bf7fc 11 f
314 > edit b449568bf7fc 11 f
315 > pick 6b70183d2492 12 g
315 > pick 6b70183d2492 12 g
316 > pick 7395e1ff83bd 13 h
316 > pick 7395e1ff83bd 13 h
317 > pick b605fb7503f2 14 i
317 > pick b605fb7503f2 14 i
318 > pick 3a6c53ee7f3d 15 j
318 > pick 3a6c53ee7f3d 15 j
319 > pick ee118ab9fa44 16 k
319 > pick ee118ab9fa44 16 k
320 > EOF
320 > EOF
321 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
321 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
322 adding f
322 adding f
323 Make changes as needed, you may commit or record as needed now.
323 Make changes as needed, you may commit or record as needed now.
324 When you are finished, run hg histedit --continue to resume.
324 When you are finished, run hg histedit --continue to resume.
325 [1]
325 [1]
326 $ echo f >> f
326 $ echo f >> f
327 $ hg histedit --continue
327 $ hg histedit --continue
328 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
328 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
329 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
329 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
330 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
330 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
331 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
331 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
332 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
332 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
333 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
333 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
334 $ hg log -G
334 $ hg log -G
335 @ 22:12e89af74238 (secret) k
335 @ 22:12e89af74238 (secret) k
336 |
336 |
337 o 21:636a8687b22e (secret) j
337 o 21:636a8687b22e (secret) j
338 |
338 |
339 o 20:ccaf0a38653f (secret) i
339 o 20:ccaf0a38653f (secret) i
340 |
340 |
341 o 19:11a89d1c2613 (draft) h
341 o 19:11a89d1c2613 (draft) h
342 |
342 |
343 o 18:c1dec7ca82ea (draft) g
343 o 18:c1dec7ca82ea (draft) g
344 |
344 |
345 o 17:087281e68428 (draft) f
345 o 17:087281e68428 (draft) f
346 |
346 |
347 o 10:40db8afa467b (public) c
347 o 10:40db8afa467b (public) c
348 |
348 |
349 o 0:cb9a9f314b8b (public) a
349 o 0:cb9a9f314b8b (public) a
350
350
351 $ cd ..
351 $ cd ..
352
352
353
353
354 Changeset reordering
354 Changeset reordering
355 -------------------------------------------
355 -------------------------------------------
356
356
357 If a secret changeset is put before a draft one, all descendants should be secret.
357 If a secret changeset is put before a draft one, all descendants should be secret.
358 It seems more important to preserve the secret phase.
358 It seems more important to preserve the secret phase.
359
359
360 $ cp -r base reorder
360 $ cp -r base reorder
361 $ cd reorder
361 $ cd reorder
362 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
362 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
363 > pick b449568bf7fc 11 f
363 > pick b449568bf7fc 11 f
364 > pick 3a6c53ee7f3d 15 j
364 > pick 3a6c53ee7f3d 15 j
365 > pick 6b70183d2492 12 g
365 > pick 6b70183d2492 12 g
366 > pick b605fb7503f2 14 i
366 > pick b605fb7503f2 14 i
367 > pick 7395e1ff83bd 13 h
367 > pick 7395e1ff83bd 13 h
368 > pick ee118ab9fa44 16 k
368 > pick ee118ab9fa44 16 k
369 > EOF
369 > EOF
370 0 files updated, 0 files merged, 5 files removed, 0 files unresolved
370 0 files updated, 0 files merged, 5 files removed, 0 files unresolved
371 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
371 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
372 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
372 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
373 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
373 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
374 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
374 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
375 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
375 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
376 $ hg log -G
376 $ hg log -G
377 @ 21:558246857888 (secret) k
377 @ 21:558246857888 (secret) k
378 |
378 |
379 o 20:28bd44768535 (secret) h
379 o 20:28bd44768535 (secret) h
380 |
380 |
381 o 19:d5395202aeb9 (secret) i
381 o 19:d5395202aeb9 (secret) i
382 |
382 |
383 o 18:21edda8e341b (secret) g
383 o 18:21edda8e341b (secret) g
384 |
384 |
385 o 17:5ab64f3a4832 (secret) j
385 o 17:5ab64f3a4832 (secret) j
386 |
386 |
387 o 11:b449568bf7fc (draft) f
387 o 11:b449568bf7fc (draft) f
388 |
388 |
389 o 10:40db8afa467b (public) c
389 o 10:40db8afa467b (public) c
390 |
390 |
391 o 0:cb9a9f314b8b (public) a
391 o 0:cb9a9f314b8b (public) a
392
392
393 $ cd ..
393 $ cd ..
394
394
395 Changeset folding
395 Changeset folding
396 -------------------------------------------
396 -------------------------------------------
397
397
398 Folding a secret changeset with a draft one turns the result secret (again,
398 Folding a secret changeset with a draft one turns the result secret (again,
399 better safe than sorry). Folding between same-phase changesets still works.
399 better safe than sorry). Folding between same-phase changesets still works.
400
400
401 Note that there is some reordering in this series for more extensive testing
401 Note that there is some reordering in this series for more extensive testing
402
402
403 $ cp -r base folding
403 $ cp -r base folding
404 $ cd folding
404 $ cd folding
405 $ cat >> .hg/hgrc << EOF
405 $ cat >> .hg/hgrc << EOF
406 > [phases]
406 > [phases]
407 > new-commit=secret
407 > new-commit=secret
408 > EOF
408 > EOF
409 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
409 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
410 > pick 7395e1ff83bd 13 h
410 > pick 7395e1ff83bd 13 h
411 > fold b449568bf7fc 11 f
411 > fold b449568bf7fc 11 f
412 > pick 6b70183d2492 12 g
412 > pick 6b70183d2492 12 g
413 > fold 3a6c53ee7f3d 15 j
413 > fold 3a6c53ee7f3d 15 j
414 > pick b605fb7503f2 14 i
414 > pick b605fb7503f2 14 i
415 > fold ee118ab9fa44 16 k
415 > fold ee118ab9fa44 16 k
416 > EOF
416 > EOF
417 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
417 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
418 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
418 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
419 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
419 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
420 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
420 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
421 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
421 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
422 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
422 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
423 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
423 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
424 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
424 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
425 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
425 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
426 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
426 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
427 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
427 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
428 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
428 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
429 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
429 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
430 saved backup bundle to $TESTTMP/folding/.hg/strip-backup/58019c66f35f-96092fce-backup.hg (glob)
430 saved backup bundle to $TESTTMP/folding/.hg/strip-backup/58019c66f35f-96092fce-backup.hg (glob)
431 saved backup bundle to $TESTTMP/folding/.hg/strip-backup/83d1858e070b-f3469cf8-backup.hg (glob)
431 saved backup bundle to $TESTTMP/folding/.hg/strip-backup/83d1858e070b-f3469cf8-backup.hg (glob)
432 saved backup bundle to $TESTTMP/folding/.hg/strip-backup/859969f5ed7e-d89a19d7-backup.hg (glob)
432 saved backup bundle to $TESTTMP/folding/.hg/strip-backup/859969f5ed7e-d89a19d7-backup.hg (glob)
433 $ hg log -G
433 $ hg log -G
434 @ 19:f9daec13fb98 (secret) i
434 @ 19:f9daec13fb98 (secret) i
435 |
435 |
436 o 18:49807617f46a (secret) g
436 o 18:49807617f46a (secret) g
437 |
437 |
438 o 17:050280826e04 (draft) h
438 o 17:050280826e04 (draft) h
439 |
439 |
440 o 10:40db8afa467b (public) c
440 o 10:40db8afa467b (public) c
441 |
441 |
442 o 0:cb9a9f314b8b (public) a
442 o 0:cb9a9f314b8b (public) a
443
443
444 $ hg co 18
444 $ hg co 18
445 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
445 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
446 $ echo wat >> wat
446 $ echo wat >> wat
447 $ hg add wat
447 $ hg add wat
448 $ hg ci -m 'add wat'
448 $ hg ci -m 'add wat'
449 created new head
449 created new head
450 $ hg merge 19
450 $ hg merge 19
451 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
451 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
452 (branch merge, don't forget to commit)
452 (branch merge, don't forget to commit)
453 $ hg ci -m 'merge'
453 $ hg ci -m 'merge'
454 $ echo not wat > wat
454 $ echo not wat > wat
455 $ hg ci -m 'modify wat'
455 $ hg ci -m 'modify wat'
456 $ hg histedit 17
456 $ hg histedit 17
457 abort: cannot edit history that contains merges
457 abort: cannot edit history that contains merges
458 [255]
458 [255]
459 $ cd ..
459 $ cd ..
@@ -1,215 +1,215
1 $ echo '[extensions]' >> $HGRCPATH
1 $ echo '[extensions]' >> $HGRCPATH
2 $ echo 'hgext.mq =' >> $HGRCPATH
2 $ echo 'hgext.mq =' >> $HGRCPATH
3
3
4 $ hg init repo
4 $ hg init repo
5 $ cd repo
5 $ cd repo
6
6
7 $ echo foo > foo
7 $ echo foo > foo
8 $ hg ci -qAm 'add a file'
8 $ hg ci -qAm 'add a file'
9
9
10 $ hg qinit
10 $ hg qinit
11
11
12 $ hg qnew foo
12 $ hg qnew foo
13 $ echo foo >> foo
13 $ echo foo >> foo
14 $ hg qrefresh -m 'append foo'
14 $ hg qrefresh -m 'append foo'
15
15
16 $ hg qnew bar
16 $ hg qnew bar
17 $ echo bar >> foo
17 $ echo bar >> foo
18 $ hg qrefresh -m 'append bar'
18 $ hg qrefresh -m 'append bar'
19
19
20 Try to operate on public mq changeset
20 Try to operate on public mq changeset
21
21
22 $ hg qpop
22 $ hg qpop
23 popping bar
23 popping bar
24 now at: foo
24 now at: foo
25 $ hg phase --public qbase
25 $ hg phase --public qbase
26 $ echo babar >> foo
26 $ echo babar >> foo
27 $ hg qref
27 $ hg qref
28 abort: cannot refresh immutable revision
28 abort: cannot refresh public revision
29 (see "hg help phases" for details)
29 (see "hg help phases" for details)
30 [255]
30 [255]
31 $ hg revert -a
31 $ hg revert -a
32 reverting foo
32 reverting foo
33 $ hg qpop
33 $ hg qpop
34 abort: popping would remove an immutable revision
34 abort: popping would remove a public revision
35 (see "hg help phases" for details)
35 (see "hg help phases" for details)
36 [255]
36 [255]
37 $ hg qfold bar
37 $ hg qfold bar
38 abort: cannot refresh immutable revision
38 abort: cannot refresh public revision
39 (see "hg help phases" for details)
39 (see "hg help phases" for details)
40 [255]
40 [255]
41 $ hg revert -a
41 $ hg revert -a
42 reverting foo
42 reverting foo
43
43
44 restore state for remaining test
44 restore state for remaining test
45
45
46 $ hg qpush
46 $ hg qpush
47 applying bar
47 applying bar
48 now at: bar
48 now at: bar
49
49
50 try to commit on top of a patch
50 try to commit on top of a patch
51
51
52 $ echo quux >> foo
52 $ echo quux >> foo
53 $ hg ci -m 'append quux'
53 $ hg ci -m 'append quux'
54 abort: cannot commit over an applied mq patch
54 abort: cannot commit over an applied mq patch
55 [255]
55 [255]
56
56
57
57
58 cheat a bit...
58 cheat a bit...
59
59
60 $ mv .hg/patches .hg/patches2
60 $ mv .hg/patches .hg/patches2
61 $ hg ci -m 'append quux'
61 $ hg ci -m 'append quux'
62 $ mv .hg/patches2 .hg/patches
62 $ mv .hg/patches2 .hg/patches
63
63
64
64
65 qpop/qrefresh on the wrong revision
65 qpop/qrefresh on the wrong revision
66
66
67 $ hg qpop
67 $ hg qpop
68 abort: popping would remove a revision not managed by this patch queue
68 abort: popping would remove a revision not managed by this patch queue
69 [255]
69 [255]
70 $ hg qpop -n patches
70 $ hg qpop -n patches
71 using patch queue: $TESTTMP/repo/.hg/patches (glob)
71 using patch queue: $TESTTMP/repo/.hg/patches (glob)
72 abort: popping would remove a revision not managed by this patch queue
72 abort: popping would remove a revision not managed by this patch queue
73 [255]
73 [255]
74 $ hg qrefresh
74 $ hg qrefresh
75 abort: working directory revision is not qtip
75 abort: working directory revision is not qtip
76 [255]
76 [255]
77
77
78 $ hg up -C qtip
78 $ hg up -C qtip
79 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
79 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
80 $ hg qpop
80 $ hg qpop
81 abort: popping would remove a revision not managed by this patch queue
81 abort: popping would remove a revision not managed by this patch queue
82 [255]
82 [255]
83 $ hg qrefresh
83 $ hg qrefresh
84 abort: cannot refresh a revision with children
84 abort: cannot refresh a revision with children
85 [255]
85 [255]
86 $ hg tip --template '{rev} {desc}\n'
86 $ hg tip --template '{rev} {desc}\n'
87 3 append quux
87 3 append quux
88
88
89
89
90 qpush warning branchheads
90 qpush warning branchheads
91
91
92 $ cd ..
92 $ cd ..
93 $ hg init branchy
93 $ hg init branchy
94 $ cd branchy
94 $ cd branchy
95 $ echo q > q
95 $ echo q > q
96 $ hg add q
96 $ hg add q
97 $ hg qnew -f qp
97 $ hg qnew -f qp
98 $ hg qpop
98 $ hg qpop
99 popping qp
99 popping qp
100 patch queue now empty
100 patch queue now empty
101 $ echo a > a
101 $ echo a > a
102 $ hg ci -Ama
102 $ hg ci -Ama
103 adding a
103 adding a
104 $ hg up null
104 $ hg up null
105 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
105 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
106 $ hg branch b
106 $ hg branch b
107 marked working directory as branch b
107 marked working directory as branch b
108 (branches are permanent and global, did you want a bookmark?)
108 (branches are permanent and global, did you want a bookmark?)
109 $ echo c > c
109 $ echo c > c
110 $ hg ci -Amc
110 $ hg ci -Amc
111 adding c
111 adding c
112 $ hg merge default
112 $ hg merge default
113 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
113 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
114 (branch merge, don't forget to commit)
114 (branch merge, don't forget to commit)
115 $ hg ci -mmerge
115 $ hg ci -mmerge
116 $ hg up default
116 $ hg up default
117 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
117 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
118 $ hg log
118 $ hg log
119 changeset: 2:65309210bf4e
119 changeset: 2:65309210bf4e
120 branch: b
120 branch: b
121 tag: tip
121 tag: tip
122 parent: 1:707adb4c8ae1
122 parent: 1:707adb4c8ae1
123 parent: 0:cb9a9f314b8b
123 parent: 0:cb9a9f314b8b
124 user: test
124 user: test
125 date: Thu Jan 01 00:00:00 1970 +0000
125 date: Thu Jan 01 00:00:00 1970 +0000
126 summary: merge
126 summary: merge
127
127
128 changeset: 1:707adb4c8ae1
128 changeset: 1:707adb4c8ae1
129 branch: b
129 branch: b
130 parent: -1:000000000000
130 parent: -1:000000000000
131 user: test
131 user: test
132 date: Thu Jan 01 00:00:00 1970 +0000
132 date: Thu Jan 01 00:00:00 1970 +0000
133 summary: c
133 summary: c
134
134
135 changeset: 0:cb9a9f314b8b
135 changeset: 0:cb9a9f314b8b
136 user: test
136 user: test
137 date: Thu Jan 01 00:00:00 1970 +0000
137 date: Thu Jan 01 00:00:00 1970 +0000
138 summary: a
138 summary: a
139
139
140 $ hg qpush
140 $ hg qpush
141 applying qp
141 applying qp
142 now at: qp
142 now at: qp
143
143
144 Testing applied patches, push and --force
144 Testing applied patches, push and --force
145
145
146 $ cd ..
146 $ cd ..
147 $ hg init forcepush
147 $ hg init forcepush
148 $ cd forcepush
148 $ cd forcepush
149 $ echo a > a
149 $ echo a > a
150 $ hg ci -Am adda
150 $ hg ci -Am adda
151 adding a
151 adding a
152 $ echo a >> a
152 $ echo a >> a
153 $ hg ci -m changea
153 $ hg ci -m changea
154 $ hg up 0
154 $ hg up 0
155 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
155 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
156 $ hg branch branch
156 $ hg branch branch
157 marked working directory as branch branch
157 marked working directory as branch branch
158 (branches are permanent and global, did you want a bookmark?)
158 (branches are permanent and global, did you want a bookmark?)
159 $ echo b > b
159 $ echo b > b
160 $ hg ci -Am addb
160 $ hg ci -Am addb
161 adding b
161 adding b
162 $ hg up 0
162 $ hg up 0
163 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
163 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
164 $ hg --cwd .. clone -r 0 forcepush forcepush2
164 $ hg --cwd .. clone -r 0 forcepush forcepush2
165 adding changesets
165 adding changesets
166 adding manifests
166 adding manifests
167 adding file changes
167 adding file changes
168 added 1 changesets with 1 changes to 1 files
168 added 1 changesets with 1 changes to 1 files
169 updating to branch default
169 updating to branch default
170 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
170 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
171 $ echo a >> a
171 $ echo a >> a
172 $ hg qnew patch
172 $ hg qnew patch
173
173
174 Pushing applied patch with --rev without --force
174 Pushing applied patch with --rev without --force
175
175
176 $ hg push -r . ../forcepush2
176 $ hg push -r . ../forcepush2
177 pushing to ../forcepush2
177 pushing to ../forcepush2
178 abort: source has mq patches applied
178 abort: source has mq patches applied
179 [255]
179 [255]
180
180
181 Pushing applied patch with branchhash, without --force
181 Pushing applied patch with branchhash, without --force
182
182
183 $ hg push ../forcepush2#default
183 $ hg push ../forcepush2#default
184 pushing to ../forcepush2
184 pushing to ../forcepush2
185 abort: source has mq patches applied
185 abort: source has mq patches applied
186 [255]
186 [255]
187
187
188 Pushing revs excluding applied patch
188 Pushing revs excluding applied patch
189
189
190 $ hg push --new-branch -r 'branch(branch)' -r 2 ../forcepush2
190 $ hg push --new-branch -r 'branch(branch)' -r 2 ../forcepush2
191 pushing to ../forcepush2
191 pushing to ../forcepush2
192 searching for changes
192 searching for changes
193 adding changesets
193 adding changesets
194 adding manifests
194 adding manifests
195 adding file changes
195 adding file changes
196 added 1 changesets with 1 changes to 1 files
196 added 1 changesets with 1 changes to 1 files
197
197
198 Pushing applied patch with --force
198 Pushing applied patch with --force
199
199
200 $ hg phase --force --secret 'mq()'
200 $ hg phase --force --secret 'mq()'
201 $ hg push --force -r default ../forcepush2
201 $ hg push --force -r default ../forcepush2
202 pushing to ../forcepush2
202 pushing to ../forcepush2
203 searching for changes
203 searching for changes
204 no changes found (ignored 1 secret changesets)
204 no changes found (ignored 1 secret changesets)
205 [1]
205 [1]
206 $ hg phase --draft 'mq()'
206 $ hg phase --draft 'mq()'
207 $ hg push --force -r default ../forcepush2
207 $ hg push --force -r default ../forcepush2
208 pushing to ../forcepush2
208 pushing to ../forcepush2
209 searching for changes
209 searching for changes
210 adding changesets
210 adding changesets
211 adding manifests
211 adding manifests
212 adding file changes
212 adding file changes
213 added 1 changesets with 1 changes to 1 files (+1 heads)
213 added 1 changesets with 1 changes to 1 files (+1 heads)
214
214
215 $ cd ..
215 $ cd ..
@@ -1,277 +1,277
1 $ cat >> $HGRCPATH <<EOF
1 $ cat >> $HGRCPATH <<EOF
2 > [extensions]
2 > [extensions]
3 > rebase=
3 > rebase=
4 >
4 >
5 > [phases]
5 > [phases]
6 > publish=False
6 > publish=False
7 >
7 >
8 > [alias]
8 > [alias]
9 > tglog = log -G --template "{rev}: '{desc}' {branches}\n"
9 > tglog = log -G --template "{rev}: '{desc}' {branches}\n"
10 > tglogp = log -G --template "{rev}:{phase} '{desc}' {branches}\n"
10 > tglogp = log -G --template "{rev}:{phase} '{desc}' {branches}\n"
11 > EOF
11 > EOF
12
12
13
13
14 $ hg init a
14 $ hg init a
15 $ cd a
15 $ cd a
16
16
17 $ echo A > A
17 $ echo A > A
18 $ hg ci -Am A
18 $ hg ci -Am A
19 adding A
19 adding A
20
20
21 $ echo B > B
21 $ echo B > B
22 $ hg ci -Am B
22 $ hg ci -Am B
23 adding B
23 adding B
24
24
25 $ echo C >> A
25 $ echo C >> A
26 $ hg ci -m C
26 $ hg ci -m C
27
27
28 $ hg up -q -C 0
28 $ hg up -q -C 0
29
29
30 $ echo D >> A
30 $ echo D >> A
31 $ hg ci -m D
31 $ hg ci -m D
32 created new head
32 created new head
33
33
34 $ echo E > E
34 $ echo E > E
35 $ hg ci -Am E
35 $ hg ci -Am E
36 adding E
36 adding E
37
37
38 $ cd ..
38 $ cd ..
39
39
40
40
41 Changes during an interruption - continue:
41 Changes during an interruption - continue:
42
42
43 $ hg clone -q -u . a a1
43 $ hg clone -q -u . a a1
44 $ cd a1
44 $ cd a1
45
45
46 $ hg tglog
46 $ hg tglog
47 @ 4: 'E'
47 @ 4: 'E'
48 |
48 |
49 o 3: 'D'
49 o 3: 'D'
50 |
50 |
51 | o 2: 'C'
51 | o 2: 'C'
52 | |
52 | |
53 | o 1: 'B'
53 | o 1: 'B'
54 |/
54 |/
55 o 0: 'A'
55 o 0: 'A'
56
56
57 Rebasing B onto E:
57 Rebasing B onto E:
58
58
59 $ hg rebase -s 1 -d 4
59 $ hg rebase -s 1 -d 4
60 rebasing 1:27547f69f254 "B"
60 rebasing 1:27547f69f254 "B"
61 rebasing 2:965c486023db "C"
61 rebasing 2:965c486023db "C"
62 merging A
62 merging A
63 warning: conflicts during merge.
63 warning: conflicts during merge.
64 merging A incomplete! (edit conflicts, then use 'hg resolve --mark')
64 merging A incomplete! (edit conflicts, then use 'hg resolve --mark')
65 unresolved conflicts (see hg resolve, then hg rebase --continue)
65 unresolved conflicts (see hg resolve, then hg rebase --continue)
66 [1]
66 [1]
67
67
68 Force a commit on C during the interruption:
68 Force a commit on C during the interruption:
69
69
70 $ hg up -q -C 2 --config 'extensions.rebase=!'
70 $ hg up -q -C 2 --config 'extensions.rebase=!'
71
71
72 $ echo 'Extra' > Extra
72 $ echo 'Extra' > Extra
73 $ hg add Extra
73 $ hg add Extra
74 $ hg ci -m 'Extra' --config 'extensions.rebase=!'
74 $ hg ci -m 'Extra' --config 'extensions.rebase=!'
75
75
76 Force this commit onto secret phase
76 Force this commit onto secret phase
77
77
78 $ hg phase --force --secret 6
78 $ hg phase --force --secret 6
79
79
80 $ hg tglogp
80 $ hg tglogp
81 @ 6:secret 'Extra'
81 @ 6:secret 'Extra'
82 |
82 |
83 | o 5:draft 'B'
83 | o 5:draft 'B'
84 | |
84 | |
85 | o 4:draft 'E'
85 | o 4:draft 'E'
86 | |
86 | |
87 | o 3:draft 'D'
87 | o 3:draft 'D'
88 | |
88 | |
89 o | 2:draft 'C'
89 o | 2:draft 'C'
90 | |
90 | |
91 o | 1:draft 'B'
91 o | 1:draft 'B'
92 |/
92 |/
93 o 0:draft 'A'
93 o 0:draft 'A'
94
94
95 Resume the rebasing:
95 Resume the rebasing:
96
96
97 $ hg rebase --continue
97 $ hg rebase --continue
98 already rebased 1:27547f69f254 "B" as 45396c49d53b
98 already rebased 1:27547f69f254 "B" as 45396c49d53b
99 rebasing 2:965c486023db "C"
99 rebasing 2:965c486023db "C"
100 merging A
100 merging A
101 warning: conflicts during merge.
101 warning: conflicts during merge.
102 merging A incomplete! (edit conflicts, then use 'hg resolve --mark')
102 merging A incomplete! (edit conflicts, then use 'hg resolve --mark')
103 unresolved conflicts (see hg resolve, then hg rebase --continue)
103 unresolved conflicts (see hg resolve, then hg rebase --continue)
104 [1]
104 [1]
105
105
106 Solve the conflict and go on:
106 Solve the conflict and go on:
107
107
108 $ echo 'conflict solved' > A
108 $ echo 'conflict solved' > A
109 $ rm A.orig
109 $ rm A.orig
110 $ hg resolve -m A
110 $ hg resolve -m A
111 (no more unresolved files)
111 (no more unresolved files)
112
112
113 $ hg rebase --continue
113 $ hg rebase --continue
114 already rebased 1:27547f69f254 "B" as 45396c49d53b
114 already rebased 1:27547f69f254 "B" as 45396c49d53b
115 rebasing 2:965c486023db "C"
115 rebasing 2:965c486023db "C"
116 warning: new changesets detected on source branch, not stripping
116 warning: new changesets detected on source branch, not stripping
117
117
118 $ hg tglogp
118 $ hg tglogp
119 o 7:draft 'C'
119 o 7:draft 'C'
120 |
120 |
121 | o 6:secret 'Extra'
121 | o 6:secret 'Extra'
122 | |
122 | |
123 o | 5:draft 'B'
123 o | 5:draft 'B'
124 | |
124 | |
125 @ | 4:draft 'E'
125 @ | 4:draft 'E'
126 | |
126 | |
127 o | 3:draft 'D'
127 o | 3:draft 'D'
128 | |
128 | |
129 | o 2:draft 'C'
129 | o 2:draft 'C'
130 | |
130 | |
131 | o 1:draft 'B'
131 | o 1:draft 'B'
132 |/
132 |/
133 o 0:draft 'A'
133 o 0:draft 'A'
134
134
135 $ cd ..
135 $ cd ..
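The same recovery pattern applies to any interrupted rebase, not just this scenario. A minimal sketch (FILE is a placeholder, not a file from the test above; output is omitted):

  $ hg resolve --list        # list files that are still unresolved
  $ hg resolve --mark FILE   # mark FILE resolved once it has been fixed by hand
  $ hg rebase --continue     # resume, or run 'hg rebase --abort' to give up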
136
136
137
137
138 Changes during an interruption - abort:
138 Changes during an interruption - abort:
139
139
140 $ hg clone -q -u . a a2
140 $ hg clone -q -u . a a2
141 $ cd a2
141 $ cd a2
142
142
143 $ hg tglog
143 $ hg tglog
144 @ 4: 'E'
144 @ 4: 'E'
145 |
145 |
146 o 3: 'D'
146 o 3: 'D'
147 |
147 |
148 | o 2: 'C'
148 | o 2: 'C'
149 | |
149 | |
150 | o 1: 'B'
150 | o 1: 'B'
151 |/
151 |/
152 o 0: 'A'
152 o 0: 'A'
153
153
154 Rebasing B onto E:
154 Rebasing B onto E:
155
155
156 $ hg rebase -s 1 -d 4
156 $ hg rebase -s 1 -d 4
157 rebasing 1:27547f69f254 "B"
157 rebasing 1:27547f69f254 "B"
158 rebasing 2:965c486023db "C"
158 rebasing 2:965c486023db "C"
159 merging A
159 merging A
160 warning: conflicts during merge.
160 warning: conflicts during merge.
161 merging A incomplete! (edit conflicts, then use 'hg resolve --mark')
161 merging A incomplete! (edit conflicts, then use 'hg resolve --mark')
162 unresolved conflicts (see hg resolve, then hg rebase --continue)
162 unresolved conflicts (see hg resolve, then hg rebase --continue)
163 [1]
163 [1]
164
164
165 Force a commit on B' during the interruption:
165 Force a commit on B' during the interruption:
166
166
167 $ hg up -q -C 5 --config 'extensions.rebase=!'
167 $ hg up -q -C 5 --config 'extensions.rebase=!'
168
168
169 $ echo 'Extra' > Extra
169 $ echo 'Extra' > Extra
170 $ hg add Extra
170 $ hg add Extra
171 $ hg ci -m 'Extra' --config 'extensions.rebase=!'
171 $ hg ci -m 'Extra' --config 'extensions.rebase=!'
172
172
173 $ hg tglog
173 $ hg tglog
174 @ 6: 'Extra'
174 @ 6: 'Extra'
175 |
175 |
176 o 5: 'B'
176 o 5: 'B'
177 |
177 |
178 o 4: 'E'
178 o 4: 'E'
179 |
179 |
180 o 3: 'D'
180 o 3: 'D'
181 |
181 |
182 | o 2: 'C'
182 | o 2: 'C'
183 | |
183 | |
184 | o 1: 'B'
184 | o 1: 'B'
185 |/
185 |/
186 o 0: 'A'
186 o 0: 'A'
187
187
188 Abort the rebasing:
188 Abort the rebasing:
189
189
190 $ hg rebase --abort
190 $ hg rebase --abort
191 warning: new changesets detected on target branch, can't strip
191 warning: new changesets detected on target branch, can't strip
192 rebase aborted
192 rebase aborted
193
193
194 $ hg tglog
194 $ hg tglog
195 @ 6: 'Extra'
195 @ 6: 'Extra'
196 |
196 |
197 o 5: 'B'
197 o 5: 'B'
198 |
198 |
199 o 4: 'E'
199 o 4: 'E'
200 |
200 |
201 o 3: 'D'
201 o 3: 'D'
202 |
202 |
203 | o 2: 'C'
203 | o 2: 'C'
204 | |
204 | |
205 | o 1: 'B'
205 | o 1: 'B'
206 |/
206 |/
207 o 0: 'A'
207 o 0: 'A'
208
208
209 $ cd ..
209 $ cd ..
210
210
211 Changes during an interruption - abort (again):
211 Changes during an interruption - abort (again):
212
212
213 $ hg clone -q -u . a a3
213 $ hg clone -q -u . a a3
214 $ cd a3
214 $ cd a3
215
215
216 $ hg tglogp
216 $ hg tglogp
217 @ 4:draft 'E'
217 @ 4:draft 'E'
218 |
218 |
219 o 3:draft 'D'
219 o 3:draft 'D'
220 |
220 |
221 | o 2:draft 'C'
221 | o 2:draft 'C'
222 | |
222 | |
223 | o 1:draft 'B'
223 | o 1:draft 'B'
224 |/
224 |/
225 o 0:draft 'A'
225 o 0:draft 'A'
226
226
227 Rebasing B onto E:
227 Rebasing B onto E:
228
228
229 $ hg rebase -s 1 -d 4
229 $ hg rebase -s 1 -d 4
230 rebasing 1:27547f69f254 "B"
230 rebasing 1:27547f69f254 "B"
231 rebasing 2:965c486023db "C"
231 rebasing 2:965c486023db "C"
232 merging A
232 merging A
233 warning: conflicts during merge.
233 warning: conflicts during merge.
234 merging A incomplete! (edit conflicts, then use 'hg resolve --mark')
234 merging A incomplete! (edit conflicts, then use 'hg resolve --mark')
235 unresolved conflicts (see hg resolve, then hg rebase --continue)
235 unresolved conflicts (see hg resolve, then hg rebase --continue)
236 [1]
236 [1]
237
237
238 Change phase on B and B'
238 Change phase on B and B'
239
239
240 $ hg up -q -C 5 --config 'extensions.rebase=!'
240 $ hg up -q -C 5 --config 'extensions.rebase=!'
241 $ hg phase --public 1
241 $ hg phase --public 1
242 $ hg phase --public 5
242 $ hg phase --public 5
243 $ hg phase --secret -f 2
243 $ hg phase --secret -f 2
244
244
245 $ hg tglogp
245 $ hg tglogp
246 @ 5:public 'B'
246 @ 5:public 'B'
247 |
247 |
248 o 4:public 'E'
248 o 4:public 'E'
249 |
249 |
250 o 3:public 'D'
250 o 3:public 'D'
251 |
251 |
252 | o 2:secret 'C'
252 | o 2:secret 'C'
253 | |
253 | |
254 | o 1:public 'B'
254 | o 1:public 'B'
255 |/
255 |/
256 o 0:public 'A'
256 o 0:public 'A'
257
257
258 Abort the rebasing:
258 Abort the rebasing:
259
259
260 $ hg rebase --abort
260 $ hg rebase --abort
261 warning: can't clean up immutable changesets 45396c49d53b
261 warning: can't clean up public changesets 45396c49d53b
262 rebase aborted
262 rebase aborted
263
263
264 $ hg tglogp
264 $ hg tglogp
265 @ 5:public 'B'
265 @ 5:public 'B'
266 |
266 |
267 o 4:public 'E'
267 o 4:public 'E'
268 |
268 |
269 o 3:public 'D'
269 o 3:public 'D'
270 |
270 |
271 | o 2:secret 'C'
271 | o 2:secret 'C'
272 | |
272 | |
273 | o 1:public 'B'
273 | o 1:public 'B'
274 |/
274 |/
275 o 0:public 'A'
275 o 0:public 'A'
276
276
277 $ cd ..
277 $ cd ..
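When an abort leaves an already-rebased changeset behind because it was made public in the meantime, its phase can be inspected and, provided it was never actually published to another repository, forced back to draft so it is mutable again. A minimal sketch against revision 5 from the log above (output omitted):

  $ hg phase -r 5                   # report the current phase of revision 5
  $ hg phase --draft --force -r 5   # demote it to draft so it can be cleaned up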
@@ -1,745 +1,745
1 $ cat >> $HGRCPATH <<EOF
1 $ cat >> $HGRCPATH <<EOF
2 > [extensions]
2 > [extensions]
3 > rebase=
3 > rebase=
4 >
4 >
5 > [phases]
5 > [phases]
6 > publish=False
6 > publish=False
7 >
7 >
8 > [alias]
8 > [alias]
9 > tglog = log -G --template "{rev}: '{desc}' {branches}\n"
9 > tglog = log -G --template "{rev}: '{desc}' {branches}\n"
10 > EOF
10 > EOF
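With publish=False the repository is non-publishing: changesets exchanged with it are not promoted to the public phase, so they stay draft and remain rebasable. A quick way to confirm a changeset's phase, using the same {phase} template keyword these tests rely on (a sketch; the printed value depends on the repository state):

  $ hg log -r tip --template "{phase}\n"   # prints the phase of the tip changeset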
11
11
12
12
13 $ hg init a
13 $ hg init a
14 $ cd a
14 $ cd a
15 $ hg unbundle "$TESTDIR/bundles/rebase.hg"
15 $ hg unbundle "$TESTDIR/bundles/rebase.hg"
16 adding changesets
16 adding changesets
17 adding manifests
17 adding manifests
18 adding file changes
18 adding file changes
19 added 8 changesets with 7 changes to 7 files (+2 heads)
19 added 8 changesets with 7 changes to 7 files (+2 heads)
20 (run 'hg heads' to see heads, 'hg merge' to merge)
20 (run 'hg heads' to see heads, 'hg merge' to merge)
21 $ hg up tip
21 $ hg up tip
22 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
22 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
23 $ cd ..
23 $ cd ..
24
24
25
25
26 Rebasing
26 Rebasing
27 D onto H - simple rebase:
27 D onto H - simple rebase:
28 (this also tests that editor is invoked if '--edit' is specified)
28 (this also tests that editor is invoked if '--edit' is specified)
29
29
30 $ hg clone -q -u . a a1
30 $ hg clone -q -u . a a1
31 $ cd a1
31 $ cd a1
32
32
33 $ hg tglog
33 $ hg tglog
34 @ 7: 'H'
34 @ 7: 'H'
35 |
35 |
36 | o 6: 'G'
36 | o 6: 'G'
37 |/|
37 |/|
38 o | 5: 'F'
38 o | 5: 'F'
39 | |
39 | |
40 | o 4: 'E'
40 | o 4: 'E'
41 |/
41 |/
42 | o 3: 'D'
42 | o 3: 'D'
43 | |
43 | |
44 | o 2: 'C'
44 | o 2: 'C'
45 | |
45 | |
46 | o 1: 'B'
46 | o 1: 'B'
47 |/
47 |/
48 o 0: 'A'
48 o 0: 'A'
49
49
50
50
51 $ hg status --rev "3^1" --rev 3
51 $ hg status --rev "3^1" --rev 3
52 A D
52 A D
53 $ HGEDITOR=cat hg rebase -s 3 -d 7 --edit
53 $ HGEDITOR=cat hg rebase -s 3 -d 7 --edit
54 rebasing 3:32af7686d403 "D"
54 rebasing 3:32af7686d403 "D"
55 D
55 D
56
56
57
57
58 HG: Enter commit message. Lines beginning with 'HG:' are removed.
58 HG: Enter commit message. Lines beginning with 'HG:' are removed.
59 HG: Leave message empty to abort commit.
59 HG: Leave message empty to abort commit.
60 HG: --
60 HG: --
61 HG: user: Nicolas Dumazet <nicdumz.commits@gmail.com>
61 HG: user: Nicolas Dumazet <nicdumz.commits@gmail.com>
62 HG: branch 'default'
62 HG: branch 'default'
63 HG: added D
63 HG: added D
64 saved backup bundle to $TESTTMP/a1/.hg/strip-backup/32af7686d403-6f7dface-backup.hg (glob)
64 saved backup bundle to $TESTTMP/a1/.hg/strip-backup/32af7686d403-6f7dface-backup.hg (glob)
65
65
66 $ hg tglog
66 $ hg tglog
67 o 7: 'D'
67 o 7: 'D'
68 |
68 |
69 @ 6: 'H'
69 @ 6: 'H'
70 |
70 |
71 | o 5: 'G'
71 | o 5: 'G'
72 |/|
72 |/|
73 o | 4: 'F'
73 o | 4: 'F'
74 | |
74 | |
75 | o 3: 'E'
75 | o 3: 'E'
76 |/
76 |/
77 | o 2: 'C'
77 | o 2: 'C'
78 | |
78 | |
79 | o 1: 'B'
79 | o 1: 'B'
80 |/
80 |/
81 o 0: 'A'
81 o 0: 'A'
82
82
83 $ cd ..
83 $ cd ..
84
84
85
85
86 D onto F - intermediate point:
86 D onto F - intermediate point:
87 (this also tests that editor is not invoked if '--edit' is not specified)
87 (this also tests that editor is not invoked if '--edit' is not specified)
88
88
89 $ hg clone -q -u . a a2
89 $ hg clone -q -u . a a2
90 $ cd a2
90 $ cd a2
91
91
92 $ HGEDITOR=cat hg rebase -s 3 -d 5
92 $ HGEDITOR=cat hg rebase -s 3 -d 5
93 rebasing 3:32af7686d403 "D"
93 rebasing 3:32af7686d403 "D"
94 saved backup bundle to $TESTTMP/a2/.hg/strip-backup/32af7686d403-6f7dface-backup.hg (glob)
94 saved backup bundle to $TESTTMP/a2/.hg/strip-backup/32af7686d403-6f7dface-backup.hg (glob)
95
95
96 $ hg tglog
96 $ hg tglog
97 o 7: 'D'
97 o 7: 'D'
98 |
98 |
99 | @ 6: 'H'
99 | @ 6: 'H'
100 |/
100 |/
101 | o 5: 'G'
101 | o 5: 'G'
102 |/|
102 |/|
103 o | 4: 'F'
103 o | 4: 'F'
104 | |
104 | |
105 | o 3: 'E'
105 | o 3: 'E'
106 |/
106 |/
107 | o 2: 'C'
107 | o 2: 'C'
108 | |
108 | |
109 | o 1: 'B'
109 | o 1: 'B'
110 |/
110 |/
111 o 0: 'A'
111 o 0: 'A'
112
112
113 $ cd ..
113 $ cd ..
114
114
115
115
116 E onto H - skip of G:
116 E onto H - skip of G:
117
117
118 $ hg clone -q -u . a a3
118 $ hg clone -q -u . a a3
119 $ cd a3
119 $ cd a3
120
120
121 $ hg rebase -s 4 -d 7
121 $ hg rebase -s 4 -d 7
122 rebasing 4:9520eea781bc "E"
122 rebasing 4:9520eea781bc "E"
123 rebasing 6:eea13746799a "G"
123 rebasing 6:eea13746799a "G"
124 note: rebase of 6:eea13746799a created no changes to commit
124 note: rebase of 6:eea13746799a created no changes to commit
125 saved backup bundle to $TESTTMP/a3/.hg/strip-backup/9520eea781bc-fcd8edd4-backup.hg (glob)
125 saved backup bundle to $TESTTMP/a3/.hg/strip-backup/9520eea781bc-fcd8edd4-backup.hg (glob)
126
126
127 $ hg tglog
127 $ hg tglog
128 o 6: 'E'
128 o 6: 'E'
129 |
129 |
130 @ 5: 'H'
130 @ 5: 'H'
131 |
131 |
132 o 4: 'F'
132 o 4: 'F'
133 |
133 |
134 | o 3: 'D'
134 | o 3: 'D'
135 | |
135 | |
136 | o 2: 'C'
136 | o 2: 'C'
137 | |
137 | |
138 | o 1: 'B'
138 | o 1: 'B'
139 |/
139 |/
140 o 0: 'A'
140 o 0: 'A'
141
141
142 $ cd ..
142 $ cd ..
143
143
144
144
145 F onto E - rebase of a branching point (skip G):
145 F onto E - rebase of a branching point (skip G):
146
146
147 $ hg clone -q -u . a a4
147 $ hg clone -q -u . a a4
148 $ cd a4
148 $ cd a4
149
149
150 $ hg rebase -s 5 -d 4
150 $ hg rebase -s 5 -d 4
151 rebasing 5:24b6387c8c8c "F"
151 rebasing 5:24b6387c8c8c "F"
152 rebasing 6:eea13746799a "G"
152 rebasing 6:eea13746799a "G"
153 note: rebase of 6:eea13746799a created no changes to commit
153 note: rebase of 6:eea13746799a created no changes to commit
154 rebasing 7:02de42196ebe "H" (tip)
154 rebasing 7:02de42196ebe "H" (tip)
155 saved backup bundle to $TESTTMP/a4/.hg/strip-backup/24b6387c8c8c-c3fe765d-backup.hg (glob)
155 saved backup bundle to $TESTTMP/a4/.hg/strip-backup/24b6387c8c8c-c3fe765d-backup.hg (glob)
156
156
157 $ hg tglog
157 $ hg tglog
158 @ 6: 'H'
158 @ 6: 'H'
159 |
159 |
160 o 5: 'F'
160 o 5: 'F'
161 |
161 |
162 o 4: 'E'
162 o 4: 'E'
163 |
163 |
164 | o 3: 'D'
164 | o 3: 'D'
165 | |
165 | |
166 | o 2: 'C'
166 | o 2: 'C'
167 | |
167 | |
168 | o 1: 'B'
168 | o 1: 'B'
169 |/
169 |/
170 o 0: 'A'
170 o 0: 'A'
171
171
172 $ cd ..
172 $ cd ..
173
173
174
174
175 G onto H - merged revision having a parent in ancestors of target:
175 G onto H - merged revision having a parent in ancestors of target:
176
176
177 $ hg clone -q -u . a a5
177 $ hg clone -q -u . a a5
178 $ cd a5
178 $ cd a5
179
179
180 $ hg rebase -s 6 -d 7
180 $ hg rebase -s 6 -d 7
181 rebasing 6:eea13746799a "G"
181 rebasing 6:eea13746799a "G"
182 saved backup bundle to $TESTTMP/a5/.hg/strip-backup/eea13746799a-883828ed-backup.hg (glob)
182 saved backup bundle to $TESTTMP/a5/.hg/strip-backup/eea13746799a-883828ed-backup.hg (glob)
183
183
184 $ hg tglog
184 $ hg tglog
185 o 7: 'G'
185 o 7: 'G'
186 |\
186 |\
187 | @ 6: 'H'
187 | @ 6: 'H'
188 | |
188 | |
189 | o 5: 'F'
189 | o 5: 'F'
190 | |
190 | |
191 o | 4: 'E'
191 o | 4: 'E'
192 |/
192 |/
193 | o 3: 'D'
193 | o 3: 'D'
194 | |
194 | |
195 | o 2: 'C'
195 | o 2: 'C'
196 | |
196 | |
197 | o 1: 'B'
197 | o 1: 'B'
198 |/
198 |/
199 o 0: 'A'
199 o 0: 'A'
200
200
201 $ cd ..
201 $ cd ..
202
202
203
203
204 F onto B - G maintains E as parent:
204 F onto B - G maintains E as parent:
205
205
206 $ hg clone -q -u . a a6
206 $ hg clone -q -u . a a6
207 $ cd a6
207 $ cd a6
208
208
209 $ hg rebase -s 5 -d 1
209 $ hg rebase -s 5 -d 1
210 rebasing 5:24b6387c8c8c "F"
210 rebasing 5:24b6387c8c8c "F"
211 rebasing 6:eea13746799a "G"
211 rebasing 6:eea13746799a "G"
212 rebasing 7:02de42196ebe "H" (tip)
212 rebasing 7:02de42196ebe "H" (tip)
213 saved backup bundle to $TESTTMP/a6/.hg/strip-backup/24b6387c8c8c-c3fe765d-backup.hg (glob)
213 saved backup bundle to $TESTTMP/a6/.hg/strip-backup/24b6387c8c8c-c3fe765d-backup.hg (glob)
214
214
215 $ hg tglog
215 $ hg tglog
216 @ 7: 'H'
216 @ 7: 'H'
217 |
217 |
218 | o 6: 'G'
218 | o 6: 'G'
219 |/|
219 |/|
220 o | 5: 'F'
220 o | 5: 'F'
221 | |
221 | |
222 | o 4: 'E'
222 | o 4: 'E'
223 | |
223 | |
224 | | o 3: 'D'
224 | | o 3: 'D'
225 | | |
225 | | |
226 +---o 2: 'C'
226 +---o 2: 'C'
227 | |
227 | |
228 o | 1: 'B'
228 o | 1: 'B'
229 |/
229 |/
230 o 0: 'A'
230 o 0: 'A'
231
231
232 $ cd ..
232 $ cd ..
233
233
234
234
235 These will fail (using --source):
235 These will fail (using --source):
236
236
237 G onto F - rebase onto an ancestor:
237 G onto F - rebase onto an ancestor:
238
238
239 $ hg clone -q -u . a a7
239 $ hg clone -q -u . a a7
240 $ cd a7
240 $ cd a7
241
241
242 $ hg rebase -s 6 -d 5
242 $ hg rebase -s 6 -d 5
243 nothing to rebase
243 nothing to rebase
244 [1]
244 [1]
245
245
246 F onto G - rebase onto a descendant:
246 F onto G - rebase onto a descendant:
247
247
248 $ hg rebase -s 5 -d 6
248 $ hg rebase -s 5 -d 6
249 abort: source is ancestor of destination
249 abort: source is ancestor of destination
250 [255]
250 [255]
251
251
252 G onto B - merge revision with both parents not in ancestors of target:
252 G onto B - merge revision with both parents not in ancestors of target:
253
253
254 $ hg rebase -s 6 -d 1
254 $ hg rebase -s 6 -d 1
255 rebasing 6:eea13746799a "G"
255 rebasing 6:eea13746799a "G"
256 abort: cannot use revision 6 as base, result would have 3 parents
256 abort: cannot use revision 6 as base, result would have 3 parents
257 [255]
257 [255]
258
258
259
259
260 These will abort gracefully (using --base):
260 These will abort gracefully (using --base):
261
261
262 G onto G - rebase onto same changeset:
262 G onto G - rebase onto same changeset:
263
263
264 $ hg rebase -b 6 -d 6
264 $ hg rebase -b 6 -d 6
265 nothing to rebase - eea13746799a is both "base" and destination
265 nothing to rebase - eea13746799a is both "base" and destination
266 [1]
266 [1]
267
267
268 G onto F - rebase onto an ancestor:
268 G onto F - rebase onto an ancestor:
269
269
270 $ hg rebase -b 6 -d 5
270 $ hg rebase -b 6 -d 5
271 nothing to rebase
271 nothing to rebase
272 [1]
272 [1]
273
273
274 F onto G - rebase onto a descendant:
274 F onto G - rebase onto a descendant:
275
275
276 $ hg rebase -b 5 -d 6
276 $ hg rebase -b 5 -d 6
277 nothing to rebase - "base" 24b6387c8c8c is already an ancestor of destination eea13746799a
277 nothing to rebase - "base" 24b6387c8c8c is already an ancestor of destination eea13746799a
278 [1]
278 [1]
279
279
280 C onto A - rebase onto an ancestor:
280 C onto A - rebase onto an ancestor:
281
281
282 $ hg rebase -d 0 -s 2
282 $ hg rebase -d 0 -s 2
283 rebasing 2:5fddd98957c8 "C"
283 rebasing 2:5fddd98957c8 "C"
284 rebasing 3:32af7686d403 "D"
284 rebasing 3:32af7686d403 "D"
285 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/5fddd98957c8-f9244fa1-backup.hg (glob)
285 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/5fddd98957c8-f9244fa1-backup.hg (glob)
286 $ hg tglog
286 $ hg tglog
287 o 7: 'D'
287 o 7: 'D'
288 |
288 |
289 o 6: 'C'
289 o 6: 'C'
290 |
290 |
291 | @ 5: 'H'
291 | @ 5: 'H'
292 | |
292 | |
293 | | o 4: 'G'
293 | | o 4: 'G'
294 | |/|
294 | |/|
295 | o | 3: 'F'
295 | o | 3: 'F'
296 |/ /
296 |/ /
297 | o 2: 'E'
297 | o 2: 'E'
298 |/
298 |/
299 | o 1: 'B'
299 | o 1: 'B'
300 |/
300 |/
301 o 0: 'A'
301 o 0: 'A'
302
302
303
303
304 Check rebasing public changeset
304 Check rebasing public changeset
305
305
306 $ hg pull --config phases.publish=True -q -r 6 . # update phase of 6
306 $ hg pull --config phases.publish=True -q -r 6 . # update phase of 6
307 $ hg rebase -d 0 -b 6
307 $ hg rebase -d 0 -b 6
308 nothing to rebase
308 nothing to rebase
309 [1]
309 [1]
310 $ hg rebase -d 5 -b 6
310 $ hg rebase -d 5 -b 6
311 abort: can't rebase immutable changeset e1c4361dd923
311 abort: can't rebase public changeset e1c4361dd923
312 (see "hg help phases" for details)
312 (see "hg help phases" for details)
313 [255]
313 [255]
314
314
315 $ hg rebase -d 5 -b 6 --keep
315 $ hg rebase -d 5 -b 6 --keep
316 rebasing 6:e1c4361dd923 "C"
316 rebasing 6:e1c4361dd923 "C"
317 rebasing 7:c9659aac0000 "D" (tip)
317 rebasing 7:c9659aac0000 "D" (tip)
318
318
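Above, --keep works around the restriction by copying the public changesets instead of moving them. If the changesets had been published by mistake and really needed to move, the alternative (not exercised by this test) would be to force them back to draft first. A sketch, assuming revision 6 is the only public changeset in the set being rebased:

  $ hg phase --draft --force -r 6   # make the changeset mutable again
  $ hg rebase -d 5 -b 6             # the rebase may now strip the original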
319 Check rebasing mutable changeset
319 Check rebasing mutable changeset
320 Source phase greater than or equal to destination phase: the new changeset gets the phase of the source:
320 Source phase greater than or equal to destination phase: the new changeset gets the phase of the source:
321 $ hg id -n
321 $ hg id -n
322 5
322 5
323 $ hg rebase -s9 -d0
323 $ hg rebase -s9 -d0
324 rebasing 9:2b23e52411f4 "D" (tip)
324 rebasing 9:2b23e52411f4 "D" (tip)
325 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/2b23e52411f4-f942decf-backup.hg (glob)
325 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/2b23e52411f4-f942decf-backup.hg (glob)
326 $ hg id -n # check we updated back to parent
326 $ hg id -n # check we updated back to parent
327 5
327 5
328 $ hg log --template "{phase}\n" -r 9
328 $ hg log --template "{phase}\n" -r 9
329 draft
329 draft
330 $ hg rebase -s9 -d1
330 $ hg rebase -s9 -d1
331 rebasing 9:2cb10d0cfc6c "D" (tip)
331 rebasing 9:2cb10d0cfc6c "D" (tip)
332 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/2cb10d0cfc6c-ddb0f256-backup.hg (glob)
332 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/2cb10d0cfc6c-ddb0f256-backup.hg (glob)
333 $ hg log --template "{phase}\n" -r 9
333 $ hg log --template "{phase}\n" -r 9
334 draft
334 draft
335 $ hg phase --force --secret 9
335 $ hg phase --force --secret 9
336 $ hg rebase -s9 -d0
336 $ hg rebase -s9 -d0
337 rebasing 9:c5b12b67163a "D" (tip)
337 rebasing 9:c5b12b67163a "D" (tip)
338 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/c5b12b67163a-4e372053-backup.hg (glob)
338 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/c5b12b67163a-4e372053-backup.hg (glob)
339 $ hg log --template "{phase}\n" -r 9
339 $ hg log --template "{phase}\n" -r 9
340 secret
340 secret
341 $ hg rebase -s9 -d1
341 $ hg rebase -s9 -d1
342 rebasing 9:2a0524f868ac "D" (tip)
342 rebasing 9:2a0524f868ac "D" (tip)
343 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/2a0524f868ac-cefd8574-backup.hg (glob)
343 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/2a0524f868ac-cefd8574-backup.hg (glob)
344 $ hg log --template "{phase}\n" -r 9
344 $ hg log --template "{phase}\n" -r 9
345 secret
345 secret
346 Source phase lower than destination phase: the new changeset gets the phase of the destination:
346 Source phase lower than destination phase: the new changeset gets the phase of the destination:
347 $ hg rebase -s8 -d9
347 $ hg rebase -s8 -d9
348 rebasing 8:6d4f22462821 "C"
348 rebasing 8:6d4f22462821 "C"
349 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/6d4f22462821-3441f70b-backup.hg (glob)
349 saved backup bundle to $TESTTMP/a7/.hg/strip-backup/6d4f22462821-3441f70b-backup.hg (glob)
350 $ hg log --template "{phase}\n" -r 'rev(9)'
350 $ hg log --template "{phase}\n" -r 'rev(9)'
351 secret
351 secret
352
352
353 $ cd ..
353 $ cd ..
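The two cases above amount to a single rule: the rebased changeset ends up in the higher of the source and destination phases, where public < draft < secret. One way to eyeball the resulting phases, using revset functions and template keywords available in stock Mercurial (a sketch; output depends on the repository state):

  $ hg log --template "{rev}:{phase} {desc|firstline}\n" -r 'draft() or secret()'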
354
354
355 Test for revset
355 Test for revset
356
356
357 We need a slightly different graph
357 We need a slightly different graph
358 All destinations are B
358 All destinations are B
359
359
360 $ hg init ah
360 $ hg init ah
361 $ cd ah
361 $ cd ah
362 $ hg unbundle "$TESTDIR/bundles/rebase-revset.hg"
362 $ hg unbundle "$TESTDIR/bundles/rebase-revset.hg"
363 adding changesets
363 adding changesets
364 adding manifests
364 adding manifests
365 adding file changes
365 adding file changes
366 added 9 changesets with 9 changes to 9 files (+2 heads)
366 added 9 changesets with 9 changes to 9 files (+2 heads)
367 (run 'hg heads' to see heads, 'hg merge' to merge)
367 (run 'hg heads' to see heads, 'hg merge' to merge)
368 $ hg tglog
368 $ hg tglog
369 o 8: 'I'
369 o 8: 'I'
370 |
370 |
371 o 7: 'H'
371 o 7: 'H'
372 |
372 |
373 o 6: 'G'
373 o 6: 'G'
374 |
374 |
375 | o 5: 'F'
375 | o 5: 'F'
376 | |
376 | |
377 | o 4: 'E'
377 | o 4: 'E'
378 |/
378 |/
379 o 3: 'D'
379 o 3: 'D'
380 |
380 |
381 o 2: 'C'
381 o 2: 'C'
382 |
382 |
383 | o 1: 'B'
383 | o 1: 'B'
384 |/
384 |/
385 o 0: 'A'
385 o 0: 'A'
386
386
387 $ cd ..
387 $ cd ..
388
388
389
389
390 Simple case with keep:
390 Simple case with keep:
391
391
392 The source has two descendant heads, but we ask to rebase only one of them
392 The source has two descendant heads, but we ask to rebase only one of them
393
393
394 $ hg clone -q -u . ah ah1
394 $ hg clone -q -u . ah ah1
395 $ cd ah1
395 $ cd ah1
396 $ hg rebase -r '2::8' -d 1
396 $ hg rebase -r '2::8' -d 1
397 abort: can't remove original changesets with unrebased descendants
397 abort: can't remove original changesets with unrebased descendants
398 (use --keep to keep original changesets)
398 (use --keep to keep original changesets)
399 [255]
399 [255]
400 $ hg rebase -r '2::8' -d 1 -k
400 $ hg rebase -r '2::8' -d 1 -k
401 rebasing 2:c9e50f6cdc55 "C"
401 rebasing 2:c9e50f6cdc55 "C"
402 rebasing 3:ffd453c31098 "D"
402 rebasing 3:ffd453c31098 "D"
403 rebasing 6:3d8a618087a7 "G"
403 rebasing 6:3d8a618087a7 "G"
404 rebasing 7:72434a4e60b0 "H"
404 rebasing 7:72434a4e60b0 "H"
405 rebasing 8:479ddb54a924 "I" (tip)
405 rebasing 8:479ddb54a924 "I" (tip)
406 $ hg tglog
406 $ hg tglog
407 o 13: 'I'
407 o 13: 'I'
408 |
408 |
409 o 12: 'H'
409 o 12: 'H'
410 |
410 |
411 o 11: 'G'
411 o 11: 'G'
412 |
412 |
413 o 10: 'D'
413 o 10: 'D'
414 |
414 |
415 o 9: 'C'
415 o 9: 'C'
416 |
416 |
417 | o 8: 'I'
417 | o 8: 'I'
418 | |
418 | |
419 | o 7: 'H'
419 | o 7: 'H'
420 | |
420 | |
421 | o 6: 'G'
421 | o 6: 'G'
422 | |
422 | |
423 | | o 5: 'F'
423 | | o 5: 'F'
424 | | |
424 | | |
425 | | o 4: 'E'
425 | | o 4: 'E'
426 | |/
426 | |/
427 | o 3: 'D'
427 | o 3: 'D'
428 | |
428 | |
429 | o 2: 'C'
429 | o 2: 'C'
430 | |
430 | |
431 o | 1: 'B'
431 o | 1: 'B'
432 |/
432 |/
433 o 0: 'A'
433 o 0: 'A'
434
434
435
435
436 $ cd ..
436 $ cd ..
437
437
438 The base has one descendant head that we ask for, but the common ancestor has two
438 The base has one descendant head that we ask for, but the common ancestor has two
439
439
440 $ hg clone -q -u . ah ah2
440 $ hg clone -q -u . ah ah2
441 $ cd ah2
441 $ cd ah2
442 $ hg rebase -r '3::8' -d 1
442 $ hg rebase -r '3::8' -d 1
443 abort: can't remove original changesets with unrebased descendants
443 abort: can't remove original changesets with unrebased descendants
444 (use --keep to keep original changesets)
444 (use --keep to keep original changesets)
445 [255]
445 [255]
446 $ hg rebase -r '3::8' -d 1 --keep
446 $ hg rebase -r '3::8' -d 1 --keep
447 rebasing 3:ffd453c31098 "D"
447 rebasing 3:ffd453c31098 "D"
448 rebasing 6:3d8a618087a7 "G"
448 rebasing 6:3d8a618087a7 "G"
449 rebasing 7:72434a4e60b0 "H"
449 rebasing 7:72434a4e60b0 "H"
450 rebasing 8:479ddb54a924 "I" (tip)
450 rebasing 8:479ddb54a924 "I" (tip)
451 $ hg tglog
451 $ hg tglog
452 o 12: 'I'
452 o 12: 'I'
453 |
453 |
454 o 11: 'H'
454 o 11: 'H'
455 |
455 |
456 o 10: 'G'
456 o 10: 'G'
457 |
457 |
458 o 9: 'D'
458 o 9: 'D'
459 |
459 |
460 | o 8: 'I'
460 | o 8: 'I'
461 | |
461 | |
462 | o 7: 'H'
462 | o 7: 'H'
463 | |
463 | |
464 | o 6: 'G'
464 | o 6: 'G'
465 | |
465 | |
466 | | o 5: 'F'
466 | | o 5: 'F'
467 | | |
467 | | |
468 | | o 4: 'E'
468 | | o 4: 'E'
469 | |/
469 | |/
470 | o 3: 'D'
470 | o 3: 'D'
471 | |
471 | |
472 | o 2: 'C'
472 | o 2: 'C'
473 | |
473 | |
474 o | 1: 'B'
474 o | 1: 'B'
475 |/
475 |/
476 o 0: 'A'
476 o 0: 'A'
477
477
478
478
479 $ cd ..
479 $ cd ..
480
480
481 rebase subset
481 rebase subset
482
482
483 $ hg clone -q -u . ah ah3
483 $ hg clone -q -u . ah ah3
484 $ cd ah3
484 $ cd ah3
485 $ hg rebase -r '3::7' -d 1
485 $ hg rebase -r '3::7' -d 1
486 abort: can't remove original changesets with unrebased descendants
486 abort: can't remove original changesets with unrebased descendants
487 (use --keep to keep original changesets)
487 (use --keep to keep original changesets)
488 [255]
488 [255]
489 $ hg rebase -r '3::7' -d 1 --keep
489 $ hg rebase -r '3::7' -d 1 --keep
490 rebasing 3:ffd453c31098 "D"
490 rebasing 3:ffd453c31098 "D"
491 rebasing 6:3d8a618087a7 "G"
491 rebasing 6:3d8a618087a7 "G"
492 rebasing 7:72434a4e60b0 "H"
492 rebasing 7:72434a4e60b0 "H"
493 $ hg tglog
493 $ hg tglog
494 o 11: 'H'
494 o 11: 'H'
495 |
495 |
496 o 10: 'G'
496 o 10: 'G'
497 |
497 |
498 o 9: 'D'
498 o 9: 'D'
499 |
499 |
500 | o 8: 'I'
500 | o 8: 'I'
501 | |
501 | |
502 | o 7: 'H'
502 | o 7: 'H'
503 | |
503 | |
504 | o 6: 'G'
504 | o 6: 'G'
505 | |
505 | |
506 | | o 5: 'F'
506 | | o 5: 'F'
507 | | |
507 | | |
508 | | o 4: 'E'
508 | | o 4: 'E'
509 | |/
509 | |/
510 | o 3: 'D'
510 | o 3: 'D'
511 | |
511 | |
512 | o 2: 'C'
512 | o 2: 'C'
513 | |
513 | |
514 o | 1: 'B'
514 o | 1: 'B'
515 |/
515 |/
516 o 0: 'A'
516 o 0: 'A'
517
517
518
518
519 $ cd ..
519 $ cd ..
520
520
521 rebase subset with multiple heads
521 rebase subset with multiple heads
522
522
523 $ hg clone -q -u . ah ah4
523 $ hg clone -q -u . ah ah4
524 $ cd ah4
524 $ cd ah4
525 $ hg rebase -r '3::(7+5)' -d 1
525 $ hg rebase -r '3::(7+5)' -d 1
526 abort: can't remove original changesets with unrebased descendants
526 abort: can't remove original changesets with unrebased descendants
527 (use --keep to keep original changesets)
527 (use --keep to keep original changesets)
528 [255]
528 [255]
529 $ hg rebase -r '3::(7+5)' -d 1 --keep
529 $ hg rebase -r '3::(7+5)' -d 1 --keep
530 rebasing 3:ffd453c31098 "D"
530 rebasing 3:ffd453c31098 "D"
531 rebasing 4:c01897464e7f "E"
531 rebasing 4:c01897464e7f "E"
532 rebasing 5:41bfcc75ed73 "F"
532 rebasing 5:41bfcc75ed73 "F"
533 rebasing 6:3d8a618087a7 "G"
533 rebasing 6:3d8a618087a7 "G"
534 rebasing 7:72434a4e60b0 "H"
534 rebasing 7:72434a4e60b0 "H"
535 $ hg tglog
535 $ hg tglog
536 o 13: 'H'
536 o 13: 'H'
537 |
537 |
538 o 12: 'G'
538 o 12: 'G'
539 |
539 |
540 | o 11: 'F'
540 | o 11: 'F'
541 | |
541 | |
542 | o 10: 'E'
542 | o 10: 'E'
543 |/
543 |/
544 o 9: 'D'
544 o 9: 'D'
545 |
545 |
546 | o 8: 'I'
546 | o 8: 'I'
547 | |
547 | |
548 | o 7: 'H'
548 | o 7: 'H'
549 | |
549 | |
550 | o 6: 'G'
550 | o 6: 'G'
551 | |
551 | |
552 | | o 5: 'F'
552 | | o 5: 'F'
553 | | |
553 | | |
554 | | o 4: 'E'
554 | | o 4: 'E'
555 | |/
555 | |/
556 | o 3: 'D'
556 | o 3: 'D'
557 | |
557 | |
558 | o 2: 'C'
558 | o 2: 'C'
559 | |
559 | |
560 o | 1: 'B'
560 o | 1: 'B'
561 |/
561 |/
562 o 0: 'A'
562 o 0: 'A'
563
563
564
564
565 $ cd ..
565 $ cd ..
566
566
567 More advanced tests
567 More advanced tests
568
568
569 rebase on ancestor with revset
569 rebase on ancestor with revset
570
570
571 $ hg clone -q -u . ah ah5
571 $ hg clone -q -u . ah ah5
572 $ cd ah5
572 $ cd ah5
573 $ hg rebase -r '6::' -d 2
573 $ hg rebase -r '6::' -d 2
574 rebasing 6:3d8a618087a7 "G"
574 rebasing 6:3d8a618087a7 "G"
575 rebasing 7:72434a4e60b0 "H"
575 rebasing 7:72434a4e60b0 "H"
576 rebasing 8:479ddb54a924 "I" (tip)
576 rebasing 8:479ddb54a924 "I" (tip)
577 saved backup bundle to $TESTTMP/ah5/.hg/strip-backup/3d8a618087a7-b4f73f31-backup.hg (glob)
577 saved backup bundle to $TESTTMP/ah5/.hg/strip-backup/3d8a618087a7-b4f73f31-backup.hg (glob)
578 $ hg tglog
578 $ hg tglog
579 o 8: 'I'
579 o 8: 'I'
580 |
580 |
581 o 7: 'H'
581 o 7: 'H'
582 |
582 |
583 o 6: 'G'
583 o 6: 'G'
584 |
584 |
585 | o 5: 'F'
585 | o 5: 'F'
586 | |
586 | |
587 | o 4: 'E'
587 | o 4: 'E'
588 | |
588 | |
589 | o 3: 'D'
589 | o 3: 'D'
590 |/
590 |/
591 o 2: 'C'
591 o 2: 'C'
592 |
592 |
593 | o 1: 'B'
593 | o 1: 'B'
594 |/
594 |/
595 o 0: 'A'
595 o 0: 'A'
596
596
597 $ cd ..
597 $ cd ..
598
598
599
599
600 rebase with multiple roots.
600 rebase with multiple roots.
601 We rebase E and G onto B
601 We rebase E and G onto B
602 We would expect the heads to be I and F if this were supported
602 We would expect the heads to be I and F if this were supported
603
603
604 $ hg clone -q -u . ah ah6
604 $ hg clone -q -u . ah ah6
605 $ cd ah6
605 $ cd ah6
606 $ hg rebase -r '(4+6)::' -d 1
606 $ hg rebase -r '(4+6)::' -d 1
607 rebasing 4:c01897464e7f "E"
607 rebasing 4:c01897464e7f "E"
608 rebasing 5:41bfcc75ed73 "F"
608 rebasing 5:41bfcc75ed73 "F"
609 rebasing 6:3d8a618087a7 "G"
609 rebasing 6:3d8a618087a7 "G"
610 rebasing 7:72434a4e60b0 "H"
610 rebasing 7:72434a4e60b0 "H"
611 rebasing 8:479ddb54a924 "I" (tip)
611 rebasing 8:479ddb54a924 "I" (tip)
612 saved backup bundle to $TESTTMP/ah6/.hg/strip-backup/3d8a618087a7-aae93a24-backup.hg (glob)
612 saved backup bundle to $TESTTMP/ah6/.hg/strip-backup/3d8a618087a7-aae93a24-backup.hg (glob)
613 $ hg tglog
613 $ hg tglog
614 o 8: 'I'
614 o 8: 'I'
615 |
615 |
616 o 7: 'H'
616 o 7: 'H'
617 |
617 |
618 o 6: 'G'
618 o 6: 'G'
619 |
619 |
620 | o 5: 'F'
620 | o 5: 'F'
621 | |
621 | |
622 | o 4: 'E'
622 | o 4: 'E'
623 |/
623 |/
624 | o 3: 'D'
624 | o 3: 'D'
625 | |
625 | |
626 | o 2: 'C'
626 | o 2: 'C'
627 | |
627 | |
628 o | 1: 'B'
628 o | 1: 'B'
629 |/
629 |/
630 o 0: 'A'
630 o 0: 'A'
631
631
632 $ cd ..
632 $ cd ..
633
633
634 More complex rebase with multiple roots
634 More complex rebase with multiple roots
635 each root has a different common ancestor with the destination, and this is a detach
635 each root has a different common ancestor with the destination, and this is a detach
636
636
637 (setup)
637 (setup)
638
638
639 $ hg clone -q -u . a a8
639 $ hg clone -q -u . a a8
640 $ cd a8
640 $ cd a8
641 $ echo I > I
641 $ echo I > I
642 $ hg add I
642 $ hg add I
643 $ hg commit -m I
643 $ hg commit -m I
644 $ hg up 4
644 $ hg up 4
645 1 files updated, 0 files merged, 3 files removed, 0 files unresolved
645 1 files updated, 0 files merged, 3 files removed, 0 files unresolved
646 $ echo I > J
646 $ echo I > J
647 $ hg add J
647 $ hg add J
648 $ hg commit -m J
648 $ hg commit -m J
649 created new head
649 created new head
650 $ echo I > K
650 $ echo I > K
651 $ hg add K
651 $ hg add K
652 $ hg commit -m K
652 $ hg commit -m K
653 $ hg tglog
653 $ hg tglog
654 @ 10: 'K'
654 @ 10: 'K'
655 |
655 |
656 o 9: 'J'
656 o 9: 'J'
657 |
657 |
658 | o 8: 'I'
658 | o 8: 'I'
659 | |
659 | |
660 | o 7: 'H'
660 | o 7: 'H'
661 | |
661 | |
662 +---o 6: 'G'
662 +---o 6: 'G'
663 | |/
663 | |/
664 | o 5: 'F'
664 | o 5: 'F'
665 | |
665 | |
666 o | 4: 'E'
666 o | 4: 'E'
667 |/
667 |/
668 | o 3: 'D'
668 | o 3: 'D'
669 | |
669 | |
670 | o 2: 'C'
670 | o 2: 'C'
671 | |
671 | |
672 | o 1: 'B'
672 | o 1: 'B'
673 |/
673 |/
674 o 0: 'A'
674 o 0: 'A'
675
675
676 (actual test)
676 (actual test)
677
677
678 $ hg rebase --dest 'desc(G)' --rev 'desc(K) + desc(I)'
678 $ hg rebase --dest 'desc(G)' --rev 'desc(K) + desc(I)'
679 rebasing 8:e7ec4e813ba6 "I"
679 rebasing 8:e7ec4e813ba6 "I"
680 rebasing 10:23a4ace37988 "K" (tip)
680 rebasing 10:23a4ace37988 "K" (tip)
681 saved backup bundle to $TESTTMP/a8/.hg/strip-backup/23a4ace37988-b06984b3-backup.hg (glob)
681 saved backup bundle to $TESTTMP/a8/.hg/strip-backup/23a4ace37988-b06984b3-backup.hg (glob)
682 $ hg log --rev 'children(desc(G))'
682 $ hg log --rev 'children(desc(G))'
683 changeset: 9:adb617877056
683 changeset: 9:adb617877056
684 parent: 6:eea13746799a
684 parent: 6:eea13746799a
685 user: test
685 user: test
686 date: Thu Jan 01 00:00:00 1970 +0000
686 date: Thu Jan 01 00:00:00 1970 +0000
687 summary: I
687 summary: I
688
688
689 changeset: 10:882431a34a0e
689 changeset: 10:882431a34a0e
690 tag: tip
690 tag: tip
691 parent: 6:eea13746799a
691 parent: 6:eea13746799a
692 user: test
692 user: test
693 date: Thu Jan 01 00:00:00 1970 +0000
693 date: Thu Jan 01 00:00:00 1970 +0000
694 summary: K
694 summary: K
695
695
696 $ hg tglog
696 $ hg tglog
697 @ 10: 'K'
697 @ 10: 'K'
698 |
698 |
699 | o 9: 'I'
699 | o 9: 'I'
700 |/
700 |/
701 | o 8: 'J'
701 | o 8: 'J'
702 | |
702 | |
703 | | o 7: 'H'
703 | | o 7: 'H'
704 | | |
704 | | |
705 o---+ 6: 'G'
705 o---+ 6: 'G'
706 |/ /
706 |/ /
707 | o 5: 'F'
707 | o 5: 'F'
708 | |
708 | |
709 o | 4: 'E'
709 o | 4: 'E'
710 |/
710 |/
711 | o 3: 'D'
711 | o 3: 'D'
712 | |
712 | |
713 | o 2: 'C'
713 | o 2: 'C'
714 | |
714 | |
715 | o 1: 'B'
715 | o 1: 'B'
716 |/
716 |/
717 o 0: 'A'
717 o 0: 'A'
718
718
719
719
720 Test that rebase is not confused by $CWD disappearing during rebase (issue4121)
720 Test that rebase is not confused by $CWD disappearing during rebase (issue4121)
721
721
722 $ cd ..
722 $ cd ..
723 $ hg init cwd-vanish
723 $ hg init cwd-vanish
724 $ cd cwd-vanish
724 $ cd cwd-vanish
725 $ touch initial-file
725 $ touch initial-file
726 $ hg add initial-file
726 $ hg add initial-file
727 $ hg commit -m 'initial commit'
727 $ hg commit -m 'initial commit'
728 $ touch dest-file
728 $ touch dest-file
729 $ hg add dest-file
729 $ hg add dest-file
730 $ hg commit -m 'dest commit'
730 $ hg commit -m 'dest commit'
731 $ hg up 0
731 $ hg up 0
732 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
732 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
733 $ touch other-file
733 $ touch other-file
734 $ hg add other-file
734 $ hg add other-file
735 $ hg commit -m 'first source commit'
735 $ hg commit -m 'first source commit'
736 created new head
736 created new head
737 $ mkdir subdir
737 $ mkdir subdir
738 $ cd subdir
738 $ cd subdir
739 $ touch subfile
739 $ touch subfile
740 $ hg add subfile
740 $ hg add subfile
741 $ hg commit -m 'second source with subdir'
741 $ hg commit -m 'second source with subdir'
742 $ hg rebase -b . -d 1 --traceback
742 $ hg rebase -b . -d 1 --traceback
743 rebasing 2:779a07b1b7a0 "first source commit"
743 rebasing 2:779a07b1b7a0 "first source commit"
744 rebasing 3:a7d6f3a00bf3 "second source with subdir" (tip)
744 rebasing 3:a7d6f3a00bf3 "second source with subdir" (tip)
745 saved backup bundle to $TESTTMP/cwd-vanish/.hg/strip-backup/779a07b1b7a0-853e0073-backup.hg (glob)
745 saved backup bundle to $TESTTMP/cwd-vanish/.hg/strip-backup/779a07b1b7a0-853e0073-backup.hg (glob)