obsolete: add operation metadata to rebase/amend/histedit obsmarkers...
Durham Goode
r32327:3546a771 default
# histedit.py - interactive history editing for mercurial
#
# Copyright 2009 Augie Fackler <raf@durin42.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""interactive history editing

With this extension installed, Mercurial gains one new command: histedit. Usage
is as follows, assuming the following history::

 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
 | Add delta
 |
 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
 | Add beta
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

If you were to run ``hg histedit c561b4e977df``, you would see the following
file open in your editor::

 pick c561b4e977df Add beta
 pick 030b686bedc4 Add gamma
 pick 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description and date
 #  d, drop = remove commit from history
 #  m, mess = edit commit message without changing commit content
 #

In this file, lines beginning with ``#`` are ignored. You must specify a rule
for each revision in your history. For example, if you had meant to add gamma
before beta, and then wanted to add delta in the same revision as beta, you
would reorganize the file to look like this::

 pick 030b686bedc4 Add gamma
 pick c561b4e977df Add beta
 fold 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description and date
 #  d, drop = remove commit from history
 #  m, mess = edit commit message without changing commit content
 #

At which point you close the editor and ``histedit`` starts working. When you
specify a ``fold`` operation, ``histedit`` will open an editor when it folds
those revisions together, offering you a chance to clean up the commit message::

 Add beta
 ***
 Add delta

Edit the commit message to your liking, then close the editor. The date used
for the commit will be the later of the two commits' dates. For this example,
let's assume that the commit message was changed to ``Add beta and delta.``
After histedit has run and had a chance to remove any old or temporary
revisions it needed, the history looks like this::

 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
 | Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

Note that ``histedit`` does *not* remove any revisions (even its own temporary
ones) until after it has completed all the editing operations, so it will
probably perform several strip operations when it's done. For the above example,
it had to run strip twice. Strip can be slow depending on a variety of factors,
so you might need to be a little patient. You can choose to keep the original
revisions by passing the ``--keep`` flag.

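For instance, to edit the same range while keeping the original revisions
around, you could run (a usage sketch based on the hashes above)::

 hg histedit --keep c561b4e977df
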
The ``edit`` operation will drop you back to a command prompt,
allowing you to edit files freely, or even use ``hg record`` to commit
some changes as a separate commit. When you're done, any remaining
uncommitted changes will be committed as well. When done, run ``hg
histedit --continue`` to finish this step. If there are uncommitted
changes, you'll be prompted for a new commit message, but the default
commit message will be the original message for the ``edit`` ed
revision, and the date of the original commit will be preserved.

The ``message`` operation will give you a chance to revise a commit
message without changing the contents. It's a shortcut for doing
``edit`` immediately followed by `hg histedit --continue``.

If ``histedit`` encounters a conflict when moving a revision (while
handling ``pick`` or ``fold``), it'll stop in a similar manner to
``edit`` with the difference that it won't prompt you for a commit
message when done. If you decide at this point that you don't like how
much work it will be to rearrange history, or that you made a mistake,
you can use ``hg histedit --abort`` to abandon the new changes you
have made and return to the state before you attempted to edit your
history.

If we clone the histedit-ed example repository above and add four more
changes, such that we have the following history::

 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
 | Add theta
 |
 o 5 140988835471 2009-04-27 18:04 -0500 stefan
 | Add eta
 |
 o 4 122930637314 2009-04-27 18:04 -0500 stefan
 | Add zeta
 |
 o 3 836302820282 2009-04-27 18:04 -0500 stefan
 | Add epsilon
 |
 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
 | Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

If you run ``hg histedit --outgoing`` on the clone then it is the same
as running ``hg histedit 836302820282``. If you plan to push to a
repository that Mercurial does not detect to be related to the source
repo, you can add a ``--force`` option.

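For example, to edit everything not yet pushed even when the destination
looks unrelated (a sketch; this assumes a push destination is configured)::

 hg histedit --outgoing --force
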
Config
------

Histedit rule lines are truncated to 80 characters by default. You
can customize this behavior by setting a different length in your
configuration file::

  [histedit]
  linelen = 120 # truncate rule lines at 120 characters

``hg histedit`` attempts to automatically choose an appropriate base
revision to use. To change which base revision is used, define a
revset in your configuration file::

  [histedit]
  defaultrev = only(.) & draft()

By default each edited revision needs to be present in the histedit commands.
To remove a revision you need to use the ``drop`` operation. You can configure
the drop to be implicit for missing commits by adding::

  [histedit]
  dropmissing = True

By default, histedit will close the transaction after each action. For
performance purposes, you can configure histedit to use a single transaction
across the entire histedit. WARNING: This setting introduces a significant risk
of losing the work you've done in a histedit if the histedit aborts
unexpectedly::

  [histedit]
  singletransaction = True

"""

from __future__ import absolute_import

import errno
import os

from mercurial.i18n import _
from mercurial import (
    bundle2,
    cmdutil,
    context,
    copies,
    destutil,
    discovery,
    error,
    exchange,
    extensions,
    hg,
    lock,
    merge as mergemod,
    mergeutil,
    node,
    obsolete,
    repair,
    scmutil,
    util,
)

pickle = util.pickle
release = lock.release
cmdtable = {}
command = cmdutil.command(cmdtable)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

actiontable = {}
primaryactions = set()
secondaryactions = set()
tertiaryactions = set()
internalactions = set()

def geteditcomment(ui, first, last):
    """ construct the editor comment
    The comment includes::
     - an intro
     - sorted primary commands
     - sorted short commands
     - sorted long commands
     - additional hints

    Commands are only included once.
    """
    intro = _("""Edit history between %s and %s

Commits are listed from least to most recent

You can reorder changesets by reordering the lines

Commands:
""")
    actions = []
    def addverb(v):
        a = actiontable[v]
        lines = a.message.split("\n")
        if len(a.verbs):
            v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
        actions.append(" %s = %s" % (v, lines[0]))
        actions.extend([' %s' % l for l in lines[1:]])

    for v in (
            sorted(primaryactions) +
            sorted(secondaryactions) +
            sorted(tertiaryactions)
            ):
        addverb(v)
    actions.append('')

    hints = []
    if ui.configbool('histedit', 'dropmissing'):
        hints.append("Deleting a changeset from the list "
                     "will DISCARD it from the edited history!")

    lines = (intro % (first, last)).split('\n') + actions + hints

    return ''.join(['# %s\n' % l if l else '#\n' for l in lines])

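# Note: the block built by geteditcomment() above is the commented help text
# that histedit appends to the rules file (see the example in the module
# docstring).
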
class histeditstate(object):
    def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
                 topmost=None, replacements=None, lock=None, wlock=None):
        self.repo = repo
        self.actions = actions
        self.keep = keep
        self.topmost = topmost
        self.parentctxnode = parentctxnode
        self.lock = lock
        self.wlock = wlock
        self.backupfile = None
        self.tr = None
        if replacements is None:
            self.replacements = []
        else:
            self.replacements = replacements

    def read(self):
        """Load histedit state from disk and set fields appropriately."""
        try:
            state = self.repo.vfs.read('histedit-state')
        except IOError as err:
            if err.errno != errno.ENOENT:
                raise
            cmdutil.wrongtooltocontinue(self.repo, _('histedit'))

        if state.startswith('v1\n'):
            data = self._load()
            parentctxnode, rules, keep, topmost, replacements, backupfile = data
        else:
            data = pickle.loads(state)
            parentctxnode, rules, keep, topmost, replacements = data
            backupfile = None

        self.parentctxnode = parentctxnode
        rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
        actions = parserules(rules, self)
        self.actions = actions
        self.keep = keep
        self.topmost = topmost
        self.replacements = replacements
        self.backupfile = backupfile

    def write(self, tr=None):
        if tr:
            tr.addfilegenerator('histedit-state', ('histedit-state',),
                                self._write, location='plain')
        else:
            with self.repo.vfs("histedit-state", "w") as f:
                self._write(f)

    def _write(self, fp):
        fp.write('v1\n')
        fp.write('%s\n' % node.hex(self.parentctxnode))
        fp.write('%s\n' % node.hex(self.topmost))
        fp.write('%s\n' % self.keep)
        fp.write('%d\n' % len(self.actions))
        for action in self.actions:
            fp.write('%s\n' % action.tostate())
        fp.write('%d\n' % len(self.replacements))
        for replacement in self.replacements:
            fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
                for r in replacement[1])))
        backupfile = self.backupfile
        if not backupfile:
            backupfile = ''
        fp.write('%s\n' % backupfile)

    def _load(self):
        fp = self.repo.vfs('histedit-state', 'r')
        lines = [l[:-1] for l in fp.readlines()]

        index = 0
        lines[index] # version number
        index += 1

        parentctxnode = node.bin(lines[index])
        index += 1

        topmost = node.bin(lines[index])
        index += 1

        keep = lines[index] == 'True'
        index += 1

        # Rules
        rules = []
        rulelen = int(lines[index])
        index += 1
        for i in xrange(rulelen):
            ruleaction = lines[index]
            index += 1
            rule = lines[index]
            index += 1
            rules.append((ruleaction, rule))

        # Replacements
        replacements = []
        replacementlen = int(lines[index])
        index += 1
        for i in xrange(replacementlen):
            replacement = lines[index]
            original = node.bin(replacement[:40])
            succ = [node.bin(replacement[i:i + 40]) for i in
                    range(40, len(replacement), 40)]
            replacements.append((original, succ))
            index += 1

        backupfile = lines[index]
        index += 1

        fp.close()

        return parentctxnode, rules, keep, topmost, replacements, backupfile
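
    # For reference, a sketch of the 'v1' histedit-state layout handled by
    # _write()/_load() above (inferred from the code, not a separate spec):
    #
    #   v1
    #   <parent node, hex>
    #   <topmost node, hex>
    #   <keep: True or False>
    #   <number of actions>
    #   <verb>, then <node hex>  (two lines per action, from tostate())
    #   <number of replacements>
    #   <original hex><successor hex>...  (one line per replacement)
    #   <backup file name, possibly empty>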

    def clear(self):
        if self.inprogress():
            self.repo.vfs.unlink('histedit-state')

    def inprogress(self):
        return self.repo.vfs.exists('histedit-state')


class histeditaction(object):
    def __init__(self, state, node):
        self.state = state
        self.repo = state.repo
        self.node = node

    @classmethod
    def fromrule(cls, state, rule):
        """Parses the given rule, returning an instance of the histeditaction.
        """
        rulehash = rule.strip().split(' ', 1)[0]
        try:
            rev = node.bin(rulehash)
        except TypeError:
            raise error.ParseError("invalid changeset %s" % rulehash)
        return cls(state, rev)

    def verify(self, prev, expected, seen):
        """ Verifies semantic correctness of the rule"""
        repo = self.repo
        ha = node.hex(self.node)
        try:
            self.node = repo[ha].node()
        except error.RepoError:
            raise error.ParseError(_('unknown changeset %s listed')
                                   % ha[:12])
        if self.node is not None:
            self._verifynodeconstraints(prev, expected, seen)

    def _verifynodeconstraints(self, prev, expected, seen):
        # by default commands need a node in the edited list
        if self.node not in expected:
            raise error.ParseError(_('%s "%s" changeset was not a candidate')
                                   % (self.verb, node.short(self.node)),
                                   hint=_('only use listed changesets'))
        # and only one command per node
        if self.node in seen:
            raise error.ParseError(_('duplicated command for changeset %s') %
                                   node.short(self.node))

    def torule(self):
        """build a histedit rule line for an action

        by default lines are in the form:
          <hash> <rev> <summary>
        """
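        # e.g. "pick 7c2fd3b9020c 3 Add delta" (an illustrative line; hash and
        # summary taken from the example history in the module docstring)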
        ctx = self.repo[self.node]
        summary = _getsummary(ctx)
        line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
        # trim to 75 columns by default so it's not stupidly wide in my editor
        # (the 5 more are left for verb)
        maxlen = self.repo.ui.configint('histedit', 'linelen', default=80)
        maxlen = max(maxlen, 22) # avoid truncating hash
        return util.ellipsis(line, maxlen)

    def tostate(self):
        """Print an action in format used by histedit state files
        (the first line is a verb, the remainder is the second)
        """
        return "%s\n%s" % (self.verb, node.hex(self.node))

    def run(self):
        """Runs the action. The default behavior is simply to apply the
        action's rulectx onto the current parentctx."""
        self.applychange()
        self.continuedirty()
        return self.continueclean()

    def applychange(self):
        """Applies the changes from this action's rulectx onto the current
        parentctx, but does not commit them."""
        repo = self.repo
        rulectx = repo[self.node]
        repo.ui.pushbuffer(error=True, labeled=True)
        hg.update(repo, self.state.parentctxnode, quietempty=True)
        stats = applychanges(repo.ui, repo, rulectx, {})
        if stats and stats[3] > 0:
            buf = repo.ui.popbuffer()
            repo.ui.write(*buf)
            raise error.InterventionRequired(
                _('Fix up the change (%s %s)') %
                (self.verb, node.short(self.node)),
                hint=_('hg histedit --continue to resume'))
        else:
            repo.ui.popbuffer()

    def continuedirty(self):
        """Continues the action when changes have been applied to the working
        copy. The default behavior is to commit the dirty changes."""
        repo = self.repo
        rulectx = repo[self.node]

        editor = self.commiteditor()
        commit = commitfuncfor(repo, rulectx)

        commit(text=rulectx.description(), user=rulectx.user(),
               date=rulectx.date(), extra=rulectx.extra(), editor=editor)

    def commiteditor(self):
        """The editor to be used to edit the commit message."""
        return False

    def continueclean(self):
        """Continues the action when the working copy is clean. The default
        behavior is to accept the current commit as the new version of the
        rulectx."""
        ctx = self.repo['.']
        if ctx.node() == self.state.parentctxnode:
            self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
                              node.short(self.node))
            return ctx, [(self.node, tuple())]
        if ctx.node() == self.node:
            # Nothing changed
            return ctx, []
        return ctx, [(self.node, (ctx.node(),))]

def commitfuncfor(repo, src):
    """Build a commit function for the replacement of <src>

    This function ensures we apply the same treatment to all changesets.

    - Add a 'histedit_source' entry in extra.

    Note that fold has its own separate logic because its handling is a bit
    different and not easily factored out of the fold method.
    """
    phasemin = src.phase()
    def commitfunc(**kwargs):
        overrides = {('phases', 'new-commit'): phasemin}
        with repo.ui.configoverride(overrides, 'histedit'):
            extra = kwargs.get('extra', {}).copy()
            extra['histedit_source'] = src.hex()
            kwargs['extra'] = extra
            return repo.commit(**kwargs)
    return commitfunc
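
# The commitfunc returned above is called with the same keyword arguments as
# repo.commit() (see histeditaction.continuedirty for a caller).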

def applychanges(ui, repo, ctx, opts):
    """Merge changeset from ctx (only) in the current working directory"""
    wcpar = repo.dirstate.parents()[0]
    if ctx.p1().node() == wcpar:
        # edits are "in place": we do not need to make any merge,
        # just apply changes on parent for editing
        cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
        stats = None
    else:
        try:
            # ui.forcemerge is an internal variable, do not document
            repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                              'histedit')
            stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
        finally:
            repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
    return stats

def collapse(repo, first, last, commitopts, skipprompt=False):
    """collapse the set of revisions from first to last as a new one.

    Expected commit options are:
        - message
        - date
        - username
    Commit message is edited in all cases.

    This function works in memory."""
    ctxs = list(repo.set('%d::%d', first, last))
    if not ctxs:
        return None
    for c in ctxs:
        if not c.mutable():
            raise error.ParseError(
                _("cannot fold into public change %s") % node.short(c.node()))
    base = first.parents()[0]

    # commit a new version of the old changeset, including the update
    # collect all files which might be affected
    files = set()
    for ctx in ctxs:
        files.update(ctx.files())

    # Recompute copies (avoid recording a -> b -> a)
    copied = copies.pathcopies(base, last)

    # prune files which were reverted by the updates
    files = [f for f in files if not cmdutil.samefile(f, last, base)]
    # commit version of these files as defined by head
    headmf = last.manifest()
    def filectxfn(repo, ctx, path):
        if path in headmf:
            fctx = last[path]
            flags = fctx.flags()
            mctx = context.memfilectx(repo,
                                      fctx.path(), fctx.data(),
                                      islink='l' in flags,
                                      isexec='x' in flags,
                                      copied=copied.get(path))
            return mctx
        return None

    if commitopts.get('message'):
        message = commitopts['message']
    else:
        message = first.description()
    user = commitopts.get('user')
    date = commitopts.get('date')
    extra = commitopts.get('extra')

    parents = (first.p1().node(), first.p2().node())
    editor = None
    if not skipprompt:
        editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
    new = context.memctx(repo,
                         parents=parents,
                         text=message,
                         files=files,
                         filectxfn=filectxfn,
                         user=user,
                         date=date,
                         extra=extra,
                         editor=editor)
    return repo.commitctx(new)

def _isdirtywc(repo):
    return repo[None].dirty(missing=True)

def abortdirty():
    raise error.Abort(_('working copy has pending changes'),
                      hint=_('amend, commit, or revert them and run histedit '
                             '--continue, or abort with histedit --abort'))

def action(verbs, message, priority=False, internal=False):
    def wrap(cls):
        assert not priority or not internal
        verb = verbs[0]
        if priority:
            primaryactions.add(verb)
        elif internal:
            internalactions.add(verb)
        elif len(verbs) > 1:
            secondaryactions.add(verb)
        else:
            tertiaryactions.add(verb)

        cls.verb = verb
        cls.verbs = verbs
        cls.message = message
        for verb in verbs:
            actiontable[verb] = cls
        return cls
    return wrap

@action(['pick', 'p'],
        _('use commit'),
        priority=True)
class pick(histeditaction):
    def run(self):
        rulectx = self.repo[self.node]
        if rulectx.parents()[0].node() == self.state.parentctxnode:
            self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
            return rulectx, []

        return super(pick, self).run()

@action(['edit', 'e'],
        _('use commit, but stop for amending'),
        priority=True)
class edit(histeditaction):
    def run(self):
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode, quietempty=True)
        applychanges(repo.ui, repo, rulectx, {})
        raise error.InterventionRequired(
            _('Editing (%s), you may commit or record as needed now.')
            % node.short(self.node),
            hint=_('hg histedit --continue to resume'))

    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')

@action(['fold', 'f'],
        _('use commit, but combine it with the one above'))
class fold(histeditaction):
    def verify(self, prev, expected, seen):
        """ Verifies semantic correctness of the fold rule"""
        super(fold, self).verify(prev, expected, seen)
        repo = self.repo
        if not prev:
            c = repo[self.node].parents()[0]
        elif prev.verb not in ('pick', 'base'):
            return
        else:
            c = repo[prev.node]
        if not c.mutable():
            raise error.ParseError(
                _("cannot fold into public change %s") % node.short(c.node()))


    def continuedirty(self):
        repo = self.repo
        rulectx = repo[self.node]

        commit = commitfuncfor(repo, rulectx)
        commit(text='fold-temp-revision %s' % node.short(self.node),
               user=rulectx.user(), date=rulectx.date(),
               extra=rulectx.extra())

    def continueclean(self):
        repo = self.repo
        ctx = repo['.']
        rulectx = repo[self.node]
        parentctxnode = self.state.parentctxnode
        if ctx.node() == parentctxnode:
            repo.ui.warn(_('%s: empty changeset\n') %
                         node.short(self.node))
            return ctx, [(self.node, (parentctxnode,))]

        parentctx = repo[parentctxnode]
        newcommits = set(c.node() for c in repo.set('(%d::. - %d)', parentctx,
                                                    parentctx))
        if not newcommits:
            repo.ui.warn(_('%s: cannot fold - working copy is not a '
                           'descendant of previous commit %s\n') %
                         (node.short(self.node), node.short(parentctxnode)))
            return ctx, [(self.node, (ctx.node(),))]

        middlecommits = newcommits.copy()
        middlecommits.discard(ctx.node())

        return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
                               middlecommits)

    def skipprompt(self):
        """Returns true if the rule should skip the message editor.

        For example, 'fold' wants to show an editor, but 'rollup'
        doesn't want to.
        """
        return False

    def mergedescs(self):
        """Returns true if the rule should merge messages of multiple changes.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return True

    def firstdate(self):
        """Returns true if the rule should preserve the date of the first
        change.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return False

    def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
        parent = ctx.parents()[0].node()
        repo.ui.pushbuffer()
        hg.update(repo, parent)
        repo.ui.popbuffer()
        ### prepare new commit data
        commitopts = {}
        commitopts['user'] = ctx.user()
        # commit message
        if not self.mergedescs():
            newmessage = ctx.description()
        else:
            newmessage = '\n***\n'.join(
                [ctx.description()] +
                [repo[r].description() for r in internalchanges] +
                [oldctx.description()]) + '\n'
        commitopts['message'] = newmessage
        # date
        if self.firstdate():
            commitopts['date'] = ctx.date()
        else:
            commitopts['date'] = max(ctx.date(), oldctx.date())
        extra = ctx.extra().copy()
        # histedit_source
        # note: ctx is likely a temporary commit but that's the best we can do
        # here. This is sufficient to solve issue3681 anyway.
        extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
        commitopts['extra'] = extra
        phasemin = max(ctx.phase(), oldctx.phase())
        overrides = {('phases', 'new-commit'): phasemin}
        with repo.ui.configoverride(overrides, 'histedit'):
            n = collapse(repo, ctx, repo[newnode], commitopts,
                         skipprompt=self.skipprompt())
        if n is None:
            return ctx, []
        repo.ui.pushbuffer()
        hg.update(repo, n)
        repo.ui.popbuffer()
        replacements = [(oldctx.node(), (newnode,)),
                        (ctx.node(), (n,)),
                        (newnode, (n,)),
                        ]
        for ich in internalchanges:
            replacements.append((ich, (n,)))
        return repo[n], replacements

class base(histeditaction):

    def run(self):
        if self.repo['.'].node() != self.node:
            mergemod.update(self.repo, self.node, False, True)
            # (branchmerge, force)
        return self.continueclean()

    def continuedirty(self):
        abortdirty()

    def continueclean(self):
        basectx = self.repo['.']
        return basectx, []

    def _verifynodeconstraints(self, prev, expected, seen):
        # base can only be used with a node not in the edited set
        if self.node in expected:
            msg = _('%s "%s" changeset was an edited list candidate')
            raise error.ParseError(
                msg % (self.verb, node.short(self.node)),
                hint=_('base must only use unlisted changesets'))

@action(['_multifold'],
        _(
    """fold subclass used for when multiple folds happen in a row

    We only want to fire the editor for the folded message once when
    (say) four changes are folded down into a single change. This is
    similar to rollup, but we should preserve both messages so that
    when the last fold operation runs we can show the user all the
    commit messages in their editor.
    """),
        internal=True)
class _multifold(fold):
    def skipprompt(self):
        return True

@action(["roll", "r"],
        _("like fold, but discard this commit's description and date"))
class rollup(fold):
    def mergedescs(self):
        return False

    def skipprompt(self):
        return True

    def firstdate(self):
        return True

@action(["drop", "d"],
        _('remove commit from history'))
class drop(histeditaction):
    def run(self):
        parentctx = self.repo[self.state.parentctxnode]
        return parentctx, [(self.node, tuple())]

@action(["mess", "m"],
        _('edit commit message without changing commit content'),
        priority=True)
class message(histeditaction):
    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')

def findoutgoing(ui, repo, remote=None, force=False, opts=None):
    """utility function to find the first outgoing changeset

    Used by initialization code"""
    if opts is None:
        opts = {}
    dest = ui.expandpath(remote or 'default-push', remote or 'default')
    dest, revs = hg.parseurl(dest, None)[:2]
    ui.status(_('comparing with %s\n') % util.hidepassword(dest))

    revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(rev) for rev in revs]

    outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
    if not outgoing.missing:
        raise error.Abort(_('no outgoing ancestors'))
    roots = list(repo.revs("roots(%ln)", outgoing.missing))
    if 1 < len(roots):
        msg = _('there are ambiguous outgoing revisions')
        hint = _("see 'hg help histedit' for more detail")
        raise error.Abort(msg, hint=hint)
    return repo.lookup(roots[0])
885
885
886
886
887 @command('histedit',
887 @command('histedit',
888 [('', 'commands', '',
888 [('', 'commands', '',
889 _('read history edits from the specified file'), _('FILE')),
889 _('read history edits from the specified file'), _('FILE')),
890 ('c', 'continue', False, _('continue an edit already in progress')),
890 ('c', 'continue', False, _('continue an edit already in progress')),
891 ('', 'edit-plan', False, _('edit remaining actions list')),
891 ('', 'edit-plan', False, _('edit remaining actions list')),
892 ('k', 'keep', False,
892 ('k', 'keep', False,
893 _("don't strip old nodes after edit is complete")),
893 _("don't strip old nodes after edit is complete")),
894 ('', 'abort', False, _('abort an edit in progress')),
894 ('', 'abort', False, _('abort an edit in progress')),
895 ('o', 'outgoing', False, _('changesets not found in destination')),
895 ('o', 'outgoing', False, _('changesets not found in destination')),
896 ('f', 'force', False,
896 ('f', 'force', False,
897 _('force outgoing even for unrelated repositories')),
897 _('force outgoing even for unrelated repositories')),
898 ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
898 ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
899 _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"))
899 _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"))
900 def histedit(ui, repo, *freeargs, **opts):
900 def histedit(ui, repo, *freeargs, **opts):
901 """interactively edit changeset history
901 """interactively edit changeset history
902
902
903 This command lets you edit a linear series of changesets (up to
903 This command lets you edit a linear series of changesets (up to
904 and including the working directory, which should be clean).
904 and including the working directory, which should be clean).
905 You can:
905 You can:
906
906
907 - `pick` to [re]order a changeset
907 - `pick` to [re]order a changeset
908
908
909 - `drop` to omit changeset
909 - `drop` to omit changeset
910
910
911 - `mess` to reword the changeset commit message
911 - `mess` to reword the changeset commit message
912
912
913 - `fold` to combine it with the preceding changeset (using the later date)
913 - `fold` to combine it with the preceding changeset (using the later date)
914
914
915 - `roll` like fold, but discarding this commit's description and date
915 - `roll` like fold, but discarding this commit's description and date
916
916
917 - `edit` to edit this changeset (preserving date)
917 - `edit` to edit this changeset (preserving date)
918
918
919 There are a number of ways to select the root changeset:
919 There are a number of ways to select the root changeset:
920
920
921 - Specify ANCESTOR directly
921 - Specify ANCESTOR directly
922
922
923 - Use --outgoing -- it will be the first linear changeset not
923 - Use --outgoing -- it will be the first linear changeset not
924 included in destination. (See :hg:`help config.paths.default-push`)
924 included in destination. (See :hg:`help config.paths.default-push`)
925
925
926 - Otherwise, the value from the "histedit.defaultrev" config option
926 - Otherwise, the value from the "histedit.defaultrev" config option
927 is used as a revset to select the base revision when ANCESTOR is not
927 is used as a revset to select the base revision when ANCESTOR is not
928 specified. The first revision returned by the revset is used. By
928 specified. The first revision returned by the revset is used. By
929 default, this selects the editable history that is unique to the
929 default, this selects the editable history that is unique to the
930 ancestry of the working directory.
930 ancestry of the working directory.
931
931
932 .. container:: verbose
932 .. container:: verbose
933
933
934 If you use --outgoing, this command will abort if there are ambiguous
934 If you use --outgoing, this command will abort if there are ambiguous
935 outgoing revisions. For example, if there are multiple branches
935 outgoing revisions. For example, if there are multiple branches
936 containing outgoing revisions.
936 containing outgoing revisions.
937
937
938 Use "min(outgoing() and ::.)" or similar revset specification
938 Use "min(outgoing() and ::.)" or similar revset specification
939 instead of --outgoing to specify edit target revision exactly in
939 instead of --outgoing to specify edit target revision exactly in
940 such ambiguous situation. See :hg:`help revsets` for detail about
940 such ambiguous situation. See :hg:`help revsets` for detail about
941 selecting revisions.
941 selecting revisions.
942
942
943 .. container:: verbose
943 .. container:: verbose
944
944
945 Examples:
945 Examples:
946
946
947 - A number of changes have been made.
947 - A number of changes have been made.
948 Revision 3 is no longer needed.
948 Revision 3 is no longer needed.
949
949
950 Start history editing from revision 3::
950 Start history editing from revision 3::
951
951
952 hg histedit -r 3
952 hg histedit -r 3
953
953
954 An editor opens, containing the list of revisions,
954 An editor opens, containing the list of revisions,
955 with specific actions specified::
955 with specific actions specified::
956
956
957 pick 5339bf82f0ca 3 Zworgle the foobar
957 pick 5339bf82f0ca 3 Zworgle the foobar
958 pick 8ef592ce7cc4 4 Bedazzle the zerlog
958 pick 8ef592ce7cc4 4 Bedazzle the zerlog
959 pick 0a9639fcda9d 5 Morgify the cromulancy
959 pick 0a9639fcda9d 5 Morgify the cromulancy
960
960
961 Additional information about the possible actions
961 Additional information about the possible actions
962 to take appears below the list of revisions.
962 to take appears below the list of revisions.
963
963
964 To remove revision 3 from the history,
964 To remove revision 3 from the history,
965 its action (at the beginning of the relevant line)
965 its action (at the beginning of the relevant line)
966 is changed to 'drop'::
966 is changed to 'drop'::
967
967
968 drop 5339bf82f0ca 3 Zworgle the foobar
968 drop 5339bf82f0ca 3 Zworgle the foobar
969 pick 8ef592ce7cc4 4 Bedazzle the zerlog
969 pick 8ef592ce7cc4 4 Bedazzle the zerlog
970 pick 0a9639fcda9d 5 Morgify the cromulancy
970 pick 0a9639fcda9d 5 Morgify the cromulancy
971
971
972 - A number of changes have been made.
972 - A number of changes have been made.
973 Revision 2 and 4 need to be swapped.
973 Revision 2 and 4 need to be swapped.
974
974
975 Start history editing from revision 2::
975 Start history editing from revision 2::
976
976
977 hg histedit -r 2
977 hg histedit -r 2
978
978
979 An editor opens, containing the list of revisions,
979 An editor opens, containing the list of revisions,
980 with specific actions specified::
980 with specific actions specified::
981
981
982 pick 252a1af424ad 2 Blorb a morgwazzle
982 pick 252a1af424ad 2 Blorb a morgwazzle
983 pick 5339bf82f0ca 3 Zworgle the foobar
983 pick 5339bf82f0ca 3 Zworgle the foobar
984 pick 8ef592ce7cc4 4 Bedazzle the zerlog
984 pick 8ef592ce7cc4 4 Bedazzle the zerlog
985
985
986 To swap revision 2 and 4, its lines are swapped
986 To swap revision 2 and 4, its lines are swapped
987 in the editor::
987 in the editor::
988
988
989 pick 8ef592ce7cc4 4 Bedazzle the zerlog
989 pick 8ef592ce7cc4 4 Bedazzle the zerlog
990 pick 5339bf82f0ca 3 Zworgle the foobar
990 pick 5339bf82f0ca 3 Zworgle the foobar
991 pick 252a1af424ad 2 Blorb a morgwazzle
991 pick 252a1af424ad 2 Blorb a morgwazzle
992
992
993 Returns 0 on success, 1 if user intervention is required (not only
993 Returns 0 on success, 1 if user intervention is required (not only
994 for intentional "edit" command, but also for resolving unexpected
994 for intentional "edit" command, but also for resolving unexpected
995 conflicts).
995 conflicts).
996 """
996 """
997 state = histeditstate(repo)
997 state = histeditstate(repo)
998 try:
998 try:
999 state.wlock = repo.wlock()
999 state.wlock = repo.wlock()
1000 state.lock = repo.lock()
1000 state.lock = repo.lock()
1001 _histedit(ui, repo, state, *freeargs, **opts)
1001 _histedit(ui, repo, state, *freeargs, **opts)
1002 finally:
1002 finally:
1003 release(state.lock, state.wlock)
1003 release(state.lock, state.wlock)
1004
1004
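Editor's note: the command defined above can also be driven non-interactively by supplying a plan through --commands. The following is a minimal illustrative sketch, not part of the extension; the repository path, revision, and hashes are placeholders.

import subprocess
import tempfile

def run_noninteractive_histedit(repo_path, root_rev, plan_lines):
    """Write a histedit plan to a temporary file and run
    'hg histedit -r ROOT --commands PLAN', bypassing the interactive editor."""
    with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
        f.write('\n'.join(plan_lines) + '\n')
        plan_path = f.name
    # -r selects the first revision to be edited; --commands supplies the plan.
    return subprocess.call(['hg', '-R', repo_path, 'histedit',
                            '-r', root_rev, '--commands', plan_path])

# Example plan, mirroring the docstring above (hashes are hypothetical):
# run_noninteractive_histedit('.', '3', [
#     'drop 5339bf82f0ca 3 Zworgle the foobar',
#     'pick 8ef592ce7cc4 4 Bedazzle the zerlog',
#     'pick 0a9639fcda9d 5 Morgify the cromulancy',
# ])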
1005 goalcontinue = 'continue'
1005 goalcontinue = 'continue'
1006 goalabort = 'abort'
1006 goalabort = 'abort'
1007 goaleditplan = 'edit-plan'
1007 goaleditplan = 'edit-plan'
1008 goalnew = 'new'
1008 goalnew = 'new'
1009
1009
1010 def _getgoal(opts):
1010 def _getgoal(opts):
1011 if opts.get('continue'):
1011 if opts.get('continue'):
1012 return goalcontinue
1012 return goalcontinue
1013 if opts.get('abort'):
1013 if opts.get('abort'):
1014 return goalabort
1014 return goalabort
1015 if opts.get('edit_plan'):
1015 if opts.get('edit_plan'):
1016 return goaleditplan
1016 return goaleditplan
1017 return goalnew
1017 return goalnew
1018
1018
1019 def _readfile(ui, path):
1019 def _readfile(ui, path):
1020 if path == '-':
1020 if path == '-':
1021 with ui.timeblockedsection('histedit'):
1021 with ui.timeblockedsection('histedit'):
1022 return ui.fin.read()
1022 return ui.fin.read()
1023 else:
1023 else:
1024 with open(path, 'rb') as f:
1024 with open(path, 'rb') as f:
1025 return f.read()
1025 return f.read()
1026
1026
1027 def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
1027 def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
1028 # TODO only abort if we try to histedit mq patches, not just
1028 # TODO only abort if we try to histedit mq patches, not just
1029 # blanket if mq patches are applied somewhere
1029 # blanket if mq patches are applied somewhere
1030 mq = getattr(repo, 'mq', None)
1030 mq = getattr(repo, 'mq', None)
1031 if mq and mq.applied:
1031 if mq and mq.applied:
1032 raise error.Abort(_('source has mq patches applied'))
1032 raise error.Abort(_('source has mq patches applied'))
1033
1033
1034 # basic argument incompatibility processing
1034 # basic argument incompatibility processing
1035 outg = opts.get('outgoing')
1035 outg = opts.get('outgoing')
1036 editplan = opts.get('edit_plan')
1036 editplan = opts.get('edit_plan')
1037 abort = opts.get('abort')
1037 abort = opts.get('abort')
1038 force = opts.get('force')
1038 force = opts.get('force')
1039 if force and not outg:
1039 if force and not outg:
1040 raise error.Abort(_('--force only allowed with --outgoing'))
1040 raise error.Abort(_('--force only allowed with --outgoing'))
1041 if goal == 'continue':
1041 if goal == 'continue':
1042 if any((outg, abort, revs, freeargs, rules, editplan)):
1042 if any((outg, abort, revs, freeargs, rules, editplan)):
1043 raise error.Abort(_('no arguments allowed with --continue'))
1043 raise error.Abort(_('no arguments allowed with --continue'))
1044 elif goal == 'abort':
1044 elif goal == 'abort':
1045 if any((outg, revs, freeargs, rules, editplan)):
1045 if any((outg, revs, freeargs, rules, editplan)):
1046 raise error.Abort(_('no arguments allowed with --abort'))
1046 raise error.Abort(_('no arguments allowed with --abort'))
1047 elif goal == 'edit-plan':
1047 elif goal == 'edit-plan':
1048 if any((outg, revs, freeargs)):
1048 if any((outg, revs, freeargs)):
1049 raise error.Abort(_('only --commands argument allowed with '
1049 raise error.Abort(_('only --commands argument allowed with '
1050 '--edit-plan'))
1050 '--edit-plan'))
1051 else:
1051 else:
1052 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1052 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1053 raise error.Abort(_('history edit already in progress, try '
1053 raise error.Abort(_('history edit already in progress, try '
1054 '--continue or --abort'))
1054 '--continue or --abort'))
1055 if outg:
1055 if outg:
1056 if revs:
1056 if revs:
1057 raise error.Abort(_('no revisions allowed with --outgoing'))
1057 raise error.Abort(_('no revisions allowed with --outgoing'))
1058 if len(freeargs) > 1:
1058 if len(freeargs) > 1:
1059 raise error.Abort(
1059 raise error.Abort(
1060 _('only one repo argument allowed with --outgoing'))
1060 _('only one repo argument allowed with --outgoing'))
1061 else:
1061 else:
1062 revs.extend(freeargs)
1062 revs.extend(freeargs)
1063 if len(revs) == 0:
1063 if len(revs) == 0:
1064 defaultrev = destutil.desthistedit(ui, repo)
1064 defaultrev = destutil.desthistedit(ui, repo)
1065 if defaultrev is not None:
1065 if defaultrev is not None:
1066 revs.append(defaultrev)
1066 revs.append(defaultrev)
1067
1067
1068 if len(revs) != 1:
1068 if len(revs) != 1:
1069 raise error.Abort(
1069 raise error.Abort(
1070 _('histedit requires exactly one ancestor revision'))
1070 _('histedit requires exactly one ancestor revision'))
1071
1071
1072 def _histedit(ui, repo, state, *freeargs, **opts):
1072 def _histedit(ui, repo, state, *freeargs, **opts):
1073 goal = _getgoal(opts)
1073 goal = _getgoal(opts)
1074 revs = opts.get('rev', [])
1074 revs = opts.get('rev', [])
1075 rules = opts.get('commands', '')
1075 rules = opts.get('commands', '')
1076 state.keep = opts.get('keep', False)
1076 state.keep = opts.get('keep', False)
1077
1077
1078 _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)
1078 _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)
1079
1079
1080 # rebuild state
1080 # rebuild state
1081 if goal == goalcontinue:
1081 if goal == goalcontinue:
1082 state.read()
1082 state.read()
1083 state = bootstrapcontinue(ui, state, opts)
1083 state = bootstrapcontinue(ui, state, opts)
1084 elif goal == goaleditplan:
1084 elif goal == goaleditplan:
1085 _edithisteditplan(ui, repo, state, rules)
1085 _edithisteditplan(ui, repo, state, rules)
1086 return
1086 return
1087 elif goal == goalabort:
1087 elif goal == goalabort:
1088 _aborthistedit(ui, repo, state)
1088 _aborthistedit(ui, repo, state)
1089 return
1089 return
1090 else:
1090 else:
1091 # goal == goalnew
1091 # goal == goalnew
1092 _newhistedit(ui, repo, state, revs, freeargs, opts)
1092 _newhistedit(ui, repo, state, revs, freeargs, opts)
1093
1093
1094 _continuehistedit(ui, repo, state)
1094 _continuehistedit(ui, repo, state)
1095 _finishhistedit(ui, repo, state)
1095 _finishhistedit(ui, repo, state)
1096
1096
1097 def _continuehistedit(ui, repo, state):
1098 """This function runs after either:
1099 - bootstrapcontinue (if the goal is 'continue')
1100 - _newhistedit (if the goal is 'new')
1101 """
1102 # preprocess rules so that we can hide inner folds from the user
1102 # preprocess rules so that we can hide inner folds from the user
1103 # and only show one editor
1103 # and only show one editor
1104 actions = state.actions[:]
1104 actions = state.actions[:]
1105 for idx, (action, nextact) in enumerate(
1105 for idx, (action, nextact) in enumerate(
1106 zip(actions, actions[1:] + [None])):
1106 zip(actions, actions[1:] + [None])):
1107 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1107 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1108 state.actions[idx].__class__ = _multifold
1108 state.actions[idx].__class__ = _multifold
1109
1109
1110 total = len(state.actions)
1110 total = len(state.actions)
1111 pos = 0
1111 pos = 0
1112 state.tr = None
1112 state.tr = None
1113
1113
1114 # Force an initial state file write, so the user can run --abort/continue
1114 # Force an initial state file write, so the user can run --abort/continue
1115 # even if there's an exception before the first transaction is serialized.
1116 state.write()
1116 state.write()
1117 try:
1117 try:
1118 # Don't use singletransaction by default since it rolls the entire
1118 # Don't use singletransaction by default since it rolls the entire
1119 # transaction back if an unexpected exception happens (like a
1119 # transaction back if an unexpected exception happens (like a
1120 # pretxncommit hook throws, or the user aborts the commit msg editor).
1120 # pretxncommit hook throws, or the user aborts the commit msg editor).
1121 if ui.configbool("histedit", "singletransaction", False):
1121 if ui.configbool("histedit", "singletransaction", False):
1122 # Don't use a 'with' for the transaction, since actions may close
1122 # Don't use a 'with' for the transaction, since actions may close
1123 # and reopen a transaction. For example, if the action executes an
1123 # and reopen a transaction. For example, if the action executes an
1124 # external process it may choose to commit the transaction first.
1124 # external process it may choose to commit the transaction first.
1125 state.tr = repo.transaction('histedit')
1125 state.tr = repo.transaction('histedit')
1126
1126
1127 while state.actions:
1127 while state.actions:
1128 state.write(tr=state.tr)
1128 state.write(tr=state.tr)
1129 actobj = state.actions[0]
1129 actobj = state.actions[0]
1130 pos += 1
1130 pos += 1
1131 ui.progress(_("editing"), pos, actobj.torule(),
1131 ui.progress(_("editing"), pos, actobj.torule(),
1132 _('changes'), total)
1132 _('changes'), total)
1133 ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
1133 ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
1134 actobj.torule()))
1134 actobj.torule()))
1135 parentctx, replacement_ = actobj.run()
1135 parentctx, replacement_ = actobj.run()
1136 state.parentctxnode = parentctx.node()
1136 state.parentctxnode = parentctx.node()
1137 state.replacements.extend(replacement_)
1137 state.replacements.extend(replacement_)
1138 state.actions.pop(0)
1138 state.actions.pop(0)
1139
1139
1140 if state.tr is not None:
1140 if state.tr is not None:
1141 state.tr.close()
1141 state.tr.close()
1142 except error.InterventionRequired:
1142 except error.InterventionRequired:
1143 if state.tr is not None:
1143 if state.tr is not None:
1144 state.tr.close()
1144 state.tr.close()
1145 raise
1145 raise
1146 except Exception:
1146 except Exception:
1147 if state.tr is not None:
1147 if state.tr is not None:
1148 state.tr.abort()
1148 state.tr.abort()
1149 raise
1149 raise
1150
1150
1151 state.write()
1151 state.write()
1152 ui.progress(_("editing"), None)
1152 ui.progress(_("editing"), None)
1153
1153
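The loop above writes the remaining plan to disk before running each action, which is what makes --continue and --abort possible after an interruption. A tiny standalone sketch of that pattern, using plain JSON instead of histedit's state file (illustrative only):

import json
import os

def run_resumable(actions, statefile='plan.json'):
    """Toy version of the loop above: persist the remaining plan before
    each step so the run can be resumed after an interruption."""
    if os.path.exists(statefile):
        # resume from a previous run (assumes actions are JSON-serializable
        # descriptions, unlike real histedit action objects)
        with open(statefile) as f:
            actions = json.load(f)
    while actions:
        with open(statefile, 'w') as f:
            json.dump(actions, f)        # like state.write(tr=state.tr)
        step = actions[0]
        print('processing %s' % step)    # like actobj.run()
        actions.pop(0)                   # only drop the step once it succeeded
    os.remove(statefile)                 # like state.clear()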
1154 def _finishhistedit(ui, repo, state):
1154 def _finishhistedit(ui, repo, state):
1155 """This action runs when histedit is finishing its session"""
1155 """This action runs when histedit is finishing its session"""
1156 repo.ui.pushbuffer()
1156 repo.ui.pushbuffer()
1157 hg.update(repo, state.parentctxnode, quietempty=True)
1157 hg.update(repo, state.parentctxnode, quietempty=True)
1158 repo.ui.popbuffer()
1158 repo.ui.popbuffer()
1159
1159
1160 mapping, tmpnodes, created, ntm = processreplacement(state)
1160 mapping, tmpnodes, created, ntm = processreplacement(state)
1161 if mapping:
1161 if mapping:
1162 for prec, succs in mapping.iteritems():
1162 for prec, succs in mapping.iteritems():
1163 if not succs:
1163 if not succs:
1164 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1164 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1165 else:
1165 else:
1166 ui.debug('histedit: %s is replaced by %s\n' % (
1166 ui.debug('histedit: %s is replaced by %s\n' % (
1167 node.short(prec), node.short(succs[0])))
1167 node.short(prec), node.short(succs[0])))
1168 if len(succs) > 1:
1168 if len(succs) > 1:
1169 m = 'histedit: %s'
1169 m = 'histedit: %s'
1170 for n in succs[1:]:
1170 for n in succs[1:]:
1171 ui.debug(m % node.short(n))
1171 ui.debug(m % node.short(n))
1172
1172
1173 safecleanupnode(ui, repo, 'temp', tmpnodes)
1173 safecleanupnode(ui, repo, 'temp', tmpnodes)
1174
1174
1175 if not state.keep:
1175 if not state.keep:
1176 if mapping:
1176 if mapping:
1177 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1177 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1178 # TODO update mq state
1178 # TODO update mq state
1179 safecleanupnode(ui, repo, 'replaced', mapping)
1179 safecleanupnode(ui, repo, 'replaced', mapping)
1180
1180
1181 state.clear()
1181 state.clear()
1182 if os.path.exists(repo.sjoin('undo')):
1182 if os.path.exists(repo.sjoin('undo')):
1183 os.unlink(repo.sjoin('undo'))
1183 os.unlink(repo.sjoin('undo'))
1184 if repo.vfs.exists('histedit-last-edit.txt'):
1184 if repo.vfs.exists('histedit-last-edit.txt'):
1185 repo.vfs.unlink('histedit-last-edit.txt')
1185 repo.vfs.unlink('histedit-last-edit.txt')
1186
1186
1187 def _aborthistedit(ui, repo, state):
1187 def _aborthistedit(ui, repo, state):
1188 try:
1188 try:
1189 state.read()
1189 state.read()
1190 __, leafs, tmpnodes, __ = processreplacement(state)
1190 __, leafs, tmpnodes, __ = processreplacement(state)
1191 ui.debug('restore wc to old parent %s\n'
1191 ui.debug('restore wc to old parent %s\n'
1192 % node.short(state.topmost))
1192 % node.short(state.topmost))
1193
1193
1194 # Recover our old commits if necessary
1194 # Recover our old commits if necessary
1195 if not state.topmost in repo and state.backupfile:
1195 if not state.topmost in repo and state.backupfile:
1196 backupfile = repo.vfs.join(state.backupfile)
1196 backupfile = repo.vfs.join(state.backupfile)
1197 f = hg.openpath(ui, backupfile)
1197 f = hg.openpath(ui, backupfile)
1198 gen = exchange.readbundle(ui, f, backupfile)
1198 gen = exchange.readbundle(ui, f, backupfile)
1199 with repo.transaction('histedit.abort') as tr:
1199 with repo.transaction('histedit.abort') as tr:
1200 if not isinstance(gen, bundle2.unbundle20):
1200 if not isinstance(gen, bundle2.unbundle20):
1201 gen.apply(repo, 'histedit', 'bundle:' + backupfile)
1201 gen.apply(repo, 'histedit', 'bundle:' + backupfile)
1202 if isinstance(gen, bundle2.unbundle20):
1202 if isinstance(gen, bundle2.unbundle20):
1203 bundle2.applybundle(repo, gen, tr,
1203 bundle2.applybundle(repo, gen, tr,
1204 source='histedit',
1204 source='histedit',
1205 url='bundle:' + backupfile)
1205 url='bundle:' + backupfile)
1206
1206
1207 os.remove(backupfile)
1207 os.remove(backupfile)
1208
1208
1209 # check whether we should update away
1209 # check whether we should update away
1210 if repo.unfiltered().revs('parents() and (%n or %ln::)',
1210 if repo.unfiltered().revs('parents() and (%n or %ln::)',
1211 state.parentctxnode, leafs | tmpnodes):
1211 state.parentctxnode, leafs | tmpnodes):
1212 hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
1212 hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
1213 cleanupnode(ui, repo, 'created', tmpnodes)
1213 cleanupnode(ui, repo, 'created', tmpnodes)
1214 cleanupnode(ui, repo, 'temp', leafs)
1214 cleanupnode(ui, repo, 'temp', leafs)
1215 except Exception:
1215 except Exception:
1216 if state.inprogress():
1216 if state.inprogress():
1217 ui.warn(_('warning: encountered an exception during histedit '
1217 ui.warn(_('warning: encountered an exception during histedit '
1218 '--abort; the repository may not have been completely '
1218 '--abort; the repository may not have been completely '
1219 'cleaned up\n'))
1219 'cleaned up\n'))
1220 raise
1220 raise
1221 finally:
1221 finally:
1222 state.clear()
1222 state.clear()
1223
1223
1224 def _edithisteditplan(ui, repo, state, rules):
1224 def _edithisteditplan(ui, repo, state, rules):
1225 state.read()
1225 state.read()
1226 if not rules:
1226 if not rules:
1227 comment = geteditcomment(ui,
1227 comment = geteditcomment(ui,
1228 node.short(state.parentctxnode),
1228 node.short(state.parentctxnode),
1229 node.short(state.topmost))
1229 node.short(state.topmost))
1230 rules = ruleeditor(repo, ui, state.actions, comment)
1230 rules = ruleeditor(repo, ui, state.actions, comment)
1231 else:
1231 else:
1232 rules = _readfile(ui, rules)
1232 rules = _readfile(ui, rules)
1233 actions = parserules(rules, state)
1233 actions = parserules(rules, state)
1234 ctxs = [repo[act.node] \
1234 ctxs = [repo[act.node] \
1235 for act in state.actions if act.node]
1235 for act in state.actions if act.node]
1236 warnverifyactions(ui, repo, actions, state, ctxs)
1236 warnverifyactions(ui, repo, actions, state, ctxs)
1237 state.actions = actions
1237 state.actions = actions
1238 state.write()
1238 state.write()
1239
1239
1240 def _newhistedit(ui, repo, state, revs, freeargs, opts):
1240 def _newhistedit(ui, repo, state, revs, freeargs, opts):
1241 outg = opts.get('outgoing')
1241 outg = opts.get('outgoing')
1242 rules = opts.get('commands', '')
1242 rules = opts.get('commands', '')
1243 force = opts.get('force')
1243 force = opts.get('force')
1244
1244
1245 cmdutil.checkunfinished(repo)
1245 cmdutil.checkunfinished(repo)
1246 cmdutil.bailifchanged(repo)
1246 cmdutil.bailifchanged(repo)
1247
1247
1248 topmost, empty = repo.dirstate.parents()
1248 topmost, empty = repo.dirstate.parents()
1249 if outg:
1249 if outg:
1250 if freeargs:
1250 if freeargs:
1251 remote = freeargs[0]
1251 remote = freeargs[0]
1252 else:
1252 else:
1253 remote = None
1253 remote = None
1254 root = findoutgoing(ui, repo, remote, force, opts)
1254 root = findoutgoing(ui, repo, remote, force, opts)
1255 else:
1255 else:
1256 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1256 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1257 if len(rr) != 1:
1257 if len(rr) != 1:
1258 raise error.Abort(_('The specified revisions must have '
1258 raise error.Abort(_('The specified revisions must have '
1259 'exactly one common root'))
1259 'exactly one common root'))
1260 root = rr[0].node()
1260 root = rr[0].node()
1261
1261
1262 revs = between(repo, root, topmost, state.keep)
1262 revs = between(repo, root, topmost, state.keep)
1263 if not revs:
1263 if not revs:
1264 raise error.Abort(_('%s is not an ancestor of working directory') %
1264 raise error.Abort(_('%s is not an ancestor of working directory') %
1265 node.short(root))
1265 node.short(root))
1266
1266
1267 ctxs = [repo[r] for r in revs]
1267 ctxs = [repo[r] for r in revs]
1268 if not rules:
1268 if not rules:
1269 comment = geteditcomment(ui, node.short(root), node.short(topmost))
1269 comment = geteditcomment(ui, node.short(root), node.short(topmost))
1270 actions = [pick(state, r) for r in revs]
1270 actions = [pick(state, r) for r in revs]
1271 rules = ruleeditor(repo, ui, actions, comment)
1271 rules = ruleeditor(repo, ui, actions, comment)
1272 else:
1272 else:
1273 rules = _readfile(ui, rules)
1273 rules = _readfile(ui, rules)
1274 actions = parserules(rules, state)
1274 actions = parserules(rules, state)
1275 warnverifyactions(ui, repo, actions, state, ctxs)
1275 warnverifyactions(ui, repo, actions, state, ctxs)
1276
1276
1277 parentctxnode = repo[root].parents()[0].node()
1277 parentctxnode = repo[root].parents()[0].node()
1278
1278
1279 state.parentctxnode = parentctxnode
1279 state.parentctxnode = parentctxnode
1280 state.actions = actions
1280 state.actions = actions
1281 state.topmost = topmost
1281 state.topmost = topmost
1282 state.replacements = []
1282 state.replacements = []
1283
1283
1284 # Create a backup so we can always abort completely.
1284 # Create a backup so we can always abort completely.
1285 backupfile = None
1285 backupfile = None
1286 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1286 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1287 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1287 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1288 'histedit')
1288 'histedit')
1289 state.backupfile = backupfile
1289 state.backupfile = backupfile
1290
1290
1291 def _getsummary(ctx):
1291 def _getsummary(ctx):
1292 # a common pattern is to extract the summary but default to the empty
1292 # a common pattern is to extract the summary but default to the empty
1293 # string
1293 # string
1294 summary = ctx.description() or ''
1294 summary = ctx.description() or ''
1295 if summary:
1295 if summary:
1296 summary = summary.splitlines()[0]
1296 summary = summary.splitlines()[0]
1297 return summary
1297 return summary
1298
1298
1299 def bootstrapcontinue(ui, state, opts):
1299 def bootstrapcontinue(ui, state, opts):
1300 repo = state.repo
1300 repo = state.repo
1301
1301
1302 ms = mergemod.mergestate.read(repo)
1302 ms = mergemod.mergestate.read(repo)
1303 mergeutil.checkunresolved(ms)
1303 mergeutil.checkunresolved(ms)
1304
1304
1305 if state.actions:
1305 if state.actions:
1306 actobj = state.actions.pop(0)
1306 actobj = state.actions.pop(0)
1307
1307
1308 if _isdirtywc(repo):
1308 if _isdirtywc(repo):
1309 actobj.continuedirty()
1309 actobj.continuedirty()
1310 if _isdirtywc(repo):
1310 if _isdirtywc(repo):
1311 abortdirty()
1311 abortdirty()
1312
1312
1313 parentctx, replacements = actobj.continueclean()
1313 parentctx, replacements = actobj.continueclean()
1314
1314
1315 state.parentctxnode = parentctx.node()
1315 state.parentctxnode = parentctx.node()
1316 state.replacements.extend(replacements)
1316 state.replacements.extend(replacements)
1317
1317
1318 return state
1318 return state
1319
1319
1320 def between(repo, old, new, keep):
1321 """select and validate the set of revisions to edit
1322
1323 When keep is false, the specified set can't have children."""
1324 ctxs = list(repo.set('%n::%n', old, new))
1324 ctxs = list(repo.set('%n::%n', old, new))
1325 if ctxs and not keep:
1325 if ctxs and not keep:
1326 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1326 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1327 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1327 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1328 raise error.Abort(_('can only histedit a changeset together '
1328 raise error.Abort(_('can only histedit a changeset together '
1329 'with all its descendants'))
1329 'with all its descendants'))
1330 if repo.revs('(%ld) and merge()', ctxs):
1330 if repo.revs('(%ld) and merge()', ctxs):
1331 raise error.Abort(_('cannot edit history that contains merges'))
1331 raise error.Abort(_('cannot edit history that contains merges'))
1332 root = ctxs[0] # list is already sorted by repo.set
1332 root = ctxs[0] # list is already sorted by repo.set
1333 if not root.mutable():
1333 if not root.mutable():
1334 raise error.Abort(_('cannot edit public changeset: %s') % root,
1334 raise error.Abort(_('cannot edit public changeset: %s') % root,
1335 hint=_("see 'hg help phases' for details"))
1335 hint=_("see 'hg help phases' for details"))
1336 return [c.node() for c in ctxs]
1336 return [c.node() for c in ctxs]
1337
1337
1338 def ruleeditor(repo, ui, actions, editcomment=""):
1339 """open an editor to edit rules
1340
1341 rules are in the format [ [act, ctx], ...] like in state.rules
1342 """
1343 if repo.ui.configbool("experimental", "histedit.autoverb"):
1343 if repo.ui.configbool("experimental", "histedit.autoverb"):
1344 newact = util.sortdict()
1344 newact = util.sortdict()
1345 for act in actions:
1345 for act in actions:
1346 ctx = repo[act.node]
1346 ctx = repo[act.node]
1347 summary = _getsummary(ctx)
1347 summary = _getsummary(ctx)
1348 fword = summary.split(' ', 1)[0].lower()
1348 fword = summary.split(' ', 1)[0].lower()
1349 added = False
1349 added = False
1350
1350
1351 # if it doesn't end with the special character '!' just skip this
1351 # if it doesn't end with the special character '!' just skip this
1352 if fword.endswith('!'):
1352 if fword.endswith('!'):
1353 fword = fword[:-1]
1353 fword = fword[:-1]
1354 if fword in primaryactions | secondaryactions | tertiaryactions:
1354 if fword in primaryactions | secondaryactions | tertiaryactions:
1355 act.verb = fword
1355 act.verb = fword
1356 # get the target summary
1356 # get the target summary
1357 tsum = summary[len(fword) + 1:].lstrip()
1357 tsum = summary[len(fword) + 1:].lstrip()
1358 # safe but slow: reverse iterate over the actions so we
1358 # safe but slow: reverse iterate over the actions so we
1359 # don't clash on two commits having the same summary
1359 # don't clash on two commits having the same summary
1360 for na, l in reversed(list(newact.iteritems())):
1360 for na, l in reversed(list(newact.iteritems())):
1361 actx = repo[na.node]
1361 actx = repo[na.node]
1362 asum = _getsummary(actx)
1362 asum = _getsummary(actx)
1363 if asum == tsum:
1363 if asum == tsum:
1364 added = True
1364 added = True
1365 l.append(act)
1365 l.append(act)
1366 break
1366 break
1367
1367
1368 if not added:
1368 if not added:
1369 newact[act] = []
1369 newact[act] = []
1370
1370
1371 # copy over and flatten the new list
1371 # copy over and flatten the new list
1372 actions = []
1372 actions = []
1373 for na, l in newact.iteritems():
1373 for na, l in newact.iteritems():
1374 actions.append(na)
1374 actions.append(na)
1375 actions += l
1375 actions += l
1376
1376
1377 rules = '\n'.join([act.torule() for act in actions])
1377 rules = '\n'.join([act.torule() for act in actions])
1378 rules += '\n\n'
1378 rules += '\n\n'
1379 rules += editcomment
1379 rules += editcomment
1380 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
1380 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
1381 repopath=repo.path)
1381 repopath=repo.path)
1382
1382
1383 # Save edit rules in .hg/histedit-last-edit.txt in case
1383 # Save edit rules in .hg/histedit-last-edit.txt in case
1384 # the user needs to ask for help after something
1384 # the user needs to ask for help after something
1385 # surprising happens.
1385 # surprising happens.
1386 f = open(repo.vfs.join('histedit-last-edit.txt'), 'w')
1386 f = open(repo.vfs.join('histedit-last-edit.txt'), 'w')
1387 f.write(rules)
1387 f.write(rules)
1388 f.close()
1388 f.close()
1389
1389
1390 return rules
1390 return rules
1391
1391
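For readers unfamiliar with the experimental autoverb behaviour implemented above, here is a rough standalone sketch of the grouping idea operating on plain summary strings; the real code works on action objects and rewrites their verbs, so the names and details below are simplified assumptions.

def autoverb_group(summaries,
                   known_verbs=('pick', 'edit', 'fold', 'roll', 'drop', 'mess')):
    """Group a summary of the form 'fold! <target summary>' under the most
    recent entry whose summary equals <target summary>."""
    ordered = []  # list of (summary, followers)
    for summary in summaries:
        fword = summary.split(' ', 1)[0].lower()
        if fword.endswith('!') and fword[:-1] in known_verbs:
            target = summary[len(fword) + 1:].lstrip()
            # reverse iterate so we don't clash on duplicate summaries,
            # mirroring the loop above
            for prev, followers in reversed(ordered):
                if prev == target:
                    followers.append(summary)
                    break
            else:
                ordered.append((summary, []))
        else:
            ordered.append((summary, []))
    # copy over and flatten, like the step above
    result = []
    for summary, followers in ordered:
        result.append(summary)
        result.extend(followers)
    return result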
1392 def parserules(rules, state):
1393 """Read the histedit rules string and return a list of action objects"""
1394 rules = [l for l in (r.strip() for r in rules.splitlines())
1394 rules = [l for l in (r.strip() for r in rules.splitlines())
1395 if l and not l.startswith('#')]
1395 if l and not l.startswith('#')]
1396 actions = []
1396 actions = []
1397 for r in rules:
1397 for r in rules:
1398 if ' ' not in r:
1398 if ' ' not in r:
1399 raise error.ParseError(_('malformed line "%s"') % r)
1399 raise error.ParseError(_('malformed line "%s"') % r)
1400 verb, rest = r.split(' ', 1)
1400 verb, rest = r.split(' ', 1)
1401
1401
1402 if verb not in actiontable:
1402 if verb not in actiontable:
1403 raise error.ParseError(_('unknown action "%s"') % verb)
1403 raise error.ParseError(_('unknown action "%s"') % verb)
1404
1404
1405 action = actiontable[verb].fromrule(state, rest)
1405 action = actiontable[verb].fromrule(state, rest)
1406 actions.append(action)
1406 actions.append(action)
1407 return actions
1407 return actions
1408
1408
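To illustrate the plan format parsed above, here is a small self-contained sketch that mirrors the same splitting and validation but returns plain (verb, rest) tuples instead of action objects; the hashes and the verb list are illustrative only.

def parse_plan(text,
               known_verbs=('pick', 'edit', 'fold', 'roll', 'drop', 'mess')):
    """Return (verb, rest) tuples for each non-comment, non-blank line."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # comments and blank lines are ignored
        if ' ' not in line:
            raise ValueError('malformed line "%s"' % line)
        verb, rest = line.split(' ', 1)
        if verb not in known_verbs:
            raise ValueError('unknown action "%s"' % verb)
        pairs.append((verb, rest))
    return pairs

plan = """
# comments are ignored
pick 5339bf82f0ca Zworgle the foobar
fold 8ef592ce7cc4 Bedazzle the zerlog
"""
# parse_plan(plan) -> [('pick', '5339bf82f0ca Zworgle the foobar'),
#                      ('fold', '8ef592ce7cc4 Bedazzle the zerlog')]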
1409 def warnverifyactions(ui, repo, actions, state, ctxs):
1409 def warnverifyactions(ui, repo, actions, state, ctxs):
1410 try:
1410 try:
1411 verifyactions(actions, state, ctxs)
1411 verifyactions(actions, state, ctxs)
1412 except error.ParseError:
1412 except error.ParseError:
1413 if repo.vfs.exists('histedit-last-edit.txt'):
1413 if repo.vfs.exists('histedit-last-edit.txt'):
1414 ui.warn(_('warning: histedit rules saved '
1414 ui.warn(_('warning: histedit rules saved '
1415 'to: .hg/histedit-last-edit.txt\n'))
1415 'to: .hg/histedit-last-edit.txt\n'))
1416 raise
1416 raise
1417
1417
1418 def verifyactions(actions, state, ctxs):
1419 """Verify that there exists exactly one action per given changeset and
1420 other constraints.
1421
1422 Will abort if there are too many or too few rules, a malformed rule,
1423 or a rule on a changeset outside of the user-given range.
1424 """
1425 expected = set(c.node() for c in ctxs)
1425 expected = set(c.node() for c in ctxs)
1426 seen = set()
1426 seen = set()
1427 prev = None
1427 prev = None
1428 for action in actions:
1428 for action in actions:
1429 action.verify(prev, expected, seen)
1429 action.verify(prev, expected, seen)
1430 prev = action
1430 prev = action
1431 if action.node is not None:
1431 if action.node is not None:
1432 seen.add(action.node)
1432 seen.add(action.node)
1433 missing = sorted(expected - seen) # sort to stabilize output
1433 missing = sorted(expected - seen) # sort to stabilize output
1434
1434
1435 if state.repo.ui.configbool('histedit', 'dropmissing'):
1435 if state.repo.ui.configbool('histedit', 'dropmissing'):
1436 if len(actions) == 0:
1436 if len(actions) == 0:
1437 raise error.ParseError(_('no rules provided'),
1437 raise error.ParseError(_('no rules provided'),
1438 hint=_('use strip extension to remove commits'))
1438 hint=_('use strip extension to remove commits'))
1439
1439
1440 drops = [drop(state, n) for n in missing]
1440 drops = [drop(state, n) for n in missing]
1441 # put them at the beginning so they execute immediately and
1442 # don't show up in the edit-plan in the future
1443 actions[:0] = drops
1443 actions[:0] = drops
1444 elif missing:
1444 elif missing:
1445 raise error.ParseError(_('missing rules for changeset %s') %
1445 raise error.ParseError(_('missing rules for changeset %s') %
1446 node.short(missing[0]),
1446 node.short(missing[0]),
1447 hint=_('use "drop %s" to discard, see also: '
1447 hint=_('use "drop %s" to discard, see also: '
1448 "'hg help -e histedit.config'")
1448 "'hg help -e histedit.config'")
1449 % node.short(missing[0]))
1449 % node.short(missing[0]))
1450
1450
1451 def adjustreplacementsfrommarkers(repo, oldreplacements):
1452 """Adjust replacements from obsolescence markers
1453
1454 The replacements structure is originally generated from histedit's
1455 state and does not account for changes that are not recorded
1456 there. This function fixes that by adding data read from
1457 obsolescence markers."""
1458 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1458 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1459 return oldreplacements
1459 return oldreplacements
1460
1460
1461 unfi = repo.unfiltered()
1461 unfi = repo.unfiltered()
1462 nm = unfi.changelog.nodemap
1462 nm = unfi.changelog.nodemap
1463 obsstore = repo.obsstore
1463 obsstore = repo.obsstore
1464 newreplacements = list(oldreplacements)
1464 newreplacements = list(oldreplacements)
1465 oldsuccs = [r[1] for r in oldreplacements]
1465 oldsuccs = [r[1] for r in oldreplacements]
1466 # successors that have already been added to succstocheck once
1466 # successors that have already been added to succstocheck once
1467 seensuccs = set().union(*oldsuccs) # create a set from an iterable of tuples
1467 seensuccs = set().union(*oldsuccs) # create a set from an iterable of tuples
1468 succstocheck = list(seensuccs)
1468 succstocheck = list(seensuccs)
1469 while succstocheck:
1469 while succstocheck:
1470 n = succstocheck.pop()
1470 n = succstocheck.pop()
1471 missing = nm.get(n) is None
1471 missing = nm.get(n) is None
1472 markers = obsstore.successors.get(n, ())
1472 markers = obsstore.successors.get(n, ())
1473 if missing and not markers:
1473 if missing and not markers:
1474 # dead end, mark it as such
1474 # dead end, mark it as such
1475 newreplacements.append((n, ()))
1475 newreplacements.append((n, ()))
1476 for marker in markers:
1476 for marker in markers:
1477 nsuccs = marker[1]
1477 nsuccs = marker[1]
1478 newreplacements.append((n, nsuccs))
1478 newreplacements.append((n, nsuccs))
1479 for nsucc in nsuccs:
1479 for nsucc in nsuccs:
1480 if nsucc not in seensuccs:
1480 if nsucc not in seensuccs:
1481 seensuccs.add(nsucc)
1481 seensuccs.add(nsucc)
1482 succstocheck.append(nsucc)
1482 succstocheck.append(nsucc)
1483
1483
1484 return newreplacements
1484 return newreplacements
1485
1485
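A rough standalone illustration of the successor walk above, using plain strings for nodes and a simplified dict as the successor index; all names and data are made up.

def extend_replacements(oldreplacements, successors, known_nodes):
    """Follow successor markers transitively, adding one (node, succs)
    entry per marker, and marking unknown dead ends with an empty tuple."""
    newreplacements = list(oldreplacements)
    seen = set().union(*(succs for _, succs in oldreplacements))
    stack = list(seen)
    while stack:
        n = stack.pop()
        markers = successors.get(n, ())
        if n not in known_nodes and not markers:
            newreplacements.append((n, ()))  # dead end, mark it as such
        for nsuccs in markers:
            newreplacements.append((n, nsuccs))
            for s in nsuccs:
                if s not in seen:
                    seen.add(s)
                    stack.append(s)
    return newreplacements

# extend_replacements([('a', ('b',))], {'b': [('c',)]}, known_nodes={'a'})
#   -> [('a', ('b',)), ('b', ('c',)), ('c', ())]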
1486 def processreplacement(state):
1487 """process the list of replacements to return
1488
1489 1) the final mapping between original and created nodes
1490 2) the list of temporary nodes created by histedit
1491 3) the list of new commits created by histedit"""
1492 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
1492 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
1493 allsuccs = set()
1493 allsuccs = set()
1494 replaced = set()
1494 replaced = set()
1495 fullmapping = {}
1495 fullmapping = {}
1496 # initialize basic set
1496 # initialize basic set
1497 # fullmapping records all operations recorded in replacement
1497 # fullmapping records all operations recorded in replacement
1498 for rep in replacements:
1498 for rep in replacements:
1499 allsuccs.update(rep[1])
1499 allsuccs.update(rep[1])
1500 replaced.add(rep[0])
1500 replaced.add(rep[0])
1501 fullmapping.setdefault(rep[0], set()).update(rep[1])
1501 fullmapping.setdefault(rep[0], set()).update(rep[1])
1502 new = allsuccs - replaced
1502 new = allsuccs - replaced
1503 tmpnodes = allsuccs & replaced
1503 tmpnodes = allsuccs & replaced
1504 # Reduce fullmapping into a direct relation between original nodes
1505 # and the final nodes created during the history edit.
1506 # Dropped changesets are replaced by an empty list.
1507 toproceed = set(fullmapping)
1507 toproceed = set(fullmapping)
1508 final = {}
1508 final = {}
1509 while toproceed:
1509 while toproceed:
1510 for x in list(toproceed):
1510 for x in list(toproceed):
1511 succs = fullmapping[x]
1511 succs = fullmapping[x]
1512 for s in list(succs):
1512 for s in list(succs):
1513 if s in toproceed:
1513 if s in toproceed:
1514 # non final node with unknown closure
1514 # non final node with unknown closure
1515 # We can't process this now
1515 # We can't process this now
1516 break
1516 break
1517 elif s in final:
1517 elif s in final:
1518 # non final node, replace with closure
1518 # non final node, replace with closure
1519 succs.remove(s)
1519 succs.remove(s)
1520 succs.update(final[s])
1520 succs.update(final[s])
1521 else:
1521 else:
1522 final[x] = succs
1522 final[x] = succs
1523 toproceed.remove(x)
1523 toproceed.remove(x)
1524 # remove tmpnodes from final mapping
1524 # remove tmpnodes from final mapping
1525 for n in tmpnodes:
1525 for n in tmpnodes:
1526 del final[n]
1526 del final[n]
1527 # we expect all changes involved in final to exist in the repo
1527 # we expect all changes involved in final to exist in the repo
1528 # turn `final` into list (topologically sorted)
1528 # turn `final` into list (topologically sorted)
1529 nm = state.repo.changelog.nodemap
1529 nm = state.repo.changelog.nodemap
1530 for prec, succs in final.items():
1530 for prec, succs in final.items():
1531 final[prec] = sorted(succs, key=nm.get)
1531 final[prec] = sorted(succs, key=nm.get)
1532
1532
1533 # computed topmost element (necessary for bookmark)
1533 # computed topmost element (necessary for bookmark)
1534 if new:
1534 if new:
1535 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1535 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1536 elif not final:
1536 elif not final:
1537 # Nothing was rewritten at all, so we won't need `newtopmost`;
1538 # it is the same as `oldtopmost` and `processreplacement` knows it.
1539 newtopmost = None
1539 newtopmost = None
1540 else:
1540 else:
1541 # everybody died. The newtopmost is the parent of the root.
1542 r = state.repo.changelog.rev
1542 r = state.repo.changelog.rev
1543 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1543 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1544
1544
1545 return final, tmpnodes, new, newtopmost
1545 return final, tmpnodes, new, newtopmost
1546
1546
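For intuition, a tiny worked example (with made-up node names) of what the reduction above produces when a temporary fold commit sits between an original changeset and its final replacement:

# Suppose histedit recorded:   A -> (T,)   and   T -> (N,)
# where T was a temporary commit created while folding.
#   replacements = [('A', ('T',)), ('T', ('N',))]
#   allsuccs     = {'T', 'N'}     replaced = {'A', 'T'}
#   new          = {'N'}          tmpnodes = {'T'}
# After collapsing through the temporary node, the final mapping is
#   {'A': ['N']}
# i.e. A is ultimately replaced by N, and T is dropped from the result.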
1547 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1548 """Move bookmarks from old nodes to the newly created nodes"""
1549 if not mapping:
1550 # if nothing got rewritten there is no purpose for this function
1551 return
1552 moves = []
1552 moves = []
1553 for bk, old in sorted(repo._bookmarks.iteritems()):
1553 for bk, old in sorted(repo._bookmarks.iteritems()):
1554 if old == oldtopmost:
1554 if old == oldtopmost:
1555 # special case: ensure bookmarks stay on tip.
1556 #
1556 #
1557 # This is arguably a feature and we may only want that for the
1557 # This is arguably a feature and we may only want that for the
1558 # active bookmark. But the behavior is kept compatible with the old
1558 # active bookmark. But the behavior is kept compatible with the old
1559 # version for now.
1559 # version for now.
1560 moves.append((bk, newtopmost))
1560 moves.append((bk, newtopmost))
1561 continue
1561 continue
1562 base = old
1562 base = old
1563 new = mapping.get(base, None)
1563 new = mapping.get(base, None)
1564 if new is None:
1564 if new is None:
1565 continue
1565 continue
1566 while not new:
1566 while not new:
1567 # base is killed, trying with parent
1567 # base is killed, trying with parent
1568 base = repo[base].p1().node()
1568 base = repo[base].p1().node()
1569 new = mapping.get(base, (base,))
1569 new = mapping.get(base, (base,))
1570 # nothing to move
1570 # nothing to move
1571 moves.append((bk, new[-1]))
1571 moves.append((bk, new[-1]))
1572 if moves:
1572 if moves:
1573 lock = tr = None
1573 lock = tr = None
1574 try:
1574 try:
1575 lock = repo.lock()
1575 lock = repo.lock()
1576 tr = repo.transaction('histedit')
1576 tr = repo.transaction('histedit')
1577 marks = repo._bookmarks
1577 marks = repo._bookmarks
1578 for mark, new in moves:
1578 for mark, new in moves:
1579 old = marks[mark]
1579 old = marks[mark]
1580 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1580 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1581 % (mark, node.short(old), node.short(new)))
1581 % (mark, node.short(old), node.short(new)))
1582 marks[mark] = new
1582 marks[mark] = new
1583 marks.recordchange(tr)
1583 marks.recordchange(tr)
1584 tr.close()
1584 tr.close()
1585 finally:
1585 finally:
1586 release(tr, lock)
1586 release(tr, lock)
1587
1587
1588 def cleanupnode(ui, repo, name, nodes):
1589 """strip a group of nodes from the repository
1590
1591 The set of nodes to strip may contain unknown nodes."""
1592 ui.debug('should strip %s nodes %s\n' %
1592 ui.debug('should strip %s nodes %s\n' %
1593 (name, ', '.join([node.short(n) for n in nodes])))
1593 (name, ', '.join([node.short(n) for n in nodes])))
1594 with repo.lock():
1594 with repo.lock():
1595 # do not let filtering get in the way of the cleanse
1596 # we should probably get rid of obsolescence markers created during the
1597 # histedit, but we currently do not have such information.
1598 repo = repo.unfiltered()
1598 repo = repo.unfiltered()
1599 # Find all nodes that need to be stripped
1599 # Find all nodes that need to be stripped
1600 # (we use %lr instead of %ln to silently ignore unknown items)
1600 # (we use %lr instead of %ln to silently ignore unknown items)
1601 nm = repo.changelog.nodemap
1601 nm = repo.changelog.nodemap
1602 nodes = sorted(n for n in nodes if n in nm)
1602 nodes = sorted(n for n in nodes if n in nm)
1603 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1603 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1604 for c in roots:
1605 # We should process nodes in reverse order to strip the tip-most first,
1606 # but this triggers a bug in the changegroup hook.
1607 # Doing so would also reduce bundle overhead.
1608 repair.strip(ui, repo, c)
1608 repair.strip(ui, repo, c)
1609
1609
1610 def safecleanupnode(ui, repo, name, nodes):
1611 """strip or obsolete nodes
1612
1613 nodes can be either a set, or a dict that maps nodes to replacements.
1614 nodes may be unknown (outside the repo).
1615 """
1616 supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
1616 supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
1617 if supportsmarkers:
1617 if supportsmarkers:
1618 if util.safehasattr(nodes, 'get'):
1618 if util.safehasattr(nodes, 'get'):
1619 # nodes is a dict-like mapping
1619 # nodes is a dict-like mapping
1620 # use unfiltered repo for successors in case they are hidden
1620 # use unfiltered repo for successors in case they are hidden
1621 urepo = repo.unfiltered()
1621 urepo = repo.unfiltered()
1622 def getmarker(prec):
1622 def getmarker(prec):
1623 succs = tuple(urepo[n] for n in nodes.get(prec, ()))
1623 succs = tuple(urepo[n] for n in nodes.get(prec, ()))
1624 return (repo[prec], succs)
1624 return (repo[prec], succs)
1625 else:
1625 else:
1626 # nodes is a set-like
1626 # nodes is a set-like
1627 def getmarker(prec):
1627 def getmarker(prec):
1628 return (repo[prec], ())
1628 return (repo[prec], ())
1629 # sort by revision number because it sounds "right"
1630 sortednodes = sorted([n for n in nodes if n in repo],
1630 sortednodes = sorted([n for n in nodes if n in repo],
1631 key=repo.changelog.rev)
1631 key=repo.changelog.rev)
1632 markers = [getmarker(t) for t in sortednodes]
1632 markers = [getmarker(t) for t in sortednodes]
1633 if markers:
1633 if markers:
1634 obsolete.createmarkers(repo, markers)
1634 obsolete.createmarkers(repo, markers, operation='histedit')
1635 else:
1635 else:
1636 return cleanupnode(ui, repo, name, nodes)
1636 return cleanupnode(ui, repo, name, nodes)
1637
1637
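A small sketch of the marker-building step above, with plain strings standing in for changectx objects; the real code additionally sorts by revision number and skips nodes unknown to the repo (illustrative only).

def build_relations(nodes):
    """nodes may be a dict {precursor: successors} or a plain set; a dict
    yields (precursor, successors) pairs, a set yields (precursor, ()) so
    each precursor is recorded as pruned."""
    if hasattr(nodes, 'get'):
        return [(prec, tuple(nodes.get(prec, ()))) for prec in sorted(nodes)]
    return [(prec, ()) for prec in sorted(nodes)]

# build_relations({'old1': ['new1'], 'old2': []})
#   -> [('old1', ('new1',)), ('old2', ())]
# build_relations({'tmp1', 'tmp2'})
#   -> [('tmp1', ()), ('tmp2', ())]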
1638 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1638 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1639 if isinstance(nodelist, str):
1639 if isinstance(nodelist, str):
1640 nodelist = [nodelist]
1640 nodelist = [nodelist]
1641 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1641 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1642 state = histeditstate(repo)
1642 state = histeditstate(repo)
1643 state.read()
1643 state.read()
1644 histedit_nodes = {action.node for action
1644 histedit_nodes = {action.node for action
1645 in state.actions if action.node}
1645 in state.actions if action.node}
1646 common_nodes = histedit_nodes & set(nodelist)
1646 common_nodes = histedit_nodes & set(nodelist)
1647 if common_nodes:
1647 if common_nodes:
1648 raise error.Abort(_("histedit in progress, can't strip %s")
1648 raise error.Abort(_("histedit in progress, can't strip %s")
1649 % ', '.join(node.short(x) for x in common_nodes))
1649 % ', '.join(node.short(x) for x in common_nodes))
1650 return orig(ui, repo, nodelist, *args, **kwargs)
1650 return orig(ui, repo, nodelist, *args, **kwargs)
1651
1651
1652 extensions.wrapfunction(repair, 'strip', stripwrapper)
1652 extensions.wrapfunction(repair, 'strip', stripwrapper)
1653
1653
1654 def summaryhook(ui, repo):
1654 def summaryhook(ui, repo):
1655 if not os.path.exists(repo.vfs.join('histedit-state')):
1655 if not os.path.exists(repo.vfs.join('histedit-state')):
1656 return
1656 return
1657 state = histeditstate(repo)
1657 state = histeditstate(repo)
1658 state.read()
1658 state.read()
1659 if state.actions:
1659 if state.actions:
1660 # i18n: column positioning for "hg summary"
1660 # i18n: column positioning for "hg summary"
1661 ui.write(_('hist: %s (histedit --continue)\n') %
1661 ui.write(_('hist: %s (histedit --continue)\n') %
1662 (ui.label(_('%d remaining'), 'histedit.remaining') %
1662 (ui.label(_('%d remaining'), 'histedit.remaining') %
1663 len(state.actions)))
1663 len(state.actions)))
1664
1664
1665 def extsetup(ui):
1665 def extsetup(ui):
1666 cmdutil.summaryhooks.add('histedit', summaryhook)
1666 cmdutil.summaryhooks.add('histedit', summaryhook)
1667 cmdutil.unfinishedstates.append(
1667 cmdutil.unfinishedstates.append(
1668 ['histedit-state', False, True, _('histedit in progress'),
1668 ['histedit-state', False, True, _('histedit in progress'),
1669 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1669 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1670 cmdutil.afterresolvedstates.append(
1670 cmdutil.afterresolvedstates.append(
1671 ['histedit-state', _('hg histedit --continue')])
1671 ['histedit-state', _('hg histedit --continue')])
1672 if ui.configbool("experimental", "histeditng"):
1672 if ui.configbool("experimental", "histeditng"):
1673 globals()['base'] = action(['base', 'b'],
1673 globals()['base'] = action(['base', 'b'],
1674 _('checkout changeset and apply further changesets from there')
1674 _('checkout changeset and apply further changesets from there')
1675 )(base)
1675 )(base)
@@ -1,1541 +1,1541
1 # rebase.py - rebasing feature for mercurial
1 # rebase.py - rebasing feature for mercurial
2 #
2 #
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to move sets of revisions to a different ancestor
8 '''command to move sets of revisions to a different ancestor
9
9
10 This extension lets you rebase changesets in an existing Mercurial
10 This extension lets you rebase changesets in an existing Mercurial
11 repository.
11 repository.
12
12
13 For more information:
13 For more information:
14 https://mercurial-scm.org/wiki/RebaseExtension
14 https://mercurial-scm.org/wiki/RebaseExtension
15 '''
15 '''
16
16
17 from __future__ import absolute_import
17 from __future__ import absolute_import
18
18
19 import errno
19 import errno
20 import os
20 import os
21
21
22 from mercurial.i18n import _
22 from mercurial.i18n import _
23 from mercurial.node import (
23 from mercurial.node import (
24 hex,
24 hex,
25 nullid,
25 nullid,
26 nullrev,
26 nullrev,
27 short,
27 short,
28 )
28 )
29 from mercurial import (
29 from mercurial import (
30 bookmarks,
30 bookmarks,
31 cmdutil,
31 cmdutil,
32 commands,
32 commands,
33 copies,
33 copies,
34 destutil,
34 destutil,
35 dirstateguard,
35 dirstateguard,
36 error,
36 error,
37 extensions,
37 extensions,
38 hg,
38 hg,
39 lock,
39 lock,
40 merge as mergemod,
40 merge as mergemod,
41 mergeutil,
41 mergeutil,
42 obsolete,
42 obsolete,
43 patch,
43 patch,
44 phases,
44 phases,
45 registrar,
45 registrar,
46 repair,
46 repair,
47 repoview,
47 repoview,
48 revset,
48 revset,
49 scmutil,
49 scmutil,
50 smartset,
50 smartset,
51 util,
51 util,
52 )
52 )
53
53
54 release = lock.release
54 release = lock.release
55 templateopts = commands.templateopts
55 templateopts = commands.templateopts
56
56
57 # The following constants are used throughout the rebase module. The ordering of
57 # The following constants are used throughout the rebase module. The ordering of
58 # their values must be maintained.
58 # their values must be maintained.
59
59
60 # Indicates that a revision needs to be rebased
60 # Indicates that a revision needs to be rebased
61 revtodo = -1
61 revtodo = -1
62 nullmerge = -2
62 nullmerge = -2
63 revignored = -3
63 revignored = -3
64 # successor in rebase destination
64 # successor in rebase destination
65 revprecursor = -4
65 revprecursor = -4
66 # plain prune (no successor)
66 # plain prune (no successor)
67 revpruned = -5
67 revpruned = -5
68 revskipped = (revignored, revprecursor, revpruned)
68 revskipped = (revignored, revprecursor, revpruned)
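# A minimal sketch (not part of rebase.py) of how these sentinels are used: the
# state dict built later maps each source revision to its rebased revision
# (>= 0), to revtodo, or to one of the negative markers above, so code such as
# _finishrebase can tell real results from skip markers with comparisons like
# `v > nullmerge`:
#
#     state = {10: revtodo, 11: 42, 12: revpruned}
#     rebased = [r for r, v in state.items() if v >= 0]           # -> [11]
#     skipped = [r for r, v in state.items() if v in revskipped]  # -> [12]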
69
69
70 cmdtable = {}
70 cmdtable = {}
71 command = cmdutil.command(cmdtable)
71 command = cmdutil.command(cmdtable)
72 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
72 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
73 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
73 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
74 # be specifying the version(s) of Mercurial they are tested with, or
74 # be specifying the version(s) of Mercurial they are tested with, or
75 # leave the attribute unspecified.
75 # leave the attribute unspecified.
76 testedwith = 'ships-with-hg-core'
76 testedwith = 'ships-with-hg-core'
77
77
78 def _nothingtorebase():
78 def _nothingtorebase():
79 return 1
79 return 1
80
80
81 def _savegraft(ctx, extra):
81 def _savegraft(ctx, extra):
82 s = ctx.extra().get('source', None)
82 s = ctx.extra().get('source', None)
83 if s is not None:
83 if s is not None:
84 extra['source'] = s
84 extra['source'] = s
85 s = ctx.extra().get('intermediate-source', None)
85 s = ctx.extra().get('intermediate-source', None)
86 if s is not None:
86 if s is not None:
87 extra['intermediate-source'] = s
87 extra['intermediate-source'] = s
88
88
89 def _savebranch(ctx, extra):
89 def _savebranch(ctx, extra):
90 extra['branch'] = ctx.branch()
90 extra['branch'] = ctx.branch()
91
91
92 def _makeextrafn(copiers):
92 def _makeextrafn(copiers):
93 """make an extrafn out of the given copy-functions.
93 """make an extrafn out of the given copy-functions.
94
94
95 A copy function takes a context and an extra dict, and mutates the
95 A copy function takes a context and an extra dict, and mutates the
96 extra dict as needed based on the given context.
96 extra dict as needed based on the given context.
97 """
97 """
98 def extrafn(ctx, extra):
98 def extrafn(ctx, extra):
99 for c in copiers:
99 for c in copiers:
100 c(ctx, extra)
100 c(ctx, extra)
101 return extrafn
101 return extrafn
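# A hedged illustration (not part of rebase.py): a copier has the same shape as
# _savegraft/_savebranch above, and _makeextrafn simply chains them. The
# _savetopic helper and 'topic' key below are hypothetical, used only to show
# the shape:
#
#     def _savetopic(ctx, extra):
#         t = ctx.extra().get('topic')
#         if t is not None:
#             extra['topic'] = t
#
#     extrafn = _makeextrafn([_savegraft, _savetopic])
#     # extrafn(ctx, extra) then runs each copier in order on the same dict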
102
102
103 def _destrebase(repo, sourceset, destspace=None):
103 def _destrebase(repo, sourceset, destspace=None):
104 """small wrapper around destmerge to pass the right extra args
104 """small wrapper around destmerge to pass the right extra args
105
105
106 Please wrap destutil.destmerge instead."""
106 Please wrap destutil.destmerge instead."""
107 return destutil.destmerge(repo, action='rebase', sourceset=sourceset,
107 return destutil.destmerge(repo, action='rebase', sourceset=sourceset,
108 onheadcheck=False, destspace=destspace)
108 onheadcheck=False, destspace=destspace)
109
109
110 revsetpredicate = registrar.revsetpredicate()
110 revsetpredicate = registrar.revsetpredicate()
111
111
112 @revsetpredicate('_destrebase')
112 @revsetpredicate('_destrebase')
113 def _revsetdestrebase(repo, subset, x):
113 def _revsetdestrebase(repo, subset, x):
114 # ``_rebasedefaultdest()``
114 # ``_rebasedefaultdest()``
115
115
116 # default destination for rebase.
116 # default destination for rebase.
117 # # XXX: Currently private because I expect the signature to change.
117 # # XXX: Currently private because I expect the signature to change.
118 # # XXX: - bailing out in case of ambiguity vs returning all data.
118 # # XXX: - bailing out in case of ambiguity vs returning all data.
119 # i18n: "_rebasedefaultdest" is a keyword
119 # i18n: "_rebasedefaultdest" is a keyword
120 sourceset = None
120 sourceset = None
121 if x is not None:
121 if x is not None:
122 sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
122 sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
123 return subset & smartset.baseset([_destrebase(repo, sourceset)])
123 return subset & smartset.baseset([_destrebase(repo, sourceset)])
124
124
125 class rebaseruntime(object):
125 class rebaseruntime(object):
126 """This class is a container for rebase runtime state"""
126 """This class is a container for rebase runtime state"""
127 def __init__(self, repo, ui, opts=None):
127 def __init__(self, repo, ui, opts=None):
128 if opts is None:
128 if opts is None:
129 opts = {}
129 opts = {}
130
130
131 self.repo = repo
131 self.repo = repo
132 self.ui = ui
132 self.ui = ui
133 self.opts = opts
133 self.opts = opts
134 self.originalwd = None
134 self.originalwd = None
135 self.external = nullrev
135 self.external = nullrev
136 # Mapping from an old revision id to either its new rebased revision or
136 # Mapping from an old revision id to either its new rebased revision or
137 # a marker describing what needs to be done with the old revision. This
137 # a marker describing what needs to be done with the old revision. This
138 # dict holds most of the rebase progress state.
138 # dict holds most of the rebase progress state.
139 self.state = {}
139 self.state = {}
140 self.activebookmark = None
140 self.activebookmark = None
141 self.currentbookmarks = None
141 self.currentbookmarks = None
142 self.dest = None
142 self.dest = None
143 self.skipped = set()
143 self.skipped = set()
144 self.destancestors = set()
144 self.destancestors = set()
145
145
146 self.collapsef = opts.get('collapse', False)
146 self.collapsef = opts.get('collapse', False)
147 self.collapsemsg = cmdutil.logmessage(ui, opts)
147 self.collapsemsg = cmdutil.logmessage(ui, opts)
148 self.date = opts.get('date', None)
148 self.date = opts.get('date', None)
149
149
150 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
150 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
151 self.extrafns = [_savegraft]
151 self.extrafns = [_savegraft]
152 if e:
152 if e:
153 self.extrafns = [e]
153 self.extrafns = [e]
154
154
155 self.keepf = opts.get('keep', False)
155 self.keepf = opts.get('keep', False)
156 self.keepbranchesf = opts.get('keepbranches', False)
156 self.keepbranchesf = opts.get('keepbranches', False)
157 # keepopen is not meant for use on the command line, but by
157 # keepopen is not meant for use on the command line, but by
158 # other extensions
158 # other extensions
159 self.keepopen = opts.get('keepopen', False)
159 self.keepopen = opts.get('keepopen', False)
160 self.obsoletenotrebased = {}
160 self.obsoletenotrebased = {}
161
161
162 def storestatus(self, tr=None):
162 def storestatus(self, tr=None):
163 """Store the current status to allow recovery"""
163 """Store the current status to allow recovery"""
164 if tr:
164 if tr:
165 tr.addfilegenerator('rebasestate', ('rebasestate',),
165 tr.addfilegenerator('rebasestate', ('rebasestate',),
166 self._writestatus, location='plain')
166 self._writestatus, location='plain')
167 else:
167 else:
168 with self.repo.vfs("rebasestate", "w") as f:
168 with self.repo.vfs("rebasestate", "w") as f:
169 self._writestatus(f)
169 self._writestatus(f)
170
170
171 def _writestatus(self, f):
171 def _writestatus(self, f):
172 repo = self.repo.unfiltered()
172 repo = self.repo.unfiltered()
173 f.write(repo[self.originalwd].hex() + '\n')
173 f.write(repo[self.originalwd].hex() + '\n')
174 f.write(repo[self.dest].hex() + '\n')
174 f.write(repo[self.dest].hex() + '\n')
175 f.write(repo[self.external].hex() + '\n')
175 f.write(repo[self.external].hex() + '\n')
176 f.write('%d\n' % int(self.collapsef))
176 f.write('%d\n' % int(self.collapsef))
177 f.write('%d\n' % int(self.keepf))
177 f.write('%d\n' % int(self.keepf))
178 f.write('%d\n' % int(self.keepbranchesf))
178 f.write('%d\n' % int(self.keepbranchesf))
179 f.write('%s\n' % (self.activebookmark or ''))
179 f.write('%s\n' % (self.activebookmark or ''))
180 for d, v in self.state.iteritems():
180 for d, v in self.state.iteritems():
181 oldrev = repo[d].hex()
181 oldrev = repo[d].hex()
182 if v >= 0:
182 if v >= 0:
183 newrev = repo[v].hex()
183 newrev = repo[v].hex()
184 elif v == revtodo:
184 elif v == revtodo:
185 # To maintain format compatibility, we have to use nullid.
185 # To maintain format compatibility, we have to use nullid.
186 # Please do remove this special case when upgrading the format.
186 # Please do remove this special case when upgrading the format.
187 newrev = hex(nullid)
187 newrev = hex(nullid)
188 else:
188 else:
189 newrev = v
189 newrev = v
190 f.write("%s:%s\n" % (oldrev, newrev))
190 f.write("%s:%s\n" % (oldrev, newrev))
191 repo.ui.debug('rebase status stored\n')
191 repo.ui.debug('rebase status stored\n')
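# A hedged sketch (derived from _writestatus above and restorestatus below, not
# itself part of rebase.py) of what .hg/rebasestate contains, one value per
# line:
#
#     <originalwd hex node>
#     <dest hex node>
#     <external hex node>
#     0|1                        # collapse flag
#     0|1                        # keep flag
#     0|1                        # keepbranches flag
#     <active bookmark, or an empty line>
#     <oldrev hex>:<newrev hex or negative sentinel>   # one line per state entry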
192
192
193 def restorestatus(self):
193 def restorestatus(self):
194 """Restore a previously stored status"""
194 """Restore a previously stored status"""
195 repo = self.repo
195 repo = self.repo
196 keepbranches = None
196 keepbranches = None
197 dest = None
197 dest = None
198 collapse = False
198 collapse = False
199 external = nullrev
199 external = nullrev
200 activebookmark = None
200 activebookmark = None
201 state = {}
201 state = {}
202
202
203 try:
203 try:
204 f = repo.vfs("rebasestate")
204 f = repo.vfs("rebasestate")
205 for i, l in enumerate(f.read().splitlines()):
205 for i, l in enumerate(f.read().splitlines()):
206 if i == 0:
206 if i == 0:
207 originalwd = repo[l].rev()
207 originalwd = repo[l].rev()
208 elif i == 1:
208 elif i == 1:
209 dest = repo[l].rev()
209 dest = repo[l].rev()
210 elif i == 2:
210 elif i == 2:
211 external = repo[l].rev()
211 external = repo[l].rev()
212 elif i == 3:
212 elif i == 3:
213 collapse = bool(int(l))
213 collapse = bool(int(l))
214 elif i == 4:
214 elif i == 4:
215 keep = bool(int(l))
215 keep = bool(int(l))
216 elif i == 5:
216 elif i == 5:
217 keepbranches = bool(int(l))
217 keepbranches = bool(int(l))
218 elif i == 6 and not (len(l) == 81 and ':' in l):
218 elif i == 6 and not (len(l) == 81 and ':' in l):
219 # line 6 is a recent addition, so for backwards
219 # line 6 is a recent addition, so for backwards
220 # compatibility check that the line doesn't look like the
220 # compatibility check that the line doesn't look like the
221 # oldrev:newrev lines
221 # oldrev:newrev lines
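# (hedged aside, not in the original comment: 81 characters is exactly two
# 40-character hex nodes joined by ':', i.e. the oldrev:newrev shape)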
222 activebookmark = l
222 activebookmark = l
223 else:
223 else:
224 oldrev, newrev = l.split(':')
224 oldrev, newrev = l.split(':')
225 if newrev in (str(nullmerge), str(revignored),
225 if newrev in (str(nullmerge), str(revignored),
226 str(revprecursor), str(revpruned)):
226 str(revprecursor), str(revpruned)):
227 state[repo[oldrev].rev()] = int(newrev)
227 state[repo[oldrev].rev()] = int(newrev)
228 elif newrev == nullid:
228 elif newrev == nullid:
229 state[repo[oldrev].rev()] = revtodo
229 state[repo[oldrev].rev()] = revtodo
230 # Legacy compat special case
230 # Legacy compat special case
231 else:
231 else:
232 state[repo[oldrev].rev()] = repo[newrev].rev()
232 state[repo[oldrev].rev()] = repo[newrev].rev()
233
233
234 except IOError as err:
234 except IOError as err:
235 if err.errno != errno.ENOENT:
235 if err.errno != errno.ENOENT:
236 raise
236 raise
237 cmdutil.wrongtooltocontinue(repo, _('rebase'))
237 cmdutil.wrongtooltocontinue(repo, _('rebase'))
238
238
239 if keepbranches is None:
239 if keepbranches is None:
240 raise error.Abort(_('.hg/rebasestate is incomplete'))
240 raise error.Abort(_('.hg/rebasestate is incomplete'))
241
241
242 skipped = set()
242 skipped = set()
243 # recompute the set of skipped revs
243 # recompute the set of skipped revs
244 if not collapse:
244 if not collapse:
245 seen = {dest}
245 seen = {dest}
246 for old, new in sorted(state.items()):
246 for old, new in sorted(state.items()):
247 if new != revtodo and new in seen:
247 if new != revtodo and new in seen:
248 skipped.add(old)
248 skipped.add(old)
249 seen.add(new)
249 seen.add(new)
250 repo.ui.debug('computed skipped revs: %s\n' %
250 repo.ui.debug('computed skipped revs: %s\n' %
251 (' '.join(str(r) for r in sorted(skipped)) or None))
251 (' '.join(str(r) for r in sorted(skipped)) or None))
252 repo.ui.debug('rebase status resumed\n')
252 repo.ui.debug('rebase status resumed\n')
253 _setrebasesetvisibility(repo, set(state.keys()) | {originalwd})
253 _setrebasesetvisibility(repo, set(state.keys()) | {originalwd})
254
254
255 self.originalwd = originalwd
255 self.originalwd = originalwd
256 self.dest = dest
256 self.dest = dest
257 self.state = state
257 self.state = state
258 self.skipped = skipped
258 self.skipped = skipped
259 self.collapsef = collapse
259 self.collapsef = collapse
260 self.keepf = keep
260 self.keepf = keep
261 self.keepbranchesf = keepbranches
261 self.keepbranchesf = keepbranches
262 self.external = external
262 self.external = external
263 self.activebookmark = activebookmark
263 self.activebookmark = activebookmark
264
264
265 def _handleskippingobsolete(self, rebaserevs, obsoleterevs, dest):
265 def _handleskippingobsolete(self, rebaserevs, obsoleterevs, dest):
266 """Compute structures necessary for skipping obsolete revisions
266 """Compute structures necessary for skipping obsolete revisions
267
267
268 rebaserevs: iterable of all revisions that are to be rebased
268 rebaserevs: iterable of all revisions that are to be rebased
269 obsoleterevs: iterable of all obsolete revisions in rebaseset
269 obsoleterevs: iterable of all obsolete revisions in rebaseset
270 dest: a destination revision for the rebase operation
270 dest: a destination revision for the rebase operation
271 """
271 """
272 self.obsoletenotrebased = {}
272 self.obsoletenotrebased = {}
273 if not self.ui.configbool('experimental', 'rebaseskipobsolete',
273 if not self.ui.configbool('experimental', 'rebaseskipobsolete',
274 default=True):
274 default=True):
275 return
275 return
276 rebaseset = set(rebaserevs)
276 rebaseset = set(rebaserevs)
277 obsoleteset = set(obsoleterevs)
277 obsoleteset = set(obsoleterevs)
278 self.obsoletenotrebased = _computeobsoletenotrebased(self.repo,
278 self.obsoletenotrebased = _computeobsoletenotrebased(self.repo,
279 obsoleteset, dest)
279 obsoleteset, dest)
280 skippedset = set(self.obsoletenotrebased)
280 skippedset = set(self.obsoletenotrebased)
281 _checkobsrebase(self.repo, self.ui, obsoleteset, rebaseset, skippedset)
281 _checkobsrebase(self.repo, self.ui, obsoleteset, rebaseset, skippedset)
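# A hedged clarification (not part of rebase.py): after this method runs,
# self.obsoletenotrebased maps each obsolete revision that will be skipped to
# the revision already standing in for it in the destination; _performrebase
# later uses it for the revprecursor case, e.g.
# `destctx = repo[self.obsoletenotrebased[rev]]`.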
282
282
283 def _prepareabortorcontinue(self, isabort):
283 def _prepareabortorcontinue(self, isabort):
284 try:
284 try:
285 self.restorestatus()
285 self.restorestatus()
286 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
286 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
287 except error.RepoLookupError:
287 except error.RepoLookupError:
288 if isabort:
288 if isabort:
289 clearstatus(self.repo)
289 clearstatus(self.repo)
290 clearcollapsemsg(self.repo)
290 clearcollapsemsg(self.repo)
291 self.repo.ui.warn(_('rebase aborted (no revision is removed,'
291 self.repo.ui.warn(_('rebase aborted (no revision is removed,'
292 ' only broken state is cleared)\n'))
292 ' only broken state is cleared)\n'))
293 return 0
293 return 0
294 else:
294 else:
295 msg = _('cannot continue inconsistent rebase')
295 msg = _('cannot continue inconsistent rebase')
296 hint = _('use "hg rebase --abort" to clear broken state')
296 hint = _('use "hg rebase --abort" to clear broken state')
297 raise error.Abort(msg, hint=hint)
297 raise error.Abort(msg, hint=hint)
298 if isabort:
298 if isabort:
299 return abort(self.repo, self.originalwd, self.dest,
299 return abort(self.repo, self.originalwd, self.dest,
300 self.state, activebookmark=self.activebookmark)
300 self.state, activebookmark=self.activebookmark)
301
301
302 obsrevs = (r for r, st in self.state.items() if st == revprecursor)
302 obsrevs = (r for r, st in self.state.items() if st == revprecursor)
303 self._handleskippingobsolete(self.state.keys(), obsrevs, self.dest)
303 self._handleskippingobsolete(self.state.keys(), obsrevs, self.dest)
304
304
305 def _preparenewrebase(self, dest, rebaseset):
305 def _preparenewrebase(self, dest, rebaseset):
306 if dest is None:
306 if dest is None:
307 return _nothingtorebase()
307 return _nothingtorebase()
308
308
309 allowunstable = obsolete.isenabled(self.repo, obsolete.allowunstableopt)
309 allowunstable = obsolete.isenabled(self.repo, obsolete.allowunstableopt)
310 if (not (self.keepf or allowunstable)
310 if (not (self.keepf or allowunstable)
311 and self.repo.revs('first(children(%ld) - %ld)',
311 and self.repo.revs('first(children(%ld) - %ld)',
312 rebaseset, rebaseset)):
312 rebaseset, rebaseset)):
313 raise error.Abort(
313 raise error.Abort(
314 _("can't remove original changesets with"
314 _("can't remove original changesets with"
315 " unrebased descendants"),
315 " unrebased descendants"),
316 hint=_('use --keep to keep original changesets'))
316 hint=_('use --keep to keep original changesets'))
317
317
318 obsrevs = _filterobsoleterevs(self.repo, set(rebaseset))
318 obsrevs = _filterobsoleterevs(self.repo, set(rebaseset))
319 self._handleskippingobsolete(rebaseset, obsrevs, dest)
319 self._handleskippingobsolete(rebaseset, obsrevs, dest)
320
320
321 result = buildstate(self.repo, dest, rebaseset, self.collapsef,
321 result = buildstate(self.repo, dest, rebaseset, self.collapsef,
322 self.obsoletenotrebased)
322 self.obsoletenotrebased)
323
323
324 if not result:
324 if not result:
325 # Empty state built, nothing to rebase
325 # Empty state built, nothing to rebase
326 self.ui.status(_('nothing to rebase\n'))
326 self.ui.status(_('nothing to rebase\n'))
327 return _nothingtorebase()
327 return _nothingtorebase()
328
328
329 for root in self.repo.set('roots(%ld)', rebaseset):
329 for root in self.repo.set('roots(%ld)', rebaseset):
330 if not self.keepf and not root.mutable():
330 if not self.keepf and not root.mutable():
331 raise error.Abort(_("can't rebase public changeset %s")
331 raise error.Abort(_("can't rebase public changeset %s")
332 % root,
332 % root,
333 hint=_("see 'hg help phases' for details"))
333 hint=_("see 'hg help phases' for details"))
334
334
335 (self.originalwd, self.dest, self.state) = result
335 (self.originalwd, self.dest, self.state) = result
336 if self.collapsef:
336 if self.collapsef:
337 self.destancestors = self.repo.changelog.ancestors(
337 self.destancestors = self.repo.changelog.ancestors(
338 [self.dest],
338 [self.dest],
339 inclusive=True)
339 inclusive=True)
340 self.external = externalparent(self.repo, self.state,
340 self.external = externalparent(self.repo, self.state,
341 self.destancestors)
341 self.destancestors)
342
342
343 if dest.closesbranch() and not self.keepbranchesf:
343 if dest.closesbranch() and not self.keepbranchesf:
344 self.ui.status(_('reopening closed branch head %s\n') % dest)
344 self.ui.status(_('reopening closed branch head %s\n') % dest)
345
345
346 def _performrebase(self, tr):
346 def _performrebase(self, tr):
347 repo, ui, opts = self.repo, self.ui, self.opts
347 repo, ui, opts = self.repo, self.ui, self.opts
348 if self.keepbranchesf:
348 if self.keepbranchesf:
349 # insert _savebranch at the start of extrafns so if
349 # insert _savebranch at the start of extrafns so if
350 # there's a user-provided extrafn it can clobber branch if
350 # there's a user-provided extrafn it can clobber branch if
351 # desired
351 # desired
352 self.extrafns.insert(0, _savebranch)
352 self.extrafns.insert(0, _savebranch)
353 if self.collapsef:
353 if self.collapsef:
354 branches = set()
354 branches = set()
355 for rev in self.state:
355 for rev in self.state:
356 branches.add(repo[rev].branch())
356 branches.add(repo[rev].branch())
357 if len(branches) > 1:
357 if len(branches) > 1:
358 raise error.Abort(_('cannot collapse multiple named '
358 raise error.Abort(_('cannot collapse multiple named '
359 'branches'))
359 'branches'))
360
360
361 # Rebase
361 # Rebase
362 if not self.destancestors:
362 if not self.destancestors:
363 self.destancestors = repo.changelog.ancestors([self.dest],
363 self.destancestors = repo.changelog.ancestors([self.dest],
364 inclusive=True)
364 inclusive=True)
365
365
366 # Keep track of the current bookmarks in order to reset them later
366 # Keep track of the current bookmarks in order to reset them later
367 self.currentbookmarks = repo._bookmarks.copy()
367 self.currentbookmarks = repo._bookmarks.copy()
368 self.activebookmark = self.activebookmark or repo._activebookmark
368 self.activebookmark = self.activebookmark or repo._activebookmark
369 if self.activebookmark:
369 if self.activebookmark:
370 bookmarks.deactivate(repo)
370 bookmarks.deactivate(repo)
371
371
372 # Store the state before we begin so users can run 'hg rebase --abort'
372 # Store the state before we begin so users can run 'hg rebase --abort'
373 # if we fail before the transaction closes.
373 # if we fail before the transaction closes.
374 self.storestatus()
374 self.storestatus()
375
375
376 sortedrevs = repo.revs('sort(%ld, -topo)', self.state)
376 sortedrevs = repo.revs('sort(%ld, -topo)', self.state)
377 cands = [k for k, v in self.state.iteritems() if v == revtodo]
377 cands = [k for k, v in self.state.iteritems() if v == revtodo]
378 total = len(cands)
378 total = len(cands)
379 pos = 0
379 pos = 0
380 for rev in sortedrevs:
380 for rev in sortedrevs:
381 ctx = repo[rev]
381 ctx = repo[rev]
382 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
382 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
383 ctx.description().split('\n', 1)[0])
383 ctx.description().split('\n', 1)[0])
384 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
384 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
385 if names:
385 if names:
386 desc += ' (%s)' % ' '.join(names)
386 desc += ' (%s)' % ' '.join(names)
387 if self.state[rev] == rev:
387 if self.state[rev] == rev:
388 ui.status(_('already rebased %s\n') % desc)
388 ui.status(_('already rebased %s\n') % desc)
389 elif self.state[rev] == revtodo:
389 elif self.state[rev] == revtodo:
390 pos += 1
390 pos += 1
391 ui.status(_('rebasing %s\n') % desc)
391 ui.status(_('rebasing %s\n') % desc)
392 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
392 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
393 _('changesets'), total)
393 _('changesets'), total)
394 p1, p2, base = defineparents(repo, rev, self.dest,
394 p1, p2, base = defineparents(repo, rev, self.dest,
395 self.state,
395 self.state,
396 self.destancestors,
396 self.destancestors,
397 self.obsoletenotrebased)
397 self.obsoletenotrebased)
398 self.storestatus(tr=tr)
398 self.storestatus(tr=tr)
399 storecollapsemsg(repo, self.collapsemsg)
399 storecollapsemsg(repo, self.collapsemsg)
400 if len(repo[None].parents()) == 2:
400 if len(repo[None].parents()) == 2:
401 repo.ui.debug('resuming interrupted rebase\n')
401 repo.ui.debug('resuming interrupted rebase\n')
402 else:
402 else:
403 try:
403 try:
404 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
404 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
405 'rebase')
405 'rebase')
406 stats = rebasenode(repo, rev, p1, base, self.state,
406 stats = rebasenode(repo, rev, p1, base, self.state,
407 self.collapsef, self.dest)
407 self.collapsef, self.dest)
408 if stats and stats[3] > 0:
408 if stats and stats[3] > 0:
409 raise error.InterventionRequired(
409 raise error.InterventionRequired(
410 _('unresolved conflicts (see hg '
410 _('unresolved conflicts (see hg '
411 'resolve, then hg rebase --continue)'))
411 'resolve, then hg rebase --continue)'))
412 finally:
412 finally:
413 ui.setconfig('ui', 'forcemerge', '', 'rebase')
413 ui.setconfig('ui', 'forcemerge', '', 'rebase')
414 if not self.collapsef:
414 if not self.collapsef:
415 merging = p2 != nullrev
415 merging = p2 != nullrev
416 editform = cmdutil.mergeeditform(merging, 'rebase')
416 editform = cmdutil.mergeeditform(merging, 'rebase')
417 editor = cmdutil.getcommiteditor(editform=editform, **opts)
417 editor = cmdutil.getcommiteditor(editform=editform, **opts)
418 newnode = concludenode(repo, rev, p1, p2,
418 newnode = concludenode(repo, rev, p1, p2,
419 extrafn=_makeextrafn(self.extrafns),
419 extrafn=_makeextrafn(self.extrafns),
420 editor=editor,
420 editor=editor,
421 keepbranches=self.keepbranchesf,
421 keepbranches=self.keepbranchesf,
422 date=self.date)
422 date=self.date)
423 if newnode is None:
423 if newnode is None:
424 # If it ended up being a no-op commit, then the normal
424 # If it ended up being a no-op commit, then the normal
425 # merge state clean-up path doesn't happen, so do it
425 # merge state clean-up path doesn't happen, so do it
426 # here. Fix issue5494
426 # here. Fix issue5494
427 mergemod.mergestate.clean(repo)
427 mergemod.mergestate.clean(repo)
428 else:
428 else:
429 # Skip commit if we are collapsing
429 # Skip commit if we are collapsing
430 repo.dirstate.beginparentchange()
430 repo.dirstate.beginparentchange()
431 repo.setparents(repo[p1].node())
431 repo.setparents(repo[p1].node())
432 repo.dirstate.endparentchange()
432 repo.dirstate.endparentchange()
433 newnode = None
433 newnode = None
434 # Update the state
434 # Update the state
435 if newnode is not None:
435 if newnode is not None:
436 self.state[rev] = repo[newnode].rev()
436 self.state[rev] = repo[newnode].rev()
437 ui.debug('rebased as %s\n' % short(newnode))
437 ui.debug('rebased as %s\n' % short(newnode))
438 else:
438 else:
439 if not self.collapsef:
439 if not self.collapsef:
440 ui.warn(_('note: rebase of %d:%s created no changes '
440 ui.warn(_('note: rebase of %d:%s created no changes '
441 'to commit\n') % (rev, ctx))
441 'to commit\n') % (rev, ctx))
442 self.skipped.add(rev)
442 self.skipped.add(rev)
443 self.state[rev] = p1
443 self.state[rev] = p1
444 ui.debug('next revision set to %s\n' % p1)
444 ui.debug('next revision set to %s\n' % p1)
445 elif self.state[rev] == nullmerge:
445 elif self.state[rev] == nullmerge:
446 ui.debug('ignoring null merge rebase of %s\n' % rev)
446 ui.debug('ignoring null merge rebase of %s\n' % rev)
447 elif self.state[rev] == revignored:
447 elif self.state[rev] == revignored:
448 ui.status(_('not rebasing ignored %s\n') % desc)
448 ui.status(_('not rebasing ignored %s\n') % desc)
449 elif self.state[rev] == revprecursor:
449 elif self.state[rev] == revprecursor:
450 destctx = repo[self.obsoletenotrebased[rev]]
450 destctx = repo[self.obsoletenotrebased[rev]]
451 descdest = '%d:%s "%s"' % (destctx.rev(), destctx,
451 descdest = '%d:%s "%s"' % (destctx.rev(), destctx,
452 destctx.description().split('\n', 1)[0])
452 destctx.description().split('\n', 1)[0])
453 msg = _('note: not rebasing %s, already in destination as %s\n')
453 msg = _('note: not rebasing %s, already in destination as %s\n')
454 ui.status(msg % (desc, descdest))
454 ui.status(msg % (desc, descdest))
455 elif self.state[rev] == revpruned:
455 elif self.state[rev] == revpruned:
456 msg = _('note: not rebasing %s, it has no successor\n')
456 msg = _('note: not rebasing %s, it has no successor\n')
457 ui.status(msg % desc)
457 ui.status(msg % desc)
458 else:
458 else:
459 ui.status(_('already rebased %s as %s\n') %
459 ui.status(_('already rebased %s as %s\n') %
460 (desc, repo[self.state[rev]]))
460 (desc, repo[self.state[rev]]))
461
461
462 ui.progress(_('rebasing'), None)
462 ui.progress(_('rebasing'), None)
463 ui.note(_('rebase merging completed\n'))
463 ui.note(_('rebase merging completed\n'))
464
464
465 def _finishrebase(self):
465 def _finishrebase(self):
466 repo, ui, opts = self.repo, self.ui, self.opts
466 repo, ui, opts = self.repo, self.ui, self.opts
467 if self.collapsef and not self.keepopen:
467 if self.collapsef and not self.keepopen:
468 p1, p2, _base = defineparents(repo, min(self.state),
468 p1, p2, _base = defineparents(repo, min(self.state),
469 self.dest, self.state,
469 self.dest, self.state,
470 self.destancestors,
470 self.destancestors,
471 self.obsoletenotrebased)
471 self.obsoletenotrebased)
472 editopt = opts.get('edit')
472 editopt = opts.get('edit')
473 editform = 'rebase.collapse'
473 editform = 'rebase.collapse'
474 if self.collapsemsg:
474 if self.collapsemsg:
475 commitmsg = self.collapsemsg
475 commitmsg = self.collapsemsg
476 else:
476 else:
477 commitmsg = 'Collapsed revision'
477 commitmsg = 'Collapsed revision'
478 for rebased in self.state:
478 for rebased in self.state:
479 if rebased not in self.skipped and\
479 if rebased not in self.skipped and\
480 self.state[rebased] > nullmerge:
480 self.state[rebased] > nullmerge:
481 commitmsg += '\n* %s' % repo[rebased].description()
481 commitmsg += '\n* %s' % repo[rebased].description()
482 editopt = True
482 editopt = True
483 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
483 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
484 revtoreuse = max(self.state)
484 revtoreuse = max(self.state)
485 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
485 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
486 try:
486 try:
487 newnode = concludenode(repo, revtoreuse, p1, self.external,
487 newnode = concludenode(repo, revtoreuse, p1, self.external,
488 commitmsg=commitmsg,
488 commitmsg=commitmsg,
489 extrafn=_makeextrafn(self.extrafns),
489 extrafn=_makeextrafn(self.extrafns),
490 editor=editor,
490 editor=editor,
491 keepbranches=self.keepbranchesf,
491 keepbranches=self.keepbranchesf,
492 date=self.date)
492 date=self.date)
493 dsguard.close()
493 dsguard.close()
494 release(dsguard)
494 release(dsguard)
495 except error.InterventionRequired:
495 except error.InterventionRequired:
496 dsguard.close()
496 dsguard.close()
497 release(dsguard)
497 release(dsguard)
498 raise
498 raise
499 except Exception:
499 except Exception:
500 release(dsguard)
500 release(dsguard)
501 raise
501 raise
502
502
503 if newnode is None:
503 if newnode is None:
504 newrev = self.dest
504 newrev = self.dest
505 else:
505 else:
506 newrev = repo[newnode].rev()
506 newrev = repo[newnode].rev()
507 for oldrev in self.state.iterkeys():
507 for oldrev in self.state.iterkeys():
508 if self.state[oldrev] > nullmerge:
508 if self.state[oldrev] > nullmerge:
509 self.state[oldrev] = newrev
509 self.state[oldrev] = newrev
510
510
511 if 'qtip' in repo.tags():
511 if 'qtip' in repo.tags():
512 updatemq(repo, self.state, self.skipped, **opts)
512 updatemq(repo, self.state, self.skipped, **opts)
513
513
514 if self.currentbookmarks:
514 if self.currentbookmarks:
515 # Nodeids are needed to reset bookmarks
515 # Nodeids are needed to reset bookmarks
516 nstate = {}
516 nstate = {}
517 for k, v in self.state.iteritems():
517 for k, v in self.state.iteritems():
518 if v > nullmerge and v != k:
518 if v > nullmerge and v != k:
519 nstate[repo[k].node()] = repo[v].node()
519 nstate[repo[k].node()] = repo[v].node()
520 elif v == revprecursor:
520 elif v == revprecursor:
521 succ = self.obsoletenotrebased[k]
521 succ = self.obsoletenotrebased[k]
522 nstate[repo[k].node()] = repo[succ].node()
522 nstate[repo[k].node()] = repo[succ].node()
523 # XXX this is the same as dest.node() for the non-continue path --
523 # XXX this is the same as dest.node() for the non-continue path --
524 # this should probably be cleaned up
524 # this should probably be cleaned up
525 destnode = repo[self.dest].node()
525 destnode = repo[self.dest].node()
526
526
527 # restore original working directory
527 # restore original working directory
528 # (we do this before stripping)
528 # (we do this before stripping)
529 newwd = self.state.get(self.originalwd, self.originalwd)
529 newwd = self.state.get(self.originalwd, self.originalwd)
530 if newwd == revprecursor:
530 if newwd == revprecursor:
531 newwd = self.obsoletenotrebased[self.originalwd]
531 newwd = self.obsoletenotrebased[self.originalwd]
532 elif newwd < 0:
532 elif newwd < 0:
533 # original working directory is a parent of the rebase set root, or ignored
533 # original working directory is a parent of the rebase set root, or ignored
534 newwd = self.originalwd
534 newwd = self.originalwd
535 if newwd not in [c.rev() for c in repo[None].parents()]:
535 if newwd not in [c.rev() for c in repo[None].parents()]:
536 ui.note(_("update back to initial working directory parent\n"))
536 ui.note(_("update back to initial working directory parent\n"))
537 hg.updaterepo(repo, newwd, False)
537 hg.updaterepo(repo, newwd, False)
538
538
539 if self.currentbookmarks:
539 if self.currentbookmarks:
540 with repo.transaction('bookmark') as tr:
540 with repo.transaction('bookmark') as tr:
541 updatebookmarks(repo, destnode, nstate,
541 updatebookmarks(repo, destnode, nstate,
542 self.currentbookmarks, tr)
542 self.currentbookmarks, tr)
543 if self.activebookmark not in repo._bookmarks:
543 if self.activebookmark not in repo._bookmarks:
544 # active bookmark was divergent one and has been deleted
544 # active bookmark was divergent one and has been deleted
545 self.activebookmark = None
545 self.activebookmark = None
546
546
547 if not self.keepf:
547 if not self.keepf:
548 collapsedas = None
548 collapsedas = None
549 if self.collapsef:
549 if self.collapsef:
550 collapsedas = newnode
550 collapsedas = newnode
551 clearrebased(ui, repo, self.state, self.skipped, collapsedas)
551 clearrebased(ui, repo, self.state, self.skipped, collapsedas)
552
552
553 clearstatus(repo)
553 clearstatus(repo)
554 clearcollapsemsg(repo)
554 clearcollapsemsg(repo)
555
555
556 ui.note(_("rebase completed\n"))
556 ui.note(_("rebase completed\n"))
557 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
557 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
558 if self.skipped:
558 if self.skipped:
559 skippedlen = len(self.skipped)
559 skippedlen = len(self.skipped)
560 ui.note(_("%d revisions have been skipped\n") % skippedlen)
560 ui.note(_("%d revisions have been skipped\n") % skippedlen)
561
561
562 if (self.activebookmark and
562 if (self.activebookmark and
563 repo['.'].node() == repo._bookmarks[self.activebookmark]):
563 repo['.'].node() == repo._bookmarks[self.activebookmark]):
564 bookmarks.activate(repo, self.activebookmark)
564 bookmarks.activate(repo, self.activebookmark)
565
565
566 @command('rebase',
566 @command('rebase',
567 [('s', 'source', '',
567 [('s', 'source', '',
568 _('rebase the specified changeset and descendants'), _('REV')),
568 _('rebase the specified changeset and descendants'), _('REV')),
569 ('b', 'base', '',
569 ('b', 'base', '',
570 _('rebase everything from branching point of specified changeset'),
570 _('rebase everything from branching point of specified changeset'),
571 _('REV')),
571 _('REV')),
572 ('r', 'rev', [],
572 ('r', 'rev', [],
573 _('rebase these revisions'),
573 _('rebase these revisions'),
574 _('REV')),
574 _('REV')),
575 ('d', 'dest', '',
575 ('d', 'dest', '',
576 _('rebase onto the specified changeset'), _('REV')),
576 _('rebase onto the specified changeset'), _('REV')),
577 ('', 'collapse', False, _('collapse the rebased changesets')),
577 ('', 'collapse', False, _('collapse the rebased changesets')),
578 ('m', 'message', '',
578 ('m', 'message', '',
579 _('use text as collapse commit message'), _('TEXT')),
579 _('use text as collapse commit message'), _('TEXT')),
580 ('e', 'edit', False, _('invoke editor on commit messages')),
580 ('e', 'edit', False, _('invoke editor on commit messages')),
581 ('l', 'logfile', '',
581 ('l', 'logfile', '',
582 _('read collapse commit message from file'), _('FILE')),
582 _('read collapse commit message from file'), _('FILE')),
583 ('k', 'keep', False, _('keep original changesets')),
583 ('k', 'keep', False, _('keep original changesets')),
584 ('', 'keepbranches', False, _('keep original branch names')),
584 ('', 'keepbranches', False, _('keep original branch names')),
585 ('D', 'detach', False, _('(DEPRECATED)')),
585 ('D', 'detach', False, _('(DEPRECATED)')),
586 ('i', 'interactive', False, _('(DEPRECATED)')),
586 ('i', 'interactive', False, _('(DEPRECATED)')),
587 ('t', 'tool', '', _('specify merge tool')),
587 ('t', 'tool', '', _('specify merge tool')),
588 ('c', 'continue', False, _('continue an interrupted rebase')),
588 ('c', 'continue', False, _('continue an interrupted rebase')),
589 ('a', 'abort', False, _('abort an interrupted rebase'))] +
589 ('a', 'abort', False, _('abort an interrupted rebase'))] +
590 templateopts,
590 templateopts,
591 _('[-s REV | -b REV] [-d REV] [OPTION]'))
591 _('[-s REV | -b REV] [-d REV] [OPTION]'))
592 def rebase(ui, repo, **opts):
592 def rebase(ui, repo, **opts):
593 """move changeset (and descendants) to a different branch
593 """move changeset (and descendants) to a different branch
594
594
595 Rebase uses repeated merging to graft changesets from one part of
595 Rebase uses repeated merging to graft changesets from one part of
596 history (the source) onto another (the destination). This can be
596 history (the source) onto another (the destination). This can be
597 useful for linearizing *local* changes relative to a master
597 useful for linearizing *local* changes relative to a master
598 development tree.
598 development tree.
599
599
600 Published commits cannot be rebased (see :hg:`help phases`).
600 Published commits cannot be rebased (see :hg:`help phases`).
601 To copy commits, see :hg:`help graft`.
601 To copy commits, see :hg:`help graft`.
602
602
603 If you don't specify a destination changeset (``-d/--dest``), rebase
603 If you don't specify a destination changeset (``-d/--dest``), rebase
604 will use the same logic as :hg:`merge` to pick a destination. If
604 will use the same logic as :hg:`merge` to pick a destination. If
605 the current branch contains exactly one other head, the other head
605 the current branch contains exactly one other head, the other head
606 is merged with by default. Otherwise, an explicit revision with
606 is merged with by default. Otherwise, an explicit revision with
607 which to merge must be provided. (The destination changeset is not
607 which to merge must be provided. (The destination changeset is not
608 modified by rebasing, but new changesets are added as its
608 modified by rebasing, but new changesets are added as its
609 descendants.)
609 descendants.)
610
610
611 Here are the ways to select changesets:
611 Here are the ways to select changesets:
612
612
613 1. Explicitly select them using ``--rev``.
613 1. Explicitly select them using ``--rev``.
614
614
615 2. Use ``--source`` to select a root changeset and include all of its
615 2. Use ``--source`` to select a root changeset and include all of its
616 descendants.
616 descendants.
617
617
618 3. Use ``--base`` to select a changeset; rebase will find ancestors
618 3. Use ``--base`` to select a changeset; rebase will find ancestors
619 and their descendants which are not also ancestors of the destination.
619 and their descendants which are not also ancestors of the destination.
620
620
621 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
621 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
622 rebase will use ``--base .`` as above.
622 rebase will use ``--base .`` as above.
623
623
624 Rebase will destroy original changesets unless you use ``--keep``.
624 Rebase will destroy original changesets unless you use ``--keep``.
625 It will also move your bookmarks (even if you do).
625 It will also move your bookmarks (even if you do).
626
626
627 Some changesets may be dropped if they do not contribute changes
627 Some changesets may be dropped if they do not contribute changes
628 (e.g. merges from the destination branch).
628 (e.g. merges from the destination branch).
629
629
630 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
630 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
631 a named branch with two heads. You will need to explicitly specify source
631 a named branch with two heads. You will need to explicitly specify source
632 and/or destination.
632 and/or destination.
633
633
634 If you need to use a tool to automate merge/conflict decisions, you
634 If you need to use a tool to automate merge/conflict decisions, you
635 can specify one with ``--tool``, see :hg:`help merge-tools`.
635 can specify one with ``--tool``, see :hg:`help merge-tools`.
636 As a caveat: the tool will not be used to mediate when a file was
636 As a caveat: the tool will not be used to mediate when a file was
637 deleted; there is no hook presently available for this.
637 deleted; there is no hook presently available for this.
638
638
639 If a rebase is interrupted to manually resolve a conflict, it can be
639 If a rebase is interrupted to manually resolve a conflict, it can be
640 continued with --continue/-c or aborted with --abort/-a.
640 continued with --continue/-c or aborted with --abort/-a.
641
641
642 .. container:: verbose
642 .. container:: verbose
643
643
644 Examples:
644 Examples:
645
645
646 - move "local changes" (commits from the current commit back to the branching point)
646 - move "local changes" (commits from the current commit back to the branching point)
647 to the current branch tip after a pull::
647 to the current branch tip after a pull::
648
648
649 hg rebase
649 hg rebase
650
650
651 - move a single changeset to the stable branch::
651 - move a single changeset to the stable branch::
652
652
653 hg rebase -r 5f493448 -d stable
653 hg rebase -r 5f493448 -d stable
654
654
655 - splice a commit and all its descendants onto another part of history::
655 - splice a commit and all its descendants onto another part of history::
656
656
657 hg rebase --source c0c3 --dest 4cf9
657 hg rebase --source c0c3 --dest 4cf9
658
658
659 - rebase everything on a branch marked by a bookmark onto the
659 - rebase everything on a branch marked by a bookmark onto the
660 default branch::
660 default branch::
661
661
662 hg rebase --base myfeature --dest default
662 hg rebase --base myfeature --dest default
663
663
664 - collapse a sequence of changes into a single commit::
664 - collapse a sequence of changes into a single commit::
665
665
666 hg rebase --collapse -r 1520:1525 -d .
666 hg rebase --collapse -r 1520:1525 -d .
667
667
668 - move a named branch while preserving its name::
668 - move a named branch while preserving its name::
669
669
670 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
670 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
671
671
672 Configuration Options:
672 Configuration Options:
673
673
674 You can make rebase require a destination if you set the following config
674 You can make rebase require a destination if you set the following config
675 option::
675 option::
676
676
677 [commands]
677 [commands]
678 rebase.requiredest = True
678 rebase.requiredest = True
679
679
680 Return Values:
680 Return Values:
681
681
682 Returns 0 on success, 1 if nothing to rebase or there are
682 Returns 0 on success, 1 if nothing to rebase or there are
683 unresolved conflicts.
683 unresolved conflicts.
684
684
685 """
685 """
686 rbsrt = rebaseruntime(repo, ui, opts)
686 rbsrt = rebaseruntime(repo, ui, opts)
687
687
688 lock = wlock = None
688 lock = wlock = None
689 try:
689 try:
690 wlock = repo.wlock()
690 wlock = repo.wlock()
691 lock = repo.lock()
691 lock = repo.lock()
692
692
693 # Validate input and define rebasing points
693 # Validate input and define rebasing points
694 destf = opts.get('dest', None)
694 destf = opts.get('dest', None)
695 srcf = opts.get('source', None)
695 srcf = opts.get('source', None)
696 basef = opts.get('base', None)
696 basef = opts.get('base', None)
697 revf = opts.get('rev', [])
697 revf = opts.get('rev', [])
698 # search default destination in this space
698 # search default destination in this space
699 # used in the 'hg pull --rebase' case, see issue 5214.
699 # used in the 'hg pull --rebase' case, see issue 5214.
700 destspace = opts.get('_destspace')
700 destspace = opts.get('_destspace')
701 contf = opts.get('continue')
701 contf = opts.get('continue')
702 abortf = opts.get('abort')
702 abortf = opts.get('abort')
703 if opts.get('interactive'):
703 if opts.get('interactive'):
704 try:
704 try:
705 if extensions.find('histedit'):
705 if extensions.find('histedit'):
706 enablehistedit = ''
706 enablehistedit = ''
707 except KeyError:
707 except KeyError:
708 enablehistedit = " --config extensions.histedit="
708 enablehistedit = " --config extensions.histedit="
709 help = "hg%s help -e histedit" % enablehistedit
709 help = "hg%s help -e histedit" % enablehistedit
710 msg = _("interactive history editing is supported by the "
710 msg = _("interactive history editing is supported by the "
711 "'histedit' extension (see \"%s\")") % help
711 "'histedit' extension (see \"%s\")") % help
712 raise error.Abort(msg)
712 raise error.Abort(msg)
713
713
714 if rbsrt.collapsemsg and not rbsrt.collapsef:
714 if rbsrt.collapsemsg and not rbsrt.collapsef:
715 raise error.Abort(
715 raise error.Abort(
716 _('message can only be specified with collapse'))
716 _('message can only be specified with collapse'))
717
717
718 if contf or abortf:
718 if contf or abortf:
719 if contf and abortf:
719 if contf and abortf:
720 raise error.Abort(_('cannot use both abort and continue'))
720 raise error.Abort(_('cannot use both abort and continue'))
721 if rbsrt.collapsef:
721 if rbsrt.collapsef:
722 raise error.Abort(
722 raise error.Abort(
723 _('cannot use collapse with continue or abort'))
723 _('cannot use collapse with continue or abort'))
724 if srcf or basef or destf:
724 if srcf or basef or destf:
725 raise error.Abort(
725 raise error.Abort(
726 _('abort and continue do not allow specifying revisions'))
726 _('abort and continue do not allow specifying revisions'))
727 if abortf and opts.get('tool', False):
727 if abortf and opts.get('tool', False):
728 ui.warn(_('tool option will be ignored\n'))
728 ui.warn(_('tool option will be ignored\n'))
729 if contf:
729 if contf:
730 ms = mergemod.mergestate.read(repo)
730 ms = mergemod.mergestate.read(repo)
731 mergeutil.checkunresolved(ms)
731 mergeutil.checkunresolved(ms)
732
732
733 retcode = rbsrt._prepareabortorcontinue(abortf)
733 retcode = rbsrt._prepareabortorcontinue(abortf)
734 if retcode is not None:
734 if retcode is not None:
735 return retcode
735 return retcode
736 else:
736 else:
737 dest, rebaseset = _definesets(ui, repo, destf, srcf, basef, revf,
737 dest, rebaseset = _definesets(ui, repo, destf, srcf, basef, revf,
738 destspace=destspace)
738 destspace=destspace)
739 retcode = rbsrt._preparenewrebase(dest, rebaseset)
739 retcode = rbsrt._preparenewrebase(dest, rebaseset)
740 if retcode is not None:
740 if retcode is not None:
741 return retcode
741 return retcode
742
742
743 with repo.transaction('rebase') as tr:
743 with repo.transaction('rebase') as tr:
744 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
744 dsguard = dirstateguard.dirstateguard(repo, 'rebase')
745 try:
745 try:
746 rbsrt._performrebase(tr)
746 rbsrt._performrebase(tr)
747 dsguard.close()
747 dsguard.close()
748 release(dsguard)
748 release(dsguard)
749 except error.InterventionRequired:
749 except error.InterventionRequired:
750 dsguard.close()
750 dsguard.close()
751 release(dsguard)
751 release(dsguard)
752 tr.close()
752 tr.close()
753 raise
753 raise
754 except Exception:
754 except Exception:
755 release(dsguard)
755 release(dsguard)
756 raise
756 raise
757 rbsrt._finishrebase()
757 rbsrt._finishrebase()
758 finally:
758 finally:
759 release(lock, wlock)
759 release(lock, wlock)
760
760
761 def _definesets(ui, repo, destf=None, srcf=None, basef=None, revf=None,
761 def _definesets(ui, repo, destf=None, srcf=None, basef=None, revf=None,
762 destspace=None):
762 destspace=None):
763 """use revisions argument to define destination and rebase set
763 """use revisions argument to define destination and rebase set
764 """
764 """
765 if revf is None:
765 if revf is None:
766 revf = []
766 revf = []
767
767
768 # destspace is here to work around issues with `hg pull --rebase` see
768 # destspace is here to work around issues with `hg pull --rebase` see
769 # issue5214 for details
769 # issue5214 for details
770 if srcf and basef:
770 if srcf and basef:
771 raise error.Abort(_('cannot specify both a source and a base'))
771 raise error.Abort(_('cannot specify both a source and a base'))
772 if revf and basef:
772 if revf and basef:
773 raise error.Abort(_('cannot specify both a revision and a base'))
773 raise error.Abort(_('cannot specify both a revision and a base'))
774 if revf and srcf:
774 if revf and srcf:
775 raise error.Abort(_('cannot specify both a revision and a source'))
775 raise error.Abort(_('cannot specify both a revision and a source'))
776
776
777 cmdutil.checkunfinished(repo)
777 cmdutil.checkunfinished(repo)
778 cmdutil.bailifchanged(repo)
778 cmdutil.bailifchanged(repo)
779
779
780 if ui.configbool('commands', 'rebase.requiredest') and not destf:
780 if ui.configbool('commands', 'rebase.requiredest') and not destf:
781 raise error.Abort(_('you must specify a destination'),
781 raise error.Abort(_('you must specify a destination'),
782 hint=_('use: hg rebase -d REV'))
782 hint=_('use: hg rebase -d REV'))
783
783
784 if destf:
784 if destf:
785 dest = scmutil.revsingle(repo, destf)
785 dest = scmutil.revsingle(repo, destf)
786
786
787 if revf:
787 if revf:
788 rebaseset = scmutil.revrange(repo, revf)
788 rebaseset = scmutil.revrange(repo, revf)
789 if not rebaseset:
789 if not rebaseset:
790 ui.status(_('empty "rev" revision set - nothing to rebase\n'))
790 ui.status(_('empty "rev" revision set - nothing to rebase\n'))
791 return None, None
791 return None, None
792 elif srcf:
792 elif srcf:
793 src = scmutil.revrange(repo, [srcf])
793 src = scmutil.revrange(repo, [srcf])
794 if not src:
794 if not src:
795 ui.status(_('empty "source" revision set - nothing to rebase\n'))
795 ui.status(_('empty "source" revision set - nothing to rebase\n'))
796 return None, None
796 return None, None
797 rebaseset = repo.revs('(%ld)::', src)
797 rebaseset = repo.revs('(%ld)::', src)
798 assert rebaseset
798 assert rebaseset
799 else:
799 else:
800 base = scmutil.revrange(repo, [basef or '.'])
800 base = scmutil.revrange(repo, [basef or '.'])
801 if not base:
801 if not base:
802 ui.status(_('empty "base" revision set - '
802 ui.status(_('empty "base" revision set - '
803 "can't compute rebase set\n"))
803 "can't compute rebase set\n"))
804 return None, None
804 return None, None
805 if not destf:
805 if not destf:
806 dest = repo[_destrebase(repo, base, destspace=destspace)]
806 dest = repo[_destrebase(repo, base, destspace=destspace)]
807 destf = str(dest)
807 destf = str(dest)
808
808
809 roots = [] # selected children of branching points
809 roots = [] # selected children of branching points
810 bpbase = {} # {branchingpoint: [origbase]}
810 bpbase = {} # {branchingpoint: [origbase]}
811 for b in base: # group bases by branching points
811 for b in base: # group bases by branching points
812 bp = repo.revs('ancestor(%d, %d)', b, dest).first()
812 bp = repo.revs('ancestor(%d, %d)', b, dest).first()
813 bpbase[bp] = bpbase.get(bp, []) + [b]
813 bpbase[bp] = bpbase.get(bp, []) + [b]
814 if None in bpbase:
814 if None in bpbase:
815 # emulate the old behavior, showing "nothing to rebase" (a better
815 # emulate the old behavior, showing "nothing to rebase" (a better
816 # behavior might be to abort with a "cannot find branching point" error)
816 # behavior might be to abort with a "cannot find branching point" error)
817 bpbase.clear()
817 bpbase.clear()
818 for bp, bs in bpbase.iteritems(): # calculate roots
818 for bp, bs in bpbase.iteritems(): # calculate roots
819 roots += list(repo.revs('children(%d) & ancestors(%ld)', bp, bs))
819 roots += list(repo.revs('children(%d) & ancestors(%ld)', bp, bs))
820
820
821 rebaseset = repo.revs('%ld::', roots)
821 rebaseset = repo.revs('%ld::', roots)
822
822
823 if not rebaseset:
823 if not rebaseset:
824 # transform to list because smartsets are not comparable to
824 # transform to list because smartsets are not comparable to
825 # lists. This should be improved to honor laziness of
825 # lists. This should be improved to honor laziness of
826 # smartset.
826 # smartset.
827 if list(base) == [dest.rev()]:
827 if list(base) == [dest.rev()]:
828 if basef:
828 if basef:
829 ui.status(_('nothing to rebase - %s is both "base"'
829 ui.status(_('nothing to rebase - %s is both "base"'
830 ' and destination\n') % dest)
830 ' and destination\n') % dest)
831 else:
831 else:
832 ui.status(_('nothing to rebase - working directory '
832 ui.status(_('nothing to rebase - working directory '
833 'parent is also destination\n'))
833 'parent is also destination\n'))
834 elif not repo.revs('%ld - ::%d', base, dest):
834 elif not repo.revs('%ld - ::%d', base, dest):
835 if basef:
835 if basef:
836 ui.status(_('nothing to rebase - "base" %s is '
836 ui.status(_('nothing to rebase - "base" %s is '
837 'already an ancestor of destination '
837 'already an ancestor of destination '
838 '%s\n') %
838 '%s\n') %
839 ('+'.join(str(repo[r]) for r in base),
839 ('+'.join(str(repo[r]) for r in base),
840 dest))
840 dest))
841 else:
841 else:
842 ui.status(_('nothing to rebase - working '
842 ui.status(_('nothing to rebase - working '
843 'directory parent is already an '
843 'directory parent is already an '
844 'ancestor of destination %s\n') % dest)
844 'ancestor of destination %s\n') % dest)
845 else: # can it happen?
845 else: # can it happen?
846 ui.status(_('nothing to rebase from %s to %s\n') %
846 ui.status(_('nothing to rebase from %s to %s\n') %
847 ('+'.join(str(repo[r]) for r in base), dest))
847 ('+'.join(str(repo[r]) for r in base), dest))
848 return None, None
848 return None, None
849
849
850 if not destf:
850 if not destf:
851 dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
851 dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
852 destf = str(dest)
852 destf = str(dest)
853
853
854 return dest, rebaseset
854 return dest, rebaseset
855
855
856 def externalparent(repo, state, destancestors):
856 def externalparent(repo, state, destancestors):
857 """Return the revision that should be used as the second parent
857 """Return the revision that should be used as the second parent
858 when the revisions in state are collapsed on top of destancestors.
858 when the revisions in state are collapsed on top of destancestors.
859 Abort if there is more than one parent.
859 Abort if there is more than one parent.
860 """
860 """
861 parents = set()
861 parents = set()
862 source = min(state)
862 source = min(state)
863 for rev in state:
863 for rev in state:
864 if rev == source:
864 if rev == source:
865 continue
865 continue
866 for p in repo[rev].parents():
866 for p in repo[rev].parents():
867 if (p.rev() not in state
867 if (p.rev() not in state
868 and p.rev() not in destancestors):
868 and p.rev() not in destancestors):
869 parents.add(p.rev())
869 parents.add(p.rev())
870 if not parents:
870 if not parents:
871 return nullrev
871 return nullrev
872 if len(parents) == 1:
872 if len(parents) == 1:
873 return parents.pop()
873 return parents.pop()
874 raise error.Abort(_('unable to collapse on top of %s, there is more '
874 raise error.Abort(_('unable to collapse on top of %s, there is more '
875 'than one external parent: %s') %
875 'than one external parent: %s') %
876 (max(destancestors),
876 (max(destancestors),
877 ', '.join(str(p) for p in sorted(parents))))
877 ', '.join(str(p) for p in sorted(parents))))
878
878
879 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
879 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
880 keepbranches=False, date=None):
880 keepbranches=False, date=None):
881 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
881 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
882 but also store useful information in extra.
882 but also store useful information in extra.
883 Return node of committed revision.'''
883 Return node of committed revision.'''
884 repo.setparents(repo[p1].node(), repo[p2].node())
884 repo.setparents(repo[p1].node(), repo[p2].node())
885 ctx = repo[rev]
885 ctx = repo[rev]
886 if commitmsg is None:
886 if commitmsg is None:
887 commitmsg = ctx.description()
887 commitmsg = ctx.description()
888 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
888 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
889 extra = {'rebase_source': ctx.hex()}
889 extra = {'rebase_source': ctx.hex()}
890 if extrafn:
890 if extrafn:
891 extrafn(ctx, extra)
891 extrafn(ctx, extra)
892
892
893 destphase = max(ctx.phase(), phases.draft)
893 destphase = max(ctx.phase(), phases.draft)
894 overrides = {('phases', 'new-commit'): destphase}
894 overrides = {('phases', 'new-commit'): destphase}
895 with repo.ui.configoverride(overrides, 'rebase'):
895 with repo.ui.configoverride(overrides, 'rebase'):
896 if keepbranch:
896 if keepbranch:
897 repo.ui.setconfig('ui', 'allowemptycommit', True)
897 repo.ui.setconfig('ui', 'allowemptycommit', True)
898 # Commit might fail if unresolved files exist
898 # Commit might fail if unresolved files exist
899 if date is None:
899 if date is None:
900 date = ctx.date()
900 date = ctx.date()
901 newnode = repo.commit(text=commitmsg, user=ctx.user(),
901 newnode = repo.commit(text=commitmsg, user=ctx.user(),
902 date=date, extra=extra, editor=editor)
902 date=date, extra=extra, editor=editor)
903
903
904 repo.dirstate.setbranch(repo[newnode].branch())
904 repo.dirstate.setbranch(repo[newnode].branch())
905 return newnode
905 return newnode
906
906
907 def rebasenode(repo, rev, p1, base, state, collapse, dest):
907 def rebasenode(repo, rev, p1, base, state, collapse, dest):
908 'Rebase a single revision rev on top of p1 using base as merge ancestor'
908 'Rebase a single revision rev on top of p1 using base as merge ancestor'
909 # Merge phase
909 # Merge phase
910 # Update to destination and merge it with local
910 # Update to destination and merge it with local
911 if repo['.'].rev() != p1:
911 if repo['.'].rev() != p1:
912 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
912 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
913 mergemod.update(repo, p1, False, True)
913 mergemod.update(repo, p1, False, True)
914 else:
914 else:
915 repo.ui.debug(" already in destination\n")
915 repo.ui.debug(" already in destination\n")
916 repo.dirstate.write(repo.currenttransaction())
916 repo.dirstate.write(repo.currenttransaction())
917 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
917 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
918 if base is not None:
918 if base is not None:
919 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
919 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
920 # When collapsing in-place, the parent is the common ancestor, so we
920 # When collapsing in-place, the parent is the common ancestor, so we
921 # have to allow merging with it.
921 # have to allow merging with it.
922 stats = mergemod.update(repo, rev, True, True, base, collapse,
922 stats = mergemod.update(repo, rev, True, True, base, collapse,
923 labels=['dest', 'source'])
923 labels=['dest', 'source'])
924 if collapse:
924 if collapse:
925 copies.duplicatecopies(repo, rev, dest)
925 copies.duplicatecopies(repo, rev, dest)
926 else:
926 else:
927 # If we're not using --collapse, we need to
927 # If we're not using --collapse, we need to
928 # duplicate copies between the revision we're
928 # duplicate copies between the revision we're
929 # rebasing and its first parent, but *not*
929 # rebasing and its first parent, but *not*
930 # duplicate any copies that have already been
930 # duplicate any copies that have already been
931 # performed in the destination.
931 # performed in the destination.
932 p1rev = repo[rev].p1().rev()
932 p1rev = repo[rev].p1().rev()
933 copies.duplicatecopies(repo, rev, p1rev, skiprev=dest)
933 copies.duplicatecopies(repo, rev, p1rev, skiprev=dest)
934 return stats
934 return stats
935
935
936 def nearestrebased(repo, rev, state):
936 def nearestrebased(repo, rev, state):
937 """return the nearest ancestors of rev in the rebase result"""
937 """return the nearest ancestors of rev in the rebase result"""
938 rebased = [r for r in state if state[r] > nullmerge]
938 rebased = [r for r in state if state[r] > nullmerge]
939 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
939 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
940 if candidates:
940 if candidates:
941 return state[candidates.first()]
941 return state[candidates.first()]
942 else:
942 else:
943 return None
943 return None
944
944
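# --- Illustrative sketch (not part of the original module) -----------------
# What nearestrebased() computes, without a repository: among the already
# rebased revisions that are ancestors of the revision at hand, take the
# highest one and return its new location. Ancestry is passed in as an
# explicit set here; the real code asks the revset 'max(%ld and (::%d))'.
# The -2 sentinel stands in for the module's nullmerge constant.

def _nearest_rebased(ancestors_of_rev, state, nullmerge=-2):
    rebased = [r for r in state if state[r] > nullmerge]
    candidates = [r for r in rebased if r in ancestors_of_rev]
    if candidates:
        return state[max(candidates)]
    return None

# Example: the revision's ancestors are {1, 2, 5}; 2 and 5 were rebased to
# 11 and 14 respectively, so the nearest rebased ancestor now lives at 14.
# print(_nearest_rebased({1, 2, 5}, {2: 11, 5: 14}))   # -> 14
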
945 def _checkobsrebase(repo, ui, rebaseobsrevs, rebasesetrevs, rebaseobsskipped):
945 def _checkobsrebase(repo, ui, rebaseobsrevs, rebasesetrevs, rebaseobsskipped):
946 """
946 """
947 Abort if rebase will create divergence or rebase is noop because of markers
947 Abort if rebase will create divergence or rebase is noop because of markers
948
948
949 `rebaseobsrevs`: set of obsolete revisions in source
949 `rebaseobsrevs`: set of obsolete revisions in source
950 `rebasesetrevs`: set of revisions to be rebased from source
950 `rebasesetrevs`: set of revisions to be rebased from source
951 `rebaseobsskipped`: set of revisions from source skipped because they have
951 `rebaseobsskipped`: set of revisions from source skipped because they have
952 successors in destination
952 successors in destination
953 """
953 """
954 # Obsolete node with successors not in dest leads to divergence
954 # Obsolete node with successors not in dest leads to divergence
955 divergenceok = ui.configbool('experimental',
955 divergenceok = ui.configbool('experimental',
956 'allowdivergence')
956 'allowdivergence')
957 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
957 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
958
958
959 if divergencebasecandidates and not divergenceok:
959 if divergencebasecandidates and not divergenceok:
960 divhashes = (str(repo[r])
960 divhashes = (str(repo[r])
961 for r in divergencebasecandidates)
961 for r in divergencebasecandidates)
962 msg = _("this rebase will cause "
962 msg = _("this rebase will cause "
963 "divergences from: %s")
963 "divergences from: %s")
964 h = _("to force the rebase please set "
964 h = _("to force the rebase please set "
965 "experimental.allowdivergence=True")
965 "experimental.allowdivergence=True")
966 raise error.Abort(msg % (",".join(divhashes),), hint=h)
966 raise error.Abort(msg % (",".join(divhashes),), hint=h)
967
967
968 def defineparents(repo, rev, dest, state, destancestors,
968 def defineparents(repo, rev, dest, state, destancestors,
969 obsoletenotrebased):
969 obsoletenotrebased):
970 'Return the new parent relationship of the revision that will be rebased'
970 'Return the new parent relationship of the revision that will be rebased'
971 parents = repo[rev].parents()
971 parents = repo[rev].parents()
972 p1 = p2 = nullrev
972 p1 = p2 = nullrev
973 rp1 = None
973 rp1 = None
974
974
975 p1n = parents[0].rev()
975 p1n = parents[0].rev()
976 if p1n in destancestors:
976 if p1n in destancestors:
977 p1 = dest
977 p1 = dest
978 elif p1n in state:
978 elif p1n in state:
979 if state[p1n] == nullmerge:
979 if state[p1n] == nullmerge:
980 p1 = dest
980 p1 = dest
981 elif state[p1n] in revskipped:
981 elif state[p1n] in revskipped:
982 p1 = nearestrebased(repo, p1n, state)
982 p1 = nearestrebased(repo, p1n, state)
983 if p1 is None:
983 if p1 is None:
984 p1 = dest
984 p1 = dest
985 else:
985 else:
986 p1 = state[p1n]
986 p1 = state[p1n]
987 else: # p1n external
987 else: # p1n external
988 p1 = dest
988 p1 = dest
989 p2 = p1n
989 p2 = p1n
990
990
991 if len(parents) == 2 and parents[1].rev() not in destancestors:
991 if len(parents) == 2 and parents[1].rev() not in destancestors:
992 p2n = parents[1].rev()
992 p2n = parents[1].rev()
993 # interesting second parent
993 # interesting second parent
994 if p2n in state:
994 if p2n in state:
995 if p1 == dest: # p1n in destancestors or external
995 if p1 == dest: # p1n in destancestors or external
996 p1 = state[p2n]
996 p1 = state[p2n]
997 if p1 == revprecursor:
997 if p1 == revprecursor:
998 rp1 = obsoletenotrebased[p2n]
998 rp1 = obsoletenotrebased[p2n]
999 elif state[p2n] in revskipped:
999 elif state[p2n] in revskipped:
1000 p2 = nearestrebased(repo, p2n, state)
1000 p2 = nearestrebased(repo, p2n, state)
1001 if p2 is None:
1001 if p2 is None:
1002 # no ancestors rebased yet, detach
1002 # no ancestors rebased yet, detach
1003 p2 = dest
1003 p2 = dest
1004 else:
1004 else:
1005 p2 = state[p2n]
1005 p2 = state[p2n]
1006 else: # p2n external
1006 else: # p2n external
1007 if p2 != nullrev: # p1n external too => rev is a merged revision
1007 if p2 != nullrev: # p1n external too => rev is a merged revision
1008 raise error.Abort(_('cannot use revision %d as base, result '
1008 raise error.Abort(_('cannot use revision %d as base, result '
1009 'would have 3 parents') % rev)
1009 'would have 3 parents') % rev)
1010 p2 = p2n
1010 p2 = p2n
1011 repo.ui.debug(" future parents are %d and %d\n" %
1011 repo.ui.debug(" future parents are %d and %d\n" %
1012 (repo[rp1 or p1].rev(), repo[p2].rev()))
1012 (repo[rp1 or p1].rev(), repo[p2].rev()))
1013
1013
1014 if not any(p.rev() in state for p in parents):
1014 if not any(p.rev() in state for p in parents):
1015 # Case (1) root changeset of a non-detaching rebase set.
1015 # Case (1) root changeset of a non-detaching rebase set.
1016 # Let the merge mechanism find the base itself.
1016 # Let the merge mechanism find the base itself.
1017 base = None
1017 base = None
1018 elif not repo[rev].p2():
1018 elif not repo[rev].p2():
1019 # Case (2) detaching the node with a single parent, use this parent
1019 # Case (2) detaching the node with a single parent, use this parent
1020 base = repo[rev].p1().rev()
1020 base = repo[rev].p1().rev()
1021 else:
1021 else:
1022 # Assuming there is a p1, this is the case where there also is a p2.
1022 # Assuming there is a p1, this is the case where there also is a p2.
1023 # We are thus rebasing a merge and need to pick the right merge base.
1023 # We are thus rebasing a merge and need to pick the right merge base.
1024 #
1024 #
1025 # Imagine we have:
1025 # Imagine we have:
1026 # - M: current rebase revision in this step
1026 # - M: current rebase revision in this step
1027 # - A: one parent of M
1027 # - A: one parent of M
1028 # - B: other parent of M
1028 # - B: other parent of M
1029 # - D: destination of this merge step (p1 var)
1029 # - D: destination of this merge step (p1 var)
1030 #
1030 #
1031 # Consider the case where D is a descendant of A or B and the other is
1031 # Consider the case where D is a descendant of A or B and the other is
1032 # 'outside'. In this case, the right merge base is the D ancestor.
1032 # 'outside'. In this case, the right merge base is the D ancestor.
1033 #
1033 #
1034 # An informal proof, assuming A is 'outside' and B is the D ancestor:
1034 # An informal proof, assuming A is 'outside' and B is the D ancestor:
1035 #
1035 #
1036 # If we pick B as the base, the merge involves:
1036 # If we pick B as the base, the merge involves:
1037 # - changes from B to M (actual changeset payload)
1037 # - changes from B to M (actual changeset payload)
1038 # - changes from B to D (induced by rebase, as D is a rebased
1038 # - changes from B to D (induced by rebase, as D is a rebased
1039 # version of B)
1039 # version of B)
1040 # Which exactly represents the rebase operation.
1040 # Which exactly represents the rebase operation.
1041 #
1041 #
1042 # If we pick A as the base, the merge involves:
1042 # If we pick A as the base, the merge involves:
1043 # - changes from A to M (actual changeset payload)
1043 # - changes from A to M (actual changeset payload)
1044 # - changes from A to D (which includes changes between unrelated A and B
1044 # - changes from A to D (which includes changes between unrelated A and B
1045 # plus changes induced by rebase)
1045 # plus changes induced by rebase)
1046 # Which does not represent anything sensible and creates a lot of
1046 # Which does not represent anything sensible and creates a lot of
1047 # conflicts. A is thus not the right choice - B is.
1047 # conflicts. A is thus not the right choice - B is.
1048 #
1048 #
1049 # Note: The base found in this 'proof' is only correct in the specified
1049 # Note: The base found in this 'proof' is only correct in the specified
1050 # case. This base does not make sense if D is not a descendant of A or B
1050 # case. This base does not make sense if D is not a descendant of A or B
1051 # or if the other parent is not 'outside' (especially not if the other
1051 # or if the other parent is not 'outside' (especially not if the other
1052 # parent has been rebased). The current implementation does not
1052 # parent has been rebased). The current implementation does not
1053 # make it feasible to consider different cases separately. In these
1053 # make it feasible to consider different cases separately. In these
1054 # other cases we currently just leave it to the user to correctly
1054 # other cases we currently just leave it to the user to correctly
1055 # resolve an impossible merge using a wrong ancestor.
1055 # resolve an impossible merge using a wrong ancestor.
1056 #
1056 #
1057 # xx, p1 could be -4, and both parents could probably be -4...
1057 # xx, p1 could be -4, and both parents could probably be -4...
1058 for p in repo[rev].parents():
1058 for p in repo[rev].parents():
1059 if state.get(p.rev()) == p1:
1059 if state.get(p.rev()) == p1:
1060 base = p.rev()
1060 base = p.rev()
1061 break
1061 break
1062 else: # fallback when base not found
1062 else: # fallback when base not found
1063 base = None
1063 base = None
1064
1064
1065 # Raise because this function is called wrong (see issue 4106)
1065 # Raise because this function is called wrong (see issue 4106)
1066 raise AssertionError('no base found to rebase on '
1066 raise AssertionError('no base found to rebase on '
1067 '(defineparents called wrong)')
1067 '(defineparents called wrong)')
1068 return rp1 or p1, p2, base
1068 return rp1 or p1, p2, base
1069
1069
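# --- Illustrative sketch (not part of the original module) -----------------
# The merge-base selection argued for in the long comment above, reduced to
# plain data: when rebasing a merge, pick the parent whose already-rebased
# copy is the chosen destination p1, and fall back to None when no parent
# qualifies (the case the AssertionError above guards against). Revision
# numbers are made up.

def _pick_merge_base(parent_revs, state, p1):
    """parent_revs: parents of the merge being rebased; state: old -> new."""
    for p in parent_revs:
        if state.get(p) == p1:
            return p
    return None

# Example: rev 8 merges rev 5 (already rebased to 20) with external rev 3;
# rebasing 8 onto 20 should therefore use 5 as the merge base.
# print(_pick_merge_base([5, 3], {5: 20}, 20))   # -> 5
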
1070 def isagitpatch(repo, patchname):
1070 def isagitpatch(repo, patchname):
1071 'Return true if the given patch is in git format'
1071 'Return true if the given patch is in git format'
1072 mqpatch = os.path.join(repo.mq.path, patchname)
1072 mqpatch = os.path.join(repo.mq.path, patchname)
1073 for line in patch.linereader(file(mqpatch, 'rb')):
1073 for line in patch.linereader(file(mqpatch, 'rb')):
1074 if line.startswith('diff --git'):
1074 if line.startswith('diff --git'):
1075 return True
1075 return True
1076 return False
1076 return False
1077
1077
1078 def updatemq(repo, state, skipped, **opts):
1078 def updatemq(repo, state, skipped, **opts):
1079 'Update rebased mq patches - finalize and then import them'
1079 'Update rebased mq patches - finalize and then import them'
1080 mqrebase = {}
1080 mqrebase = {}
1081 mq = repo.mq
1081 mq = repo.mq
1082 original_series = mq.fullseries[:]
1082 original_series = mq.fullseries[:]
1083 skippedpatches = set()
1083 skippedpatches = set()
1084
1084
1085 for p in mq.applied:
1085 for p in mq.applied:
1086 rev = repo[p.node].rev()
1086 rev = repo[p.node].rev()
1087 if rev in state:
1087 if rev in state:
1088 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
1088 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
1089 (rev, p.name))
1089 (rev, p.name))
1090 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
1090 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
1091 else:
1091 else:
1092 # Applied but not rebased, not sure this should happen
1092 # Applied but not rebased, not sure this should happen
1093 skippedpatches.add(p.name)
1093 skippedpatches.add(p.name)
1094
1094
1095 if mqrebase:
1095 if mqrebase:
1096 mq.finish(repo, mqrebase.keys())
1096 mq.finish(repo, mqrebase.keys())
1097
1097
1098 # We must start import from the newest revision
1098 # We must start import from the newest revision
1099 for rev in sorted(mqrebase, reverse=True):
1099 for rev in sorted(mqrebase, reverse=True):
1100 if rev not in skipped:
1100 if rev not in skipped:
1101 name, isgit = mqrebase[rev]
1101 name, isgit = mqrebase[rev]
1102 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
1102 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
1103 (name, state[rev], repo[state[rev]]))
1103 (name, state[rev], repo[state[rev]]))
1104 mq.qimport(repo, (), patchname=name, git=isgit,
1104 mq.qimport(repo, (), patchname=name, git=isgit,
1105 rev=[str(state[rev])])
1105 rev=[str(state[rev])])
1106 else:
1106 else:
1107 # Rebased and skipped
1107 # Rebased and skipped
1108 skippedpatches.add(mqrebase[rev][0])
1108 skippedpatches.add(mqrebase[rev][0])
1109
1109
1110 # Patches were either applied and rebased and imported in
1110 # Patches were either applied and rebased and imported in
1111 # order, applied and removed or unapplied. Discard the removed
1111 # order, applied and removed or unapplied. Discard the removed
1112 # ones while preserving the original series order and guards.
1112 # ones while preserving the original series order and guards.
1113 newseries = [s for s in original_series
1113 newseries = [s for s in original_series
1114 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
1114 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
1115 mq.fullseries[:] = newseries
1115 mq.fullseries[:] = newseries
1116 mq.seriesdirty = True
1116 mq.seriesdirty = True
1117 mq.savedirty()
1117 mq.savedirty()
1118
1118
1119 def updatebookmarks(repo, destnode, nstate, originalbookmarks, tr):
1119 def updatebookmarks(repo, destnode, nstate, originalbookmarks, tr):
1120 'Move bookmarks to their correct changesets, and delete divergent ones'
1120 'Move bookmarks to their correct changesets, and delete divergent ones'
1121 marks = repo._bookmarks
1121 marks = repo._bookmarks
1122 for k, v in originalbookmarks.iteritems():
1122 for k, v in originalbookmarks.iteritems():
1123 if v in nstate:
1123 if v in nstate:
1124 # update the bookmarks for revs that have moved
1124 # update the bookmarks for revs that have moved
1125 marks[k] = nstate[v]
1125 marks[k] = nstate[v]
1126 bookmarks.deletedivergent(repo, [destnode], k)
1126 bookmarks.deletedivergent(repo, [destnode], k)
1127 marks.recordchange(tr)
1127 marks.recordchange(tr)
1128
1128
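# --- Illustrative sketch (not part of the original module) -----------------
# The bookmark move performed by updatebookmarks(), as a pure-dict
# transformation: any bookmark whose old target was rebased is pointed at the
# corresponding new node. The node strings are shortened placeholders, not
# real hashes, and divergent-bookmark cleanup is left out.

def _move_bookmarks(originalbookmarks, nstate):
    """originalbookmarks: name -> old node; nstate: old node -> new node."""
    moved = dict(originalbookmarks)
    for name, node in originalbookmarks.items():
        if node in nstate:
            moved[name] = nstate[node]
    return moved

# Example:
# print(_move_bookmarks({'feature': 'aaa', 'stable': 'bbb'}, {'aaa': 'ccc'}))
# -> {'feature': 'ccc', 'stable': 'bbb'}
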
1129 def storecollapsemsg(repo, collapsemsg):
1129 def storecollapsemsg(repo, collapsemsg):
1130 'Store the collapse message to allow recovery'
1130 'Store the collapse message to allow recovery'
1131 collapsemsg = collapsemsg or ''
1131 collapsemsg = collapsemsg or ''
1132 f = repo.vfs("last-message.txt", "w")
1132 f = repo.vfs("last-message.txt", "w")
1133 f.write("%s\n" % collapsemsg)
1133 f.write("%s\n" % collapsemsg)
1134 f.close()
1134 f.close()
1135
1135
1136 def clearcollapsemsg(repo):
1136 def clearcollapsemsg(repo):
1137 'Remove collapse message file'
1137 'Remove collapse message file'
1138 repo.vfs.unlinkpath("last-message.txt", ignoremissing=True)
1138 repo.vfs.unlinkpath("last-message.txt", ignoremissing=True)
1139
1139
1140 def restorecollapsemsg(repo, isabort):
1140 def restorecollapsemsg(repo, isabort):
1141 'Restore previously stored collapse message'
1141 'Restore previously stored collapse message'
1142 try:
1142 try:
1143 f = repo.vfs("last-message.txt")
1143 f = repo.vfs("last-message.txt")
1144 collapsemsg = f.readline().strip()
1144 collapsemsg = f.readline().strip()
1145 f.close()
1145 f.close()
1146 except IOError as err:
1146 except IOError as err:
1147 if err.errno != errno.ENOENT:
1147 if err.errno != errno.ENOENT:
1148 raise
1148 raise
1149 if isabort:
1149 if isabort:
1150 # Oh well, just abort like normal
1150 # Oh well, just abort like normal
1151 collapsemsg = ''
1151 collapsemsg = ''
1152 else:
1152 else:
1153 raise error.Abort(_('missing .hg/last-message.txt for rebase'))
1153 raise error.Abort(_('missing .hg/last-message.txt for rebase'))
1154 return collapsemsg
1154 return collapsemsg
1155
1155
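# --- Illustrative sketch (not part of the original module) -----------------
# The recovery-file protocol used by storecollapsemsg() / restorecollapsemsg()
# above, shown with plain file I/O in a temporary directory instead of the
# repository vfs: write the message followed by a newline, then read the
# first line back and strip it.

import os
import tempfile

def _roundtrip_collapse_msg(msg):
    tmpdir = tempfile.mkdtemp()
    path = os.path.join(tmpdir, 'last-message.txt')
    with open(path, 'w') as f:              # storecollapsemsg()
        f.write('%s\n' % (msg or ''))
    with open(path) as f:                   # restorecollapsemsg()
        restored = f.readline().strip()
    os.remove(path)                         # clearcollapsemsg()
    os.rmdir(tmpdir)
    return restored

# print(_roundtrip_collapse_msg('Collapsed revision'))  # -> 'Collapsed revision'
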
1156 def clearstatus(repo):
1156 def clearstatus(repo):
1157 'Remove the status files'
1157 'Remove the status files'
1158 _clearrebasesetvisibiliy(repo)
1158 _clearrebasesetvisibiliy(repo)
1159 repo.vfs.unlinkpath("rebasestate", ignoremissing=True)
1159 repo.vfs.unlinkpath("rebasestate", ignoremissing=True)
1160
1160
1161 def needupdate(repo, state):
1161 def needupdate(repo, state):
1162 '''check whether we should `update --clean` away from a merge, or if
1162 '''check whether we should `update --clean` away from a merge, or if
1163 somehow the working dir got forcibly updated, e.g. by older hg'''
1163 somehow the working dir got forcibly updated, e.g. by older hg'''
1164 parents = [p.rev() for p in repo[None].parents()]
1164 parents = [p.rev() for p in repo[None].parents()]
1165
1165
1166 # Are we in a merge state at all?
1166 # Are we in a merge state at all?
1167 if len(parents) < 2:
1167 if len(parents) < 2:
1168 return False
1168 return False
1169
1169
1170 # We should be standing on the first as-of-yet unrebased commit.
1170 # We should be standing on the first as-of-yet unrebased commit.
1171 firstunrebased = min([old for old, new in state.iteritems()
1171 firstunrebased = min([old for old, new in state.iteritems()
1172 if new == nullrev])
1172 if new == nullrev])
1173 if firstunrebased in parents:
1173 if firstunrebased in parents:
1174 return True
1174 return True
1175
1175
1176 return False
1176 return False
1177
1177
1178 def abort(repo, originalwd, dest, state, activebookmark=None):
1178 def abort(repo, originalwd, dest, state, activebookmark=None):
1179 '''Restore the repository to its original state. Additional args:
1179 '''Restore the repository to its original state. Additional args:
1180
1180
1181 activebookmark: the name of the bookmark that should be active after the
1181 activebookmark: the name of the bookmark that should be active after the
1182 restore'''
1182 restore'''
1183
1183
1184 try:
1184 try:
1185 # If the first commits in the rebased set get skipped during the rebase,
1185 # If the first commits in the rebased set get skipped during the rebase,
1186 # their values within the state mapping will be the dest rev id. The
1186 # their values within the state mapping will be the dest rev id. The
1187 # dstates list must not contain the dest rev (issue4896)
1187 # dstates list must not contain the dest rev (issue4896)
1188 dstates = [s for s in state.values() if s >= 0 and s != dest]
1188 dstates = [s for s in state.values() if s >= 0 and s != dest]
1189 immutable = [d for d in dstates if not repo[d].mutable()]
1189 immutable = [d for d in dstates if not repo[d].mutable()]
1190 cleanup = True
1190 cleanup = True
1191 if immutable:
1191 if immutable:
1192 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1192 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1193 % ', '.join(str(repo[r]) for r in immutable),
1193 % ', '.join(str(repo[r]) for r in immutable),
1194 hint=_("see 'hg help phases' for details"))
1194 hint=_("see 'hg help phases' for details"))
1195 cleanup = False
1195 cleanup = False
1196
1196
1197 descendants = set()
1197 descendants = set()
1198 if dstates:
1198 if dstates:
1199 descendants = set(repo.changelog.descendants(dstates))
1199 descendants = set(repo.changelog.descendants(dstates))
1200 if descendants - set(dstates):
1200 if descendants - set(dstates):
1201 repo.ui.warn(_("warning: new changesets detected on destination "
1201 repo.ui.warn(_("warning: new changesets detected on destination "
1202 "branch, can't strip\n"))
1202 "branch, can't strip\n"))
1203 cleanup = False
1203 cleanup = False
1204
1204
1205 if cleanup:
1205 if cleanup:
1206 shouldupdate = False
1206 shouldupdate = False
1207 rebased = filter(lambda x: x >= 0 and x != dest, state.values())
1207 rebased = filter(lambda x: x >= 0 and x != dest, state.values())
1208 if rebased:
1208 if rebased:
1209 strippoints = [
1209 strippoints = [
1210 c.node() for c in repo.set('roots(%ld)', rebased)]
1210 c.node() for c in repo.set('roots(%ld)', rebased)]
1211
1211
1212 updateifonnodes = set(rebased)
1212 updateifonnodes = set(rebased)
1213 updateifonnodes.add(dest)
1213 updateifonnodes.add(dest)
1214 updateifonnodes.add(originalwd)
1214 updateifonnodes.add(originalwd)
1215 shouldupdate = repo['.'].rev() in updateifonnodes
1215 shouldupdate = repo['.'].rev() in updateifonnodes
1216
1216
1217 # Update away from the rebase if necessary
1217 # Update away from the rebase if necessary
1218 if shouldupdate or needupdate(repo, state):
1218 if shouldupdate or needupdate(repo, state):
1219 mergemod.update(repo, originalwd, False, True)
1219 mergemod.update(repo, originalwd, False, True)
1220
1220
1221 # Strip from the first rebased revision
1221 # Strip from the first rebased revision
1222 if rebased:
1222 if rebased:
1223 # no backup of rebased cset versions needed
1223 # no backup of rebased cset versions needed
1224 repair.strip(repo.ui, repo, strippoints)
1224 repair.strip(repo.ui, repo, strippoints)
1225
1225
1226 if activebookmark and activebookmark in repo._bookmarks:
1226 if activebookmark and activebookmark in repo._bookmarks:
1227 bookmarks.activate(repo, activebookmark)
1227 bookmarks.activate(repo, activebookmark)
1228
1228
1229 finally:
1229 finally:
1230 clearstatus(repo)
1230 clearstatus(repo)
1231 clearcollapsemsg(repo)
1231 clearcollapsemsg(repo)
1232 repo.ui.warn(_('rebase aborted\n'))
1232 repo.ui.warn(_('rebase aborted\n'))
1233 return 0
1233 return 0
1234
1234
1235 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
1235 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
1236 '''Define which revisions are going to be rebased and where
1236 '''Define which revisions are going to be rebased and where
1237
1237
1238 repo: repo
1238 repo: repo
1239 dest: context
1239 dest: context
1240 rebaseset: set of rev
1240 rebaseset: set of rev
1241 '''
1241 '''
1242 originalwd = repo['.'].rev()
1242 originalwd = repo['.'].rev()
1243 _setrebasesetvisibility(repo, set(rebaseset) | {originalwd})
1243 _setrebasesetvisibility(repo, set(rebaseset) | {originalwd})
1244
1244
1245 # This check isn't strictly necessary, since mq detects commits over an
1245 # This check isn't strictly necessary, since mq detects commits over an
1246 # applied patch. But it prevents messing up the working directory when
1246 # applied patch. But it prevents messing up the working directory when
1247 # a partially completed rebase is blocked by mq.
1247 # a partially completed rebase is blocked by mq.
1248 if 'qtip' in repo.tags() and (dest.node() in
1248 if 'qtip' in repo.tags() and (dest.node() in
1249 [s.node for s in repo.mq.applied]):
1249 [s.node for s in repo.mq.applied]):
1250 raise error.Abort(_('cannot rebase onto an applied mq patch'))
1250 raise error.Abort(_('cannot rebase onto an applied mq patch'))
1251
1251
1252 roots = list(repo.set('roots(%ld)', rebaseset))
1252 roots = list(repo.set('roots(%ld)', rebaseset))
1253 if not roots:
1253 if not roots:
1254 raise error.Abort(_('no matching revisions'))
1254 raise error.Abort(_('no matching revisions'))
1255 roots.sort()
1255 roots.sort()
1256 state = dict.fromkeys(rebaseset, revtodo)
1256 state = dict.fromkeys(rebaseset, revtodo)
1257 detachset = set()
1257 detachset = set()
1258 emptyrebase = True
1258 emptyrebase = True
1259 for root in roots:
1259 for root in roots:
1260 commonbase = root.ancestor(dest)
1260 commonbase = root.ancestor(dest)
1261 if commonbase == root:
1261 if commonbase == root:
1262 raise error.Abort(_('source is ancestor of destination'))
1262 raise error.Abort(_('source is ancestor of destination'))
1263 if commonbase == dest:
1263 if commonbase == dest:
1264 wctx = repo[None]
1264 wctx = repo[None]
1265 if dest == wctx.p1():
1265 if dest == wctx.p1():
1266 # when rebasing to '.', it will use the current wd branch name
1266 # when rebasing to '.', it will use the current wd branch name
1267 samebranch = root.branch() == wctx.branch()
1267 samebranch = root.branch() == wctx.branch()
1268 else:
1268 else:
1269 samebranch = root.branch() == dest.branch()
1269 samebranch = root.branch() == dest.branch()
1270 if not collapse and samebranch and root in dest.children():
1270 if not collapse and samebranch and root in dest.children():
1271 # mark the revision as done by setting its new revision
1271 # mark the revision as done by setting its new revision
1272 # equal to its old (current) revision
1272 # equal to its old (current) revision
1273 state[root.rev()] = root.rev()
1273 state[root.rev()] = root.rev()
1274 repo.ui.debug('source is a child of destination\n')
1274 repo.ui.debug('source is a child of destination\n')
1275 continue
1275 continue
1276
1276
1277 emptyrebase = False
1277 emptyrebase = False
1278 repo.ui.debug('rebase onto %s starting from %s\n' % (dest, root))
1278 repo.ui.debug('rebase onto %s starting from %s\n' % (dest, root))
1279 # Rebase tries to turn <dest> into a parent of <root> while
1279 # Rebase tries to turn <dest> into a parent of <root> while
1280 # preserving the number of parents of rebased changesets:
1280 # preserving the number of parents of rebased changesets:
1281 #
1281 #
1282 # - A changeset with a single parent will always be rebased as a
1282 # - A changeset with a single parent will always be rebased as a
1283 # changeset with a single parent.
1283 # changeset with a single parent.
1284 #
1284 #
1285 # - A merge will be rebased as a merge unless its parents are both
1285 # - A merge will be rebased as a merge unless its parents are both
1286 # ancestors of <dest> or are themselves in the rebased set and
1286 # ancestors of <dest> or are themselves in the rebased set and
1287 # pruned while rebased.
1287 # pruned while rebased.
1288 #
1288 #
1289 # If one parent of <root> is an ancestor of <dest>, the rebased
1289 # If one parent of <root> is an ancestor of <dest>, the rebased
1290 # version of this parent will be <dest>. This is always true with
1290 # version of this parent will be <dest>. This is always true with
1291 # --base option.
1291 # --base option.
1292 #
1292 #
1293 # Otherwise, we need to *replace* the original parents with
1293 # Otherwise, we need to *replace* the original parents with
1294 # <dest>. This "detaches" the rebased set from its former location
1294 # <dest>. This "detaches" the rebased set from its former location
1295 # and rebases it onto <dest>. Changes introduced by ancestors of
1295 # and rebases it onto <dest>. Changes introduced by ancestors of
1296 # <root> not common with <dest> (the detachset, marked as
1296 # <root> not common with <dest> (the detachset, marked as
1297 # nullmerge) are "removed" from the rebased changesets.
1297 # nullmerge) are "removed" from the rebased changesets.
1298 #
1298 #
1299 # - If <root> has a single parent, set it to <dest>.
1299 # - If <root> has a single parent, set it to <dest>.
1300 #
1300 #
1301 # - If <root> is a merge, we cannot decide which parent to
1301 # - If <root> is a merge, we cannot decide which parent to
1302 # replace, the rebase operation is not clearly defined.
1302 # replace, the rebase operation is not clearly defined.
1303 #
1303 #
1304 # The table below sums up this behavior:
1304 # The table below sums up this behavior:
1305 #
1305 #
1306 # +------------------+----------------------+-------------------------+
1306 # +------------------+----------------------+-------------------------+
1307 # | | one parent | merge |
1307 # | | one parent | merge |
1308 # +------------------+----------------------+-------------------------+
1308 # +------------------+----------------------+-------------------------+
1309 # | parent in | new parent is <dest> | parents in ::<dest> are |
1309 # | parent in | new parent is <dest> | parents in ::<dest> are |
1310 # | ::<dest> | | remapped to <dest> |
1310 # | ::<dest> | | remapped to <dest> |
1311 # +------------------+----------------------+-------------------------+
1311 # +------------------+----------------------+-------------------------+
1312 # | unrelated source | new parent is <dest> | ambiguous, abort |
1312 # | unrelated source | new parent is <dest> | ambiguous, abort |
1313 # +------------------+----------------------+-------------------------+
1313 # +------------------+----------------------+-------------------------+
1314 #
1314 #
1315 # The actual abort is handled by `defineparents`
1315 # The actual abort is handled by `defineparents`
1316 if len(root.parents()) <= 1:
1316 if len(root.parents()) <= 1:
1317 # ancestors of <root> not ancestors of <dest>
1317 # ancestors of <root> not ancestors of <dest>
1318 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1318 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1319 [root.rev()]))
1319 [root.rev()]))
1320 if emptyrebase:
1320 if emptyrebase:
1321 return None
1321 return None
1322 for rev in sorted(state):
1322 for rev in sorted(state):
1323 parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
1323 parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
1324 # if all parents of this revision are done, then so is this revision
1324 # if all parents of this revision are done, then so is this revision
1325 if parents and all((state.get(p) == p for p in parents)):
1325 if parents and all((state.get(p) == p for p in parents)):
1326 state[rev] = rev
1326 state[rev] = rev
1327 for r in detachset:
1327 for r in detachset:
1328 if r not in state:
1328 if r not in state:
1329 state[r] = nullmerge
1329 state[r] = nullmerge
1330 if len(roots) > 1:
1330 if len(roots) > 1:
1331 # If we have multiple roots, we may have "holes" in the rebase set.
1331 # If we have multiple roots, we may have "holes" in the rebase set.
1332 # Rebase roots that descend from those "holes" should not be detached as
1332 # Rebase roots that descend from those "holes" should not be detached as
1333 # other roots are. We use the special `revignored` to inform rebase that
1333 # other roots are. We use the special `revignored` to inform rebase that
1334 # the revision should be ignored but that `defineparents` should search
1334 # the revision should be ignored but that `defineparents` should search
1335 # a rebase destination that makes sense regarding the rebased topology.
1335 # a rebase destination that makes sense regarding the rebased topology.
1336 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1336 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1337 for ignored in set(rebasedomain) - set(rebaseset):
1337 for ignored in set(rebasedomain) - set(rebaseset):
1338 state[ignored] = revignored
1338 state[ignored] = revignored
1339 for r in obsoletenotrebased:
1339 for r in obsoletenotrebased:
1340 if obsoletenotrebased[r] is None:
1340 if obsoletenotrebased[r] is None:
1341 state[r] = revpruned
1341 state[r] = revpruned
1342 else:
1342 else:
1343 state[r] = revprecursor
1343 state[r] = revprecursor
1344 return originalwd, dest.rev(), state
1344 return originalwd, dest.rev(), state
1345
1345
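# --- Illustrative sketch (not part of the original module) -----------------
# The shape of the state mapping buildstate() returns, with made-up revision
# numbers. The sentinel names below match the constants referenced throughout
# this file (revtodo, nullmerge, revignored, revprecursor, revpruned); the
# numeric values here are placeholders for the sketch only, the real constants
# are defined earlier in the module.

REVTODO, NULLMERGE, REVIGNORED, REVPRECURSOR, REVPRUNED = -1, -2, -3, -4, -5

example_state = {
    4: REVTODO,       # still to be rebased
    5: 5,             # already a child of the destination, nothing to do
    6: NULLMERGE,     # detached ancestor whose changes are dropped
    7: REVIGNORED,    # "hole" between multiple roots, ignored but tracked
    8: REVPRECURSOR,  # obsolete, a successor already sits in the destination
    9: REVPRUNED,     # obsolete with no successor (plain prune)
}

# As the rebase proceeds, REVTODO entries are overwritten with the new
# revision numbers, which is what clearrebased() below reads back.
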
1346 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1346 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1347 """dispose of rebased revision at the end of the rebase
1347 """dispose of rebased revision at the end of the rebase
1348
1348
1349 If `collapsedas` is not None, the rebase was a collapse whose result is the
1349 If `collapsedas` is not None, the rebase was a collapse whose result is the
1350 `collapsedas` node."""
1350 `collapsedas` node."""
1351 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1351 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1352 markers = []
1352 markers = []
1353 for rev, newrev in sorted(state.items()):
1353 for rev, newrev in sorted(state.items()):
1354 if newrev >= 0 and newrev != rev:
1354 if newrev >= 0 and newrev != rev:
1355 if rev in skipped:
1355 if rev in skipped:
1356 succs = ()
1356 succs = ()
1357 elif collapsedas is not None:
1357 elif collapsedas is not None:
1358 succs = (repo[collapsedas],)
1358 succs = (repo[collapsedas],)
1359 else:
1359 else:
1360 succs = (repo[newrev],)
1360 succs = (repo[newrev],)
1361 markers.append((repo[rev], succs))
1361 markers.append((repo[rev], succs))
1362 if markers:
1362 if markers:
1363 obsolete.createmarkers(repo, markers)
1363 obsolete.createmarkers(repo, markers, operation='rebase')
1364 else:
1364 else:
1365 rebased = [rev for rev in state
1365 rebased = [rev for rev in state
1366 if state[rev] > nullmerge and state[rev] != rev]
1366 if state[rev] > nullmerge and state[rev] != rev]
1367 if rebased:
1367 if rebased:
1368 stripped = []
1368 stripped = []
1369 for root in repo.set('roots(%ld)', rebased):
1369 for root in repo.set('roots(%ld)', rebased):
1370 if set(repo.changelog.descendants([root.rev()])) - set(state):
1370 if set(repo.changelog.descendants([root.rev()])) - set(state):
1371 ui.warn(_("warning: new changesets detected "
1371 ui.warn(_("warning: new changesets detected "
1372 "on source branch, not stripping\n"))
1372 "on source branch, not stripping\n"))
1373 else:
1373 else:
1374 stripped.append(root.node())
1374 stripped.append(root.node())
1375 if stripped:
1375 if stripped:
1376 # backup the old csets by default
1376 # backup the old csets by default
1377 repair.strip(ui, repo, stripped, "all")
1377 repair.strip(ui, repo, stripped, "all")
1378
1378
1379
1379
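# --- Illustrative sketch (not part of the original module) -----------------
# How clearrebased() above assembles precursor -> successors pairs from the
# state mapping when obsolescence markers are enabled, using bare revision
# numbers instead of changectx objects. obsolete.createmarkers(repo, markers,
# operation='rebase') then records those pairs, tagging each marker with the
# operation that created it. The revisions and the `skipped` set are made up.

def _marker_pairs(state, skipped, collapsedas=None):
    pairs = []
    for rev, newrev in sorted(state.items()):
        if newrev >= 0 and newrev != rev:
            if rev in skipped:
                succs = ()                   # dropped: marker with no successor
            elif collapsedas is not None:
                succs = (collapsedas,)       # everything folds into one node
            else:
                succs = (newrev,)
            pairs.append((rev, succs))
    return pairs

# Example: revs 4 and 5 rebased to 10 and 11, rev 6 skipped (became empty).
# print(_marker_pairs({4: 10, 5: 11, 6: 10}, {6}))
# -> [(4, (10,)), (5, (11,)), (6, ())]
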
1380 def pullrebase(orig, ui, repo, *args, **opts):
1380 def pullrebase(orig, ui, repo, *args, **opts):
1381 'Call rebase after pull if the latter has been invoked with --rebase'
1381 'Call rebase after pull if the latter has been invoked with --rebase'
1382 ret = None
1382 ret = None
1383 if opts.get('rebase'):
1383 if opts.get('rebase'):
1384 if ui.configbool('commands', 'rebase.requiredest'):
1384 if ui.configbool('commands', 'rebase.requiredest'):
1385 msg = _('rebase destination required by configuration')
1385 msg = _('rebase destination required by configuration')
1386 hint = _('use hg pull followed by hg rebase -d DEST')
1386 hint = _('use hg pull followed by hg rebase -d DEST')
1387 raise error.Abort(msg, hint=hint)
1387 raise error.Abort(msg, hint=hint)
1388
1388
1389 wlock = lock = None
1389 wlock = lock = None
1390 try:
1390 try:
1391 wlock = repo.wlock()
1391 wlock = repo.wlock()
1392 lock = repo.lock()
1392 lock = repo.lock()
1393 if opts.get('update'):
1393 if opts.get('update'):
1394 del opts['update']
1394 del opts['update']
1395 ui.debug('--update and --rebase are not compatible, ignoring '
1395 ui.debug('--update and --rebase are not compatible, ignoring '
1396 'the update flag\n')
1396 'the update flag\n')
1397
1397
1398 cmdutil.checkunfinished(repo)
1398 cmdutil.checkunfinished(repo)
1399 cmdutil.bailifchanged(repo, hint=_('cannot pull with rebase: '
1399 cmdutil.bailifchanged(repo, hint=_('cannot pull with rebase: '
1400 'please commit or shelve your changes first'))
1400 'please commit or shelve your changes first'))
1401
1401
1402 revsprepull = len(repo)
1402 revsprepull = len(repo)
1403 origpostincoming = commands.postincoming
1403 origpostincoming = commands.postincoming
1404 def _dummy(*args, **kwargs):
1404 def _dummy(*args, **kwargs):
1405 pass
1405 pass
1406 commands.postincoming = _dummy
1406 commands.postincoming = _dummy
1407 try:
1407 try:
1408 ret = orig(ui, repo, *args, **opts)
1408 ret = orig(ui, repo, *args, **opts)
1409 finally:
1409 finally:
1410 commands.postincoming = origpostincoming
1410 commands.postincoming = origpostincoming
1411 revspostpull = len(repo)
1411 revspostpull = len(repo)
1412 if revspostpull > revsprepull:
1412 if revspostpull > revsprepull:
1413 # the --rev option from pull conflicts with rebase's own --rev,
1413 # the --rev option from pull conflicts with rebase's own --rev,
1414 # so drop it
1414 # so drop it
1415 if 'rev' in opts:
1415 if 'rev' in opts:
1416 del opts['rev']
1416 del opts['rev']
1417 # positional argument from pull conflicts with rebase's own
1417 # positional argument from pull conflicts with rebase's own
1418 # --source.
1418 # --source.
1419 if 'source' in opts:
1419 if 'source' in opts:
1420 del opts['source']
1420 del opts['source']
1421 # revsprepull is the len of the repo, not revnum of tip.
1421 # revsprepull is the len of the repo, not revnum of tip.
1422 destspace = list(repo.changelog.revs(start=revsprepull))
1422 destspace = list(repo.changelog.revs(start=revsprepull))
1423 opts['_destspace'] = destspace
1423 opts['_destspace'] = destspace
1424 try:
1424 try:
1425 rebase(ui, repo, **opts)
1425 rebase(ui, repo, **opts)
1426 except error.NoMergeDestAbort:
1426 except error.NoMergeDestAbort:
1427 # we can maybe update instead
1427 # we can maybe update instead
1428 rev, _a, _b = destutil.destupdate(repo)
1428 rev, _a, _b = destutil.destupdate(repo)
1429 if rev == repo['.'].rev():
1429 if rev == repo['.'].rev():
1430 ui.status(_('nothing to rebase\n'))
1430 ui.status(_('nothing to rebase\n'))
1431 else:
1431 else:
1432 ui.status(_('nothing to rebase - updating instead\n'))
1432 ui.status(_('nothing to rebase - updating instead\n'))
1433 # not passing argument to get the bare update behavior
1433 # not passing argument to get the bare update behavior
1434 # with warning and trumpets
1434 # with warning and trumpets
1435 commands.update(ui, repo)
1435 commands.update(ui, repo)
1436 finally:
1436 finally:
1437 release(lock, wlock)
1437 release(lock, wlock)
1438 else:
1438 else:
1439 if opts.get('tool'):
1439 if opts.get('tool'):
1440 raise error.Abort(_('--tool can only be used with --rebase'))
1440 raise error.Abort(_('--tool can only be used with --rebase'))
1441 ret = orig(ui, repo, *args, **opts)
1441 ret = orig(ui, repo, *args, **opts)
1442
1442
1443 return ret
1443 return ret
1444
1444
1445 def _setrebasesetvisibility(repo, revs):
1445 def _setrebasesetvisibility(repo, revs):
1446 """store the currently rebased set on the repo object
1446 """store the currently rebased set on the repo object
1447
1447
1448 This is used by another function to prevent rebased revisions from becoming
1448 This is used by another function to prevent rebased revisions from becoming
1449 hidden (see issue4504)"""
1449 hidden (see issue4504)"""
1450 repo = repo.unfiltered()
1450 repo = repo.unfiltered()
1451 repo._rebaseset = revs
1451 repo._rebaseset = revs
1452 # invalidate cache if visibility changes
1452 # invalidate cache if visibility changes
1453 hiddens = repo.filteredrevcache.get('visible', set())
1453 hiddens = repo.filteredrevcache.get('visible', set())
1454 if revs & hiddens:
1454 if revs & hiddens:
1455 repo.invalidatevolatilesets()
1455 repo.invalidatevolatilesets()
1456
1456
1457 def _clearrebasesetvisibiliy(repo):
1457 def _clearrebasesetvisibiliy(repo):
1458 """remove rebaseset data from the repo"""
1458 """remove rebaseset data from the repo"""
1459 repo = repo.unfiltered()
1459 repo = repo.unfiltered()
1460 if '_rebaseset' in vars(repo):
1460 if '_rebaseset' in vars(repo):
1461 del repo._rebaseset
1461 del repo._rebaseset
1462
1462
1463 def _rebasedvisible(orig, repo):
1463 def _rebasedvisible(orig, repo):
1464 """ensure rebased revs stay visible (see issue4504)"""
1464 """ensure rebased revs stay visible (see issue4504)"""
1465 blockers = orig(repo)
1465 blockers = orig(repo)
1466 blockers.update(getattr(repo, '_rebaseset', ()))
1466 blockers.update(getattr(repo, '_rebaseset', ()))
1467 return blockers
1467 return blockers
1468
1468
1469 def _filterobsoleterevs(repo, revs):
1469 def _filterobsoleterevs(repo, revs):
1470 """returns a set of the obsolete revisions in revs"""
1470 """returns a set of the obsolete revisions in revs"""
1471 return set(r for r in revs if repo[r].obsolete())
1471 return set(r for r in revs if repo[r].obsolete())
1472
1472
1473 def _computeobsoletenotrebased(repo, rebaseobsrevs, dest):
1473 def _computeobsoletenotrebased(repo, rebaseobsrevs, dest):
1474 """return a mapping obsolete => successor for all obsolete nodes to be
1474 """return a mapping obsolete => successor for all obsolete nodes to be
1475 rebased that have a successor in the destination
1475 rebased that have a successor in the destination
1476
1476
1477 obsolete => None entries in the mapping indicate nodes with no successor"""
1477 obsolete => None entries in the mapping indicate nodes with no successor"""
1478 obsoletenotrebased = {}
1478 obsoletenotrebased = {}
1479
1479
1480 # Build a mapping successor => obsolete nodes for the obsolete
1480 # Build a mapping successor => obsolete nodes for the obsolete
1481 # nodes to be rebased
1481 # nodes to be rebased
1482 allsuccessors = {}
1482 allsuccessors = {}
1483 cl = repo.changelog
1483 cl = repo.changelog
1484 for r in rebaseobsrevs:
1484 for r in rebaseobsrevs:
1485 node = cl.node(r)
1485 node = cl.node(r)
1486 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1486 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1487 try:
1487 try:
1488 allsuccessors[cl.rev(s)] = cl.rev(node)
1488 allsuccessors[cl.rev(s)] = cl.rev(node)
1489 except LookupError:
1489 except LookupError:
1490 pass
1490 pass
1491
1491
1492 if allsuccessors:
1492 if allsuccessors:
1493 # Look for successors of obsolete nodes to be rebased among
1493 # Look for successors of obsolete nodes to be rebased among
1494 # the ancestors of dest
1494 # the ancestors of dest
1495 ancs = cl.ancestors([repo[dest].rev()],
1495 ancs = cl.ancestors([repo[dest].rev()],
1496 stoprev=min(allsuccessors),
1496 stoprev=min(allsuccessors),
1497 inclusive=True)
1497 inclusive=True)
1498 for s in allsuccessors:
1498 for s in allsuccessors:
1499 if s in ancs:
1499 if s in ancs:
1500 obsoletenotrebased[allsuccessors[s]] = s
1500 obsoletenotrebased[allsuccessors[s]] = s
1501 elif (s == allsuccessors[s] and
1501 elif (s == allsuccessors[s] and
1502 allsuccessors.values().count(s) == 1):
1502 allsuccessors.values().count(s) == 1):
1503 # plain prune
1503 # plain prune
1504 obsoletenotrebased[s] = None
1504 obsoletenotrebased[s] = None
1505
1505
1506 return obsoletenotrebased
1506 return obsoletenotrebased
1507
1507
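# --- Illustrative sketch (not part of the original module) -----------------
# The shape of the mapping _computeobsoletenotrebased() returns, with made-up
# revision numbers: an obsolete rev whose successor already lives in the
# destination maps to that successor, while a plain prune maps to None.
# buildstate() above turns these entries into revprecursor / revpruned state
# values respectively.

def _classify_obsolete(obsoletenotrebased):
    precursors = {old: succ for old, succ in obsoletenotrebased.items()
                  if succ is not None}
    pruned = sorted(old for old, succ in obsoletenotrebased.items()
                    if succ is None)
    return precursors, pruned

# Example: rev 4 already has a successor (rev 9) in the destination, rev 6
# was pruned outright.
# print(_classify_obsolete({4: 9, 6: None}))   # -> ({4: 9}, [6])
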
1508 def summaryhook(ui, repo):
1508 def summaryhook(ui, repo):
1509 if not repo.vfs.exists('rebasestate'):
1509 if not repo.vfs.exists('rebasestate'):
1510 return
1510 return
1511 try:
1511 try:
1512 rbsrt = rebaseruntime(repo, ui, {})
1512 rbsrt = rebaseruntime(repo, ui, {})
1513 rbsrt.restorestatus()
1513 rbsrt.restorestatus()
1514 state = rbsrt.state
1514 state = rbsrt.state
1515 except error.RepoLookupError:
1515 except error.RepoLookupError:
1516 # i18n: column positioning for "hg summary"
1516 # i18n: column positioning for "hg summary"
1517 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1517 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1518 ui.write(msg)
1518 ui.write(msg)
1519 return
1519 return
1520 numrebased = len([i for i in state.itervalues() if i >= 0])
1520 numrebased = len([i for i in state.itervalues() if i >= 0])
1521 # i18n: column positioning for "hg summary"
1521 # i18n: column positioning for "hg summary"
1522 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1522 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1523 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1523 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1524 ui.label(_('%d remaining'), 'rebase.remaining') %
1524 ui.label(_('%d remaining'), 'rebase.remaining') %
1525 (len(state) - numrebased)))
1525 (len(state) - numrebased)))
1526
1526
1527 def uisetup(ui):
1527 def uisetup(ui):
1528 # Replace pull with a decorator to provide --rebase option
1528 # Replace pull with a decorator to provide --rebase option
1529 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1529 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1530 entry[1].append(('', 'rebase', None,
1530 entry[1].append(('', 'rebase', None,
1531 _("rebase working directory to branch head")))
1531 _("rebase working directory to branch head")))
1532 entry[1].append(('t', 'tool', '',
1532 entry[1].append(('t', 'tool', '',
1533 _("specify merge tool for rebase")))
1533 _("specify merge tool for rebase")))
1534 cmdutil.summaryhooks.add('rebase', summaryhook)
1534 cmdutil.summaryhooks.add('rebase', summaryhook)
1535 cmdutil.unfinishedstates.append(
1535 cmdutil.unfinishedstates.append(
1536 ['rebasestate', False, False, _('rebase in progress'),
1536 ['rebasestate', False, False, _('rebase in progress'),
1537 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1537 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1538 cmdutil.afterresolvedstates.append(
1538 cmdutil.afterresolvedstates.append(
1539 ['rebasestate', _('hg rebase --continue')])
1539 ['rebasestate', _('hg rebase --continue')])
1540 # ensure rebased revs are not hidden
1540 # ensure rebased revs are not hidden
1541 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1541 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
@@ -1,3487 +1,3487
1 # cmdutil.py - help for command processing in mercurial
1 # cmdutil.py - help for command processing in mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import itertools
11 import itertools
12 import os
12 import os
13 import re
13 import re
14 import tempfile
14 import tempfile
15
15
16 from .i18n import _
16 from .i18n import _
17 from .node import (
17 from .node import (
18 bin,
18 bin,
19 hex,
19 hex,
20 nullid,
20 nullid,
21 nullrev,
21 nullrev,
22 short,
22 short,
23 )
23 )
24
24
25 from . import (
25 from . import (
26 bookmarks,
26 bookmarks,
27 changelog,
27 changelog,
28 copies,
28 copies,
29 crecord as crecordmod,
29 crecord as crecordmod,
30 encoding,
30 encoding,
31 error,
31 error,
32 formatter,
32 formatter,
33 graphmod,
33 graphmod,
34 lock as lockmod,
34 lock as lockmod,
35 match as matchmod,
35 match as matchmod,
36 obsolete,
36 obsolete,
37 patch,
37 patch,
38 pathutil,
38 pathutil,
39 phases,
39 phases,
40 pycompat,
40 pycompat,
41 repair,
41 repair,
42 revlog,
42 revlog,
43 revset,
43 revset,
44 scmutil,
44 scmutil,
45 smartset,
45 smartset,
46 templatekw,
46 templatekw,
47 templater,
47 templater,
48 util,
48 util,
49 vfs as vfsmod,
49 vfs as vfsmod,
50 )
50 )
51 stringio = util.stringio
51 stringio = util.stringio
52
52
53 # special string such that everything below this line will be ingored in the
53 # special string such that everything below this line will be ignored in the
53 # special string such that everything below this line will be ignored in the
54 # editor text
55 _linebelow = "^HG: ------------------------ >8 ------------------------$"
55 _linebelow = "^HG: ------------------------ >8 ------------------------$"
56
56
57 def ishunk(x):
57 def ishunk(x):
58 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
58 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
59 return isinstance(x, hunkclasses)
59 return isinstance(x, hunkclasses)
60
60
61 def newandmodified(chunks, originalchunks):
61 def newandmodified(chunks, originalchunks):
62 newlyaddedandmodifiedfiles = set()
62 newlyaddedandmodifiedfiles = set()
63 for chunk in chunks:
63 for chunk in chunks:
64 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
64 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
65 originalchunks:
65 originalchunks:
66 newlyaddedandmodifiedfiles.add(chunk.header.filename())
66 newlyaddedandmodifiedfiles.add(chunk.header.filename())
67 return newlyaddedandmodifiedfiles
67 return newlyaddedandmodifiedfiles
68
68
69 def parsealiases(cmd):
69 def parsealiases(cmd):
70 return cmd.lstrip("^").split("|")
70 return cmd.lstrip("^").split("|")
71
71
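# --- Illustrative sketch (not part of the original module) -----------------
# What parsealiases() above computes, restated standalone: split a command
# table key into its accepted names, stripping the historical '^' marker that
# flagged commands listed in the short help.

def _parse_aliases(cmd):
    return cmd.lstrip("^").split("|")

# print(_parse_aliases('^commit|ci'))   # -> ['commit', 'ci']
# print(_parse_aliases('rebase'))       # -> ['rebase']
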
72 def setupwrapcolorwrite(ui):
72 def setupwrapcolorwrite(ui):
73 # wrap ui.write so diff output can be labeled/colorized
73 # wrap ui.write so diff output can be labeled/colorized
74 def wrapwrite(orig, *args, **kw):
74 def wrapwrite(orig, *args, **kw):
75 label = kw.pop('label', '')
75 label = kw.pop('label', '')
76 for chunk, l in patch.difflabel(lambda: args):
76 for chunk, l in patch.difflabel(lambda: args):
77 orig(chunk, label=label + l)
77 orig(chunk, label=label + l)
78
78
79 oldwrite = ui.write
79 oldwrite = ui.write
80 def wrap(*args, **kwargs):
80 def wrap(*args, **kwargs):
81 return wrapwrite(oldwrite, *args, **kwargs)
81 return wrapwrite(oldwrite, *args, **kwargs)
82 setattr(ui, 'write', wrap)
82 setattr(ui, 'write', wrap)
83 return oldwrite
83 return oldwrite
84
84
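# --- Illustrative sketch (not part of the original module) -----------------
# The wrap-and-restore pattern used by setupwrapcolorwrite() above and by
# recordfilter() below: swap ui.write for a decorated version, keep the
# original, and put it back in a finally block. A stand-in object replaces
# the real ui here.

class _FakeUI(object):
    def write(self, msg):
        print(msg)

def _wrap_write(ui):
    oldwrite = ui.write
    def wrapped(msg):
        oldwrite('[label] ' + msg)
    ui.write = wrapped
    return oldwrite

_ui = _FakeUI()
_old = _wrap_write(_ui)
try:
    _ui.write('hello')       # -> '[label] hello'
finally:
    _ui.write = _old         # restore, mirroring the finally in recordfilter()
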
85 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
85 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
86 if usecurses:
86 if usecurses:
87 if testfile:
87 if testfile:
88 recordfn = crecordmod.testdecorator(testfile,
88 recordfn = crecordmod.testdecorator(testfile,
89 crecordmod.testchunkselector)
89 crecordmod.testchunkselector)
90 else:
90 else:
91 recordfn = crecordmod.chunkselector
91 recordfn = crecordmod.chunkselector
92
92
93 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
93 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
94
94
95 else:
95 else:
96 return patch.filterpatch(ui, originalhunks, operation)
96 return patch.filterpatch(ui, originalhunks, operation)
97
97
98 def recordfilter(ui, originalhunks, operation=None):
98 def recordfilter(ui, originalhunks, operation=None):
99 """ Prompts the user to filter the originalhunks and return a list of
99 """ Prompts the user to filter the originalhunks and return a list of
100 selected hunks.
100 selected hunks.
101 *operation* is used for to build ui messages to indicate the user what
101 *operation* is used to build ui messages to indicate to the user what
101 *operation* is used to build ui messages to indicate to the user what
102 kind of filtering they are doing: reverting, committing, shelving, etc.
103 (see patch.filterpatch).
103 (see patch.filterpatch).
104 """
104 """
105 usecurses = crecordmod.checkcurses(ui)
105 usecurses = crecordmod.checkcurses(ui)
106 testfile = ui.config('experimental', 'crecordtest', None)
106 testfile = ui.config('experimental', 'crecordtest', None)
107 oldwrite = setupwrapcolorwrite(ui)
107 oldwrite = setupwrapcolorwrite(ui)
108 try:
108 try:
109 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
109 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
110 testfile, operation)
110 testfile, operation)
111 finally:
111 finally:
112 ui.write = oldwrite
112 ui.write = oldwrite
113 return newchunks, newopts
113 return newchunks, newopts
114
114
115 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
115 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
116 filterfn, *pats, **opts):
116 filterfn, *pats, **opts):
117 from . import merge as mergemod
117 from . import merge as mergemod
118 opts = pycompat.byteskwargs(opts)
118 opts = pycompat.byteskwargs(opts)
119 if not ui.interactive():
119 if not ui.interactive():
120 if cmdsuggest:
120 if cmdsuggest:
121 msg = _('running non-interactively, use %s instead') % cmdsuggest
121 msg = _('running non-interactively, use %s instead') % cmdsuggest
122 else:
122 else:
123 msg = _('running non-interactively')
123 msg = _('running non-interactively')
124 raise error.Abort(msg)
124 raise error.Abort(msg)
125
125
126 # make sure username is set before going interactive
126 # make sure username is set before going interactive
127 if not opts.get('user'):
127 if not opts.get('user'):
128 ui.username() # raise exception, username not provided
128 ui.username() # raise exception, username not provided
129
129
130 def recordfunc(ui, repo, message, match, opts):
130 def recordfunc(ui, repo, message, match, opts):
131 """This is generic record driver.
131 """This is generic record driver.
132
132
133 Its job is to interactively filter local changes, and
133 Its job is to interactively filter local changes, and
134 accordingly prepare working directory into a state in which the
134 accordingly prepare working directory into a state in which the
135 job can be delegated to a non-interactive commit command such as
135 job can be delegated to a non-interactive commit command such as
136 'commit' or 'qrefresh'.
136 'commit' or 'qrefresh'.
137
137
138 After the actual job is done by non-interactive command, the
138 After the actual job is done by non-interactive command, the
139 working directory is restored to its original state.
139 working directory is restored to its original state.
140
140
141 In the end we'll record interesting changes, and everything else
141 In the end we'll record interesting changes, and everything else
142 will be left in place, so the user can continue working.
142 will be left in place, so the user can continue working.
143 """
143 """
144
144
145 checkunfinished(repo, commit=True)
145 checkunfinished(repo, commit=True)
146 wctx = repo[None]
146 wctx = repo[None]
147 merge = len(wctx.parents()) > 1
147 merge = len(wctx.parents()) > 1
148 if merge:
148 if merge:
149 raise error.Abort(_('cannot partially commit a merge '
149 raise error.Abort(_('cannot partially commit a merge '
150 '(use "hg commit" instead)'))
150 '(use "hg commit" instead)'))
151
151
152 def fail(f, msg):
152 def fail(f, msg):
153 raise error.Abort('%s: %s' % (f, msg))
153 raise error.Abort('%s: %s' % (f, msg))
154
154
155 force = opts.get('force')
155 force = opts.get('force')
156 if not force:
156 if not force:
157 vdirs = []
157 vdirs = []
158 match.explicitdir = vdirs.append
158 match.explicitdir = vdirs.append
159 match.bad = fail
159 match.bad = fail
160
160
161 status = repo.status(match=match)
161 status = repo.status(match=match)
162 if not force:
162 if not force:
163 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
163 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
164 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
164 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
165 diffopts.nodates = True
165 diffopts.nodates = True
166 diffopts.git = True
166 diffopts.git = True
167 diffopts.showfunc = True
167 diffopts.showfunc = True
168 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
168 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
169 originalchunks = patch.parsepatch(originaldiff)
169 originalchunks = patch.parsepatch(originaldiff)
170
170
171 # 1. filter patch, since we are intending to apply subset of it
171 # 1. filter patch, since we are intending to apply subset of it
172 try:
172 try:
173 chunks, newopts = filterfn(ui, originalchunks)
173 chunks, newopts = filterfn(ui, originalchunks)
174 except patch.PatchError as err:
174 except patch.PatchError as err:
175 raise error.Abort(_('error parsing patch: %s') % err)
175 raise error.Abort(_('error parsing patch: %s') % err)
176 opts.update(newopts)
176 opts.update(newopts)
177
177
178 # We need to keep a backup of files that have been newly added and
178 # We need to keep a backup of files that have been newly added and
179 # modified during the recording process because there is a previous
179 # modified during the recording process because there is a previous
180 # version without the edit in the workdir
180 # version without the edit in the workdir
181 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
181 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
182 contenders = set()
182 contenders = set()
183 for h in chunks:
183 for h in chunks:
184 try:
184 try:
185 contenders.update(set(h.files()))
185 contenders.update(set(h.files()))
186 except AttributeError:
186 except AttributeError:
187 pass
187 pass
188
188
189 changed = status.modified + status.added + status.removed
189 changed = status.modified + status.added + status.removed
190 newfiles = [f for f in changed if f in contenders]
190 newfiles = [f for f in changed if f in contenders]
191 if not newfiles:
191 if not newfiles:
192 ui.status(_('no changes to record\n'))
192 ui.status(_('no changes to record\n'))
193 return 0
193 return 0
194
194
195 modified = set(status.modified)
195 modified = set(status.modified)
196
196
197 # 2. backup changed files, so we can restore them in the end
197 # 2. backup changed files, so we can restore them in the end
198
198
199 if backupall:
199 if backupall:
200 tobackup = changed
200 tobackup = changed
201 else:
201 else:
202 tobackup = [f for f in newfiles if f in modified or f in \
202 tobackup = [f for f in newfiles if f in modified or f in \
203 newlyaddedandmodifiedfiles]
203 newlyaddedandmodifiedfiles]
204 backups = {}
204 backups = {}
205 if tobackup:
205 if tobackup:
206 backupdir = repo.vfs.join('record-backups')
206 backupdir = repo.vfs.join('record-backups')
207 try:
207 try:
208 os.mkdir(backupdir)
208 os.mkdir(backupdir)
209 except OSError as err:
209 except OSError as err:
210 if err.errno != errno.EEXIST:
210 if err.errno != errno.EEXIST:
211 raise
211 raise
212 try:
212 try:
213 # backup continues
213 # backup continues
214 for f in tobackup:
214 for f in tobackup:
215 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
215 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
216 dir=backupdir)
216 dir=backupdir)
217 os.close(fd)
217 os.close(fd)
218 ui.debug('backup %r as %r\n' % (f, tmpname))
218 ui.debug('backup %r as %r\n' % (f, tmpname))
219 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
219 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
220 backups[f] = tmpname
220 backups[f] = tmpname
221
221
222 fp = stringio()
222 fp = stringio()
223 for c in chunks:
223 for c in chunks:
224 fname = c.filename()
224 fname = c.filename()
225 if fname in backups:
225 if fname in backups:
226 c.write(fp)
226 c.write(fp)
227 dopatch = fp.tell()
227 dopatch = fp.tell()
228 fp.seek(0)
228 fp.seek(0)
229
229
230 # 2.5 optionally review / modify patch in text editor
230 # 2.5 optionally review / modify patch in text editor
231 if opts.get('review', False):
231 if opts.get('review', False):
232 patchtext = (crecordmod.diffhelptext
232 patchtext = (crecordmod.diffhelptext
233 + crecordmod.patchhelptext
233 + crecordmod.patchhelptext
234 + fp.read())
234 + fp.read())
235 reviewedpatch = ui.edit(patchtext, "",
235 reviewedpatch = ui.edit(patchtext, "",
236 extra={"suffix": ".diff"},
236 extra={"suffix": ".diff"},
237 repopath=repo.path)
237 repopath=repo.path)
238 fp.truncate(0)
238 fp.truncate(0)
239 fp.write(reviewedpatch)
239 fp.write(reviewedpatch)
240 fp.seek(0)
240 fp.seek(0)
241
241
242 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
242 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
243 # 3a. apply filtered patch to clean repo (clean)
243 # 3a. apply filtered patch to clean repo (clean)
244 if backups:
244 if backups:
245 # Equivalent to hg.revert
245 # Equivalent to hg.revert
246 m = scmutil.matchfiles(repo, backups.keys())
246 m = scmutil.matchfiles(repo, backups.keys())
247 mergemod.update(repo, repo.dirstate.p1(),
247 mergemod.update(repo, repo.dirstate.p1(),
248 False, True, matcher=m)
248 False, True, matcher=m)
249
249
250 # 3b. (apply)
250 # 3b. (apply)
251 if dopatch:
251 if dopatch:
252 try:
252 try:
253 ui.debug('applying patch\n')
253 ui.debug('applying patch\n')
254 ui.debug(fp.getvalue())
254 ui.debug(fp.getvalue())
255 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
255 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
256 except patch.PatchError as err:
256 except patch.PatchError as err:
257 raise error.Abort(str(err))
257 raise error.Abort(str(err))
258 del fp
258 del fp
259
259
260 # 4. We prepared working directory according to filtered
260 # 4. We prepared working directory according to filtered
261 # patch. Now is the time to delegate the job to
261 # patch. Now is the time to delegate the job to
262 # commit/qrefresh or the like!
262 # commit/qrefresh or the like!
263
263
264 # Make all of the pathnames absolute.
264 # Make all of the pathnames absolute.
265 newfiles = [repo.wjoin(nf) for nf in newfiles]
265 newfiles = [repo.wjoin(nf) for nf in newfiles]
266 return commitfunc(ui, repo, *newfiles, **opts)
266 return commitfunc(ui, repo, *newfiles, **opts)
267 finally:
267 finally:
268 # 5. finally restore backed-up files
268 # 5. finally restore backed-up files
269 try:
269 try:
270 dirstate = repo.dirstate
270 dirstate = repo.dirstate
271 for realname, tmpname in backups.iteritems():
271 for realname, tmpname in backups.iteritems():
272 ui.debug('restoring %r to %r\n' % (tmpname, realname))
272 ui.debug('restoring %r to %r\n' % (tmpname, realname))
273
273
274 if dirstate[realname] == 'n':
274 if dirstate[realname] == 'n':
275 # without normallookup, restoring timestamp
275 # without normallookup, restoring timestamp
276 # may cause partially committed files
276 # may cause partially committed files
277 # to be treated as unmodified
277 # to be treated as unmodified
278 dirstate.normallookup(realname)
278 dirstate.normallookup(realname)
279
279
280 # copystat=True here and above is a hack to trick any
281 # editors that have f open into thinking we haven't modified it.
282 #
283 # Also note that this is racy, as an editor could notice the
284 # file's mtime before we've finished writing it.
285 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
285 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
286 os.unlink(tmpname)
286 os.unlink(tmpname)
287 if tobackup:
287 if tobackup:
288 os.rmdir(backupdir)
288 os.rmdir(backupdir)
289 except OSError:
289 except OSError:
290 pass
290 pass
291
291
292 def recordinwlock(ui, repo, message, match, opts):
292 def recordinwlock(ui, repo, message, match, opts):
293 with repo.wlock():
293 with repo.wlock():
294 return recordfunc(ui, repo, message, match, opts)
294 return recordfunc(ui, repo, message, match, opts)
295
295
296 return commit(ui, repo, recordinwlock, pats, opts)
296 return commit(ui, repo, recordinwlock, pats, opts)
297
297
298 def findpossible(cmd, table, strict=False):
298 def findpossible(cmd, table, strict=False):
299 """
299 """
300 Return cmd -> (aliases, command table entry)
300 Return cmd -> (aliases, command table entry)
301 for each matching command.
301 for each matching command.
302 Return debug commands (or their aliases) only if no normal command matches.
302 Return debug commands (or their aliases) only if no normal command matches.
303 """
303 """
304 choice = {}
304 choice = {}
305 debugchoice = {}
305 debugchoice = {}
306
306
307 if cmd in table:
307 if cmd in table:
308 # short-circuit exact matches, "log" alias beats "^log|history"
308 # short-circuit exact matches, "log" alias beats "^log|history"
309 keys = [cmd]
309 keys = [cmd]
310 else:
310 else:
311 keys = table.keys()
311 keys = table.keys()
312
312
313 allcmds = []
313 allcmds = []
314 for e in keys:
314 for e in keys:
315 aliases = parsealiases(e)
315 aliases = parsealiases(e)
316 allcmds.extend(aliases)
316 allcmds.extend(aliases)
317 found = None
317 found = None
318 if cmd in aliases:
318 if cmd in aliases:
319 found = cmd
319 found = cmd
320 elif not strict:
320 elif not strict:
321 for a in aliases:
321 for a in aliases:
322 if a.startswith(cmd):
322 if a.startswith(cmd):
323 found = a
323 found = a
324 break
324 break
325 if found is not None:
325 if found is not None:
326 if aliases[0].startswith("debug") or found.startswith("debug"):
326 if aliases[0].startswith("debug") or found.startswith("debug"):
327 debugchoice[found] = (aliases, table[e])
327 debugchoice[found] = (aliases, table[e])
328 else:
328 else:
329 choice[found] = (aliases, table[e])
329 choice[found] = (aliases, table[e])
330
330
331 if not choice and debugchoice:
331 if not choice and debugchoice:
332 choice = debugchoice
332 choice = debugchoice
333
333
334 return choice, allcmds
334 return choice, allcmds
335
335
336 def findcmd(cmd, table, strict=True):
336 def findcmd(cmd, table, strict=True):
337 """Return (aliases, command table entry) for command string."""
337 """Return (aliases, command table entry) for command string."""
338 choice, allcmds = findpossible(cmd, table, strict)
338 choice, allcmds = findpossible(cmd, table, strict)
339
339
340 if cmd in choice:
340 if cmd in choice:
341 return choice[cmd]
341 return choice[cmd]
342
342
343 if len(choice) > 1:
343 if len(choice) > 1:
344 clist = choice.keys()
344 clist = choice.keys()
345 clist.sort()
345 clist.sort()
346 raise error.AmbiguousCommand(cmd, clist)
346 raise error.AmbiguousCommand(cmd, clist)
347
347
348 if choice:
348 if choice:
349 return choice.values()[0]
349 return choice.values()[0]
350
350
351 raise error.UnknownCommand(cmd, allcmds)
351 raise error.UnknownCommand(cmd, allcmds)
352
352
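# Example (a minimal sketch; 'table' stands for a command table such as
# commands.table, keyed by alias strings like '^status|st'):
#
#     findcmd('status', table)                # exact alias match
#     findcmd('stat', table, strict=False)    # prefix match -> 'status' entry,
#                                             # or AmbiguousCommand if several match
#     findcmd('bogus', table)                 # raises UnknownCommand with all names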
353 def findrepo(p):
353 def findrepo(p):
354 while not os.path.isdir(os.path.join(p, ".hg")):
354 while not os.path.isdir(os.path.join(p, ".hg")):
355 oldp, p = p, os.path.dirname(p)
355 oldp, p = p, os.path.dirname(p)
356 if p == oldp:
356 if p == oldp:
357 return None
357 return None
358
358
359 return p
359 return p
360
360
361 def bailifchanged(repo, merge=True, hint=None):
362 """ enforce the precondition that the working directory must be clean.
363
364 'merge' can be set to False if a pending uncommitted merge should be
365 ignored (such as when 'update --check' runs).
366
367 'hint' is the usual hint given to the Abort exception.
368 """
369
369
370 if merge and repo.dirstate.p2() != nullid:
370 if merge and repo.dirstate.p2() != nullid:
371 raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
371 raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
372 modified, added, removed, deleted = repo.status()[:4]
372 modified, added, removed, deleted = repo.status()[:4]
373 if modified or added or removed or deleted:
373 if modified or added or removed or deleted:
374 raise error.Abort(_('uncommitted changes'), hint=hint)
374 raise error.Abort(_('uncommitted changes'), hint=hint)
375 ctx = repo[None]
375 ctx = repo[None]
376 for s in sorted(ctx.substate):
376 for s in sorted(ctx.substate):
377 ctx.sub(s).bailifchanged(hint=hint)
377 ctx.sub(s).bailifchanged(hint=hint)
378
378
379 def logmessage(ui, opts):
380 """ get the log message according to the -m and -l options """
381 message = opts.get('message')
381 message = opts.get('message')
382 logfile = opts.get('logfile')
382 logfile = opts.get('logfile')
383
383
384 if message and logfile:
384 if message and logfile:
385 raise error.Abort(_('options --message and --logfile are mutually '
385 raise error.Abort(_('options --message and --logfile are mutually '
386 'exclusive'))
386 'exclusive'))
387 if not message and logfile:
387 if not message and logfile:
388 try:
388 try:
389 if logfile == '-':
389 if logfile == '-':
390 message = ui.fin.read()
390 message = ui.fin.read()
391 else:
391 else:
392 message = '\n'.join(util.readfile(logfile).splitlines())
392 message = '\n'.join(util.readfile(logfile).splitlines())
393 except IOError as inst:
393 except IOError as inst:
394 raise error.Abort(_("can't read commit message '%s': %s") %
394 raise error.Abort(_("can't read commit message '%s': %s") %
395 (logfile, inst.strerror))
395 (logfile, inst.strerror))
396 return message
396 return message
397
397
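# Example (a minimal sketch of how the option dict is interpreted; the file
# name and messages shown are illustrative):
#
#     logmessage(ui, {'message': 'fix bug', 'logfile': None})  # -> 'fix bug'
#     logmessage(ui, {'message': '', 'logfile': 'msg.txt'})    # -> contents of msg.txt
#     logmessage(ui, {'message': 'x', 'logfile': 'msg.txt'})   # -> raises error.Abort
#
# A logfile of '-' reads the commit message from ui.fin (standard input).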
398 def mergeeditform(ctxorbool, baseformname):
399 """return the appropriate editform name (referencing a committemplate)
400
401 'ctxorbool' is either a ctx to be committed, or a bool indicating whether
402 a merge is being committed.
403
404 This returns baseformname with '.merge' appended if it is a merge,
405 otherwise '.normal' is appended.
406 """
407 if isinstance(ctxorbool, bool):
407 if isinstance(ctxorbool, bool):
408 if ctxorbool:
408 if ctxorbool:
409 return baseformname + ".merge"
409 return baseformname + ".merge"
410 elif 1 < len(ctxorbool.parents()):
410 elif 1 < len(ctxorbool.parents()):
411 return baseformname + ".merge"
411 return baseformname + ".merge"
412
412
413 return baseformname + ".normal"
413 return baseformname + ".normal"
414
414
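# Example (a minimal sketch, mirroring the call made in tryimportone below):
#
#     mergeeditform(repo[None], 'import.normal')
#         # -> 'import.normal.merge'  if the working context has two parents
#         # -> 'import.normal.normal' otherwise
#     mergeeditform(True, 'commit.normal')   # -> 'commit.normal.merge'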
415 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
416 editform='', **opts):
417 """get the appropriate commit message editor according to the '--edit' option
418
419 'finishdesc' is a function to be called with the edited commit message
420 (= 'description' of the new changeset) just after editing, but
421 before checking emptiness. It should return the actual text to be
422 stored into history. This allows the description to be changed before
423 storing.
424
425 'extramsg' is an extra message to be shown in the editor instead of
426 the 'Leave message empty to abort commit' line. The 'HG: ' prefix and EOL
427 are automatically added.
428
429 'editform' is a dot-separated list of names, to distinguish
430 the purpose of commit text editing.
431
432 'getcommiteditor' returns 'commitforceeditor' regardless of
433 'edit', if one of 'finishdesc' or 'extramsg' is specified, because
434 they are specific to usage in MQ.
435 """
436 if edit or finishdesc or extramsg:
436 if edit or finishdesc or extramsg:
437 return lambda r, c, s: commitforceeditor(r, c, s,
437 return lambda r, c, s: commitforceeditor(r, c, s,
438 finishdesc=finishdesc,
438 finishdesc=finishdesc,
439 extramsg=extramsg,
439 extramsg=extramsg,
440 editform=editform)
440 editform=editform)
441 elif editform:
441 elif editform:
442 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
442 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
443 else:
443 else:
444 return commiteditor
444 return commiteditor
445
445
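# Example (a minimal sketch; the first call mirrors the one in tryimportone
# below, the others are illustrative):
#
#     editor = getcommiteditor(editform='import.bypass')
#     editor = getcommiteditor(edit=True, editform='commit.normal.normal')
#     editor = getcommiteditor(extramsg='custom hint')   # forces commitforceeditor
#
# The returned callable is later invoked with (repo, ctx, subs), matching the
# lambda signatures above.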
446 def loglimit(opts):
446 def loglimit(opts):
447 """get the log limit according to option -l/--limit"""
447 """get the log limit according to option -l/--limit"""
448 limit = opts.get('limit')
448 limit = opts.get('limit')
449 if limit:
449 if limit:
450 try:
450 try:
451 limit = int(limit)
451 limit = int(limit)
452 except ValueError:
452 except ValueError:
453 raise error.Abort(_('limit must be a positive integer'))
453 raise error.Abort(_('limit must be a positive integer'))
454 if limit <= 0:
454 if limit <= 0:
455 raise error.Abort(_('limit must be positive'))
455 raise error.Abort(_('limit must be positive'))
456 else:
456 else:
457 limit = None
457 limit = None
458 return limit
458 return limit
459
459
460 def makefilename(repo, pat, node, desc=None,
460 def makefilename(repo, pat, node, desc=None,
461 total=None, seqno=None, revwidth=None, pathname=None):
461 total=None, seqno=None, revwidth=None, pathname=None):
462 node_expander = {
462 node_expander = {
463 'H': lambda: hex(node),
463 'H': lambda: hex(node),
464 'R': lambda: str(repo.changelog.rev(node)),
464 'R': lambda: str(repo.changelog.rev(node)),
465 'h': lambda: short(node),
465 'h': lambda: short(node),
466 'm': lambda: re.sub('[^\w]', '_', str(desc))
466 'm': lambda: re.sub('[^\w]', '_', str(desc))
467 }
467 }
468 expander = {
468 expander = {
469 '%': lambda: '%',
469 '%': lambda: '%',
470 'b': lambda: os.path.basename(repo.root),
470 'b': lambda: os.path.basename(repo.root),
471 }
471 }
472
472
473 try:
473 try:
474 if node:
474 if node:
475 expander.update(node_expander)
475 expander.update(node_expander)
476 if node:
476 if node:
477 expander['r'] = (lambda:
477 expander['r'] = (lambda:
478 str(repo.changelog.rev(node)).zfill(revwidth or 0))
478 str(repo.changelog.rev(node)).zfill(revwidth or 0))
479 if total is not None:
479 if total is not None:
480 expander['N'] = lambda: str(total)
480 expander['N'] = lambda: str(total)
481 if seqno is not None:
481 if seqno is not None:
482 expander['n'] = lambda: str(seqno)
482 expander['n'] = lambda: str(seqno)
483 if total is not None and seqno is not None:
483 if total is not None and seqno is not None:
484 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
484 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
485 if pathname is not None:
485 if pathname is not None:
486 expander['s'] = lambda: os.path.basename(pathname)
486 expander['s'] = lambda: os.path.basename(pathname)
487 expander['d'] = lambda: os.path.dirname(pathname) or '.'
487 expander['d'] = lambda: os.path.dirname(pathname) or '.'
488 expander['p'] = lambda: pathname
488 expander['p'] = lambda: pathname
489
489
490 newname = []
490 newname = []
491 patlen = len(pat)
491 patlen = len(pat)
492 i = 0
492 i = 0
493 while i < patlen:
493 while i < patlen:
494 c = pat[i:i + 1]
494 c = pat[i:i + 1]
495 if c == '%':
495 if c == '%':
496 i += 1
496 i += 1
497 c = pat[i:i + 1]
497 c = pat[i:i + 1]
498 c = expander[c]()
498 c = expander[c]()
499 newname.append(c)
499 newname.append(c)
500 i += 1
500 i += 1
501 return ''.join(newname)
501 return ''.join(newname)
502 except KeyError as inst:
502 except KeyError as inst:
503 raise error.Abort(_("invalid format spec '%%%s' in output filename") %
503 raise error.Abort(_("invalid format spec '%%%s' in output filename") %
504 inst.args[0])
504 inst.args[0])
505
505
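# Example (a minimal sketch; 'repo' and 'node' are assumed to be a local
# repository and a changeset node):
#
#     makefilename(repo, 'hg-%h-%n-of-%N.patch', node, seqno=2, total=10)
#         # -> 'hg-<short hash of node>-02-of-10.patch'
#
# '%n' is zero-padded to the width of 'total'; an unknown spec such as '%x'
# raises error.Abort.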
506 class _unclosablefile(object):
506 class _unclosablefile(object):
507 def __init__(self, fp):
507 def __init__(self, fp):
508 self._fp = fp
508 self._fp = fp
509
509
510 def close(self):
510 def close(self):
511 pass
511 pass
512
512
513 def __iter__(self):
513 def __iter__(self):
514 return iter(self._fp)
514 return iter(self._fp)
515
515
516 def __getattr__(self, attr):
516 def __getattr__(self, attr):
517 return getattr(self._fp, attr)
517 return getattr(self._fp, attr)
518
518
519 def __enter__(self):
519 def __enter__(self):
520 return self
520 return self
521
521
522 def __exit__(self, exc_type, exc_value, exc_tb):
522 def __exit__(self, exc_type, exc_value, exc_tb):
523 pass
523 pass
524
524
525 def makefileobj(repo, pat, node=None, desc=None, total=None,
525 def makefileobj(repo, pat, node=None, desc=None, total=None,
526 seqno=None, revwidth=None, mode='wb', modemap=None,
526 seqno=None, revwidth=None, mode='wb', modemap=None,
527 pathname=None):
527 pathname=None):
528
528
529 writable = mode not in ('r', 'rb')
529 writable = mode not in ('r', 'rb')
530
530
531 if not pat or pat == '-':
531 if not pat or pat == '-':
532 if writable:
532 if writable:
533 fp = repo.ui.fout
533 fp = repo.ui.fout
534 else:
534 else:
535 fp = repo.ui.fin
535 fp = repo.ui.fin
536 return _unclosablefile(fp)
536 return _unclosablefile(fp)
537 if util.safehasattr(pat, 'write') and writable:
537 if util.safehasattr(pat, 'write') and writable:
538 return pat
538 return pat
539 if util.safehasattr(pat, 'read') and 'r' in mode:
539 if util.safehasattr(pat, 'read') and 'r' in mode:
540 return pat
540 return pat
541 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
541 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
542 if modemap is not None:
542 if modemap is not None:
543 mode = modemap.get(fn, mode)
543 mode = modemap.get(fn, mode)
544 if mode == 'wb':
544 if mode == 'wb':
545 modemap[fn] = 'ab'
545 modemap[fn] = 'ab'
546 return open(fn, mode)
546 return open(fn, mode)
547
547
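# Example (a minimal sketch): a pattern of '-' (or an empty pattern) maps to
# the ui's stdout/stdin, wrapped so that close() is a no-op; any other
# pattern is expanded by makefilename and opened as a regular file.
#
#     fp = makefileobj(repo, '-', node)            # writes go to repo.ui.fout
#     fp = makefileobj(repo, 'hg-%h.patch', node)  # opens 'hg-<short hash>.patch' in 'wb'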
548 def openrevlog(repo, cmd, file_, opts):
548 def openrevlog(repo, cmd, file_, opts):
549 """opens the changelog, manifest, a filelog or a given revlog"""
549 """opens the changelog, manifest, a filelog or a given revlog"""
550 cl = opts['changelog']
550 cl = opts['changelog']
551 mf = opts['manifest']
551 mf = opts['manifest']
552 dir = opts['dir']
552 dir = opts['dir']
553 msg = None
553 msg = None
554 if cl and mf:
554 if cl and mf:
555 msg = _('cannot specify --changelog and --manifest at the same time')
555 msg = _('cannot specify --changelog and --manifest at the same time')
556 elif cl and dir:
556 elif cl and dir:
557 msg = _('cannot specify --changelog and --dir at the same time')
557 msg = _('cannot specify --changelog and --dir at the same time')
558 elif cl or mf or dir:
558 elif cl or mf or dir:
559 if file_:
559 if file_:
560 msg = _('cannot specify filename with --changelog or --manifest')
560 msg = _('cannot specify filename with --changelog or --manifest')
561 elif not repo:
561 elif not repo:
562 msg = _('cannot specify --changelog or --manifest or --dir '
562 msg = _('cannot specify --changelog or --manifest or --dir '
563 'without a repository')
563 'without a repository')
564 if msg:
564 if msg:
565 raise error.Abort(msg)
565 raise error.Abort(msg)
566
566
567 r = None
567 r = None
568 if repo:
568 if repo:
569 if cl:
569 if cl:
570 r = repo.unfiltered().changelog
570 r = repo.unfiltered().changelog
571 elif dir:
571 elif dir:
572 if 'treemanifest' not in repo.requirements:
572 if 'treemanifest' not in repo.requirements:
573 raise error.Abort(_("--dir can only be used on repos with "
573 raise error.Abort(_("--dir can only be used on repos with "
574 "treemanifest enabled"))
574 "treemanifest enabled"))
575 dirlog = repo.manifestlog._revlog.dirlog(dir)
575 dirlog = repo.manifestlog._revlog.dirlog(dir)
576 if len(dirlog):
576 if len(dirlog):
577 r = dirlog
577 r = dirlog
578 elif mf:
578 elif mf:
579 r = repo.manifestlog._revlog
579 r = repo.manifestlog._revlog
580 elif file_:
580 elif file_:
581 filelog = repo.file(file_)
581 filelog = repo.file(file_)
582 if len(filelog):
582 if len(filelog):
583 r = filelog
583 r = filelog
584 if not r:
584 if not r:
585 if not file_:
585 if not file_:
586 raise error.CommandError(cmd, _('invalid arguments'))
586 raise error.CommandError(cmd, _('invalid arguments'))
587 if not os.path.isfile(file_):
587 if not os.path.isfile(file_):
588 raise error.Abort(_("revlog '%s' not found") % file_)
588 raise error.Abort(_("revlog '%s' not found") % file_)
589 r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False),
589 r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False),
590 file_[:-2] + ".i")
590 file_[:-2] + ".i")
591 return r
591 return r
592
592
593 def copy(ui, repo, pats, opts, rename=False):
593 def copy(ui, repo, pats, opts, rename=False):
594 # called with the repo lock held
594 # called with the repo lock held
595 #
595 #
596 # hgsep => pathname that uses "/" to separate directories
596 # hgsep => pathname that uses "/" to separate directories
597 # ossep => pathname that uses os.sep to separate directories
597 # ossep => pathname that uses os.sep to separate directories
598 cwd = repo.getcwd()
598 cwd = repo.getcwd()
599 targets = {}
599 targets = {}
600 after = opts.get("after")
600 after = opts.get("after")
601 dryrun = opts.get("dry_run")
601 dryrun = opts.get("dry_run")
602 wctx = repo[None]
602 wctx = repo[None]
603
603
604 def walkpat(pat):
604 def walkpat(pat):
605 srcs = []
605 srcs = []
606 if after:
606 if after:
607 badstates = '?'
607 badstates = '?'
608 else:
608 else:
609 badstates = '?r'
609 badstates = '?r'
610 m = scmutil.match(repo[None], [pat], opts, globbed=True)
610 m = scmutil.match(repo[None], [pat], opts, globbed=True)
611 for abs in repo.walk(m):
611 for abs in repo.walk(m):
612 state = repo.dirstate[abs]
612 state = repo.dirstate[abs]
613 rel = m.rel(abs)
613 rel = m.rel(abs)
614 exact = m.exact(abs)
614 exact = m.exact(abs)
615 if state in badstates:
615 if state in badstates:
616 if exact and state == '?':
616 if exact and state == '?':
617 ui.warn(_('%s: not copying - file is not managed\n') % rel)
617 ui.warn(_('%s: not copying - file is not managed\n') % rel)
618 if exact and state == 'r':
618 if exact and state == 'r':
619 ui.warn(_('%s: not copying - file has been marked for'
619 ui.warn(_('%s: not copying - file has been marked for'
620 ' remove\n') % rel)
620 ' remove\n') % rel)
621 continue
621 continue
622 # abs: hgsep
622 # abs: hgsep
623 # rel: ossep
623 # rel: ossep
624 srcs.append((abs, rel, exact))
624 srcs.append((abs, rel, exact))
625 return srcs
625 return srcs
626
626
627 # abssrc: hgsep
627 # abssrc: hgsep
628 # relsrc: ossep
628 # relsrc: ossep
629 # otarget: ossep
629 # otarget: ossep
630 def copyfile(abssrc, relsrc, otarget, exact):
630 def copyfile(abssrc, relsrc, otarget, exact):
631 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
631 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
632 if '/' in abstarget:
632 if '/' in abstarget:
633 # We cannot normalize abstarget itself, as this would prevent
634 # case-only renames, like a => A.
635 abspath, absname = abstarget.rsplit('/', 1)
635 abspath, absname = abstarget.rsplit('/', 1)
636 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
636 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
637 reltarget = repo.pathto(abstarget, cwd)
637 reltarget = repo.pathto(abstarget, cwd)
638 target = repo.wjoin(abstarget)
638 target = repo.wjoin(abstarget)
639 src = repo.wjoin(abssrc)
639 src = repo.wjoin(abssrc)
640 state = repo.dirstate[abstarget]
640 state = repo.dirstate[abstarget]
641
641
642 scmutil.checkportable(ui, abstarget)
642 scmutil.checkportable(ui, abstarget)
643
643
644 # check for collisions
644 # check for collisions
645 prevsrc = targets.get(abstarget)
645 prevsrc = targets.get(abstarget)
646 if prevsrc is not None:
646 if prevsrc is not None:
647 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
647 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
648 (reltarget, repo.pathto(abssrc, cwd),
648 (reltarget, repo.pathto(abssrc, cwd),
649 repo.pathto(prevsrc, cwd)))
649 repo.pathto(prevsrc, cwd)))
650 return
650 return
651
651
652 # check for overwrites
652 # check for overwrites
653 exists = os.path.lexists(target)
653 exists = os.path.lexists(target)
654 samefile = False
654 samefile = False
655 if exists and abssrc != abstarget:
655 if exists and abssrc != abstarget:
656 if (repo.dirstate.normalize(abssrc) ==
656 if (repo.dirstate.normalize(abssrc) ==
657 repo.dirstate.normalize(abstarget)):
657 repo.dirstate.normalize(abstarget)):
658 if not rename:
658 if not rename:
659 ui.warn(_("%s: can't copy - same file\n") % reltarget)
659 ui.warn(_("%s: can't copy - same file\n") % reltarget)
660 return
660 return
661 exists = False
661 exists = False
662 samefile = True
662 samefile = True
663
663
664 if not after and exists or after and state in 'mn':
664 if not after and exists or after and state in 'mn':
665 if not opts['force']:
665 if not opts['force']:
666 if state in 'mn':
666 if state in 'mn':
667 msg = _('%s: not overwriting - file already committed\n')
667 msg = _('%s: not overwriting - file already committed\n')
668 if after:
668 if after:
669 flags = '--after --force'
669 flags = '--after --force'
670 else:
670 else:
671 flags = '--force'
671 flags = '--force'
672 if rename:
672 if rename:
673 hint = _('(hg rename %s to replace the file by '
673 hint = _('(hg rename %s to replace the file by '
674 'recording a rename)\n') % flags
674 'recording a rename)\n') % flags
675 else:
675 else:
676 hint = _('(hg copy %s to replace the file by '
676 hint = _('(hg copy %s to replace the file by '
677 'recording a copy)\n') % flags
677 'recording a copy)\n') % flags
678 else:
678 else:
679 msg = _('%s: not overwriting - file exists\n')
679 msg = _('%s: not overwriting - file exists\n')
680 if rename:
680 if rename:
681 hint = _('(hg rename --after to record the rename)\n')
681 hint = _('(hg rename --after to record the rename)\n')
682 else:
682 else:
683 hint = _('(hg copy --after to record the copy)\n')
683 hint = _('(hg copy --after to record the copy)\n')
684 ui.warn(msg % reltarget)
684 ui.warn(msg % reltarget)
685 ui.warn(hint)
685 ui.warn(hint)
686 return
686 return
687
687
688 if after:
688 if after:
689 if not exists:
689 if not exists:
690 if rename:
690 if rename:
691 ui.warn(_('%s: not recording move - %s does not exist\n') %
691 ui.warn(_('%s: not recording move - %s does not exist\n') %
692 (relsrc, reltarget))
692 (relsrc, reltarget))
693 else:
693 else:
694 ui.warn(_('%s: not recording copy - %s does not exist\n') %
694 ui.warn(_('%s: not recording copy - %s does not exist\n') %
695 (relsrc, reltarget))
695 (relsrc, reltarget))
696 return
696 return
697 elif not dryrun:
697 elif not dryrun:
698 try:
698 try:
699 if exists:
699 if exists:
700 os.unlink(target)
700 os.unlink(target)
701 targetdir = os.path.dirname(target) or '.'
701 targetdir = os.path.dirname(target) or '.'
702 if not os.path.isdir(targetdir):
702 if not os.path.isdir(targetdir):
703 os.makedirs(targetdir)
703 os.makedirs(targetdir)
704 if samefile:
704 if samefile:
705 tmp = target + "~hgrename"
705 tmp = target + "~hgrename"
706 os.rename(src, tmp)
706 os.rename(src, tmp)
707 os.rename(tmp, target)
707 os.rename(tmp, target)
708 else:
708 else:
709 util.copyfile(src, target)
709 util.copyfile(src, target)
710 srcexists = True
710 srcexists = True
711 except IOError as inst:
711 except IOError as inst:
712 if inst.errno == errno.ENOENT:
712 if inst.errno == errno.ENOENT:
713 ui.warn(_('%s: deleted in working directory\n') % relsrc)
713 ui.warn(_('%s: deleted in working directory\n') % relsrc)
714 srcexists = False
714 srcexists = False
715 else:
715 else:
716 ui.warn(_('%s: cannot copy - %s\n') %
716 ui.warn(_('%s: cannot copy - %s\n') %
717 (relsrc, inst.strerror))
717 (relsrc, inst.strerror))
718 return True # report a failure
718 return True # report a failure
719
719
720 if ui.verbose or not exact:
720 if ui.verbose or not exact:
721 if rename:
721 if rename:
722 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
722 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
723 else:
723 else:
724 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
724 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
725
725
726 targets[abstarget] = abssrc
726 targets[abstarget] = abssrc
727
727
728 # fix up dirstate
728 # fix up dirstate
729 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
729 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
730 dryrun=dryrun, cwd=cwd)
730 dryrun=dryrun, cwd=cwd)
731 if rename and not dryrun:
731 if rename and not dryrun:
732 if not after and srcexists and not samefile:
732 if not after and srcexists and not samefile:
733 repo.wvfs.unlinkpath(abssrc)
733 repo.wvfs.unlinkpath(abssrc)
734 wctx.forget([abssrc])
734 wctx.forget([abssrc])
735
735
736 # pat: ossep
736 # pat: ossep
737 # dest ossep
737 # dest ossep
738 # srcs: list of (hgsep, hgsep, ossep, bool)
738 # srcs: list of (hgsep, hgsep, ossep, bool)
739 # return: function that takes hgsep and returns ossep
739 # return: function that takes hgsep and returns ossep
740 def targetpathfn(pat, dest, srcs):
740 def targetpathfn(pat, dest, srcs):
741 if os.path.isdir(pat):
741 if os.path.isdir(pat):
742 abspfx = pathutil.canonpath(repo.root, cwd, pat)
742 abspfx = pathutil.canonpath(repo.root, cwd, pat)
743 abspfx = util.localpath(abspfx)
743 abspfx = util.localpath(abspfx)
744 if destdirexists:
744 if destdirexists:
745 striplen = len(os.path.split(abspfx)[0])
745 striplen = len(os.path.split(abspfx)[0])
746 else:
746 else:
747 striplen = len(abspfx)
747 striplen = len(abspfx)
748 if striplen:
748 if striplen:
749 striplen += len(pycompat.ossep)
749 striplen += len(pycompat.ossep)
750 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
750 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
751 elif destdirexists:
751 elif destdirexists:
752 res = lambda p: os.path.join(dest,
752 res = lambda p: os.path.join(dest,
753 os.path.basename(util.localpath(p)))
753 os.path.basename(util.localpath(p)))
754 else:
754 else:
755 res = lambda p: dest
755 res = lambda p: dest
756 return res
756 return res
757
757
758 # pat: ossep
758 # pat: ossep
759 # dest ossep
759 # dest ossep
760 # srcs: list of (hgsep, hgsep, ossep, bool)
760 # srcs: list of (hgsep, hgsep, ossep, bool)
761 # return: function that takes hgsep and returns ossep
761 # return: function that takes hgsep and returns ossep
762 def targetpathafterfn(pat, dest, srcs):
762 def targetpathafterfn(pat, dest, srcs):
763 if matchmod.patkind(pat):
763 if matchmod.patkind(pat):
764 # a mercurial pattern
764 # a mercurial pattern
765 res = lambda p: os.path.join(dest,
765 res = lambda p: os.path.join(dest,
766 os.path.basename(util.localpath(p)))
766 os.path.basename(util.localpath(p)))
767 else:
767 else:
768 abspfx = pathutil.canonpath(repo.root, cwd, pat)
768 abspfx = pathutil.canonpath(repo.root, cwd, pat)
769 if len(abspfx) < len(srcs[0][0]):
769 if len(abspfx) < len(srcs[0][0]):
770 # A directory. Either the target path contains the last
770 # A directory. Either the target path contains the last
771 # component of the source path or it does not.
771 # component of the source path or it does not.
772 def evalpath(striplen):
772 def evalpath(striplen):
773 score = 0
773 score = 0
774 for s in srcs:
774 for s in srcs:
775 t = os.path.join(dest, util.localpath(s[0])[striplen:])
775 t = os.path.join(dest, util.localpath(s[0])[striplen:])
776 if os.path.lexists(t):
776 if os.path.lexists(t):
777 score += 1
777 score += 1
778 return score
778 return score
779
779
780 abspfx = util.localpath(abspfx)
780 abspfx = util.localpath(abspfx)
781 striplen = len(abspfx)
781 striplen = len(abspfx)
782 if striplen:
782 if striplen:
783 striplen += len(pycompat.ossep)
783 striplen += len(pycompat.ossep)
784 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
784 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
785 score = evalpath(striplen)
785 score = evalpath(striplen)
786 striplen1 = len(os.path.split(abspfx)[0])
786 striplen1 = len(os.path.split(abspfx)[0])
787 if striplen1:
787 if striplen1:
788 striplen1 += len(pycompat.ossep)
788 striplen1 += len(pycompat.ossep)
789 if evalpath(striplen1) > score:
789 if evalpath(striplen1) > score:
790 striplen = striplen1
790 striplen = striplen1
791 res = lambda p: os.path.join(dest,
791 res = lambda p: os.path.join(dest,
792 util.localpath(p)[striplen:])
792 util.localpath(p)[striplen:])
793 else:
793 else:
794 # a file
794 # a file
795 if destdirexists:
795 if destdirexists:
796 res = lambda p: os.path.join(dest,
796 res = lambda p: os.path.join(dest,
797 os.path.basename(util.localpath(p)))
797 os.path.basename(util.localpath(p)))
798 else:
798 else:
799 res = lambda p: dest
799 res = lambda p: dest
800 return res
800 return res
801
801
802 pats = scmutil.expandpats(pats)
802 pats = scmutil.expandpats(pats)
803 if not pats:
803 if not pats:
804 raise error.Abort(_('no source or destination specified'))
804 raise error.Abort(_('no source or destination specified'))
805 if len(pats) == 1:
805 if len(pats) == 1:
806 raise error.Abort(_('no destination specified'))
806 raise error.Abort(_('no destination specified'))
807 dest = pats.pop()
807 dest = pats.pop()
808 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
808 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
809 if not destdirexists:
809 if not destdirexists:
810 if len(pats) > 1 or matchmod.patkind(pats[0]):
810 if len(pats) > 1 or matchmod.patkind(pats[0]):
811 raise error.Abort(_('with multiple sources, destination must be an '
811 raise error.Abort(_('with multiple sources, destination must be an '
812 'existing directory'))
812 'existing directory'))
813 if util.endswithsep(dest):
813 if util.endswithsep(dest):
814 raise error.Abort(_('destination %s is not a directory') % dest)
814 raise error.Abort(_('destination %s is not a directory') % dest)
815
815
816 tfn = targetpathfn
816 tfn = targetpathfn
817 if after:
817 if after:
818 tfn = targetpathafterfn
818 tfn = targetpathafterfn
819 copylist = []
819 copylist = []
820 for pat in pats:
820 for pat in pats:
821 srcs = walkpat(pat)
821 srcs = walkpat(pat)
822 if not srcs:
822 if not srcs:
823 continue
823 continue
824 copylist.append((tfn(pat, dest, srcs), srcs))
824 copylist.append((tfn(pat, dest, srcs), srcs))
825 if not copylist:
825 if not copylist:
826 raise error.Abort(_('no files to copy'))
826 raise error.Abort(_('no files to copy'))
827
827
828 errors = 0
828 errors = 0
829 for targetpath, srcs in copylist:
829 for targetpath, srcs in copylist:
830 for abssrc, relsrc, exact in srcs:
830 for abssrc, relsrc, exact in srcs:
831 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
831 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
832 errors += 1
832 errors += 1
833
833
834 if errors:
834 if errors:
835 ui.warn(_('(consider using --after)\n'))
835 ui.warn(_('(consider using --after)\n'))
836
836
837 return errors != 0
837 return errors != 0
838
838
839 ## facility to let extensions process additional data into an import patch
840 # list of identifiers to be executed in order
841 extrapreimport = [] # run before commit
842 extrapostimport = [] # run after commit
843 # mapping from identifier to actual import function
844 #
845 # 'preimport' functions are run before the commit is made and are provided the
846 # following arguments (see the sketch below):
847 # - repo: the localrepository instance,
848 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
849 # - extra: the future extra dictionary of the changeset, please mutate it,
850 # - opts: the import options.
851 # XXX ideally, we would just pass a ctx ready to be computed, that would allow
852 # mutation of the in-memory commit and more. Feel free to rework the code to get
853 # there.
854 extrapreimportmap = {}
855 # 'postimport' functions are run after the commit is made and are provided the
856 # following argument:
857 # - ctx: the changectx created by import.
858 extrapostimportmap = {}
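# Example (a minimal sketch of how an extension might hook in; the identifier
# and function names are illustrative):
#
#     def recordnodeid(repo, patchdata, extra, opts):
#         extra['imported_nodeid'] = patchdata.get('nodeid', '')
#
#     cmdutil.extrapreimport.append('recordnodeid')
#     cmdutil.extrapreimportmap['recordnodeid'] = recordnodeid
#
# tryimportone calls every registered 'preimport' function just before
# committing, and every 'postimport' function with the resulting changectx.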
859
859
860 def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
861 """Utility function used by commands.import to import a single patch
862
863 This function is explicitly defined here to help the evolve extension to
864 wrap this part of the import logic.
865
866 The API is currently a bit ugly because it is a simple code translation from
867 the import command. Feel free to make it better.
868
869 :hunk: a patch (as a binary string)
870 :parents: nodes that will be parents of the created commit
871 :opts: the full dict of options passed to the import command
872 :msgs: list to save the commit message to.
873 (used in case we need to save it when failing)
874 :updatefunc: a function that updates a repo to a given node
875 updatefunc(<repo>, <node>)
876 """
877 # avoid cycle context -> subrepo -> cmdutil
877 # avoid cycle context -> subrepo -> cmdutil
878 from . import context
878 from . import context
879 extractdata = patch.extract(ui, hunk)
879 extractdata = patch.extract(ui, hunk)
880 tmpname = extractdata.get('filename')
880 tmpname = extractdata.get('filename')
881 message = extractdata.get('message')
881 message = extractdata.get('message')
882 user = opts.get('user') or extractdata.get('user')
882 user = opts.get('user') or extractdata.get('user')
883 date = opts.get('date') or extractdata.get('date')
883 date = opts.get('date') or extractdata.get('date')
884 branch = extractdata.get('branch')
884 branch = extractdata.get('branch')
885 nodeid = extractdata.get('nodeid')
885 nodeid = extractdata.get('nodeid')
886 p1 = extractdata.get('p1')
886 p1 = extractdata.get('p1')
887 p2 = extractdata.get('p2')
887 p2 = extractdata.get('p2')
888
888
889 nocommit = opts.get('no_commit')
889 nocommit = opts.get('no_commit')
890 importbranch = opts.get('import_branch')
890 importbranch = opts.get('import_branch')
891 update = not opts.get('bypass')
891 update = not opts.get('bypass')
892 strip = opts["strip"]
892 strip = opts["strip"]
893 prefix = opts["prefix"]
893 prefix = opts["prefix"]
894 sim = float(opts.get('similarity') or 0)
894 sim = float(opts.get('similarity') or 0)
895 if not tmpname:
895 if not tmpname:
896 return (None, None, False)
896 return (None, None, False)
897
897
898 rejects = False
898 rejects = False
899
899
900 try:
900 try:
901 cmdline_message = logmessage(ui, opts)
901 cmdline_message = logmessage(ui, opts)
902 if cmdline_message:
902 if cmdline_message:
903 # pickup the cmdline msg
903 # pickup the cmdline msg
904 message = cmdline_message
904 message = cmdline_message
905 elif message:
905 elif message:
906 # pickup the patch msg
906 # pickup the patch msg
907 message = message.strip()
907 message = message.strip()
908 else:
908 else:
909 # launch the editor
909 # launch the editor
910 message = None
910 message = None
911 ui.debug('message:\n%s\n' % message)
911 ui.debug('message:\n%s\n' % message)
912
912
913 if len(parents) == 1:
913 if len(parents) == 1:
914 parents.append(repo[nullid])
914 parents.append(repo[nullid])
915 if opts.get('exact'):
915 if opts.get('exact'):
916 if not nodeid or not p1:
916 if not nodeid or not p1:
917 raise error.Abort(_('not a Mercurial patch'))
917 raise error.Abort(_('not a Mercurial patch'))
918 p1 = repo[p1]
918 p1 = repo[p1]
919 p2 = repo[p2 or nullid]
919 p2 = repo[p2 or nullid]
920 elif p2:
920 elif p2:
921 try:
921 try:
922 p1 = repo[p1]
922 p1 = repo[p1]
923 p2 = repo[p2]
923 p2 = repo[p2]
924 # Without any options, consider p2 only if the
924 # Without any options, consider p2 only if the
925 # patch is being applied on top of the recorded
925 # patch is being applied on top of the recorded
926 # first parent.
926 # first parent.
927 if p1 != parents[0]:
927 if p1 != parents[0]:
928 p1 = parents[0]
928 p1 = parents[0]
929 p2 = repo[nullid]
929 p2 = repo[nullid]
930 except error.RepoError:
930 except error.RepoError:
931 p1, p2 = parents
931 p1, p2 = parents
932 if p2.node() == nullid:
932 if p2.node() == nullid:
933 ui.warn(_("warning: import the patch as a normal revision\n"
933 ui.warn(_("warning: import the patch as a normal revision\n"
934 "(use --exact to import the patch as a merge)\n"))
934 "(use --exact to import the patch as a merge)\n"))
935 else:
935 else:
936 p1, p2 = parents
936 p1, p2 = parents
937
937
938 n = None
938 n = None
939 if update:
939 if update:
940 if p1 != parents[0]:
940 if p1 != parents[0]:
941 updatefunc(repo, p1.node())
941 updatefunc(repo, p1.node())
942 if p2 != parents[1]:
942 if p2 != parents[1]:
943 repo.setparents(p1.node(), p2.node())
943 repo.setparents(p1.node(), p2.node())
944
944
945 if opts.get('exact') or importbranch:
945 if opts.get('exact') or importbranch:
946 repo.dirstate.setbranch(branch or 'default')
946 repo.dirstate.setbranch(branch or 'default')
947
947
948 partial = opts.get('partial', False)
948 partial = opts.get('partial', False)
949 files = set()
949 files = set()
950 try:
950 try:
951 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
951 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
952 files=files, eolmode=None, similarity=sim / 100.0)
952 files=files, eolmode=None, similarity=sim / 100.0)
953 except patch.PatchError as e:
953 except patch.PatchError as e:
954 if not partial:
954 if not partial:
955 raise error.Abort(str(e))
955 raise error.Abort(str(e))
956 if partial:
956 if partial:
957 rejects = True
957 rejects = True
958
958
959 files = list(files)
959 files = list(files)
960 if nocommit:
960 if nocommit:
961 if message:
961 if message:
962 msgs.append(message)
962 msgs.append(message)
963 else:
963 else:
964 if opts.get('exact') or p2:
964 if opts.get('exact') or p2:
965 # If you got here, you either use --force and know what
965 # If you got here, you either use --force and know what
966 # you are doing or used --exact or a merge patch while
966 # you are doing or used --exact or a merge patch while
967 # being updated to its first parent.
967 # being updated to its first parent.
968 m = None
968 m = None
969 else:
969 else:
970 m = scmutil.matchfiles(repo, files or [])
970 m = scmutil.matchfiles(repo, files or [])
971 editform = mergeeditform(repo[None], 'import.normal')
971 editform = mergeeditform(repo[None], 'import.normal')
972 if opts.get('exact'):
972 if opts.get('exact'):
973 editor = None
973 editor = None
974 else:
974 else:
975 editor = getcommiteditor(editform=editform, **opts)
975 editor = getcommiteditor(editform=editform, **opts)
976 extra = {}
976 extra = {}
977 for idfunc in extrapreimport:
977 for idfunc in extrapreimport:
978 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
978 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
979 overrides = {}
979 overrides = {}
980 if partial:
980 if partial:
981 overrides[('ui', 'allowemptycommit')] = True
981 overrides[('ui', 'allowemptycommit')] = True
982 with repo.ui.configoverride(overrides, 'import'):
982 with repo.ui.configoverride(overrides, 'import'):
983 n = repo.commit(message, user,
983 n = repo.commit(message, user,
984 date, match=m,
984 date, match=m,
985 editor=editor, extra=extra)
985 editor=editor, extra=extra)
986 for idfunc in extrapostimport:
986 for idfunc in extrapostimport:
987 extrapostimportmap[idfunc](repo[n])
987 extrapostimportmap[idfunc](repo[n])
988 else:
988 else:
989 if opts.get('exact') or importbranch:
989 if opts.get('exact') or importbranch:
990 branch = branch or 'default'
990 branch = branch or 'default'
991 else:
991 else:
992 branch = p1.branch()
992 branch = p1.branch()
993 store = patch.filestore()
993 store = patch.filestore()
994 try:
994 try:
995 files = set()
995 files = set()
996 try:
996 try:
997 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
997 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
998 files, eolmode=None)
998 files, eolmode=None)
999 except patch.PatchError as e:
999 except patch.PatchError as e:
1000 raise error.Abort(str(e))
1000 raise error.Abort(str(e))
1001 if opts.get('exact'):
1001 if opts.get('exact'):
1002 editor = None
1002 editor = None
1003 else:
1003 else:
1004 editor = getcommiteditor(editform='import.bypass')
1004 editor = getcommiteditor(editform='import.bypass')
1005 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1005 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1006 message,
1006 message,
1007 user,
1007 user,
1008 date,
1008 date,
1009 branch, files, store,
1009 branch, files, store,
1010 editor=editor)
1010 editor=editor)
1011 n = memctx.commit()
1011 n = memctx.commit()
1012 finally:
1012 finally:
1013 store.close()
1013 store.close()
1014 if opts.get('exact') and nocommit:
1014 if opts.get('exact') and nocommit:
1015 # --exact with --no-commit is still useful in that it does merge
1015 # --exact with --no-commit is still useful in that it does merge
1016 # and branch bits
1016 # and branch bits
1017 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1017 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1018 elif opts.get('exact') and hex(n) != nodeid:
1018 elif opts.get('exact') and hex(n) != nodeid:
1019 raise error.Abort(_('patch is damaged or loses information'))
1019 raise error.Abort(_('patch is damaged or loses information'))
1020 msg = _('applied to working directory')
1020 msg = _('applied to working directory')
1021 if n:
1021 if n:
1022 # i18n: refers to a short changeset id
1022 # i18n: refers to a short changeset id
1023 msg = _('created %s') % short(n)
1023 msg = _('created %s') % short(n)
1024 return (msg, n, rejects)
1024 return (msg, n, rejects)
1025 finally:
1025 finally:
1026 os.unlink(tmpname)
1026 os.unlink(tmpname)
1027
1027
1028 # facility to let extensions include additional data in an exported patch
1029 # list of identifiers to be executed in order
1030 extraexport = []
1031 # mapping from identifier to actual export function
1032 # each function has to return a string to be added to the header or None;
1033 # it is given two arguments (sequencenumber, changectx) - see the sketch below
1034 extraexportmap = {}
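# Example (a minimal sketch; the identifier and function names are
# illustrative): an extension can add a custom '# ...' header line to every
# exported patch.
#
#     def branchheader(seqno, ctx):
#         return 'Exported-Branch %s' % ctx.branch()
#
#     cmdutil.extraexport.append('branchheader')
#     cmdutil.extraexportmap['branchheader'] = branchheader
#
# Returning None from the function omits the header for that changeset.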
1035
1035
1036 def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
1036 def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
1037 opts=None, match=None):
1037 opts=None, match=None):
1038 '''export changesets as hg patches.'''
1038 '''export changesets as hg patches.'''
1039
1039
1040 total = len(revs)
1040 total = len(revs)
1041 revwidth = max([len(str(rev)) for rev in revs])
1041 revwidth = max([len(str(rev)) for rev in revs])
1042 filemode = {}
1042 filemode = {}
1043
1043
1044 def single(rev, seqno, fp):
1044 def single(rev, seqno, fp):
1045 ctx = repo[rev]
1045 ctx = repo[rev]
1046 node = ctx.node()
1046 node = ctx.node()
1047 parents = [p.node() for p in ctx.parents() if p]
1047 parents = [p.node() for p in ctx.parents() if p]
1048 branch = ctx.branch()
1048 branch = ctx.branch()
1049 if switch_parent:
1049 if switch_parent:
1050 parents.reverse()
1050 parents.reverse()
1051
1051
1052 if parents:
1052 if parents:
1053 prev = parents[0]
1053 prev = parents[0]
1054 else:
1054 else:
1055 prev = nullid
1055 prev = nullid
1056
1056
1057 shouldclose = False
1057 shouldclose = False
1058 if not fp and len(template) > 0:
1058 if not fp and len(template) > 0:
1059 desc_lines = ctx.description().rstrip().split('\n')
1059 desc_lines = ctx.description().rstrip().split('\n')
1060 desc = desc_lines[0] # Commit always has a first line.
1061 fp = makefileobj(repo, template, node, desc=desc, total=total,
1061 fp = makefileobj(repo, template, node, desc=desc, total=total,
1062 seqno=seqno, revwidth=revwidth, mode='wb',
1062 seqno=seqno, revwidth=revwidth, mode='wb',
1063 modemap=filemode)
1063 modemap=filemode)
1064 shouldclose = True
1064 shouldclose = True
1065 if fp and not getattr(fp, 'name', '<unnamed>').startswith('<'):
1065 if fp and not getattr(fp, 'name', '<unnamed>').startswith('<'):
1066 repo.ui.note("%s\n" % fp.name)
1066 repo.ui.note("%s\n" % fp.name)
1067
1067
1068 if not fp:
1068 if not fp:
1069 write = repo.ui.write
1069 write = repo.ui.write
1070 else:
1070 else:
1071 def write(s, **kw):
1071 def write(s, **kw):
1072 fp.write(s)
1072 fp.write(s)
1073
1073
1074 write("# HG changeset patch\n")
1074 write("# HG changeset patch\n")
1075 write("# User %s\n" % ctx.user())
1075 write("# User %s\n" % ctx.user())
1076 write("# Date %d %d\n" % ctx.date())
1076 write("# Date %d %d\n" % ctx.date())
1077 write("# %s\n" % util.datestr(ctx.date()))
1077 write("# %s\n" % util.datestr(ctx.date()))
1078 if branch and branch != 'default':
1078 if branch and branch != 'default':
1079 write("# Branch %s\n" % branch)
1079 write("# Branch %s\n" % branch)
1080 write("# Node ID %s\n" % hex(node))
1080 write("# Node ID %s\n" % hex(node))
1081 write("# Parent %s\n" % hex(prev))
1081 write("# Parent %s\n" % hex(prev))
1082 if len(parents) > 1:
1082 if len(parents) > 1:
1083 write("# Parent %s\n" % hex(parents[1]))
1083 write("# Parent %s\n" % hex(parents[1]))
1084
1084
1085 for headerid in extraexport:
1085 for headerid in extraexport:
1086 header = extraexportmap[headerid](seqno, ctx)
1086 header = extraexportmap[headerid](seqno, ctx)
1087 if header is not None:
1087 if header is not None:
1088 write('# %s\n' % header)
1088 write('# %s\n' % header)
1089 write(ctx.description().rstrip())
1089 write(ctx.description().rstrip())
1090 write("\n\n")
1090 write("\n\n")
1091
1091
1092 for chunk, label in patch.diffui(repo, prev, node, match, opts=opts):
1092 for chunk, label in patch.diffui(repo, prev, node, match, opts=opts):
1093 write(chunk, label=label)
1093 write(chunk, label=label)
1094
1094
1095 if shouldclose:
1095 if shouldclose:
1096 fp.close()
1096 fp.close()
1097
1097
1098 for seqno, rev in enumerate(revs):
1098 for seqno, rev in enumerate(revs):
1099 single(rev, seqno + 1, fp)
1099 single(rev, seqno + 1, fp)
1100
1100
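# Illustrative sketch (not part of this file): exporting a single revision,
# either through the default 'hg-%h.patch' filename template or to an explicit
# file object. 'tip' is only an example revision.
#
#     cmdutil.export(repo, [repo['tip'].rev()])        # writes hg-<shorthash>.patch
#
#     with open('tip.patch', 'wb') as fp:
#         cmdutil.export(repo, [repo['tip'].rev()], fp=fp)
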
def diffordiffstat(ui, repo, diffopts, node1, node2, match,
                   changes=None, stat=False, fp=None, prefix='',
                   root='', listsubrepos=False):
    '''show diff or diffstat.'''
    if fp is None:
        write = ui.write
    else:
        def write(s, **kw):
            fp.write(s)

    if root:
        relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
    else:
        relroot = ''
    if relroot != '':
        # XXX relative roots currently don't work if the root is within a
        # subrepo
        uirelroot = match.uipath(relroot)
        relroot += '/'
        for matchroot in match.files():
            if not matchroot.startswith(relroot):
                ui.warn(_('warning: %s not inside relative root %s\n') % (
                    match.uipath(matchroot), uirelroot))

    if stat:
        diffopts = diffopts.copy(context=0)
        width = 80
        if not ui.plain():
            width = ui.termwidth()
        chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
                            prefix=prefix, relroot=relroot)
        for chunk, label in patch.diffstatui(util.iterlines(chunks),
                                             width=width):
            write(chunk, label=label)
    else:
        for chunk, label in patch.diffui(repo, node1, node2, match,
                                         changes, diffopts, prefix=prefix,
                                         relroot=relroot):
            write(chunk, label=label)

    if listsubrepos:
        ctx1 = repo[node1]
        ctx2 = repo[node2]
        for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
            tempnode2 = node2
            try:
                if node2 is not None:
                    tempnode2 = ctx2.substate[subpath][1]
            except KeyError:
                # A subrepo that existed in node1 was deleted between node1 and
                # node2 (inclusive). Thus, ctx2's substate won't contain that
                # subpath. The best we can do is to ignore it.
                tempnode2 = None
            submatch = matchmod.subdirmatcher(subpath, match)
            sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
                     stat=stat, fp=fp, prefix=prefix)

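# Illustrative sketch (not part of this file): printing a diffstat of the
# working directory against its first parent, roughly what `hg diff --stat`
# produces through this helper:
#
#     diffopts = patch.diffallopts(ui)
#     m = scmutil.matchall(repo)
#     diffordiffstat(ui, repo, diffopts, repo['.'].node(), None, m, stat=True)
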
def _changesetlabels(ctx):
    labels = ['log.changeset', 'changeset.%s' % ctx.phasestr()]
    if ctx.obsolete():
        labels.append('changeset.obsolete')
    if ctx.troubled():
        labels.append('changeset.troubled')
        for trouble in ctx.troubles():
            labels.append('trouble.%s' % trouble)
    return ' '.join(labels)

class changeset_printer(object):
    '''show changeset information when templating is not requested.'''

    def __init__(self, ui, repo, matchfn, diffopts, buffered):
        self.ui = ui
        self.repo = repo
        self.buffered = buffered
        self.matchfn = matchfn
        self.diffopts = diffopts
        self.header = {}
        self.hunk = {}
        self.lastheader = None
        self.footer = None

    def flush(self, ctx):
        rev = ctx.rev()
        if rev in self.header:
            h = self.header[rev]
            if h != self.lastheader:
                self.lastheader = h
                self.ui.write(h)
            del self.header[rev]
        if rev in self.hunk:
            self.ui.write(self.hunk[rev])
            del self.hunk[rev]
            return 1
        return 0

    def close(self):
        if self.footer:
            self.ui.write(self.footer)

    def show(self, ctx, copies=None, matchfn=None, **props):
        if self.buffered:
            self.ui.pushbuffer(labeled=True)
            self._show(ctx, copies, matchfn, props)
            self.hunk[ctx.rev()] = self.ui.popbuffer()
        else:
            self._show(ctx, copies, matchfn, props)

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        changenode = ctx.node()
        rev = ctx.rev()
        if self.ui.debugflag:
            hexfunc = hex
        else:
            hexfunc = short
        # as of now, wctx.node() and wctx.rev() return None, but we want to
        # show the same values as {node} and {rev} templatekw
        revnode = (scmutil.intrev(rev), hexfunc(bin(ctx.hex())))

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % revnode, label='log.node')
            return

        date = util.datestr(ctx.date())

        # i18n: column positioning for "hg log"
        self.ui.write(_("changeset: %d:%s\n") % revnode,
                      label=_changesetlabels(ctx))

        # branches are shown before any other names for backwards
        # compatibility
        branch = ctx.branch()
        # don't show the default branch name
        if branch != 'default':
            # i18n: column positioning for "hg log"
            self.ui.write(_("branch: %s\n") % branch,
                          label='log.branch')

        for nsname, ns in self.repo.names.iteritems():
            # the branches namespace has special logic handled above, so here
            # we just skip it
            if nsname == 'branches':
                continue
            # we will use the templatename as the color name since those two
            # should be the same
            for name in ns.names(self.repo, changenode):
                self.ui.write(ns.logfmt % name,
                              label='log.%s' % ns.colorname)
        if self.ui.debugflag:
            # i18n: column positioning for "hg log"
            self.ui.write(_("phase: %s\n") % ctx.phasestr(),
                          label='log.phase')
        for pctx in scmutil.meaningfulparents(self.repo, ctx):
            label = 'log.parent changeset.%s' % pctx.phasestr()
            # i18n: column positioning for "hg log"
            self.ui.write(_("parent: %d:%s\n")
                          % (pctx.rev(), hexfunc(pctx.node())),
                          label=label)

        if self.ui.debugflag and rev is not None:
            mnode = ctx.manifestnode()
            # i18n: column positioning for "hg log"
            self.ui.write(_("manifest: %d:%s\n") %
                          (self.repo.manifestlog._revlog.rev(mnode),
                           hex(mnode)),
                          label='ui.debug log.manifest')
        # i18n: column positioning for "hg log"
        self.ui.write(_("user: %s\n") % ctx.user(),
                      label='log.user')
        # i18n: column positioning for "hg log"
        self.ui.write(_("date: %s\n") % date,
                      label='log.date')

        if ctx.troubled():
            # i18n: column positioning for "hg log"
            self.ui.write(_("trouble: %s\n") % ', '.join(ctx.troubles()),
                          label='log.trouble')

        if self.ui.debugflag:
            files = ctx.p1().status(ctx)[:3]
            for key, value in zip([# i18n: column positioning for "hg log"
                                   _("files:"),
                                   # i18n: column positioning for "hg log"
                                   _("files+:"),
                                   # i18n: column positioning for "hg log"
                                   _("files-:")], files):
                if value:
                    self.ui.write("%-12s %s\n" % (key, " ".join(value)),
                                  label='ui.debug log.files')
        elif ctx.files() and self.ui.verbose:
            # i18n: column positioning for "hg log"
            self.ui.write(_("files: %s\n") % " ".join(ctx.files()),
                          label='ui.note log.files')
        if copies and self.ui.verbose:
            copies = ['%s (%s)' % c for c in copies]
            # i18n: column positioning for "hg log"
            self.ui.write(_("copies: %s\n") % ' '.join(copies),
                          label='ui.note log.copies')

        extra = ctx.extra()
        if extra and self.ui.debugflag:
            for key, value in sorted(extra.items()):
                # i18n: column positioning for "hg log"
                self.ui.write(_("extra: %s=%s\n")
                              % (key, util.escapestr(value)),
                              label='ui.debug log.extra')

        description = ctx.description().strip()
        if description:
            if self.ui.verbose:
                self.ui.write(_("description:\n"),
                              label='ui.note log.description')
                self.ui.write(description,
                              label='ui.note log.description')
                self.ui.write("\n\n")
            else:
                # i18n: column positioning for "hg log"
                self.ui.write(_("summary: %s\n") %
                              description.splitlines()[0],
                              label='log.summary')
        self.ui.write("\n")

        self.showpatch(ctx, matchfn)

    def showpatch(self, ctx, matchfn):
        if not matchfn:
            matchfn = self.matchfn
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.diffallopts(self.ui, self.diffopts)
            node = ctx.node()
            prev = ctx.p1().node()
            if stat:
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
            if diff:
                if stat:
                    self.ui.write("\n")
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
            self.ui.write("\n")

class jsonchangeset(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, matchfn, diffopts, buffered):
        changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
        self.cache = {}
        self._first = True

    def close(self):
        if not self._first:
            self.ui.write("\n]\n")
        else:
            self.ui.write("[]\n")

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        rev = ctx.rev()
        if rev is None:
            jrev = jnode = 'null'
        else:
            jrev = '%d' % rev
            jnode = '"%s"' % hex(ctx.node())
        j = encoding.jsonescape

        if self._first:
            self.ui.write("[\n {")
            self._first = False
        else:
            self.ui.write(",\n {")

        if self.ui.quiet:
            self.ui.write(('\n "rev": %s') % jrev)
            self.ui.write((',\n "node": %s') % jnode)
            self.ui.write('\n }')
            return

        self.ui.write(('\n "rev": %s') % jrev)
        self.ui.write((',\n "node": %s') % jnode)
        self.ui.write((',\n "branch": "%s"') % j(ctx.branch()))
        self.ui.write((',\n "phase": "%s"') % ctx.phasestr())
        self.ui.write((',\n "user": "%s"') % j(ctx.user()))
        self.ui.write((',\n "date": [%d, %d]') % ctx.date())
        self.ui.write((',\n "desc": "%s"') % j(ctx.description()))

        self.ui.write((',\n "bookmarks": [%s]') %
                      ", ".join('"%s"' % j(b) for b in ctx.bookmarks()))
        self.ui.write((',\n "tags": [%s]') %
                      ", ".join('"%s"' % j(t) for t in ctx.tags()))
        self.ui.write((',\n "parents": [%s]') %
                      ", ".join('"%s"' % c.hex() for c in ctx.parents()))

        if self.ui.debugflag:
            if rev is None:
                jmanifestnode = 'null'
            else:
                jmanifestnode = '"%s"' % hex(ctx.manifestnode())
            self.ui.write((',\n "manifest": %s') % jmanifestnode)

            self.ui.write((',\n "extra": {%s}') %
                          ", ".join('"%s": "%s"' % (j(k), j(v))
                                    for k, v in ctx.extra().items()))

            files = ctx.p1().status(ctx)
            self.ui.write((',\n "modified": [%s]') %
                          ", ".join('"%s"' % j(f) for f in files[0]))
            self.ui.write((',\n "added": [%s]') %
                          ", ".join('"%s"' % j(f) for f in files[1]))
            self.ui.write((',\n "removed": [%s]') %
                          ", ".join('"%s"' % j(f) for f in files[2]))

        elif self.ui.verbose:
            self.ui.write((',\n "files": [%s]') %
                          ", ".join('"%s"' % j(f) for f in ctx.files()))

            if copies:
                self.ui.write((',\n "copies": {%s}') %
                              ", ".join('"%s": "%s"' % (j(k), j(v))
                                        for k, v in copies))

        matchfn = self.matchfn
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.difffeatureopts(self.ui, self.diffopts, git=True)
            node, prev = ctx.node(), ctx.p1().node()
            if stat:
                self.ui.pushbuffer()
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
                self.ui.write((',\n "diffstat": "%s"')
                              % j(self.ui.popbuffer()))
            if diff:
                self.ui.pushbuffer()
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
                self.ui.write((',\n "diff": "%s"') % j(self.ui.popbuffer()))

        self.ui.write("\n }")

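# Illustrative note (not part of this file): show_changeset() below returns a
# jsonchangeset when the template is the literal string 'json', so the class
# above is what backs `hg log -Tjson`. With --quiet, each entry reduces to a
# sketch like the following (values invented):
#
#     [
#      {
#       "rev": 0,
#       "node": "0123456789abcdef0123456789abcdef01234567"
#      }
#     ]
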
class changeset_templater(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
        changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
        assert not (tmpl and mapfile)
        defaulttempl = templatekw.defaulttempl
        if mapfile:
            self.t = templater.templater.frommapfile(mapfile,
                                                     cache=defaulttempl)
        else:
            self.t = formatter.maketemplater(ui, 'changeset', tmpl,
                                             cache=defaulttempl)

        self._counter = itertools.count()
        self.cache = {}

        # find correct templates for current mode
        tmplmodes = [
            (True, None),
            (self.ui.verbose, 'verbose'),
            (self.ui.quiet, 'quiet'),
            (self.ui.debugflag, 'debug'),
        ]

        self._parts = {'header': '', 'footer': '', 'changeset': 'changeset',
                       'docheader': '', 'docfooter': ''}
        for mode, postfix in tmplmodes:
            for t in self._parts:
                cur = t
                if postfix:
                    cur += "_" + postfix
                if mode and cur in self.t:
                    self._parts[t] = cur

        if self._parts['docheader']:
            self.ui.write(templater.stringify(self.t(self._parts['docheader'])))

    def close(self):
        if self._parts['docfooter']:
            if not self.footer:
                self.footer = ""
            self.footer += templater.stringify(self.t(self._parts['docfooter']))
        return super(changeset_templater, self).close()

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        props = props.copy()
        props.update(templatekw.keywords)
        props['templ'] = self.t
        props['ctx'] = ctx
        props['repo'] = self.repo
        props['ui'] = self.repo.ui
        props['index'] = next(self._counter)
        props['revcache'] = {'copies': copies}
        props['cache'] = self.cache
        props = pycompat.strkwargs(props)

        # write header
        if self._parts['header']:
            h = templater.stringify(self.t(self._parts['header'], **props))
            if self.buffered:
                self.header[ctx.rev()] = h
            else:
                if self.lastheader != h:
                    self.lastheader = h
                    self.ui.write(h)

        # write changeset metadata, then patch if requested
        key = self._parts['changeset']
        self.ui.write(templater.stringify(self.t(key, **props)))
        self.showpatch(ctx, matchfn)

        if self._parts['footer']:
            if not self.footer:
                self.footer = templater.stringify(
                    self.t(self._parts['footer'], **props))

def gettemplate(ui, tmpl, style):
    """
    Find the template matching the given template spec or style.
    """

    # ui settings
    if not tmpl and not style: # templates are stronger than styles
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl:
            return templater.unquotestring(tmpl), None
        else:
            style = util.expandpath(ui.config('ui', 'style', ''))

    if not tmpl and style:
        mapfile = style
        if not os.path.split(mapfile)[0]:
            mapname = (templater.templatepath('map-cmdline.' + mapfile)
                       or templater.templatepath(mapfile))
            if mapname:
                mapfile = mapname
        return None, mapfile

    if not tmpl:
        return None, None

    return formatter.lookuptemplate(ui, 'changeset', tmpl)

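# Illustrative examples (not part of this file; return values abbreviated and
# the style path hypothetical) of what gettemplate() resolves to:
#
#     gettemplate(ui, '{rev}\n', None)    -> ('{rev}\n', None)
#     gettemplate(ui, None, 'compact')    -> (None, '.../templates/map-cmdline.compact')
#     gettemplate(ui, None, None)         -> (None, None)   # unless ui.logtemplate/ui.style is set
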
def show_changeset(ui, repo, opts, buffered=False):
    """show one changeset using template or regular display.

    Display format will be the first non-empty hit of:
    1. option 'template'
    2. option 'style'
    3. [ui] setting 'logtemplate'
    4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changeset_printer() is done.
    """
    # options
    matchfn = None
    if opts.get('patch') or opts.get('stat'):
        matchfn = scmutil.matchall(repo)

    if opts.get('template') == 'json':
        return jsonchangeset(ui, repo, matchfn, opts, buffered)

    tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))

    if not tmpl and not mapfile:
        return changeset_printer(ui, repo, matchfn, opts, buffered)

    return changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile, buffered)

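# Illustrative sketch (not part of this file): driving a displayer returned by
# show_changeset() from an extension. The template string and revset are
# arbitrary examples.
#
#     displayer = show_changeset(ui, repo, {'template': '{rev}:{node|short}\n'})
#     for rev in repo.revs('last(all(), 3)'):
#         displayer.show(repo[rev])
#     displayer.close()
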
def showmarker(fm, marker, index=None):
    """utility function to display an obsolescence marker in a readable way

    To be used by debug function."""
    if index is not None:
        fm.write('index', '%i ', index)
    fm.write('precnode', '%s ', hex(marker.precnode()))
    succs = marker.succnodes()
    fm.condwrite(succs, 'succnodes', '%s ',
                 fm.formatlist(map(hex, succs), name='node'))
    fm.write('flag', '%X ', marker.flags())
    parents = marker.parentnodes()
    if parents is not None:
        fm.write('parentnodes', '{%s} ',
                 fm.formatlist(map(hex, parents), name='node', sep=', '))
    fm.write('date', '(%s) ', fm.formatdate(marker.date()))
    meta = marker.metadata().copy()
    meta.pop('date', None)
    fm.write('metadata', '{%s}', fm.formatdict(meta, fmt='%r: %r', sep=', '))
    fm.plain('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = util.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, util.datestr(results[rev])))
            return '%d' % rev

    raise error.Abort(_("revision matching date not found"))

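# Illustrative sketch (not part of this file): finddate() turns a date spec
# into a revision number string, e.g.
#
#     rev = finddate(ui, repo, '<2017-05-01')   # tipmost changeset matching the spec
#     ctx = repo[rev]
#
# The accepted specs are whatever util.matchdate() understands, such as
# '>2017-01-01' or '2017-04-26 to 2017-04-28'.
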
def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

class FileWalkError(Exception):
    pass

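# For illustration (not part of this file): the generator above yields window
# sizes that double up to the limit and then stay flat, e.g.
#
#     >>> import itertools
#     >>> list(itertools.islice(increasingwindows(), 8))
#     [8, 16, 32, 64, 128, 256, 512, 512]
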
def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)
    def filerevgen(filelog, last):
        """
        Only files, no patterns. Check the history of each file.

        Examines filelog entries within the minrev/maxrev linkrev range.
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order.
        """
        cl_count = len(repo)
        revs = []
        for j in xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield revs for which we have the changelog; this can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)
    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise error.Abort(_('cannot follow file not in parent '
                                        'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise error.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = {filelog.linkrev(last)}

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

class _followfilter(object):
    def __init__(self, repo, onlyfirst=False):
        self.repo = repo
        self.startrev = nullrev
        self.roots = set()
        self.onlyfirst = onlyfirst

    def match(self, rev):
        def realparents(rev):
            if self.onlyfirst:
                return self.repo.changelog.parentrevs(rev)[0:1]
            else:
                return filter(lambda x: x != nullrev,
                              self.repo.changelog.parentrevs(rev))

        if self.startrev == nullrev:
            self.startrev = rev
            return True

        if rev > self.startrev:
            # forward: all descendants
            if not self.roots:
                self.roots.add(self.startrev)
            for parent in realparents(rev):
                if parent in self.roots:
                    self.roots.add(rev)
                    return True
        else:
            # backwards: all parents
            if not self.roots:
                self.roots.update(realparents(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.update(realparents(rev))
                return True

        return False

def walkchangerevs(repo, match, opts, prepare):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an iterator yielding contexts. Before
    yielding each context, the iterator will first call the prepare
    function on each context in the window in forward order.'''

    follow = opts.get('follow') or opts.get('follow_first')
    revs = _logrevs(repo, opts)
    if not revs:
        return []
    wanted = set()
    slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
                                   opts.get('removed'))
    fncache = {}
    change = repo.changectx

    # First step is to fill wanted, the set of revisions that we want to yield.
    # When it does not induce extra cost, we also fill fncache for revisions in
    # wanted: a cache of filenames that were changed (ctx.files()) and that
    # match the file filtering conditions.

    if match.always():
        # No files, no patterns. Display all revs.
        wanted = revs
    elif not slowpath:
        # We only have to read through the filelog to find wanted revisions

        try:
            wanted = walkfilerevs(repo, match, follow, revs, fncache)
        except FileWalkError:
            slowpath = True

            # We decided to fall back to the slowpath because at least one
            # of the paths was not a file. Check to see if at least one of them
            # existed in history, otherwise simply return
            for path in match.files():
                if path == '.' or path in repo.store:
                    break
            else:
                return []

    if slowpath:
        # We have to read the changelog to match filenames against
        # changed files

        if follow:
            raise error.Abort(_('can only follow copies/renames for explicit '
                                'filenames'))

        # The slow path checks files modified in every changeset.
        # This is really slow on large repos, so compute the set lazily.
        class lazywantedset(object):
            def __init__(self):
                self.set = set()
                self.revs = set(revs)

            # No need to worry about locality here because it will be accessed
            # in the same order as the increasing window below.
            def __contains__(self, value):
                if value in self.set:
                    return True
                elif not value in self.revs:
                    return False
                else:
                    self.revs.discard(value)
                    ctx = change(value)
                    matches = filter(match, ctx.files())
                    if matches:
                        fncache[value] = matches
                        self.set.add(value)
                        return True
                    return False

            def discard(self, value):
                self.revs.discard(value)
                self.set.discard(value)

        wanted = lazywantedset()

    # it might be worthwhile to do this in the iterator if the rev range
    # is descending and the prune args are all within that range
    for rev in opts.get('prune', ()):
        rev = repo[rev].rev()
        ff = _followfilter(repo)
        stop = min(revs[0], revs[-1])
        for x in xrange(rev, stop - 1, -1):
            if ff.match(x):
                wanted = wanted - [x]

    # Now that wanted is correctly initialized, we can iterate over the
    # revision range, yielding only revisions in wanted.
    def iterate():
        if follow and match.always():
            ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
            def want(rev):
                return ff.match(rev) and rev in wanted
        else:
            def want(rev):
                return rev in wanted

        it = iter(revs)
        stopiteration = False
        for windowsize in increasingwindows():
            nrevs = []
            for i in xrange(windowsize):
                rev = next(it, None)
                if rev is None:
                    stopiteration = True
                    break
                elif want(rev):
                    nrevs.append(rev)
            for rev in sorted(nrevs):
                fns = fncache.get(rev)
                ctx = change(rev)
                if not fns:
                    def fns_generator():
                        for f in ctx.files():
                            if match(f):
                                yield f
                    fns = fns_generator()
                prepare(ctx, fns)
            for rev in nrevs:
                yield change(rev)

            if stopiteration:
                break

    return iterate()

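# Illustrative sketch (not part of this file): a caller-side view of the
# walkchangerevs() protocol, mirroring how finddate() above uses it. The
# opts dict and the prepare callback are arbitrary examples.
#
#     m = scmutil.matchall(repo)
#     touched = {}
#     def prep(ctx, fns):
#         # fns iterates the files changed by ctx that match m
#         touched[ctx.rev()] = list(fns)
#     for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
#         ui.write("%d: %d file(s)\n" % (ctx.rev(), len(touched[ctx.rev()])))
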
def _makefollowlogfilematcher(repo, files, followfirst):
    # When displaying a revision with --patch --follow FILE, we have
    # to know which file of the revision must be diffed. With
    # --follow, we want the names of the ancestors of FILE in the
    # revision, stored in "fcache". "fcache" is populated by
    # reproducing the graph traversal already done by --follow revset
    # and relating revs to file names (which is not "correct" but
    # good enough).
    fcache = {}
    fcacheready = [False]
    pctx = repo['.']

    def populate():
        for fn in files:
            fctx = pctx[fn]
            fcache.setdefault(fctx.introrev(), set()).add(fctx.path())
            for c in fctx.ancestors(followfirst=followfirst):
                fcache.setdefault(c.rev(), set()).add(c.path())

    def filematcher(rev):
        if not fcacheready[0]:
            # Lazy initialization
            fcacheready[0] = True
            populate()
        return scmutil.matchfiles(repo, fcache.get(rev, []))

    return filematcher

def _makenofollowlogfilematcher(repo, pats, opts):
    '''hook for extensions to override the filematcher for non-follow cases'''
    return None

def _makelogrevset(repo, pats, opts, revs):
    """Return (expr, filematcher) where expr is a revset string built
    from log options and file patterns or None. If --stat or --patch
    are not passed filematcher is None. Otherwise it is a callable
    taking a revision number and returning a match object filtering
    the files to be detailed when displaying the revision.
    """
    opt2revset = {
        'no_merges': ('not merge()', None),
        'only_merges': ('merge()', None),
        '_ancestors': ('ancestors(%(val)s)', None),
        '_fancestors': ('_firstancestors(%(val)s)', None),
        '_descendants': ('descendants(%(val)s)', None),
        '_fdescendants': ('_firstdescendants(%(val)s)', None),
        '_matchfiles': ('_matchfiles(%(val)s)', None),
        'date': ('date(%(val)r)', None),
        'branch': ('branch(%(val)r)', ' or '),
        '_patslog': ('filelog(%(val)r)', ' or '),
        '_patsfollow': ('follow(%(val)r)', ' or '),
        '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
        'keyword': ('keyword(%(val)r)', ' or '),
        'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
        'user': ('user(%(val)r)', ' or '),
    }

    opts = dict(opts)
    # follow or not follow?
    follow = opts.get('follow') or opts.get('follow_first')
    if opts.get('follow_first'):
        followfirst = 1
    else:
        followfirst = 0
    # --follow with FILE behavior depends on revs...
    it = iter(revs)
    startrev = next(it)
    followdescendants = startrev < next(it, startrev)

    # branch and only_branch are really aliases and must be handled at
    # the same time
    opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
    opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
    # pats/include/exclude are passed to match.match() directly in
    # _matchfiles() revset but walkchangerevs() builds its matcher with
    # scmutil.match(). The difference is input pats are globbed on
    # platforms without shell expansion (windows).
    wctx = repo[None]
    match, pats = scmutil.matchandpats(wctx, pats, opts)
    slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
                                   opts.get('removed'))
    if not slowpath:
        for f in match.files():
            if follow and f not in wctx:
                # If the file exists, it may be a directory, so let it
                # take the slow path.
                if os.path.exists(repo.wjoin(f)):
                    slowpath = True
                    continue
                else:
                    raise error.Abort(_('cannot follow file not in parent '
                                        'revision: "%s"') % f)
            filelog = repo.file(f)
            if not filelog:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise error.Abort(
                        _('cannot follow nonexistent file: "%s"') % f)
                slowpath = True

        # We decided to fall back to the slowpath because at least one
        # of the paths was not a file. Check to see if at least one of them
        # existed in history - in that case, we'll continue down the
2002 # slowpath; otherwise, we can turn off the slowpath
2002 # slowpath; otherwise, we can turn off the slowpath
2003 if slowpath:
2003 if slowpath:
2004 for path in match.files():
2004 for path in match.files():
2005 if path == '.' or path in repo.store:
2005 if path == '.' or path in repo.store:
2006 break
2006 break
2007 else:
2007 else:
2008 slowpath = False
2008 slowpath = False
2009
2009
2010 fpats = ('_patsfollow', '_patsfollowfirst')
2010 fpats = ('_patsfollow', '_patsfollowfirst')
2011 fnopats = (('_ancestors', '_fancestors'),
2011 fnopats = (('_ancestors', '_fancestors'),
2012 ('_descendants', '_fdescendants'))
2012 ('_descendants', '_fdescendants'))
2013 if slowpath:
2013 if slowpath:
2014 # See walkchangerevs() slow path.
2014 # See walkchangerevs() slow path.
2015 #
2015 #
2016 # pats/include/exclude cannot be represented as separate
2016 # pats/include/exclude cannot be represented as separate
2017 # revset expressions as their filtering logic applies at file
2017 # revset expressions as their filtering logic applies at file
2018 # level. For instance "-I a -X a" matches a revision touching
2018 # level. For instance "-I a -X a" matches a revision touching
2019 # "a" and "b" while "file(a) and not file(b)" does
2019 # "a" and "b" while "file(a) and not file(b)" does
2020 # not. Besides, filesets are evaluated against the working
2020 # not. Besides, filesets are evaluated against the working
2021 # directory.
2021 # directory.
2022 matchargs = ['r:', 'd:relpath']
2022 matchargs = ['r:', 'd:relpath']
2023 for p in pats:
2023 for p in pats:
2024 matchargs.append('p:' + p)
2024 matchargs.append('p:' + p)
2025 for p in opts.get('include', []):
2025 for p in opts.get('include', []):
2026 matchargs.append('i:' + p)
2026 matchargs.append('i:' + p)
2027 for p in opts.get('exclude', []):
2027 for p in opts.get('exclude', []):
2028 matchargs.append('x:' + p)
2028 matchargs.append('x:' + p)
2029 matchargs = ','.join(('%r' % p) for p in matchargs)
2029 matchargs = ','.join(('%r' % p) for p in matchargs)
2030 opts['_matchfiles'] = matchargs
2030 opts['_matchfiles'] = matchargs
2031 if follow:
2031 if follow:
2032 opts[fnopats[0][followfirst]] = '.'
2032 opts[fnopats[0][followfirst]] = '.'
2033 else:
2033 else:
2034 if follow:
2034 if follow:
2035 if pats:
2035 if pats:
2036 # follow() revset interprets its file argument as a
2036 # follow() revset interprets its file argument as a
2037 # manifest entry, so use match.files(), not pats.
2037 # manifest entry, so use match.files(), not pats.
2038 opts[fpats[followfirst]] = list(match.files())
2038 opts[fpats[followfirst]] = list(match.files())
2039 else:
2039 else:
2040 op = fnopats[followdescendants][followfirst]
2040 op = fnopats[followdescendants][followfirst]
2041 opts[op] = 'rev(%d)' % startrev
2041 opts[op] = 'rev(%d)' % startrev
2042 else:
2042 else:
2043 opts['_patslog'] = list(pats)
2043 opts['_patslog'] = list(pats)
2044
2044
2045 filematcher = None
2045 filematcher = None
2046 if opts.get('patch') or opts.get('stat'):
2046 if opts.get('patch') or opts.get('stat'):
2047 # When following files, track renames via a special matcher.
2047 # When following files, track renames via a special matcher.
2048 # If we're forced to take the slowpath it means we're following
2048 # If we're forced to take the slowpath it means we're following
2049 # at least one pattern/directory, so don't bother with rename tracking.
2049 # at least one pattern/directory, so don't bother with rename tracking.
2050 if follow and not match.always() and not slowpath:
2050 if follow and not match.always() and not slowpath:
2051 # _makefollowlogfilematcher expects its files argument to be
2051 # _makefollowlogfilematcher expects its files argument to be
2052 # relative to the repo root, so use match.files(), not pats.
2052 # relative to the repo root, so use match.files(), not pats.
2053 filematcher = _makefollowlogfilematcher(repo, match.files(),
2053 filematcher = _makefollowlogfilematcher(repo, match.files(),
2054 followfirst)
2054 followfirst)
2055 else:
2055 else:
2056 filematcher = _makenofollowlogfilematcher(repo, pats, opts)
2056 filematcher = _makenofollowlogfilematcher(repo, pats, opts)
2057 if filematcher is None:
2057 if filematcher is None:
2058 filematcher = lambda rev: match
2058 filematcher = lambda rev: match
2059
2059
2060 expr = []
2060 expr = []
2061 for op, val in sorted(opts.iteritems()):
2061 for op, val in sorted(opts.iteritems()):
2062 if not val:
2062 if not val:
2063 continue
2063 continue
2064 if op not in opt2revset:
2064 if op not in opt2revset:
2065 continue
2065 continue
2066 revop, andor = opt2revset[op]
2066 revop, andor = opt2revset[op]
2067 if '%(val)' not in revop:
2067 if '%(val)' not in revop:
2068 expr.append(revop)
2068 expr.append(revop)
2069 else:
2069 else:
2070 if not isinstance(val, list):
2070 if not isinstance(val, list):
2071 e = revop % {'val': val}
2071 e = revop % {'val': val}
2072 else:
2072 else:
2073 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
2073 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
2074 expr.append(e)
2074 expr.append(e)
2075
2075
2076 if expr:
2076 if expr:
2077 expr = '(' + ' and '.join(expr) + ')'
2077 expr = '(' + ' and '.join(expr) + ')'
2078 else:
2078 else:
2079 expr = None
2079 expr = None
2080 return expr, filematcher
2080 return expr, filematcher
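# For illustration only: a command line such as
#
#     hg log -u alice -k bugfix --no-merges
#
# would be translated by the loop above into roughly
#
#     (keyword('bugfix') and not merge() and user('alice'))
#
# with multi-valued options joined by the per-option operator from opt2revset
# (' or ' for user/keyword/branch, ' and ' for prune). When the slow path is
# taken, patterns and -I/-X options are instead packed into a single
# _matchfiles(...) term ('p:', 'i:', 'x:' prefixed arguments), since their
# combined semantics cannot be expressed as separate revset clauses.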
2081
2081
2082 def _logrevs(repo, opts):
2082 def _logrevs(repo, opts):
2083 # Default --rev value depends on --follow but --follow behavior
2083 # Default --rev value depends on --follow but --follow behavior
2084 # depends on revisions resolved from --rev...
2084 # depends on revisions resolved from --rev...
2085 follow = opts.get('follow') or opts.get('follow_first')
2085 follow = opts.get('follow') or opts.get('follow_first')
2086 if opts.get('rev'):
2086 if opts.get('rev'):
2087 revs = scmutil.revrange(repo, opts['rev'])
2087 revs = scmutil.revrange(repo, opts['rev'])
2088 elif follow and repo.dirstate.p1() == nullid:
2088 elif follow and repo.dirstate.p1() == nullid:
2089 revs = smartset.baseset()
2089 revs = smartset.baseset()
2090 elif follow:
2090 elif follow:
2091 revs = repo.revs('reverse(:.)')
2091 revs = repo.revs('reverse(:.)')
2092 else:
2092 else:
2093 revs = smartset.spanset(repo)
2093 revs = smartset.spanset(repo)
2094 revs.reverse()
2094 revs.reverse()
2095 return revs
2095 return revs
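# In other words (an illustrative summary of the branches above):
#
#     hg log            -> every revision, newest first (reversed spanset)
#     hg log -f         -> reverse(:.), i.e. ancestors of the working parent
#     hg log -r 2::5    -> exactly the revisions the user asked for
#
# while --follow with the working directory at the null revision yields an
# empty set.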
2096
2096
2097 def getgraphlogrevs(repo, pats, opts):
2097 def getgraphlogrevs(repo, pats, opts):
2098 """Return (revs, expr, filematcher) where revs is an iterable of
2098 """Return (revs, expr, filematcher) where revs is an iterable of
2099 revision numbers, expr is a revset string built from log options
2099 revision numbers, expr is a revset string built from log options
2100 and file patterns or None, and used to filter 'revs'. If --stat or
2100 and file patterns or None, and used to filter 'revs'. If --stat or
2101 --patch are not passed, filematcher is None. Otherwise it is a
2101 --patch are not passed, filematcher is None. Otherwise it is a
2102 callable taking a revision number and returning a match object
2102 callable taking a revision number and returning a match object
2103 filtering the files to be detailed when displaying the revision.
2103 filtering the files to be detailed when displaying the revision.
2104 """
2104 """
2105 limit = loglimit(opts)
2105 limit = loglimit(opts)
2106 revs = _logrevs(repo, opts)
2106 revs = _logrevs(repo, opts)
2107 if not revs:
2107 if not revs:
2108 return smartset.baseset(), None, None
2108 return smartset.baseset(), None, None
2109 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2109 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2110 if opts.get('rev'):
2110 if opts.get('rev'):
2111 # User-specified revs might be unsorted, but don't sort before
2111 # User-specified revs might be unsorted, but don't sort before
2112 # _makelogrevset because it might depend on the order of revs
2112 # _makelogrevset because it might depend on the order of revs
2113 if not (revs.isdescending() or revs.istopo()):
2113 if not (revs.isdescending() or revs.istopo()):
2114 revs.sort(reverse=True)
2114 revs.sort(reverse=True)
2115 if expr:
2115 if expr:
2116 matcher = revset.match(repo.ui, expr, order=revset.followorder)
2116 matcher = revset.match(repo.ui, expr, order=revset.followorder)
2117 revs = matcher(repo, revs)
2117 revs = matcher(repo, revs)
2118 if limit is not None:
2118 if limit is not None:
2119 limitedrevs = []
2119 limitedrevs = []
2120 for idx, rev in enumerate(revs):
2120 for idx, rev in enumerate(revs):
2121 if idx >= limit:
2121 if idx >= limit:
2122 break
2122 break
2123 limitedrevs.append(rev)
2123 limitedrevs.append(rev)
2124 revs = smartset.baseset(limitedrevs)
2124 revs = smartset.baseset(limitedrevs)
2125
2125
2126 return revs, expr, filematcher
2126 return revs, expr, filematcher
2127
2127
2128 def getlogrevs(repo, pats, opts):
2128 def getlogrevs(repo, pats, opts):
2129 """Return (revs, expr, filematcher) where revs is an iterable of
2129 """Return (revs, expr, filematcher) where revs is an iterable of
2130 revision numbers, expr is a revset string built from log options
2130 revision numbers, expr is a revset string built from log options
2131 and file patterns or None, and used to filter 'revs'. If --stat or
2131 and file patterns or None, and used to filter 'revs'. If --stat or
2132 --patch are not passed, filematcher is None. Otherwise it is a
2132 --patch are not passed, filematcher is None. Otherwise it is a
2133 callable taking a revision number and returning a match object
2133 callable taking a revision number and returning a match object
2134 filtering the files to be detailed when displaying the revision.
2134 filtering the files to be detailed when displaying the revision.
2135 """
2135 """
2136 limit = loglimit(opts)
2136 limit = loglimit(opts)
2137 revs = _logrevs(repo, opts)
2137 revs = _logrevs(repo, opts)
2138 if not revs:
2138 if not revs:
2139 return smartset.baseset([]), None, None
2139 return smartset.baseset([]), None, None
2140 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2140 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2141 if expr:
2141 if expr:
2142 matcher = revset.match(repo.ui, expr, order=revset.followorder)
2142 matcher = revset.match(repo.ui, expr, order=revset.followorder)
2143 revs = matcher(repo, revs)
2143 revs = matcher(repo, revs)
2144 if limit is not None:
2144 if limit is not None:
2145 limitedrevs = []
2145 limitedrevs = []
2146 for idx, r in enumerate(revs):
2146 for idx, r in enumerate(revs):
2147 if limit <= idx:
2147 if limit <= idx:
2148 break
2148 break
2149 limitedrevs.append(r)
2149 limitedrevs.append(r)
2150 revs = smartset.baseset(limitedrevs)
2150 revs = smartset.baseset(limitedrevs)
2151
2151
2152 return revs, expr, filematcher
2152 return revs, expr, filematcher
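# A minimal consumer sketch (hypothetical caller code, assuming 'ui', 'repo',
# 'pats' and 'opts' are in scope), showing how the triple returned above is
# typically wired to a changeset displayer:
#
#     revs, expr, filematcher = getlogrevs(repo, pats, opts)
#     displayer = show_changeset(ui, repo, opts, buffered=True)
#     for rev in revs:
#         ctx = repo[rev]
#         matchfn = filematcher(rev) if filematcher else None
#         displayer.show(ctx, matchfn=matchfn)
#         displayer.flush(ctx)
#     displayer.close()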
2153
2153
2154 def _graphnodeformatter(ui, displayer):
2154 def _graphnodeformatter(ui, displayer):
2155 spec = ui.config('ui', 'graphnodetemplate')
2155 spec = ui.config('ui', 'graphnodetemplate')
2156 if not spec:
2156 if not spec:
2157 return templatekw.showgraphnode # fast path for "{graphnode}"
2157 return templatekw.showgraphnode # fast path for "{graphnode}"
2158
2158
2159 spec = templater.unquotestring(spec)
2159 spec = templater.unquotestring(spec)
2160 templ = formatter.gettemplater(ui, 'graphnode', spec)
2160 templ = formatter.gettemplater(ui, 'graphnode', spec)
2161 cache = {}
2161 cache = {}
2162 if isinstance(displayer, changeset_templater):
2162 if isinstance(displayer, changeset_templater):
2163 cache = displayer.cache # reuse cache of slow templates
2163 cache = displayer.cache # reuse cache of slow templates
2164 props = templatekw.keywords.copy()
2164 props = templatekw.keywords.copy()
2165 props['templ'] = templ
2165 props['templ'] = templ
2166 props['cache'] = cache
2166 props['cache'] = cache
2167 def formatnode(repo, ctx):
2167 def formatnode(repo, ctx):
2168 props['ctx'] = ctx
2168 props['ctx'] = ctx
2169 props['repo'] = repo
2169 props['repo'] = repo
2170 props['ui'] = repo.ui
2170 props['ui'] = repo.ui
2171 props['revcache'] = {}
2171 props['revcache'] = {}
2172 return templater.stringify(templ('graphnode', **props))
2172 return templater.stringify(templ('graphnode', **props))
2173 return formatnode
2173 return formatnode
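# For example (illustrative configuration), the template consulted above can
# replace the node character of the working-directory parent while keeping
# the default symbol everywhere else:
#
#     [ui]
#     graphnodetemplate = {ifcontains(rev, revset('.'), "@", graphnode)}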
2174
2174
2175 def displaygraph(ui, repo, dag, displayer, edgefn, getrenamed=None,
2175 def displaygraph(ui, repo, dag, displayer, edgefn, getrenamed=None,
2176 filematcher=None):
2176 filematcher=None):
2177 formatnode = _graphnodeformatter(ui, displayer)
2177 formatnode = _graphnodeformatter(ui, displayer)
2178 state = graphmod.asciistate()
2178 state = graphmod.asciistate()
2179 styles = state['styles']
2179 styles = state['styles']
2180
2180
2181 # only set graph styling if HGPLAIN is not set.
2181 # only set graph styling if HGPLAIN is not set.
2182 if ui.plain('graph'):
2182 if ui.plain('graph'):
2183 # set all edge styles to |, the default pre-3.8 behaviour
2183 # set all edge styles to |, the default pre-3.8 behaviour
2184 styles.update(dict.fromkeys(styles, '|'))
2184 styles.update(dict.fromkeys(styles, '|'))
2185 else:
2185 else:
2186 edgetypes = {
2186 edgetypes = {
2187 'parent': graphmod.PARENT,
2187 'parent': graphmod.PARENT,
2188 'grandparent': graphmod.GRANDPARENT,
2188 'grandparent': graphmod.GRANDPARENT,
2189 'missing': graphmod.MISSINGPARENT
2189 'missing': graphmod.MISSINGPARENT
2190 }
2190 }
2191 for name, key in edgetypes.items():
2191 for name, key in edgetypes.items():
2192 # experimental config: experimental.graphstyle.*
2192 # experimental config: experimental.graphstyle.*
2193 styles[key] = ui.config('experimental', 'graphstyle.%s' % name,
2193 styles[key] = ui.config('experimental', 'graphstyle.%s' % name,
2194 styles[key])
2194 styles[key])
2195 if not styles[key]:
2195 if not styles[key]:
2196 styles[key] = None
2196 styles[key] = None
2197
2197
2198 # experimental config: experimental.graphshorten
2198 # experimental config: experimental.graphshorten
2199 state['graphshorten'] = ui.configbool('experimental', 'graphshorten')
2199 state['graphshorten'] = ui.configbool('experimental', 'graphshorten')
2200
2200
2201 for rev, type, ctx, parents in dag:
2201 for rev, type, ctx, parents in dag:
2202 char = formatnode(repo, ctx)
2202 char = formatnode(repo, ctx)
2203 copies = None
2203 copies = None
2204 if getrenamed and ctx.rev():
2204 if getrenamed and ctx.rev():
2205 copies = []
2205 copies = []
2206 for fn in ctx.files():
2206 for fn in ctx.files():
2207 rename = getrenamed(fn, ctx.rev())
2207 rename = getrenamed(fn, ctx.rev())
2208 if rename:
2208 if rename:
2209 copies.append((fn, rename[0]))
2209 copies.append((fn, rename[0]))
2210 revmatchfn = None
2210 revmatchfn = None
2211 if filematcher is not None:
2211 if filematcher is not None:
2212 revmatchfn = filematcher(ctx.rev())
2212 revmatchfn = filematcher(ctx.rev())
2213 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
2213 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
2214 lines = displayer.hunk.pop(rev).split('\n')
2214 lines = displayer.hunk.pop(rev).split('\n')
2215 if not lines[-1]:
2215 if not lines[-1]:
2216 del lines[-1]
2216 del lines[-1]
2217 displayer.flush(ctx)
2217 displayer.flush(ctx)
2218 edges = edgefn(type, char, lines, state, rev, parents)
2218 edges = edgefn(type, char, lines, state, rev, parents)
2219 for type, char, lines, coldata in edges:
2219 for type, char, lines, coldata in edges:
2220 graphmod.ascii(ui, state, type, char, lines, coldata)
2220 graphmod.ascii(ui, state, type, char, lines, coldata)
2221 displayer.close()
2221 displayer.close()
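# The experimental knobs read above can be set per edge type, for instance
# (illustrative hgrc snippet):
#
#     [experimental]
#     graphstyle.parent = |
#     graphstyle.grandparent = :
#     graphstyle.missing =
#     graphshorten = true
#
# An empty value turns the style off for that edge type, and HGPLAIN forces
# every edge back to the pre-3.8 '|' style.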
2222
2222
2223 def graphlog(ui, repo, pats, opts):
2223 def graphlog(ui, repo, pats, opts):
2224 # Parameters are identical to log command ones
2224 # Parameters are identical to log command ones
2225 revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
2225 revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
2226 revdag = graphmod.dagwalker(repo, revs)
2226 revdag = graphmod.dagwalker(repo, revs)
2227
2227
2228 getrenamed = None
2228 getrenamed = None
2229 if opts.get('copies'):
2229 if opts.get('copies'):
2230 endrev = None
2230 endrev = None
2231 if opts.get('rev'):
2231 if opts.get('rev'):
2232 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
2232 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
2233 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
2233 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
2234
2234
2235 ui.pager('log')
2235 ui.pager('log')
2236 displayer = show_changeset(ui, repo, opts, buffered=True)
2236 displayer = show_changeset(ui, repo, opts, buffered=True)
2237 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed,
2237 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed,
2238 filematcher)
2238 filematcher)
2239
2239
2240 def checkunsupportedgraphflags(pats, opts):
2240 def checkunsupportedgraphflags(pats, opts):
2241 for op in ["newest_first"]:
2241 for op in ["newest_first"]:
2242 if op in opts and opts[op]:
2242 if op in opts and opts[op]:
2243 raise error.Abort(_("-G/--graph option is incompatible with --%s")
2243 raise error.Abort(_("-G/--graph option is incompatible with --%s")
2244 % op.replace("_", "-"))
2244 % op.replace("_", "-"))
2245
2245
2246 def graphrevs(repo, nodes, opts):
2246 def graphrevs(repo, nodes, opts):
2247 limit = loglimit(opts)
2247 limit = loglimit(opts)
2248 nodes.reverse()
2248 nodes.reverse()
2249 if limit is not None:
2249 if limit is not None:
2250 nodes = nodes[:limit]
2250 nodes = nodes[:limit]
2251 return graphmod.nodes(repo, nodes)
2251 return graphmod.nodes(repo, nodes)
2252
2252
2253 def add(ui, repo, match, prefix, explicitonly, **opts):
2253 def add(ui, repo, match, prefix, explicitonly, **opts):
2254 join = lambda f: os.path.join(prefix, f)
2254 join = lambda f: os.path.join(prefix, f)
2255 bad = []
2255 bad = []
2256
2256
2257 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2257 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2258 names = []
2258 names = []
2259 wctx = repo[None]
2259 wctx = repo[None]
2260 cca = None
2260 cca = None
2261 abort, warn = scmutil.checkportabilityalert(ui)
2261 abort, warn = scmutil.checkportabilityalert(ui)
2262 if abort or warn:
2262 if abort or warn:
2263 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2263 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2264
2264
2265 badmatch = matchmod.badmatch(match, badfn)
2265 badmatch = matchmod.badmatch(match, badfn)
2266 dirstate = repo.dirstate
2266 dirstate = repo.dirstate
2267 # We don't want to just call wctx.walk here, since it would return a lot of
2267 # We don't want to just call wctx.walk here, since it would return a lot of
2268 # clean files, which we aren't interested in and which take time to process.
2268 # clean files, which we aren't interested in and which take time to process.
2269 for f in sorted(dirstate.walk(badmatch, sorted(wctx.substate),
2269 for f in sorted(dirstate.walk(badmatch, sorted(wctx.substate),
2270 True, False, full=False)):
2270 True, False, full=False)):
2271 exact = match.exact(f)
2271 exact = match.exact(f)
2272 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2272 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2273 if cca:
2273 if cca:
2274 cca(f)
2274 cca(f)
2275 names.append(f)
2275 names.append(f)
2276 if ui.verbose or not exact:
2276 if ui.verbose or not exact:
2277 ui.status(_('adding %s\n') % match.rel(f))
2277 ui.status(_('adding %s\n') % match.rel(f))
2278
2278
2279 for subpath in sorted(wctx.substate):
2279 for subpath in sorted(wctx.substate):
2280 sub = wctx.sub(subpath)
2280 sub = wctx.sub(subpath)
2281 try:
2281 try:
2282 submatch = matchmod.subdirmatcher(subpath, match)
2282 submatch = matchmod.subdirmatcher(subpath, match)
2283 if opts.get(r'subrepos'):
2283 if opts.get(r'subrepos'):
2284 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2284 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2285 else:
2285 else:
2286 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2286 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2287 except error.LookupError:
2287 except error.LookupError:
2288 ui.status(_("skipping missing subrepository: %s\n")
2288 ui.status(_("skipping missing subrepository: %s\n")
2289 % join(subpath))
2289 % join(subpath))
2290
2290
2291 if not opts.get(r'dry_run'):
2291 if not opts.get(r'dry_run'):
2292 rejected = wctx.add(names, prefix)
2292 rejected = wctx.add(names, prefix)
2293 bad.extend(f for f in rejected if f in match.files())
2293 bad.extend(f for f in rejected if f in match.files())
2294 return bad
2294 return bad
2295
2295
2296 def addwebdirpath(repo, serverpath, webconf):
2296 def addwebdirpath(repo, serverpath, webconf):
2297 webconf[serverpath] = repo.root
2297 webconf[serverpath] = repo.root
2298 repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))
2298 repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))
2299
2299
2300 for r in repo.revs('filelog("path:.hgsub")'):
2300 for r in repo.revs('filelog("path:.hgsub")'):
2301 ctx = repo[r]
2301 ctx = repo[r]
2302 for subpath in ctx.substate:
2302 for subpath in ctx.substate:
2303 ctx.sub(subpath).addwebdirpath(serverpath, webconf)
2303 ctx.sub(subpath).addwebdirpath(serverpath, webconf)
2304
2304
2305 def forget(ui, repo, match, prefix, explicitonly):
2305 def forget(ui, repo, match, prefix, explicitonly):
2306 join = lambda f: os.path.join(prefix, f)
2306 join = lambda f: os.path.join(prefix, f)
2307 bad = []
2307 bad = []
2308 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2308 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2309 wctx = repo[None]
2309 wctx = repo[None]
2310 forgot = []
2310 forgot = []
2311
2311
2312 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2312 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2313 forget = sorted(s.modified + s.added + s.deleted + s.clean)
2313 forget = sorted(s.modified + s.added + s.deleted + s.clean)
2314 if explicitonly:
2314 if explicitonly:
2315 forget = [f for f in forget if match.exact(f)]
2315 forget = [f for f in forget if match.exact(f)]
2316
2316
2317 for subpath in sorted(wctx.substate):
2317 for subpath in sorted(wctx.substate):
2318 sub = wctx.sub(subpath)
2318 sub = wctx.sub(subpath)
2319 try:
2319 try:
2320 submatch = matchmod.subdirmatcher(subpath, match)
2320 submatch = matchmod.subdirmatcher(subpath, match)
2321 subbad, subforgot = sub.forget(submatch, prefix)
2321 subbad, subforgot = sub.forget(submatch, prefix)
2322 bad.extend([subpath + '/' + f for f in subbad])
2322 bad.extend([subpath + '/' + f for f in subbad])
2323 forgot.extend([subpath + '/' + f for f in subforgot])
2323 forgot.extend([subpath + '/' + f for f in subforgot])
2324 except error.LookupError:
2324 except error.LookupError:
2325 ui.status(_("skipping missing subrepository: %s\n")
2325 ui.status(_("skipping missing subrepository: %s\n")
2326 % join(subpath))
2326 % join(subpath))
2327
2327
2328 if not explicitonly:
2328 if not explicitonly:
2329 for f in match.files():
2329 for f in match.files():
2330 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2330 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2331 if f not in forgot:
2331 if f not in forgot:
2332 if repo.wvfs.exists(f):
2332 if repo.wvfs.exists(f):
2333 # Don't complain if the exact case match wasn't given.
2333 # Don't complain if the exact case match wasn't given.
2334 # But don't do this until after checking 'forgot', so
2334 # But don't do this until after checking 'forgot', so
2335 # that subrepo files aren't normalized, and this op is
2335 # that subrepo files aren't normalized, and this op is
2336 # purely from data cached by the status walk above.
2336 # purely from data cached by the status walk above.
2337 if repo.dirstate.normalize(f) in repo.dirstate:
2337 if repo.dirstate.normalize(f) in repo.dirstate:
2338 continue
2338 continue
2339 ui.warn(_('not removing %s: '
2339 ui.warn(_('not removing %s: '
2340 'file is already untracked\n')
2340 'file is already untracked\n')
2341 % match.rel(f))
2341 % match.rel(f))
2342 bad.append(f)
2342 bad.append(f)
2343
2343
2344 for f in forget:
2344 for f in forget:
2345 if ui.verbose or not match.exact(f):
2345 if ui.verbose or not match.exact(f):
2346 ui.status(_('removing %s\n') % match.rel(f))
2346 ui.status(_('removing %s\n') % match.rel(f))
2347
2347
2348 rejected = wctx.forget(forget, prefix)
2348 rejected = wctx.forget(forget, prefix)
2349 bad.extend(f for f in rejected if f in match.files())
2349 bad.extend(f for f in rejected if f in match.files())
2350 forgot.extend(f for f in forget if f not in rejected)
2350 forgot.extend(f for f in forget if f not in rejected)
2351 return bad, forgot
2351 return bad, forgot
2352
2352
2353 def files(ui, ctx, m, fm, fmt, subrepos):
2353 def files(ui, ctx, m, fm, fmt, subrepos):
2354 rev = ctx.rev()
2354 rev = ctx.rev()
2355 ret = 1
2355 ret = 1
2356 ds = ctx.repo().dirstate
2356 ds = ctx.repo().dirstate
2357
2357
2358 for f in ctx.matches(m):
2358 for f in ctx.matches(m):
2359 if rev is None and ds[f] == 'r':
2359 if rev is None and ds[f] == 'r':
2360 continue
2360 continue
2361 fm.startitem()
2361 fm.startitem()
2362 if ui.verbose:
2362 if ui.verbose:
2363 fc = ctx[f]
2363 fc = ctx[f]
2364 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2364 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2365 fm.data(abspath=f)
2365 fm.data(abspath=f)
2366 fm.write('path', fmt, m.rel(f))
2366 fm.write('path', fmt, m.rel(f))
2367 ret = 0
2367 ret = 0
2368
2368
2369 for subpath in sorted(ctx.substate):
2369 for subpath in sorted(ctx.substate):
2370 submatch = matchmod.subdirmatcher(subpath, m)
2370 submatch = matchmod.subdirmatcher(subpath, m)
2371 if (subrepos or m.exact(subpath) or any(submatch.files())):
2371 if (subrepos or m.exact(subpath) or any(submatch.files())):
2372 sub = ctx.sub(subpath)
2372 sub = ctx.sub(subpath)
2373 try:
2373 try:
2374 recurse = m.exact(subpath) or subrepos
2374 recurse = m.exact(subpath) or subrepos
2375 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
2375 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
2376 ret = 0
2376 ret = 0
2377 except error.LookupError:
2377 except error.LookupError:
2378 ui.status(_("skipping missing subrepository: %s\n")
2378 ui.status(_("skipping missing subrepository: %s\n")
2379 % m.abs(subpath))
2379 % m.abs(subpath))
2380
2380
2381 return ret
2381 return ret
2382
2382
2383 def remove(ui, repo, m, prefix, after, force, subrepos, warnings=None):
2383 def remove(ui, repo, m, prefix, after, force, subrepos, warnings=None):
2384 join = lambda f: os.path.join(prefix, f)
2384 join = lambda f: os.path.join(prefix, f)
2385 ret = 0
2385 ret = 0
2386 s = repo.status(match=m, clean=True)
2386 s = repo.status(match=m, clean=True)
2387 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2387 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2388
2388
2389 wctx = repo[None]
2389 wctx = repo[None]
2390
2390
2391 if warnings is None:
2391 if warnings is None:
2392 warnings = []
2392 warnings = []
2393 warn = True
2393 warn = True
2394 else:
2394 else:
2395 warn = False
2395 warn = False
2396
2396
2397 subs = sorted(wctx.substate)
2397 subs = sorted(wctx.substate)
2398 total = len(subs)
2398 total = len(subs)
2399 count = 0
2399 count = 0
2400 for subpath in subs:
2400 for subpath in subs:
2401 count += 1
2401 count += 1
2402 submatch = matchmod.subdirmatcher(subpath, m)
2402 submatch = matchmod.subdirmatcher(subpath, m)
2403 if subrepos or m.exact(subpath) or any(submatch.files()):
2403 if subrepos or m.exact(subpath) or any(submatch.files()):
2404 ui.progress(_('searching'), count, total=total, unit=_('subrepos'))
2404 ui.progress(_('searching'), count, total=total, unit=_('subrepos'))
2405 sub = wctx.sub(subpath)
2405 sub = wctx.sub(subpath)
2406 try:
2406 try:
2407 if sub.removefiles(submatch, prefix, after, force, subrepos,
2407 if sub.removefiles(submatch, prefix, after, force, subrepos,
2408 warnings):
2408 warnings):
2409 ret = 1
2409 ret = 1
2410 except error.LookupError:
2410 except error.LookupError:
2411 warnings.append(_("skipping missing subrepository: %s\n")
2411 warnings.append(_("skipping missing subrepository: %s\n")
2412 % join(subpath))
2412 % join(subpath))
2413 ui.progress(_('searching'), None)
2413 ui.progress(_('searching'), None)
2414
2414
2415 # warn about failure to delete explicit files/dirs
2415 # warn about failure to delete explicit files/dirs
2416 deleteddirs = util.dirs(deleted)
2416 deleteddirs = util.dirs(deleted)
2417 files = m.files()
2417 files = m.files()
2418 total = len(files)
2418 total = len(files)
2419 count = 0
2419 count = 0
2420 for f in files:
2420 for f in files:
2421 def insubrepo():
2421 def insubrepo():
2422 for subpath in wctx.substate:
2422 for subpath in wctx.substate:
2423 if f.startswith(subpath + '/'):
2423 if f.startswith(subpath + '/'):
2424 return True
2424 return True
2425 return False
2425 return False
2426
2426
2427 count += 1
2427 count += 1
2428 ui.progress(_('deleting'), count, total=total, unit=_('files'))
2428 ui.progress(_('deleting'), count, total=total, unit=_('files'))
2429 isdir = f in deleteddirs or wctx.hasdir(f)
2429 isdir = f in deleteddirs or wctx.hasdir(f)
2430 if (f in repo.dirstate or isdir or f == '.'
2430 if (f in repo.dirstate or isdir or f == '.'
2431 or insubrepo() or f in subs):
2431 or insubrepo() or f in subs):
2432 continue
2432 continue
2433
2433
2434 if repo.wvfs.exists(f):
2434 if repo.wvfs.exists(f):
2435 if repo.wvfs.isdir(f):
2435 if repo.wvfs.isdir(f):
2436 warnings.append(_('not removing %s: no tracked files\n')
2436 warnings.append(_('not removing %s: no tracked files\n')
2437 % m.rel(f))
2437 % m.rel(f))
2438 else:
2438 else:
2439 warnings.append(_('not removing %s: file is untracked\n')
2439 warnings.append(_('not removing %s: file is untracked\n')
2440 % m.rel(f))
2440 % m.rel(f))
2441 # missing files will generate a warning elsewhere
2441 # missing files will generate a warning elsewhere
2442 ret = 1
2442 ret = 1
2443 ui.progress(_('deleting'), None)
2443 ui.progress(_('deleting'), None)
2444
2444
2445 if force:
2445 if force:
2446 list = modified + deleted + clean + added
2446 list = modified + deleted + clean + added
2447 elif after:
2447 elif after:
2448 list = deleted
2448 list = deleted
2449 remaining = modified + added + clean
2449 remaining = modified + added + clean
2450 total = len(remaining)
2450 total = len(remaining)
2451 count = 0
2451 count = 0
2452 for f in remaining:
2452 for f in remaining:
2453 count += 1
2453 count += 1
2454 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2454 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2455 warnings.append(_('not removing %s: file still exists\n')
2455 warnings.append(_('not removing %s: file still exists\n')
2456 % m.rel(f))
2456 % m.rel(f))
2457 ret = 1
2457 ret = 1
2458 ui.progress(_('skipping'), None)
2458 ui.progress(_('skipping'), None)
2459 else:
2459 else:
2460 list = deleted + clean
2460 list = deleted + clean
2461 total = len(modified) + len(added)
2461 total = len(modified) + len(added)
2462 count = 0
2462 count = 0
2463 for f in modified:
2463 for f in modified:
2464 count += 1
2464 count += 1
2465 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2465 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2466 warnings.append(_('not removing %s: file is modified (use -f'
2466 warnings.append(_('not removing %s: file is modified (use -f'
2467 ' to force removal)\n') % m.rel(f))
2467 ' to force removal)\n') % m.rel(f))
2468 ret = 1
2468 ret = 1
2469 for f in added:
2469 for f in added:
2470 count += 1
2470 count += 1
2471 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2471 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2472 warnings.append(_("not removing %s: file has been marked for add"
2472 warnings.append(_("not removing %s: file has been marked for add"
2473 " (use 'hg forget' to undo add)\n") % m.rel(f))
2473 " (use 'hg forget' to undo add)\n") % m.rel(f))
2474 ret = 1
2474 ret = 1
2475 ui.progress(_('skipping'), None)
2475 ui.progress(_('skipping'), None)
2476
2476
2477 list = sorted(list)
2477 list = sorted(list)
2478 total = len(list)
2478 total = len(list)
2479 count = 0
2479 count = 0
2480 for f in list:
2480 for f in list:
2481 count += 1
2481 count += 1
2482 if ui.verbose or not m.exact(f):
2482 if ui.verbose or not m.exact(f):
2483 ui.progress(_('deleting'), count, total=total, unit=_('files'))
2483 ui.progress(_('deleting'), count, total=total, unit=_('files'))
2484 ui.status(_('removing %s\n') % m.rel(f))
2484 ui.status(_('removing %s\n') % m.rel(f))
2485 ui.progress(_('deleting'), None)
2485 ui.progress(_('deleting'), None)
2486
2486
2487 with repo.wlock():
2487 with repo.wlock():
2488 if not after:
2488 if not after:
2489 for f in list:
2489 for f in list:
2490 if f in added:
2490 if f in added:
2491 continue # we never unlink added files on remove
2491 continue # we never unlink added files on remove
2492 repo.wvfs.unlinkpath(f, ignoremissing=True)
2492 repo.wvfs.unlinkpath(f, ignoremissing=True)
2493 repo[None].forget(list)
2493 repo[None].forget(list)
2494
2494
2495 if warn:
2495 if warn:
2496 for warning in warnings:
2496 for warning in warnings:
2497 ui.warn(warning)
2497 ui.warn(warning)
2498
2498
2499 return ret
2499 return ret
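# A note on the warnings protocol above (illustrative): the top-level caller
# passes warnings=None, so this call owns the list and prints it once at the
# end; recursive subrepo calls (sub.removefiles) receive the shared list and
# only append to it, so all warnings appear together after the progress
# output has finished.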
2500
2500
2501 def cat(ui, repo, ctx, matcher, prefix, **opts):
2501 def cat(ui, repo, ctx, matcher, prefix, **opts):
2502 err = 1
2502 err = 1
2503
2503
2504 def write(path):
2504 def write(path):
2505 fp = makefileobj(repo, opts.get('output'), ctx.node(),
2505 fp = makefileobj(repo, opts.get('output'), ctx.node(),
2506 pathname=os.path.join(prefix, path))
2506 pathname=os.path.join(prefix, path))
2507 data = ctx[path].data()
2507 data = ctx[path].data()
2508 if opts.get('decode'):
2508 if opts.get('decode'):
2509 data = repo.wwritedata(path, data)
2509 data = repo.wwritedata(path, data)
2510 fp.write(data)
2510 fp.write(data)
2511 fp.close()
2511 fp.close()
2512
2512
2513 # Automation often uses hg cat on single files, so special case it
2513 # Automation often uses hg cat on single files, so special case it
2514 # for performance to avoid the cost of parsing the manifest.
2514 # for performance to avoid the cost of parsing the manifest.
2515 if len(matcher.files()) == 1 and not matcher.anypats():
2515 if len(matcher.files()) == 1 and not matcher.anypats():
2516 file = matcher.files()[0]
2516 file = matcher.files()[0]
2517 mfl = repo.manifestlog
2517 mfl = repo.manifestlog
2518 mfnode = ctx.manifestnode()
2518 mfnode = ctx.manifestnode()
2519 try:
2519 try:
2520 if mfnode and mfl[mfnode].find(file)[0]:
2520 if mfnode and mfl[mfnode].find(file)[0]:
2521 write(file)
2521 write(file)
2522 return 0
2522 return 0
2523 except KeyError:
2523 except KeyError:
2524 pass
2524 pass
2525
2525
2526 for abs in ctx.walk(matcher):
2526 for abs in ctx.walk(matcher):
2527 write(abs)
2527 write(abs)
2528 err = 0
2528 err = 0
2529
2529
2530 for subpath in sorted(ctx.substate):
2530 for subpath in sorted(ctx.substate):
2531 sub = ctx.sub(subpath)
2531 sub = ctx.sub(subpath)
2532 try:
2532 try:
2533 submatch = matchmod.subdirmatcher(subpath, matcher)
2533 submatch = matchmod.subdirmatcher(subpath, matcher)
2534
2534
2535 if not sub.cat(submatch, os.path.join(prefix, sub._path),
2535 if not sub.cat(submatch, os.path.join(prefix, sub._path),
2536 **opts):
2536 **opts):
2537 err = 0
2537 err = 0
2538 except error.RepoLookupError:
2538 except error.RepoLookupError:
2539 ui.status(_("skipping missing subrepository: %s\n")
2539 ui.status(_("skipping missing subrepository: %s\n")
2540 % os.path.join(prefix, subpath))
2540 % os.path.join(prefix, subpath))
2541
2541
2542 return err
2542 return err
2543
2543
2544 def commit(ui, repo, commitfunc, pats, opts):
2544 def commit(ui, repo, commitfunc, pats, opts):
2545 '''commit the specified files or all outstanding changes'''
2545 '''commit the specified files or all outstanding changes'''
2546 date = opts.get('date')
2546 date = opts.get('date')
2547 if date:
2547 if date:
2548 opts['date'] = util.parsedate(date)
2548 opts['date'] = util.parsedate(date)
2549 message = logmessage(ui, opts)
2549 message = logmessage(ui, opts)
2550 matcher = scmutil.match(repo[None], pats, opts)
2550 matcher = scmutil.match(repo[None], pats, opts)
2551
2551
2552 # extract addremove carefully -- this function can be called from a command
2552 # extract addremove carefully -- this function can be called from a command
2553 # that doesn't support addremove
2553 # that doesn't support addremove
2554 if opts.get('addremove'):
2554 if opts.get('addremove'):
2555 if scmutil.addremove(repo, matcher, "", opts) != 0:
2555 if scmutil.addremove(repo, matcher, "", opts) != 0:
2556 raise error.Abort(
2556 raise error.Abort(
2557 _("failed to mark all new/missing files as added/removed"))
2557 _("failed to mark all new/missing files as added/removed"))
2558
2558
2559 return commitfunc(ui, repo, message, matcher, opts)
2559 return commitfunc(ui, repo, message, matcher, opts)
2560
2560
2561 def samefile(f, ctx1, ctx2):
2561 def samefile(f, ctx1, ctx2):
2562 if f in ctx1.manifest():
2562 if f in ctx1.manifest():
2563 a = ctx1.filectx(f)
2563 a = ctx1.filectx(f)
2564 if f in ctx2.manifest():
2564 if f in ctx2.manifest():
2565 b = ctx2.filectx(f)
2565 b = ctx2.filectx(f)
2566 return (not a.cmp(b)
2566 return (not a.cmp(b)
2567 and a.flags() == b.flags())
2567 and a.flags() == b.flags())
2568 else:
2568 else:
2569 return False
2569 return False
2570 else:
2570 else:
2571 return f not in ctx2.manifest()
2571 return f not in ctx2.manifest()
2572
2572
2573 def amend(ui, repo, commitfunc, old, extra, pats, opts):
2573 def amend(ui, repo, commitfunc, old, extra, pats, opts):
2574 # avoid cycle context -> subrepo -> cmdutil
2574 # avoid cycle context -> subrepo -> cmdutil
2575 from . import context
2575 from . import context
2576
2576
2577 # amend will reuse the existing user if not specified, but the obsolete
2577 # amend will reuse the existing user if not specified, but the obsolete
2578 # marker creation requires that the current user's name is specified.
2578 # marker creation requires that the current user's name is specified.
2579 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2579 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2580 ui.username() # raise exception if username not set
2580 ui.username() # raise exception if username not set
2581
2581
2582 ui.note(_('amending changeset %s\n') % old)
2582 ui.note(_('amending changeset %s\n') % old)
2583 base = old.p1()
2583 base = old.p1()
2584 createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
2584 createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
2585
2585
2586 wlock = lock = newid = None
2586 wlock = lock = newid = None
2587 try:
2587 try:
2588 wlock = repo.wlock()
2588 wlock = repo.wlock()
2589 lock = repo.lock()
2589 lock = repo.lock()
2590 with repo.transaction('amend') as tr:
2590 with repo.transaction('amend') as tr:
2591 # See if we got a message from -m or -l, if not, open the editor
2591 # See if we got a message from -m or -l, if not, open the editor
2592 # with the message of the changeset to amend
2592 # with the message of the changeset to amend
2593 message = logmessage(ui, opts)
2593 message = logmessage(ui, opts)
2594 # ensure logfile does not conflict with later enforcement of the
2594 # ensure logfile does not conflict with later enforcement of the
2595 # message. potential logfile content has been processed by
2595 # message. potential logfile content has been processed by
2596 # `logmessage` anyway.
2596 # `logmessage` anyway.
2597 opts.pop('logfile')
2597 opts.pop('logfile')
2598 # First, do a regular commit to record all changes in the working
2598 # First, do a regular commit to record all changes in the working
2599 # directory (if there are any)
2599 # directory (if there are any)
2600 ui.callhooks = False
2600 ui.callhooks = False
2601 activebookmark = repo._bookmarks.active
2601 activebookmark = repo._bookmarks.active
2602 try:
2602 try:
2603 repo._bookmarks.active = None
2603 repo._bookmarks.active = None
2604 opts['message'] = 'temporary amend commit for %s' % old
2604 opts['message'] = 'temporary amend commit for %s' % old
2605 node = commit(ui, repo, commitfunc, pats, opts)
2605 node = commit(ui, repo, commitfunc, pats, opts)
2606 finally:
2606 finally:
2607 repo._bookmarks.active = activebookmark
2607 repo._bookmarks.active = activebookmark
2608 repo._bookmarks.recordchange(tr)
2608 repo._bookmarks.recordchange(tr)
2609 ui.callhooks = True
2609 ui.callhooks = True
2610 ctx = repo[node]
2610 ctx = repo[node]
2611
2611
2612 # Participating changesets:
2612 # Participating changesets:
2613 #
2613 #
2614 # node/ctx o - new (intermediate) commit that contains changes
2614 # node/ctx o - new (intermediate) commit that contains changes
2615 # | from working dir to go into amending commit
2615 # | from working dir to go into amending commit
2616 # | (or a workingctx if there were no changes)
2616 # | (or a workingctx if there were no changes)
2617 # |
2617 # |
2618 # old o - changeset to amend
2618 # old o - changeset to amend
2619 # |
2619 # |
2620 # base o - parent of amending changeset
2620 # base o - parent of amending changeset
2621
2621
2622 # Update extra dict from amended commit (e.g. to preserve graft
2622 # Update extra dict from amended commit (e.g. to preserve graft
2623 # source)
2623 # source)
2624 extra.update(old.extra())
2624 extra.update(old.extra())
2625
2625
2626 # Also update it from the intermediate commit or from the wctx
2626 # Also update it from the intermediate commit or from the wctx
2627 extra.update(ctx.extra())
2627 extra.update(ctx.extra())
2628
2628
2629 if len(old.parents()) > 1:
2629 if len(old.parents()) > 1:
2630 # ctx.files() isn't reliable for merges, so fall back to the
2630 # ctx.files() isn't reliable for merges, so fall back to the
2631 # slower repo.status() method
2631 # slower repo.status() method
2632 files = set([fn for st in repo.status(base, old)[:3]
2632 files = set([fn for st in repo.status(base, old)[:3]
2633 for fn in st])
2633 for fn in st])
2634 else:
2634 else:
2635 files = set(old.files())
2635 files = set(old.files())
2636
2636
2637 # Second, we use either the commit we just did, or if there were no
2637 # Second, we use either the commit we just did, or if there were no
2638 # changes the parent of the working directory as the version of the
2638 # changes the parent of the working directory as the version of the
2639 # files in the final amend commit
2639 # files in the final amend commit
2640 if node:
2640 if node:
2641 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
2641 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
2642
2642
2643 user = ctx.user()
2643 user = ctx.user()
2644 date = ctx.date()
2644 date = ctx.date()
2645 # Recompute copies (avoid recording a -> b -> a)
2645 # Recompute copies (avoid recording a -> b -> a)
2646 copied = copies.pathcopies(base, ctx)
2646 copied = copies.pathcopies(base, ctx)
2647 if old.p2():
2647 if old.p2():
2648 copied.update(copies.pathcopies(old.p2(), ctx))
2648 copied.update(copies.pathcopies(old.p2(), ctx))
2649
2649
2650 # Prune files which were reverted by the updates: if old
2650 # Prune files which were reverted by the updates: if old
2651 # introduced file X and our intermediate commit, node,
2651 # introduced file X and our intermediate commit, node,
2652 # renamed that file, then those two files are the same and
2652 # renamed that file, then those two files are the same and
2653 # we can discard X from our list of files. Likewise if X
2653 # we can discard X from our list of files. Likewise if X
2654 # was deleted, it's no longer relevant
2654 # was deleted, it's no longer relevant
2655 files.update(ctx.files())
2655 files.update(ctx.files())
2656 files = [f for f in files if not samefile(f, ctx, base)]
2656 files = [f for f in files if not samefile(f, ctx, base)]
2657
2657
2658 def filectxfn(repo, ctx_, path):
2658 def filectxfn(repo, ctx_, path):
2659 try:
2659 try:
2660 fctx = ctx[path]
2660 fctx = ctx[path]
2661 flags = fctx.flags()
2661 flags = fctx.flags()
2662 mctx = context.memfilectx(repo,
2662 mctx = context.memfilectx(repo,
2663 fctx.path(), fctx.data(),
2663 fctx.path(), fctx.data(),
2664 islink='l' in flags,
2664 islink='l' in flags,
2665 isexec='x' in flags,
2665 isexec='x' in flags,
2666 copied=copied.get(path))
2666 copied=copied.get(path))
2667 return mctx
2667 return mctx
2668 except KeyError:
2668 except KeyError:
2669 return None
2669 return None
2670 else:
2670 else:
2671 ui.note(_('copying changeset %s to %s\n') % (old, base))
2671 ui.note(_('copying changeset %s to %s\n') % (old, base))
2672
2672
2673 # Use version of files as in the old cset
2673 # Use version of files as in the old cset
2674 def filectxfn(repo, ctx_, path):
2674 def filectxfn(repo, ctx_, path):
2675 try:
2675 try:
2676 return old.filectx(path)
2676 return old.filectx(path)
2677 except KeyError:
2677 except KeyError:
2678 return None
2678 return None
2679
2679
2680 user = opts.get('user') or old.user()
2680 user = opts.get('user') or old.user()
2681 date = opts.get('date') or old.date()
2681 date = opts.get('date') or old.date()
2682 editform = mergeeditform(old, 'commit.amend')
2682 editform = mergeeditform(old, 'commit.amend')
2683 editor = getcommiteditor(editform=editform, **opts)
2683 editor = getcommiteditor(editform=editform, **opts)
2684 if not message:
2684 if not message:
2685 editor = getcommiteditor(edit=True, editform=editform)
2685 editor = getcommiteditor(edit=True, editform=editform)
2686 message = old.description()
2686 message = old.description()
2687
2687
2688 pureextra = extra.copy()
2688 pureextra = extra.copy()
2689 extra['amend_source'] = old.hex()
2689 extra['amend_source'] = old.hex()
2690
2690
2691 new = context.memctx(repo,
2691 new = context.memctx(repo,
2692 parents=[base.node(), old.p2().node()],
2692 parents=[base.node(), old.p2().node()],
2693 text=message,
2693 text=message,
2694 files=files,
2694 files=files,
2695 filectxfn=filectxfn,
2695 filectxfn=filectxfn,
2696 user=user,
2696 user=user,
2697 date=date,
2697 date=date,
2698 extra=extra,
2698 extra=extra,
2699 editor=editor)
2699 editor=editor)
2700
2700
2701 newdesc = changelog.stripdesc(new.description())
2701 newdesc = changelog.stripdesc(new.description())
2702 if ((not node)
2702 if ((not node)
2703 and newdesc == old.description()
2703 and newdesc == old.description()
2704 and user == old.user()
2704 and user == old.user()
2705 and date == old.date()
2705 and date == old.date()
2706 and pureextra == old.extra()):
2706 and pureextra == old.extra()):
2707 # nothing changed. continuing here would create a new node
2707 # nothing changed. continuing here would create a new node
2708 # anyway because of the amend_source noise.
2708 # anyway because of the amend_source noise.
2709 #
2709 #
2710 # This is not what we expect from amend.
2710 # This is not what we expect from amend.
2711 return old.node()
2711 return old.node()
2712
2712
2713 ph = repo.ui.config('phases', 'new-commit', phases.draft)
2713 ph = repo.ui.config('phases', 'new-commit', phases.draft)
2714 try:
2714 try:
2715 if opts.get('secret'):
2715 if opts.get('secret'):
2716 commitphase = 'secret'
2716 commitphase = 'secret'
2717 else:
2717 else:
2718 commitphase = old.phase()
2718 commitphase = old.phase()
2719 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
2719 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
2720 newid = repo.commitctx(new)
2720 newid = repo.commitctx(new)
2721 finally:
2721 finally:
2722 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
2722 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
2723 if newid != old.node():
2723 if newid != old.node():
2724 # Reroute the working copy parent to the new changeset
2724 # Reroute the working copy parent to the new changeset
2725 repo.setparents(newid, nullid)
2725 repo.setparents(newid, nullid)
2726
2726
2727 # Move bookmarks from old parent to amend commit
2727 # Move bookmarks from old parent to amend commit
2728 bms = repo.nodebookmarks(old.node())
2728 bms = repo.nodebookmarks(old.node())
2729 if bms:
2729 if bms:
2730 marks = repo._bookmarks
2730 marks = repo._bookmarks
2731 for bm in bms:
2731 for bm in bms:
2732 ui.debug('moving bookmarks %r from %s to %s\n' %
2732 ui.debug('moving bookmarks %r from %s to %s\n' %
2733 (marks, old.hex(), hex(newid)))
2733 (marks, old.hex(), hex(newid)))
2734 marks[bm] = newid
2734 marks[bm] = newid
2735 marks.recordchange(tr)
2735 marks.recordchange(tr)
2736 # commit the whole amend process
2736 # commit the whole amend process
2737 if createmarkers:
2737 if createmarkers:
2738 # mark the new changeset as successor of the rewritten one
2738 # mark the new changeset as successor of the rewritten one
2739 new = repo[newid]
2739 new = repo[newid]
2740 obs = [(old, (new,))]
2740 obs = [(old, (new,))]
2741 if node:
2741 if node:
2742 obs.append((ctx, ()))
2742 obs.append((ctx, ()))
2743
2743
2744 obsolete.createmarkers(repo, obs)
2744 obsolete.createmarkers(repo, obs, operation='amend')
2745 if not createmarkers and newid != old.node():
2745 if not createmarkers and newid != old.node():
2746 # Strip the intermediate commit (if there was one) and the amended
2746 # Strip the intermediate commit (if there was one) and the amended
2747 # commit
2747 # commit
2748 if node:
2748 if node:
2749 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2749 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2750 ui.note(_('stripping amended changeset %s\n') % old)
2750 ui.note(_('stripping amended changeset %s\n') % old)
2751 repair.strip(ui, repo, old.node(), topic='amend-backup')
2751 repair.strip(ui, repo, old.node(), topic='amend-backup')
2752 finally:
2752 finally:
2753 lockmod.release(lock, wlock)
2753 lockmod.release(lock, wlock)
2754 return newid
2754 return newid
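# With createmarkers enabled, the marker written above now records which
# operation produced it. A hypothetical `hg debugobsolete` line for an
# amended changeset would look roughly like
#
#     <old-node> <new-node> 0 (2017-04-01 ...) {'operation': 'amend', 'user': 'alice'}
#
# so later tooling can tell amends apart from rebases and histedits, the
# other call sites touched by this change.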
2755
2755
2756 def commiteditor(repo, ctx, subs, editform=''):
2756 def commiteditor(repo, ctx, subs, editform=''):
2757 if ctx.description():
2757 if ctx.description():
2758 return ctx.description()
2758 return ctx.description()
2759 return commitforceeditor(repo, ctx, subs, editform=editform,
2759 return commitforceeditor(repo, ctx, subs, editform=editform,
2760 unchangedmessagedetection=True)
2760 unchangedmessagedetection=True)
2761
2761
2762 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2762 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2763 editform='', unchangedmessagedetection=False):
2763 editform='', unchangedmessagedetection=False):
2764 if not extramsg:
2764 if not extramsg:
2765 extramsg = _("Leave message empty to abort commit.")
2765 extramsg = _("Leave message empty to abort commit.")
2766
2766
2767 forms = [e for e in editform.split('.') if e]
2767 forms = [e for e in editform.split('.') if e]
2768 forms.insert(0, 'changeset')
2768 forms.insert(0, 'changeset')
2769 templatetext = None
2769 templatetext = None
2770 while forms:
2770 while forms:
2771 tmpl = repo.ui.config('committemplate', '.'.join(forms))
2771 tmpl = repo.ui.config('committemplate', '.'.join(forms))
2772 if tmpl:
2772 if tmpl:
2773 tmpl = templater.unquotestring(tmpl)
2773 tmpl = templater.unquotestring(tmpl)
2774 templatetext = committext = buildcommittemplate(
2774 templatetext = committext = buildcommittemplate(
2775 repo, ctx, subs, extramsg, tmpl)
2775 repo, ctx, subs, extramsg, tmpl)
2776 break
2776 break
2777 forms.pop()
2777 forms.pop()
2778 else:
2778 else:
2779 committext = buildcommittext(repo, ctx, subs, extramsg)
2779 committext = buildcommittext(repo, ctx, subs, extramsg)
2780
2780
2781 # run editor in the repository root
2781 # run editor in the repository root
2782 olddir = pycompat.getcwd()
2782 olddir = pycompat.getcwd()
2783 os.chdir(repo.root)
2783 os.chdir(repo.root)
2784
2784
2785 # make in-memory changes visible to external process
2785 # make in-memory changes visible to external process
2786 tr = repo.currenttransaction()
2786 tr = repo.currenttransaction()
2787 repo.dirstate.write(tr)
2787 repo.dirstate.write(tr)
2788 pending = tr and tr.writepending() and repo.root
2788 pending = tr and tr.writepending() and repo.root
2789
2789
2790 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2790 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2791 editform=editform, pending=pending,
2791 editform=editform, pending=pending,
2792 repopath=repo.path)
2792 repopath=repo.path)
2793 text = editortext
2793 text = editortext
2794
2794
2795 # strip away anything below this special string (used for editors that want
2795 # strip away anything below this special string (used for editors that want
2796 # to display the diff)
2796 # to display the diff)
2797 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2797 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2798 if stripbelow:
2798 if stripbelow:
2799 text = text[:stripbelow.start()]
2799 text = text[:stripbelow.start()]
2800
2800
2801 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2801 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2802 os.chdir(olddir)
2802 os.chdir(olddir)
2803
2803
2804 if finishdesc:
2804 if finishdesc:
2805 text = finishdesc(text)
2805 text = finishdesc(text)
2806 if not text.strip():
2806 if not text.strip():
2807 raise error.Abort(_("empty commit message"))
2807 raise error.Abort(_("empty commit message"))
2808 if unchangedmessagedetection and editortext == templatetext:
2808 if unchangedmessagedetection and editortext == templatetext:
2809 raise error.Abort(_("commit message unchanged"))
2809 raise error.Abort(_("commit message unchanged"))
2810
2810
2811 return text
2811 return text
2812
2812
2813 def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
2813 def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
2814 ui = repo.ui
2814 ui = repo.ui
2815 tmpl, mapfile = gettemplate(ui, tmpl, None)
2815 tmpl, mapfile = gettemplate(ui, tmpl, None)
2816
2816
2817 t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
2817 t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
2818
2818
2819 for k, v in repo.ui.configitems('committemplate'):
2819 for k, v in repo.ui.configitems('committemplate'):
2820 if k != 'changeset':
2820 if k != 'changeset':
2821 t.t.cache[k] = v
2821 t.t.cache[k] = v
2822
2822
2823 if not extramsg:
2823 if not extramsg:
2824 extramsg = '' # ensure that extramsg is string
2824 extramsg = '' # ensure that extramsg is string
2825
2825
2826 ui.pushbuffer()
2826 ui.pushbuffer()
2827 t.show(ctx, extramsg=extramsg)
2827 t.show(ctx, extramsg=extramsg)
2828 return ui.popbuffer()
2828 return ui.popbuffer()
2829
2829
2830 def hgprefix(msg):
2830 def hgprefix(msg):
2831 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2831 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2832
2832
2833 def buildcommittext(repo, ctx, subs, extramsg):
2833 def buildcommittext(repo, ctx, subs, extramsg):
2834 edittext = []
2834 edittext = []
2835 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2835 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2836 if ctx.description():
2836 if ctx.description():
2837 edittext.append(ctx.description())
2837 edittext.append(ctx.description())
2838 edittext.append("")
2838 edittext.append("")
2839 edittext.append("") # Empty line between message and comments.
2839 edittext.append("") # Empty line between message and comments.
2840 edittext.append(hgprefix(_("Enter commit message."
2840 edittext.append(hgprefix(_("Enter commit message."
2841 " Lines beginning with 'HG:' are removed.")))
2841 " Lines beginning with 'HG:' are removed.")))
2842 edittext.append(hgprefix(extramsg))
2842 edittext.append(hgprefix(extramsg))
2843 edittext.append("HG: --")
2843 edittext.append("HG: --")
2844 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2844 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2845 if ctx.p2():
2845 if ctx.p2():
2846 edittext.append(hgprefix(_("branch merge")))
2846 edittext.append(hgprefix(_("branch merge")))
2847 if ctx.branch():
2847 if ctx.branch():
2848 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2848 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2849 if bookmarks.isactivewdirparent(repo):
2849 if bookmarks.isactivewdirparent(repo):
2850 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2850 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2851 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2851 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2852 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2852 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2853 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2853 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2854 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2854 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2855 if not added and not modified and not removed:
2855 if not added and not modified and not removed:
2856 edittext.append(hgprefix(_("no files changed")))
2856 edittext.append(hgprefix(_("no files changed")))
2857 edittext.append("")
2857 edittext.append("")
2858
2858
2859 return "\n".join(edittext)
2859 return "\n".join(edittext)
2860
2860
2861 def commitstatus(repo, node, branch, bheads=None, opts=None):
2861 def commitstatus(repo, node, branch, bheads=None, opts=None):
2862 if opts is None:
2862 if opts is None:
2863 opts = {}
2863 opts = {}
2864 ctx = repo[node]
2864 ctx = repo[node]
2865 parents = ctx.parents()
2865 parents = ctx.parents()
2866
2866
2867 if (not opts.get('amend') and bheads and node not in bheads and not
2867 if (not opts.get('amend') and bheads and node not in bheads and not
2868 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2868 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2869 repo.ui.status(_('created new head\n'))
2869 repo.ui.status(_('created new head\n'))
2870 # The message is not printed for initial roots. For the other
2870 # The message is not printed for initial roots. For the other
2871 # changesets, it is printed in the following situations:
2871 # changesets, it is printed in the following situations:
2872 #
2872 #
2873 # Par column: for the 2 parents with ...
2873 # Par column: for the 2 parents with ...
2874 # N: null or no parent
2874 # N: null or no parent
2875 # B: parent is on another named branch
2875 # B: parent is on another named branch
2876 # C: parent is a regular non head changeset
2876 # C: parent is a regular non head changeset
2877 # H: parent was a branch head of the current branch
2877 # H: parent was a branch head of the current branch
2878 # Msg column: whether we print "created new head" message
2878 # Msg column: whether we print "created new head" message
2879 # In the following, it is assumed that there already exist some
2879 # In the following, it is assumed that there already exist some
2880 # initial branch heads of the current branch, otherwise nothing is
2880 # initial branch heads of the current branch, otherwise nothing is
2881 # printed anyway.
2881 # printed anyway.
2882 #
2882 #
2883 # Par Msg Comment
2883 # Par Msg Comment
2884 # N N y additional topo root
2884 # N N y additional topo root
2885 #
2885 #
2886 # B N y additional branch root
2886 # B N y additional branch root
2887 # C N y additional topo head
2887 # C N y additional topo head
2888 # H N n usual case
2888 # H N n usual case
2889 #
2889 #
2890 # B B y weird additional branch root
2890 # B B y weird additional branch root
2891 # C B y branch merge
2891 # C B y branch merge
2892 # H B n merge with named branch
2892 # H B n merge with named branch
2893 #
2893 #
2894 # C C y additional head from merge
2894 # C C y additional head from merge
2895 # C H n merge with a head
2895 # C H n merge with a head
2896 #
2896 #
2897 # H H n head merge: head count decreases
2897 # H H n head merge: head count decreases
2898
2898
2899 if not opts.get('close_branch'):
2899 if not opts.get('close_branch'):
2900 for r in parents:
2900 for r in parents:
2901 if r.closesbranch() and r.branch() == branch:
2901 if r.closesbranch() and r.branch() == branch:
2902 repo.ui.status(_('reopening closed branch head %d\n') % r)
2902 repo.ui.status(_('reopening closed branch head %d\n') % r)
2903
2903
2904 if repo.ui.debugflag:
2904 if repo.ui.debugflag:
2905 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2905 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2906 elif repo.ui.verbose:
2906 elif repo.ui.verbose:
2907 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2907 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2908
2908
2909 def postcommitstatus(repo, pats, opts):
2909 def postcommitstatus(repo, pats, opts):
2910 return repo.status(match=scmutil.match(repo[None], pats, opts))
2910 return repo.status(match=scmutil.match(repo[None], pats, opts))
2911
2911
2912 def revert(ui, repo, ctx, parents, *pats, **opts):
2912 def revert(ui, repo, ctx, parents, *pats, **opts):
2913 parent, p2 = parents
2913 parent, p2 = parents
2914 node = ctx.node()
2914 node = ctx.node()
2915
2915
2916 mf = ctx.manifest()
2916 mf = ctx.manifest()
2917 if node == p2:
2917 if node == p2:
2918 parent = p2
2918 parent = p2
2919
2919
2920 # need all matching names in dirstate and manifest of target rev,
2920 # need all matching names in dirstate and manifest of target rev,
2921 # so have to walk both. do not print errors if files exist in one
2921 # so have to walk both. do not print errors if files exist in one
2922 # but not the other. in both cases, filesets should be evaluated against
2922 # but not the other. in both cases, filesets should be evaluated against
2923 # workingctx to get consistent result (issue4497). this means 'set:**'
2923 # workingctx to get consistent result (issue4497). this means 'set:**'
2924 # cannot be used to select missing files from target rev.
2924 # cannot be used to select missing files from target rev.
2925
2925
2926 # `names` is a mapping for all elements in working copy and target revision
2926 # `names` is a mapping for all elements in working copy and target revision
2927 # The mapping is in the form:
2927 # The mapping is in the form:
2928 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2928 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2929 names = {}
2929 names = {}
2930
2930
2931 with repo.wlock():
2931 with repo.wlock():
2932 ## filling of the `names` mapping
2932 ## filling of the `names` mapping
2933 # walk dirstate to fill `names`
2933 # walk dirstate to fill `names`
2934
2934
2935 interactive = opts.get('interactive', False)
2935 interactive = opts.get('interactive', False)
2936 wctx = repo[None]
2936 wctx = repo[None]
2937 m = scmutil.match(wctx, pats, opts)
2937 m = scmutil.match(wctx, pats, opts)
2938
2938
2939 # we'll need this later
2939 # we'll need this later
2940 targetsubs = sorted(s for s in wctx.substate if m(s))
2940 targetsubs = sorted(s for s in wctx.substate if m(s))
2941
2941
2942 if not m.always():
2942 if not m.always():
2943 for abs in repo.walk(matchmod.badmatch(m, lambda x, y: False)):
2943 for abs in repo.walk(matchmod.badmatch(m, lambda x, y: False)):
2944 names[abs] = m.rel(abs), m.exact(abs)
2944 names[abs] = m.rel(abs), m.exact(abs)
2945
2945
2946 # walk target manifest to fill `names`
2946 # walk target manifest to fill `names`
2947
2947
2948 def badfn(path, msg):
2948 def badfn(path, msg):
2949 if path in names:
2949 if path in names:
2950 return
2950 return
2951 if path in ctx.substate:
2951 if path in ctx.substate:
2952 return
2952 return
2953 path_ = path + '/'
2953 path_ = path + '/'
2954 for f in names:
2954 for f in names:
2955 if f.startswith(path_):
2955 if f.startswith(path_):
2956 return
2956 return
2957 ui.warn("%s: %s\n" % (m.rel(path), msg))
2957 ui.warn("%s: %s\n" % (m.rel(path), msg))
2958
2958
2959 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2959 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2960 if abs not in names:
2960 if abs not in names:
2961 names[abs] = m.rel(abs), m.exact(abs)
2961 names[abs] = m.rel(abs), m.exact(abs)
2962
2962
2963 # Find status of all files in `names`.
2963 # Find status of all files in `names`.
2964 m = scmutil.matchfiles(repo, names)
2964 m = scmutil.matchfiles(repo, names)
2965
2965
2966 changes = repo.status(node1=node, match=m,
2966 changes = repo.status(node1=node, match=m,
2967 unknown=True, ignored=True, clean=True)
2967 unknown=True, ignored=True, clean=True)
2968 else:
2968 else:
2969 changes = repo.status(node1=node, match=m)
2969 changes = repo.status(node1=node, match=m)
2970 for kind in changes:
2970 for kind in changes:
2971 for abs in kind:
2971 for abs in kind:
2972 names[abs] = m.rel(abs), m.exact(abs)
2972 names[abs] = m.rel(abs), m.exact(abs)
2973
2973
2974 m = scmutil.matchfiles(repo, names)
2974 m = scmutil.matchfiles(repo, names)
2975
2975
2976 modified = set(changes.modified)
2976 modified = set(changes.modified)
2977 added = set(changes.added)
2977 added = set(changes.added)
2978 removed = set(changes.removed)
2978 removed = set(changes.removed)
2979 _deleted = set(changes.deleted)
2979 _deleted = set(changes.deleted)
2980 unknown = set(changes.unknown)
2980 unknown = set(changes.unknown)
2981 unknown.update(changes.ignored)
2981 unknown.update(changes.ignored)
2982 clean = set(changes.clean)
2982 clean = set(changes.clean)
2983 modadded = set()
2983 modadded = set()
2984
2984
2985 # We need to account for the state of the file in the dirstate,
2985 # We need to account for the state of the file in the dirstate,
2986 # even when we revert against something other than the parent. This will
2986 # even when we revert against something other than the parent. This will
2987 # slightly alter the behavior of revert (backing up or not, deleting
2987 # slightly alter the behavior of revert (backing up or not, deleting
2988 # or just forgetting, etc).
2988 # or just forgetting, etc).
2989 if parent == node:
2989 if parent == node:
2990 dsmodified = modified
2990 dsmodified = modified
2991 dsadded = added
2991 dsadded = added
2992 dsremoved = removed
2992 dsremoved = removed
2993 # store all local modifications, useful later for rename detection
2993 # store all local modifications, useful later for rename detection
2994 localchanges = dsmodified | dsadded
2994 localchanges = dsmodified | dsadded
2995 modified, added, removed = set(), set(), set()
2995 modified, added, removed = set(), set(), set()
2996 else:
2996 else:
2997 changes = repo.status(node1=parent, match=m)
2997 changes = repo.status(node1=parent, match=m)
2998 dsmodified = set(changes.modified)
2998 dsmodified = set(changes.modified)
2999 dsadded = set(changes.added)
2999 dsadded = set(changes.added)
3000 dsremoved = set(changes.removed)
3000 dsremoved = set(changes.removed)
3001 # store all local modifications, useful later for rename detection
3001 # store all local modifications, useful later for rename detection
3002 localchanges = dsmodified | dsadded
3002 localchanges = dsmodified | dsadded
3003
3003
3004 # only take removes between wc and target into account
3004 # only take removes between wc and target into account
3005 clean |= dsremoved - removed
3005 clean |= dsremoved - removed
3006 dsremoved &= removed
3006 dsremoved &= removed
3007 # distinguish between dirstate removes and the others
3007 # distinguish between dirstate removes and the others
3008 removed -= dsremoved
3008 removed -= dsremoved
3009
3009
3010 modadded = added & dsmodified
3010 modadded = added & dsmodified
3011 added -= modadded
3011 added -= modadded
3012
3012
3013 # tell the newly modified files apart.
3013 # tell the newly modified files apart.
3014 dsmodified &= modified
3014 dsmodified &= modified
3015 dsmodified |= modified & dsadded # dirstate added may need backup
3015 dsmodified |= modified & dsadded # dirstate added may need backup
3016 modified -= dsmodified
3016 modified -= dsmodified
3017
3017
3018 # We need to wait for some post-processing to update this set
3018 # We need to wait for some post-processing to update this set
3019 # before making the distinction. The dirstate will be used for
3019 # before making the distinction. The dirstate will be used for
3020 # that purpose.
3020 # that purpose.
3021 dsadded = added
3021 dsadded = added
3022
3022
3023 # in case of merge, files that are actually added can be reported as
3023 # in case of merge, files that are actually added can be reported as
3024 # modified, we need to post process the result
3024 # modified, we need to post process the result
3025 if p2 != nullid:
3025 if p2 != nullid:
3026 mergeadd = set(dsmodified)
3026 mergeadd = set(dsmodified)
3027 for path in dsmodified:
3027 for path in dsmodified:
3028 if path in mf:
3028 if path in mf:
3029 mergeadd.remove(path)
3029 mergeadd.remove(path)
3030 dsadded |= mergeadd
3030 dsadded |= mergeadd
3031 dsmodified -= mergeadd
3031 dsmodified -= mergeadd
3032
3032
3033 # if f is a rename, update `names` to also revert the source
3033 # if f is a rename, update `names` to also revert the source
3034 cwd = repo.getcwd()
3034 cwd = repo.getcwd()
3035 for f in localchanges:
3035 for f in localchanges:
3036 src = repo.dirstate.copied(f)
3036 src = repo.dirstate.copied(f)
3037 # XXX should we check for rename down to target node?
3037 # XXX should we check for rename down to target node?
3038 if src and src not in names and repo.dirstate[src] == 'r':
3038 if src and src not in names and repo.dirstate[src] == 'r':
3039 dsremoved.add(src)
3039 dsremoved.add(src)
3040 names[src] = (repo.pathto(src, cwd), True)
3040 names[src] = (repo.pathto(src, cwd), True)
3041
3041
3042 # determine the exact nature of the deleted files
3042 # determine the exact nature of the deleted files
3043 deladded = set(_deleted)
3043 deladded = set(_deleted)
3044 for path in _deleted:
3044 for path in _deleted:
3045 if path in mf:
3045 if path in mf:
3046 deladded.remove(path)
3046 deladded.remove(path)
3047 deleted = _deleted - deladded
3047 deleted = _deleted - deladded
3048
3048
3049 # distinguish between files to forget and the others
3049 # distinguish between files to forget and the others
3050 added = set()
3050 added = set()
3051 for abs in dsadded:
3051 for abs in dsadded:
3052 if repo.dirstate[abs] != 'a':
3052 if repo.dirstate[abs] != 'a':
3053 added.add(abs)
3053 added.add(abs)
3054 dsadded -= added
3054 dsadded -= added
3055
3055
3056 for abs in deladded:
3056 for abs in deladded:
3057 if repo.dirstate[abs] == 'a':
3057 if repo.dirstate[abs] == 'a':
3058 dsadded.add(abs)
3058 dsadded.add(abs)
3059 deladded -= dsadded
3059 deladded -= dsadded
3060
3060
3061 # For files marked as removed, we check if an unknown file is present at
3061 # For files marked as removed, we check if an unknown file is present at
3062 # the same path. If such a file exists, it may need to be backed up.
3062 # the same path. If such a file exists, it may need to be backed up.
3063 # Making the distinction at this stage keeps the backup
3063 # Making the distinction at this stage keeps the backup
3064 # logic simpler.
3064 # logic simpler.
3065 removunk = set()
3065 removunk = set()
3066 for abs in removed:
3066 for abs in removed:
3067 target = repo.wjoin(abs)
3067 target = repo.wjoin(abs)
3068 if os.path.lexists(target):
3068 if os.path.lexists(target):
3069 removunk.add(abs)
3069 removunk.add(abs)
3070 removed -= removunk
3070 removed -= removunk
3071
3071
3072 dsremovunk = set()
3072 dsremovunk = set()
3073 for abs in dsremoved:
3073 for abs in dsremoved:
3074 target = repo.wjoin(abs)
3074 target = repo.wjoin(abs)
3075 if os.path.lexists(target):
3075 if os.path.lexists(target):
3076 dsremovunk.add(abs)
3076 dsremovunk.add(abs)
3077 dsremoved -= dsremovunk
3077 dsremoved -= dsremovunk
3078
3078
3079 # actions to be actually performed by revert
3079 # actions to be actually performed by revert
3080 # (<list of files>, <message>) tuple
3080 # (<list of files>, <message>) tuple
3081 actions = {'revert': ([], _('reverting %s\n')),
3081 actions = {'revert': ([], _('reverting %s\n')),
3082 'add': ([], _('adding %s\n')),
3082 'add': ([], _('adding %s\n')),
3083 'remove': ([], _('removing %s\n')),
3083 'remove': ([], _('removing %s\n')),
3084 'drop': ([], _('removing %s\n')),
3084 'drop': ([], _('removing %s\n')),
3085 'forget': ([], _('forgetting %s\n')),
3085 'forget': ([], _('forgetting %s\n')),
3086 'undelete': ([], _('undeleting %s\n')),
3086 'undelete': ([], _('undeleting %s\n')),
3087 'noop': (None, _('no changes needed to %s\n')),
3087 'noop': (None, _('no changes needed to %s\n')),
3088 'unknown': (None, _('file not managed: %s\n')),
3088 'unknown': (None, _('file not managed: %s\n')),
3089 }
3089 }
3090
3090
3091 # "constant" that convey the backup strategy.
3091 # "constant" that convey the backup strategy.
3092 # All set to `discard` if `no-backup` is set do avoid checking
3092 # All set to `discard` if `no-backup` is set do avoid checking
3093 # no_backup lower in the code.
3093 # no_backup lower in the code.
3094 # These values are ordered for comparison purposes
3094 # These values are ordered for comparison purposes
3095 backupinteractive = 3 # do backup if interactively modified
3095 backupinteractive = 3 # do backup if interactively modified
3096 backup = 2 # unconditionally do backup
3096 backup = 2 # unconditionally do backup
3097 check = 1 # check if the existing file differs from target
3097 check = 1 # check if the existing file differs from target
3098 discard = 0 # never do backup
3098 discard = 0 # never do backup
3099 if opts.get('no_backup'):
3099 if opts.get('no_backup'):
3100 backupinteractive = backup = check = discard
3100 backupinteractive = backup = check = discard
3101 if interactive:
3101 if interactive:
3102 dsmodifiedbackup = backupinteractive
3102 dsmodifiedbackup = backupinteractive
3103 else:
3103 else:
3104 dsmodifiedbackup = backup
3104 dsmodifiedbackup = backup
3105 tobackup = set()
3105 tobackup = set()
3106
3106
3107 backupanddel = actions['remove']
3107 backupanddel = actions['remove']
3108 if not opts.get('no_backup'):
3108 if not opts.get('no_backup'):
3109 backupanddel = actions['drop']
3109 backupanddel = actions['drop']
3110
3110
3111 disptable = (
3111 disptable = (
3112 # dispatch table:
3112 # dispatch table:
3113 # file state
3113 # file state
3114 # action
3114 # action
3115 # make backup
3115 # make backup
3116
3116
3117 ## Sets that will result in changes to files on disk
3117 ## Sets that will result in changes to files on disk
3118 # Modified compared to target, no local change
3118 # Modified compared to target, no local change
3119 (modified, actions['revert'], discard),
3119 (modified, actions['revert'], discard),
3120 # Modified compared to target, but local file is deleted
3120 # Modified compared to target, but local file is deleted
3121 (deleted, actions['revert'], discard),
3121 (deleted, actions['revert'], discard),
3122 # Modified compared to target, local change
3122 # Modified compared to target, local change
3123 (dsmodified, actions['revert'], dsmodifiedbackup),
3123 (dsmodified, actions['revert'], dsmodifiedbackup),
3124 # Added since target
3124 # Added since target
3125 (added, actions['remove'], discard),
3125 (added, actions['remove'], discard),
3126 # Added in working directory
3126 # Added in working directory
3127 (dsadded, actions['forget'], discard),
3127 (dsadded, actions['forget'], discard),
3128 # Added since target, have local modification
3128 # Added since target, have local modification
3129 (modadded, backupanddel, backup),
3129 (modadded, backupanddel, backup),
3130 # Added since target but file is missing in working directory
3130 # Added since target but file is missing in working directory
3131 (deladded, actions['drop'], discard),
3131 (deladded, actions['drop'], discard),
3132 # Removed since target, before working copy parent
3132 # Removed since target, before working copy parent
3133 (removed, actions['add'], discard),
3133 (removed, actions['add'], discard),
3134 # Same as `removed` but an unknown file exists at the same path
3134 # Same as `removed` but an unknown file exists at the same path
3135 (removunk, actions['add'], check),
3135 (removunk, actions['add'], check),
3136 # Removed since target, marked as such in working copy parent
3136 # Removed since target, marked as such in working copy parent
3137 (dsremoved, actions['undelete'], discard),
3137 (dsremoved, actions['undelete'], discard),
3138 # Same as `dsremoved` but an unknown file exists at the same path
3138 # Same as `dsremoved` but an unknown file exists at the same path
3139 (dsremovunk, actions['undelete'], check),
3139 (dsremovunk, actions['undelete'], check),
3140 ## the following sets do not result in any file changes
3140 ## the following sets do not result in any file changes
3141 # File with no modification
3141 # File with no modification
3142 (clean, actions['noop'], discard),
3142 (clean, actions['noop'], discard),
3143 # Existing file, not tracked anywhere
3143 # Existing file, not tracked anywhere
3144 (unknown, actions['unknown'], discard),
3144 (unknown, actions['unknown'], discard),
3145 )
3145 )
3146
3146
3147 for abs, (rel, exact) in sorted(names.items()):
3147 for abs, (rel, exact) in sorted(names.items()):
3148 # target file to be touched on disk (relative to cwd)
3148 # target file to be touched on disk (relative to cwd)
3149 target = repo.wjoin(abs)
3149 target = repo.wjoin(abs)
3150 # search the entry in the dispatch table.
3150 # search the entry in the dispatch table.
3151 # if the file is in any of these sets, it was touched in the working
3151 # if the file is in any of these sets, it was touched in the working
3152 # directory parent and we are sure it needs to be reverted.
3152 # directory parent and we are sure it needs to be reverted.
3153 for table, (xlist, msg), dobackup in disptable:
3153 for table, (xlist, msg), dobackup in disptable:
3154 if abs not in table:
3154 if abs not in table:
3155 continue
3155 continue
3156 if xlist is not None:
3156 if xlist is not None:
3157 xlist.append(abs)
3157 xlist.append(abs)
3158 if dobackup:
3158 if dobackup:
3159 # If in interactive mode, don't automatically create
3159 # If in interactive mode, don't automatically create
3160 # .orig files (issue4793)
3160 # .orig files (issue4793)
3161 if dobackup == backupinteractive:
3161 if dobackup == backupinteractive:
3162 tobackup.add(abs)
3162 tobackup.add(abs)
3163 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
3163 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
3164 bakname = scmutil.origpath(ui, repo, rel)
3164 bakname = scmutil.origpath(ui, repo, rel)
3165 ui.note(_('saving current version of %s as %s\n') %
3165 ui.note(_('saving current version of %s as %s\n') %
3166 (rel, bakname))
3166 (rel, bakname))
3167 if not opts.get('dry_run'):
3167 if not opts.get('dry_run'):
3168 if interactive:
3168 if interactive:
3169 util.copyfile(target, bakname)
3169 util.copyfile(target, bakname)
3170 else:
3170 else:
3171 util.rename(target, bakname)
3171 util.rename(target, bakname)
3172 if ui.verbose or not exact:
3172 if ui.verbose or not exact:
3173 if not isinstance(msg, basestring):
3173 if not isinstance(msg, basestring):
3174 msg = msg(abs)
3174 msg = msg(abs)
3175 ui.status(msg % rel)
3175 ui.status(msg % rel)
3176 elif exact:
3176 elif exact:
3177 ui.warn(msg % rel)
3177 ui.warn(msg % rel)
3178 break
3178 break
3179
3179
3180 if not opts.get('dry_run'):
3180 if not opts.get('dry_run'):
3181 needdata = ('revert', 'add', 'undelete')
3181 needdata = ('revert', 'add', 'undelete')
3182 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
3182 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
3183 _performrevert(repo, parents, ctx, actions, interactive, tobackup)
3183 _performrevert(repo, parents, ctx, actions, interactive, tobackup)
3184
3184
3185 if targetsubs:
3185 if targetsubs:
3186 # Revert the subrepos on the revert list
3186 # Revert the subrepos on the revert list
3187 for sub in targetsubs:
3187 for sub in targetsubs:
3188 try:
3188 try:
3189 wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
3189 wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
3190 except KeyError:
3190 except KeyError:
3191 raise error.Abort("subrepository '%s' does not exist in %s!"
3191 raise error.Abort("subrepository '%s' does not exist in %s!"
3192 % (sub, short(ctx.node())))
3192 % (sub, short(ctx.node())))
3193
3193
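# Illustration only (not part of cmdutil): a simplified sketch of the
# first-match dispatch-table pattern used by revert() above. The filesets,
# action names and backup codes below are made-up stand-ins, not the real
# sets or (xlist, msg) action tuples.
def dispatch(path, disptable):
    # walk the ordered table; the first fileset containing the path decides
    # both the action and the backup policy, exactly as in the loop above
    for fileset, action, dobackup in disptable:
        if path in fileset:
            return action, dobackup
    return None

exampletable = (
    ({'a.txt'}, 'revert', 0),    # modified vs. target, no local change
    ({'b.txt'}, 'forget', 0),    # added in the working directory
    ({'c.txt'}, 'undelete', 1),  # removed, but an unknown file sits there
)
# dispatch('c.txt', exampletable) == ('undelete', 1)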
3194 def _revertprefetch(repo, ctx, *files):
3194 def _revertprefetch(repo, ctx, *files):
3195 """Let extension changing the storage layer prefetch content"""
3195 """Let extension changing the storage layer prefetch content"""
3196 pass
3196 pass
3197
3197
3198 def _performrevert(repo, parents, ctx, actions, interactive=False,
3198 def _performrevert(repo, parents, ctx, actions, interactive=False,
3199 tobackup=None):
3199 tobackup=None):
3200 """function that actually perform all the actions computed for revert
3200 """function that actually perform all the actions computed for revert
3201
3201
3202 This is an independent function to let extensions plug in and react to
3202 This is an independent function to let extensions plug in and react to
3203 the imminent revert.
3203 the imminent revert.
3204
3204
3205 Make sure you have the working directory locked when calling this function.
3205 Make sure you have the working directory locked when calling this function.
3206 """
3206 """
3207 parent, p2 = parents
3207 parent, p2 = parents
3208 node = ctx.node()
3208 node = ctx.node()
3209 excluded_files = []
3209 excluded_files = []
3210 matcher_opts = {"exclude": excluded_files}
3210 matcher_opts = {"exclude": excluded_files}
3211
3211
3212 def checkout(f):
3212 def checkout(f):
3213 fc = ctx[f]
3213 fc = ctx[f]
3214 repo.wwrite(f, fc.data(), fc.flags())
3214 repo.wwrite(f, fc.data(), fc.flags())
3215
3215
3216 def doremove(f):
3216 def doremove(f):
3217 try:
3217 try:
3218 repo.wvfs.unlinkpath(f)
3218 repo.wvfs.unlinkpath(f)
3219 except OSError:
3219 except OSError:
3220 pass
3220 pass
3221 repo.dirstate.remove(f)
3221 repo.dirstate.remove(f)
3222
3222
3223 audit_path = pathutil.pathauditor(repo.root)
3223 audit_path = pathutil.pathauditor(repo.root)
3224 for f in actions['forget'][0]:
3224 for f in actions['forget'][0]:
3225 if interactive:
3225 if interactive:
3226 choice = repo.ui.promptchoice(
3226 choice = repo.ui.promptchoice(
3227 _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
3227 _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
3228 if choice == 0:
3228 if choice == 0:
3229 repo.dirstate.drop(f)
3229 repo.dirstate.drop(f)
3230 else:
3230 else:
3231 excluded_files.append(repo.wjoin(f))
3231 excluded_files.append(repo.wjoin(f))
3232 else:
3232 else:
3233 repo.dirstate.drop(f)
3233 repo.dirstate.drop(f)
3234 for f in actions['remove'][0]:
3234 for f in actions['remove'][0]:
3235 audit_path(f)
3235 audit_path(f)
3236 if interactive:
3236 if interactive:
3237 choice = repo.ui.promptchoice(
3237 choice = repo.ui.promptchoice(
3238 _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
3238 _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
3239 if choice == 0:
3239 if choice == 0:
3240 doremove(f)
3240 doremove(f)
3241 else:
3241 else:
3242 excluded_files.append(repo.wjoin(f))
3242 excluded_files.append(repo.wjoin(f))
3243 else:
3243 else:
3244 doremove(f)
3244 doremove(f)
3245 for f in actions['drop'][0]:
3245 for f in actions['drop'][0]:
3246 audit_path(f)
3246 audit_path(f)
3247 repo.dirstate.remove(f)
3247 repo.dirstate.remove(f)
3248
3248
3249 normal = None
3249 normal = None
3250 if node == parent:
3250 if node == parent:
3251 # We're reverting to our parent. If possible, we'd like status
3251 # We're reverting to our parent. If possible, we'd like status
3252 # to report the file as clean. We have to use normallookup for
3252 # to report the file as clean. We have to use normallookup for
3253 # merges to avoid losing information about merged/dirty files.
3253 # merges to avoid losing information about merged/dirty files.
3254 if p2 != nullid:
3254 if p2 != nullid:
3255 normal = repo.dirstate.normallookup
3255 normal = repo.dirstate.normallookup
3256 else:
3256 else:
3257 normal = repo.dirstate.normal
3257 normal = repo.dirstate.normal
3258
3258
3259 newlyaddedandmodifiedfiles = set()
3259 newlyaddedandmodifiedfiles = set()
3260 if interactive:
3260 if interactive:
3261 # Prompt the user for changes to revert
3261 # Prompt the user for changes to revert
3262 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
3262 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
3263 m = scmutil.match(ctx, torevert, matcher_opts)
3263 m = scmutil.match(ctx, torevert, matcher_opts)
3264 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
3264 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
3265 diffopts.nodates = True
3265 diffopts.nodates = True
3266 diffopts.git = True
3266 diffopts.git = True
3267 operation = 'discard'
3267 operation = 'discard'
3268 reversehunks = True
3268 reversehunks = True
3269 if node != parent:
3269 if node != parent:
3270 operation = 'revert'
3270 operation = 'revert'
3271 reversehunks = repo.ui.configbool('experimental',
3271 reversehunks = repo.ui.configbool('experimental',
3272 'revertalternateinteractivemode',
3272 'revertalternateinteractivemode',
3273 True)
3273 True)
3274 if reversehunks:
3274 if reversehunks:
3275 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3275 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3276 else:
3276 else:
3277 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3277 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3278 originalchunks = patch.parsepatch(diff)
3278 originalchunks = patch.parsepatch(diff)
3279
3279
3280 try:
3280 try:
3281
3281
3282 chunks, opts = recordfilter(repo.ui, originalchunks,
3282 chunks, opts = recordfilter(repo.ui, originalchunks,
3283 operation=operation)
3283 operation=operation)
3284 if reversehunks:
3284 if reversehunks:
3285 chunks = patch.reversehunks(chunks)
3285 chunks = patch.reversehunks(chunks)
3286
3286
3287 except patch.PatchError as err:
3287 except patch.PatchError as err:
3288 raise error.Abort(_('error parsing patch: %s') % err)
3288 raise error.Abort(_('error parsing patch: %s') % err)
3289
3289
3290 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3290 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3291 if tobackup is None:
3291 if tobackup is None:
3292 tobackup = set()
3292 tobackup = set()
3293 # Apply changes
3293 # Apply changes
3294 fp = stringio()
3294 fp = stringio()
3295 for c in chunks:
3295 for c in chunks:
3296 # Create a backup file only if this hunk should be backed up
3296 # Create a backup file only if this hunk should be backed up
3297 if ishunk(c) and c.header.filename() in tobackup:
3297 if ishunk(c) and c.header.filename() in tobackup:
3298 abs = c.header.filename()
3298 abs = c.header.filename()
3299 target = repo.wjoin(abs)
3299 target = repo.wjoin(abs)
3300 bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
3300 bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
3301 util.copyfile(target, bakname)
3301 util.copyfile(target, bakname)
3302 tobackup.remove(abs)
3302 tobackup.remove(abs)
3303 c.write(fp)
3303 c.write(fp)
3304 dopatch = fp.tell()
3304 dopatch = fp.tell()
3305 fp.seek(0)
3305 fp.seek(0)
3306 if dopatch:
3306 if dopatch:
3307 try:
3307 try:
3308 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3308 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3309 except patch.PatchError as err:
3309 except patch.PatchError as err:
3310 raise error.Abort(str(err))
3310 raise error.Abort(str(err))
3311 del fp
3311 del fp
3312 else:
3312 else:
3313 for f in actions['revert'][0]:
3313 for f in actions['revert'][0]:
3314 checkout(f)
3314 checkout(f)
3315 if normal:
3315 if normal:
3316 normal(f)
3316 normal(f)
3317
3317
3318 for f in actions['add'][0]:
3318 for f in actions['add'][0]:
3319 # Don't checkout modified files, they are already created by the diff
3319 # Don't checkout modified files, they are already created by the diff
3320 if f not in newlyaddedandmodifiedfiles:
3320 if f not in newlyaddedandmodifiedfiles:
3321 checkout(f)
3321 checkout(f)
3322 repo.dirstate.add(f)
3322 repo.dirstate.add(f)
3323
3323
3324 normal = repo.dirstate.normallookup
3324 normal = repo.dirstate.normallookup
3325 if node == parent and p2 == nullid:
3325 if node == parent and p2 == nullid:
3326 normal = repo.dirstate.normal
3326 normal = repo.dirstate.normal
3327 for f in actions['undelete'][0]:
3327 for f in actions['undelete'][0]:
3328 checkout(f)
3328 checkout(f)
3329 normal(f)
3329 normal(f)
3330
3330
3331 copied = copies.pathcopies(repo[parent], ctx)
3331 copied = copies.pathcopies(repo[parent], ctx)
3332
3332
3333 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3333 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3334 if f in copied:
3334 if f in copied:
3335 repo.dirstate.copy(copied[f], f)
3335 repo.dirstate.copy(copied[f], f)
3336
3336
3337 def command(table):
3337 def command(table):
3338 """Returns a function object to be used as a decorator for making commands.
3338 """Returns a function object to be used as a decorator for making commands.
3339
3339
3340 This function receives a command table as its argument. The table should
3340 This function receives a command table as its argument. The table should
3341 be a dict.
3341 be a dict.
3342
3342
3343 The returned function can be used as a decorator for adding commands
3343 The returned function can be used as a decorator for adding commands
3344 to that command table. This function accepts multiple arguments to define
3344 to that command table. This function accepts multiple arguments to define
3345 a command.
3345 a command.
3346
3346
3347 The first argument is the command name.
3347 The first argument is the command name.
3348
3348
3349 The options argument is an iterable of tuples defining command arguments.
3349 The options argument is an iterable of tuples defining command arguments.
3350 See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.
3350 See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.
3351
3351
3352 The synopsis argument defines a short, one line summary of how to use the
3352 The synopsis argument defines a short, one line summary of how to use the
3353 command. This shows up in the help output.
3353 command. This shows up in the help output.
3354
3354
3355 The norepo argument defines whether the command does not require a
3355 The norepo argument defines whether the command does not require a
3356 local repository. Most commands operate against a repository, thus the
3356 local repository. Most commands operate against a repository, thus the
3357 default is False.
3357 default is False.
3358
3358
3359 The optionalrepo argument defines whether the command optionally requires
3359 The optionalrepo argument defines whether the command optionally requires
3360 a local repository.
3360 a local repository.
3361
3361
3362 The inferrepo argument defines whether to try to find a repository from the
3362 The inferrepo argument defines whether to try to find a repository from the
3363 command line arguments. If True, arguments will be examined for potential
3363 command line arguments. If True, arguments will be examined for potential
3364 repository locations. See ``findrepo()``. If a repository is found, it
3364 repository locations. See ``findrepo()``. If a repository is found, it
3365 will be used.
3365 will be used.
3366 """
3366 """
3367 def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
3367 def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
3368 inferrepo=False):
3368 inferrepo=False):
3369 def decorator(func):
3369 def decorator(func):
3370 func.norepo = norepo
3370 func.norepo = norepo
3371 func.optionalrepo = optionalrepo
3371 func.optionalrepo = optionalrepo
3372 func.inferrepo = inferrepo
3372 func.inferrepo = inferrepo
3373 if synopsis:
3373 if synopsis:
3374 table[name] = func, list(options), synopsis
3374 table[name] = func, list(options), synopsis
3375 else:
3375 else:
3376 table[name] = func, list(options)
3376 table[name] = func, list(options)
3377 return func
3377 return func
3378 return decorator
3378 return decorator
3379
3379
3380 return cmd
3380 return cmd
3381
3381
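# Illustration only (not part of cmdutil): how an extension typically uses the
# decorator factory above. The command name, option and body are hypothetical;
# only the registration pattern matters.
from mercurial import cmdutil

cmdtable = {}
command = cmdutil.command(cmdtable)

@command('hello', [('g', 'greeting', 'hello', 'greeting to print')],
         'hg hello [-g TEXT]', norepo=True)
def hello(ui, **opts):
    # hg fills in the default for --greeting via fancyopts
    ui.write('%s, world\n' % opts.get('greeting', 'hello'))

# After decoration, cmdtable['hello'] == (hello, [option tuple], 'hg hello [-g TEXT]')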
3382 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3382 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3383 # commands.outgoing. "missing" is "missing" of the result of
3383 # commands.outgoing. "missing" is "missing" of the result of
3384 # "findcommonoutgoing()"
3384 # "findcommonoutgoing()"
3385 outgoinghooks = util.hooks()
3385 outgoinghooks = util.hooks()
3386
3386
3387 # a list of (ui, repo) functions called by commands.summary
3387 # a list of (ui, repo) functions called by commands.summary
3388 summaryhooks = util.hooks()
3388 summaryhooks = util.hooks()
3389
3389
3390 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3390 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3391 #
3391 #
3392 # functions should return tuple of booleans below, if 'changes' is None:
3392 # functions should return tuple of booleans below, if 'changes' is None:
3393 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3393 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3394 #
3394 #
3395 # otherwise, 'changes' is a tuple of tuples below:
3395 # otherwise, 'changes' is a tuple of tuples below:
3396 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3396 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3397 # - (desturl, destbranch, destpeer, outgoing)
3397 # - (desturl, destbranch, destpeer, outgoing)
3398 summaryremotehooks = util.hooks()
3398 summaryremotehooks = util.hooks()
3399
3399
3400 # A list of state files kept by multistep operations like graft.
3400 # A list of state files kept by multistep operations like graft.
3401 # Since graft cannot be aborted, it is considered 'clearable' by update.
3401 # Since graft cannot be aborted, it is considered 'clearable' by update.
3402 # note: bisect is intentionally excluded
3402 # note: bisect is intentionally excluded
3403 # (state file, clearable, allowcommit, error, hint)
3403 # (state file, clearable, allowcommit, error, hint)
3404 unfinishedstates = [
3404 unfinishedstates = [
3405 ('graftstate', True, False, _('graft in progress'),
3405 ('graftstate', True, False, _('graft in progress'),
3406 _("use 'hg graft --continue' or 'hg update' to abort")),
3406 _("use 'hg graft --continue' or 'hg update' to abort")),
3407 ('updatestate', True, False, _('last update was interrupted'),
3407 ('updatestate', True, False, _('last update was interrupted'),
3408 _("use 'hg update' to get a consistent checkout"))
3408 _("use 'hg update' to get a consistent checkout"))
3409 ]
3409 ]
3410
3410
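# Illustration only (not part of cmdutil): extensions that drive their own
# multistep operations can register a state file in the same tuple format.
# The extension name and messages below are hypothetical.
unfinishedstates.append(
    ('fooextstate', False, False, _('foo operation in progress'),
     _("use 'hg foo --continue' or 'hg foo --abort'")))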
3411 def checkunfinished(repo, commit=False):
3411 def checkunfinished(repo, commit=False):
3412 '''Look for an unfinished multistep operation, like graft, and abort
3412 '''Look for an unfinished multistep operation, like graft, and abort
3413 if found. It's probably good to check this right before
3413 if found. It's probably good to check this right before
3414 bailifchanged().
3414 bailifchanged().
3415 '''
3415 '''
3416 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3416 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3417 if commit and allowcommit:
3417 if commit and allowcommit:
3418 continue
3418 continue
3419 if repo.vfs.exists(f):
3419 if repo.vfs.exists(f):
3420 raise error.Abort(msg, hint=hint)
3420 raise error.Abort(msg, hint=hint)
3421
3421
3422 def clearunfinished(repo):
3422 def clearunfinished(repo):
3423 '''Check for unfinished operations (as above), and clear the ones
3423 '''Check for unfinished operations (as above), and clear the ones
3424 that are clearable.
3424 that are clearable.
3425 '''
3425 '''
3426 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3426 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3427 if not clearable and repo.vfs.exists(f):
3427 if not clearable and repo.vfs.exists(f):
3428 raise error.Abort(msg, hint=hint)
3428 raise error.Abort(msg, hint=hint)
3429 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3429 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3430 if clearable and repo.vfs.exists(f):
3430 if clearable and repo.vfs.exists(f):
3431 util.unlink(repo.vfs.join(f))
3431 util.unlink(repo.vfs.join(f))
3432
3432
3433 afterresolvedstates = [
3433 afterresolvedstates = [
3434 ('graftstate',
3434 ('graftstate',
3435 _('hg graft --continue')),
3435 _('hg graft --continue')),
3436 ]
3436 ]
3437
3437
3438 def howtocontinue(repo):
3438 def howtocontinue(repo):
3439 '''Check for an unfinished operation and return the command to finish
3439 '''Check for an unfinished operation and return the command to finish
3440 it.
3440 it.
3441
3441
3442 afterresolvedstates tuples define a .hg/{file} and the corresponding
3442 afterresolvedstates tuples define a .hg/{file} and the corresponding
3443 command needed to finish it.
3443 command needed to finish it.
3444
3444
3445 Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
3445 Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
3446 a boolean.
3446 a boolean.
3447 '''
3447 '''
3448 contmsg = _("continue: %s")
3448 contmsg = _("continue: %s")
3449 for f, msg in afterresolvedstates:
3449 for f, msg in afterresolvedstates:
3450 if repo.vfs.exists(f):
3450 if repo.vfs.exists(f):
3451 return contmsg % msg, True
3451 return contmsg % msg, True
3452 workingctx = repo[None]
3452 workingctx = repo[None]
3453 dirty = any(repo.status()) or any(workingctx.sub(s).dirty()
3453 dirty = any(repo.status()) or any(workingctx.sub(s).dirty()
3454 for s in workingctx.substate)
3454 for s in workingctx.substate)
3455 if dirty:
3455 if dirty:
3456 return contmsg % _("hg commit"), False
3456 return contmsg % _("hg commit"), False
3457 return None, None
3457 return None, None
3458
3458
3459 def checkafterresolved(repo):
3459 def checkafterresolved(repo):
3460 '''Inform the user about the next action after completing hg resolve
3460 '''Inform the user about the next action after completing hg resolve
3461
3461
3462 If there's a matching afterresolvedstates, howtocontinue will yield
3462 If there's a matching afterresolvedstates, howtocontinue will yield
3463 repo.ui.warn as the reporter.
3463 repo.ui.warn as the reporter.
3464
3464
3465 Otherwise, it will yield repo.ui.note.
3465 Otherwise, it will yield repo.ui.note.
3466 '''
3466 '''
3467 msg, warning = howtocontinue(repo)
3467 msg, warning = howtocontinue(repo)
3468 if msg is not None:
3468 if msg is not None:
3469 if warning:
3469 if warning:
3470 repo.ui.warn("%s\n" % msg)
3470 repo.ui.warn("%s\n" % msg)
3471 else:
3471 else:
3472 repo.ui.note("%s\n" % msg)
3472 repo.ui.note("%s\n" % msg)
3473
3473
3474 def wrongtooltocontinue(repo, task):
3474 def wrongtooltocontinue(repo, task):
3475 '''Raise an abort suggesting how to properly continue if there is an
3475 '''Raise an abort suggesting how to properly continue if there is an
3476 active task.
3476 active task.
3477
3477
3478 Uses howtocontinue() to find the active task.
3478 Uses howtocontinue() to find the active task.
3479
3479
3480 If there's no task (repo.ui.note for 'hg commit'), it does not offer
3480 If there's no task (repo.ui.note for 'hg commit'), it does not offer
3481 a hint.
3481 a hint.
3482 '''
3482 '''
3483 after = howtocontinue(repo)
3483 after = howtocontinue(repo)
3484 hint = None
3484 hint = None
3485 if after[1]:
3485 if after[1]:
3486 hint = after[0]
3486 hint = after[0]
3487 raise error.Abort(_('no %s in progress') % task, hint=hint)
3487 raise error.Abort(_('no %s in progress') % task, hint=hint)
@@ -1,1284 +1,1287
1 # obsolete.py - obsolete markers handling
1 # obsolete.py - obsolete markers handling
2 #
2 #
3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
4 # Logilab SA <contact@logilab.fr>
4 # Logilab SA <contact@logilab.fr>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 """Obsolete marker handling
9 """Obsolete marker handling
10
10
11 An obsolete marker maps an old changeset to a list of new
11 An obsolete marker maps an old changeset to a list of new
12 changesets. If the list of new changesets is empty, the old changeset
12 changesets. If the list of new changesets is empty, the old changeset
13 is said to be "killed". Otherwise, the old changeset is being
13 is said to be "killed". Otherwise, the old changeset is being
14 "replaced" by the new changesets.
14 "replaced" by the new changesets.
15
15
16 Obsolete markers can be used to record and distribute changeset graph
16 Obsolete markers can be used to record and distribute changeset graph
17 transformations performed by history rewrite operations, and help
17 transformations performed by history rewrite operations, and help
18 building new tools to reconcile conflicting rewrite actions. To
18 building new tools to reconcile conflicting rewrite actions. To
19 facilitate conflict resolution, markers include various annotations
19 facilitate conflict resolution, markers include various annotations
20 besides old and new changeset identifiers, such as creation date or
20 besides old and new changeset identifiers, such as creation date or
21 author name.
21 author name.
22
22
23 The old obsoleted changeset is called a "precursor" and possible
23 The old obsoleted changeset is called a "precursor" and possible
24 replacements are called "successors". Markers that used changeset X as
24 replacements are called "successors". Markers that used changeset X as
25 a precursor are called "successor markers of X" because they hold
25 a precursor are called "successor markers of X" because they hold
26 information about the successors of X. Markers that use changeset Y as
26 information about the successors of X. Markers that use changeset Y as
27 a successor are called "precursor markers of Y" because they hold
27 a successor are called "precursor markers of Y" because they hold
28 information about the precursors of Y.
28 information about the precursors of Y.
29
29
30 Examples:
30 Examples:
31
31
32 - When changeset A is replaced by changeset A', one marker is stored:
32 - When changeset A is replaced by changeset A', one marker is stored:
33
33
34 (A, (A',))
34 (A, (A',))
35
35
36 - When changesets A and B are folded into a new changeset C, two markers are
36 - When changesets A and B are folded into a new changeset C, two markers are
37 stored:
37 stored:
38
38
39 (A, (C,)) and (B, (C,))
39 (A, (C,)) and (B, (C,))
40
40
41 - When changeset A is simply "pruned" from the graph, a marker is created:
41 - When changeset A is simply "pruned" from the graph, a marker is created:
42
42
43 (A, ())
43 (A, ())
44
44
45 - When changeset A is split into B and C, a single marker is used:
45 - When changeset A is split into B and C, a single marker is used:
46
46
47 (A, (B, C))
47 (A, (B, C))
48
48
49 We use a single marker to distinguish the "split" case from the "divergence"
49 We use a single marker to distinguish the "split" case from the "divergence"
50 case. If two independent operations rewrite the same changeset A into A' and
50 case. If two independent operations rewrite the same changeset A into A' and
51 A'', we have an error case: divergent rewriting. We can detect it because
51 A'', we have an error case: divergent rewriting. We can detect it because
52 two markers will be created independently:
52 two markers will be created independently:
53
53
54 (A, (B,)) and (A, (C,))
54 (A, (B,)) and (A, (C,))
55
55
56 Format
56 Format
57 ------
57 ------
58
58
59 Markers are stored in an append-only file stored in
59 Markers are stored in an append-only file stored in
60 '.hg/store/obsstore'.
60 '.hg/store/obsstore'.
61
61
62 The file starts with a version header:
62 The file starts with a version header:
63
63
64 - 1 unsigned byte: version number, starting at zero.
64 - 1 unsigned byte: version number, starting at zero.
65
65
66 The header is followed by the markers. The marker format depends on the version.
66 The header is followed by the markers. The marker format depends on the version.
67 See the comment associated with each format for details.
67 See the comment associated with each format for details.
68
68
69 """
69 """
70 from __future__ import absolute_import
70 from __future__ import absolute_import
71
71
72 import errno
72 import errno
73 import struct
73 import struct
74
74
75 from .i18n import _
75 from .i18n import _
76 from . import (
76 from . import (
77 error,
77 error,
78 node,
78 node,
79 parsers,
79 parsers,
80 phases,
80 phases,
81 util,
81 util,
82 )
82 )
83
83
84 _pack = struct.pack
84 _pack = struct.pack
85 _unpack = struct.unpack
85 _unpack = struct.unpack
86 _calcsize = struct.calcsize
86 _calcsize = struct.calcsize
87 propertycache = util.propertycache
87 propertycache = util.propertycache
88
88
89 # the obsolete feature is not mature enough to be enabled by default.
89 # the obsolete feature is not mature enough to be enabled by default.
90 # you have to rely on a third party extension to enable this.
90 # you have to rely on a third party extension to enable this.
91 _enabled = False
91 _enabled = False
92
92
93 # Options for obsolescence
93 # Options for obsolescence
94 createmarkersopt = 'createmarkers'
94 createmarkersopt = 'createmarkers'
95 allowunstableopt = 'allowunstable'
95 allowunstableopt = 'allowunstable'
96 exchangeopt = 'exchange'
96 exchangeopt = 'exchange'
97
97
98 ### obsolescence marker flag
98 ### obsolescence marker flag
99
99
100 ## bumpedfix flag
100 ## bumpedfix flag
101 #
101 #
102 # When a changeset A' succeeds a changeset A which became public, we call A'
102 # When a changeset A' succeeds a changeset A which became public, we call A'
103 # "bumped" because it's a successor of a public changeset
103 # "bumped" because it's a successor of a public changeset
104 #
104 #
105 # o A' (bumped)
105 # o A' (bumped)
106 # |`:
106 # |`:
107 # | o A
107 # | o A
108 # |/
108 # |/
109 # o Z
109 # o Z
110 #
110 #
111 # The way to solve this situation is to create a new changeset Ad as a child
111 # The way to solve this situation is to create a new changeset Ad as a child
112 # of A. This changeset has the same content as A'. So the diff from A to A'
112 # of A. This changeset has the same content as A'. So the diff from A to A'
113 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
113 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
114 #
114 #
115 # o Ad
115 # o Ad
116 # |`:
116 # |`:
117 # | x A'
117 # | x A'
118 # |'|
118 # |'|
119 # o | A
119 # o | A
120 # |/
120 # |/
121 # o Z
121 # o Z
122 #
122 #
123 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
123 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
124 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
124 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
125 # This flag means that the successors express the changes between the public and
125 # This flag means that the successors express the changes between the public and
126 # bumped version and fix the situation, breaking the transitivity of
126 # bumped version and fix the situation, breaking the transitivity of
127 # "bumped" here.
127 # "bumped" here.
128 bumpedfix = 1
128 bumpedfix = 1
129 usingsha256 = 2
129 usingsha256 = 2
130
130
131 ## Parsing and writing of version "0"
131 ## Parsing and writing of version "0"
132 #
132 #
133 # The header is followed by the markers. Each marker is made of:
133 # The header is followed by the markers. Each marker is made of:
134 #
134 #
135 # - 1 uint8 : number of new changesets "N", can be zero.
135 # - 1 uint8 : number of new changesets "N", can be zero.
136 #
136 #
137 # - 1 uint32: metadata size "M" in bytes.
137 # - 1 uint32: metadata size "M" in bytes.
138 #
138 #
139 # - 1 byte: a bit field. It is reserved for flags used in common
139 # - 1 byte: a bit field. It is reserved for flags used in common
140 # obsolete marker operations, to avoid repeated decoding of metadata
140 # obsolete marker operations, to avoid repeated decoding of metadata
141 # entries.
141 # entries.
142 #
142 #
143 # - 20 bytes: obsoleted changeset identifier.
143 # - 20 bytes: obsoleted changeset identifier.
144 #
144 #
145 # - N*20 bytes: new changesets identifiers.
145 # - N*20 bytes: new changesets identifiers.
146 #
146 #
147 # - M bytes: metadata as a sequence of nul-terminated strings. Each
147 # - M bytes: metadata as a sequence of nul-terminated strings. Each
148 # string contains a key and a value, separated by a colon ':', without
148 # string contains a key and a value, separated by a colon ':', without
149 # additional encoding. Keys cannot contain '\0' or ':' and values
149 # additional encoding. Keys cannot contain '\0' or ':' and values
150 # cannot contain '\0'.
150 # cannot contain '\0'.
151 _fm0version = 0
151 _fm0version = 0
152 _fm0fixed = '>BIB20s'
152 _fm0fixed = '>BIB20s'
153 _fm0node = '20s'
153 _fm0node = '20s'
154 _fm0fsize = _calcsize(_fm0fixed)
154 _fm0fsize = _calcsize(_fm0fixed)
155 _fm0fnodesize = _calcsize(_fm0node)
155 _fm0fnodesize = _calcsize(_fm0node)
156
156
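# Editor's note: the small helper below is an illustrative sketch added for
# this document, not part of the original obsolete.py. It shows how the
# version-0 fixed part documented above maps onto the '>BIB20s' layout, using
# a fake 20-byte precursor id and the struct helpers used in this module.
def _examplefm0fixedpart():
    fakeprec = '\x11' * 20          # stand-in for a real changeset id
    numsuc, mdsize, flags = 1, 0, 0
    raw = _pack(_fm0fixed, numsuc, mdsize, flags, fakeprec)
    assert len(raw) == _fm0fsize    # 1 + 4 + 1 + 20 = 26 bytes
    return _unpack(_fm0fixed, raw)  # -> (1, 0, 0, fakeprec)
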
def _fm0readmarkers(data, off):
    # Loop on markers
    l = len(data)
    while off + _fm0fsize <= l:
        # read fixed part
        cur = data[off:off + _fm0fsize]
        off += _fm0fsize
        numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
        # read replacement
        sucs = ()
        if numsuc:
            s = (_fm0fnodesize * numsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fm0node * numsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise error.Abort(_('parsing obsolete marker: metadata is too '
                                'short, %d bytes expected, got %d')
                              % (mdsize, len(metadata)))
        off += mdsize
        metadata = _fm0decodemeta(metadata)
        try:
            when, offset = metadata.pop('date', '0 0').split(' ')
            date = float(when), int(offset)
        except ValueError:
            date = (0., 0)
        parents = None
        if 'p2' in metadata:
            parents = (metadata.pop('p1', None), metadata.pop('p2', None))
        elif 'p1' in metadata:
            parents = (metadata.pop('p1', None),)
        elif 'p0' in metadata:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid drop the data.
                parents = None

        metadata = tuple(sorted(metadata.iteritems()))

        yield (pre, sucs, flags, metadata, date, parents)

def _fm0encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    if flags & usingsha256:
        raise error.Abort(_('cannot handle sha256 with old obsstore format'))
    metadata = dict(metadata)
    time, tz = date
    metadata['date'] = '%r %i' % (time, tz)
    if parents is not None:
        if not parents:
            # mark that we explicitly recorded no parents
            metadata['p0'] = ''
        for i, p in enumerate(parents, 1):
            metadata['p%i' % i] = node.hex(p)
    metadata = _fm0encodemeta(metadata)
    numsuc = len(sucs)
    format = _fm0fixed + (_fm0node * numsuc)
    data = [numsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

def _fm0encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata key")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata value")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def _fm0decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':')
            d[key] = value
    return d

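# Editor's note: illustrative sketch, not part of the original obsolete.py.
# It demonstrates the fm0 metadata encoding implemented just above:
# nul-separated 'key:value' strings that round-trip through the two helpers.
def _examplefm0meta():
    meta = {'user': 'alice', 'operation': 'amend'}
    blob = _fm0encodemeta(meta)      # 'operation:amend\x00user:alice'
    assert _fm0decodemeta(blob) == meta
    return blob
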
## Parsing and writing of version "1"
#
# The header is followed by the markers. Each marker is made of:
#
# - uint32: total size of the marker (including this field)
#
# - float64: date in seconds since epoch
#
# - int16: timezone offset in minutes
#
# - uint16: a bit field. It is reserved for flags used in common
#   obsolete marker operations, to avoid repeated decoding of metadata
#   entries.
#
# - uint8: number of successors "N", can be zero.
#
# - uint8: number of parents "P", can be zero.
#
#     0: parents data stored but no parent,
#     1: one parent stored,
#     2: two parents stored,
#     3: no parent data stored
#
# - uint8: number of metadata entries M
#
# - 20 or 32 bytes: precursor changeset identifier.
#
# - N*(20 or 32) bytes: successors changesets identifiers.
#
# - P*(20 or 32) bytes: parents of the precursors changesets.
#
# - M*(uint8, uint8): size of all metadata entries (key and value)
#
# - remaining bytes: the metadata, each (key, value) pair after the other.
_fm1version = 1
_fm1fixed = '>IdhHBBB20s'
_fm1nodesha1 = '20s'
_fm1nodesha256 = '32s'
_fm1nodesha1size = _calcsize(_fm1nodesha1)
_fm1nodesha256size = _calcsize(_fm1nodesha256)
_fm1fsize = _calcsize(_fm1fixed)
_fm1parentnone = 3
_fm1parentshift = 14
_fm1parentmask = (_fm1parentnone << _fm1parentshift)
_fm1metapair = 'BB'
_fm1metapairsize = _calcsize('BB')

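# Editor's note: illustrative sketch, not part of the original obsolete.py.
# The fm1 fixed part '>IdhHBBB20s' is 4 + 8 + 2 + 2 + 1 + 1 + 1 + 20 = 39
# bytes; each metadata entry then adds two uint8 length bytes plus its payload.
def _examplefm1sizes():
    assert _fm1fsize == 39
    assert _fm1metapairsize == 2
    # size of a marker with one sha1 successor, no parents and a single
    # ('user', 'alice') metadata pair:
    return (_fm1fsize + _fm1nodesha1size + _fm1metapairsize
            + len('user') + len('alice'))
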
def _fm1purereadmarkers(data, off):
    # make some global constants local for performance
    noneflag = _fm1parentnone
    sha2flag = usingsha256
    sha1size = _fm1nodesha1size
    sha2size = _fm1nodesha256size
    sha1fmt = _fm1nodesha1
    sha2fmt = _fm1nodesha256
    metasize = _fm1metapairsize
    metafmt = _fm1metapair
    fsize = _fm1fsize
    unpack = _unpack

    # Loop on markers
    stop = len(data) - _fm1fsize
    ufixed = struct.Struct(_fm1fixed).unpack

    while off <= stop:
        # read fixed part
        o1 = off + fsize
        t, secs, tz, flags, numsuc, numpar, nummeta, prec = ufixed(data[off:o1])

        if flags & sha2flag:
            # FIXME: prec was read as a SHA1, needs to be amended

            # read 0 or more successors
            if numsuc == 1:
                o2 = o1 + sha2size
                sucs = (data[o1:o2],)
            else:
                o2 = o1 + sha2size * numsuc
                sucs = unpack(sha2fmt * numsuc, data[o1:o2])

            # read parents
            if numpar == noneflag:
                o3 = o2
                parents = None
            elif numpar == 1:
                o3 = o2 + sha2size
                parents = (data[o2:o3],)
            else:
                o3 = o2 + sha2size * numpar
                parents = unpack(sha2fmt * numpar, data[o2:o3])
        else:
            # read 0 or more successors
            if numsuc == 1:
                o2 = o1 + sha1size
                sucs = (data[o1:o2],)
            else:
                o2 = o1 + sha1size * numsuc
                sucs = unpack(sha1fmt * numsuc, data[o1:o2])

            # read parents
            if numpar == noneflag:
                o3 = o2
                parents = None
            elif numpar == 1:
                o3 = o2 + sha1size
                parents = (data[o2:o3],)
            else:
                o3 = o2 + sha1size * numpar
                parents = unpack(sha1fmt * numpar, data[o2:o3])

        # read metadata
        off = o3 + metasize * nummeta
        metapairsize = unpack('>' + (metafmt * nummeta), data[o3:off])
        metadata = []
        for idx in xrange(0, len(metapairsize), 2):
            o1 = off + metapairsize[idx]
            o2 = o1 + metapairsize[idx + 1]
            metadata.append((data[off:o1], data[o1:o2]))
            off = o2

        yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)

def _fm1encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    # determine node size
    _fm1node = _fm1nodesha1
    if flags & usingsha256:
        _fm1node = _fm1nodesha256
    numsuc = len(sucs)
    numextranodes = numsuc
    if parents is None:
        numpar = _fm1parentnone
    else:
        numpar = len(parents)
        numextranodes += numpar
    formatnodes = _fm1node * numextranodes
    formatmeta = _fm1metapair * len(metadata)
    format = _fm1fixed + formatnodes + formatmeta
    # tz is stored in minutes so we divide by 60
    tz = date[1]//60
    data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
    data.extend(sucs)
    if parents is not None:
        data.extend(parents)
    totalsize = _calcsize(format)
    for key, value in metadata:
        lk = len(key)
        lv = len(value)
        data.append(lk)
        data.append(lv)
        totalsize += lk + lv
    data[0] = totalsize
    data = [_pack(format, *data)]
    for key, value in metadata:
        data.append(key)
        data.append(value)
    return ''.join(data)

def _fm1readmarkers(data, off):
    native = getattr(parsers, 'fm1readmarkers', None)
    if not native:
        return _fm1purereadmarkers(data, off)
    stop = len(data) - _fm1fsize
    return native(data, off, stop)

# mapping to read/write various marker formats
# <version> -> (decoder, encoder)
formats = {_fm0version: (_fm0readmarkers, _fm0encodeonemarker),
           _fm1version: (_fm1readmarkers, _fm1encodeonemarker)}

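# Editor's note: illustrative sketch, not part of the original obsolete.py.
# It exercises the dispatch table above: encode one marker with the version-1
# encoder and read it back with the matching decoder (the node ids and the
# 'operation' value are made up for the example).
def _exampleformatroundtrip():
    fakemarker = ('\x11' * 20,              # precursor
                  ('\x22' * 20,),           # one successor
                  0,                        # flags
                  (('operation', 'amend'),),
                  (0.0, 0),                 # date
                  None)                     # parents not recorded
    decoder, encoder = formats[_fm1version]
    raw = encoder(fakemarker)
    return list(decoder(raw, 0))            # -> [fakemarker]
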
@util.nogc
def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion not in formats:
        raise error.Abort(_('parsing obsolete marker: unknown version %r')
                          % diskversion)
    return diskversion, formats[diskversion][0](data, off)

def encodemarkers(markers, addheader=False, version=_fm0version):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    encodeone = formats[version][1]
    if addheader:
        yield _pack('>B', version)
    for marker in markers:
        yield encodeone(marker)


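# Editor's note: illustrative sketch, not part of the original obsolete.py.
# encodemarkers() is a generator; joining its output with addheader=True gives
# bytes that _readmarkers() can parse back (the marker tuple here is made up).
def _exampleencodemarkers():
    fakemarker = ('\x11' * 20, (), 0, (), (0.0, 0), None)
    data = ''.join(encodemarkers([fakemarker], addheader=True,
                                 version=_fm0version))
    version, markers = _readmarkers(data)
    return version, list(markers)           # -> (0, [one decoded marker])
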
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in a later version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changesets node identifiers"""
        return self._data[1]

    def parentnodes(self):
        """Parents of the precursors (None if not recorded)"""
        return self._data[5]

    def metadata(self):
        """Decoded metadata dictionary"""
        return dict(self._data[3])

    def date(self):
        """Creation date as (unixtime, offset)"""
        return self._data[4]

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

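# Editor's note: illustrative sketch, not part of the original obsolete.py.
# The marker wrapper only stores the repo argument, so None is enough to show
# how the raw tuple fields are exposed (the data below is made up).
def _examplemarkerwrapper():
    data = ('\x11' * 20, ('\x22' * 20,), 0,
            (('operation', 'histedit'),), (0.0, 0), None)
    m = marker(None, data)
    return m.precnode(), m.succnodes(), m.metadata().get('operation')
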
@util.nogc
def _addsuccessors(successors, markers):
    for mark in markers:
        successors.setdefault(mark[0], set()).add(mark)

@util.nogc
def _addprecursors(precursors, markers):
    for mark in markers:
        for suc in mark[1]:
            precursors.setdefault(suc, set()).add(mark)

@util.nogc
def _addchildren(children, markers):
    for mark in markers:
        parents = mark[5]
        if parents is not None:
            for p in parents:
                children.setdefault(p, set()).add(mark)

def _checkinvalidmarkers(markers):
    """search for markers with invalid data and raise an error if needed

    Exists as a separate function to allow the evolve extension a more
    subtle handling.
    """
    for mark in markers:
        if node.nullid in mark[1]:
            raise error.Abort(_('bad obsolescence marker detected: '
                                'invalid successors nullid'))

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with three mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    - children[x] -> set(markers on precursors edges of children(x))
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec: nodeid, precursor changesets
    # succs: tuple of nodeid, successor changesets (0-N length)
    # flag: integer, flag field carrying modifier for the markers (see doc)
    # meta: binary blob, encoded metadata dictionary
    # date: (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of precursors
    #          None is used when no data has been recorded

    def __init__(self, svfs, defaultformat=_fm1version, readonly=False):
        # caches for various obsolescence related cache
        self.caches = {}
        self.svfs = svfs
        self._version = defaultformat
        self._readonly = readonly

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        if not self._cached('_all'):
            try:
                return self.svfs.stat('obsstore').st_size > 1
            except OSError as inst:
                if inst.errno != errno.ENOENT:
                    raise
                # just build an empty _all list if no obsstore exists, which
                # avoids further stat() syscalls
                pass
        return bool(self._all)

    __bool__ = __nonzero__

    @property
    def readonly(self):
        """True if marker creation is disabled

        Remove me in the future when obsolete marker is always on."""
        return self._readonly

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers, you want to use the
        `createmarkers` function in this module instead.

        return True if a new marker has been added, False if the marker
        already existed (no op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))

        metadata = tuple(sorted(metadata.iteritems()))

        marker = (str(prec), tuple(succs), int(flag), metadata, date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if self._readonly:
            raise error.Abort(_('creating obsolete markers is not enabled on '
                                'this repo'))
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.svfs('obsstore', 'ab')
            try:
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in encodemarkers(new, offset == 0, self._version):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._addmarkers(new)
            # new markers *may* have changed several sets. invalidate the cache.
            self.caches.clear()
        # records the number of new markers for the transaction hooks
        previous = int(transaction.hookargs.get('new_obsmarkers', '0'))
        transaction.hookargs['new_obsmarkers'] = str(previous + len(new))
        return len(new)

    def mergemarkers(self, transaction, data):
        """merge a binary stream of markers inside the obsstore

        Returns the number of new markers added."""
        version, markers = _readmarkers(data)
        return self.add(transaction, markers)

    @propertycache
    def _all(self):
        data = self.svfs.tryread('obsstore')
        if not data:
            return []
        self._version, markers = _readmarkers(data)
        markers = list(markers)
        _checkinvalidmarkers(markers)
        return markers

    @propertycache
    def successors(self):
        successors = {}
        _addsuccessors(successors, self._all)
        return successors

    @propertycache
    def precursors(self):
        precursors = {}
        _addprecursors(precursors, self._all)
        return precursors

    @propertycache
    def children(self):
        children = {}
        _addchildren(children, self._all)
        return children

    def _cached(self, attr):
        return attr in self.__dict__

    def _addmarkers(self, markers):
        markers = list(markers) # to allow repeated iteration
        self._all.extend(markers)
        if self._cached('successors'):
            _addsuccessors(self.successors, markers)
        if self._cached('precursors'):
            _addprecursors(self.precursors, markers)
        if self._cached('children'):
            _addchildren(self.children, markers)
        _checkinvalidmarkers(markers)

    def relevantmarkers(self, nodes):
        """return a set of all obsolescence markers relevant to a set of nodes.

        "relevant" to a set of nodes means:

        - markers that use one of these changesets as successor
        - prune markers of direct children of these changesets
        - recursive application of the two rules on precursors of these markers

        It is a set so you cannot rely on order."""

        pendingnodes = set(nodes)
        seenmarkers = set()
        seennodes = set(pendingnodes)
        precursorsmarkers = self.precursors
        children = self.children
        while pendingnodes:
            direct = set()
            for current in pendingnodes:
                direct.update(precursorsmarkers.get(current, ()))
                pruned = [m for m in children.get(current, ()) if not m[1]]
                direct.update(pruned)
            direct -= seenmarkers
            pendingnodes = set([m[0] for m in direct])
            seenmarkers |= direct
            pendingnodes -= seennodes
            seennodes |= pendingnodes
        return seenmarkers

def commonversion(versions):
    """Return the newest version listed in both versions and our local formats.

    Returns None if no common version exists.
    """
    versions.sort(reverse=True)
    # search for highest version known on both sides
    for v in versions:
        if v in formats:
            return v
    return None

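# Editor's note: illustrative sketch, not part of the original obsolete.py.
# commonversion() picks the highest marker format supported by both sides of
# an exchange; 42 stands in for a format only the remote side would know.
def _examplecommonversion():
    assert commonversion([_fm1version, _fm0version]) == _fm1version
    assert commonversion([_fm0version]) == _fm0version
    assert commonversion([42]) is None
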
# arbitrarily picked to fit into the 8K limit from the HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in markers:
        nextdata = _fm0encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fm0version)] + part)
        keys['dump%i' % idx] = util.b85encode(data)
    return keys

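# Editor's note: illustrative sketch, not part of the original obsolete.py.
# It shows the chunking behaviour of _pushkeyescape(): markers are encoded
# with the version-0 encoder and split into base85 'dumpN' payloads that stay
# under _maxpayload bytes (the prune marker below is made up).
def _examplepushkeyescape():
    fakemarker = ('\x11' * 20, (), 0, (), (0.0, 0), None)
    keys = _pushkeyescape([fakemarker] * 300)   # large enough for several parts
    return sorted(keys)                         # -> ['dump0', 'dump1', ...]
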
def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(sorted(repo.obsstore))

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = util.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def getmarkers(repo, nodes=None):
    """returns markers known in a repository

    If <nodes> is specified, only markers "relevant" to those nodes are
    returned"""
    if nodes is None:
        rawmarkers = repo.obsstore
    else:
        rawmarkers = repo.obsstore.relevantmarkers(nodes)

    for markerdata in rawmarkers:
        yield marker(repo, markerdata)

def relevantmarkers(repo, node):
    """all obsolete markers relevant to some revision"""
    for markerdata in repo.obsstore.relevantmarkers(node):
        yield marker(repo, markerdata)


def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx.repo().obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx.repo(), data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx.repo().obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx.repo(), data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

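# Editor's note: illustrative sketch, not part of the original obsolete.py.
# allsuccessors() only needs an object with a 'successors' mapping, so a tiny
# stand-in store is enough to show the transitive walk A -> B -> C.
def _exampleallsuccessors():
    class fakestore(object):
        successors = {'A': set([('A', ('B',), 0, (), (0.0, 0), None)]),
                      'B': set([('B', ('C',), 0, (), (0.0, 0), None)])}
    return sorted(allsuccessors(fakestore(), ['A']))   # -> ['A', 'B', 'C']
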
826 def allprecursors(obsstore, nodes, ignoreflags=0):
826 def allprecursors(obsstore, nodes, ignoreflags=0):
827 """Yield node for every precursors of <nodes>.
827 """Yield node for every precursors of <nodes>.
828
828
829 Some precursors may be unknown locally.
829 Some precursors may be unknown locally.
830
830
831 This is a linear yield unsuited to detecting folded changesets. It includes
831 This is a linear yield unsuited to detecting folded changesets. It includes
832 initial nodes too."""
832 initial nodes too."""
833
833
834 remaining = set(nodes)
834 remaining = set(nodes)
835 seen = set(remaining)
835 seen = set(remaining)
836 while remaining:
836 while remaining:
837 current = remaining.pop()
837 current = remaining.pop()
838 yield current
838 yield current
839 for mark in obsstore.precursors.get(current, ()):
839 for mark in obsstore.precursors.get(current, ()):
840 # ignore marker flagged with specified flag
840 # ignore marker flagged with specified flag
841 if mark[2] & ignoreflags:
841 if mark[2] & ignoreflags:
842 continue
842 continue
843 suc = mark[0]
843 suc = mark[0]
844 if suc not in seen:
844 if suc not in seen:
845 seen.add(suc)
845 seen.add(suc)
846 remaining.add(suc)
846 remaining.add(suc)
847
847
848 def foreground(repo, nodes):
848 def foreground(repo, nodes):
849 """return all nodes in the "foreground" of other node
849 """return all nodes in the "foreground" of other node
850
850
851 The foreground of a revision is anything reachable using parent -> children
851 The foreground of a revision is anything reachable using parent -> children
852 or precursor -> successor relation. It is very similar to "descendant" but
852 or precursor -> successor relation. It is very similar to "descendant" but
853 augmented with obsolescence information.
853 augmented with obsolescence information.
854
854
855 Beware that possible obsolescence cycle may result if complex situation.
855 Beware that possible obsolescence cycle may result if complex situation.
856 """
856 """
857 repo = repo.unfiltered()
857 repo = repo.unfiltered()
858 foreground = set(repo.set('%ln::', nodes))
858 foreground = set(repo.set('%ln::', nodes))
859 if repo.obsstore:
859 if repo.obsstore:
860 # We only need this complicated logic if there is obsolescence
860 # We only need this complicated logic if there is obsolescence
861 # XXX will probably deserve an optimised revset.
861 # XXX will probably deserve an optimised revset.
862 nm = repo.changelog.nodemap
862 nm = repo.changelog.nodemap
863 plen = -1
863 plen = -1
864 # compute the whole set of successors or descendants
864 # compute the whole set of successors or descendants
865 while len(foreground) != plen:
865 while len(foreground) != plen:
866 plen = len(foreground)
866 plen = len(foreground)
867 succs = set(c.node() for c in foreground)
867 succs = set(c.node() for c in foreground)
868 mutable = [c.node() for c in foreground if c.mutable()]
868 mutable = [c.node() for c in foreground if c.mutable()]
869 succs.update(allsuccessors(repo.obsstore, mutable))
869 succs.update(allsuccessors(repo.obsstore, mutable))
870 known = (n for n in succs if n in nm)
870 known = (n for n in succs if n in nm)
871 foreground = set(repo.set('%ln::', known))
871 foreground = set(repo.set('%ln::', known))
872 return set(c.node() for c in foreground)
872 return set(c.node() for c in foreground)
873
873
874
874
875 def successorssets(repo, initialnode, cache=None):
875 def successorssets(repo, initialnode, cache=None):
876 """Return set of all latest successors of initial nodes
876 """Return set of all latest successors of initial nodes
877
877
878 The successors set of a changeset A are the group of revisions that succeed
878 The successors set of a changeset A are the group of revisions that succeed
879 A. It succeeds A as a consistent whole, each revision being only a partial
879 A. It succeeds A as a consistent whole, each revision being only a partial
880 replacement. The successors set contains non-obsolete changesets only.
880 replacement. The successors set contains non-obsolete changesets only.
881
881
882 This function returns the full list of successor sets which is why it
882 This function returns the full list of successor sets which is why it
883 returns a list of tuples and not just a single tuple. Each tuple is a valid
883 returns a list of tuples and not just a single tuple. Each tuple is a valid
884 successors set. Note that (A,) may be a valid successors set for changeset A
884 successors set. Note that (A,) may be a valid successors set for changeset A
885 (see below).
885 (see below).
886
886
887 In most cases, a changeset A will have a single element (e.g. the changeset
887 In most cases, a changeset A will have a single element (e.g. the changeset
888 A is replaced by A') in its successors set. Though, it is also common for a
888 A is replaced by A') in its successors set. Though, it is also common for a
889 changeset A to have no elements in its successor set (e.g. the changeset
889 changeset A to have no elements in its successor set (e.g. the changeset
890 has been pruned). Therefore, the returned list of successors sets will be
890 has been pruned). Therefore, the returned list of successors sets will be
891 [(A',)] or [], respectively.
891 [(A',)] or [], respectively.
892
892
893 When a changeset A is split into A' and B', however, it will result in a
893 When a changeset A is split into A' and B', however, it will result in a
894 successors set containing more than a single element, i.e. [(A',B')].
894 successors set containing more than a single element, i.e. [(A',B')].
895 Divergent changesets will result in multiple successors sets, i.e. [(A',),
895 Divergent changesets will result in multiple successors sets, i.e. [(A',),
896 (A'')].
896 (A'')].
897
897
898 If a changeset A is not obsolete, then it will conceptually have no
898 If a changeset A is not obsolete, then it will conceptually have no
899 successors set. To distinguish this from a pruned changeset, the successor
899 successors set. To distinguish this from a pruned changeset, the successor
900 set will contain itself only, i.e. [(A,)].
900 set will contain itself only, i.e. [(A,)].
901
901
902 Finally, successors unknown locally are considered to be pruned (obsoleted
902 Finally, successors unknown locally are considered to be pruned (obsoleted
903 without any successors).
903 without any successors).
904
904
905 The optional `cache` parameter is a dictionary that may contain precomputed
905 The optional `cache` parameter is a dictionary that may contain precomputed
906 successors sets. It is meant to reuse the computation of a previous call to
906 successors sets. It is meant to reuse the computation of a previous call to
907 `successorssets` when multiple calls are made at the same time. The cache
907 `successorssets` when multiple calls are made at the same time. The cache
908 dictionary is updated in place. The caller is responsible for its life
908 dictionary is updated in place. The caller is responsible for its life
909 span. Code that makes multiple calls to `successorssets` *must* use this
909 span. Code that makes multiple calls to `successorssets` *must* use this
910 cache mechanism or suffer terrible performance.
910 cache mechanism or suffer terrible performance.
911 """
911 """
912
912
913 succmarkers = repo.obsstore.successors
913 succmarkers = repo.obsstore.successors
914
914
915 # Stack of nodes we search successors sets for
915 # Stack of nodes we search successors sets for
916 toproceed = [initialnode]
916 toproceed = [initialnode]
917 # set version of above list for fast loop detection
917 # set version of above list for fast loop detection
918 # element added to "toproceed" must be added here
918 # element added to "toproceed" must be added here
919 stackedset = set(toproceed)
919 stackedset = set(toproceed)
920 if cache is None:
920 if cache is None:
921 cache = {}
921 cache = {}
922
922
923 # This while loop is the flattened version of a recursive search for
923 # This while loop is the flattened version of a recursive search for
924 # successors sets
924 # successors sets
925 #
925 #
926 # def successorssets(x):
926 # def successorssets(x):
927 # successors = directsuccessors(x)
927 # successors = directsuccessors(x)
928 # ss = [[]]
928 # ss = [[]]
929 # for succ in directsuccessors(x):
929 # for succ in directsuccessors(x):
930 # # product as in itertools cartesian product
930 # # product as in itertools cartesian product
931 # ss = product(ss, successorssets(succ))
931 # ss = product(ss, successorssets(succ))
932 # return ss
932 # return ss
933 #
933 #
934 # But we can not use plain recursive calls here:
934 # But we can not use plain recursive calls here:
935 # - that would blow the python call stack
935 # - that would blow the python call stack
936 # - obsolescence markers may have cycles, we need to handle them.
936 # - obsolescence markers may have cycles, we need to handle them.
937 #
937 #
938 # The `toproceed` list act as our call stack. Every node we search
938 # The `toproceed` list act as our call stack. Every node we search
939 # successors set for are stacked there.
939 # successors set for are stacked there.
940 #
940 #
941 # The `stackedset` is set version of this stack used to check if a node is
941 # The `stackedset` is set version of this stack used to check if a node is
942 # already stacked. This check is used to detect cycles and prevent infinite
942 # already stacked. This check is used to detect cycles and prevent infinite
943 # loop.
943 # loop.
944 #
944 #
945 # successors set of all nodes are stored in the `cache` dictionary.
945 # successors set of all nodes are stored in the `cache` dictionary.
946 #
946 #
947 # After this while loop ends we use the cache to return the successors sets
947 # After this while loop ends we use the cache to return the successors sets
948 # for the node requested by the caller.
948 # for the node requested by the caller.
949 while toproceed:
949 while toproceed:
950 # Every iteration tries to compute the successors sets of the topmost
950 # Every iteration tries to compute the successors sets of the topmost
951 # node of the stack: CURRENT.
951 # node of the stack: CURRENT.
952 #
952 #
953 # There are four possible outcomes:
953 # There are four possible outcomes:
954 #
954 #
955 # 1) We already know the successors sets of CURRENT:
955 # 1) We already know the successors sets of CURRENT:
956 # -> mission accomplished, pop it from the stack.
956 # -> mission accomplished, pop it from the stack.
957 # 2) Node is not obsolete:
957 # 2) Node is not obsolete:
958 # -> the node is its own successors sets. Add it to the cache.
958 # -> the node is its own successors sets. Add it to the cache.
959 # 3) We do not know successors set of direct successors of CURRENT:
959 # 3) We do not know successors set of direct successors of CURRENT:
960 # -> We add those successors to the stack.
960 # -> We add those successors to the stack.
961 # 4) We know successors sets of all direct successors of CURRENT:
961 # 4) We know successors sets of all direct successors of CURRENT:
962 # -> We can compute CURRENT successors set and add it to the
962 # -> We can compute CURRENT successors set and add it to the
963 # cache.
963 # cache.
964 #
964 #
965 current = toproceed[-1]
965 current = toproceed[-1]
966 if current in cache:
966 if current in cache:
967 # case (1): We already know the successors sets
967 # case (1): We already know the successors sets
968 stackedset.remove(toproceed.pop())
968 stackedset.remove(toproceed.pop())
969 elif current not in succmarkers:
969 elif current not in succmarkers:
970 # case (2): The node is not obsolete.
970 # case (2): The node is not obsolete.
971 if current in repo:
971 if current in repo:
972 # We have a valid last successors.
972 # We have a valid last successors.
973 cache[current] = [(current,)]
973 cache[current] = [(current,)]
974 else:
974 else:
975 # Final obsolete version is unknown locally.
975 # Final obsolete version is unknown locally.
976 # Do not count that as a valid successors
976 # Do not count that as a valid successors
977 cache[current] = []
977 cache[current] = []
978 else:
978 else:
979 # cases (3) and (4)
979 # cases (3) and (4)
980 #
980 #
981 # We proceed in two phases. Phase 1 aims to distinguish case (3)
981 # We proceed in two phases. Phase 1 aims to distinguish case (3)
982 # from case (4):
982 # from case (4):
983 #
983 #
984 # For each direct successors of CURRENT, we check whether its
984 # For each direct successors of CURRENT, we check whether its
985 # successors sets are known. If they are not, we stack the
985 # successors sets are known. If they are not, we stack the
986 # unknown node and proceed to the next iteration of the while
986 # unknown node and proceed to the next iteration of the while
987 # loop. (case 3)
987 # loop. (case 3)
988 #
988 #
989 # During this step, we may detect obsolescence cycles: a node
989 # During this step, we may detect obsolescence cycles: a node
990 # with unknown successors sets but already in the call stack.
990 # with unknown successors sets but already in the call stack.
991 # In such a situation, we arbitrary set the successors sets of
991 # In such a situation, we arbitrary set the successors sets of
992 # the node to nothing (node pruned) to break the cycle.
992 # the node to nothing (node pruned) to break the cycle.
993 #
993 #
994 # If no break was encountered we proceed to phase 2.
994 # If no break was encountered we proceed to phase 2.
995 #
995 #
996 # Phase 2 computes successors sets of CURRENT (case 4); see details
996 # Phase 2 computes successors sets of CURRENT (case 4); see details
997 # in phase 2 itself.
997 # in phase 2 itself.
998 #
998 #
999 # Note the two levels of iteration in each phase.
999 # Note the two levels of iteration in each phase.
1000 # - The first one handles obsolescence markers using CURRENT as
1000 # - The first one handles obsolescence markers using CURRENT as
1001 # precursor (successors markers of CURRENT).
1001 # precursor (successors markers of CURRENT).
1002 #
1002 #
1003 # Having multiple entry here means divergence.
1003 # Having multiple entry here means divergence.
1004 #
1004 #
1005 # - The second one handles successors defined in each marker.
1005 # - The second one handles successors defined in each marker.
1006 #
1006 #
1007 # Having none means pruned node, multiple successors means split,
1007 # Having none means pruned node, multiple successors means split,
1008 # single successors are standard replacement.
1008 # single successors are standard replacement.
1009 #
1009 #
1010 for mark in sorted(succmarkers[current]):
1010 for mark in sorted(succmarkers[current]):
1011 for suc in mark[1]:
1011 for suc in mark[1]:
1012 if suc not in cache:
1012 if suc not in cache:
1013 if suc in stackedset:
1013 if suc in stackedset:
1014 # cycle breaking
1014 # cycle breaking
1015 cache[suc] = []
1015 cache[suc] = []
1016 else:
1016 else:
1017 # case (3) If we have not computed successors sets
1017 # case (3) If we have not computed successors sets
1018 # of one of those successors we add it to the
1018 # of one of those successors we add it to the
1019 # `toproceed` stack and stop all work for this
1019 # `toproceed` stack and stop all work for this
1020 # iteration.
1020 # iteration.
1021 toproceed.append(suc)
1021 toproceed.append(suc)
1022 stackedset.add(suc)
1022 stackedset.add(suc)
1023 break
1023 break
1024 else:
1024 else:
1025 continue
1025 continue
1026 break
1026 break
1027 else:
1027 else:
1028 # case (4): we know all successors sets of all direct
1028 # case (4): we know all successors sets of all direct
1029 # successors
1029 # successors
1030 #
1030 #
1031 # Successors set contributed by each marker depends on the
1031 # Successors set contributed by each marker depends on the
1032 # successors sets of all its "successors" node.
1032 # successors sets of all its "successors" node.
1033 #
1033 #
1034 # Each different marker is a divergence in the obsolescence
1034 # Each different marker is a divergence in the obsolescence
1035 # history. It contributes successors sets distinct from other
1035 # history. It contributes successors sets distinct from other
1036 # markers.
1036 # markers.
1037 #
1037 #
1038 # Within a marker, a successor may have divergent successors
1038 # Within a marker, a successor may have divergent successors
1039 # sets. In such a case, the marker will contribute multiple
1039 # sets. In such a case, the marker will contribute multiple
1040 # divergent successors sets. If multiple successors have
1040 # divergent successors sets. If multiple successors have
1041 # divergent successors sets, a Cartesian product is used.
1041 # divergent successors sets, a Cartesian product is used.
1042 #
1042 #
1043 # At the end we post-process successors sets to remove
1043 # At the end we post-process successors sets to remove
1044 # duplicated entries and successors sets that are strict subsets of
1044 # duplicated entries and successors sets that are strict subsets of
1045 # another one.
1045 # another one.
1046 succssets = []
1046 succssets = []
1047 for mark in sorted(succmarkers[current]):
1047 for mark in sorted(succmarkers[current]):
1048 # successors sets contributed by this marker
1048 # successors sets contributed by this marker
1049 markss = [[]]
1049 markss = [[]]
1050 for suc in mark[1]:
1050 for suc in mark[1]:
1051 # Cartesian product with previous successors
1051 # Cartesian product with previous successors
1052 productresult = []
1052 productresult = []
1053 for prefix in markss:
1053 for prefix in markss:
1054 for suffix in cache[suc]:
1054 for suffix in cache[suc]:
1055 newss = list(prefix)
1055 newss = list(prefix)
1056 for part in suffix:
1056 for part in suffix:
1057 # do not duplicate entries in the successors set;
1057 # do not duplicate entries in the successors set;
1058 # first entry wins.
1058 # first entry wins.
1059 if part not in newss:
1059 if part not in newss:
1060 newss.append(part)
1060 newss.append(part)
1061 productresult.append(newss)
1061 productresult.append(newss)
1062 markss = productresult
1062 markss = productresult
1063 succssets.extend(markss)
1063 succssets.extend(markss)
1064 # remove duplicated and subset
1064 # remove duplicated and subset
1065 seen = []
1065 seen = []
1066 final = []
1066 final = []
1067 candidate = sorted(((set(s), s) for s in succssets if s),
1067 candidate = sorted(((set(s), s) for s in succssets if s),
1068 key=lambda x: len(x[1]), reverse=True)
1068 key=lambda x: len(x[1]), reverse=True)
1069 for setversion, listversion in candidate:
1069 for setversion, listversion in candidate:
1070 for seenset in seen:
1070 for seenset in seen:
1071 if setversion.issubset(seenset):
1071 if setversion.issubset(seenset):
1072 break
1072 break
1073 else:
1073 else:
1074 final.append(listversion)
1074 final.append(listversion)
1075 seen.append(setversion)
1075 seen.append(setversion)
1076 final.reverse() # put small successors set first
1076 final.reverse() # put small successors set first
1077 cache[current] = final
1077 cache[current] = final
1078 return cache[initialnode]
1078 return cache[initialnode]
1079
1079
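
The phase 2 combination and pruning described in the comments above is easier to see on toy data. The following is an illustration only, not part of this module; it assumes successors sets are plain lists of node-name strings and mirrors the Cartesian-product step and the "first entry wins" de-duplication::

    # Illustration: combine one marker's successors the way phase 2 does,
    # assuming node names are simple strings and `cache` maps each
    # successor to its already-known successors sets.
    def combine_marker(successors, cache):
        markss = [[]]
        for suc in successors:
            product = []
            for prefix in markss:
                for suffix in cache[suc]:
                    newss = list(prefix)
                    for part in suffix:
                        if part not in newss:  # first entry wins
                            newss.append(part)
                    product.append(newss)
            markss = product
        return markss

    cache = {'A': [['A']], 'B': [['B1'], ['B2']]}  # 'B' is itself divergent
    print(combine_marker(['A', 'B'], cache))
    # -> [['A', 'B1'], ['A', 'B2']]: one successors set per combination

The real function then discards duplicates and any successors set that is a strict subset of another one, putting smaller sets first.
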
1080 # mapping of 'set-name' -> <function to compute this set>
1080 # mapping of 'set-name' -> <function to compute this set>
1081 cachefuncs = {}
1081 cachefuncs = {}
1082 def cachefor(name):
1082 def cachefor(name):
1083 """Decorator to register a function as computing the cache for a set"""
1083 """Decorator to register a function as computing the cache for a set"""
1084 def decorator(func):
1084 def decorator(func):
1085 assert name not in cachefuncs
1085 assert name not in cachefuncs
1086 cachefuncs[name] = func
1086 cachefuncs[name] = func
1087 return func
1087 return func
1088 return decorator
1088 return decorator
1089
1089
1090 def getrevs(repo, name):
1090 def getrevs(repo, name):
1091 """Return the set of revision that belong to the <name> set
1091 """Return the set of revision that belong to the <name> set
1092
1092
1093 Such access may compute the set and cache it for future use"""
1093 Such access may compute the set and cache it for future use"""
1094 repo = repo.unfiltered()
1094 repo = repo.unfiltered()
1095 if not repo.obsstore:
1095 if not repo.obsstore:
1096 return frozenset()
1096 return frozenset()
1097 if name not in repo.obsstore.caches:
1097 if name not in repo.obsstore.caches:
1098 repo.obsstore.caches[name] = cachefuncs[name](repo)
1098 repo.obsstore.caches[name] = cachefuncs[name](repo)
1099 return repo.obsstore.caches[name]
1099 return repo.obsstore.caches[name]
1100
1100
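
As a usage sketch (assuming a local `repo` object is in scope and using only names defined in this file), reading a named set and registering a new one would look like this; the 'examples' set name is hypothetical and shown purely for illustration::

    # Read a cached set; it is computed lazily on first access and
    # memoized in repo.obsstore.caches until clearobscaches() runs.
    obsoleterevs = getrevs(repo, 'obsolete')

    # Registering an additional named set ('examples' is a hypothetical
    # name used only for this illustration):
    @cachefor('examples')
    def _computeexamplesset(repo):
        return set()
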
1101 # To keep it simple we need to invalidate the obsolescence caches when:
1101 # To keep it simple we need to invalidate the obsolescence caches when:
1102 #
1102 #
1103 # - a new changeset is added
1103 # - a new changeset is added
1104 # - the public phase is changed
1104 # - the public phase is changed
1105 # - obsolescence markers are added
1105 # - obsolescence markers are added
1106 # - strip is used on a repo
1106 # - strip is used on a repo
1107 def clearobscaches(repo):
1107 def clearobscaches(repo):
1108 """Remove all obsolescence related cache from a repo
1108 """Remove all obsolescence related cache from a repo
1109
1109
1110 This remove all cache in obsstore is the obsstore already exist on the
1110 This remove all cache in obsstore is the obsstore already exist on the
1111 repo.
1111 repo.
1112
1112
1113 (We could be smarter here given the exact event that trigger the cache
1113 (We could be smarter here given the exact event that trigger the cache
1114 clearing)"""
1114 clearing)"""
1115 # only clear cache is there is obsstore data in this repo
1115 # only clear cache is there is obsstore data in this repo
1116 if 'obsstore' in repo._filecache:
1116 if 'obsstore' in repo._filecache:
1117 repo.obsstore.caches.clear()
1117 repo.obsstore.caches.clear()
1118
1118
1119 @cachefor('obsolete')
1119 @cachefor('obsolete')
1120 def _computeobsoleteset(repo):
1120 def _computeobsoleteset(repo):
1121 """the set of obsolete revisions"""
1121 """the set of obsolete revisions"""
1122 obs = set()
1122 obs = set()
1123 getnode = repo.changelog.node
1123 getnode = repo.changelog.node
1124 notpublic = repo._phasecache.getrevset(repo, (phases.draft, phases.secret))
1124 notpublic = repo._phasecache.getrevset(repo, (phases.draft, phases.secret))
1125 for r in notpublic:
1125 for r in notpublic:
1126 if getnode(r) in repo.obsstore.successors:
1126 if getnode(r) in repo.obsstore.successors:
1127 obs.add(r)
1127 obs.add(r)
1128 return obs
1128 return obs
1129
1129
1130 @cachefor('unstable')
1130 @cachefor('unstable')
1131 def _computeunstableset(repo):
1131 def _computeunstableset(repo):
1132 """the set of non obsolete revisions with obsolete parents"""
1132 """the set of non obsolete revisions with obsolete parents"""
1133 revs = [(ctx.rev(), ctx) for ctx in
1133 revs = [(ctx.rev(), ctx) for ctx in
1134 repo.set('(not public()) and (not obsolete())')]
1134 repo.set('(not public()) and (not obsolete())')]
1135 revs.sort(key=lambda x:x[0])
1135 revs.sort(key=lambda x:x[0])
1136 unstable = set()
1136 unstable = set()
1137 for rev, ctx in revs:
1137 for rev, ctx in revs:
1138 # A rev is unstable if one of its parents is obsolete or unstable;
1138 # A rev is unstable if one of its parents is obsolete or unstable;
1139 # this works since we traverse in increasing rev order
1139 # this works since we traverse in increasing rev order
1140 if any((x.obsolete() or (x.rev() in unstable))
1140 if any((x.obsolete() or (x.rev() in unstable))
1141 for x in ctx.parents()):
1141 for x in ctx.parents()):
1142 unstable.add(rev)
1142 unstable.add(rev)
1143 return unstable
1143 return unstable
1144
1144
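
The "increasing rev order" argument above can be checked on a toy graph. This sketch is illustrative only and assumes a tiny hand-written parents map rather than a real changelog::

    # rev -> list of parent revs; rev 1 is obsolete in this toy example
    parents = {1: [0], 2: [1], 3: [2]}
    obsolete_revs = {1}
    unstable = set()
    for rev in sorted(parents):  # increasing rev order, parents come first
        if any(p in obsolete_revs or p in unstable for p in parents[rev]):
            unstable.add(rev)
    print(sorted(unstable))  # [2, 3]: obsolete ancestry propagates downward
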
1145 @cachefor('suspended')
1145 @cachefor('suspended')
1146 def _computesuspendedset(repo):
1146 def _computesuspendedset(repo):
1147 """the set of obsolete parents with non obsolete descendants"""
1147 """the set of obsolete parents with non obsolete descendants"""
1148 suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
1148 suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
1149 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
1149 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
1150
1150
1151 @cachefor('extinct')
1151 @cachefor('extinct')
1152 def _computeextinctset(repo):
1152 def _computeextinctset(repo):
1153 """the set of obsolete parents without non obsolete descendants"""
1153 """the set of obsolete parents without non obsolete descendants"""
1154 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
1154 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
1155
1155
1156
1156
1157 @cachefor('bumped')
1157 @cachefor('bumped')
1158 def _computebumpedset(repo):
1158 def _computebumpedset(repo):
1159 """the set of revs trying to obsolete public revisions"""
1159 """the set of revs trying to obsolete public revisions"""
1160 bumped = set()
1160 bumped = set()
1161 # util function (avoid attribute lookup in the loop)
1161 # util function (avoid attribute lookup in the loop)
1162 phase = repo._phasecache.phase # would be faster to grab the full list
1162 phase = repo._phasecache.phase # would be faster to grab the full list
1163 public = phases.public
1163 public = phases.public
1164 cl = repo.changelog
1164 cl = repo.changelog
1165 torev = cl.nodemap.get
1165 torev = cl.nodemap.get
1166 for ctx in repo.set('(not public()) and (not obsolete())'):
1166 for ctx in repo.set('(not public()) and (not obsolete())'):
1167 rev = ctx.rev()
1167 rev = ctx.rev()
1168 # We only evaluate mutable, non-obsolete revisions
1168 # We only evaluate mutable, non-obsolete revisions
1169 node = ctx.node()
1169 node = ctx.node()
1170 # (future) A cache of precursors may be worthwhile if splits are very common
1170 # (future) A cache of precursors may be worthwhile if splits are very common
1171 for pnode in allprecursors(repo.obsstore, [node],
1171 for pnode in allprecursors(repo.obsstore, [node],
1172 ignoreflags=bumpedfix):
1172 ignoreflags=bumpedfix):
1173 prev = torev(pnode) # unfiltered! but so is phasecache
1173 prev = torev(pnode) # unfiltered! but so is phasecache
1174 if (prev is not None) and (phase(repo, prev) <= public):
1174 if (prev is not None) and (phase(repo, prev) <= public):
1175 # we have a public precursor
1175 # we have a public precursor
1176 bumped.add(rev)
1176 bumped.add(rev)
1177 break # Next draft!
1177 break # Next draft!
1178 return bumped
1178 return bumped
1179
1179
1180 @cachefor('divergent')
1180 @cachefor('divergent')
1181 def _computedivergentset(repo):
1181 def _computedivergentset(repo):
1182 """the set of rev that compete to be the final successors of some revision.
1182 """the set of rev that compete to be the final successors of some revision.
1183 """
1183 """
1184 divergent = set()
1184 divergent = set()
1185 obsstore = repo.obsstore
1185 obsstore = repo.obsstore
1186 newermap = {}
1186 newermap = {}
1187 for ctx in repo.set('(not public()) - obsolete()'):
1187 for ctx in repo.set('(not public()) - obsolete()'):
1188 mark = obsstore.precursors.get(ctx.node(), ())
1188 mark = obsstore.precursors.get(ctx.node(), ())
1189 toprocess = set(mark)
1189 toprocess = set(mark)
1190 seen = set()
1190 seen = set()
1191 while toprocess:
1191 while toprocess:
1192 prec = toprocess.pop()[0]
1192 prec = toprocess.pop()[0]
1193 if prec in seen:
1193 if prec in seen:
1194 continue # emergency protection: avoid hanging on cycles
1194 continue # emergency protection: avoid hanging on cycles
1195 seen.add(prec)
1195 seen.add(prec)
1196 if prec not in newermap:
1196 if prec not in newermap:
1197 successorssets(repo, prec, newermap)
1197 successorssets(repo, prec, newermap)
1198 newer = [n for n in newermap[prec] if n]
1198 newer = [n for n in newermap[prec] if n]
1199 if len(newer) > 1:
1199 if len(newer) > 1:
1200 divergent.add(ctx.rev())
1200 divergent.add(ctx.rev())
1201 break
1201 break
1202 toprocess.update(obsstore.precursors.get(prec, ()))
1202 toprocess.update(obsstore.precursors.get(prec, ()))
1203 return divergent
1203 return divergent
1204
1204
1205
1205
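
In other words, a revision is divergent when one of its precursors has more than one non-empty successors set. A minimal illustration of the test performed in the loop above (hypothetical node names, not repository code)::

    # 'prec' was rewritten twice, independently, into 'succ_a' and 'succ_b'
    newermap = {'prec': [['succ_a'], ['succ_b']]}
    newer = [n for n in newermap['prec'] if n]
    print(len(newer) > 1)  # True: two competing rewrites -> divergent
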
1206 def createmarkers(repo, relations, flag=0, date=None, metadata=None):
1206 def createmarkers(repo, relations, flag=0, date=None, metadata=None,
1207 operation=None):
1207 """Add obsolete markers between changesets in a repo
1208 """Add obsolete markers between changesets in a repo
1208
1209
1209 <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
1210 <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
1210 tuples. `old` and `news` are changectx. metadata is an optional dictionary
1211 tuples. `old` and `news` are changectx. metadata is an optional dictionary
1211 containing metadata for this marker only. It is merged with the global
1212 containing metadata for this marker only. It is merged with the global
1212 metadata specified through the `metadata` argument of this function.
1213 metadata specified through the `metadata` argument of this function.
1213
1214
1214 Trying to obsolete a public changeset will raise an exception.
1215 Trying to obsolete a public changeset will raise an exception.
1215
1216
1216 The current user and date are used unless specified otherwise in the
1217 The current user and date are used unless specified otherwise in the
1217 metadata argument.
1218 metadata argument.
1218
1219
1219 This function operates within a transaction of its own, but does
1220 This function operates within a transaction of its own, but does
1220 not take any lock on the repo.
1221 not take any lock on the repo.
1221 """
1222 """
1222 # prepare metadata
1223 # prepare metadata
1223 if metadata is None:
1224 if metadata is None:
1224 metadata = {}
1225 metadata = {}
1225 if 'user' not in metadata:
1226 if 'user' not in metadata:
1226 metadata['user'] = repo.ui.username()
1227 metadata['user'] = repo.ui.username()
1228 if operation:
1229 metadata['operation'] = operation
1227 tr = repo.transaction('add-obsolescence-marker')
1230 tr = repo.transaction('add-obsolescence-marker')
1228 try:
1231 try:
1229 markerargs = []
1232 markerargs = []
1230 for rel in relations:
1233 for rel in relations:
1231 prec = rel[0]
1234 prec = rel[0]
1232 sucs = rel[1]
1235 sucs = rel[1]
1233 localmetadata = metadata.copy()
1236 localmetadata = metadata.copy()
1234 if 2 < len(rel):
1237 if 2 < len(rel):
1235 localmetadata.update(rel[2])
1238 localmetadata.update(rel[2])
1236
1239
1237 if not prec.mutable():
1240 if not prec.mutable():
1238 raise error.Abort(_("cannot obsolete public changeset: %s")
1241 raise error.Abort(_("cannot obsolete public changeset: %s")
1239 % prec,
1242 % prec,
1240 hint="see 'hg help phases' for details")
1243 hint="see 'hg help phases' for details")
1241 nprec = prec.node()
1244 nprec = prec.node()
1242 nsucs = tuple(s.node() for s in sucs)
1245 nsucs = tuple(s.node() for s in sucs)
1243 npare = None
1246 npare = None
1244 if not nsucs:
1247 if not nsucs:
1245 npare = tuple(p.node() for p in prec.parents())
1248 npare = tuple(p.node() for p in prec.parents())
1246 if nprec in nsucs:
1249 if nprec in nsucs:
1247 raise error.Abort(_("changeset %s cannot obsolete itself")
1250 raise error.Abort(_("changeset %s cannot obsolete itself")
1248 % prec)
1251 % prec)
1249
1252
1250 # Creating the marker causes the hidden cache to become invalid,
1253 # Creating the marker causes the hidden cache to become invalid,
1251 # which causes recomputation when we ask for prec.parents() above.
1254 # which causes recomputation when we ask for prec.parents() above.
1252 # Resulting in n^2 behavior. So let's prepare all of the args
1255 # Resulting in n^2 behavior. So let's prepare all of the args
1253 # first, then create the markers.
1256 # first, then create the markers.
1254 markerargs.append((nprec, nsucs, npare, localmetadata))
1257 markerargs.append((nprec, nsucs, npare, localmetadata))
1255
1258
1256 for args in markerargs:
1259 for args in markerargs:
1257 nprec, nsucs, npare, localmetadata = args
1260 nprec, nsucs, npare, localmetadata = args
1258 repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
1261 repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
1259 date=date, metadata=localmetadata)
1262 date=date, metadata=localmetadata)
1260 repo.filteredrevcache.clear()
1263 repo.filteredrevcache.clear()
1261 tr.close()
1264 tr.close()
1262 finally:
1265 finally:
1263 tr.release()
1266 tr.release()
1264
1267
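
With the new `operation` argument, history-rewriting callers can record which command created a marker. A minimal caller sketch, not part of this change; the surrounding command code and the `oldctx`/`newctx` changectx objects are assumed::

    # e.g. from an amend-like command, after building the replacement cset:
    createmarkers(repo, [(oldctx, (newctx,))], operation='amend')

The string is stored in each marker's metadata under 'operation', which is what the updated test output below checks for (``{'operation': 'amend', 'user': 'test'}``).
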
1265 def isenabled(repo, option):
1268 def isenabled(repo, option):
1266 """Returns True if the given repository has the given obsolete option
1269 """Returns True if the given repository has the given obsolete option
1267 enabled.
1270 enabled.
1268 """
1271 """
1269 result = set(repo.ui.configlist('experimental', 'evolution'))
1272 result = set(repo.ui.configlist('experimental', 'evolution'))
1270 if 'all' in result:
1273 if 'all' in result:
1271 return True
1274 return True
1272
1275
1273 # For migration purposes, temporarily return true if the config hasn't been
1276 # For migration purposes, temporarily return true if the config hasn't been
1274 # set but _enabled is true.
1277 # set but _enabled is true.
1275 if len(result) == 0 and _enabled:
1278 if len(result) == 0 and _enabled:
1276 return True
1279 return True
1277
1280
1278 # createmarkers must be enabled if other options are enabled
1281 # createmarkers must be enabled if other options are enabled
1279 if ((allowunstableopt in result or exchangeopt in result) and
1282 if ((allowunstableopt in result or exchangeopt in result) and
1280 not createmarkersopt in result):
1283 not createmarkersopt in result):
1281 raise error.Abort(_("'createmarkers' obsolete option must be enabled "
1284 raise error.Abort(_("'createmarkers' obsolete option must be enabled "
1282 "if other obsolete options are enabled"))
1285 "if other obsolete options are enabled"))
1283
1286
1284 return option in result
1287 return option in result
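
For completeness, a sketch of how callers typically gate obsolescence features on this helper, using the option constants referenced above (`createmarkersopt` and friends); a minimal sketch, assuming the caller already has a `repo` object::

    from mercurial import obsolete

    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        # markers may be created; otherwise commands typically fall back
        # to stripping the rewritten changesets
        pass
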
@@ -1,576 +1,576
1 $ . "$TESTDIR/histedit-helpers.sh"
1 $ . "$TESTDIR/histedit-helpers.sh"
2
2
3 Enable obsolete
3 Enable obsolete
4
4
5 $ cat >> $HGRCPATH << EOF
5 $ cat >> $HGRCPATH << EOF
6 > [ui]
6 > [ui]
7 > logtemplate= {rev}:{node|short} {desc|firstline}
7 > logtemplate= {rev}:{node|short} {desc|firstline}
8 > [phases]
8 > [phases]
9 > publish=False
9 > publish=False
10 > [experimental]
10 > [experimental]
11 > evolution=createmarkers,allowunstable
11 > evolution=createmarkers,allowunstable
12 > [extensions]
12 > [extensions]
13 > histedit=
13 > histedit=
14 > rebase=
14 > rebase=
15 > EOF
15 > EOF
16
16
17 Test that histedit learns about obsolescence not stored in histedit state
17 Test that histedit learns about obsolescence not stored in histedit state
18 $ hg init boo
18 $ hg init boo
19 $ cd boo
19 $ cd boo
20 $ echo a > a
20 $ echo a > a
21 $ hg ci -Am a
21 $ hg ci -Am a
22 adding a
22 adding a
23 $ echo a > b
23 $ echo a > b
24 $ echo a > c
24 $ echo a > c
25 $ echo a > c
25 $ echo a > c
26 $ hg ci -Am b
26 $ hg ci -Am b
27 adding b
27 adding b
28 adding c
28 adding c
29 $ echo a > d
29 $ echo a > d
30 $ hg ci -Am c
30 $ hg ci -Am c
31 adding d
31 adding d
32 $ echo "pick `hg log -r 0 -T '{node|short}'`" > plan
32 $ echo "pick `hg log -r 0 -T '{node|short}'`" > plan
33 $ echo "pick `hg log -r 2 -T '{node|short}'`" >> plan
33 $ echo "pick `hg log -r 2 -T '{node|short}'`" >> plan
34 $ echo "edit `hg log -r 1 -T '{node|short}'`" >> plan
34 $ echo "edit `hg log -r 1 -T '{node|short}'`" >> plan
35 $ hg histedit -r 'all()' --commands plan
35 $ hg histedit -r 'all()' --commands plan
36 Editing (1b2d564fad96), you may commit or record as needed now.
36 Editing (1b2d564fad96), you may commit or record as needed now.
37 (hg histedit --continue to resume)
37 (hg histedit --continue to resume)
38 [1]
38 [1]
39 $ hg st
39 $ hg st
40 A b
40 A b
41 A c
41 A c
42 ? plan
42 ? plan
43 $ hg commit --amend b
43 $ hg commit --amend b
44 $ hg histedit --continue
44 $ hg histedit --continue
45 $ hg log -G
45 $ hg log -G
46 @ 6:46abc7c4d873 b
46 @ 6:46abc7c4d873 b
47 |
47 |
48 o 5:49d44ab2be1b c
48 o 5:49d44ab2be1b c
49 |
49 |
50 o 0:cb9a9f314b8b a
50 o 0:cb9a9f314b8b a
51
51
52 $ hg debugobsolete
52 $ hg debugobsolete
53 e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'user': 'test'} (glob)
53 e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'operation': 'amend', 'user': 'test'} (glob)
54 3e30a45cf2f719e96ab3922dfe039cfd047956ce 0 {e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf} (*) {'user': 'test'} (glob)
54 3e30a45cf2f719e96ab3922dfe039cfd047956ce 0 {e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf} (*) {'operation': 'amend', 'user': 'test'} (glob)
55 1b2d564fad96311b45362f17c2aa855150efb35f 46abc7c4d8738e8563e577f7889e1b6db3da4199 0 (*) {'user': 'test'} (glob)
55 1b2d564fad96311b45362f17c2aa855150efb35f 46abc7c4d8738e8563e577f7889e1b6db3da4199 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
56 114f4176969ef342759a8a57e6bccefc4234829b 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'user': 'test'} (glob)
56 114f4176969ef342759a8a57e6bccefc4234829b 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
57
57
58 With a node gone missing during the edit.
58 With a node gone missing during the edit.
59
59
60 $ echo "pick `hg log -r 0 -T '{node|short}'`" > plan
60 $ echo "pick `hg log -r 0 -T '{node|short}'`" > plan
61 $ echo "pick `hg log -r 6 -T '{node|short}'`" >> plan
61 $ echo "pick `hg log -r 6 -T '{node|short}'`" >> plan
62 $ echo "edit `hg log -r 5 -T '{node|short}'`" >> plan
62 $ echo "edit `hg log -r 5 -T '{node|short}'`" >> plan
63 $ hg histedit -r 'all()' --commands plan
63 $ hg histedit -r 'all()' --commands plan
64 Editing (49d44ab2be1b), you may commit or record as needed now.
64 Editing (49d44ab2be1b), you may commit or record as needed now.
65 (hg histedit --continue to resume)
65 (hg histedit --continue to resume)
66 [1]
66 [1]
67 $ hg st
67 $ hg st
68 A b
68 A b
69 A d
69 A d
70 ? plan
70 ? plan
71 $ hg commit --amend -X . -m XXXXXX
71 $ hg commit --amend -X . -m XXXXXX
72 $ hg commit --amend -X . -m b2
72 $ hg commit --amend -X . -m b2
73 $ hg --hidden --config extensions.strip= strip 'desc(XXXXXX)' --no-backup
73 $ hg --hidden --config extensions.strip= strip 'desc(XXXXXX)' --no-backup
74 $ hg histedit --continue
74 $ hg histedit --continue
75 $ hg log -G
75 $ hg log -G
76 @ 9:273c1f3b8626 c
76 @ 9:273c1f3b8626 c
77 |
77 |
78 o 8:aba7da937030 b2
78 o 8:aba7da937030 b2
79 |
79 |
80 o 0:cb9a9f314b8b a
80 o 0:cb9a9f314b8b a
81
81
82 $ hg debugobsolete
82 $ hg debugobsolete
83 e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'user': 'test'} (glob)
83 e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'operation': 'amend', 'user': 'test'} (glob)
84 3e30a45cf2f719e96ab3922dfe039cfd047956ce 0 {e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf} (*) {'user': 'test'} (glob)
84 3e30a45cf2f719e96ab3922dfe039cfd047956ce 0 {e72d22b19f8ecf4150ab4f91d0973fd9955d3ddf} (*) {'operation': 'amend', 'user': 'test'} (glob)
85 1b2d564fad96311b45362f17c2aa855150efb35f 46abc7c4d8738e8563e577f7889e1b6db3da4199 0 (*) {'user': 'test'} (glob)
85 1b2d564fad96311b45362f17c2aa855150efb35f 46abc7c4d8738e8563e577f7889e1b6db3da4199 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
86 114f4176969ef342759a8a57e6bccefc4234829b 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'user': 'test'} (glob)
86 114f4176969ef342759a8a57e6bccefc4234829b 49d44ab2be1b67a79127568a67c9c99430633b48 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
87 76f72745eac0643d16530e56e2f86e36e40631f1 2ca853e48edbd6453a0674dc0fe28a0974c51b9c 0 (*) {'user': 'test'} (glob)
87 76f72745eac0643d16530e56e2f86e36e40631f1 2ca853e48edbd6453a0674dc0fe28a0974c51b9c 0 (*) {'operation': 'amend', 'user': 'test'} (glob)
88 2ca853e48edbd6453a0674dc0fe28a0974c51b9c aba7da93703075eec9fb1dbaf143ff2bc1c49d46 0 (*) {'user': 'test'} (glob)
88 2ca853e48edbd6453a0674dc0fe28a0974c51b9c aba7da93703075eec9fb1dbaf143ff2bc1c49d46 0 (*) {'operation': 'amend', 'user': 'test'} (glob)
89 49d44ab2be1b67a79127568a67c9c99430633b48 273c1f3b86267ed3ec684bb13af1fa4d6ba56e02 0 (*) {'user': 'test'} (glob)
89 49d44ab2be1b67a79127568a67c9c99430633b48 273c1f3b86267ed3ec684bb13af1fa4d6ba56e02 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
90 46abc7c4d8738e8563e577f7889e1b6db3da4199 aba7da93703075eec9fb1dbaf143ff2bc1c49d46 0 (*) {'user': 'test'} (glob)
90 46abc7c4d8738e8563e577f7889e1b6db3da4199 aba7da93703075eec9fb1dbaf143ff2bc1c49d46 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
91 $ cd ..
91 $ cd ..
92
92
93 Base setup for the rest of the testing
93 Base setup for the rest of the testing
94 ======================================
94 ======================================
95
95
96 $ hg init base
96 $ hg init base
97 $ cd base
97 $ cd base
98
98
99 $ for x in a b c d e f ; do
99 $ for x in a b c d e f ; do
100 > echo $x > $x
100 > echo $x > $x
101 > hg add $x
101 > hg add $x
102 > hg ci -m $x
102 > hg ci -m $x
103 > done
103 > done
104
104
105 $ hg log --graph
105 $ hg log --graph
106 @ 5:652413bf663e f
106 @ 5:652413bf663e f
107 |
107 |
108 o 4:e860deea161a e
108 o 4:e860deea161a e
109 |
109 |
110 o 3:055a42cdd887 d
110 o 3:055a42cdd887 d
111 |
111 |
112 o 2:177f92b77385 c
112 o 2:177f92b77385 c
113 |
113 |
114 o 1:d2ae7f538514 b
114 o 1:d2ae7f538514 b
115 |
115 |
116 o 0:cb9a9f314b8b a
116 o 0:cb9a9f314b8b a
117
117
118
118
119 $ HGEDITOR=cat hg histedit 1
119 $ HGEDITOR=cat hg histedit 1
120 pick d2ae7f538514 1 b
120 pick d2ae7f538514 1 b
121 pick 177f92b77385 2 c
121 pick 177f92b77385 2 c
122 pick 055a42cdd887 3 d
122 pick 055a42cdd887 3 d
123 pick e860deea161a 4 e
123 pick e860deea161a 4 e
124 pick 652413bf663e 5 f
124 pick 652413bf663e 5 f
125
125
126 # Edit history between d2ae7f538514 and 652413bf663e
126 # Edit history between d2ae7f538514 and 652413bf663e
127 #
127 #
128 # Commits are listed from least to most recent
128 # Commits are listed from least to most recent
129 #
129 #
130 # You can reorder changesets by reordering the lines
130 # You can reorder changesets by reordering the lines
131 #
131 #
132 # Commands:
132 # Commands:
133 #
133 #
134 # e, edit = use commit, but stop for amending
134 # e, edit = use commit, but stop for amending
135 # m, mess = edit commit message without changing commit content
135 # m, mess = edit commit message without changing commit content
136 # p, pick = use commit
136 # p, pick = use commit
137 # d, drop = remove commit from history
137 # d, drop = remove commit from history
138 # f, fold = use commit, but combine it with the one above
138 # f, fold = use commit, but combine it with the one above
139 # r, roll = like fold, but discard this commit's description and date
139 # r, roll = like fold, but discard this commit's description and date
140 #
140 #
141 $ hg histedit 1 --commands - --verbose <<EOF | grep histedit
141 $ hg histedit 1 --commands - --verbose <<EOF | grep histedit
142 > pick 177f92b77385 2 c
142 > pick 177f92b77385 2 c
143 > drop d2ae7f538514 1 b
143 > drop d2ae7f538514 1 b
144 > pick 055a42cdd887 3 d
144 > pick 055a42cdd887 3 d
145 > fold e860deea161a 4 e
145 > fold e860deea161a 4 e
146 > pick 652413bf663e 5 f
146 > pick 652413bf663e 5 f
147 > EOF
147 > EOF
148 [1]
148 [1]
149 $ hg log --graph --hidden
149 $ hg log --graph --hidden
150 @ 10:cacdfd884a93 f
150 @ 10:cacdfd884a93 f
151 |
151 |
152 o 9:59d9f330561f d
152 o 9:59d9f330561f d
153 |
153 |
154 | x 8:b558abc46d09 fold-temp-revision e860deea161a
154 | x 8:b558abc46d09 fold-temp-revision e860deea161a
155 | |
155 | |
156 | x 7:96e494a2d553 d
156 | x 7:96e494a2d553 d
157 |/
157 |/
158 o 6:b346ab9a313d c
158 o 6:b346ab9a313d c
159 |
159 |
160 | x 5:652413bf663e f
160 | x 5:652413bf663e f
161 | |
161 | |
162 | x 4:e860deea161a e
162 | x 4:e860deea161a e
163 | |
163 | |
164 | x 3:055a42cdd887 d
164 | x 3:055a42cdd887 d
165 | |
165 | |
166 | x 2:177f92b77385 c
166 | x 2:177f92b77385 c
167 | |
167 | |
168 | x 1:d2ae7f538514 b
168 | x 1:d2ae7f538514 b
169 |/
169 |/
170 o 0:cb9a9f314b8b a
170 o 0:cb9a9f314b8b a
171
171
172 $ hg debugobsolete
172 $ hg debugobsolete
173 96e494a2d553dd05902ba1cee1d94d4cb7b8faed 0 {b346ab9a313db8537ecf96fca3ca3ca984ef3bd7} (*) {'user': 'test'} (glob)
173 96e494a2d553dd05902ba1cee1d94d4cb7b8faed 0 {b346ab9a313db8537ecf96fca3ca3ca984ef3bd7} (*) {'operation': 'histedit', 'user': 'test'} (glob)
174 b558abc46d09c30f57ac31e85a8a3d64d2e906e4 0 {96e494a2d553dd05902ba1cee1d94d4cb7b8faed} (*) {'user': 'test'} (glob)
174 b558abc46d09c30f57ac31e85a8a3d64d2e906e4 0 {96e494a2d553dd05902ba1cee1d94d4cb7b8faed} (*) {'operation': 'histedit', 'user': 'test'} (glob)
175 d2ae7f538514cd87c17547b0de4cea71fe1af9fb 0 {cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b} (*) {'user': 'test'} (glob)
175 d2ae7f538514cd87c17547b0de4cea71fe1af9fb 0 {cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b} (*) {'operation': 'histedit', 'user': 'test'} (glob)
176 177f92b773850b59254aa5e923436f921b55483b b346ab9a313db8537ecf96fca3ca3ca984ef3bd7 0 (*) {'user': 'test'} (glob)
176 177f92b773850b59254aa5e923436f921b55483b b346ab9a313db8537ecf96fca3ca3ca984ef3bd7 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
177 055a42cdd88768532f9cf79daa407fc8d138de9b 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'user': 'test'} (glob)
177 055a42cdd88768532f9cf79daa407fc8d138de9b 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
178 e860deea161a2f77de56603b340ebbb4536308ae 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'user': 'test'} (glob)
178 e860deea161a2f77de56603b340ebbb4536308ae 59d9f330561fd6c88b1a6b32f0e45034d88db784 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
179 652413bf663ef2a641cab26574e46d5f5a64a55a cacdfd884a9321ec4e1de275ef3949fa953a1f83 0 (*) {'user': 'test'} (glob)
179 652413bf663ef2a641cab26574e46d5f5a64a55a cacdfd884a9321ec4e1de275ef3949fa953a1f83 0 (*) {'operation': 'histedit', 'user': 'test'} (glob)
180
180
181
181
182 Ensure hidden revision does not prevent histedit
182 Ensure hidden revision does not prevent histedit
183 -------------------------------------------------
183 -------------------------------------------------
184
184
185 create a hidden revision
185 create a hidden revision
186
186
187 $ hg histedit 6 --commands - << EOF
187 $ hg histedit 6 --commands - << EOF
188 > pick b346ab9a313d 6 c
188 > pick b346ab9a313d 6 c
189 > drop 59d9f330561f 7 d
189 > drop 59d9f330561f 7 d
190 > pick cacdfd884a93 8 f
190 > pick cacdfd884a93 8 f
191 > EOF
191 > EOF
192 $ hg log --graph
192 $ hg log --graph
193 @ 11:c13eb81022ca f
193 @ 11:c13eb81022ca f
194 |
194 |
195 o 6:b346ab9a313d c
195 o 6:b346ab9a313d c
196 |
196 |
197 o 0:cb9a9f314b8b a
197 o 0:cb9a9f314b8b a
198
198
199 check hidden revisions are ignored (6 has hidden children 7 and 8)
199 check hidden revisions are ignored (6 has hidden children 7 and 8)
200
200
201 $ hg histedit 6 --commands - << EOF
201 $ hg histedit 6 --commands - << EOF
202 > pick b346ab9a313d 6 c
202 > pick b346ab9a313d 6 c
203 > pick c13eb81022ca 8 f
203 > pick c13eb81022ca 8 f
204 > EOF
204 > EOF
205
205
206
206
207
207
208 Test that rewriting that leaves instability behind is allowed
208 Test that rewriting that leaves instability behind is allowed
209 ---------------------------------------------------------------------
209 ---------------------------------------------------------------------
210
210
211 $ hg up '.^'
211 $ hg up '.^'
212 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
212 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
213 $ hg log -r 'children(.)'
213 $ hg log -r 'children(.)'
214 11:c13eb81022ca f (no-eol)
214 11:c13eb81022ca f (no-eol)
215 $ hg histedit -r '.' --commands - <<EOF
215 $ hg histedit -r '.' --commands - <<EOF
216 > edit b346ab9a313d 6 c
216 > edit b346ab9a313d 6 c
217 > EOF
217 > EOF
218 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
218 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
219 adding c
219 adding c
220 Editing (b346ab9a313d), you may commit or record as needed now.
220 Editing (b346ab9a313d), you may commit or record as needed now.
221 (hg histedit --continue to resume)
221 (hg histedit --continue to resume)
222 [1]
222 [1]
223 $ echo c >> c
223 $ echo c >> c
224 $ hg histedit --continue
224 $ hg histedit --continue
225
225
226 $ hg log -r 'unstable()'
226 $ hg log -r 'unstable()'
227 11:c13eb81022ca f (no-eol)
227 11:c13eb81022ca f (no-eol)
228
228
229 stabilise
229 stabilise
230
230
231 $ hg rebase -r 'unstable()' -d .
231 $ hg rebase -r 'unstable()' -d .
232 rebasing 11:c13eb81022ca "f"
232 rebasing 11:c13eb81022ca "f"
233 $ hg up tip -q
233 $ hg up tip -q
234
234
235 Test dropping a changeset at the top of the stack
235 Test dropping a changeset at the top of the stack
236 -------------------------------------------------------
236 -------------------------------------------------------
237
237
238 Nothing is rewritten below; the working directory parent must be changed for the
238 Nothing is rewritten below; the working directory parent must be changed for the
239 dropped changeset to be hidden.
239 dropped changeset to be hidden.
240
240
241 $ cd ..
241 $ cd ..
242 $ hg clone base droplast
242 $ hg clone base droplast
243 updating to branch default
243 updating to branch default
244 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
244 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
245 $ cd droplast
245 $ cd droplast
246 $ hg histedit -r '40db8afa467b' --commands - << EOF
246 $ hg histedit -r '40db8afa467b' --commands - << EOF
247 > pick 40db8afa467b 10 c
247 > pick 40db8afa467b 10 c
248 > drop b449568bf7fc 11 f
248 > drop b449568bf7fc 11 f
249 > EOF
249 > EOF
250 $ hg log -G
250 $ hg log -G
251 @ 12:40db8afa467b c
251 @ 12:40db8afa467b c
252 |
252 |
253 o 0:cb9a9f314b8b a
253 o 0:cb9a9f314b8b a
254
254
255
255
256 With rewritten ancestors
256 With rewritten ancestors
257
257
258 $ echo e > e
258 $ echo e > e
259 $ hg add e
259 $ hg add e
260 $ hg commit -m g
260 $ hg commit -m g
261 $ echo f > f
261 $ echo f > f
262 $ hg add f
262 $ hg add f
263 $ hg commit -m h
263 $ hg commit -m h
264 $ hg histedit -r '40db8afa467b' --commands - << EOF
264 $ hg histedit -r '40db8afa467b' --commands - << EOF
265 > pick 47a8561c0449 12 g
265 > pick 47a8561c0449 12 g
266 > pick 40db8afa467b 10 c
266 > pick 40db8afa467b 10 c
267 > drop 1b3b05f35ff0 13 h
267 > drop 1b3b05f35ff0 13 h
268 > EOF
268 > EOF
269 $ hg log -G
269 $ hg log -G
270 @ 17:ee6544123ab8 c
270 @ 17:ee6544123ab8 c
271 |
271 |
272 o 16:269e713e9eae g
272 o 16:269e713e9eae g
273 |
273 |
274 o 0:cb9a9f314b8b a
274 o 0:cb9a9f314b8b a
275
275
276 $ cd ../base
276 $ cd ../base
277
277
278
278
279
279
280 Test phases support
280 Test phases support
281 ===========================================
281 ===========================================
282
282
283 Check that histedit respects immutability
283 Check that histedit respects immutability
284 -------------------------------------------
284 -------------------------------------------
285
285
286 $ cat >> $HGRCPATH << EOF
286 $ cat >> $HGRCPATH << EOF
287 > [ui]
287 > [ui]
288 > logtemplate= {rev}:{node|short} ({phase}) {desc|firstline}\n
288 > logtemplate= {rev}:{node|short} ({phase}) {desc|firstline}\n
289 > EOF
289 > EOF
290
290
291 $ hg ph -pv '.^'
291 $ hg ph -pv '.^'
292 phase changed for 2 changesets
292 phase changed for 2 changesets
293 $ hg log -G
293 $ hg log -G
294 @ 13:b449568bf7fc (draft) f
294 @ 13:b449568bf7fc (draft) f
295 |
295 |
296 o 12:40db8afa467b (public) c
296 o 12:40db8afa467b (public) c
297 |
297 |
298 o 0:cb9a9f314b8b (public) a
298 o 0:cb9a9f314b8b (public) a
299
299
300 $ hg histedit -r '.~2'
300 $ hg histedit -r '.~2'
301 abort: cannot edit public changeset: cb9a9f314b8b
301 abort: cannot edit public changeset: cb9a9f314b8b
302 (see 'hg help phases' for details)
302 (see 'hg help phases' for details)
303 [255]
303 [255]
304
304
305
305
306 Prepare further testing
306 Prepare further testing
307 -------------------------------------------
307 -------------------------------------------
308
308
309 $ for x in g h i j k ; do
309 $ for x in g h i j k ; do
310 > echo $x > $x
310 > echo $x > $x
311 > hg add $x
311 > hg add $x
312 > hg ci -m $x
312 > hg ci -m $x
313 > done
313 > done
314 $ hg phase --force --secret .~2
314 $ hg phase --force --secret .~2
315 $ hg log -G
315 $ hg log -G
316 @ 18:ee118ab9fa44 (secret) k
316 @ 18:ee118ab9fa44 (secret) k
317 |
317 |
318 o 17:3a6c53ee7f3d (secret) j
318 o 17:3a6c53ee7f3d (secret) j
319 |
319 |
320 o 16:b605fb7503f2 (secret) i
320 o 16:b605fb7503f2 (secret) i
321 |
321 |
322 o 15:7395e1ff83bd (draft) h
322 o 15:7395e1ff83bd (draft) h
323 |
323 |
324 o 14:6b70183d2492 (draft) g
324 o 14:6b70183d2492 (draft) g
325 |
325 |
326 o 13:b449568bf7fc (draft) f
326 o 13:b449568bf7fc (draft) f
327 |
327 |
328 o 12:40db8afa467b (public) c
328 o 12:40db8afa467b (public) c
329 |
329 |
330 o 0:cb9a9f314b8b (public) a
330 o 0:cb9a9f314b8b (public) a
331
331
332 $ cd ..
332 $ cd ..
333
333
334 simple phase conservation
334 simple phase conservation
335 -------------------------------------------
335 -------------------------------------------
336
336
337 Resulting changeset should conserve the phase of the original one whatever the
337 Resulting changeset should conserve the phase of the original one whatever the
338 phases.new-commit option is.
338 phases.new-commit option is.
339
339
340 New-commit as draft (default)
340 New-commit as draft (default)
341
341
342 $ cp -R base simple-draft
342 $ cp -R base simple-draft
343 $ cd simple-draft
343 $ cd simple-draft
344 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
344 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
345 > edit b449568bf7fc 11 f
345 > edit b449568bf7fc 11 f
346 > pick 6b70183d2492 12 g
346 > pick 6b70183d2492 12 g
347 > pick 7395e1ff83bd 13 h
347 > pick 7395e1ff83bd 13 h
348 > pick b605fb7503f2 14 i
348 > pick b605fb7503f2 14 i
349 > pick 3a6c53ee7f3d 15 j
349 > pick 3a6c53ee7f3d 15 j
350 > pick ee118ab9fa44 16 k
350 > pick ee118ab9fa44 16 k
351 > EOF
351 > EOF
352 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
352 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
353 adding f
353 adding f
354 Editing (b449568bf7fc), you may commit or record as needed now.
354 Editing (b449568bf7fc), you may commit or record as needed now.
355 (hg histedit --continue to resume)
355 (hg histedit --continue to resume)
356 [1]
356 [1]
357 $ echo f >> f
357 $ echo f >> f
358 $ hg histedit --continue
358 $ hg histedit --continue
359 $ hg log -G
359 $ hg log -G
360 @ 24:12e89af74238 (secret) k
360 @ 24:12e89af74238 (secret) k
361 |
361 |
362 o 23:636a8687b22e (secret) j
362 o 23:636a8687b22e (secret) j
363 |
363 |
364 o 22:ccaf0a38653f (secret) i
364 o 22:ccaf0a38653f (secret) i
365 |
365 |
366 o 21:11a89d1c2613 (draft) h
366 o 21:11a89d1c2613 (draft) h
367 |
367 |
368 o 20:c1dec7ca82ea (draft) g
368 o 20:c1dec7ca82ea (draft) g
369 |
369 |
370 o 19:087281e68428 (draft) f
370 o 19:087281e68428 (draft) f
371 |
371 |
372 o 12:40db8afa467b (public) c
372 o 12:40db8afa467b (public) c
373 |
373 |
374 o 0:cb9a9f314b8b (public) a
374 o 0:cb9a9f314b8b (public) a
375
375
376 $ cd ..
376 $ cd ..
377
377
378
378
379 New-commit as secret (config)
379 New-commit as secret (config)
380
380
381 $ cp -R base simple-secret
381 $ cp -R base simple-secret
382 $ cd simple-secret
382 $ cd simple-secret
383 $ cat >> .hg/hgrc << EOF
383 $ cat >> .hg/hgrc << EOF
384 > [phases]
384 > [phases]
385 > new-commit=secret
385 > new-commit=secret
386 > EOF
386 > EOF
387 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
387 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
388 > edit b449568bf7fc 11 f
388 > edit b449568bf7fc 11 f
389 > pick 6b70183d2492 12 g
389 > pick 6b70183d2492 12 g
390 > pick 7395e1ff83bd 13 h
390 > pick 7395e1ff83bd 13 h
391 > pick b605fb7503f2 14 i
391 > pick b605fb7503f2 14 i
392 > pick 3a6c53ee7f3d 15 j
392 > pick 3a6c53ee7f3d 15 j
393 > pick ee118ab9fa44 16 k
393 > pick ee118ab9fa44 16 k
394 > EOF
394 > EOF
395 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
395 0 files updated, 0 files merged, 6 files removed, 0 files unresolved
396 adding f
396 adding f
397 Editing (b449568bf7fc), you may commit or record as needed now.
397 Editing (b449568bf7fc), you may commit or record as needed now.
398 (hg histedit --continue to resume)
398 (hg histedit --continue to resume)
399 [1]
399 [1]
400 $ echo f >> f
400 $ echo f >> f
401 $ hg histedit --continue
401 $ hg histedit --continue
402 $ hg log -G
402 $ hg log -G
403 @ 24:12e89af74238 (secret) k
403 @ 24:12e89af74238 (secret) k
404 |
404 |
405 o 23:636a8687b22e (secret) j
405 o 23:636a8687b22e (secret) j
406 |
406 |
407 o 22:ccaf0a38653f (secret) i
407 o 22:ccaf0a38653f (secret) i
408 |
408 |
409 o 21:11a89d1c2613 (draft) h
409 o 21:11a89d1c2613 (draft) h
410 |
410 |
411 o 20:c1dec7ca82ea (draft) g
411 o 20:c1dec7ca82ea (draft) g
412 |
412 |
413 o 19:087281e68428 (draft) f
413 o 19:087281e68428 (draft) f
414 |
414 |
415 o 12:40db8afa467b (public) c
415 o 12:40db8afa467b (public) c
416 |
416 |
417 o 0:cb9a9f314b8b (public) a
417 o 0:cb9a9f314b8b (public) a
418
418
419 $ cd ..
419 $ cd ..
420
420
421
421
422 Changeset reordering
422 Changeset reordering
423 -------------------------------------------
423 -------------------------------------------
424
424
425 If a secret changeset is put before a draft one, all descendants should be secret.
425 If a secret changeset is put before a draft one, all descendants should be secret.
426 It seems more important to preserve the secret phase.
426 It seems more important to preserve the secret phase.
427
427
428 $ cp -R base reorder
428 $ cp -R base reorder
429 $ cd reorder
429 $ cd reorder
430 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
430 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
431 > pick b449568bf7fc 11 f
431 > pick b449568bf7fc 11 f
432 > pick 3a6c53ee7f3d 15 j
432 > pick 3a6c53ee7f3d 15 j
433 > pick 6b70183d2492 12 g
433 > pick 6b70183d2492 12 g
434 > pick b605fb7503f2 14 i
434 > pick b605fb7503f2 14 i
435 > pick 7395e1ff83bd 13 h
435 > pick 7395e1ff83bd 13 h
436 > pick ee118ab9fa44 16 k
436 > pick ee118ab9fa44 16 k
437 > EOF
437 > EOF
438 $ hg log -G
438 $ hg log -G
439 @ 23:558246857888 (secret) k
439 @ 23:558246857888 (secret) k
440 |
440 |
441 o 22:28bd44768535 (secret) h
441 o 22:28bd44768535 (secret) h
442 |
442 |
443 o 21:d5395202aeb9 (secret) i
443 o 21:d5395202aeb9 (secret) i
444 |
444 |
445 o 20:21edda8e341b (secret) g
445 o 20:21edda8e341b (secret) g
446 |
446 |
447 o 19:5ab64f3a4832 (secret) j
447 o 19:5ab64f3a4832 (secret) j
448 |
448 |
449 o 13:b449568bf7fc (draft) f
449 o 13:b449568bf7fc (draft) f
450 |
450 |
451 o 12:40db8afa467b (public) c
451 o 12:40db8afa467b (public) c
452 |
452 |
453 o 0:cb9a9f314b8b (public) a
453 o 0:cb9a9f314b8b (public) a
454
454
455 $ cd ..
455 $ cd ..
456
456
457 Changeset folding
457 Changeset folding
458 -------------------------------------------
458 -------------------------------------------
459
459
460 Folding a secret changeset with a draft one turns the result secret (again,
460 Folding a secret changeset with a draft one turns the result secret (again,
461 better safe than sorry). Folding between same-phase changesets still works.
461 better safe than sorry). Folding between same-phase changesets still works.
462
462
463 Note that there is some reordering in this series for more extensive testing
463 Note that there is some reordering in this series for more extensive testing
464
464
465 $ cp -R base folding
465 $ cp -R base folding
466 $ cd folding
466 $ cd folding
467 $ cat >> .hg/hgrc << EOF
467 $ cat >> .hg/hgrc << EOF
468 > [phases]
468 > [phases]
469 > new-commit=secret
469 > new-commit=secret
470 > EOF
470 > EOF
471 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
471 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
472 > pick 7395e1ff83bd 13 h
472 > pick 7395e1ff83bd 13 h
473 > fold b449568bf7fc 11 f
473 > fold b449568bf7fc 11 f
474 > pick 6b70183d2492 12 g
474 > pick 6b70183d2492 12 g
475 > fold 3a6c53ee7f3d 15 j
475 > fold 3a6c53ee7f3d 15 j
476 > pick b605fb7503f2 14 i
476 > pick b605fb7503f2 14 i
477 > fold ee118ab9fa44 16 k
477 > fold ee118ab9fa44 16 k
478 > EOF
478 > EOF
479 $ hg log -G
479 $ hg log -G
480 @ 27:f9daec13fb98 (secret) i
480 @ 27:f9daec13fb98 (secret) i
481 |
481 |
482 o 24:49807617f46a (secret) g
482 o 24:49807617f46a (secret) g
483 |
483 |
484 o 21:050280826e04 (draft) h
484 o 21:050280826e04 (draft) h
485 |
485 |
486 o 12:40db8afa467b (public) c
486 o 12:40db8afa467b (public) c
487 |
487 |
488 o 0:cb9a9f314b8b (public) a
488 o 0:cb9a9f314b8b (public) a
489
489
490 $ hg co 49807617f46a
490 $ hg co 49807617f46a
491 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
491 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
492 $ echo wat >> wat
492 $ echo wat >> wat
493 $ hg add wat
493 $ hg add wat
494 $ hg ci -m 'add wat'
494 $ hg ci -m 'add wat'
495 created new head
495 created new head
496 $ hg merge f9daec13fb98
496 $ hg merge f9daec13fb98
497 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
497 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
498 (branch merge, don't forget to commit)
498 (branch merge, don't forget to commit)
499 $ hg ci -m 'merge'
499 $ hg ci -m 'merge'
500 $ echo not wat > wat
500 $ echo not wat > wat
501 $ hg ci -m 'modify wat'
501 $ hg ci -m 'modify wat'
502 $ hg histedit 050280826e04
502 $ hg histedit 050280826e04
503 abort: cannot edit history that contains merges
503 abort: cannot edit history that contains merges
504 [255]
504 [255]
505 $ cd ..
505 $ cd ..
506
506
507 Check abort behavior
507 Check abort behavior
508 -------------------------------------------
508 -------------------------------------------
509
509
510 We check that abort properly cleans the repository so the same histedit can be
510 We check that abort properly cleans the repository so the same histedit can be
511 attempted later.
511 attempted later.
512
512
513 $ cp -R base abort
513 $ cp -R base abort
514 $ cd abort
514 $ cd abort
515 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
515 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
516 > pick b449568bf7fc 13 f
516 > pick b449568bf7fc 13 f
517 > pick 7395e1ff83bd 15 h
517 > pick 7395e1ff83bd 15 h
518 > pick 6b70183d2492 14 g
518 > pick 6b70183d2492 14 g
519 > pick b605fb7503f2 16 i
519 > pick b605fb7503f2 16 i
520 > roll 3a6c53ee7f3d 17 j
520 > roll 3a6c53ee7f3d 17 j
521 > edit ee118ab9fa44 18 k
521 > edit ee118ab9fa44 18 k
522 > EOF
522 > EOF
523 Editing (ee118ab9fa44), you may commit or record as needed now.
523 Editing (ee118ab9fa44), you may commit or record as needed now.
524 (hg histedit --continue to resume)
524 (hg histedit --continue to resume)
525 [1]
525 [1]
526
526
527 $ hg histedit --abort
527 $ hg histedit --abort
528 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
528 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
529 saved backup bundle to $TESTTMP/abort/.hg/strip-backup/4dc06258baa6-dff4ef05-backup.hg (glob)
529 saved backup bundle to $TESTTMP/abort/.hg/strip-backup/4dc06258baa6-dff4ef05-backup.hg (glob)
530
530
531 $ hg log -G
531 $ hg log -G
532 @ 18:ee118ab9fa44 (secret) k
532 @ 18:ee118ab9fa44 (secret) k
533 |
533 |
534 o 17:3a6c53ee7f3d (secret) j
534 o 17:3a6c53ee7f3d (secret) j
535 |
535 |
536 o 16:b605fb7503f2 (secret) i
536 o 16:b605fb7503f2 (secret) i
537 |
537 |
538 o 15:7395e1ff83bd (draft) h
538 o 15:7395e1ff83bd (draft) h
539 |
539 |
540 o 14:6b70183d2492 (draft) g
540 o 14:6b70183d2492 (draft) g
541 |
541 |
542 o 13:b449568bf7fc (draft) f
542 o 13:b449568bf7fc (draft) f
543 |
543 |
544 o 12:40db8afa467b (public) c
544 o 12:40db8afa467b (public) c
545 |
545 |
546 o 0:cb9a9f314b8b (public) a
546 o 0:cb9a9f314b8b (public) a
547
547
548 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
548 $ hg histedit -r 'b449568bf7fc' --commands - << EOF
549 > pick b449568bf7fc 13 f
549 > pick b449568bf7fc 13 f
550 > pick 7395e1ff83bd 15 h
550 > pick 7395e1ff83bd 15 h
551 > pick 6b70183d2492 14 g
551 > pick 6b70183d2492 14 g
552 > pick b605fb7503f2 16 i
552 > pick b605fb7503f2 16 i
553 > pick 3a6c53ee7f3d 17 j
553 > pick 3a6c53ee7f3d 17 j
554 > edit ee118ab9fa44 18 k
554 > edit ee118ab9fa44 18 k
555 > EOF
555 > EOF
556 Editing (ee118ab9fa44), you may commit or record as needed now.
556 Editing (ee118ab9fa44), you may commit or record as needed now.
557 (hg histedit --continue to resume)
557 (hg histedit --continue to resume)
558 [1]
558 [1]
559 $ hg histedit --continue
559 $ hg histedit --continue
560 $ hg log -G
560 $ hg log -G
561 @ 23:175d6b286a22 (secret) k
561 @ 23:175d6b286a22 (secret) k
562 |
562 |
563 o 22:44ca09d59ae4 (secret) j
563 o 22:44ca09d59ae4 (secret) j
564 |
564 |
565 o 21:31747692a644 (secret) i
565 o 21:31747692a644 (secret) i
566 |
566 |
567 o 20:9985cd4f21fa (draft) g
567 o 20:9985cd4f21fa (draft) g
568 |
568 |
569 o 19:4dc06258baa6 (draft) h
569 o 19:4dc06258baa6 (draft) h
570 |
570 |
571 o 13:b449568bf7fc (draft) f
571 o 13:b449568bf7fc (draft) f
572 |
572 |
573 o 12:40db8afa467b (public) c
573 o 12:40db8afa467b (public) c
574 |
574 |
575 o 0:cb9a9f314b8b (public) a
575 o 0:cb9a9f314b8b (public) a
576
576
@@ -1,1284 +1,1284
1 $ cat >> $HGRCPATH << EOF
1 $ cat >> $HGRCPATH << EOF
2 > [phases]
2 > [phases]
3 > # public changesets are not obsolete
3 > # public changesets are not obsolete
4 > publish=false
4 > publish=false
5 > [ui]
5 > [ui]
6 > logtemplate="{rev}:{node|short} ({phase}{if(obsolete, ' *{obsolete}*')}{if(troubles, ' {troubles}')}) [{tags} {bookmarks}] {desc|firstline}\n"
6 > logtemplate="{rev}:{node|short} ({phase}{if(obsolete, ' *{obsolete}*')}{if(troubles, ' {troubles}')}) [{tags} {bookmarks}] {desc|firstline}\n"
7 > EOF
7 > EOF
8 $ mkcommit() {
8 $ mkcommit() {
9 > echo "$1" > "$1"
9 > echo "$1" > "$1"
10 > hg add "$1"
10 > hg add "$1"
11 > hg ci -m "add $1"
11 > hg ci -m "add $1"
12 > }
12 > }
13 $ getid() {
13 $ getid() {
14 > hg log -T "{node}\n" --hidden -r "desc('$1')"
14 > hg log -T "{node}\n" --hidden -r "desc('$1')"
15 > }
15 > }
16
16
17 $ cat > debugkeys.py <<EOF
17 $ cat > debugkeys.py <<EOF
18 > def reposetup(ui, repo):
18 > def reposetup(ui, repo):
19 > class debugkeysrepo(repo.__class__):
19 > class debugkeysrepo(repo.__class__):
20 > def listkeys(self, namespace):
20 > def listkeys(self, namespace):
21 > ui.write('listkeys %s\n' % (namespace,))
21 > ui.write('listkeys %s\n' % (namespace,))
22 > return super(debugkeysrepo, self).listkeys(namespace)
22 > return super(debugkeysrepo, self).listkeys(namespace)
23 >
23 >
24 > if repo.local():
24 > if repo.local():
25 > repo.__class__ = debugkeysrepo
25 > repo.__class__ = debugkeysrepo
26 > EOF
26 > EOF
27
27
28 $ hg init tmpa
28 $ hg init tmpa
29 $ cd tmpa
29 $ cd tmpa
30 $ mkcommit kill_me
30 $ mkcommit kill_me
31
31
32 Checking that the feature is properly disabled
32 Checking that the feature is properly disabled
33
33
34 $ hg debugobsolete -d '0 0' `getid kill_me` -u babar
34 $ hg debugobsolete -d '0 0' `getid kill_me` -u babar
35 abort: creating obsolete markers is not enabled on this repo
35 abort: creating obsolete markers is not enabled on this repo
36 [255]
36 [255]
37
37
38 Enabling it
38 Enabling it
39
39
40 $ cat >> $HGRCPATH << EOF
40 $ cat >> $HGRCPATH << EOF
41 > [experimental]
41 > [experimental]
42 > evolution=createmarkers,exchange
42 > evolution=createmarkers,exchange
43 > EOF
43 > EOF
44
44
45 Killing a single changeset without replacement
45 Killing a single changeset without replacement
46
46
47 $ hg debugobsolete 0
47 $ hg debugobsolete 0
48 abort: changeset references must be full hexadecimal node identifiers
48 abort: changeset references must be full hexadecimal node identifiers
49 [255]
49 [255]
50 $ hg debugobsolete '00'
50 $ hg debugobsolete '00'
51 abort: changeset references must be full hexadecimal node identifiers
51 abort: changeset references must be full hexadecimal node identifiers
52 [255]
52 [255]
53 $ hg debugobsolete -d '0 0' `getid kill_me` -u babar
53 $ hg debugobsolete -d '0 0' `getid kill_me` -u babar
54 $ hg debugobsolete
54 $ hg debugobsolete
55 97b7c2d76b1845ed3eb988cd612611e72406cef0 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'babar'}
55 97b7c2d76b1845ed3eb988cd612611e72406cef0 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'babar'}
56
56
57 (test that mercurial is not confused)
57 (test that mercurial is not confused)
58
58
59 $ hg up null --quiet # having 0 as parent prevents it from being hidden
59 $ hg up null --quiet # having 0 as parent prevents it from being hidden
60 $ hg tip
60 $ hg tip
61 -1:000000000000 (public) [tip ]
61 -1:000000000000 (public) [tip ]
62 $ hg up --hidden tip --quiet
62 $ hg up --hidden tip --quiet
63
63
64 Killing a single changeset with itself should fail
64 Killing a single changeset with itself should fail
65 (simple local safeguard)
65 (simple local safeguard)
66
66
67 $ hg debugobsolete `getid kill_me` `getid kill_me`
67 $ hg debugobsolete `getid kill_me` `getid kill_me`
68 abort: bad obsmarker input: in-marker cycle with 97b7c2d76b1845ed3eb988cd612611e72406cef0
68 abort: bad obsmarker input: in-marker cycle with 97b7c2d76b1845ed3eb988cd612611e72406cef0
69 [255]
69 [255]
70
70
71 $ cd ..
71 $ cd ..
72
72
73 Killing a single changeset with replacement
73 Killing a single changeset with replacement
74 (and testing the format option)
74 (and testing the format option)
75
75
76 $ hg init tmpb
76 $ hg init tmpb
77 $ cd tmpb
77 $ cd tmpb
78 $ mkcommit a
78 $ mkcommit a
79 $ mkcommit b
79 $ mkcommit b
80 $ mkcommit original_c
80 $ mkcommit original_c
81 $ hg up "desc('b')"
81 $ hg up "desc('b')"
82 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
82 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
83 $ mkcommit new_c
83 $ mkcommit new_c
84 created new head
84 created new head
85 $ hg log -r 'hidden()' --template '{rev}:{node|short} {desc}\n' --hidden
85 $ hg log -r 'hidden()' --template '{rev}:{node|short} {desc}\n' --hidden
86 $ hg debugobsolete --config format.obsstore-version=0 --flag 12 `getid original_c` `getid new_c` -d '121 120'
86 $ hg debugobsolete --config format.obsstore-version=0 --flag 12 `getid original_c` `getid new_c` -d '121 120'
87 $ hg log -r 'hidden()' --template '{rev}:{node|short} {desc}\n' --hidden
87 $ hg log -r 'hidden()' --template '{rev}:{node|short} {desc}\n' --hidden
88 2:245bde4270cd add original_c
88 2:245bde4270cd add original_c
89 $ hg debugrevlog -cd
89 $ hg debugrevlog -cd
90 # rev p1rev p2rev start end deltastart base p1 p2 rawsize totalsize compression heads chainlen
90 # rev p1rev p2rev start end deltastart base p1 p2 rawsize totalsize compression heads chainlen
91 0 -1 -1 0 59 0 0 0 0 58 58 0 1 0
91 0 -1 -1 0 59 0 0 0 0 58 58 0 1 0
92 1 0 -1 59 118 59 59 0 0 58 116 0 1 0
92 1 0 -1 59 118 59 59 0 0 58 116 0 1 0
93 2 1 -1 118 193 118 118 59 0 76 192 0 1 0
93 2 1 -1 118 193 118 118 59 0 76 192 0 1 0
94 3 1 -1 193 260 193 193 59 0 66 258 0 2 0
94 3 1 -1 193 260 193 193 59 0 66 258 0 2 0
95 $ hg debugobsolete
95 $ hg debugobsolete
96 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
96 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
97
97
98 (check the version number of the obsstore)
98 (check the version number of the obsstore)
99
99
100 $ dd bs=1 count=1 if=.hg/store/obsstore 2>/dev/null
100 $ dd bs=1 count=1 if=.hg/store/obsstore 2>/dev/null
101 \x00 (no-eol) (esc)
101 \x00 (no-eol) (esc)
102
102
103 do it again (it reads the obsstore before adding a new changeset)
103 do it again (it reads the obsstore before adding a new changeset)
104
104
105 $ hg up '.^'
105 $ hg up '.^'
106 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
106 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
107 $ mkcommit new_2_c
107 $ mkcommit new_2_c
108 created new head
108 created new head
109 $ hg debugobsolete -d '1337 0' `getid new_c` `getid new_2_c`
109 $ hg debugobsolete -d '1337 0' `getid new_c` `getid new_2_c`
110 $ hg debugobsolete
110 $ hg debugobsolete
111 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
111 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
112 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
112 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
113
113
114 Register two markers with a missing node
114 Register two markers with a missing node
115
115
116 $ hg up '.^'
116 $ hg up '.^'
117 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
117 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
118 $ mkcommit new_3_c
118 $ mkcommit new_3_c
119 created new head
119 created new head
120 $ hg debugobsolete -d '1338 0' `getid new_2_c` 1337133713371337133713371337133713371337
120 $ hg debugobsolete -d '1338 0' `getid new_2_c` 1337133713371337133713371337133713371337
121 $ hg debugobsolete -d '1339 0' 1337133713371337133713371337133713371337 `getid new_3_c`
121 $ hg debugobsolete -d '1339 0' 1337133713371337133713371337133713371337 `getid new_3_c`
122 $ hg debugobsolete
122 $ hg debugobsolete
123 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
123 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
124 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
124 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
125 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
125 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
126 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
126 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
127
127
128 Test the --index option of debugobsolete command
128 Test the --index option of debugobsolete command
129 $ hg debugobsolete --index
129 $ hg debugobsolete --index
130 0 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
130 0 245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
131 1 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
131 1 cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
132 2 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
132 2 ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
133 3 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
133 3 1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
134
134
135 Refuse pathological nullid successors
135 Refuse pathological nullid successors
136 $ hg debugobsolete -d '9001 0' 1337133713371337133713371337133713371337 0000000000000000000000000000000000000000
136 $ hg debugobsolete -d '9001 0' 1337133713371337133713371337133713371337 0000000000000000000000000000000000000000
137 transaction abort!
137 transaction abort!
138 rollback completed
138 rollback completed
139 abort: bad obsolescence marker detected: invalid successors nullid
139 abort: bad obsolescence marker detected: invalid successors nullid
140 [255]
140 [255]
141
141
Check that graphlog detects that a changeset is obsolete:

  $ hg log -G
  @ 5:5601fb93a350 (draft) [tip ] add new_3_c
  |
  o 1:7c3bad9141dc (draft) [ ] add b
  |
  o 0:1f0dee641bb7 (draft) [ ] add a


check that heads does not report them

  $ hg heads
  5:5601fb93a350 (draft) [tip ] add new_3_c
  $ hg heads --hidden
  5:5601fb93a350 (draft) [tip ] add new_3_c
  4:ca819180edb9 (draft *obsolete*) [ ] add new_2_c
  3:cdbce2fbb163 (draft *obsolete*) [ ] add new_c
  2:245bde4270cd (draft *obsolete*) [ ] add original_c


check that summary does not report them

  $ hg init ../sink
  $ echo '[paths]' >> .hg/hgrc
  $ echo 'default=../sink' >> .hg/hgrc
  $ hg summary --remote
  parent: 5:5601fb93a350 tip
   add new_3_c
  branch: default
  commit: (clean)
  update: (current)
  phases: 3 draft
  remote: 3 outgoing

  $ hg summary --remote --hidden
  parent: 5:5601fb93a350 tip
   add new_3_c
  branch: default
  commit: (clean)
  update: 3 new changesets, 4 branch heads (merge)
  phases: 6 draft
  remote: 3 outgoing

check that various commands work well with filtering

  $ hg tip
  5:5601fb93a350 (draft) [tip ] add new_3_c
  $ hg log -r 6
  abort: unknown revision '6'!
  [255]
  $ hg log -r 4
  abort: hidden revision '4'!
  (use --hidden to access hidden revisions)
  [255]
  $ hg debugrevspec 'rev(6)'
  $ hg debugrevspec 'rev(4)'
  $ hg debugrevspec 'null'
  -1

Check that public changesets are not accounted as obsolete:

  $ hg --hidden phase --public 2
  $ hg log -G
  @ 5:5601fb93a350 (draft bumped) [tip ] add new_3_c
  |
  | o 2:245bde4270cd (public) [ ] add original_c
  |/
  o 1:7c3bad9141dc (public) [ ] add b
  |
  o 0:1f0dee641bb7 (public) [ ] add a


And that bumped changesets are detected
--------------------------------------

If we didn't filter obsolete changesets out, 3 and 4 would show up too. Also
note that the bumped changeset (5:5601fb93a350) is not a direct successor of
the public changeset

  $ hg log --hidden -r 'bumped()'
  5:5601fb93a350 (draft bumped) [tip ] add new_3_c

And that we can't push a bumped changeset

  $ hg push ../tmpa -r 0 --force #(make repo related)
  pushing to ../tmpa
  searching for changes
  warning: repository is unrelated
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  $ hg push ../tmpa
  pushing to ../tmpa
  searching for changes
  abort: push includes bumped changeset: 5601fb93a350!
  [255]

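(context note, not in the original test: a changeset is "bumped" when it is a successor
of a public changeset, so pushing it is refused until the situation is resolved, which
the next section does by recording a marker with the bumped-fix flag)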
Fixing "bumped" situation
We need to create a clone of 5 and add a special marker with a flag

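(clarifying note based on the flag value, not part of the original test: the `--flags 1`
below sets the "bumpedfix" bit on the marker, recording that the new draft changeset is
an explicit fix for the bumped situation rather than an ordinary rewrite of a public
changeset)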
  $ hg summary
  parent: 5:5601fb93a350 tip (bumped)
   add new_3_c
  branch: default
  commit: (clean)
  update: 1 new changesets, 2 branch heads (merge)
  phases: 1 draft
  bumped: 1 changesets
  $ hg up '5^'
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ hg revert -ar 5
  adding new_3_c
  $ hg ci -m 'add n3w_3_c'
  created new head
  $ hg debugobsolete -d '1338 0' --flags 1 `getid new_3_c` `getid n3w_3_c`
  $ hg log -r 'bumped()'
  $ hg log -G
  @ 6:6f9641995072 (draft) [tip ] add n3w_3_c
  |
  | o 2:245bde4270cd (public) [ ] add original_c
  |/
  o 1:7c3bad9141dc (public) [ ] add b
  |
  o 0:1f0dee641bb7 (public) [ ] add a


  $ cd ..

Revision 0 is hidden
--------------------

  $ hg init rev0hidden
  $ cd rev0hidden

  $ mkcommit kill0
  $ hg up -q null
  $ hg debugobsolete `getid kill0`
  $ mkcommit a
  $ mkcommit b

Should pick the first visible revision as "repo" node

  $ hg archive ../archive-null
  $ cat ../archive-null/.hg_archival.txt
  repo: 1f0dee641bb7258c56bd60e93edfa2405381c41e
  node: 7c3bad9141dcb46ff89abf5f61856facd56e476c
  branch: default
  latesttag: null
  latesttagdistance: 2
  changessincelatesttag: 2


  $ cd ..

Exchange Test
============================

Destination repo does not have any data
---------------------------------------

Simple incoming test

  $ hg init tmpc
  $ cd tmpc
  $ hg incoming ../tmpb
  comparing with ../tmpb
  0:1f0dee641bb7 (public) [ ] add a
  1:7c3bad9141dc (public) [ ] add b
  2:245bde4270cd (public) [ ] add original_c
  6:6f9641995072 (draft) [tip ] add n3w_3_c

Try to pull markers
(extinct changesets are excluded but markers are pulled)

  $ hg pull ../tmpb
  pulling from ../tmpb
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 4 changesets with 4 changes to 4 files (+1 heads)
  5 new obsolescence markers
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg debugobsolete
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}

Rollback/Transaction support

  $ hg debugobsolete -d '1340 0' aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
  $ hg debugobsolete
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
  aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb 0 (Thu Jan 01 00:22:20 1970 +0000) {'user': 'test'}
  $ hg rollback -n
  repository tip rolled back to revision 3 (undo debugobsolete)
  $ hg rollback
  repository tip rolled back to revision 3 (undo debugobsolete)
  $ hg debugobsolete
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}

  $ cd ..

Try to push markers

  $ hg init tmpd
  $ hg -R tmpb push tmpd
  pushing to tmpd
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 4 changesets with 4 changes to 4 files (+1 heads)
  5 new obsolescence markers
  $ hg -R tmpd debugobsolete | sort
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}

Check obsolete keys are exchanged only if source has an obsolete store

  $ hg init empty
  $ hg --config extensions.debugkeys=debugkeys.py -R empty push tmpd
  pushing to tmpd
  listkeys phases
  listkeys bookmarks
  no changes found
  listkeys phases
  [1]

clone support
(markers are copied and extinct changesets are included to allow hardlinks)

  $ hg clone tmpb clone-dest
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R clone-dest log -G --hidden
  @ 6:6f9641995072 (draft) [tip ] add n3w_3_c
  |
  | x 5:5601fb93a350 (draft *obsolete*) [ ] add new_3_c
  |/
  | x 4:ca819180edb9 (draft *obsolete*) [ ] add new_2_c
  |/
  | x 3:cdbce2fbb163 (draft *obsolete*) [ ] add new_c
  |/
  | o 2:245bde4270cd (public) [ ] add original_c
  |/
  o 1:7c3bad9141dc (public) [ ] add b
  |
  o 0:1f0dee641bb7 (public) [ ] add a

  $ hg -R clone-dest debugobsolete
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}


Destination repo has existing data
---------------------------------------

On pull

  $ hg init tmpe
  $ cd tmpe
  $ hg debugobsolete -d '1339 0' 1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00
  $ hg pull ../tmpb
  pulling from ../tmpb
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 4 changesets with 4 changes to 4 files (+1 heads)
  5 new obsolescence markers
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg debugobsolete
  1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}


On push

  $ hg push ../tmpc
  pushing to ../tmpc
  searching for changes
  no changes found
  1 new obsolescence markers
  [1]
  $ hg -R ../tmpc debugobsolete
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
  1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}

detect outgoing obsolete and unstable
---------------------------------------


  $ hg log -G
  o 3:6f9641995072 (draft) [tip ] add n3w_3_c
  |
  | o 2:245bde4270cd (public) [ ] add original_c
  |/
  o 1:7c3bad9141dc (public) [ ] add b
  |
  o 0:1f0dee641bb7 (public) [ ] add a

  $ hg up 'desc("n3w_3_c")'
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ mkcommit original_d
  $ mkcommit original_e
  $ hg debugobsolete --record-parents `getid original_d` -d '0 0'
  $ hg debugobsolete | grep `getid original_d`
  94b33453f93bdb8d457ef9b770851a618bf413e1 0 {6f96419950729f3671185b847352890f074f7557} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
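(note added for clarity, not part of the original test: since this marker prunes
original_d without naming any successor, `--record-parents` stores the pruned
changeset's parents instead, which is the {6f9641...} field in the line above; that
recorded parent is what keeps the prune marker attached to a surviving changeset when
relevant markers are computed later in this file)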
  $ hg log -r 'obsolete()'
  4:94b33453f93b (draft *obsolete*) [ ] add original_d
  $ hg summary
  parent: 5:cda648ca50f5 tip (unstable)
   add original_e
  branch: default
  commit: (clean)
  update: 1 new changesets, 2 branch heads (merge)
  phases: 3 draft
  unstable: 1 changesets
  $ hg log -G -r '::unstable()'
  @ 5:cda648ca50f5 (draft unstable) [tip ] add original_e
  |
  x 4:94b33453f93b (draft *obsolete*) [ ] add original_d
  |
  o 3:6f9641995072 (draft) [ ] add n3w_3_c
  |
  o 1:7c3bad9141dc (public) [ ] add b
  |
  o 0:1f0dee641bb7 (public) [ ] add a


refuse to push obsolete changeset

  $ hg push ../tmpc/ -r 'desc("original_d")'
  pushing to ../tmpc/
  searching for changes
  abort: push includes obsolete changeset: 94b33453f93b!
  [255]

refuse to push unstable changeset

  $ hg push ../tmpc/
  pushing to ../tmpc/
  searching for changes
  abort: push includes unstable changeset: cda648ca50f5!
  [255]

Test that extinct changesets are properly detected

  $ hg log -r 'extinct()'

Don't try to push extinct changesets

  $ hg init ../tmpf
  $ hg out ../tmpf
  comparing with ../tmpf
  searching for changes
  0:1f0dee641bb7 (public) [ ] add a
  1:7c3bad9141dc (public) [ ] add b
  2:245bde4270cd (public) [ ] add original_c
  3:6f9641995072 (draft) [ ] add n3w_3_c
  4:94b33453f93b (draft *obsolete*) [ ] add original_d
  5:cda648ca50f5 (draft unstable) [tip ] add original_e
  $ hg push ../tmpf -f # -f because we push unstable too
  pushing to ../tmpf
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 6 changesets with 6 changes to 6 files (+1 heads)
  7 new obsolescence markers

no warning displayed

  $ hg push ../tmpf
  pushing to ../tmpf
  searching for changes
  no changes found
  [1]

Do not warn about new head when the new head is a successor of a remote one

  $ hg log -G
  @ 5:cda648ca50f5 (draft unstable) [tip ] add original_e
  |
  x 4:94b33453f93b (draft *obsolete*) [ ] add original_d
  |
  o 3:6f9641995072 (draft) [ ] add n3w_3_c
  |
  | o 2:245bde4270cd (public) [ ] add original_c
  |/
  o 1:7c3bad9141dc (public) [ ] add b
  |
  o 0:1f0dee641bb7 (public) [ ] add a

  $ hg up -q 'desc(n3w_3_c)'
  $ mkcommit obsolete_e
  created new head
  $ hg debugobsolete `getid 'original_e'` `getid 'obsolete_e'`
  $ hg outgoing ../tmpf # parasite hg outgoing testing
  comparing with ../tmpf
  searching for changes
  6:3de5eca88c00 (draft) [tip ] add obsolete_e
  $ hg push ../tmpf
  pushing to ../tmpf
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  1 new obsolescence markers

test relevance computation
---------------------------------------

Checking simple case of "marker relevance".


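(background note, a loose summary rather than part of the original test: the markers
relevant to a revision are roughly the chain of markers that lead to it, i.e. markers
whose successor is the revision or one of its recorded precursors, plus prune markers
attached to those nodes through their recorded parents; the next few commands exercise
this)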
Reminder of the repo situation

  $ hg log --hidden --graph
  @ 6:3de5eca88c00 (draft) [tip ] add obsolete_e
  |
  | x 5:cda648ca50f5 (draft *obsolete*) [ ] add original_e
  | |
  | x 4:94b33453f93b (draft *obsolete*) [ ] add original_d
  |/
  o 3:6f9641995072 (draft) [ ] add n3w_3_c
  |
  | o 2:245bde4270cd (public) [ ] add original_c
  |/
  o 1:7c3bad9141dc (public) [ ] add b
  |
  o 0:1f0dee641bb7 (public) [ ] add a


List of all markers

  $ hg debugobsolete
  1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}
  94b33453f93bdb8d457ef9b770851a618bf413e1 0 {6f96419950729f3671185b847352890f074f7557} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  cda648ca50f50482b7055c0b0c4c117bba6733d9 3de5eca88c00aa039da7399a220f4a5221faa585 0 (*) {'user': 'test'} (glob)

List of changesets with no chain

  $ hg debugobsolete --hidden --rev ::2

List of changesets that are included on a marker chain

  $ hg debugobsolete --hidden --rev 6
  cda648ca50f50482b7055c0b0c4c117bba6733d9 3de5eca88c00aa039da7399a220f4a5221faa585 0 (*) {'user': 'test'} (glob)

List of changesets with a longer chain (including a pruned child)

  $ hg debugobsolete --hidden --rev 3
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  94b33453f93bdb8d457ef9b770851a618bf413e1 0 {6f96419950729f3671185b847352890f074f7557} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}

List of both

  $ hg debugobsolete --hidden --rev 3::6
  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  1339133913391339133913391339133913391339 ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:19 1970 +0000) {'user': 'test'}
  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f C (Thu Jan 01 00:00:01 1970 -0002) {'user': 'test'}
  5601fb93a350734d935195fee37f4054c529ff39 6f96419950729f3671185b847352890f074f7557 1 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  94b33453f93bdb8d457ef9b770851a618bf413e1 0 {6f96419950729f3671185b847352890f074f7557} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 (Thu Jan 01 00:22:18 1970 +0000) {'user': 'test'}
  cda648ca50f50482b7055c0b0c4c117bba6733d9 3de5eca88c00aa039da7399a220f4a5221faa585 0 (*) {'user': 'test'} (glob)
  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 (Thu Jan 01 00:22:17 1970 +0000) {'user': 'test'}

List of all markers in JSON

  $ hg debugobsolete -Tjson
  [
   {
    "date": [1339.0, 0],
    "flag": 0,
    "metadata": {"user": "test"},
    "precnode": "1339133913391339133913391339133913391339",
    "succnodes": ["ca819180edb99ed25ceafb3e9584ac287e240b00"]
   },
   {
    "date": [1339.0, 0],
    "flag": 0,
    "metadata": {"user": "test"},
    "precnode": "1337133713371337133713371337133713371337",
    "succnodes": ["5601fb93a350734d935195fee37f4054c529ff39"]
   },
   {
    "date": [121.0, 120],
    "flag": 12,
    "metadata": {"user": "test"},
    "precnode": "245bde4270cd1072a27757984f9cda8ba26f08ca",
    "succnodes": ["cdbce2fbb16313928851e97e0d85413f3f7eb77f"]
   },
   {
    "date": [1338.0, 0],
    "flag": 1,
    "metadata": {"user": "test"},
    "precnode": "5601fb93a350734d935195fee37f4054c529ff39",
    "succnodes": ["6f96419950729f3671185b847352890f074f7557"]
   },
   {
    "date": [1338.0, 0],
    "flag": 0,
    "metadata": {"user": "test"},
    "precnode": "ca819180edb99ed25ceafb3e9584ac287e240b00",
    "succnodes": ["1337133713371337133713371337133713371337"]
   },
   {
    "date": [1337.0, 0],
    "flag": 0,
    "metadata": {"user": "test"},
    "precnode": "cdbce2fbb16313928851e97e0d85413f3f7eb77f",
    "succnodes": ["ca819180edb99ed25ceafb3e9584ac287e240b00"]
   },
   {
    "date": [0.0, 0],
    "flag": 0,
    "metadata": {"user": "test"},
    "parentnodes": ["6f96419950729f3671185b847352890f074f7557"],
    "precnode": "94b33453f93bdb8d457ef9b770851a618bf413e1",
    "succnodes": []
   },
   {
    "date": *, (glob)
    "flag": 0,
    "metadata": {"user": "test"},
    "precnode": "cda648ca50f50482b7055c0b0c4c117bba6733d9",
    "succnodes": ["3de5eca88c00aa039da7399a220f4a5221faa585"]
   }
  ]

Template keywords

  $ hg debugobsolete -r6 -T '{succnodes % "{node|short}"} {date|shortdate}\n'
  3de5eca88c00 ????-??-?? (glob)
  $ hg debugobsolete -r6 -T '{join(metadata % "{key}={value}", " ")}\n'
  user=test
  $ hg debugobsolete -r6 -T '{metadata}\n'
  'user': 'test'
  $ hg debugobsolete -r6 -T '{flag} {get(metadata, "user")}\n'
  0 test

Test the debug output for exchange
----------------------------------

  $ hg pull ../tmpb --config 'experimental.obsmarkers-exchange-debug=True' # bundle2
  pulling from ../tmpb
  searching for changes
  no changes found
  obsmarker-exchange: 346 bytes received

check hgweb does not explode
====================================

  $ hg unbundle $TESTDIR/bundles/hgweb+obs.hg
  adding changesets
  adding manifests
  adding file changes
  added 62 changesets with 63 changes to 9 files (+60 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ for node in `hg log -r 'desc(babar_)' --template '{node}\n'`;
  > do
  >   hg debugobsolete $node
  > done
  $ hg up tip
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved

#if serve

  $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
  $ cat hg.pid >> $DAEMON_PIDS

check changelog view

  $ get-with-headers.py --headeronly localhost:$HGPORT 'shortlog/'
  200 Script output follows

check graph view

  $ get-with-headers.py --headeronly localhost:$HGPORT 'graph'
  200 Script output follows

check filelog view

  $ get-with-headers.py --headeronly localhost:$HGPORT 'log/'`hg log -r . -T "{node}"`/'babar'
  200 Script output follows

  $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/68'
  200 Script output follows
  $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/67'
  404 Not Found
  [1]

check that the web.view config option is honored:

  $ killdaemons.py hg.pid
  $ cat >> .hg/hgrc << EOF
  > [web]
  > view=all
  > EOF
  $ wait
  $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
  $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/67'
  200 Script output follows
  $ killdaemons.py hg.pid

Checking _enable=False warning if obsolete marker exists

  $ echo '[experimental]' >> $HGRCPATH
  $ echo "evolution=" >> $HGRCPATH
  $ hg log -r tip
  obsolete feature not enabled but 68 markers found!
  68:c15e9edfca13 (draft) [tip ] add celestine

reenable for later test

  $ echo '[experimental]' >> $HGRCPATH
  $ echo "evolution=createmarkers,exchange" >> $HGRCPATH

  $ rm hg.pid access.log errors.log
#endif

Several troubles on the same changeset (create an unstable and bumped changeset)

  $ hg debugobsolete `getid obsolete_e`
  $ hg debugobsolete `getid original_c` `getid babar`
  $ hg log --config ui.logtemplate= -r 'bumped() and unstable()'
  changeset: 7:50c51b361e60
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  trouble: unstable, bumped
  summary: add babar


test the "obsolete" templatekw

  $ hg log -r 'obsolete()'
  6:3de5eca88c00 (draft *obsolete*) [ ] add obsolete_e

test the "troubles" templatekw

  $ hg log -r 'bumped() and unstable()'
  7:50c51b361e60 (draft unstable bumped) [ ] add babar

test the default cmdline template

  $ hg log -T default -r 'bumped()'
  changeset: 7:50c51b361e60
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  trouble: unstable, bumped
  summary: add babar

  $ hg log -T default -r 'obsolete()'
  changeset: 6:3de5eca88c00
  parent: 3:6f9641995072
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: add obsolete_e


test summary output

  $ hg up -r 'bumped() and unstable()'
  1 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ hg summary
  parent: 7:50c51b361e60 (unstable, bumped)
   add babar
  branch: default
  commit: (clean)
  update: 2 new changesets (update)
  phases: 4 draft
  unstable: 2 changesets
  bumped: 1 changesets
  $ hg up -r 'obsolete()'
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ hg summary
  parent: 6:3de5eca88c00 (obsolete)
   add obsolete_e
  branch: default
  commit: (clean)
  update: 3 new changesets (update)
  phases: 4 draft
  unstable: 2 changesets
  bumped: 1 changesets

866 Test incoming/outgoing with changesets obsoleted remotely, known locally
866 Test incoming/outgoing with changesets obsoleted remotely, known locally
867 ===============================================================================
867 ===============================================================================
868
868
869 This tests issue 3805
869 This tests issue 3805
870
870
871 $ hg init repo-issue3805
871 $ hg init repo-issue3805
872 $ cd repo-issue3805
872 $ cd repo-issue3805
873 $ echo "base" > base
873 $ echo "base" > base
874 $ hg ci -Am "base"
874 $ hg ci -Am "base"
875 adding base
875 adding base
876 $ echo "foo" > foo
876 $ echo "foo" > foo
877 $ hg ci -Am "A"
877 $ hg ci -Am "A"
878 adding foo
878 adding foo
879 $ hg clone . ../other-issue3805
879 $ hg clone . ../other-issue3805
880 updating to branch default
880 updating to branch default
881 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
881 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
882 $ echo "bar" >> foo
882 $ echo "bar" >> foo
883 $ hg ci --amend
883 $ hg ci --amend
884 $ cd ../other-issue3805
884 $ cd ../other-issue3805
885 $ hg log -G
885 $ hg log -G
886 @ 1:29f0c6921ddd (draft) [tip ] A
886 @ 1:29f0c6921ddd (draft) [tip ] A
887 |
887 |
888 o 0:d20a80d4def3 (draft) [ ] base
888 o 0:d20a80d4def3 (draft) [ ] base
889
889
890 $ hg log -G -R ../repo-issue3805
890 $ hg log -G -R ../repo-issue3805
891 @ 3:323a9c3ddd91 (draft) [tip ] A
891 @ 3:323a9c3ddd91 (draft) [tip ] A
892 |
892 |
893 o 0:d20a80d4def3 (draft) [ ] base
893 o 0:d20a80d4def3 (draft) [ ] base
894
894
895 $ hg incoming
895 $ hg incoming
896 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
896 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
897 searching for changes
897 searching for changes
898 3:323a9c3ddd91 (draft) [tip ] A
898 3:323a9c3ddd91 (draft) [tip ] A
899 $ hg incoming --bundle ../issue3805.hg
899 $ hg incoming --bundle ../issue3805.hg
900 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
900 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
901 searching for changes
901 searching for changes
902 3:323a9c3ddd91 (draft) [tip ] A
902 3:323a9c3ddd91 (draft) [tip ] A
903 $ hg outgoing
903 $ hg outgoing
904 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
904 comparing with $TESTTMP/tmpe/repo-issue3805 (glob)
905 searching for changes
905 searching for changes
906 1:29f0c6921ddd (draft) [tip ] A
906 1:29f0c6921ddd (draft) [tip ] A
907
907
908 #if serve
908 #if serve
909
909
910 $ hg serve -R ../repo-issue3805 -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
910 $ hg serve -R ../repo-issue3805 -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
911 $ cat hg.pid >> $DAEMON_PIDS
911 $ cat hg.pid >> $DAEMON_PIDS
912
912
913 $ hg incoming http://localhost:$HGPORT
913 $ hg incoming http://localhost:$HGPORT
914 comparing with http://localhost:$HGPORT/
914 comparing with http://localhost:$HGPORT/
915 searching for changes
915 searching for changes
916 2:323a9c3ddd91 (draft) [tip ] A
916 2:323a9c3ddd91 (draft) [tip ] A
917 $ hg outgoing http://localhost:$HGPORT
917 $ hg outgoing http://localhost:$HGPORT
918 comparing with http://localhost:$HGPORT/
918 comparing with http://localhost:$HGPORT/
919 searching for changes
919 searching for changes
920 1:29f0c6921ddd (draft) [tip ] A
920 1:29f0c6921ddd (draft) [tip ] A
921
921
922 $ killdaemons.py
922 $ killdaemons.py
923
923
924 #endif
924 #endif
925
925
926 This tests issue 3814
926 This tests issue 3814
927
927
928 (nothing to push but a locally hidden changeset)
928 (nothing to push but a locally hidden changeset)
929
929
930 $ cd ..
930 $ cd ..
931 $ hg init repo-issue3814
931 $ hg init repo-issue3814
932 $ cd repo-issue3805
932 $ cd repo-issue3805
933 $ hg push -r 323a9c3ddd91 ../repo-issue3814
933 $ hg push -r 323a9c3ddd91 ../repo-issue3814
934 pushing to ../repo-issue3814
934 pushing to ../repo-issue3814
935 searching for changes
935 searching for changes
936 adding changesets
936 adding changesets
937 adding manifests
937 adding manifests
938 adding file changes
938 adding file changes
939 added 2 changesets with 2 changes to 2 files
939 added 2 changesets with 2 changes to 2 files
940 2 new obsolescence markers
940 2 new obsolescence markers
941 $ hg out ../repo-issue3814
941 $ hg out ../repo-issue3814
942 comparing with ../repo-issue3814
942 comparing with ../repo-issue3814
943 searching for changes
943 searching for changes
944 no changes found
944 no changes found
945 [1]
945 [1]
946
946
947 Test that a local tag blocks a changeset from being hidden
947 Test that a local tag blocks a changeset from being hidden
948
948
949 $ hg tag -l visible -r 1 --hidden
949 $ hg tag -l visible -r 1 --hidden
950 $ hg log -G
950 $ hg log -G
951 @ 3:323a9c3ddd91 (draft) [tip ] A
951 @ 3:323a9c3ddd91 (draft) [tip ] A
952 |
952 |
953 | x 1:29f0c6921ddd (draft *obsolete*) [visible ] A
953 | x 1:29f0c6921ddd (draft *obsolete*) [visible ] A
954 |/
954 |/
955 o 0:d20a80d4def3 (draft) [ ] base
955 o 0:d20a80d4def3 (draft) [ ] base
956
956
957 Test that removing a local tag does not cause some commands to fail
957 Test that removing a local tag does not cause some commands to fail
958
958
959 $ hg tag -l -r tip tiptag
959 $ hg tag -l -r tip tiptag
960 $ hg tags
960 $ hg tags
961 tiptag 3:323a9c3ddd91
961 tiptag 3:323a9c3ddd91
962 tip 3:323a9c3ddd91
962 tip 3:323a9c3ddd91
963 visible 1:29f0c6921ddd
963 visible 1:29f0c6921ddd
964 $ hg --config extensions.strip= strip -r tip --no-backup
964 $ hg --config extensions.strip= strip -r tip --no-backup
965 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
965 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
966 $ hg tags
966 $ hg tags
967 visible 1:29f0c6921ddd
967 visible 1:29f0c6921ddd
968 tip 1:29f0c6921ddd
968 tip 1:29f0c6921ddd
969
969
970 Test bundle overlay onto hidden revision
970 Test bundle overlay onto hidden revision
971
971
972 $ cd ..
972 $ cd ..
973 $ hg init repo-bundleoverlay
973 $ hg init repo-bundleoverlay
974 $ cd repo-bundleoverlay
974 $ cd repo-bundleoverlay
975 $ echo "A" > foo
975 $ echo "A" > foo
976 $ hg ci -Am "A"
976 $ hg ci -Am "A"
977 adding foo
977 adding foo
978 $ echo "B" >> foo
978 $ echo "B" >> foo
979 $ hg ci -m "B"
979 $ hg ci -m "B"
980 $ hg up 0
980 $ hg up 0
981 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
981 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
982 $ echo "C" >> foo
982 $ echo "C" >> foo
983 $ hg ci -m "C"
983 $ hg ci -m "C"
984 created new head
984 created new head
985 $ hg log -G
985 $ hg log -G
986 @ 2:c186d7714947 (draft) [tip ] C
986 @ 2:c186d7714947 (draft) [tip ] C
987 |
987 |
988 | o 1:44526ebb0f98 (draft) [ ] B
988 | o 1:44526ebb0f98 (draft) [ ] B
989 |/
989 |/
990 o 0:4b34ecfb0d56 (draft) [ ] A
990 o 0:4b34ecfb0d56 (draft) [ ] A
991
991
992
992
993 $ hg clone -r1 . ../other-bundleoverlay
993 $ hg clone -r1 . ../other-bundleoverlay
994 adding changesets
994 adding changesets
995 adding manifests
995 adding manifests
996 adding file changes
996 adding file changes
997 added 2 changesets with 2 changes to 1 files
997 added 2 changesets with 2 changes to 1 files
998 updating to branch default
998 updating to branch default
999 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
999 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1000 $ cd ../other-bundleoverlay
1000 $ cd ../other-bundleoverlay
1001 $ echo "B+" >> foo
1001 $ echo "B+" >> foo
1002 $ hg ci --amend -m "B+"
1002 $ hg ci --amend -m "B+"
1003 $ hg log -G --hidden
1003 $ hg log -G --hidden
1004 @ 3:b7d587542d40 (draft) [tip ] B+
1004 @ 3:b7d587542d40 (draft) [tip ] B+
1005 |
1005 |
1006 | x 2:eb95e9297e18 (draft *obsolete*) [ ] temporary amend commit for 44526ebb0f98
1006 | x 2:eb95e9297e18 (draft *obsolete*) [ ] temporary amend commit for 44526ebb0f98
1007 | |
1007 | |
1008 | x 1:44526ebb0f98 (draft *obsolete*) [ ] B
1008 | x 1:44526ebb0f98 (draft *obsolete*) [ ] B
1009 |/
1009 |/
1010 o 0:4b34ecfb0d56 (draft) [ ] A
1010 o 0:4b34ecfb0d56 (draft) [ ] A
1011
1011
1012
1012
1013 $ hg incoming ../repo-bundleoverlay --bundle ../bundleoverlay.hg
1013 $ hg incoming ../repo-bundleoverlay --bundle ../bundleoverlay.hg
1014 comparing with ../repo-bundleoverlay
1014 comparing with ../repo-bundleoverlay
1015 searching for changes
1015 searching for changes
1016 1:44526ebb0f98 (draft) [ ] B
1016 1:44526ebb0f98 (draft) [ ] B
1017 2:c186d7714947 (draft) [tip ] C
1017 2:c186d7714947 (draft) [tip ] C
1018 $ hg log -G -R ../bundleoverlay.hg
1018 $ hg log -G -R ../bundleoverlay.hg
1019 o 4:c186d7714947 (draft) [tip ] C
1019 o 4:c186d7714947 (draft) [tip ] C
1020 |
1020 |
1021 | @ 3:b7d587542d40 (draft) [ ] B+
1021 | @ 3:b7d587542d40 (draft) [ ] B+
1022 |/
1022 |/
1023 o 0:4b34ecfb0d56 (draft) [ ] A
1023 o 0:4b34ecfb0d56 (draft) [ ] A
1024
1024
1025
1025
1026 #if serve
1026 #if serve
1027
1027
1028 Test issue 4506
1028 Test issue 4506
1029
1029
1030 $ cd ..
1030 $ cd ..
1031 $ hg init repo-issue4506
1031 $ hg init repo-issue4506
1032 $ cd repo-issue4506
1032 $ cd repo-issue4506
1033 $ echo "0" > foo
1033 $ echo "0" > foo
1034 $ hg add foo
1034 $ hg add foo
1035 $ hg ci -m "content-0"
1035 $ hg ci -m "content-0"
1036
1036
1037 $ hg up null
1037 $ hg up null
1038 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
1038 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
1039 $ echo "1" > bar
1039 $ echo "1" > bar
1040 $ hg add bar
1040 $ hg add bar
1041 $ hg ci -m "content-1"
1041 $ hg ci -m "content-1"
1042 created new head
1042 created new head
1043 $ hg up 0
1043 $ hg up 0
1044 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
1044 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
1045 $ hg graft 1
1045 $ hg graft 1
1046 grafting 1:1c9eddb02162 "content-1" (tip)
1046 grafting 1:1c9eddb02162 "content-1" (tip)
1047
1047
1048 $ hg debugobsolete `hg log -r1 -T'{node}'` `hg log -r2 -T'{node}'`
1048 $ hg debugobsolete `hg log -r1 -T'{node}'` `hg log -r2 -T'{node}'`
1049
1049
1050 $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
1050 $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
1051 $ cat hg.pid >> $DAEMON_PIDS
1051 $ cat hg.pid >> $DAEMON_PIDS
1052
1052
1053 $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/1'
1053 $ get-with-headers.py --headeronly localhost:$HGPORT 'rev/1'
1054 404 Not Found
1054 404 Not Found
1055 [1]
1055 [1]
1056 $ get-with-headers.py --headeronly localhost:$HGPORT 'file/tip/bar'
1056 $ get-with-headers.py --headeronly localhost:$HGPORT 'file/tip/bar'
1057 200 Script output follows
1057 200 Script output follows
1058 $ get-with-headers.py --headeronly localhost:$HGPORT 'annotate/tip/bar'
1058 $ get-with-headers.py --headeronly localhost:$HGPORT 'annotate/tip/bar'
1059 200 Script output follows
1059 200 Script output follows
1060
1060
1061 $ killdaemons.py
1061 $ killdaemons.py
1062
1062
1063 #endif
1063 #endif
1064
1064
1065 Test heads computation on pending index changes with obsolescence markers
1065 Test heads computation on pending index changes with obsolescence markers
1066 $ cd ..
1066 $ cd ..
1067 $ cat >$TESTTMP/test_extension.py << EOF
1067 $ cat >$TESTTMP/test_extension.py << EOF
1068 > from mercurial import cmdutil
1068 > from mercurial import cmdutil
1069 > from mercurial.i18n import _
1069 > from mercurial.i18n import _
1070 >
1070 >
1071 > cmdtable = {}
1071 > cmdtable = {}
1072 > command = cmdutil.command(cmdtable)
1072 > command = cmdutil.command(cmdtable)
1073 > @command("amendtransient",[], _('hg amendtransient [rev]'))
1073 > @command("amendtransient",[], _('hg amendtransient [rev]'))
1074 > def amend(ui, repo, *pats, **opts):
1074 > def amend(ui, repo, *pats, **opts):
1075 > def commitfunc(ui, repo, message, match, opts):
1075 > def commitfunc(ui, repo, message, match, opts):
1076 > return repo.commit(message, repo['.'].user(), repo['.'].date(), match)
1076 > return repo.commit(message, repo['.'].user(), repo['.'].date(), match)
1077 > opts['message'] = 'Test'
1077 > opts['message'] = 'Test'
1078 > opts['logfile'] = None
1078 > opts['logfile'] = None
1079 > cmdutil.amend(ui, repo, commitfunc, repo['.'], {}, pats, opts)
1079 > cmdutil.amend(ui, repo, commitfunc, repo['.'], {}, pats, opts)
1080 > ui.write('%s\n' % repo.changelog.headrevs())
1080 > ui.write('%s\n' % repo.changelog.headrevs())
1081 > EOF
1081 > EOF
1082 $ cat >> $HGRCPATH << EOF
1082 $ cat >> $HGRCPATH << EOF
1083 > [extensions]
1083 > [extensions]
1084 > testextension=$TESTTMP/test_extension.py
1084 > testextension=$TESTTMP/test_extension.py
1085 > EOF
1085 > EOF
1086 $ hg init repo-issue-nativerevs-pending-changes
1086 $ hg init repo-issue-nativerevs-pending-changes
1087 $ cd repo-issue-nativerevs-pending-changes
1087 $ cd repo-issue-nativerevs-pending-changes
1088 $ mkcommit a
1088 $ mkcommit a
1089 $ mkcommit b
1089 $ mkcommit b
1090 $ hg up ".^"
1090 $ hg up ".^"
1091 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
1091 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
1092 $ echo aa > a
1092 $ echo aa > a
1093 $ hg amendtransient
1093 $ hg amendtransient
1094 [1, 3]
1094 [1, 3]
1095
1095
1096 Check that corrupted hidden cache does not crash
1096 Check that corrupted hidden cache does not crash
1097
1097
1098 $ printf "" > .hg/cache/hidden
1098 $ printf "" > .hg/cache/hidden
1099 $ hg log -r . -T '{node}' --debug
1099 $ hg log -r . -T '{node}' --debug
1100 corrupted hidden cache
1100 corrupted hidden cache
1101 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1101 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1102 $ hg log -r . -T '{node}' --debug
1102 $ hg log -r . -T '{node}' --debug
1103 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1103 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1104
1104
1105 #if unix-permissions
1105 #if unix-permissions
1106 Check that wrong hidden cache permission does not crash
1106 Check that wrong hidden cache permission does not crash
1107
1107
1108 $ chmod 000 .hg/cache/hidden
1108 $ chmod 000 .hg/cache/hidden
1109 $ hg log -r . -T '{node}' --debug
1109 $ hg log -r . -T '{node}' --debug
1110 cannot read hidden cache
1110 cannot read hidden cache
1111 error writing hidden changesets cache
1111 error writing hidden changesets cache
1112 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1112 8fd96dfc63e51ed5a8af1bec18eb4b19dbf83812 (no-eol)
1113 #endif
1113 #endif
1114
1114
1115 Test cache consistency for the visible filter
1115 Test cache consistency for the visible filter
1116 1) We want to make sure that the cached filtered revs are invalidated when
1116 1) We want to make sure that the cached filtered revs are invalidated when
1117 bookmarks change
1117 bookmarks change
1118 $ cd ..
1118 $ cd ..
1119 $ cat >$TESTTMP/test_extension.py << EOF
1119 $ cat >$TESTTMP/test_extension.py << EOF
1120 > import weakref
1120 > import weakref
1121 > from mercurial import cmdutil, extensions, bookmarks, repoview
1121 > from mercurial import cmdutil, extensions, bookmarks, repoview
1122 > def _bookmarkchanged(orig, bkmstoreinst, *args, **kwargs):
1122 > def _bookmarkchanged(orig, bkmstoreinst, *args, **kwargs):
1123 > reporef = weakref.ref(bkmstoreinst._repo)
1123 > reporef = weakref.ref(bkmstoreinst._repo)
1124 > def trhook(tr):
1124 > def trhook(tr):
1125 > repo = reporef()
1125 > repo = reporef()
1126 > hidden1 = repoview.computehidden(repo)
1126 > hidden1 = repoview.computehidden(repo)
1127 > hidden = repoview.filterrevs(repo, 'visible')
1127 > hidden = repoview.filterrevs(repo, 'visible')
1128 > if sorted(hidden1) != sorted(hidden):
1128 > if sorted(hidden1) != sorted(hidden):
1129 > print "cache inconsistency"
1129 > print "cache inconsistency"
1130 > bkmstoreinst._repo.currenttransaction().addpostclose('test_extension', trhook)
1130 > bkmstoreinst._repo.currenttransaction().addpostclose('test_extension', trhook)
1131 > orig(bkmstoreinst, *args, **kwargs)
1131 > orig(bkmstoreinst, *args, **kwargs)
1132 > def extsetup(ui):
1132 > def extsetup(ui):
1133 > extensions.wrapfunction(bookmarks.bmstore, 'recordchange',
1133 > extensions.wrapfunction(bookmarks.bmstore, 'recordchange',
1134 > _bookmarkchanged)
1134 > _bookmarkchanged)
1135 > EOF
1135 > EOF
1136
1136
1137 $ hg init repo-cache-inconsistency
1137 $ hg init repo-cache-inconsistency
1138 $ cd repo-issue-nativerevs-pending-changes
1138 $ cd repo-issue-nativerevs-pending-changes
1139 $ mkcommit a
1139 $ mkcommit a
1140 a already tracked!
1140 a already tracked!
1141 $ mkcommit b
1141 $ mkcommit b
1142 $ hg id
1142 $ hg id
1143 13bedc178fce tip
1143 13bedc178fce tip
1144 $ echo "hello" > b
1144 $ echo "hello" > b
1145 $ hg commit --amend -m "message"
1145 $ hg commit --amend -m "message"
1146 $ hg book bookb -r 13bedc178fce --hidden
1146 $ hg book bookb -r 13bedc178fce --hidden
1147 $ hg log -r 13bedc178fce
1147 $ hg log -r 13bedc178fce
1148 5:13bedc178fce (draft *obsolete*) [ bookb] add b
1148 5:13bedc178fce (draft *obsolete*) [ bookb] add b
1149 $ hg book -d bookb
1149 $ hg book -d bookb
1150 $ hg log -r 13bedc178fce
1150 $ hg log -r 13bedc178fce
1151 abort: hidden revision '13bedc178fce'!
1151 abort: hidden revision '13bedc178fce'!
1152 (use --hidden to access hidden revisions)
1152 (use --hidden to access hidden revisions)
1153 [255]
1153 [255]
1154
1154
1155 Empty out the test extension, as it isn't compatible with later parts
1155 Empty out the test extension, as it isn't compatible with later parts
1156 of the test.
1156 of the test.
1157 $ echo > $TESTTMP/test_extension.py
1157 $ echo > $TESTTMP/test_extension.py
1158
1158
1159 Test ability to pull a changeset to which locally-known obsolescence markers apply
1159 Test ability to pull a changeset to which locally-known obsolescence markers apply
1160 (issue4945)
1160 (issue4945)
1161
1161
1162 $ cd ..
1162 $ cd ..
1163 $ hg init issue4845
1163 $ hg init issue4845
1164 $ cd issue4845
1164 $ cd issue4845
1165
1165
1166 $ echo foo > f0
1166 $ echo foo > f0
1167 $ hg add f0
1167 $ hg add f0
1168 $ hg ci -m '0'
1168 $ hg ci -m '0'
1169 $ echo foo > f1
1169 $ echo foo > f1
1170 $ hg add f1
1170 $ hg add f1
1171 $ hg ci -m '1'
1171 $ hg ci -m '1'
1172 $ echo foo > f2
1172 $ echo foo > f2
1173 $ hg add f2
1173 $ hg add f2
1174 $ hg ci -m '2'
1174 $ hg ci -m '2'
1175
1175
1176 $ echo bar > f2
1176 $ echo bar > f2
1177 $ hg commit --amend --config experimental.evolution=createmarkers
1177 $ hg commit --amend --config experimental.evolution=createmarkers
1178 $ hg log -G
1178 $ hg log -G
1179 @ 4:b0551702f918 (draft) [tip ] 2
1179 @ 4:b0551702f918 (draft) [tip ] 2
1180 |
1180 |
1181 o 1:e016b03fd86f (draft) [ ] 1
1181 o 1:e016b03fd86f (draft) [ ] 1
1182 |
1182 |
1183 o 0:a78f55e5508c (draft) [ ] 0
1183 o 0:a78f55e5508c (draft) [ ] 0
1184
1184
1185 $ hg log -G --hidden
1185 $ hg log -G --hidden
1186 @ 4:b0551702f918 (draft) [tip ] 2
1186 @ 4:b0551702f918 (draft) [tip ] 2
1187 |
1187 |
1188 | x 3:f27abbcc1f77 (draft *obsolete*) [ ] temporary amend commit for e008cf283490
1188 | x 3:f27abbcc1f77 (draft *obsolete*) [ ] temporary amend commit for e008cf283490
1189 | |
1189 | |
1190 | x 2:e008cf283490 (draft *obsolete*) [ ] 2
1190 | x 2:e008cf283490 (draft *obsolete*) [ ] 2
1191 |/
1191 |/
1192 o 1:e016b03fd86f (draft) [ ] 1
1192 o 1:e016b03fd86f (draft) [ ] 1
1193 |
1193 |
1194 o 0:a78f55e5508c (draft) [ ] 0
1194 o 0:a78f55e5508c (draft) [ ] 0
1195
1195
1196
1196
1197 $ hg strip -r 1 --config extensions.strip=
1197 $ hg strip -r 1 --config extensions.strip=
1198 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
1198 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
1199 saved backup bundle to $TESTTMP/tmpe/issue4845/.hg/strip-backup/e016b03fd86f-c41c6bcc-backup.hg (glob)
1199 saved backup bundle to $TESTTMP/tmpe/issue4845/.hg/strip-backup/e016b03fd86f-c41c6bcc-backup.hg (glob)
1200 $ hg log -G
1200 $ hg log -G
1201 @ 0:a78f55e5508c (draft) [tip ] 0
1201 @ 0:a78f55e5508c (draft) [tip ] 0
1202
1202
1203 $ hg log -G --hidden
1203 $ hg log -G --hidden
1204 @ 0:a78f55e5508c (draft) [tip ] 0
1204 @ 0:a78f55e5508c (draft) [tip ] 0
1205
1205
1206
1206
1207 $ hg pull .hg/strip-backup/*
1207 $ hg pull .hg/strip-backup/*
1208 pulling from .hg/strip-backup/e016b03fd86f-c41c6bcc-backup.hg
1208 pulling from .hg/strip-backup/e016b03fd86f-c41c6bcc-backup.hg
1209 searching for changes
1209 searching for changes
1210 adding changesets
1210 adding changesets
1211 adding manifests
1211 adding manifests
1212 adding file changes
1212 adding file changes
1213 added 2 changesets with 2 changes to 2 files
1213 added 2 changesets with 2 changes to 2 files
1214 (run 'hg update' to get a working copy)
1214 (run 'hg update' to get a working copy)
1215 $ hg log -G
1215 $ hg log -G
1216 o 2:b0551702f918 (draft) [tip ] 2
1216 o 2:b0551702f918 (draft) [tip ] 2
1217 |
1217 |
1218 o 1:e016b03fd86f (draft) [ ] 1
1218 o 1:e016b03fd86f (draft) [ ] 1
1219 |
1219 |
1220 @ 0:a78f55e5508c (draft) [ ] 0
1220 @ 0:a78f55e5508c (draft) [ ] 0
1221
1221
1222 $ hg log -G --hidden
1222 $ hg log -G --hidden
1223 o 2:b0551702f918 (draft) [tip ] 2
1223 o 2:b0551702f918 (draft) [tip ] 2
1224 |
1224 |
1225 o 1:e016b03fd86f (draft) [ ] 1
1225 o 1:e016b03fd86f (draft) [ ] 1
1226 |
1226 |
1227 @ 0:a78f55e5508c (draft) [ ] 0
1227 @ 0:a78f55e5508c (draft) [ ] 0
1228
1228
1229 Test that 'hg debugobsolete --index --rev' can show indices of obsmarkers when
1229 Test that 'hg debugobsolete --index --rev' can show indices of obsmarkers when
1230 only a subset of them is displayed (because of the --rev option)
1230 only a subset of them is displayed (because of the --rev option)
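(For illustration only; this note and snippet are not part of the test. The
index reported by --index is the marker's position in the full obsstore, so
filtering with --rev keeps each marker's original position rather than
renumbering the survivors. A minimal Python sketch of that idea, using made-up
placeholder markers rather than real obsstore entries:)

    # Placeholder values standing in for obsmarkers; only the indexing
    # behaviour matters here.
    markers = ['m0', 'm1', 'm2', 'm3']
    relevant = {1, 3}                        # markers selected via --rev
    shown = [(i, m) for i, m in enumerate(markers) if i in relevant]
    assert shown == [(1, 'm1'), (3, 'm3')]   # original indices are preserved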
1231 $ hg init doindexrev
1231 $ hg init doindexrev
1232 $ cd doindexrev
1232 $ cd doindexrev
1233 $ echo a > a
1233 $ echo a > a
1234 $ hg ci -Am a
1234 $ hg ci -Am a
1235 adding a
1235 adding a
1236 $ hg ci --amend -m aa
1236 $ hg ci --amend -m aa
1237 $ echo b > b
1237 $ echo b > b
1238 $ hg ci -Am b
1238 $ hg ci -Am b
1239 adding b
1239 adding b
1240 $ hg ci --amend -m bb
1240 $ hg ci --amend -m bb
1241 $ echo c > c
1241 $ echo c > c
1242 $ hg ci -Am c
1242 $ hg ci -Am c
1243 adding c
1243 adding c
1244 $ hg ci --amend -m cc
1244 $ hg ci --amend -m cc
1245 $ echo d > d
1245 $ echo d > d
1246 $ hg ci -Am d
1246 $ hg ci -Am d
1247 adding d
1247 adding d
1248 $ hg ci --amend -m dd
1248 $ hg ci --amend -m dd
1249 $ hg debugobsolete --index --rev "3+7"
1249 $ hg debugobsolete --index --rev "3+7"
1250 1 6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1 d27fb9b066076fd921277a4b9e8b9cb48c95bc6a 0 \(.*\) {'user': 'test'} (re)
1250 1 6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1 d27fb9b066076fd921277a4b9e8b9cb48c95bc6a 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1251 3 4715cf767440ed891755448016c2b8cf70760c30 7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d 0 \(.*\) {'user': 'test'} (re)
1251 3 4715cf767440ed891755448016c2b8cf70760c30 7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1252 $ hg debugobsolete --index --rev "3+7" -Tjson
1252 $ hg debugobsolete --index --rev "3+7" -Tjson
1253 [
1253 [
1254 {
1254 {
1255 "date": *, (glob)
1255 "date": *, (glob)
1256 "flag": 0,
1256 "flag": 0,
1257 "index": 1,
1257 "index": 1,
1258 "metadata": {"user": "test"},
1258 "metadata": {"operation": "amend", "user": "test"},
1259 "precnode": "6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1",
1259 "precnode": "6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1",
1260 "succnodes": ["d27fb9b066076fd921277a4b9e8b9cb48c95bc6a"]
1260 "succnodes": ["d27fb9b066076fd921277a4b9e8b9cb48c95bc6a"]
1261 },
1261 },
1262 {
1262 {
1263 "date": *, (glob)
1263 "date": *, (glob)
1264 "flag": 0,
1264 "flag": 0,
1265 "index": 3,
1265 "index": 3,
1266 "metadata": {"user": "test"},
1266 "metadata": {"operation": "amend", "user": "test"},
1267 "precnode": "4715cf767440ed891755448016c2b8cf70760c30",
1267 "precnode": "4715cf767440ed891755448016c2b8cf70760c30",
1268 "succnodes": ["7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d"]
1268 "succnodes": ["7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d"]
1269 }
1269 }
1270 ]
1270 ]
1271
1271
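(A hedged sketch, not part of the test: the {'operation': ...} entries checked
above are ordinary marker metadata. Assuming only that obsolete.createmarkers()
accepts a metadata dict, as the JSON output above suggests, a history-rewriting
command could record its name along the following lines; the helper name
obsoletewithoperation is made up for illustration.)

    from mercurial import obsolete

    def obsoletewithoperation(repo, relations, operation):
        # relations: iterable of (oldctx, (newctx, ...)) pairs describing
        # which changesets were rewritten into which successors.
        obsolete.createmarkers(repo, relations,
                               metadata={'operation': operation})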
1272 Test the --delete option of debugobsolete command
1272 Test the --delete option of debugobsolete command
1273 $ hg debugobsolete --index
1273 $ hg debugobsolete --index
1274 0 cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b f9bd49731b0b175e42992a3c8fa6c678b2bc11f1 0 \(.*\) {'user': 'test'} (re)
1274 0 cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b f9bd49731b0b175e42992a3c8fa6c678b2bc11f1 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1275 1 6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1 d27fb9b066076fd921277a4b9e8b9cb48c95bc6a 0 \(.*\) {'user': 'test'} (re)
1275 1 6fdef60fcbabbd3d50e9b9cbc2a240724b91a5e1 d27fb9b066076fd921277a4b9e8b9cb48c95bc6a 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1276 2 1ab51af8f9b41ef8c7f6f3312d4706d870b1fb74 29346082e4a9e27042b62d2da0e2de211c027621 0 \(.*\) {'user': 'test'} (re)
1276 2 1ab51af8f9b41ef8c7f6f3312d4706d870b1fb74 29346082e4a9e27042b62d2da0e2de211c027621 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1277 3 4715cf767440ed891755448016c2b8cf70760c30 7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d 0 \(.*\) {'user': 'test'} (re)
1277 3 4715cf767440ed891755448016c2b8cf70760c30 7ae79c5d60f049c7b0dd02f5f25b9d60aaf7b36d 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1278 $ hg debugobsolete --delete 1 --delete 3
1278 $ hg debugobsolete --delete 1 --delete 3
1279 deleted 2 obsolescence markers
1279 deleted 2 obsolescence markers
1280 $ hg debugobsolete
1280 $ hg debugobsolete
1281 cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b f9bd49731b0b175e42992a3c8fa6c678b2bc11f1 0 \(.*\) {'user': 'test'} (re)
1281 cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b f9bd49731b0b175e42992a3c8fa6c678b2bc11f1 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1282 1ab51af8f9b41ef8c7f6f3312d4706d870b1fb74 29346082e4a9e27042b62d2da0e2de211c027621 0 \(.*\) {'user': 'test'} (re)
1282 1ab51af8f9b41ef8c7f6f3312d4706d870b1fb74 29346082e4a9e27042b62d2da0e2de211c027621 0 \(.*\) {'operation': 'amend', 'user': 'test'} (re)
1283 $ cd ..
1283 $ cd ..
1284
1284
@@ -1,980 +1,980
1 ==========================
1 ==========================
2 Test rebase with obsolete
2 Test rebase with obsolete
3 ==========================
3 ==========================
4
4
5 Enable obsolete
5 Enable obsolete
6
6
7 $ cat >> $HGRCPATH << EOF
7 $ cat >> $HGRCPATH << EOF
8 > [ui]
8 > [ui]
9 > logtemplate= {rev}:{node|short} {desc|firstline}
9 > logtemplate= {rev}:{node|short} {desc|firstline}
10 > [experimental]
10 > [experimental]
11 > evolution=createmarkers,allowunstable
11 > evolution=createmarkers,allowunstable
12 > [phases]
12 > [phases]
13 > publish=False
13 > publish=False
14 > [extensions]
14 > [extensions]
15 > rebase=
15 > rebase=
16 > EOF
16 > EOF
17
17
18 Setup rebase canonical repo
18 Setup rebase canonical repo
19
19
20 $ hg init base
20 $ hg init base
21 $ cd base
21 $ cd base
22 $ hg unbundle "$TESTDIR/bundles/rebase.hg"
22 $ hg unbundle "$TESTDIR/bundles/rebase.hg"
23 adding changesets
23 adding changesets
24 adding manifests
24 adding manifests
25 adding file changes
25 adding file changes
26 added 8 changesets with 7 changes to 7 files (+2 heads)
26 added 8 changesets with 7 changes to 7 files (+2 heads)
27 (run 'hg heads' to see heads, 'hg merge' to merge)
27 (run 'hg heads' to see heads, 'hg merge' to merge)
28 $ hg up tip
28 $ hg up tip
29 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
29 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
30 $ hg log -G
30 $ hg log -G
31 @ 7:02de42196ebe H
31 @ 7:02de42196ebe H
32 |
32 |
33 | o 6:eea13746799a G
33 | o 6:eea13746799a G
34 |/|
34 |/|
35 o | 5:24b6387c8c8c F
35 o | 5:24b6387c8c8c F
36 | |
36 | |
37 | o 4:9520eea781bc E
37 | o 4:9520eea781bc E
38 |/
38 |/
39 | o 3:32af7686d403 D
39 | o 3:32af7686d403 D
40 | |
40 | |
41 | o 2:5fddd98957c8 C
41 | o 2:5fddd98957c8 C
42 | |
42 | |
43 | o 1:42ccdea3bb16 B
43 | o 1:42ccdea3bb16 B
44 |/
44 |/
45 o 0:cd010b8cd998 A
45 o 0:cd010b8cd998 A
46
46
47 $ cd ..
47 $ cd ..
48
48
49 simple rebase
49 simple rebase
50 ---------------------------------
50 ---------------------------------
51
51
52 $ hg clone base simple
52 $ hg clone base simple
53 updating to branch default
53 updating to branch default
54 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
54 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
55 $ cd simple
55 $ cd simple
56 $ hg up 32af7686d403
56 $ hg up 32af7686d403
57 3 files updated, 0 files merged, 2 files removed, 0 files unresolved
57 3 files updated, 0 files merged, 2 files removed, 0 files unresolved
58 $ hg rebase -d eea13746799a
58 $ hg rebase -d eea13746799a
59 rebasing 1:42ccdea3bb16 "B"
59 rebasing 1:42ccdea3bb16 "B"
60 rebasing 2:5fddd98957c8 "C"
60 rebasing 2:5fddd98957c8 "C"
61 rebasing 3:32af7686d403 "D"
61 rebasing 3:32af7686d403 "D"
62 $ hg log -G
62 $ hg log -G
63 @ 10:8eeb3c33ad33 D
63 @ 10:8eeb3c33ad33 D
64 |
64 |
65 o 9:2327fea05063 C
65 o 9:2327fea05063 C
66 |
66 |
67 o 8:e4e5be0395b2 B
67 o 8:e4e5be0395b2 B
68 |
68 |
69 | o 7:02de42196ebe H
69 | o 7:02de42196ebe H
70 | |
70 | |
71 o | 6:eea13746799a G
71 o | 6:eea13746799a G
72 |\|
72 |\|
73 | o 5:24b6387c8c8c F
73 | o 5:24b6387c8c8c F
74 | |
74 | |
75 o | 4:9520eea781bc E
75 o | 4:9520eea781bc E
76 |/
76 |/
77 o 0:cd010b8cd998 A
77 o 0:cd010b8cd998 A
78
78
79 $ hg log --hidden -G
79 $ hg log --hidden -G
80 @ 10:8eeb3c33ad33 D
80 @ 10:8eeb3c33ad33 D
81 |
81 |
82 o 9:2327fea05063 C
82 o 9:2327fea05063 C
83 |
83 |
84 o 8:e4e5be0395b2 B
84 o 8:e4e5be0395b2 B
85 |
85 |
86 | o 7:02de42196ebe H
86 | o 7:02de42196ebe H
87 | |
87 | |
88 o | 6:eea13746799a G
88 o | 6:eea13746799a G
89 |\|
89 |\|
90 | o 5:24b6387c8c8c F
90 | o 5:24b6387c8c8c F
91 | |
91 | |
92 o | 4:9520eea781bc E
92 o | 4:9520eea781bc E
93 |/
93 |/
94 | x 3:32af7686d403 D
94 | x 3:32af7686d403 D
95 | |
95 | |
96 | x 2:5fddd98957c8 C
96 | x 2:5fddd98957c8 C
97 | |
97 | |
98 | x 1:42ccdea3bb16 B
98 | x 1:42ccdea3bb16 B
99 |/
99 |/
100 o 0:cd010b8cd998 A
100 o 0:cd010b8cd998 A
101
101
102 $ hg debugobsolete
102 $ hg debugobsolete
103 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 e4e5be0395b2cbd471ed22a26b1b6a1a0658a794 0 (*) {'user': 'test'} (glob)
103 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 e4e5be0395b2cbd471ed22a26b1b6a1a0658a794 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
104 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 2327fea05063f39961b14cb69435a9898dc9a245 0 (*) {'user': 'test'} (glob)
104 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 2327fea05063f39961b14cb69435a9898dc9a245 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
105 32af7686d403cf45b5d95f2d70cebea587ac806a 8eeb3c33ad33d452c89e5dcf611c347f978fb42b 0 (*) {'user': 'test'} (glob)
105 32af7686d403cf45b5d95f2d70cebea587ac806a 8eeb3c33ad33d452c89e5dcf611c347f978fb42b 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
106
106
107
107
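(Not part of the original test: a quick, hypothetical way to double-check which
operation each of the markers above records would be to reuse the JSON output
shown earlier in this document, for example:)

    $ hg debugobsolete -Tjson | grep '"operation"'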
108 $ cd ..
108 $ cd ..
109
109
110 empty changeset
110 empty changeset
111 ---------------------------------
111 ---------------------------------
112
112
113 $ hg clone base empty
113 $ hg clone base empty
114 updating to branch default
114 updating to branch default
115 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
115 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
116 $ cd empty
116 $ cd empty
117 $ hg up eea13746799a
117 $ hg up eea13746799a
118 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
118 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
119
119
120 We make copies of both the first changeset in the rebased set and another
120 We make copies of both the first changeset in the rebased set and another
121 changeset within it.
121 changeset within it.
122
122
123 $ hg graft 42ccdea3bb16 32af7686d403
123 $ hg graft 42ccdea3bb16 32af7686d403
124 grafting 1:42ccdea3bb16 "B"
124 grafting 1:42ccdea3bb16 "B"
125 grafting 3:32af7686d403 "D"
125 grafting 3:32af7686d403 "D"
126 $ hg rebase -s 42ccdea3bb16 -d .
126 $ hg rebase -s 42ccdea3bb16 -d .
127 rebasing 1:42ccdea3bb16 "B"
127 rebasing 1:42ccdea3bb16 "B"
128 note: rebase of 1:42ccdea3bb16 created no changes to commit
128 note: rebase of 1:42ccdea3bb16 created no changes to commit
129 rebasing 2:5fddd98957c8 "C"
129 rebasing 2:5fddd98957c8 "C"
130 rebasing 3:32af7686d403 "D"
130 rebasing 3:32af7686d403 "D"
131 note: rebase of 3:32af7686d403 created no changes to commit
131 note: rebase of 3:32af7686d403 created no changes to commit
132 $ hg log -G
132 $ hg log -G
133 o 10:5ae4c968c6ac C
133 o 10:5ae4c968c6ac C
134 |
134 |
135 @ 9:08483444fef9 D
135 @ 9:08483444fef9 D
136 |
136 |
137 o 8:8877864f1edb B
137 o 8:8877864f1edb B
138 |
138 |
139 | o 7:02de42196ebe H
139 | o 7:02de42196ebe H
140 | |
140 | |
141 o | 6:eea13746799a G
141 o | 6:eea13746799a G
142 |\|
142 |\|
143 | o 5:24b6387c8c8c F
143 | o 5:24b6387c8c8c F
144 | |
144 | |
145 o | 4:9520eea781bc E
145 o | 4:9520eea781bc E
146 |/
146 |/
147 o 0:cd010b8cd998 A
147 o 0:cd010b8cd998 A
148
148
149 $ hg log --hidden -G
149 $ hg log --hidden -G
150 o 10:5ae4c968c6ac C
150 o 10:5ae4c968c6ac C
151 |
151 |
152 @ 9:08483444fef9 D
152 @ 9:08483444fef9 D
153 |
153 |
154 o 8:8877864f1edb B
154 o 8:8877864f1edb B
155 |
155 |
156 | o 7:02de42196ebe H
156 | o 7:02de42196ebe H
157 | |
157 | |
158 o | 6:eea13746799a G
158 o | 6:eea13746799a G
159 |\|
159 |\|
160 | o 5:24b6387c8c8c F
160 | o 5:24b6387c8c8c F
161 | |
161 | |
162 o | 4:9520eea781bc E
162 o | 4:9520eea781bc E
163 |/
163 |/
164 | x 3:32af7686d403 D
164 | x 3:32af7686d403 D
165 | |
165 | |
166 | x 2:5fddd98957c8 C
166 | x 2:5fddd98957c8 C
167 | |
167 | |
168 | x 1:42ccdea3bb16 B
168 | x 1:42ccdea3bb16 B
169 |/
169 |/
170 o 0:cd010b8cd998 A
170 o 0:cd010b8cd998 A
171
171
172 $ hg debugobsolete
172 $ hg debugobsolete
173 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'user': 'test'} (glob)
173 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'operation': 'rebase', 'user': 'test'} (glob)
174 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'user': 'test'} (glob)
174 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
175 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'user': 'test'} (glob)
175 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'operation': 'rebase', 'user': 'test'} (glob)
176
176
177
177
178 More complex case where part of the rebase set was already rebased
178 More complex case where part of the rebase set was already rebased
179
179
180 $ hg rebase --rev 'desc(D)' --dest 'desc(H)'
180 $ hg rebase --rev 'desc(D)' --dest 'desc(H)'
181 rebasing 9:08483444fef9 "D"
181 rebasing 9:08483444fef9 "D"
182 $ hg debugobsolete
182 $ hg debugobsolete
183 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'user': 'test'} (glob)
183 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'operation': 'rebase', 'user': 'test'} (glob)
184 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'user': 'test'} (glob)
184 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
185 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'user': 'test'} (glob)
185 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'operation': 'rebase', 'user': 'test'} (glob)
186 08483444fef91d6224f6655ee586a65d263ad34c 4596109a6a4328c398bde3a4a3b6737cfade3003 0 (*) {'user': 'test'} (glob)
186 08483444fef91d6224f6655ee586a65d263ad34c 4596109a6a4328c398bde3a4a3b6737cfade3003 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
187 $ hg log -G
187 $ hg log -G
188 @ 11:4596109a6a43 D
188 @ 11:4596109a6a43 D
189 |
189 |
190 | o 10:5ae4c968c6ac C
190 | o 10:5ae4c968c6ac C
191 | |
191 | |
192 | x 9:08483444fef9 D
192 | x 9:08483444fef9 D
193 | |
193 | |
194 | o 8:8877864f1edb B
194 | o 8:8877864f1edb B
195 | |
195 | |
196 o | 7:02de42196ebe H
196 o | 7:02de42196ebe H
197 | |
197 | |
198 | o 6:eea13746799a G
198 | o 6:eea13746799a G
199 |/|
199 |/|
200 o | 5:24b6387c8c8c F
200 o | 5:24b6387c8c8c F
201 | |
201 | |
202 | o 4:9520eea781bc E
202 | o 4:9520eea781bc E
203 |/
203 |/
204 o 0:cd010b8cd998 A
204 o 0:cd010b8cd998 A
205
205
206 $ hg rebase --source 'desc(B)' --dest 'tip' --config experimental.rebaseskipobsolete=True
206 $ hg rebase --source 'desc(B)' --dest 'tip' --config experimental.rebaseskipobsolete=True
207 rebasing 8:8877864f1edb "B"
207 rebasing 8:8877864f1edb "B"
208 note: not rebasing 9:08483444fef9 "D", already in destination as 11:4596109a6a43 "D"
208 note: not rebasing 9:08483444fef9 "D", already in destination as 11:4596109a6a43 "D"
209 rebasing 10:5ae4c968c6ac "C"
209 rebasing 10:5ae4c968c6ac "C"
210 $ hg debugobsolete
210 $ hg debugobsolete
211 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'user': 'test'} (glob)
211 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 {cd010b8cd998f3981a5a8115f94f8da4ab506089} (*) {'operation': 'rebase', 'user': 'test'} (glob)
212 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'user': 'test'} (glob)
212 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 5ae4c968c6aca831df823664e706c9d4aa34473d 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
213 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'user': 'test'} (glob)
213 32af7686d403cf45b5d95f2d70cebea587ac806a 0 {5fddd98957c8a54a4d436dfe1da9d87f21a1b97b} (*) {'operation': 'rebase', 'user': 'test'} (glob)
214 08483444fef91d6224f6655ee586a65d263ad34c 4596109a6a4328c398bde3a4a3b6737cfade3003 0 (*) {'user': 'test'} (glob)
214 08483444fef91d6224f6655ee586a65d263ad34c 4596109a6a4328c398bde3a4a3b6737cfade3003 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
215 8877864f1edb05d0e07dc4ba77b67a80a7b86672 462a34d07e599b87ea08676a449373fe4e2e1347 0 (*) {'user': 'test'} (glob)
215 8877864f1edb05d0e07dc4ba77b67a80a7b86672 462a34d07e599b87ea08676a449373fe4e2e1347 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
216 5ae4c968c6aca831df823664e706c9d4aa34473d 98f6af4ee9539e14da4465128f894c274900b6e5 0 (*) {'user': 'test'} (glob)
216 5ae4c968c6aca831df823664e706c9d4aa34473d 98f6af4ee9539e14da4465128f894c274900b6e5 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
217 $ hg log --rev 'divergent()'
217 $ hg log --rev 'divergent()'
218 $ hg log -G
218 $ hg log -G
219 o 13:98f6af4ee953 C
219 o 13:98f6af4ee953 C
220 |
220 |
221 o 12:462a34d07e59 B
221 o 12:462a34d07e59 B
222 |
222 |
223 @ 11:4596109a6a43 D
223 @ 11:4596109a6a43 D
224 |
224 |
225 o 7:02de42196ebe H
225 o 7:02de42196ebe H
226 |
226 |
227 | o 6:eea13746799a G
227 | o 6:eea13746799a G
228 |/|
228 |/|
229 o | 5:24b6387c8c8c F
229 o | 5:24b6387c8c8c F
230 | |
230 | |
231 | o 4:9520eea781bc E
231 | o 4:9520eea781bc E
232 |/
232 |/
233 o 0:cd010b8cd998 A
233 o 0:cd010b8cd998 A
234
234
235 $ hg log --style default --debug -r 4596109a6a4328c398bde3a4a3b6737cfade3003
235 $ hg log --style default --debug -r 4596109a6a4328c398bde3a4a3b6737cfade3003
236 changeset: 11:4596109a6a4328c398bde3a4a3b6737cfade3003
236 changeset: 11:4596109a6a4328c398bde3a4a3b6737cfade3003
237 phase: draft
237 phase: draft
238 parent: 7:02de42196ebee42ef284b6780a87cdc96e8eaab6
238 parent: 7:02de42196ebee42ef284b6780a87cdc96e8eaab6
239 parent: -1:0000000000000000000000000000000000000000
239 parent: -1:0000000000000000000000000000000000000000
240 manifest: 11:a91006e3a02f1edf631f7018e6e5684cf27dd905
240 manifest: 11:a91006e3a02f1edf631f7018e6e5684cf27dd905
241 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
241 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
242 date: Sat Apr 30 15:24:48 2011 +0200
242 date: Sat Apr 30 15:24:48 2011 +0200
243 files+: D
243 files+: D
244 extra: branch=default
244 extra: branch=default
245 extra: rebase_source=08483444fef91d6224f6655ee586a65d263ad34c
245 extra: rebase_source=08483444fef91d6224f6655ee586a65d263ad34c
246 extra: source=32af7686d403cf45b5d95f2d70cebea587ac806a
246 extra: source=32af7686d403cf45b5d95f2d70cebea587ac806a
247 description:
247 description:
248 D
248 D
249
249
250
250
251 $ hg up -qr 'desc(G)'
251 $ hg up -qr 'desc(G)'
252 $ hg graft 4596109a6a4328c398bde3a4a3b6737cfade3003
252 $ hg graft 4596109a6a4328c398bde3a4a3b6737cfade3003
253 grafting 11:4596109a6a43 "D"
253 grafting 11:4596109a6a43 "D"
254 $ hg up -qr 'desc(E)'
254 $ hg up -qr 'desc(E)'
255 $ hg rebase -s tip -d .
255 $ hg rebase -s tip -d .
256 rebasing 14:9e36056a46e3 "D" (tip)
256 rebasing 14:9e36056a46e3 "D" (tip)
257 $ hg log --style default --debug -r tip
257 $ hg log --style default --debug -r tip
258 changeset: 15:627d4614809036ba22b9e7cb31638ddc06ab99ab
258 changeset: 15:627d4614809036ba22b9e7cb31638ddc06ab99ab
259 tag: tip
259 tag: tip
260 phase: draft
260 phase: draft
261 parent: 4:9520eea781bcca16c1e15acc0ba14335a0e8e5ba
261 parent: 4:9520eea781bcca16c1e15acc0ba14335a0e8e5ba
262 parent: -1:0000000000000000000000000000000000000000
262 parent: -1:0000000000000000000000000000000000000000
263 manifest: 15:648e8ede73ae3e497d093d3a4c8fcc2daa864f42
263 manifest: 15:648e8ede73ae3e497d093d3a4c8fcc2daa864f42
264 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
264 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
265 date: Sat Apr 30 15:24:48 2011 +0200
265 date: Sat Apr 30 15:24:48 2011 +0200
266 files+: D
266 files+: D
267 extra: branch=default
267 extra: branch=default
268 extra: intermediate-source=4596109a6a4328c398bde3a4a3b6737cfade3003
268 extra: intermediate-source=4596109a6a4328c398bde3a4a3b6737cfade3003
269 extra: rebase_source=9e36056a46e37c9776168c7375734eebc70e294f
269 extra: rebase_source=9e36056a46e37c9776168c7375734eebc70e294f
270 extra: source=32af7686d403cf45b5d95f2d70cebea587ac806a
270 extra: source=32af7686d403cf45b5d95f2d70cebea587ac806a
271 description:
271 description:
272 D
272 D
273
273
274
274
275 Start a rebase from a commit that is obsolete but not hidden, only because it
275 Start a rebase from a commit that is obsolete but not hidden, only because it
276 is the working copy parent. We should be moved back to the starting commit as
276 is the working copy parent. We should be moved back to the starting commit as
277 usual, even though that commit stays hidden until we are moved back onto it.
277 usual, even though that commit stays hidden until we are moved back onto it.
278
278
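(An aside, not part of the test: once the update below puts us on that obsolete
changeset, it should be obsolete yet still visible, since obsolete changesets
are normally hidden unless something, such as being the working copy parent,
keeps them visible. A hypothetical way to confirm that at that point:)

    $ hg log -r 'obsolete()'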
279 $ hg --hidden up -qr 'first(hidden())'
279 $ hg --hidden up -qr 'first(hidden())'
280 $ hg rebase --rev 13 --dest 15
280 $ hg rebase --rev 13 --dest 15
281 rebasing 13:98f6af4ee953 "C"
281 rebasing 13:98f6af4ee953 "C"
282 $ hg log -G
282 $ hg log -G
283 o 16:294a2b93eb4d C
283 o 16:294a2b93eb4d C
284 |
284 |
285 o 15:627d46148090 D
285 o 15:627d46148090 D
286 |
286 |
287 | o 12:462a34d07e59 B
287 | o 12:462a34d07e59 B
288 | |
288 | |
289 | o 11:4596109a6a43 D
289 | o 11:4596109a6a43 D
290 | |
290 | |
291 | o 7:02de42196ebe H
291 | o 7:02de42196ebe H
292 | |
292 | |
293 +---o 6:eea13746799a G
293 +---o 6:eea13746799a G
294 | |/
294 | |/
295 | o 5:24b6387c8c8c F
295 | o 5:24b6387c8c8c F
296 | |
296 | |
297 o | 4:9520eea781bc E
297 o | 4:9520eea781bc E
298 |/
298 |/
299 | @ 1:42ccdea3bb16 B
299 | @ 1:42ccdea3bb16 B
300 |/
300 |/
301 o 0:cd010b8cd998 A
301 o 0:cd010b8cd998 A
302
302
303
303
304 $ cd ..
304 $ cd ..
305
305
306 collapse rebase
306 collapse rebase
307 ---------------------------------
307 ---------------------------------
308
308
309 $ hg clone base collapse
309 $ hg clone base collapse
310 updating to branch default
310 updating to branch default
311 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
311 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
312 $ cd collapse
312 $ cd collapse
313 $ hg rebase -s 42ccdea3bb16 -d eea13746799a --collapse
313 $ hg rebase -s 42ccdea3bb16 -d eea13746799a --collapse
314 rebasing 1:42ccdea3bb16 "B"
314 rebasing 1:42ccdea3bb16 "B"
315 rebasing 2:5fddd98957c8 "C"
315 rebasing 2:5fddd98957c8 "C"
316 rebasing 3:32af7686d403 "D"
316 rebasing 3:32af7686d403 "D"
317 $ hg log -G
317 $ hg log -G
318 o 8:4dc2197e807b Collapsed revision
318 o 8:4dc2197e807b Collapsed revision
319 |
319 |
320 | @ 7:02de42196ebe H
320 | @ 7:02de42196ebe H
321 | |
321 | |
322 o | 6:eea13746799a G
322 o | 6:eea13746799a G
323 |\|
323 |\|
324 | o 5:24b6387c8c8c F
324 | o 5:24b6387c8c8c F
325 | |
325 | |
326 o | 4:9520eea781bc E
326 o | 4:9520eea781bc E
327 |/
327 |/
328 o 0:cd010b8cd998 A
328 o 0:cd010b8cd998 A
329
329
330 $ hg log --hidden -G
330 $ hg log --hidden -G
331 o 8:4dc2197e807b Collapsed revision
331 o 8:4dc2197e807b Collapsed revision
332 |
332 |
333 | @ 7:02de42196ebe H
333 | @ 7:02de42196ebe H
334 | |
334 | |
335 o | 6:eea13746799a G
335 o | 6:eea13746799a G
336 |\|
336 |\|
337 | o 5:24b6387c8c8c F
337 | o 5:24b6387c8c8c F
338 | |
338 | |
339 o | 4:9520eea781bc E
339 o | 4:9520eea781bc E
340 |/
340 |/
341 | x 3:32af7686d403 D
341 | x 3:32af7686d403 D
342 | |
342 | |
343 | x 2:5fddd98957c8 C
343 | x 2:5fddd98957c8 C
344 | |
344 | |
345 | x 1:42ccdea3bb16 B
345 | x 1:42ccdea3bb16 B
346 |/
346 |/
347 o 0:cd010b8cd998 A
347 o 0:cd010b8cd998 A
348
348
349 $ hg id --debug -r tip
349 $ hg id --debug -r tip
350 4dc2197e807bae9817f09905b50ab288be2dbbcf tip
350 4dc2197e807bae9817f09905b50ab288be2dbbcf tip
351 $ hg debugobsolete
351 $ hg debugobsolete
352 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'user': 'test'} (glob)
352 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
353 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'user': 'test'} (glob)
353 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
354 32af7686d403cf45b5d95f2d70cebea587ac806a 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'user': 'test'} (glob)
354 32af7686d403cf45b5d95f2d70cebea587ac806a 4dc2197e807bae9817f09905b50ab288be2dbbcf 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
355
355
356 $ cd ..
356 $ cd ..
357
357
358 Rebase set has hidden descendants
358 Rebase set has hidden descendants
359 ---------------------------------
359 ---------------------------------
360
360
361 We rebase a changeset which has a hidden descendant. The hidden changeset must
361 We rebase a changeset which has a hidden descendant. The hidden changeset must
362 not be rebased.
362 not be rebased.
363
363
364 $ hg clone base hidden
364 $ hg clone base hidden
365 updating to branch default
365 updating to branch default
366 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
366 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
367 $ cd hidden
367 $ cd hidden
368 $ hg rebase -s 5fddd98957c8 -d eea13746799a
368 $ hg rebase -s 5fddd98957c8 -d eea13746799a
369 rebasing 2:5fddd98957c8 "C"
369 rebasing 2:5fddd98957c8 "C"
370 rebasing 3:32af7686d403 "D"
370 rebasing 3:32af7686d403 "D"
371 $ hg rebase -s 42ccdea3bb16 -d 02de42196ebe
371 $ hg rebase -s 42ccdea3bb16 -d 02de42196ebe
372 rebasing 1:42ccdea3bb16 "B"
372 rebasing 1:42ccdea3bb16 "B"
373 $ hg log -G
373 $ hg log -G
374 o 10:7c6027df6a99 B
374 o 10:7c6027df6a99 B
375 |
375 |
376 | o 9:cf44d2f5a9f4 D
376 | o 9:cf44d2f5a9f4 D
377 | |
377 | |
378 | o 8:e273c5e7d2d2 C
378 | o 8:e273c5e7d2d2 C
379 | |
379 | |
380 @ | 7:02de42196ebe H
380 @ | 7:02de42196ebe H
381 | |
381 | |
382 | o 6:eea13746799a G
382 | o 6:eea13746799a G
383 |/|
383 |/|
384 o | 5:24b6387c8c8c F
384 o | 5:24b6387c8c8c F
385 | |
385 | |
386 | o 4:9520eea781bc E
386 | o 4:9520eea781bc E
387 |/
387 |/
388 o 0:cd010b8cd998 A
388 o 0:cd010b8cd998 A
389
389
390 $ hg log --hidden -G
390 $ hg log --hidden -G
391 o 10:7c6027df6a99 B
391 o 10:7c6027df6a99 B
392 |
392 |
393 | o 9:cf44d2f5a9f4 D
393 | o 9:cf44d2f5a9f4 D
394 | |
394 | |
395 | o 8:e273c5e7d2d2 C
395 | o 8:e273c5e7d2d2 C
396 | |
396 | |
397 @ | 7:02de42196ebe H
397 @ | 7:02de42196ebe H
398 | |
398 | |
399 | o 6:eea13746799a G
399 | o 6:eea13746799a G
400 |/|
400 |/|
401 o | 5:24b6387c8c8c F
401 o | 5:24b6387c8c8c F
402 | |
402 | |
403 | o 4:9520eea781bc E
403 | o 4:9520eea781bc E
404 |/
404 |/
405 | x 3:32af7686d403 D
405 | x 3:32af7686d403 D
406 | |
406 | |
407 | x 2:5fddd98957c8 C
407 | x 2:5fddd98957c8 C
408 | |
408 | |
409 | x 1:42ccdea3bb16 B
409 | x 1:42ccdea3bb16 B
410 |/
410 |/
411 o 0:cd010b8cd998 A
411 o 0:cd010b8cd998 A
412
412
413 $ hg debugobsolete
413 $ hg debugobsolete
414 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b e273c5e7d2d29df783dce9f9eaa3ac4adc69c15d 0 (*) {'user': 'test'} (glob)
414 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b e273c5e7d2d29df783dce9f9eaa3ac4adc69c15d 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
415 32af7686d403cf45b5d95f2d70cebea587ac806a cf44d2f5a9f4297a62be94cbdd3dff7c7dc54258 0 (*) {'user': 'test'} (glob)
415 32af7686d403cf45b5d95f2d70cebea587ac806a cf44d2f5a9f4297a62be94cbdd3dff7c7dc54258 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
416 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 7c6027df6a99d93f461868e5433f63bde20b6dfb 0 (*) {'user': 'test'} (glob)
416 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 7c6027df6a99d93f461868e5433f63bde20b6dfb 0 (*) {'operation': 'rebase', 'user': 'test'} (glob)
417
417
418 Test that rewriting that leaves instability behind is allowed
418 Test that rewriting that leaves instability behind is allowed
419 ---------------------------------------------------------------------
419 ---------------------------------------------------------------------
420
420
421 $ hg log -r 'children(8)'
421 $ hg log -r 'children(8)'
422 9:cf44d2f5a9f4 D (no-eol)
422 9:cf44d2f5a9f4 D (no-eol)
423 $ hg rebase -r 8
423 $ hg rebase -r 8
424 rebasing 8:e273c5e7d2d2 "C"
424 rebasing 8:e273c5e7d2d2 "C"
425 $ hg log -G
425 $ hg log -G
426 o 11:0d8f238b634c C
426 o 11:0d8f238b634c C
427 |
427 |
428 o 10:7c6027df6a99 B
428 o 10:7c6027df6a99 B
429 |
429 |
430 | o 9:cf44d2f5a9f4 D
430 | o 9:cf44d2f5a9f4 D
431 | |
431 | |
432 | x 8:e273c5e7d2d2 C
432 | x 8:e273c5e7d2d2 C
433 | |
433 | |
434 @ | 7:02de42196ebe H
434 @ | 7:02de42196ebe H
435 | |
435 | |
436 | o 6:eea13746799a G
436 | o 6:eea13746799a G
437 |/|
437 |/|
438 o | 5:24b6387c8c8c F
438 o | 5:24b6387c8c8c F
439 | |
439 | |
440 | o 4:9520eea781bc E
440 | o 4:9520eea781bc E
441 |/
441 |/
442 o 0:cd010b8cd998 A
442 o 0:cd010b8cd998 A
443
443
444
444
445
445
446 Test multiple root handling
446 Test multiple root handling
447 ------------------------------------
447 ------------------------------------
448
448
449 $ hg rebase --dest 4 --rev '7+11+9'
449 $ hg rebase --dest 4 --rev '7+11+9'
450 rebasing 9:cf44d2f5a9f4 "D"
450 rebasing 9:cf44d2f5a9f4 "D"
451 rebasing 7:02de42196ebe "H"
451 rebasing 7:02de42196ebe "H"
452 not rebasing ignored 10:7c6027df6a99 "B"
452 not rebasing ignored 10:7c6027df6a99 "B"
453 rebasing 11:0d8f238b634c "C" (tip)
453 rebasing 11:0d8f238b634c "C" (tip)
454 $ hg log -G
454 $ hg log -G
455 o 14:1e8370e38cca C
455 o 14:1e8370e38cca C
456 |
456 |
457 @ 13:bfe264faf697 H
457 @ 13:bfe264faf697 H
458 |
458 |
459 | o 12:102b4c1d889b D
459 | o 12:102b4c1d889b D
460 |/
460 |/
461 | o 10:7c6027df6a99 B
461 | o 10:7c6027df6a99 B
462 | |
462 | |
463 | x 7:02de42196ebe H
463 | x 7:02de42196ebe H
464 | |
464 | |
465 +---o 6:eea13746799a G
465 +---o 6:eea13746799a G
466 | |/
466 | |/
467 | o 5:24b6387c8c8c F
467 | o 5:24b6387c8c8c F
468 | |
468 | |
469 o | 4:9520eea781bc E
469 o | 4:9520eea781bc E
470 |/
470 |/
471 o 0:cd010b8cd998 A
471 o 0:cd010b8cd998 A
472
472
473 $ cd ..
473 $ cd ..
474
474
475 test a rebase dropping a merge
475 test a rebase dropping a merge
476
476
477 (setup)
477 (setup)
478
478
479 $ hg init dropmerge
479 $ hg init dropmerge
480 $ cd dropmerge
480 $ cd dropmerge
481 $ hg unbundle "$TESTDIR/bundles/rebase.hg"
481 $ hg unbundle "$TESTDIR/bundles/rebase.hg"
482 adding changesets
482 adding changesets
483 adding manifests
483 adding manifests
484 adding file changes
484 adding file changes
485 added 8 changesets with 7 changes to 7 files (+2 heads)
485 added 8 changesets with 7 changes to 7 files (+2 heads)
486 (run 'hg heads' to see heads, 'hg merge' to merge)
486 (run 'hg heads' to see heads, 'hg merge' to merge)
487 $ hg up 3
487 $ hg up 3
488 4 files updated, 0 files merged, 0 files removed, 0 files unresolved
488 4 files updated, 0 files merged, 0 files removed, 0 files unresolved
489 $ hg merge 7
489 $ hg merge 7
490 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
490 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
491 (branch merge, don't forget to commit)
491 (branch merge, don't forget to commit)
492 $ hg ci -m 'M'
492 $ hg ci -m 'M'
493 $ echo I > I
493 $ echo I > I
494 $ hg add I
494 $ hg add I
495 $ hg ci -m I
495 $ hg ci -m I
496 $ hg log -G
496 $ hg log -G
497 @ 9:4bde274eefcf I
497 @ 9:4bde274eefcf I
498 |
498 |
499 o 8:53a6a128b2b7 M
499 o 8:53a6a128b2b7 M
500 |\
500 |\
501 | o 7:02de42196ebe H
501 | o 7:02de42196ebe H
502 | |
502 | |
503 | | o 6:eea13746799a G
503 | | o 6:eea13746799a G
504 | |/|
504 | |/|
505 | o | 5:24b6387c8c8c F
505 | o | 5:24b6387c8c8c F
506 | | |
506 | | |
507 | | o 4:9520eea781bc E
507 | | o 4:9520eea781bc E
508 | |/
508 | |/
509 o | 3:32af7686d403 D
509 o | 3:32af7686d403 D
510 | |
510 | |
511 o | 2:5fddd98957c8 C
511 o | 2:5fddd98957c8 C
512 | |
512 | |
513 o | 1:42ccdea3bb16 B
513 o | 1:42ccdea3bb16 B
514 |/
514 |/
515 o 0:cd010b8cd998 A
515 o 0:cd010b8cd998 A
516
516
517 (actual test)
517 (actual test)
518
518
519 $ hg rebase --dest 6 --rev '((desc(H) + desc(D))::) - desc(M)'
519 $ hg rebase --dest 6 --rev '((desc(H) + desc(D))::) - desc(M)'
520 rebasing 3:32af7686d403 "D"
520 rebasing 3:32af7686d403 "D"
521 rebasing 7:02de42196ebe "H"
521 rebasing 7:02de42196ebe "H"
522 not rebasing ignored 8:53a6a128b2b7 "M"
522 not rebasing ignored 8:53a6a128b2b7 "M"
523 rebasing 9:4bde274eefcf "I" (tip)
523 rebasing 9:4bde274eefcf "I" (tip)
524 $ hg log -G
524 $ hg log -G
525 @ 12:acd174b7ab39 I
525 @ 12:acd174b7ab39 I
526 |
526 |
527 o 11:6c11a6218c97 H
527 o 11:6c11a6218c97 H
528 |
528 |
529 | o 10:b5313c85b22e D
529 | o 10:b5313c85b22e D
530 |/
530 |/
531 | o 8:53a6a128b2b7 M
531 | o 8:53a6a128b2b7 M
532 | |\
532 | |\
533 | | x 7:02de42196ebe H
533 | | x 7:02de42196ebe H
534 | | |
534 | | |
535 o---+ 6:eea13746799a G
535 o---+ 6:eea13746799a G
536 | | |
536 | | |
537 | | o 5:24b6387c8c8c F
537 | | o 5:24b6387c8c8c F
538 | | |
538 | | |
539 o---+ 4:9520eea781bc E
539 o---+ 4:9520eea781bc E
540 / /
540 / /
541 x | 3:32af7686d403 D
541 x | 3:32af7686d403 D
542 | |
542 | |
543 o | 2:5fddd98957c8 C
543 o | 2:5fddd98957c8 C
544 | |
544 | |
545 o | 1:42ccdea3bb16 B
545 o | 1:42ccdea3bb16 B
546 |/
546 |/
547 o 0:cd010b8cd998 A
547 o 0:cd010b8cd998 A
548
548
549
549
550 Test hidden changesets in the rebase set (issue4504)
550 Test hidden changesets in the rebase set (issue4504)
551
551
552 $ hg up --hidden 9
552 $ hg up --hidden 9
553 3 files updated, 0 files merged, 1 files removed, 0 files unresolved
553 3 files updated, 0 files merged, 1 files removed, 0 files unresolved
554 $ echo J > J
554 $ echo J > J
555 $ hg add J
555 $ hg add J
556 $ hg commit -m J
556 $ hg commit -m J
557 $ hg debugobsolete `hg log --rev . -T '{node}'`
557 $ hg debugobsolete `hg log --rev . -T '{node}'`
558
558
559 $ hg rebase --rev .~1::. --dest 'max(desc(D))' --traceback --config experimental.rebaseskipobsolete=off
559 $ hg rebase --rev .~1::. --dest 'max(desc(D))' --traceback --config experimental.rebaseskipobsolete=off
560 rebasing 9:4bde274eefcf "I"
560 rebasing 9:4bde274eefcf "I"
561 rebasing 13:06edfc82198f "J" (tip)
561 rebasing 13:06edfc82198f "J" (tip)
562 $ hg log -G
562 $ hg log -G
563 @ 15:5ae8a643467b J
563 @ 15:5ae8a643467b J
564 |
564 |
565 o 14:9ad579b4a5de I
565 o 14:9ad579b4a5de I
566 |
566 |
567 | o 12:acd174b7ab39 I
567 | o 12:acd174b7ab39 I
568 | |
568 | |
569 | o 11:6c11a6218c97 H
569 | o 11:6c11a6218c97 H
570 | |
570 | |
571 o | 10:b5313c85b22e D
571 o | 10:b5313c85b22e D
572 |/
572 |/
573 | o 8:53a6a128b2b7 M
573 | o 8:53a6a128b2b7 M
574 | |\
574 | |\
575 | | x 7:02de42196ebe H
575 | | x 7:02de42196ebe H
576 | | |
576 | | |
577 o---+ 6:eea13746799a G
577 o---+ 6:eea13746799a G
578 | | |
578 | | |
579 | | o 5:24b6387c8c8c F
579 | | o 5:24b6387c8c8c F
580 | | |
580 | | |
581 o---+ 4:9520eea781bc E
581 o---+ 4:9520eea781bc E
582 / /
582 / /
583 x | 3:32af7686d403 D
583 x | 3:32af7686d403 D
584 | |
584 | |
585 o | 2:5fddd98957c8 C
585 o | 2:5fddd98957c8 C
586 | |
586 | |
587 o | 1:42ccdea3bb16 B
587 o | 1:42ccdea3bb16 B
588 |/
588 |/
589 o 0:cd010b8cd998 A
589 o 0:cd010b8cd998 A
590
590
591 $ hg up 14 -C
591 $ hg up 14 -C
592 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
592 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
593 $ echo "K" > K
593 $ echo "K" > K
594 $ hg add K
594 $ hg add K
595 $ hg commit --amend -m "K"
595 $ hg commit --amend -m "K"
596 $ echo "L" > L
596 $ echo "L" > L
597 $ hg add L
597 $ hg add L
598 $ hg commit -m "L"
598 $ hg commit -m "L"
599 $ hg up '.^'
599 $ hg up '.^'
600 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
600 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
601 $ echo "M" > M
601 $ echo "M" > M
602 $ hg add M
602 $ hg add M
603 $ hg commit --amend -m "M"
603 $ hg commit --amend -m "M"
604 $ hg log -G
604 $ hg log -G
605 @ 20:bfaedf8eb73b M
605 @ 20:bfaedf8eb73b M
606 |
606 |
607 | o 18:97219452e4bd L
607 | o 18:97219452e4bd L
608 | |
608 | |
609 | x 17:fc37a630c901 K
609 | x 17:fc37a630c901 K
610 |/
610 |/
611 | o 15:5ae8a643467b J
611 | o 15:5ae8a643467b J
612 | |
612 | |
613 | x 14:9ad579b4a5de I
613 | x 14:9ad579b4a5de I
614 |/
614 |/
615 | o 12:acd174b7ab39 I
615 | o 12:acd174b7ab39 I
616 | |
616 | |
617 | o 11:6c11a6218c97 H
617 | o 11:6c11a6218c97 H
618 | |
618 | |
619 o | 10:b5313c85b22e D
619 o | 10:b5313c85b22e D
620 |/
620 |/
621 | o 8:53a6a128b2b7 M
621 | o 8:53a6a128b2b7 M
622 | |\
622 | |\
623 | | x 7:02de42196ebe H
623 | | x 7:02de42196ebe H
624 | | |
624 | | |
625 o---+ 6:eea13746799a G
625 o---+ 6:eea13746799a G
626 | | |
626 | | |
627 | | o 5:24b6387c8c8c F
627 | | o 5:24b6387c8c8c F
628 | | |
628 | | |
629 o---+ 4:9520eea781bc E
629 o---+ 4:9520eea781bc E
630 / /
630 / /
631 x | 3:32af7686d403 D
631 x | 3:32af7686d403 D
632 | |
632 | |
633 o | 2:5fddd98957c8 C
633 o | 2:5fddd98957c8 C
634 | |
634 | |
635 o | 1:42ccdea3bb16 B
635 o | 1:42ccdea3bb16 B
636 |/
636 |/
637 o 0:cd010b8cd998 A
637 o 0:cd010b8cd998 A
638
638
639 $ hg rebase -s 14 -d 18 --config experimental.rebaseskipobsolete=True
639 $ hg rebase -s 14 -d 18 --config experimental.rebaseskipobsolete=True
640 note: not rebasing 14:9ad579b4a5de "I", already in destination as 17:fc37a630c901 "K"
640 note: not rebasing 14:9ad579b4a5de "I", already in destination as 17:fc37a630c901 "K"
641 rebasing 15:5ae8a643467b "J"
641 rebasing 15:5ae8a643467b "J"
642
642
643 $ cd ..
643 $ cd ..
644
644
645 Skip obsolete changeset even with multiple hops
645 Skip obsolete changeset even with multiple hops
646 -----------------------------------------------
646 -----------------------------------------------
647
647
648 setup
648 setup
649
649
650 $ hg init obsskip
650 $ hg init obsskip
651 $ cd obsskip
651 $ cd obsskip
652 $ cat << EOF >> .hg/hgrc
652 $ cat << EOF >> .hg/hgrc
653 > [experimental]
653 > [experimental]
654 > rebaseskipobsolete = True
654 > rebaseskipobsolete = True
655 > [extensions]
655 > [extensions]
656 > strip =
656 > strip =
657 > EOF
657 > EOF
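With rebaseskipobsolete enabled, rebase consults obsolescence markers and skips an obsolete changeset whenever its successor is already present in the destination, rather than rebasing it again.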
658 $ echo A > A
658 $ echo A > A
659 $ hg add A
659 $ hg add A
660 $ hg commit -m A
660 $ hg commit -m A
661 $ echo B > B
661 $ echo B > B
662 $ hg add B
662 $ hg add B
663 $ hg commit -m B0
663 $ hg commit -m B0
664 $ hg commit --amend -m B1
664 $ hg commit --amend -m B1
665 $ hg commit --amend -m B2
665 $ hg commit --amend -m B2
666 $ hg up --hidden 'desc(B0)'
666 $ hg up --hidden 'desc(B0)'
667 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
667 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
668 $ echo C > C
668 $ echo C > C
669 $ hg add C
669 $ hg add C
670 $ hg commit -m C
670 $ hg commit -m C
671
671
672 Rebase finds its way through a chain of markers
672 Rebase finds its way through a chain of markers
673
673
674 $ hg rebase -d 'desc(B2)'
674 $ hg rebase -d 'desc(B2)'
675 note: not rebasing 1:a8b11f55fb19 "B0", already in destination as 3:261e70097290 "B2"
675 note: not rebasing 1:a8b11f55fb19 "B0", already in destination as 3:261e70097290 "B2"
676 rebasing 4:212cb178bcbb "C" (tip)
676 rebasing 4:212cb178bcbb "C" (tip)
677
677
678 Even when the chain includes a missing node
678 Even when the chain includes a missing node
679
679
680 $ hg up --hidden 'desc(B0)'
680 $ hg up --hidden 'desc(B0)'
681 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
681 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
682 $ echo D > D
682 $ echo D > D
683 $ hg add D
683 $ hg add D
684 $ hg commit -m D
684 $ hg commit -m D
685 $ hg --hidden strip -r 'desc(B1)'
685 $ hg --hidden strip -r 'desc(B1)'
686 saved backup bundle to $TESTTMP/obsskip/.hg/strip-backup/86f6414ccda7-b1c452ee-backup.hg (glob)
686 saved backup bundle to $TESTTMP/obsskip/.hg/strip-backup/86f6414ccda7-b1c452ee-backup.hg (glob)
687
687
688 $ hg rebase -d 'desc(B2)'
688 $ hg rebase -d 'desc(B2)'
689 note: not rebasing 1:a8b11f55fb19 "B0", already in destination as 2:261e70097290 "B2"
689 note: not rebasing 1:a8b11f55fb19 "B0", already in destination as 2:261e70097290 "B2"
690 rebasing 5:1a79b7535141 "D" (tip)
690 rebasing 5:1a79b7535141 "D" (tip)
691 $ hg up 4
691 $ hg up 4
692 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
692 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
693 $ echo "O" > O
693 $ echo "O" > O
694 $ hg add O
694 $ hg add O
695 $ hg commit -m O
695 $ hg commit -m O
696 $ echo "P" > P
696 $ echo "P" > P
697 $ hg add P
697 $ hg add P
698 $ hg commit -m P
698 $ hg commit -m P
699 $ hg log -G
699 $ hg log -G
700 @ 8:8d47583e023f P
700 @ 8:8d47583e023f P
701 |
701 |
702 o 7:360bbaa7d3ce O
702 o 7:360bbaa7d3ce O
703 |
703 |
704 | o 6:9c48361117de D
704 | o 6:9c48361117de D
705 | |
705 | |
706 o | 4:ff2c4d47b71d C
706 o | 4:ff2c4d47b71d C
707 |/
707 |/
708 o 2:261e70097290 B2
708 o 2:261e70097290 B2
709 |
709 |
710 o 0:4a2df7238c3b A
710 o 0:4a2df7238c3b A
711
711
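Pruning O with a single-argument debugobsolete records a marker with no successor, so the rebase below drops it instead of recreating it: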
712 $ hg debugobsolete `hg log -r 7 -T '{node}\n'` --config experimental.evolution=all
712 $ hg debugobsolete `hg log -r 7 -T '{node}\n'` --config experimental.evolution=all
713 $ hg rebase -d 6 -r "4::"
713 $ hg rebase -d 6 -r "4::"
714 rebasing 4:ff2c4d47b71d "C"
714 rebasing 4:ff2c4d47b71d "C"
715 note: not rebasing 7:360bbaa7d3ce "O", it has no successor
715 note: not rebasing 7:360bbaa7d3ce "O", it has no successor
716 rebasing 8:8d47583e023f "P" (tip)
716 rebasing 8:8d47583e023f "P" (tip)
717
717
718 If all the changesets to be rebased are obsolete and present in the destination, we
718 If all the changesets to be rebased are obsolete and present in the destination, we
719 should display a friendly error message
719 should display a friendly error message
720
720
721 $ hg log -G
721 $ hg log -G
722 @ 10:121d9e3bc4c6 P
722 @ 10:121d9e3bc4c6 P
723 |
723 |
724 o 9:4be60e099a77 C
724 o 9:4be60e099a77 C
725 |
725 |
726 o 6:9c48361117de D
726 o 6:9c48361117de D
727 |
727 |
728 o 2:261e70097290 B2
728 o 2:261e70097290 B2
729 |
729 |
730 o 0:4a2df7238c3b A
730 o 0:4a2df7238c3b A
731
731
732
732
733 $ hg up 9
733 $ hg up 9
734 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
734 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
735 $ echo "non-relevant change" > nonrelevant
735 $ echo "non-relevant change" > nonrelevant
736 $ hg add nonrelevant
736 $ hg add nonrelevant
737 $ hg commit -m nonrelevant
737 $ hg commit -m nonrelevant
738 created new head
738 created new head
739 $ hg debugobsolete `hg log -r 11 -T '{node}\n'` --config experimental.evolution=all
739 $ hg debugobsolete `hg log -r 11 -T '{node}\n'` --config experimental.evolution=all
740 $ hg rebase -r . -d 10
740 $ hg rebase -r . -d 10
741 note: not rebasing 11:f44da1f4954c "nonrelevant" (tip), it has no successor
741 note: not rebasing 11:f44da1f4954c "nonrelevant" (tip), it has no successor
742
742
743 If a rebase is going to create divergence, it should abort
743 If a rebase is going to create divergence, it should abort
744
744
745 $ hg log -G
745 $ hg log -G
746 @ 11:f44da1f4954c nonrelevant
746 @ 11:f44da1f4954c nonrelevant
747 |
747 |
748 | o 10:121d9e3bc4c6 P
748 | o 10:121d9e3bc4c6 P
749 |/
749 |/
750 o 9:4be60e099a77 C
750 o 9:4be60e099a77 C
751 |
751 |
752 o 6:9c48361117de D
752 o 6:9c48361117de D
753 |
753 |
754 o 2:261e70097290 B2
754 o 2:261e70097290 B2
755 |
755 |
756 o 0:4a2df7238c3b A
756 o 0:4a2df7238c3b A
757
757
758
758
759 $ hg up 9
759 $ hg up 9
760 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
760 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
761 $ echo "john" > doe
761 $ echo "john" > doe
762 $ hg add doe
762 $ hg add doe
763 $ hg commit -m "john doe"
763 $ hg commit -m "john doe"
764 created new head
764 created new head
765 $ hg up 10
765 $ hg up 10
766 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
766 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
767 $ echo "foo" > bar
767 $ echo "foo" > bar
768 $ hg add bar
768 $ hg add bar
769 $ hg commit --amend -m "10'"
769 $ hg commit --amend -m "10'"
770 $ hg up 10 --hidden
770 $ hg up 10 --hidden
771 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
771 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
772 $ echo "bar" > foo
772 $ echo "bar" > foo
773 $ hg add foo
773 $ hg add foo
774 $ hg commit -m "bar foo"
774 $ hg commit -m "bar foo"
775 $ hg log -G
775 $ hg log -G
776 @ 15:73568ab6879d bar foo
776 @ 15:73568ab6879d bar foo
777 |
777 |
778 | o 14:77d874d096a2 10'
778 | o 14:77d874d096a2 10'
779 | |
779 | |
780 | | o 12:3eb461388009 john doe
780 | | o 12:3eb461388009 john doe
781 | |/
781 | |/
782 x | 10:121d9e3bc4c6 P
782 x | 10:121d9e3bc4c6 P
783 |/
783 |/
784 o 9:4be60e099a77 C
784 o 9:4be60e099a77 C
785 |
785 |
786 o 6:9c48361117de D
786 o 6:9c48361117de D
787 |
787 |
788 o 2:261e70097290 B2
788 o 2:261e70097290 B2
789 |
789 |
790 o 0:4a2df7238c3b A
790 o 0:4a2df7238c3b A
791
791
792 $ hg summary
792 $ hg summary
793 parent: 15:73568ab6879d tip (unstable)
793 parent: 15:73568ab6879d tip (unstable)
794 bar foo
794 bar foo
795 branch: default
795 branch: default
796 commit: (clean)
796 commit: (clean)
797 update: 2 new changesets, 3 branch heads (merge)
797 update: 2 new changesets, 3 branch heads (merge)
798 phases: 8 draft
798 phases: 8 draft
799 unstable: 1 changesets
799 unstable: 1 changesets
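Changeset 10 already has a successor (14:77d874d096a2), so rebasing it onto 12 would give it a second one and make the two successors divergent; rebase therefore refuses: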
800 $ hg rebase -s 10 -d 12
800 $ hg rebase -s 10 -d 12
801 abort: this rebase will cause divergences from: 121d9e3bc4c6
801 abort: this rebase will cause divergences from: 121d9e3bc4c6
802 (to force the rebase please set experimental.allowdivergence=True)
802 (to force the rebase please set experimental.allowdivergence=True)
803 [255]
803 [255]
804 $ hg log -G
804 $ hg log -G
805 @ 15:73568ab6879d bar foo
805 @ 15:73568ab6879d bar foo
806 |
806 |
807 | o 14:77d874d096a2 10'
807 | o 14:77d874d096a2 10'
808 | |
808 | |
809 | | o 12:3eb461388009 john doe
809 | | o 12:3eb461388009 john doe
810 | |/
810 | |/
811 x | 10:121d9e3bc4c6 P
811 x | 10:121d9e3bc4c6 P
812 |/
812 |/
813 o 9:4be60e099a77 C
813 o 9:4be60e099a77 C
814 |
814 |
815 o 6:9c48361117de D
815 o 6:9c48361117de D
816 |
816 |
817 o 2:261e70097290 B2
817 o 2:261e70097290 B2
818 |
818 |
819 o 0:4a2df7238c3b A
819 o 0:4a2df7238c3b A
820
820
821 With experimental.allowdivergence=True, rebase can create divergence
821 With experimental.allowdivergence=True, rebase can create divergence
822
822
823 $ hg rebase -s 10 -d 12 --config experimental.allowdivergence=True
823 $ hg rebase -s 10 -d 12 --config experimental.allowdivergence=True
824 rebasing 10:121d9e3bc4c6 "P"
824 rebasing 10:121d9e3bc4c6 "P"
825 rebasing 15:73568ab6879d "bar foo" (tip)
825 rebasing 15:73568ab6879d "bar foo" (tip)
826 $ hg summary
826 $ hg summary
827 parent: 17:61bd55f69bc4 tip
827 parent: 17:61bd55f69bc4 tip
828 bar foo
828 bar foo
829 branch: default
829 branch: default
830 commit: (clean)
830 commit: (clean)
831 update: 1 new changesets, 2 branch heads (merge)
831 update: 1 new changesets, 2 branch heads (merge)
832 phases: 8 draft
832 phases: 8 draft
833 divergent: 2 changesets
833 divergent: 2 changesets
834
834
835 rebase --continue + skipped rev because its successor is in destination
835 rebase --continue + skipped rev because its successor is in destination
836 We make a change in trunk and work on conflicting changes to make rebase abort.
836 We make a change in trunk and work on conflicting changes to make rebase abort.
837
837
838 $ hg log -G -r 17::
838 $ hg log -G -r 17::
839 @ 17:61bd55f69bc4 bar foo
839 @ 17:61bd55f69bc4 bar foo
840 |
840 |
841 ~
841 ~
842
842
843 Create the two changes in trunk
843 Create the two changes in trunk
844 $ printf "a" > willconflict
844 $ printf "a" > willconflict
845 $ hg add willconflict
845 $ hg add willconflict
846 $ hg commit -m "willconflict first version"
846 $ hg commit -m "willconflict first version"
847
847
848 $ printf "dummy" > C
848 $ printf "dummy" > C
849 $ hg commit -m "dummy change successor"
849 $ hg commit -m "dummy change successor"
850
850
851 Create the changes that we will rebase
851 Create the changes that we will rebase
852 $ hg update -C 17 -q
852 $ hg update -C 17 -q
853 $ printf "b" > willconflict
853 $ printf "b" > willconflict
854 $ hg add willconflict
854 $ hg add willconflict
855 $ hg commit -m "willconflict second version"
855 $ hg commit -m "willconflict second version"
856 created new head
856 created new head
857 $ printf "dummy" > K
857 $ printf "dummy" > K
858 $ hg add K
858 $ hg add K
859 $ hg commit -m "dummy change"
859 $ hg commit -m "dummy change"
860 $ printf "dummy" > L
860 $ printf "dummy" > L
861 $ hg add L
861 $ hg add L
862 $ hg commit -m "dummy change"
862 $ hg commit -m "dummy change"
863 $ hg debugobsolete `hg log -r ".^" -T '{node}'` `hg log -r 19 -T '{node}'` --config experimental.evolution=all
863 $ hg debugobsolete `hg log -r ".^" -T '{node}'` `hg log -r 19 -T '{node}'` --config experimental.evolution=all
864
864
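The marker above rewrites 21:8b31da3c4919 into 19:601db7a18f51, which is why the interrupted rebase below can skip 21 once the conflict in 20 is resolved: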
865 $ hg log -G -r 17::
865 $ hg log -G -r 17::
866 @ 22:7bdc8a87673d dummy change
866 @ 22:7bdc8a87673d dummy change
867 |
867 |
868 x 21:8b31da3c4919 dummy change
868 x 21:8b31da3c4919 dummy change
869 |
869 |
870 o 20:b82fb57ea638 willconflict second version
870 o 20:b82fb57ea638 willconflict second version
871 |
871 |
872 | o 19:601db7a18f51 dummy change successor
872 | o 19:601db7a18f51 dummy change successor
873 | |
873 | |
874 | o 18:357ddf1602d5 willconflict first version
874 | o 18:357ddf1602d5 willconflict first version
875 |/
875 |/
876 o 17:61bd55f69bc4 bar foo
876 o 17:61bd55f69bc4 bar foo
877 |
877 |
878 ~
878 ~
879 $ hg rebase -r ".^^ + .^ + ." -d 19
879 $ hg rebase -r ".^^ + .^ + ." -d 19
880 rebasing 20:b82fb57ea638 "willconflict second version"
880 rebasing 20:b82fb57ea638 "willconflict second version"
881 merging willconflict
881 merging willconflict
882 warning: conflicts while merging willconflict! (edit, then use 'hg resolve --mark')
882 warning: conflicts while merging willconflict! (edit, then use 'hg resolve --mark')
883 unresolved conflicts (see hg resolve, then hg rebase --continue)
883 unresolved conflicts (see hg resolve, then hg rebase --continue)
884 [1]
884 [1]
885
885
886 $ hg resolve --mark willconflict
886 $ hg resolve --mark willconflict
887 (no more unresolved files)
887 (no more unresolved files)
888 continue: hg rebase --continue
888 continue: hg rebase --continue
889 $ hg rebase --continue
889 $ hg rebase --continue
890 rebasing 20:b82fb57ea638 "willconflict second version"
890 rebasing 20:b82fb57ea638 "willconflict second version"
891 note: not rebasing 21:8b31da3c4919 "dummy change", already in destination as 19:601db7a18f51 "dummy change successor"
891 note: not rebasing 21:8b31da3c4919 "dummy change", already in destination as 19:601db7a18f51 "dummy change successor"
892 rebasing 22:7bdc8a87673d "dummy change" (tip)
892 rebasing 22:7bdc8a87673d "dummy change" (tip)
893 $ cd ..
893 $ cd ..
894
894
895 rebase source is obsoleted (issue5198)
895 rebase source is obsoleted (issue5198)
896 --------------------------------------
896 --------------------------------------
897
897
898 $ hg clone base amended
898 $ hg clone base amended
899 updating to branch default
899 updating to branch default
900 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
900 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
901 $ cd amended
901 $ cd amended
902 $ hg up 9520eea781bc
902 $ hg up 9520eea781bc
903 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
903 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
904 $ echo 1 >> E
904 $ echo 1 >> E
905 $ hg commit --amend -m "E'"
905 $ hg commit --amend -m "E'"
906 $ hg log -G
906 $ hg log -G
907 @ 9:69abe8906104 E'
907 @ 9:69abe8906104 E'
908 |
908 |
909 | o 7:02de42196ebe H
909 | o 7:02de42196ebe H
910 | |
910 | |
911 | | o 6:eea13746799a G
911 | | o 6:eea13746799a G
912 | |/|
912 | |/|
913 | o | 5:24b6387c8c8c F
913 | o | 5:24b6387c8c8c F
914 |/ /
914 |/ /
915 | x 4:9520eea781bc E
915 | x 4:9520eea781bc E
916 |/
916 |/
917 | o 3:32af7686d403 D
917 | o 3:32af7686d403 D
918 | |
918 | |
919 | o 2:5fddd98957c8 C
919 | o 2:5fddd98957c8 C
920 | |
920 | |
921 | o 1:42ccdea3bb16 B
921 | o 1:42ccdea3bb16 B
922 |/
922 |/
923 o 0:cd010b8cd998 A
923 o 0:cd010b8cd998 A
924
924
925 $ hg rebase -d . -s 9520eea781bc
925 $ hg rebase -d . -s 9520eea781bc
926 note: not rebasing 4:9520eea781bc "E", already in destination as 9:69abe8906104 "E'"
926 note: not rebasing 4:9520eea781bc "E", already in destination as 9:69abe8906104 "E'"
927 rebasing 6:eea13746799a "G"
927 rebasing 6:eea13746799a "G"
928 $ hg log -G
928 $ hg log -G
929 o 10:17be06e82e95 G
929 o 10:17be06e82e95 G
930 |\
930 |\
931 | @ 9:69abe8906104 E'
931 | @ 9:69abe8906104 E'
932 | |
932 | |
933 +---o 7:02de42196ebe H
933 +---o 7:02de42196ebe H
934 | |
934 | |
935 o | 5:24b6387c8c8c F
935 o | 5:24b6387c8c8c F
936 |/
936 |/
937 | o 3:32af7686d403 D
937 | o 3:32af7686d403 D
938 | |
938 | |
939 | o 2:5fddd98957c8 C
939 | o 2:5fddd98957c8 C
940 | |
940 | |
941 | o 1:42ccdea3bb16 B
941 | o 1:42ccdea3bb16 B
942 |/
942 |/
943 o 0:cd010b8cd998 A
943 o 0:cd010b8cd998 A
944
944
945 $ cd ..
945 $ cd ..
946
946
947 Test that the bookmark is moved and the working dir is updated when all changesets have
947 Test that the bookmark is moved and the working dir is updated when all changesets have
948 equivalents in the destination
948 equivalents in the destination
949 $ hg init rbsrepo && cd rbsrepo
949 $ hg init rbsrepo && cd rbsrepo
950 $ echo "[experimental]" > .hg/hgrc
950 $ echo "[experimental]" > .hg/hgrc
951 $ echo "evolution=all" >> .hg/hgrc
951 $ echo "evolution=all" >> .hg/hgrc
952 $ echo "rebaseskipobsolete=on" >> .hg/hgrc
952 $ echo "rebaseskipobsolete=on" >> .hg/hgrc
953 $ echo root > root && hg ci -Am root
953 $ echo root > root && hg ci -Am root
954 adding root
954 adding root
955 $ echo a > a && hg ci -Am a
955 $ echo a > a && hg ci -Am a
956 adding a
956 adding a
957 $ hg up 0
957 $ hg up 0
958 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
958 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
959 $ echo b > b && hg ci -Am b
959 $ echo b > b && hg ci -Am b
960 adding b
960 adding b
961 created new head
961 created new head
962 $ hg rebase -r 2 -d 1
962 $ hg rebase -r 2 -d 1
963 rebasing 2:1e9a3c00cbe9 "b" (tip)
963 rebasing 2:1e9a3c00cbe9 "b" (tip)
964 $ hg log -r . # working dir is at rev 3 (successor of 2)
964 $ hg log -r . # working dir is at rev 3 (successor of 2)
965 3:be1832deae9a b (no-eol)
965 3:be1832deae9a b (no-eol)
966 $ hg book -r 2 mybook --hidden # rev 2 has a bookmark on it now
966 $ hg book -r 2 mybook --hidden # rev 2 has a bookmark on it now
967 $ hg up 2 && hg log -r . # working dir is at rev 2 again
967 $ hg up 2 && hg log -r . # working dir is at rev 2 again
968 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
968 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
969 2:1e9a3c00cbe9 b (no-eol)
969 2:1e9a3c00cbe9 b (no-eol)
970 $ hg rebase -r 2 -d 3
970 $ hg rebase -r 2 -d 3
971 note: not rebasing 2:1e9a3c00cbe9 "b" (mybook), already in destination as 3:be1832deae9a "b"
971 note: not rebasing 2:1e9a3c00cbe9 "b" (mybook), already in destination as 3:be1832deae9a "b"
972 Check that working directory was updated to rev 3 although rev 2 was skipped
972 Check that working directory was updated to rev 3 although rev 2 was skipped
973 during the rebase operation
973 during the rebase operation
974 $ hg log -r .
974 $ hg log -r .
975 3:be1832deae9a b (no-eol)
975 3:be1832deae9a b (no-eol)
976
976
977 Check that bookmark was moved to rev 3 although rev 2 was skipped
977 Check that bookmark was moved to rev 3 although rev 2 was skipped
978 during the rebase operation
978 during the rebase operation
979 $ hg bookmarks
979 $ hg bookmarks
980 mybook 3:be1832deae9a
980 mybook 3:be1832deae9a